From steve at greengecko.co.nz Mon Jun 1 05:01:09 2015 From: steve at greengecko.co.nz (steve) Date: Mon, 01 Jun 2015 17:01:09 +1200 Subject: mail proxying In-Reply-To: References: Message-ID: <556BE715.7090009@greengecko.co.nz> Hi, On 31/05/15 11:58, dethegeek wrote: > Hi > > I'm setting up nginx as a reverse proxy for a postfix / dovecot setup. > > My IMAP server requires STARTTLS usage. Nginx does not seem to issue the STARTTLS > command before forwarding the user's credentials. > > Here is the error I found in /var/log/nginx/error.log > > [error] 928#0: *20 upstream sent invalid response: "* BAD [ALERT] Plaintext > authentication not allowed without SSL/TLS, but your client did it anyway. > If anyone was listening, the password was exposed. > > I did not find anything in the documentation about asking nginx to issue the STARTTLS > command to the upstream server. Is there a way to achieve this? > > I have not tried POP3 yet, but I'm expecting the same annoyance, and the same > answer; let me know if I'm wrong. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259279,259279#msg-259279 > > Try the wiki. Specifically http://wiki.nginx.org/ImapProxyExample -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From Boying.Lu at emc.com Mon Jun 1 06:25:49 2015 From: Boying.Lu at emc.com (Lu, Boying) Date: Mon, 1 Jun 2015 02:25:49 -0400 Subject: How to send a REST request to another node if the service on the current node is not available? Message-ID: Hi, All, We set up a cluster of three nodes and run nginx on each node for load balancing. I found that if a REST service is unavailable on the current node, a REST request sent to this node will return a "service unavailable" response to the client. Is there a way (e.g. configuration parameter) to let the nginx service running on this node redirect the REST request to another node automatically in this case?
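Maxim Dounin lists the relevant mechanisms further down this thread (proxy_next_upstream, backup servers, error_page). As a rough editorial sketch of how the first two combine (the hostnames, port, and URI prefix below are placeholders, not taken from the original posts):

```nginx
# Hypothetical sketch: node names, port and path are placeholders.
upstream rest_backend {
    server 127.0.0.1:8080;                 # local REST service, tried first
    server node2.example.com:8080 backup;  # only used when the local one fails
    server node3.example.com:8080 backup;
}

server {
    listen 80;
    location /api/ {
        proxy_pass http://rest_backend;
        # retry another node on connection errors, timeouts and 503s
        proxy_next_upstream error timeout http_503;
    }
}
```

With this shape, a request only leaves the local node when the local service is down or answers with one of the listed statuses.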
Thanks Boying -------------- next part -------------- An HTML attachment was scrubbed... URL: From defan at nginx.com Mon Jun 1 07:45:04 2015 From: defan at nginx.com (Andrei Belov) Date: Mon, 1 Jun 2015 10:45:04 +0300 Subject: Compiling Nginx on Windows 7 In-Reply-To: <598d6667bf6af291d634d966adb5731f.NginxMailingListEnglish@forum.nginx.org> References: <598d6667bf6af291d634d966adb5731f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6A6DC929-E5C0-4DC6-B824-DE10F0A2E190@nginx.com> On 30 May 2015, at 23:00, z_kamikimo wrote: > Im experiencing issues with compiling Nginx on Windows 7, every thing goes > good until nmake -f objs/Makefile. > I get the following error > > Assembling: tmp32\sha1-586.asm > tmp32\sha1-586.asm(1432) : error A2070:invalid instruction operands > tmp32\sha1-586.asm(1576) : error A2070:invalid instruction operands > NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio > 10.0\VC\BI > N\ml.EXE"' : return code '0x1' > Stop. > NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio > 10.0\VC\BI > N\nmake.exe"' : return code '0x2' > Stop. > NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio > 10.0\VC\BI > N\nmake.exe"' : return code '0x2' > Stop. Are you trying to do the build on 64-bit Windows 7? The attached patch may help (it was tested on Windows Server 2008 R2 Datacenter SP1 64-bit in the past). -------------- next part -------------- A non-text attachment was scrubbed... 
Name: win32-openssl-x64.patch Type: application/octet-stream Size: 846 bytes Desc: not available URL: From mail at renemoser.net Mon Jun 1 11:28:42 2015 From: mail at renemoser.net (Rene Moser) Date: Mon, 01 Jun 2015 13:28:42 +0200 Subject: proxy_cache not working anymore in 1.8 Message-ID: <556C41EA.90806@renemoser.net> Hi We use nginx as proxy_cache and identified a somewhat weird difference in behaviour compared to 1.6: In some situations we get a MISS or EXPIRED in 1.8 where, with the same config, same resource and same origin, we got a HIT in 1.6. The weird thing is, it never changes to a HIT even after several requests to the same resource! We always get MISS or EXPIRED. We do not use proxy_cache_min_uses. Below I show you some configs, nothing really fancy. Does anyone use 1.8 for proxy_cache? 1.6: rpm spec and sources: https://github.com/swisstxt/rpm-pcache/commit/cabe5f83b8c02ae8ce74543f669c9b0c0fc41f03 1.8: rpm spec and sources: https://github.com/swisstxt/rpm-pcache/commit/da336fd143e21500e445df4a5ece0af7f8a9448b Response from ORIGIN: Cache-Control: public, max-age=49 Content-Type: application/json; charset=utf-8 Content-MD5: O+EP2TJJZQZr61/mhkVqdA== Expires: Mon, 01 Jun 2015 11:09:05 GMT Last-Modified: Mon, 01 Jun 2015 11:08:05 GMT Vary: * access-control-allow-origin: * Date: Mon, 01 Jun 2015 11:08:15 GMT Content-Length: 743 nginx response in 1.6: Server: nginx Date: Mon, 01 Jun 2015 11:20:59 GMT Content-Type: application/json; charset=utf-8 Content-Length: 728 Connection: keep-alive Cache-Control: public, max-age=60 Content-MD5: aMGk4M7K6qzjnOIvVfy6fg== Expires: Mon, 01 Jun 2015 11:21:48 GMT Last-Modified: Mon, 01 Jun 2015 11:20:48 GMT Vary: * access-control-allow-origin: * X-Node: cache.example.com X-Cached: HIT 200 OK nginx response in 1.8: Server: nginx Date: Mon, 01 Jun 2015 11:23:41 GMT Content-Type: application/json; charset=utf-8 Content-Length: 728 Connection: keep-alive Cache-Control: public, max-age=56 Content-MD5: aMGk4M7K6qzjnOIvVfy6fg==
Expires: Mon, 01 Jun 2015 11:24:38 GMT Last-Modified: Mon, 01 Jun 2015 11:23:38 GMT Vary: * access-control-allow-origin: * X-Node: cache.example.com X-Cached: MISS 200 OK 2nd nginx response in 1.8.: Server: nginx Date: Mon, 01 Jun 2015 11:24:53 GMT Content-Type: application/json; charset=utf-8 Content-Length: 728 Connection: keep-alive Cache-Control: public, max-age=57 Content-MD5: aMGk4M7K6qzjnOIvVfy6fg== Expires: Mon, 01 Jun 2015 11:25:50 GMT Last-Modified: Mon, 01 Jun 2015 11:24:50 GMT Vary: * access-control-allow-origin: * X-Node: cache.example.com X-Cached: MISS 200 OK #file nginx.conf: http { include /etc/nginx/mime.types; default_type application/octet-stream; add_header X-Node $hostname; add_header X-Cached $upstream_cache_status; # changed to fake IPs resolver 1.2.3.4 1.2.3.5; log_format main '$remote_addr - $remote_user [$time_local] $request ' '"$status" $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; server_tokens off; sendfile on; keepalive_timeout 10s; tcp_nopush on; tcp_nodelay on; proxy_temp_path /srv/www/temp; proxy_connect_timeout 10s; proxy_send_timeout 20s; proxy_read_timeout 20s; send_timeout 30s; charset utf-8; charset_types application/javascript text/css application/atom+xml; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ... 
} #file foobar.example.com.conf: upstream foobar-origins { server lb.foobar.example.com; } proxy_cache_path /srv/www/cache/foobar.example.com levels=1:2 keys_zone=foobar:10m inactive=2d max_size=1g; server { listen 1.1.104.140:80; server_name foobar.example.com; access_log /srv/www/log/foobar.example.com/access.log combined; error_log /srv/www/log/foobar.example.com/error.log; access_log /srv/www/log/foobar.example.com/pcache.log pcache_json buffer=16k; set $volume_key "foobar"; access_log /srv/www/log/global_pcache_volume.log volume-key; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_cache_key "$scheme$host$request_uri"; proxy_connect_timeout 60s; proxy_send_timeout 60s; proxy_read_timeout 60s; location / { proxy_cache foobar; proxy_cache_valid 200 206 301 302 5m; proxy_cache_valid any 10s; proxy_cache_lock on; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504 http_403 http_404; proxy_pass http://foobar-origins; } ... } Any hints? Yours René From nginx-forum at nginx.us Mon Jun 1 11:45:22 2015 From: nginx-forum at nginx.us (dethegeek) Date: Mon, 01 Jun 2015 07:45:22 -0400 Subject: mail proxying In-Reply-To: <4BDE5196-31CC-4C6B-BBAB-DE54AB2DB89A@nginx.com> References: <4BDE5196-31CC-4C6B-BBAB-DE54AB2DB89A@nginx.com> Message-ID: <8ece2ac9081ecb1077a5ca652f35dd2f.NginxMailingListEnglish@forum.nginx.org> Hi Thank you Andrew, You confirmed what I was afraid of. I hope this feature will be implemented soon. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259279,259296#msg-259296 From wangsamp at gmail.com Mon Jun 1 11:51:18 2015 From: wangsamp at gmail.com (Oleksandr V.
Typlyns'kyi) Date: Mon, 1 Jun 2015 14:51:18 +0300 (EEST) Subject: proxy_cache not working anymore in 1.8 In-Reply-To: <556C41EA.90806@renemoser.net> References: <556C41EA.90806@renemoser.net> Message-ID: Today Jun 1, 2015 at 13:28 Rene Moser wrote: > Hi > > We use nginx as proxy_cache and identified a different somehow weird > behaviour compared to 1.6.: > > In some situations we get a MISS or EXPIRED in 1.8, where with the same > config, same resource and same origin, we got a HIT in 1.6. > Response from ORIGIN: > > Cache-Control: public, max-age=49 > Content-Type: application/json; charset=utf-8 > Content-MD5: O+EP2TJJZQZr61/mhkVqdA== > Expires: Mon, 01 Jun 2015 11:09:05 GMT > Last-Modified: Mon, 01 Jun 2015 11:08:05 GMT > Vary: * > access-control-allow-origin: * > Date: Mon, 01 Jun 2015 11:08:15 GMT > Content-Length: 743 http://nginx.org/r/proxy_cache_valid If the header includes the "Vary" field with the special value "*", such a response will not be cached (1.7.7). If the header includes the "Vary" field with another value, such a response will be cached taking into account the corresponding request header fields (1.7.7). -- WNGS-RIPE From nginx-forum at nginx.us Mon Jun 1 11:51:53 2015 From: nginx-forum at nginx.us (dethegeek) Date: Mon, 01 Jun 2015 07:51:53 -0400 Subject: mail proxying In-Reply-To: <556BE715.7090009@greengecko.co.nz> References: <556BE715.7090009@greengecko.co.nz> Message-ID: Hi Steve, thank you for your reply. I already read the page you mentioned, and as I understand it, either this feature is missing, or it is not documented. Andrew said TLS is not implemented, so I'll follow his advice to properly work around this limitation.
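Returning to the proxy_cache thread above: if the origin cannot be changed to stop sending "Vary: *", nginx (1.7.7+) can be told to ignore the header at the proxy. A sketch only, reusing the location from the posted config; ignoring Vary is safe solely when responses really are identical for every client:

```nginx
location / {
    proxy_cache foobar;
    # Restore 1.6-like caching by not honouring the origin's "Vary: *".
    # Only safe when the cached responses do not differ per client.
    proxy_ignore_headers Vary;
    proxy_pass http://foobar-origins;
}
```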
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259279,259297#msg-259297 From mail at renemoser.net Mon Jun 1 12:19:15 2015 From: mail at renemoser.net (Rene Moser) Date: Mon, 01 Jun 2015 14:19:15 +0200 Subject: proxy_cache not working anymore in 1.8 In-Reply-To: References: <556C41EA.90806@renemoser.net> Message-ID: <556C4DC3.3050401@renemoser.net> Hi On 01.06.2015 13:51, Oleksandr V. Typlyns'kyi wrote: >> Vary: * > http://nginx.org/r/proxy_cache_valid > If the header includes the "Vary" field with the special value "*", such > a response will not be cached (1.7.7). If the header includes the "Vary" > field with another value, such a response will be cached taking into > account the corresponding request header fields (1.7.7). I can confirm this was the "issue", thank you for this helpful and fast response! Yours Ren? From mdounin at mdounin.ru Mon Jun 1 12:39:24 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Jun 2015 15:39:24 +0300 Subject: Compiling Nginx on Windows 7 In-Reply-To: <6A6DC929-E5C0-4DC6-B824-DE10F0A2E190@nginx.com> References: <598d6667bf6af291d634d966adb5731f.NginxMailingListEnglish@forum.nginx.org> <6A6DC929-E5C0-4DC6-B824-DE10F0A2E190@nginx.com> Message-ID: <20150601123924.GL26357@mdounin.ru> Hello! On Mon, Jun 01, 2015 at 10:45:04AM +0300, Andrei Belov wrote: > > On 30 May 2015, at 23:00, z_kamikimo wrote: > > > Im experiencing issues with compiling Nginx on Windows 7, every thing goes > > good until nmake -f objs/Makefile. > > I get the following error > > > > Assembling: tmp32\sha1-586.asm > > tmp32\sha1-586.asm(1432) : error A2070:invalid instruction operands > > tmp32\sha1-586.asm(1576) : error A2070:invalid instruction operands > > NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio > > 10.0\VC\BI > > N\ml.EXE"' : return code '0x1' > > Stop. > > NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio > > 10.0\VC\BI > > N\nmake.exe"' : return code '0x2' > > Stop. 
> > NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio > > 10.0\VC\BI > > N\nmake.exe"' : return code '0x2' > > Stop. > > Are you trying to do the build on 64-bit Windows 7? > > The attached patch may help (it was tested on Windows Server > 2008 R2 Datacenter SP1 64-bit in the past). This patch won't help, the error in question is a result of OpenSSL incorrect assembler handling in OpenSSL 1.0.2*. In OpenSSL 1.* times OpenSSL folks did the following things: - about 1.0.0 they changed default compilation procedure (the one used by nginx on Windows), previously documented to be a way to compile without asm at all, to use MASM; at the same time, they declared that they will only support NASM. - in 1.0.2 they broke both building with MASM (see errors above) and building without asm at all (as per new compilation procedure, introduced in 1.0.0). Some comments about this can be found here (RT link seems to be dead for now): https://github.com/openssl/openssl/issues/169 https://rt.openssl.org/Ticket/Display.html?id=3650&user=guest&pass=guest Trivial workaround is to use latest OpenSSL from the 1.0.1 branch (1.0.1m as of now), it compiles fine either way. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Jun 1 13:42:08 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Jun 2015 16:42:08 +0300 Subject: How to send a REST request to another node if the service on the current node is not available? In-Reply-To: References: Message-ID: <20150601134208.GO26357@mdounin.ru> Hello! On Mon, Jun 01, 2015 at 02:25:49AM -0400, Lu, Boying wrote: > Hi, All, > > We setup a cluster of three nodes and running ngix on each node > to load balance. > > I found that if a REST service is unavailable on current node, > the corresponding REST request sending to this node > will return the service unavailable response to the client. Is > there a way (e.g. 
configuration parameter) to let ngix service > running in this node > to redirect the REST request to another node automatically in > this case? Depending on your configuration, you may consider using one or more of the following mechanisms: - error_page, see http://nginx.org/r/error_page - proxy_next_upstream, see http://nginx.org/r/proxy_next_upstream - backup servers in an upstream blocks, see http://nginx.org/en/docs/http/ngx_http_upstream_module.html#backup -- Maxim Dounin http://nginx.org/ From matthew.oriordan at gmail.com Mon Jun 1 14:40:12 2015 From: matthew.oriordan at gmail.com (Matthew O'Riordan) Date: Mon, 1 Jun 2015 15:40:12 +0100 Subject: In-flight HTTP requests fail during hot configuration reload (SIGHUP) Message-ID: We have recently migrated across from HAProxy to Nginx because it supports true zero-downtime configuration reloads. However, we are occasionally getting 502 and 504 errors from our monitoring systems during deployments. Looking into this, I have been able to consistently replicate the 502 and 504 errors as follows. I believe this is an error in how Nginx handles in-flight requests, but wanted to ask the community in case I am missing something obvious. Note the set up of Nginx is as follows: * Ubuntu 14.04 * Nginx version 1.9.1 * Configuration for an HTTP listener: map $http_upgrade $connection_upgrade { default upgrade; '' close; } server { listen 8080; # pass on real client's IP proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; access_log /var/log/nginx/access.ws-8080.log combined; location / { proxy_pass http://server-ws-8080; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; } } upstream server-ws-8080 { least_conn; server 172.17.0.51:8080 max_fails=0; } 1. Telnet to the Nginx server on the HTTP port it is listening on. 2. 
Send an HTTP/1.1 request to the upstream server (172.17.0.51): GET /health HTTP/1.1 Host: localhost Connection: Keep-Alive This request succeeds and the response is valid 3. Start a new HTTP/1.1 request but don't finish the request, i.e. send the following line using telnet: GET /health HTTP/1.1 4. Whilst that request is now effectively in-flight because it's not finished and Nginx is waiting for the request to be completed, reconfigure Nginx with a SIGHUP signal. The only difference in the config preceding the SIGHUP signal is that the upstream server has changed, i.e. we intentionally want all new requests to go to the new upstream server. 5. Terminate the old upstream server 172.17.0.51 6. Complete the in-flight HTTP/1.1 request started in point 3 above with: Host: localhost Connection: Keep-Alive 7. Nginx will consistently respond with a 502 if the old upstream server rejects the request, or a 504 if there is no response on that IP and port. I believe this behaviour is incorrect, as Nginx, once it receives the complete request, should direct the request to the currently available upstream server. However, it seems that Nginx is instead deciding which upstream server to send the request to before the request is completed, and as such is directing the request to a server that no longer exists. Any advice appreciated. BTW, I tried to raise an issue on http://trac.nginx.com/ , however it seems that the authentication system is completely broken. I tried logging in with Google, My Open Id, Wordpress and Yahoo, and all of those OpenID providers no longer work. Thanks, Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jun 1 15:28:21 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Jun 2015 18:28:21 +0300 Subject: In-flight HTTP requests fail during hot configuration reload (SIGHUP) In-Reply-To: References: Message-ID: <20150601152821.GQ26357@mdounin.ru> Hello!
On Mon, Jun 01, 2015 at 03:40:12PM +0100, Matthew O'Riordan wrote: > We have recently migrated across from HAProxy to Nginx because > it supports true zero-downtime configuration reloads. However, > we are occasionally getting 502 and 504 errors from our > monitoring systems during deployments. Looking into this, I > have been able to consistently replicate the 502 and 504 errors > as follows. I believe this is an error in how Nginx handles > in-flight requests, but wanted to ask the community in case I am > missing something obvious. > > Note the set up of Nginx is as follows: > * Ubuntu 14.04 > * Nginx version 1.9.1 > * Configuration for an HTTP listener: > map $http_upgrade $connection_upgrade { > default upgrade; > '' close; > } > server { > listen 8080; > # pass on real client's IP > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > access_log /var/log/nginx/access.ws-8080.log combined; > > location / { > proxy_pass http://server-ws-8080; > proxy_http_version 1.1; > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Connection $connection_upgrade; > } > } > > upstream server-ws-8080 { > least_conn; > server 172.17.0.51:8080 max_fails=0; > } > > 1. Telnet to the Nginx server on the HTTP port it is listening on. > > 2. Send a HTTP/1.1 request to the upstream server (172.17.0.51): > GET /health HTTP/1.1 > Host: localhost > Connection: Keep-Alive > > This request succeeds and the response is valid > > 3. Start a new HTTP/1.1 request but don?t finish the request > i.e. send the following line using telnet: > GET /health HTTP/1.1 > > 4. Whilst that request is now effectively in-flight because it?s > not finished and Nginx is waiting for the request to be > completed, reconfigure Nginx with a SIGHUP signal. The only > difference in the config preceding the SIGHUP signal is that the > upstream server has changed i.e. we intentionally want all new > requests to go to the new upstream server. > > 5. 
Terminate the old upstream server 172.17.0.51 > > 6. Complete the in-flight HTTP/1.1 request started in point 3 > above with: > Host: localhost > Connection: Keep-Alive > > 7. Nginx will consistently respond with a 502 if the old > upstream server rejects the request, or a 504 if there is no > response on that IP and port. > > I believe this behaviour is incorrect as Nginx, once it receives > the complete request, should direct the request to the current > available upstream server. However, it seems that that Nginx is > instead deciding which upstream server to send the request to > before the request is completed and as such is directing the > request to a server that no longer exists. Your problem is in step (5). While you've started new nginx workers to handle new requests in step (4), this doesn't guarantee that old upstream servers are no longer needed. Only new connections will be processed by new worker processes with new nginx configuration. Old workers continue to service requests started before you've reconfigured nginx, and will only terminate once all previously started requests are finished. This includes requests already send to an upstream server and reading a response, and requests not yet read from a client. For these requests previous configuration apply, and you shouldn't stop old upstream servers till old worker processes are shut down. Some details about reconfiguration process can be found here: http://nginx.org/en/docs/control.html#reconfiguration -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Jun 1 15:30:56 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Jun 2015 18:30:56 +0300 Subject: [OT] Cant write across filesystem mounts? In-Reply-To: <1433020805.84539.YahooMailBasic@web142402.mail.bf1.yahoo.com> References: <1433020805.84539.YahooMailBasic@web142402.mail.bf1.yahoo.com> Message-ID: <20150601153056.GR26357@mdounin.ru> Hello! On Sat, May 30, 2015 at 02:20:05PM -0700, E.B. 
wrote: > Hi, I don't think this is specific to nginx, but I hope it's a good > place to ask! > > When running a PHP script through Nginx it writes OK to files > on the same disk mount where the PHP file is located but > not to other parts of the system that are on another mount. > (well, I don't know if it's a matter of "same mount" or not, but > that is how it is behaving) > > Example: /tmp is on another mount than the web root. > > <?php ini_set('display_errors', 'On'); > file_put_contents('/tmp/test', 'hello world'); > system('touch /tmp/test-touch'); > file_put_contents('/webroot/tmp/test', 'hello world'); > system('touch /webroot/tmp/test-touch'); > ?>hello world > > I run this script from the CLI (sudo as ANY user including the php > user) and it always works fine (writes files in both places). If I > access it from a browser the write/touch commands to /tmp > fail silently. > > No AVC from selinux, no PHP or Nginx errors or warnings. > /tmp permissions are the usual 777. Can someone help me in > the right direction? In this particular case I would recommend looking into the PHP configuration, the open_basedir directive in particular: http://php.net/manual/en/ini.core.php#ini.open-basedir Either way this doesn't look like an nginx-related problem; you may have better luck asking on more relevant lists. -- Maxim Dounin http://nginx.org/ From emailbuilder88 at yahoo.com Mon Jun 1 17:28:22 2015 From: emailbuilder88 at yahoo.com (E.B.) Date: Mon, 1 Jun 2015 10:28:22 -0700 Subject: [OT] Cant write across filesystem mounts? In-Reply-To: <20150601153056.GR26357@mdounin.ru> Message-ID: <1433179702.20135.YahooMailBasic@web142404.mail.bf1.yahoo.com> Thank you very much for your response! > > When running a PHP script through Nginx it writes OK to files > > on the same disk mount where the PHP file is located but > > not to other parts of the system that are on another mount.
> > (well, I don't know if it's a matter of "same mount" or not, but > > that is how it is behaving) > > > > Example: /tmp is on another mount than the web root. > > > > > <?php ini_set('display_errors', 'On'); > > file_put_contents('/tmp/test', 'hello world'); > > system('touch /tmp/test-touch'); > > file_put_contents('/webroot/tmp/test', 'hello world'); > > system('touch /webroot/tmp/test-touch'); > > ?>hello world > > > > I run this script from the CLI (sudo as ANY user including the php > > user) and it always works fine (writes files in both places). If I > > access it from a browser the write/touch commands to /tmp > > fail silently. > > > > No AVC from selinux, no PHP or Nginx errors or warnings. > > /tmp permissions are the usual 777. Can someone help me in > > the right direction? > > > In this particular case I would recommend looking into the PHP > configuration, the open_basedir directive in particular: > > http://php.net/manual/en/ini.core.php#ini.open-basedir Nothing is set for that. Also note the "restriction" doesn't happen when running the example script from the CLI. The issue may be peculiar to the O/S, but I have no idea how or where to start looking in this regard. Other ideas, anyone? From emailbuilder88 at yahoo.com Mon Jun 1 19:26:00 2015 From: emailbuilder88 at yahoo.com (E.B.) Date: Mon, 1 Jun 2015 12:26:00 -0700 Subject: SOLVED: Re: [OT] Cant write across filesystem mounts? In-Reply-To: <1433020805.84539.YahooMailBasic@web142402.mail.bf1.yahoo.com> Message-ID: <1433186760.52836.YahooMailBasic@web142402.mail.bf1.yahoo.com> > When running a PHP script through Nginx it writes OK to files > on the same disk mount where the PHP file is located but > not to other parts of the system that are on another mount. > (well, I don't know if it's a matter of "same mount" or not, but > that is how it is behaving) > > Example: /tmp is on another mount than the web root.
> > <?php ini_set('display_errors', 'On'); > file_put_contents('/tmp/test', 'hello world'); > system('touch /tmp/test-touch'); > file_put_contents('/webroot/tmp/test', 'hello world'); > system('touch /webroot/tmp/test-touch'); > ?>hello world > > I run this script from the CLI (sudo as ANY user including the php > user) and it always works fine (writes files in both places). If I > access it from a browser the write/touch commands to /tmp > fail silently. > > No AVC from selinux, no PHP or Nginx errors or warnings. > /tmp permissions are the usual 777. Can someone help me in > the right direction? The problem was the use of PrivateTmp in systemd for php-fpm. Writes to /tmp (and apparently /var/tmp) go to a private per-service directory that systemd sets up, not to the real /tmp. But if I create a directory writable by php-fpm under another name, it works. Thanks for the comments. From steve at greengecko.co.nz Mon Jun 1 20:00:33 2015 From: steve at greengecko.co.nz (steve) Date: Tue, 02 Jun 2015 08:00:33 +1200 Subject: mail proxying In-Reply-To: References: <556BE715.7090009@greengecko.co.nz> Message-ID: <556CB9E1.3000704@greengecko.co.nz> Hi On 01/06/15 23:51, dethegeek wrote: > Hi > > Steve, thank you for your reply. > > I already read the page you mentioned, and as I understand it, either this > feature is missing, or it is not documented. > > Andrew said TLS is not implemented, so I'll follow his advice to properly > work around this limitation. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259279,259297#msg-259297 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Although I've never done this, the example is pretty specific for STARTTLS. I've not known nginx docs to be incorrect.
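The PrivateTmp behaviour described in the SOLVED message above can also be disabled per service with a systemd drop-in; a sketch, assuming the unit is named php-fpm.service (the unit name and path are assumptions about the poster's distribution):

```ini
# /etc/systemd/system/php-fpm.service.d/no-private-tmp.conf
[Service]
PrivateTmp=false
```

After creating the drop-in, `systemctl daemon-reload` followed by a restart of the service makes writes to /tmp land in the real /tmp again. Keeping PrivateTmp on and writing to a dedicated directory, as the poster did, is the safer choice.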
-- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From ahutchings at nginx.com Mon Jun 1 20:08:00 2015 From: ahutchings at nginx.com (Andrew Hutchings) Date: Mon, 1 Jun 2015 21:08:00 +0100 Subject: mail proxying In-Reply-To: <556CB9E1.3000704@greengecko.co.nz> References: <556BE715.7090009@greengecko.co.nz> <556CB9E1.3000704@greengecko.co.nz> Message-ID: <9B01884C-8291-4652-B8B1-DB0540683814@nginx.com> > On 1 Jun 2015, at 21:00, steve wrote: > > HI > > On 01/06/15 23:51, dethegeek wrote: >> Hi >> >> Steve, thank you for your reply. >> >> I already read the page you mentionned, and as I understand it, either this >> feature is missing, either it is not documented. >> >> Andrew said TLS is not implemented, so I'll follow his advice to properly >> workaround this limitation. >> > Although I've never done this, the example is pretty specific for STARTTLS. I've not known nginx docs to be incorrect. > The example is for STARTTLS at the client to NGINX server level. I believe the question was for the NGINX server to upstream server level and unfortunately NGINX does not currently support this and has no configuration options for it. Kind Regards -- Andrew Hutchings (LinuxJedi) Senior Developer Advocate Nginx Inc. From matthew.oriordan at gmail.com Mon Jun 1 20:21:12 2015 From: matthew.oriordan at gmail.com (Matthew O'Riordan) Date: Mon, 1 Jun 2015 21:21:12 +0100 Subject: In-flight HTTP requests fail during hot configuration reload (SIGHUP) In-Reply-To: References: Message-ID: <294339CB-1A07-4321-A65D-59D670CB15AD@gmail.com> Hi Maxim Thanks for the reply. Few comments below with context for others reading this thread: >> 1. Telnet to the Nginx server on the HTTP port it is listening on. >> >> 2. Send a HTTP/1.1 request to the upstream server (172.17.0.51): >> GET /health HTTP/1.1 >> Host: localhost >> Connection: Keep-Alive >> >> This request succeeds and the response is valid >> >> 3. 
Start a new HTTP/1.1 request but don't finish the request >> i.e. send the following line using telnet: >> GET /health HTTP/1.1 >> >> 4. Whilst that request is now effectively in-flight because it's >> not finished and Nginx is waiting for the request to be >> completed, reconfigure Nginx with a SIGHUP signal. The only >> difference in the config preceding the SIGHUP signal is that the >> upstream server has changed, i.e. we intentionally want all new >> requests to go to the new upstream server. >> >> 5. Terminate the old upstream server 172.17.0.51 >> >> 6. Complete the in-flight HTTP/1.1 request started in point 3 >> above with: >> Host: localhost >> Connection: Keep-Alive >> >> 7. Nginx will consistently respond with a 502 if the old >> upstream server rejects the request, or a 504 if there is no >> response on that IP and port. > > Your problem is in step (5). While you've started new nginx > workers to handle new requests in step (4), this doesn't guarantee > that old upstream servers are no longer needed. I realise that is the problem, but I am not quite sure what the best strategy to correct this is. We are experiencing this problem in production environments because Nginx sits behind an Amazon ELB. ELB by default will maintain a connection to the client (a browser, for example) and to a backend server (Nginx in this case). What we seem to be experiencing is that because ELB has opened a connection to Nginx, Nginx has automatically assigned this socket to an upstream server that was healthy at the time the connection was opened. So even if a SIGHUP is sent to Nginx, ELB's next request will always be processed by the old upstream server chosen when the connection to Nginx was opened. Therefore, for us to do rolling deployments, we have to keep the old server running for periods of up to, say, 2 minutes to ensure existing connection requests are completed.
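The "keep the old server running" step described above can be scripted by waiting for the old workers to exit before stopping the old upstream. A sketch, assuming a typical Linux install where pgrep is available and nginx retitles draining workers (visible in ps output):

```shell
#!/bin/sh
# Sketch of a rolling-deploy drain step; the nginx binary and the
# worker process title are assumptions about a typical Linux setup.

# Reload only if nginx is present (guard keeps the sketch harmless elsewhere).
if command -v nginx >/dev/null 2>&1; then
    nginx -s reload    # SIGHUP: new workers pick up the new upstream
fi

# Old workers keep serving in-flight requests and show up in ps as
# "nginx: worker process is shutting down"; wait until they are gone.
# The [n] trick stops pgrep -f from matching this script itself.
while pgrep -f 'worker process is shutting dow[n]' >/dev/null 2>&1; do
    sleep 1
done

echo "old nginx workers drained"
```

Only after the loop exits is it safe to terminate the old upstream server.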
We have designed our upstream server so that it will complete existing in-flight requests; however, our upstream server thinks that an in-flight request is one that is being responded to, not one that is perhaps just opened where no data has been sent from the client to the server on the socket yet. > Only new connections will be processed by new worker processes with new > nginx configuration. Old workers continue to service requests > started before you've reconfigured nginx, and will only terminate > once all previously started requests are finished. This includes > requests already send to an upstream server and reading a > response, and requests not yet read from a client. For these > requests previous configuration apply, and you shouldn't stop old > upstream servers till old worker processes are shut down. Ok, however we do need a sensible timeout to ensure we do actually shut down our old upstream servers too. This is the problem I am finding with the strategy we currently have. ELB, for example, pipelines requests using a single TCP connection in accordance with the HTTP/1.1 spec. When a SIGHUP is sent to Nginx, how does it then deal with pipelined requests? Will it process all received requests and then issue a "Connection: Close" header, or will it process the current request and then close the connection? If the former, then it's quite possible that in the time those in-flight requests are responded to, another X number of requests will have been received in the pipeline. > Some details about reconfiguration process can be found here: > http://nginx.org/en/docs/control.html#reconfiguration I have read that page previously, but unfortunately I found it did not reveal much with regard to how it handles keep-alive and pipelining. Thanks again, Matt -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Mon Jun 1 20:31:50 2015 From: nginx-forum at nginx.us (badtzhou) Date: Mon, 01 Jun 2015 16:31:50 -0400 Subject: Occasionally 500 responses Message-ID: We were seeing occasional 500 responses from nginx on our production servers. There is nothing in the error log correlated to the event. Upon turning on the debug log and a TCP dump, we identified that the occasional 500 responses are caused by end users resetting the connection (especially end users who used IE). The error in the debug log showed as something like '2015/04/20 23:37:58 [info] 17423#0: *1188530641 writev() failed (104: Connection reset by peer)'. Is this supposed to be a 499 error response instead of a 500 error? When we see 500 errors, we usually think there is something wrong with the server. This is actually an end user resetting the connection. Is there any plan to fix that in the future? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259319,259319#msg-259319 From nginx-forum at nginx.us Tue Jun 2 04:02:09 2015 From: nginx-forum at nginx.us (George) Date: Tue, 02 Jun 2015 00:02:09 -0400 Subject: Nginx LibreSSL and BoringSSL alternative to OpenSSL ? Message-ID: Currently on CentOS 6/7, I source-compile my Nginx 1.9.x versions with static OpenSSL 1.0.2a patched for chacha20_poly1305, but I am thinking about switching to LibreSSL or BoringSSL (for equal preference group cipher support). The question I have is: is anyone else using Nginx with LibreSSL or BoringSSL on CentOS/RedHat? Any issues that needed working around, or any features lost? e.g. BoringSSL and OCSP stapling support, etc. Recommended steps for compilation with Nginx? thanks George Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259325,259325#msg-259325 From nginx-forum at nginx.us Tue Jun 2 05:27:11 2015 From: nginx-forum at nginx.us (mex) Date: Tue, 02 Jun 2015 01:27:11 -0400 Subject: Nginx LibreSSL and BoringSSL alternative to OpenSSL ?
In-Reply-To: References: Message-ID: Hi, nginx + libressl works without any issues; we have had it running since last summer and have seen no problems so far, but have not tested it with 1.8.x yet. The following explains how to do it: https://8ack.de/guides/nginx-libressl-first-test cheers, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259325,259327#msg-259327 From nginx-forum at nginx.us Tue Jun 2 05:47:52 2015 From: nginx-forum at nginx.us (dethegeek) Date: Tue, 02 Jun 2015 01:47:52 -0400 Subject: mail proxying In-Reply-To: <9B01884C-8291-4652-B8B1-DB0540683814@nginx.com> References: <9B01884C-8291-4652-B8B1-DB0540683814@nginx.com> Message-ID: <2bdf22c5a74243275c69ed4615638aba.NginxMailingListEnglish@forum.nginx.org> Hi As I understood the example given in the documentation, it is for a TLS session between a client and nginx. This is the next step in my roadmap. Right now, I'm focusing on the secure connection between nginx and the backend servers. It would still be interesting to implement what I need directly in nginx. As I understand how nginx works with the pop3 / imap / smtp protocols, I guess this would be a reasonable amount of work. Thank you Andrew and Steve. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259279,259328#msg-259328 From nginx-forum at nginx.us Tue Jun 2 06:44:13 2015 From: nginx-forum at nginx.us (George) Date: Tue, 02 Jun 2015 02:44:13 -0400 Subject: Nginx LibreSSL and BoringSSL alternative to OpenSSL ? In-Reply-To: References: Message-ID: Thanks. It seems that with LibreSSL 2.1.6 you no longer need the steps for creating .openssl/lib, copying files to that directory, and symlinking to make it work. It works on Nginx 1.9.1 with LibreSSL 2.1.6. Sweet!
nginx -V nginx version: nginx/1.9.1 built by gcc 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) built with LibreSSL 2.1.6 TLS SNI support enabled configure arguments: --with-ld-opt='-lrt -ljemalloc -Wl,-z,relro -Wl,-rpath,/usr/local/lib' --with-cc-opt='-m64 -mtune=native -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' --sbin-path=/usr/local/sbin/nginx --conf-path=/usr/local/nginx/conf/nginx.conf --with-http_ssl_module --with-http_gzip_static_module --with-http_stub_status_module --with-http_sub_module --with-http_addition_module --with-http_image_filter_module --with-http_secure_link_module --with-http_flv_module --with-http_realip_module --with-http_geoip_module --with-openssl-opt=enable-tlsext --add-module=../ngx-fancyindex-ngx-fancyindex --add-module=../ngx_cache_purge-2.3 --add-module=../headers-more-nginx-module-0.25 --add-module=../nginx-accesskey-2.0.3 --add-module=../nginx-http-concat-master --with-http_dav_module --add-module=../nginx-dav-ext-module-0.0.3 --add-module=../openresty-memc-nginx-module-1518da4 --add-module=../openresty-srcache-nginx-module-ffa9ab7 --add-module=../ngx_devel_kit-0.2.19 --add-module=../set-misc-nginx-module-0.28 --add-module=../echo-nginx-module-0.57 --add-module=../lua-nginx-module-0.9.16rc1 --add-module=../lua-upstream-nginx-module-0.02 --add-module=../lua-upstream-cache-nginx-module-0.1.1 --add-module=../nginx_upstream_check_module-0.3.0 --add-module=../nginx-module-vts --with-openssl=../portable-2.1.6 --with-libatomic --with-threads --with-stream --with-stream_ssl_module --with-pcre=../pcre-8.37 --with-pcre-jit --with-http_spdy_module --add-module=../ngx_pagespeed-release-1.9.32.3-beta Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259325,259331#msg-259331 From nginx-forum at nginx.us Tue Jun 2 08:00:18 2015 From: nginx-forum at nginx.us (George) Date: Tue, 02 Jun 2015 04:00:18 -0400 Subject: Nginx LibreSSL and BoringSSL alternative to OpenSSL ? 
In-Reply-To: References: Message-ID: Tested fine with ECC 256-bit and RSA 2048-bit SSL and chacha20_poly1305 https://community.centminmod.com/threads/nginx-and-libressl-alternative-to-openssl.3146/ :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259325,259333#msg-259333 From oliver.schrenk at gmail.com Tue Jun 2 09:52:41 2015 From: oliver.schrenk at gmail.com (Oliver Schrenk) Date: Tue, 2 Jun 2015 11:52:41 +0200 Subject: Find out if config file is loaded/used Message-ID: <0F720CBC-4C0B-4DDC-B711-75E997194C9A@gmail.com> Hi, we configured load balancing across around 10 machines in 2 clusters using symbolic links to various configuration files. We change the symbolic link to a different file if we need to do maintenance on one of the clusters and reload nginx. We are building some automation around this and want to make sure that a specific configuration is used. At the moment we just check the path of the symbolic link, but that doesn't necessarily mean that the configuration is live. Is there a way to query nginx which configuration (file) is loaded? Cheers, Oliver From mdounin at mdounin.ru Tue Jun 2 12:29:01 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Jun 2015 15:29:01 +0300 Subject: Find out if config file is loaded/used In-Reply-To: <0F720CBC-4C0B-4DDC-B711-75E997194C9A@gmail.com> References: <0F720CBC-4C0B-4DDC-B711-75E997194C9A@gmail.com> Message-ID: <20150602122901.GY26357@mdounin.ru> Hello! On Tue, Jun 02, 2015 at 11:52:41AM +0200, Oliver Schrenk wrote: > Hi, > > we configured load balancing across around 10 machines in 2 clusters > using symbolic links to various configuration files. We change > the symbolic link to a different file if we need to do > maintenance on one of the clusters and reload nginx. We are > building some automation around this and want to make sure that > a specific configuration is used. At the moment we just check > the path of the symbolic link, but that doesn't necessarily mean > that the configuration is live.
> > Is there a way to query nginx which configuration (file) is > loaded? The configuration file which is loaded is the one you've asked nginx to load - either with the "-c" argument, or by default. It can be found from "ps" and/or "nginx -V" output, but that won't try to resolve symlinks and hence won't help in your case. It's mostly trivial to configure nginx to return some configuration id though, like this: location = /configuration_id { return 200 some-configuration-id; } -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Jun 2 13:15:40 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Jun 2015 16:15:40 +0300 Subject: In-flight HTTP requests fail during hot configuration reload (SIGHUP) In-Reply-To: <294339CB-1A07-4321-A65D-59D670CB15AD@gmail.com> References: <294339CB-1A07-4321-A65D-59D670CB15AD@gmail.com> Message-ID: <20150602131540.GZ26357@mdounin.ru> Hello! On Mon, Jun 01, 2015 at 09:21:12PM +0100, Matthew O'Riordan wrote: [...] > > Your problem is in step (5). While you've started new nginx > > workers to handle new requests in step (4), this doesn't guarantee > > that old upstream servers are no longer needed. > > I realise that is the problem, but I am not quite sure what the > best strategy to correct this is. We are experiencing this > problem in production environments because Nginx sits behind an > Amazon ELB. ELB by default will maintain a connection to the > client (browser for example) and a backend server (Nginx in this > case). What we seem to be experiencing is that because ELB has > opened a connection to Nginx, Nginx has automatically assigned > this socket to an existing healthy upstream server. So even if > a SIGHUP is sent to Nginx, ELB's next request will always be > processed by the old upstream server at the time the connection > to Nginx was opened.
So therefore for us to do rolling > deployments, we have to keep the old server running for periods > of up to say 2 minutes to ensure existing connection requests > are completed. We have designed our upstream server so that it > will complete existing in-flight requests, however our upstream > server thinks that an in-flight request is one that is being > responded to, not one that is perhaps just opened and no data > has been sent from the client to the server on the socket yet. Ideally, you should keep old upstream servers running till all old worker processes are terminated. This way you won't depend on configuration and/or implementation details. This can be a bit too long though, as old workers are usually busy sending big responses to slow clients. > > Only new connections will be processed by new worker processes with new > > nginx configuration. Old workers continue to service requests > > started before you've reconfigured nginx, and will only terminate > > once all previously started requests are finished. This includes > > requests already sent to an upstream server and reading a > > response, and requests not yet read from a client. For these > > requests the previous configuration applies, and you shouldn't stop old > > upstream servers till old worker processes are shut down. > > Ok, however we do need a sensible timeout to ensure we do > actually shut down our old upstream servers too. This is the > problem I am finding with the strategy we currently have. > ELB, for example, pipelines requests using a single TCP > connection in accordance with the HTTP/1.1 spec. When a SIGHUP > is sent to Nginx, how does it then deal with pipelined requests? > Will it process all received requests and then issue a > "Connection: Close" header, or will it process the current > request and then close the connection?
If the former, then it's > quite possible that in the time those in-flight requests are > responded to, another X number of requests will have been > received in the pipeline as well. Upon a SIGHUP, nginx will finish processing the requests it has already started to process. No additional requests will be processed, including pipelined requests. This is considered to be an implementation detail though, not something guaranteed. -- Maxim Dounin http://nginx.org/ From zxcvbn4038 at gmail.com Tue Jun 2 14:29:31 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Tue, 2 Jun 2015 10:29:31 -0400 Subject: SSL session caching Message-ID: In my current setup I have nginx behind a load balancing router (OSPF) where each connection to the same address has about a 16% chance of hitting the same server as the last time. In a setup like that, does SSL session caching make any difference? I was thinking it through this morning and I'm betting that the browser would toss the old session ID unless it happened to be routed to the same backend, because in the other cases the backend servers would respond that they don't know the session. Is that correct? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jun 2 14:38:49 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Jun 2015 17:38:49 +0300 Subject: SSL session caching In-Reply-To: References: Message-ID: <20150602143849.GC26357@mdounin.ru> Hello! On Tue, Jun 02, 2015 at 10:29:31AM -0400, CJ Ess wrote: > In my current setup I have nginx behind a load balancing router (OSPF) > where each connection to the same address has about a 16% chance of hitting > the same server as the last time. > > In a setup like that, does SSL session caching make any difference?
I was > thinking it through this morning and I'm betting that the browser would > toss the old session ID unless it happened to be routed to the same > backend, because in the other cases the backend servers would respond that they > don't know the session. Is that correct? If a server doesn't know the session, it will simply create a new one. There is no performance difference between the "no session" and "unknown session" cases. That is, the session cache is still beneficial in such a setup - but it has a relatively small chance of helping. In such a setup, it should also be beneficial to configure shared session ticket keys; see http://nginx.org/r/ssl_session_ticket_key. -- Maxim Dounin http://nginx.org/ From luky-37 at hotmail.com Tue Jun 2 17:46:24 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 2 Jun 2015 19:46:24 +0200 Subject: SSL session caching In-Reply-To: References: Message-ID: > In my current setup I have nginx behind a load balancing router (OSPF) > where each connection to the same address has about a 16% chance of > hitting the same server as the last time. If you get your router to hash only the source and destination IP instead of the 5-tuple for the load balancing, that will fix your issue. Lukas From nginx-forum at nginx.us Tue Jun 2 20:55:34 2015 From: nginx-forum at nginx.us (knofun) Date: Tue, 02 Jun 2015 16:55:34 -0400 Subject: Any reason stub_status would return 0.00 for last CPU? Message-ID: <473004a018803c1f15d58b06691b8961.NginxMailingListEnglish@forum.nginx.org> The server is running really hot, so I am investigating.
Stub_status displays requests, but the CPU data is 0.00 for all: http://pastebin.com/U0pLCBQ8 I'm running 1.8.0 with php-fpm and fastcgi caching Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259350,259350#msg-259350 From viktor at szepe.net Tue Jun 2 21:04:08 2015 From: viktor at szepe.net (Szépe Viktor) Date: Tue, 02 Jun 2015 23:04:08 +0200 Subject: Add header based on fastcgi response In-Reply-To: <20150531224343.Horde.nHRdEzxu77MPQsVAAny61g4@szepe.net> References: <20150531224343.Horde.nHRdEzxu77MPQsVAAny61g4@szepe.net> Message-ID: <20150602230408.Horde.0mzkdcE6lipzgpqBEKleOA3@szepe.net> Could anyone comment on this? Idézem/Quoting Szépe Viktor : > Good morning! > > I'd like to add an X-Fastcgi-Cache header when there is a fastcgi > cache hit, i.e. when the response is stored to or retrieved from the cache. > > add_header X-Fastcgi-Cache 600; > > Could you help me? > > Thank you. Szépe Viktor -- +36-20-4242498 sms at szepe.net skype: szepe.viktor Budapest, XX. kerület From mdounin at mdounin.ru Wed Jun 3 01:33:09 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 3 Jun 2015 04:33:09 +0300 Subject: Any reason stub_status would return 0.00 for last CPU? In-Reply-To: <473004a018803c1f15d58b06691b8961.NginxMailingListEnglish@forum.nginx.org> References: <473004a018803c1f15d58b06691b8961.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150603013309.GE26357@mdounin.ru> Hello! On Tue, Jun 02, 2015 at 04:55:34PM -0400, knofun wrote: > Server is running really hot so investigating. Stub_status displays > requests, but CPU data is 0.00 for all: > > http://pastebin.com/U0pLCBQ8 > > I'm running 1.8.0 with php-fpm and fastcgi caching The output in question is not from nginx stub_status. Output from nginx stub_status looks like: Active connections: 291 server accepts handled requests 16630948 16630948 31070465 Reading: 6 Writing: 179 Waiting: 106 See http://nginx.org/en/docs/http/ngx_http_stub_status_module.html for details.
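That four-line format is stable enough to parse mechanically when building monitoring around it; a small hypothetical Python sketch (not part of nginx) against the sample output above:

```python
def parse_stub_status(text):
    """Parse the plain-text output of nginx's stub_status module.

    Expects the four-line format shown above:
      Active connections: N
      server accepts handled requests
      A H R
      Reading: x Writing: y Waiting: z
    """
    lines = [line.strip() for line in text.strip().splitlines()]
    active = int(lines[0].split(":")[1])
    accepts, handled, requests = (int(n) for n in lines[2].split())
    fields = lines[3].split()  # ["Reading:", x, "Writing:", y, "Waiting:", z]
    return {
        "active": active,
        "accepts": accepts,
        "handled": handled,
        "requests": requests,
        "reading": int(fields[1]),
        "writing": int(fields[3]),
        "waiting": int(fields[5]),
    }

sample = """\
Active connections: 291
server accepts handled requests
16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106"""

print(parse_stub_status(sample)["requests"])
```

Anything that does not parse this way (such as the php-fpm status page) is a sign the counters are coming from somewhere else.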
The output you've provided is likely from the php-fpm status page. -- Maxim Dounin http://nginx.org/ From ryd994 at 163.com Wed Jun 3 02:39:06 2015 From: ryd994 at 163.com (ryd994) Date: Wed, 03 Jun 2015 02:39:06 +0000 Subject: Add header based on fastcgi response In-Reply-To: <20150602230408.Horde.0mzkdcE6lipzgpqBEKleOA3@szepe.net> References: <20150531224343.Horde.nHRdEzxu77MPQsVAAny61g4@szepe.net> <20150602230408.Horde.0mzkdcE6lipzgpqBEKleOA3@szepe.net> Message-ID: On Tue, Jun 2, 2015 at 5:04 PM Szépe Viktor wrote: Could anyone comment on this? Idézem/Quoting Szépe Viktor : > Good morning! > > I'd like to add an X-Fastcgi-Cache header when there is a fastcgi > cache hit, when the response is stored to or retrieved from the cache. > > add_header X-Fastcgi-Cache 600; > > Could you help me? > > Thank you. Szépe Viktor -- +36-20-4242498 sms at szepe.net skype: szepe.viktor Budapest, XX. kerület _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx Does $upstream_cache_status do the work? Disclaimer: I'm a newbie. Please correct me if I'm wrong. -------------- next part -------------- An HTML attachment was scrubbed... URL: From viktor at szepe.net Wed Jun 3 13:11:11 2015 From: viktor at szepe.net (Szépe Viktor) Date: Wed, 03 Jun 2015 15:11:11 +0200 Subject: Add header based on fastcgi response In-Reply-To: References: <20150531224343.Horde.nHRdEzxu77MPQsVAAny61g4@szepe.net> <20150602230408.Horde.0mzkdcE6lipzgpqBEKleOA3@szepe.net> Message-ID: <20150603151111.Horde.Gnp_VGQWPum8GeBg3H3aPw1@szepe.net> > > > Does $upstream_cache_status do the work? > > Disclaimer: I'm a newbie. Please correct me if I'm wrong. Thanks. I don't use the http_upstream module. It is a basic PHP-FPM setup. Idézem/Quoting ryd994 : > On Tue, Jun 2, 2015 at 5:04 PM Szépe Viktor wrote: > > Could anyone comment on this? > > > Idézem/Quoting Szépe Viktor : > >> Good morning!
>> >> I'd like to add an X-Fastcgi-Cache header when there is a fastcgi >> cache hit, when the response is stored to or retrieved from the cache. >> >> add_header X-Fastcgi-Cache 600; >> >> Could you help me? >> >> Thank you. Szépe Viktor -- +36-20-4242498 sms at szepe.net skype: szepe.viktor Budapest, XX. kerület From mdounin at mdounin.ru Wed Jun 3 14:10:42 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 3 Jun 2015 17:10:42 +0300 Subject: Add header based on fastcgi response In-Reply-To: <20150603151111.Horde.Gnp_VGQWPum8GeBg3H3aPw1@szepe.net> References: <20150531224343.Horde.nHRdEzxu77MPQsVAAny61g4@szepe.net> <20150602230408.Horde.0mzkdcE6lipzgpqBEKleOA3@szepe.net> <20150603151111.Horde.Gnp_VGQWPum8GeBg3H3aPw1@szepe.net> Message-ID: <20150603141042.GH26357@mdounin.ru> Hello! On Wed, Jun 03, 2015 at 03:11:11PM +0200, Szépe Viktor wrote: > >Does $upstream_cache_status do the work? > > > >Disclaimer: I'm a newbie. Please correct me if I'm wrong. > > Thanks. I don't use the http_upstream module. > It is a basic PHP-FPM setup. As long as you use fastcgi, you are using the upstream module, as the fastcgi module uses the upstream module internally (much like proxy, uwsgi, scgi and memcached). -- Maxim Dounin http://nginx.org/ From viktor at szepe.net Wed Jun 3 17:10:17 2015 From: viktor at szepe.net (Szépe Viktor) Date: Wed, 03 Jun 2015 19:10:17 +0200 Subject: Add header based on fastcgi response In-Reply-To: <20150603141042.GH26357@mdounin.ru> References: <20150531224343.Horde.nHRdEzxu77MPQsVAAny61g4@szepe.net> <20150602230408.Horde.0mzkdcE6lipzgpqBEKleOA3@szepe.net> <20150603151111.Horde.Gnp_VGQWPum8GeBg3H3aPw1@szepe.net> <20150603141042.GH26357@mdounin.ru> Message-ID: <20150603191017.Horde.TDLBZLSldm0WwD0YZbnJWg6@szepe.net> Thank you. Now I am proved to be a sub-rookie. Idézem/Quoting Maxim Dounin : > Hello! > > On Wed, Jun 03, 2015 at 03:11:11PM +0200, Szépe Viktor wrote: > >> >Does $upstream_cache_status do the work?
>> > >> >Disclaimer: I'm a newbie. Please correct me if I'm wrong. >> >> Thanks. I don't use the http_upstream module. >> It is a basic PHP-FPM setup. > > As long as you use fastcgi, you are using the upstream module, as > the fastcgi module uses the upstream module internally (much like proxy, > uwsgi, scgi and memcached). > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Szépe Viktor -- +36-20-4242498 sms at szepe.net skype: szepe.viktor Budapest, XX. kerület From kpariani at zimbra.com Wed Jun 3 22:06:57 2015 From: kpariani at zimbra.com (Kunal Pariani) Date: Wed, 3 Jun 2015 17:06:57 -0500 (CDT) Subject: HUP signal to nginx doesn't work Ubuntu14 Message-ID: <482595213.2184121.1433369217404.JavaMail.zimbra@zimbra.com> Hello, I am seeing an issue while sending a HUP signal to nginx (for config reload) on Ubuntu 14. It just kills the master process & doesn't start the new worker processes. The same works just fine on CentOS 6.6 64-bit. $ ps -eaf | grep nginx zimbra 10860 1 0 16:05 ? 00:00:00 nginx: master process /opt/zimbra/nginx/sbin/nginx -c /opt/zimbra/conf/nginx.conf zimbra 10861 10860 0 16:05 ? 00:00:00 nginx: worker process zimbra 10862 10860 0 16:05 ? 00:00:00 nginx: worker process zimbra 10863 10860 0 16:05 ? 00:00:00 nginx: worker process zimbra 10864 10860 0 16:05 ? 00:00:00 nginx: worker process zimbra 18638 25945 0 16:22 pts/0 00:00:00 grep nginx zimbra 19994 1 0 Jun02 ? 00:01:51 /usr/bin/perl -w /opt/zimbra/libexec/zmstat-nginx $ $ kill -HUP 10860 $ $ ps -eaf | grep nginx zimbra 10861 1 0 16:05 ? 00:00:00 nginx: worker process <------ same old worker processes & master process is killed zimbra 10862 1 0 16:05 ? 00:00:00 nginx: worker process zimbra 10863 1 0 16:05 ? 00:00:00 nginx: worker process zimbra 10864 1 0 16:05 ?
00:00:00 nginx: worker process zimbra 18666 18641 0 16:22 pts/1 00:00:00 tail -f log/nginx.log zimbra 18986 25945 0 16:23 pts/0 00:00:00 grep nginx zimbra 19994 1 0 Jun02 ? 00:01:51 /usr/bin/perl -w /opt/zimbra/libexec/zmstat-nginx $ From nginx.log, I can see the SIGHUP is received: 2015/06/03 16:23:14 [notice] 10860#0: signal 1 (SIGHUP) received, reconfiguring 2015/06/03 16:23:14 [debug] 10860#0: wake up, sigio 0 2015/06/03 16:23:14 [notice] 10860#0: reconfiguring 2015/06/03 16:23:14 [debug] 10860#0: posix_memalign: 00000000021EAA50:16384 @16 2015/06/03 16:23:14 [debug] 10860#0: posix_memalign: 000000000223D570:32768 @16 Any ideas on why this doesn't work on Ubuntu only? Thanks -Kunal -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfransosi at gmail.com Wed Jun 3 22:16:39 2015 From: tfransosi at gmail.com (Thiago Farina) Date: Wed, 3 Jun 2015 19:16:39 -0300 Subject: Thanks Message-ID: Thanks to all the devs who put Nginx together and made this awesome piece of work! With it and freedns.afraid.org I was able to host my site from my home server. -- Thiago Farina From mdounin at mdounin.ru Thu Jun 4 02:14:29 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 4 Jun 2015 05:14:29 +0300 Subject: HUP signal to nginx doesn't work Ubuntu14 In-Reply-To: <482595213.2184121.1433369217404.JavaMail.zimbra@zimbra.com> References: <482595213.2184121.1433369217404.JavaMail.zimbra@zimbra.com> Message-ID: <20150604021429.GQ26357@mdounin.ru> Hello! On Wed, Jun 03, 2015 at 05:06:57PM -0500, Kunal Pariani wrote: > Hello, > Am seeing an issue while sending HUP signal to nginx (for config reload) on Ubuntu14. It just kills the master process & doesn't start the new worker processes. The same works just fine on CentOS 6.6 64-bit > > $ ps -eaf | grep nginx > zimbra 10860 1 0 16:05 ? 00:00:00 nginx: master process /opt/zimbra/nginx/sbin/nginx -c /opt/zimbra/conf/nginx.conf > zimbra 10861 10860 0 16:05 ?
00:00:00 nginx: worker process > zimbra 10862 10860 0 16:05 ? 00:00:00 nginx: worker process > zimbra 10863 10860 0 16:05 ? 00:00:00 nginx: worker process > zimbra 10864 10860 0 16:05 ? 00:00:00 nginx: worker process > zimbra 18638 25945 0 16:22 pts/0 00:00:00 grep nginx > zimbra 19994 1 0 Jun02 ? 00:01:51 /usr/bin/perl -w /opt/zimbra/libexec/zmstat-nginx > $ > $ kill -HUP 10860 > $ > $ ps -eaf | grep nginx > zimbra 10861 1 0 16:05 ? 00:00:00 nginx: worker process <------ same old worker processes & master process is killed > zimbra 10862 1 0 16:05 ? 00:00:00 nginx: worker process > zimbra 10863 1 0 16:05 ? 00:00:00 nginx: worker process > zimbra 10864 1 0 16:05 ? 00:00:00 nginx: worker process > zimbra 18666 18641 0 16:22 pts/1 00:00:00 tail -f log/nginx.log > zimbra 18986 25945 0 16:23 pts/0 00:00:00 grep nginx > zimbra 19994 1 0 Jun02 ? 00:01:51 /usr/bin/perl -w /opt/zimbra/libexec/zmstat-nginx > $ > > From nginx.log, I can see the SIGHUP is received > 2015/06/03 16:23:14 [notice] 10860#0: signal 1 (SIGHUP) received, reconfiguring > 2015/06/03 16:23:14 [debug] 10860#0: wake up, sigio 0 > 2015/06/03 16:23:14 [notice] 10860#0: reconfiguring > 2015/06/03 16:23:14 [debug] 10860#0: posix_memalign: 00000000021EAA50:16384 @16 > 2015/06/03 16:23:14 [debug] 10860#0: posix_memalign: 000000000223D570:32768 @16 > > Any ideas on why this doesn't work on Ubuntu only? It looks like the master process dies for some reason. First of all, I would recommend testing whether you are able to reproduce the problem without 3rd-party (or your own) modules/patches. See also http://wiki.nginx.org/Debugging for some basic debugging hints. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Jun 4 07:09:17 2015 From: nginx-forum at nginx.us (mex) Date: Thu, 04 Jun 2015 03:09:17 -0400 Subject: Nginx LibreSSL and BoringSSL alternative to OpenSSL ? In-Reply-To: References: Message-ID: Thank you for your comment; I'll re-test with 1.8 and adjust the document accordingly.
I think the config workaround is obsolete too. cheers, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259325,259372#msg-259372 From fxn at hashref.com Thu Jun 4 07:16:45 2015 From: fxn at hashref.com (Xavier Noria) Date: Thu, 4 Jun 2015 09:16:45 +0200 Subject: Accept-Encoding: gzip and the Vary header Message-ID: I have used gzip_static for some years without any issue that I am aware of with the default gzip_vary off. My reasoning is that the HTTP spec says in http://tools.ietf.org/html/rfc2616#page-145 that "the Vary field value advises the user agent about the criteria that were used to select the representation", and my understanding is that compressed content is not a representation per se. The representation would be the result of undoing what Content-Encoding says. So, given the same .html endpoint you could for example serve content in a language chosen according to Accept-Language. That's a representation that depends on headers in my understanding. If you serve the same .css over and over again no matter what, the representation does not vary. The compressed thing that is transferred is not the representation itself, so no Vary is needed. Do you guys agree with that reading of the spec? Then, you read posts about buggy proxy servers. Have any of you found a real (modern) case in which the lack of "Vary: Accept-Encoding" resulted in compressed content being delivered to a client that didn't support it? Or are those proxies mythical creatures as of today? Thanks! Xavier -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Thu Jun 4 08:03:18 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 04 Jun 2015 11:03:18 +0300 Subject: Thanks In-Reply-To: References: Message-ID: <55700646.2060908@nginx.com> On 6/4/15 1:16 AM, Thiago Farina wrote: > Thanks all devs who put Nginx together and made this awesome piece of work!
> > With it and freedns.afraid.org I was able to host my site from my home server. > Thanks for using nginx! -- Maxim Konovalov http://nginx.com From devel at jasonwoods.me.uk Thu Jun 4 08:56:20 2015 From: devel at jasonwoods.me.uk (Jason Woods) Date: Thu, 4 Jun 2015 09:56:20 +0100 Subject: Accept-Encoding: gzip and the Vary header In-Reply-To: References: Message-ID: Hi > On 4 Jun 2015, at 08:16, Xavier Noria wrote: > > I have used gzip_static for some years without any issue that I am aware of with the default gzip_vary off. > > My reasoning is that the HTTP spec says in > > http://tools.ietf.org/html/rfc2616#page-145 > > that "the Vary field value advises the user agent about the criteria that were used to select the representation", and my understanding is that compressed content is not a representation per se. The representation would be the result of undoing what Content-Encoding says. This is fine to do. However, there's a chance a proxy may cache an uncompressed version if a client does not support compression and its response ends up in a proxy cache. Any subsequent user also behind that cache, even if it accepts compression, would be served it uncompressed in most cases. > So, given the same .html endpoint you could for example serve content in a language chosen according to Accept-Language. That's a representation that depends on headers in my understanding. If you serve the same .css over and over again no matter what, the representation does not vary. The compressed thing that is transferred is not the representation itself, so no Vary needed. > > Do you guys agree with that reading of the spec? This bit of the spec (same page at bottom) explains it better I think: An HTTP/1.1 server SHOULD include a Vary header field with any cacheable response that is subject to server-driven negotiation. Doing so allows a cache to properly interpret future requests on that resource and informs the user agent about the presence of negotiation on that resource. 
I would say compression is a server-driven negotiation. I would also say, based on my understanding, that when the spec says representation it includes encoding such as compression. That is, you can represent a resource with gzip or without gzip. > Then, you read posts about buggy proxy servers. Have any of you found a real (modern) case in which the lack of "Vary: Accept-Encoding" resulted in compressed content being delivered to a client that didn't support it? Or are those proxies mythical creatures as of today? Proxies are bound by the spec too, so yes, it would be a buggy proxy. They can't send Content-Encoding: gzip unless the client sends Accept-Encoding. I'm not entirely sure what would happen though - I guess it would either bypass the compressed cached version or replace it with an uncompressed one. Most likely it is up to the proxy implementation. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jun 4 09:28:44 2015 From: nginx-forum at nginx.us (smsmaddy1981) Date: Thu, 04 Jun 2015 05:28:44 -0400 Subject: DNS configuration to invoke complete URL In-Reply-To: <20150526233531.GE2957@daoine.org> References: <20150526233531.GE2957@daoine.org> Message-ID: <942bdfc17b8da1e998bef07dc223092d.NginxMailingListEnglish@forum.nginx.org> Hi Francis The expectation is to redirect the request to the absolute URL (http://workspace.corp.no/workspace/agentLogin) when an end user accesses the request http:// NGINX is installed on server Node-01 The content to be loaded (workspace-dns-name) is deployed on another server Node-02 Could you please suggest the steps to achieve this?
Best regards, Madhu Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258982,259376#msg-259376 From fxn at hashref.com Thu Jun 4 09:49:18 2015 From: fxn at hashref.com (Xavier Noria) Date: Thu, 4 Jun 2015 11:49:18 +0200 Subject: Accept-Encoding: gzip and the Vary header In-Reply-To: References: Message-ID: On Thu, Jun 4, 2015 at 10:56 AM, Jason Woods wrote: An HTTP/1.1 server SHOULD include a Vary header field with any > cacheable response that is subject to server-driven negotiation. > Doing so allows a cache to properly interpret future requests on that > resource and informs the user agent about the presence of negotiation on that resource. > > You are right, and the section about server-driven negotiation http://tools.ietf.org/html/rfc2616#page-72 explicitly mentions Accept-Encoding as an example. So case closed. Next question is: why is gzip_vary off by default? Isn't the most common case that you want it enabled? Xavier PS: In my next reincarnation I promise to only work on specs written as axiomatic systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Jun 4 13:11:25 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 4 Jun 2015 16:11:25 +0300 Subject: Accept-Encoding: gzip and the Vary header In-Reply-To: References: Message-ID: <20150604131125.GS26357@mdounin.ru> Hello! On Thu, Jun 04, 2015 at 11:49:18AM +0200, Xavier Noria wrote: > On Thu, Jun 4, 2015 at 10:56 AM, Jason Woods wrote: > > An HTTP/1.1 server SHOULD include a Vary header field with any > > cacheable response that is subject to server-driven negotiation. > > Doing so allows a cache to properly interpret future requests on that > > resource and informs the user agent about the presence of negotiation on that resource. > > > > > You are right, and the section about server-driven negotiation > > http://tools.ietf.org/html/rfc2616#page-72 > > explicitly mentions Accept-Encoding as an example. So case closed.
> > Next question is: why is gzip_vary off by default? Isn't the most common > case that you want it enabled? The problem with Vary is that it causes bad effects on shared caches, in particular, it normally results in cache duplication. So by default nginx doesn't add Vary, and also doesn't send compressed content to proxies (gzip_proxied off). This approach works with both HTTP/1.0 and HTTP/1.1 caches, and doesn't cause cache duplication. See related discussion in this thread: http://mailman.nginx.org/pipermail/nginx/2015-March/046965.html -- Maxim Dounin http://nginx.org/ From fxn at hashref.com Thu Jun 4 13:41:32 2015 From: fxn at hashref.com (Xavier Noria) Date: Thu, 4 Jun 2015 15:41:32 +0200 Subject: Accept-Encoding: gzip and the Vary header In-Reply-To: <20150604131125.GS26357@mdounin.ru> References: <20150604131125.GS26357@mdounin.ru> Message-ID: On Thu, Jun 4, 2015 at 3:11 PM, Maxim Dounin wrote: The problem with Vary is that it causes bad effects on shared caches, in > particular, it normally results in cache duplication. You mean that if client A requests a resource with Accept-Encoding: gzip, and client B without, and the resource has Cache-Control: public, then a shared cache would store the compressed and uncompressed responses thus having the content kind of repeated? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Jun 4 14:25:40 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 4 Jun 2015 17:25:40 +0300 Subject: Accept-Encoding: gzip and the Vary header In-Reply-To: References: <20150604131125.GS26357@mdounin.ru> Message-ID: <20150604142540.GT26357@mdounin.ru> Hello! On Thu, Jun 04, 2015 at 03:41:32PM +0200, Xavier Noria wrote: > On Thu, Jun 4, 2015 at 3:11 PM, Maxim Dounin wrote: > > The problem with Vary is that it causes bad effects on shared caches, in > > particular, it normally results in cache duplication.
> > > You mean that if client A requests a resource with Accept-Encoding: gzip, > and client B without, and the resource has Cache-Control: public, then a > shared cache would store the compressed and uncompressed responses thus > having the content kind of repeated? Not really. The main problem is that there are more than 2 clients, and many of them will use different Accept-Encoding headers, e.g.: gzip,deflate gzip, deflate gzip,deflate,sdch gzip deflate, gzip identity gzip,deflate,lzma,sdch gzip;q=1.0,deflate;q=0.6,identity;q=0.3 gzip;q=1.0, deflate;q=0.8, chunked;q=0.6, identity;q=0.4, *;q=0 deflate identity,gzip,deflate gzip, deflate, peerdist gzip, deflate, identity gzip, x-gzip gzip, deflate, compress As a result, there will be many copies of compressed and uncompressed responses in the cache. -- Maxim Dounin http://nginx.org/ From ryd994 at 163.com Thu Jun 4 15:12:38 2015 From: ryd994 at 163.com (Yidong) Date: Thu, 4 Jun 2015 23:12:38 +0800 (CST) Subject: DNS configuration to invoke complete URL In-Reply-To: <942bdfc17b8da1e998bef07dc223092d.NginxMailingListEnglish@forum.nginx.org> References: <20150526233531.GE2957@daoine.org> <942bdfc17b8da1e998bef07dc223092d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <33f40927.49453.14dbf2195c1.Coremail.ryd994@163.com> Does the following line on <workspace-dns-name> work?

location = / {
return 301 http://workspace.corp.no/workspace/agentLogin;
}

You need some web server on <workspace-dns-name> to return a 301.
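A fuller sketch of such a redirect as a dedicated server block might look like the following; this assumes plain HTTP on port 80 and that the short hostname resolves to the machine running nginx (both assumptions, not confirmed in the thread):

```nginx
# Hypothetical server block answering on the short hostname.
server {
    listen 80;
    server_name workspace;

    # Send the bare request on to the full login URL.
    location = / {
        return 301 http://workspace.corp.no/workspace/agentLogin;
    }
}
```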
Or you can try port forwarding to Node-01, although IMHO, not a good idea. At 2015-06-04 17:28:44, "smsmaddy1981" wrote: >Hi Francis > >Expectation is to redirect the request to the absolute URL >(http://workspace.corp.no/workspace/agentLogin) when an end user access the >request http:// > >NGINX is installed on server Node-01 >The content to be loaded (workspace-dns-name) is deployed on another server >Node-02 > >Steps to achieve pls.? > > > >Best regards, >Madhu > >Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258982,259376#msg-259376 > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx From kpariani at zimbra.com Thu Jun 4 17:20:23 2015 From: kpariani at zimbra.com (Kunal Pariani) Date: Thu, 4 Jun 2015 12:20:23 -0500 (CDT) Subject: HUP signal to nginx doesn't work Ubuntu14 In-Reply-To: <20150604021429.GQ26357@mdounin.ru> References: <482595213.2184121.1433369217404.JavaMail.zimbra@zimbra.com> <20150604021429.GQ26357@mdounin.ru> Message-ID: <174813480.2260321.1433438423760.JavaMail.zimbra@zimbra.com> Hello, Thanks. Yeah looks like it has something to do with our patches although not sure what exactly. Worked just fine with a plain nginx binary. -Kunal ----- Original Message ----- From: "Maxim Dounin" To: nginx at nginx.org Sent: Wednesday, June 3, 2015 7:14:29 PM Subject: Re: HUP signal to nginx doesn't work Ubuntu14 Hello! On Wed, Jun 03, 2015 at 05:06:57PM -0500, Kunal Pariani wrote: > Hello, > Am seeing an issue while sending HUP signal to nginx (for config reload) on Ubuntu14. It just kills the master process & doesn't start the new worker processes. The same works just fine on CentOS 6.6 64-bit > > $ ps -eaf | grep nginx > zimbra 10860 1 0 16:05 ? 00:00:00 nginx: master process /opt/zimbra/nginx/sbin/nginx -c /opt/zimbra/conf/nginx.conf > zimbra 10861 10860 0 16:05 ? 00:00:00 nginx: worker process > zimbra 10862 10860 0 16:05 ? 
00:00:00 nginx: worker process > zimbra 10863 10860 0 16:05 ? 00:00:00 nginx: worker process > zimbra 10864 10860 0 16:05 ? 00:00:00 nginx: worker process > zimbra 18638 25945 0 16:22 pts/0 00:00:00 grep nginx > zimbra 19994 1 0 Jun02 ? 00:01:51 /usr/bin/perl -w /opt/zimbra/libexec/zmstat-nginx > $ > $ kill -HUP 10860 > $ > $ ps -eaf | grep nginx > zimbra 10861 1 0 16:05 ? 00:00:00 nginx: worker process <------ same old worker processes & master process is killed > zimbra 10862 1 0 16:05 ? 00:00:00 nginx: worker process > zimbra 10863 1 0 16:05 ? 00:00:00 nginx: worker process > zimbra 10864 1 0 16:05 ? 00:00:00 nginx: worker process > zimbra 18666 18641 0 16:22 pts/1 00:00:00 tail -f log/nginx.log > zimbra 18986 25945 0 16:23 pts/0 00:00:00 grep nginx > zimbra 19994 1 0 Jun02 ? 00:01:51 /usr/bin/perl -w /opt/zimbra/libexec/zmstat-nginx > $ > > From nginx.log, i can see the SIGHUP is received > 2015/06/03 16:23:14 [notice] 10860#0: signal 1 (SIGHUP) received, reconfiguring > 2015/06/03 16:23:14 [debug] 10860#0: wake up, sigio 0 > 2015/06/03 16:23:14 [notice] 10860#0: reconfiguring > 2015/06/03 16:23:14 [debug] 10860#0: posix_memalign: 00000000021EAA50:16384 @16 > 2015/06/03 16:23:14 [debug] 10860#0: posix_memalign: 000000000223D570:32768 @16 > > Any ideas on why this doesn't work on ubuntu only ? It looks like master process dies for some reason. First of all I would recommend you to test if you are able to reproduce the problem without 3rd party (or your own) modules/patches. See also http://wiki.nginx.org/Debugging for some basic debugging hints. 
-- Maxim Dounin http://nginx.org/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Thu Jun 4 18:13:19 2015 From: nginx-forum at nginx.us (dethegeek) Date: Thu, 04 Jun 2015 14:13:19 -0400 Subject: reverse proxy SMTP - How distinguish MUA and MTA Message-ID: <8ee07c714de9c19d50978b5375424009.NginxMailingListEnglish@forum.nginx.org> Hi Still building an nginx reverse proxy for my mail servers. Thanks to the community, I now have a secure connection between nginx and my backend mail server. POP and IMAP are working well, from a MUA to my server. I'm wondering how nginx can manage SMTP connections, as SMTP is used by both MUAs and MTAs. A MUA must authenticate before sending mails; and my http_auth backend is able to authenticate users. More precisely, the authentication backend answers a server (auth-server / auth-port) depending on the domain of the destination email address. Now I guess I have to accept incoming emails without authentication if the client is an MTA, but I don't find an obvious way to distinguish a MUA from an MTA and let my auth backend behave depending on that. How can I achieve that? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259384,259384#msg-259384 From fxn at hashref.com Thu Jun 4 19:36:43 2015 From: fxn at hashref.com (Xavier Noria) Date: Thu, 4 Jun 2015 21:36:43 +0200 Subject: Accept-Encoding: gzip and the Vary header In-Reply-To: <20150604142540.GT26357@mdounin.ru> References: <20150604131125.GS26357@mdounin.ru> <20150604142540.GT26357@mdounin.ru> Message-ID: Ahhh, I see. We've seen that if you want cache + compression, then you need Vary. So by contraposition, the trade-off of gzip_vary off is that the response can't be cached at all, in the sense that you're not sending the proper headers. *No matter if the cache is private or shared*. At least in theory.
If you turn gzip_vary on to get some caching, but keep gzip_proxied off, and Cache-Control is "public", then I guess clients behind those shared caches would get uncompressed content unless the shared caches themselves compress on the fly (does that happen?) In the typical use case of a CSS file with a fingerprint in its filename for aggressive caching I guess you actually need to go with (off the top of my head): gzip_vary on; gzip_proxied on; expires max; add_header Cache-Control "public"; -------------- next part -------------- An HTML attachment was scrubbed... URL: From rainer at ultra-secure.de Thu Jun 4 19:40:46 2015 From: rainer at ultra-secure.de (Rainer Duffner) Date: Thu, 4 Jun 2015 21:40:46 +0200 Subject: reverse proxy SMTP - How distinguish MUA and MTA In-Reply-To: <8ee07c714de9c19d50978b5375424009.NginxMailingListEnglish@forum.nginx.org> References: <8ee07c714de9c19d50978b5375424009.NginxMailingListEnglish@forum.nginx.org> Message-ID: > Am 04.06.2015 um 20:13 schrieb dethegeek : > > Hi > > Still building a nginx reverse proxy for my mail servers. Thanks to the > community, I now have a secure connection between nginx and my backend mail > server. > > POP and IMAP are working well, from a MUA to my server. > > I'm wondering how nginx can manage SMTP coonnections as it is used by both > MUA and MTA. > > A MUA must authenticate before sending mails; and my http_auth backend is > able to authenticate users. More precisely, the authentication backend > answers a server (auth-server / auth-port) depending on the domain of the > destination email address. > > Now I guess I have to accept incoming emails without authentication if the > client is a MTA, but I don't find a obvious way to distingish a MUA and a > MTA and let mu auth backend behave depending on that. > > How to achieve that ? > MUA = Port 587 + 465 MTA = Port 25 Maybe use something like Haraka for SMTP? 
It's supposed to be for SMTP servers what NGINX is for web servers ;-) Rainer From nginx-forum at nginx.us Thu Jun 4 19:47:24 2015 From: nginx-forum at nginx.us (dethegeek) Date: Thu, 04 Jun 2015 15:47:24 -0400 Subject: reverse proxy SMTP - How distinguish MUA and MTA In-Reply-To: References: Message-ID: Hi Rainer, Thank you for this quick answer. So my idea to declare a server for port 25 and another for ports 465 and 587 is probably good. Your pitch about Haraka led me to investigate it. Maybe it will solve my need to route emails as I need to do. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259384,259388#msg-259388 From fxn at hashref.com Thu Jun 4 19:49:37 2015 From: fxn at hashref.com (Xavier Noria) Date: Thu, 4 Jun 2015 21:49:37 +0200 Subject: Accept-Encoding: gzip and the Vary header In-Reply-To: References: <20150604131125.GS26357@mdounin.ru> <20150604142540.GT26357@mdounin.ru> Message-ID: On Thu, Jun 4, 2015 at 9:36 PM, Xavier Noria wrote: gzip_proxied on; > s/on/any/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfransosi at gmail.com Fri Jun 5 01:27:06 2015 From: tfransosi at gmail.com (Thiago Farina) Date: Thu, 4 Jun 2015 22:27:06 -0300 Subject: TIME OUT Message-ID: Hi! Could nginx be the cause of (google chrome message): " This webpage is not available ERR_CONNECTION_TIMED_OUT " My IP is dynamic and changed this afternoon, and after that all I'm getting now is this TIMEOUT. I have stopped and restarted nginx many times now but still the same problem. -- Thiago Farina From tfransosi at gmail.com Fri Jun 5 01:40:01 2015 From: tfransosi at gmail.com (Thiago Farina) Date: Thu, 4 Jun 2015 22:40:01 -0300 Subject: TIME OUT In-Reply-To: References: Message-ID: On Thu, Jun 4, 2015 at 10:27 PM, Thiago Farina wrote: > Hi!
> > Could nginx be the cause of (google chrome message): > > " > This webpage is not available > > ERR_CONNECTION_TIMED_OUT > " > > My IP change is dynamic and changed this afternoon, and after that all > I'm getting now is this TIMEOUT. I have stopped and restarted nginx > many times now but still the same problem. > Looks like 'keepalive_timeout 65;' has something to do with it. I have commented it out and the timeout seems to have gone. -- Thiago Farina From nginx-forum at nginx.us Fri Jun 5 05:44:14 2015 From: nginx-forum at nginx.us (EuroHoster) Date: Fri, 05 Jun 2015 01:44:14 -0400 Subject: HUP signal to nginx doesn't work Ubuntu14 In-Reply-To: <174813480.2260321.1433438423760.JavaMail.zimbra@zimbra.com> References: <174813480.2260321.1433438423760.JavaMail.zimbra@zimbra.com> Message-ID: <1a590753d31895a4020d0d68c5797e82.NginxMailingListEnglish@forum.nginx.org> I can't confirm this. HUP is working great on Ubuntu 14.04. p.s. nginx was installed from the nginx repository, not from the Ubuntu repo. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259369,259394#msg-259394 From d.krikov at ngenix.net Fri Jun 5 12:24:23 2015 From: d.krikov at ngenix.net (Dmitry Krikov) Date: Fri, 5 Jun 2015 12:24:23 +0000 Subject: listen backlog for stream servers Message-ID: <8F785C20-4086-4275-BBD3-1EBCEC836122@ngenix.net> Hi, Listen backlog in Nginx defaults to NGX_LISTEN_BACKLOG=511 on Linux & other platforms (excepting FreeBSD & MacOS), not to the system somaxconn (why does it differ for different OSes?). It's not a problem to increase it using the "backlog" option of the "listen" directive for HTTP servers (http://nginx.org/en/docs/http/ngx_http_core_module.html#listen), but there is no such option for stream servers (http://nginx.org/en/docs/stream/ngx_stream_core_module.html#listen). Is there any proper way to increase the listen queue length for stream servers (without patching the source code) or globally? Seems to be something to fix.
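For reference, the http-side syntax being contrasted here is the documented "backlog" parameter of the http "listen" directive; the port and value below are illustrative only:

```nginx
# http {} context: "backlog" is a documented parameter of the http-side
# "listen" directive. The value 4096 is illustrative. The stream-side
# "listen" directive (at the time of this thread) had no such parameter.
server {
    listen 80 backlog=4096;
}
```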
-- Best Regards, Dmitry Krikov -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Jun 6 07:25:21 2015 From: nginx-forum at nginx.us (nima0102) Date: Sat, 06 Jun 2015 03:25:21 -0400 Subject: Writing first test Nginx module Message-ID: Hello everyone, Recently, for some of our needs on Nginx, I have been working on developing a new module. So I have started to develop a basic test module. My intention is to call a function before selecting one of the servers in the upstream section. For this, I set the NGX_STREAM_MODULE type. When running nginx with the new option "stream_test" in the upstream configuration, I get the error below: ERROR: nginx: [emerg] "stream_test" directive is not allowed here in xxxxxx I would appreciate it if somebody could assist me with this. Thanks. Following is the source code of the test module #######################################Test Module############################### #include #include #include #include static char *ngx_http_test(ngx_conf_t *cf, ngx_command_t *cmd,void *conf); static ngx_command_t ngx_stream_upstream_test_commands[] = { { ngx_string("stream_test"), NGX_STREAM_SRV_CONF|NGX_CONF_NOARGS, ngx_http_test, 0, 0, NULL }, ngx_null_command }; static ngx_stream_module_t ngx_http_test_ctx = { NULL,NULL,NULL,NULL }; ngx_module_t ngx_http_test_module = { NGX_MODULE_V1, &ngx_http_test_ctx, /* module context */ ngx_stream_upstream_test_commands, /* module directives */ NGX_STREAM_MODULE, /* module type */ NULL, /* init master */ NULL, /* init module */ NULL, /* init process */ NULL, /* init thread */ NULL, /* exit thread */ NULL, /* exit process */ NULL, /* exit master */ NGX_MODULE_V1_PADDING }; static char *ngx_http_test(ngx_conf_t *cf, ngx_command_t *cmd,void *conf) { //ngx_stream_upstream_srv_conf_t *uscf; //uscf = ngx_stream_conf_get_module_srv_conf(cf, ngx_stream_upstream_module); ngx_conf_log_error(NGX_LOG_ERR,cf,0,"Test Function was called!"); /*if (uscf->peer.init_upstream) {
ngx_conf_log_error(NGX_LOG_WARN, cf, 0,"Nima Test Func: load balancing method redefined"); }*/ return NGX_CONF_OK; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259411,259411#msg-259411 From nginx-forum at nginx.us Sat Jun 6 10:07:21 2015 From: nginx-forum at nginx.us (nginxsantos) Date: Sat, 06 Jun 2015 06:07:21 -0400 Subject: TCP Connection details In-Reply-To: <90b7d602f5a27dbec2cecad5d67f94e0.NginxMailingListEnglish@forum.nginx.org> References: <90b7d602f5a27dbec2cecad5d67f94e0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <340f2f553c49e1054fa600c64b191ba6.NginxMailingListEnglish@forum.nginx.org> Does anyone know about this? I feel it is very important to know the network delay between the LB and the upstream servers Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258958,259412#msg-259412 From nginx-forum at nginx.us Sat Jun 6 12:27:48 2015 From: nginx-forum at nginx.us (nima0102) Date: Sat, 06 Jun 2015 08:27:48 -0400 Subject: Writing first test Nginx module In-Reply-To: References: Message-ID: Hi again, After some searching in the Nginx source code, I eventually found the issue. The issue was the module type. I had defined it as NGX_STREAM_MODULE, but it must be NGX_HTTP_MODULE because I intended to add some features to the upstream module. Also I had to change the configuration directive from NGX_STREAM_SRV_CONF|NGX_CONF_NOARGS to NGX_HTTP_UPS_CONF|NGX_CONF_NOARGS. The type of ngx_*_ctx was not correct, and it was changed to ngx_http_module_t. Sincerely, Nima Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259411,259413#msg-259413 From al-nginx at none.at Sun Jun 7 10:41:38 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Sun, 07 Jun 2015 12:41:38 +0200 Subject: Right use of 'if' Message-ID: <0153b0c56cef3727c1632c2e0439989a@none.at> Hai. I am trying to block some attacks with map and if.
The requests look like: ############# /?id=../../../../../../etc/passwd%00&page=../../../../../../etc/passwd%00&file=../../../../../../etc/passwd%00&inc=../../../../../../etc/passwd%00&load=../../../../../../etc/passwd%00&path=../../../../../../etc/passwd%00 /index.php?id=../../../../../../etc/passwd%00&page=../../../../../../etc/passwd%00&file=../../../../../../etc/passwd%00&inc=../../../../../../etc/passwd%00&load=../../../../../../etc/passwd%00&path=../../../../../../etc/passwd%00 /index.php?culture=../../../../../../../../../../windows/win.ini&name=SP.JSGrid.Res&rev=laygpE0lqaosnkB4iqx6mA%3D%3D&sections=All%3Cscript%3Ealert(12345)%3C/script%3Ez /index.php?test=../../../../../../../../../../boot.ini ############# My solution: ################# # http request line: "GET /index.php?culture=../../../../../../../../../../windows/win.ini&name=SP.JSGrid.Res&rev=laygpE0lqaosnkB4iqx6mA%3D%3D&sections=All%3Cscript%3Ealert(12345)%3C/script%3Ez HTTP/1.1" # http uri: "/index.php" # http args: "culture=../../../../../../../../../../windows/win.ini&name=SP.JSGrid.Res&rev=laygpE0lqaosnkB4iqx6mA%3D%3D&sections=All%3Cscript%3Ealert(12345)%3C/script%3Ez" # http exten: "php" map $args $block { default 0; "~(boot|win)\.ini" 1; "~etc/passwd" 1; } location = /index.php { if ($block) { # include is not allowed here ;-/ # include /home/nginx/server/conf/global_setting_for_log_to_fail2ban_for_blocking.conf; access_log logs/fail2ban.log combined; return 403; } } ######################### Is this the most efficient way for nginx? BR Aleks From nginx-forum at nginx.us Sun Jun 7 11:44:12 2015 From: nginx-forum at nginx.us (itpp2012) Date: Sun, 07 Jun 2015 07:44:12 -0400 Subject: Right use of 'if' In-Reply-To: <0153b0c56cef3727c1632c2e0439989a@none.at> References: <0153b0c56cef3727c1632c2e0439989a@none.at> Message-ID: <3ad9a6bff19b070698fe0eb017aba37f.NginxMailingListEnglish@forum.nginx.org> Have a look at /conf/nginx-simple-WAF.conf on this site http://nginx-win.ecsds.eu/ Works on any OS.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259419,259420#msg-259420 From tfransosi at gmail.com Sun Jun 7 13:34:18 2015 From: tfransosi at gmail.com (Thiago Farina) Date: Sun, 7 Jun 2015 10:34:18 -0300 Subject: handling subdirectories location Message-ID: Hi, I have the following in my nginx configuration: server { listen 8080; server_name myservername.com; root /data/www/myservername.com; location / { index index.php index.html index.htm; try_files $uri $uri/ /index.html; } # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000. location ~ \.php$ { try_files $uri =404; # fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } But how do I configure 'location' so /subdir/index.php is processed instead of /index.php? Basically I'm trying to put third-party apps in their own /subdirs/ to test them, but when they navigate to index.php I'm thrown back to the root /index.html, for example. Thanks in advance, -- Thiago Farina From nginx-forum at nginx.us Sun Jun 7 14:41:05 2015 From: nginx-forum at nginx.us (nginxsantos) Date: Sun, 07 Jun 2015 10:41:05 -0400 Subject: Nginx LUA Message-ID: Can anyone please help me with a lua configuration which I can embed into nginx.conf to send the following separately in the access log: user_agent_os user_agent_browser user_agent_version At present all these fields are embedded in http_user_agent and I am writing a parser to parse them at the receiver end. I am looking for some input on how I can send them separately as different fields from Nginx itself.
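One nginx-only sketch of this idea (no Lua) uses map to derive a field from $http_user_agent and a custom log_format that logs it separately; the regexes, variable names, and log path below are illustrative assumptions and would need tuning for real user agents:

```nginx
# http {} context. Illustrative only: classify $http_user_agent into a
# coarse browser name; real classification needs far more patterns.
map $http_user_agent $ua_browser {
    default            "other";
    "~*firefox"        "firefox";
    "~*chrome"         "chrome";
    "~*msie|trident"   "msie";
}

# Log the derived field next to the raw header so both are available.
log_format ua_split '$remote_addr [$time_local] "$request" $status '
                    'ua_browser=$ua_browser "$http_user_agent"';

access_log /var/log/nginx/access.log ua_split;
```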
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259422,259422#msg-259422 From tfransosi at gmail.com Sun Jun 7 17:51:25 2015 From: tfransosi at gmail.com (Thiago Farina) Date: Sun, 7 Jun 2015 14:51:25 -0300 Subject: handling subdirectories location In-Reply-To: References: Message-ID: The issue I'm facing is that http://myservername.com/foo/index.php/secure/login redirects to http://myservername.com/index.html. Can someone help me fix this? -- Thiago Farina From nginx-forum at nginx.us Mon Jun 8 12:39:51 2015 From: nginx-forum at nginx.us (smsmaddy1981) Date: Mon, 08 Jun 2015 08:39:51 -0400 Subject: DNS configuration to invoke complete URL In-Reply-To: <33f40927.49453.14dbf2195c1.Coremail.ryd994@163.com> References: <33f40927.49453.14dbf2195c1.Coremail.ryd994@163.com> Message-ID: <44021856f2cd387890da58d705e1869d.NginxMailingListEnglish@forum.nginx.org> Yes, the absolute URL http://workspace.corp.no/workspace/agentLogin works fine. I just need to understand the configuration approach to redirect the URL request (http://workspance) to the absolute URL (http://workspace.corp.no/workspace/agentLogin) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258982,259427#msg-259427 From prameswar.lal at bizruntime.com Mon Jun 8 13:34:22 2015 From: prameswar.lal at bizruntime.com (Prameswar Lal) Date: Mon, 8 Jun 2015 19:04:22 +0530 Subject: problem : nginx with magento Message-ID: Hi, I am using nginx with Magento, which uses FastCGI. Whenever I type the URL http://example.com/index.php, index.php starts downloading. Can anyone help me? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: example.com.conf Type: application/octet-stream Size: 860 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: fastcgi.conf Type: application/octet-stream Size: 1034 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: fastcgi_params Type: application/octet-stream Size: 964 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx.conf Type: application/octet-stream Size: 1462 bytes Desc: not available URL: From anoopalias01 at gmail.com Mon Jun 8 13:42:38 2015 From: anoopalias01 at gmail.com (Anoop Alias) Date: Mon, 8 Jun 2015 19:12:38 +0530 Subject: problem : nginx with magento In-Reply-To: References: Message-ID: On Mon, Jun 8, 2015 at 7:04 PM, Prameswar Lal wrote: > hi i am using nginx with magento which use fastCGI . > whenever i type in url http://example.com/index.php then index.php start > downloading . > can anyone help me ? > > I think you have misspelled a directive In example.conf -- fastcqi_index index.php; ++ fastcgi_index index.php; Good Luck ! > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* GNUSYS -------------- next part -------------- An HTML attachment was scrubbed... URL: From prameswar.lal at bizruntime.com Mon Jun 8 13:47:15 2015 From: prameswar.lal at bizruntime.com (Prameswar Lal) Date: Mon, 8 Jun 2015 19:17:15 +0530 Subject: problem : nginx with magento In-Reply-To: References: Message-ID: Still the same problem. I have checked with different OSes (CentOS, Ubuntu) and different versions of PHP and nginx. On Mon, Jun 8, 2015 at 7:12 PM, Anoop Alias wrote: > > > On Mon, Jun 8, 2015 at 7:04 PM, Prameswar Lal < > prameswar.lal at bizruntime.com> wrote: > >> hi i am using nginx with magento which use fastCGI . >> whenever i type in url http://example.com/index.php then index.php start >> downloading . >> can anyone help me ?
>> >> > I think you have misspelled a directive > In example.conf > > -- fastcqi_index index.php; > ++ fastcgi_index index.php; > > Good Luck ! > >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > *Anoop P Alias* > GNUSYS > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfransosi at gmail.com Mon Jun 8 17:07:54 2015 From: tfransosi at gmail.com (Thiago Farina) Date: Mon, 8 Jun 2015 14:07:54 -0300 Subject: handling subdirectories location In-Reply-To: References: Message-ID: What I'm trying to do is something like the following: http://domainame.com/site1/index.php http://domainame.com/site2/index.php http://domainame.com/site3/index.php The closest I could find on Google was http://programmersjunk.blogspot.com.br/2013/11/nginx-multiple-sites-in-subdirectories.html Ruslan, could you help me set up the nginx config for this? -- Thiago Farina From nginx-forum at nginx.us Mon Jun 8 19:14:00 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Mon, 08 Jun 2015 15:14:00 -0400 Subject: fastcgi_pass / error page /error code in HTTP rsp Message-ID: <625320ea055fd73afbdea65616da3d1f.NginxMailingListEnglish@forum.nginx.org> Hi, I would like nginx to map a fastcgi error response to a static error page, and include the HTTP error code in its HTTP response header; e.g. 1. have nginx return the proper error code in its header to the client. 2. have nginx return the proper error page based on the fastcgi_pass server's response error code. For example, if the fastcgi server returns '400 Bad Request', I would like NGINX to return "Status code: 400" along with the bad request html static error page. Is #2 feasible when fastcgi_pass is used?
I was not able to do it, unless I used error code 302, a redirect. In other words, the only way I got nginx to return a specific error page was to have the fastcgi server respond with a redirect "Status: 302 Found\r\n" "Location: //badRequest.html\r\n" The problem with this method was that the error code (400 for example) did not appear in the HTTP response of the error page (requirement #1 was not met). How can I have nginx/fastcgi_pass return an error page with the HTTP error code (400 for example) appearing in the HTTP header? My fastcgi server response did include 'Status' in the fastcgi response. I tried not using the redirect 302 method but my attempts failed; I had the following inside the server block or inside the location/fastcgi_pass block of nginx.conf: error_page 400 = /bad_request.html; location = /bad_request.html { try_files //bad_request.html 50x.html; } I also tried the 'internal' directive, though I was not sure of its usage as the path of the html error page was not specified. Any help on how to get nginx to return the error code in the HTTP header response and an error page when fastcgi is used would be greatly appreciated, thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259436,259436#msg-259436 From mdounin at mdounin.ru Mon Jun 8 19:24:43 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 8 Jun 2015 22:24:43 +0300 Subject: fastcgi_pass / error page /error code in HTTP rsp In-Reply-To: <625320ea055fd73afbdea65616da3d1f.NginxMailingListEnglish@forum.nginx.org> References: <625320ea055fd73afbdea65616da3d1f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150608192443.GI26357@mdounin.ru> Hello! On Mon, Jun 08, 2015 at 03:14:00PM -0400, nginxuser100 wrote: > Hi, I would like nginx to map a fastcgi error response a static error page, > and include the HTTP error code in its HTTP response header; e.g. > 1. have nginx return the proper error code in its header to the client. > 2.
have nginx return the proper error page based on the fastcgi_pass > server's response error code. > For example, if the fastcgi server returns '400 Bad Request', I would like > NGINX to return "Status code: 400" along with the bad request html static > error page. > > Is #2 feasible when fastcgi_pass is used? I was not able to do it, unless I > used error code 302, a redirect. In other words, the only way I got nginx to > return a specific error page was to have the fastcgi server respond with a > redirect > "Status: 302 Found\r\n" > "Location: //badRequest.html\r\n" > The problem with this method was that the error code (400 for example) did > not appear in the HTTP response of the error page (requirement #1 was not > met). > > How can I have nginx/fastcgi_pass return an error page with the HTTP error > code (400 for example) appear in the HTTP header? My fastcgi server response > did include 'Status' in the fastcgi response. > I tried not using the redirect 302 method but my attempts failed; I had the > following inside the server block or inside the location/fastcgi_pass block > of nginx.conf: > > error_page 400 = /bad_request.html; > > location = /bad_request.html { > try_files //bad_request.html 50x.html; > } > I tried the 'internal' directive also though I was not sure of its usage as > the path of the html error page was not specified. > Any help on how to get nginx to return the error code in the HTTP header > response and an error page when fastcgi is used would be greatly > appreciated, thank you! http://nginx.org/r/fastcgi_intercept_errors -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Jun 8 21:52:51 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Mon, 08 Jun 2015 17:52:51 -0400 Subject: fastcgi_pass / error page /error code in HTTP rsp In-Reply-To: <20150608192443.GI26357@mdounin.ru> References: <20150608192443.GI26357@mdounin.ru> Message-ID: Thank you Maxim, that was what I was looking for. 
However, it is still not returning the static error page. Does nginx expect a certain response format from the fcgi server? I tried: "HTTP/1.1 400 Bad Request\r\nStatus: 400 Bad Request\r\n"; and "HTTP/1.1 400 Bad Request"; The nginx.conf has: root ...; location xxx { include fastcgi_params; fastcgi_pass ...; error_page 400 /my_bad_request; <-- inside or outside this location block didn't make a difference fastcgi_intercept_errors on; } location /my_bad_request { try_files /bad_request.html =400; } Thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259436,259446#msg-259446 From francis at daoine.org Mon Jun 8 22:46:35 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 8 Jun 2015 23:46:35 +0100 Subject: handling subdirectories location In-Reply-To: References: Message-ID: <20150608224635.GX2957@daoine.org> On Mon, Jun 08, 2015 at 02:07:54PM -0300, Thiago Farina wrote: Hi there, > What I'm trying to do is something like the following: > > http://domainame.com/site1/index.php > http://domainame.com/site2/index.php > http://domainame.com/site3/index.php In nginx, one request is handled in one location. See http://nginx.org/r/location, for example, for the rules by which one location is chosen. In this case, you probably want one location ^~ /site1/ {} block, plus one location ^~ /site2/ {} block; and nested in those, one or more blocks which relate to requests which should be handled by php. If you want "every request should be handled by /site1/index.php", you write one configuration. If you want "every request that ends in .php should be handled by a matching file", you write a different configuration. If you want "every request that ends in .php, or that includes the string '.php/', should be handled by a matching file", you write yet another configuration. From what I've read in these mails, I'm afraid that I do not know what exactly it is that you want.
For example: does "/foo/index.php/secure/login" refer to exactly that request; or to any request that starts with "/foo/index.php/;", or to any request that starts with "/foo/" and does not exactly name a file on the filesystem; or to something else? Possibly something like root /var/www/html; include fastcgi.conf; location ^~ /site1/ { location ~ \.php($|/) { fastcgi_split_path_info (.*.php)(/.*); fastcgi_pass unix:php.sock; } } location ^~ /site2/ { location ~ \.php($|/) { fastcgi_split_path_info (.*.php)(/.*); fastcgi_pass unix:php.sock; } } will do some of what you want. Note that trying to configure a "php app" to work in a subdirectory does require help from the php app. Some insist on being installed in /. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Jun 8 22:49:40 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 8 Jun 2015 23:49:40 +0100 Subject: DNS configuration to invoke complete URL In-Reply-To: <44021856f2cd387890da58d705e1869d.NginxMailingListEnglish@forum.nginx.org> References: <33f40927.49453.14dbf2195c1.Coremail.ryd994@163.com> <44021856f2cd387890da58d705e1869d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150608224940.GY2957@daoine.org> On Mon, Jun 08, 2015 at 08:39:51AM -0400, smsmaddy1981 wrote: Hi there, > Yes, the absolute URL http://workspace.corp.no/workspace/agentLogin works > fine > > I just need to understand the configuration approach to redirect the URL > request (http://workspance) to absolute url > (http://workspace.corp.no/workspace/agentLogin) Was this not answered in http://forum.nginx.org/read.php?2,258982,259381 ? 
If the servers workspace.corp.no and workspance are the same, then location = / { return 301 /workspace/agentLogin; } and if they are two different servers, then on the workspance one: location = / { return 301 http://workspace.corp.no/workspace/agentLogin; } f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Jun 8 22:53:34 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 8 Jun 2015 23:53:34 +0100 Subject: problem : nginx with magento In-Reply-To: References: Message-ID: <20150608225334.GZ2957@daoine.org> On Mon, Jun 08, 2015 at 07:04:22PM +0530, Prameswar Lal wrote: Hi there, > hi i am using nginx with magento which use fastCGI . > whenever i type in url http://example.com/index.php then index.php start > downloading . > can anyone help me ? What does the file /var/log/nginx/nginx-access.log say about this request? As in: are you certain that this request is handled by this server{} block? And if so: what is the content of the file index.php? Does it include any php tags at all? Does it include short tags that your fastcgi server does not honour? What is the start of the output of "curl -i http://example.com/index.php"? Specifically: what are all of the http headers returned? f -- Francis Daly francis at daoine.org From tfransosi at gmail.com Mon Jun 8 23:08:10 2015 From: tfransosi at gmail.com (Thiago Farina) Date: Mon, 8 Jun 2015 20:08:10 -0300 Subject: handling subdirectories location In-Reply-To: <20150608224635.GX2957@daoine.org> References: <20150608224635.GX2957@daoine.org> Message-ID: Hi Francis, First, thanks for replying. On Mon, Jun 8, 2015 at 7:46 PM, Francis Daly wrote: > On Mon, Jun 08, 2015 at 02:07:54PM -0300, Thiago Farina wrote: > > Hi there, > >> What I'm trying to do is something like the following: >> >> http://domainame.com/site1/index.php >> http://domainame.com/site2/index.php >> http://domainame.com/site3/index.php > > In nginx, one request is handled in one location. 
> > See http://nginx.org/r/location, for example, for the rules by which > one location is chosen. > > In this case, you probably want one > > location ^~ /site1/ {} > > block, plus one > > location ^~ /site2/ {} > > block; and nested in those, one or more blocks which relate to requests > which should be handled by php. > > If you want "every request should be handled by /site1/index.php", > you write one configuration. If you want "every request that ends > in .php should be handled by a matching file", you write a different > configuration. If you want "every request that ends in .php, or that > includes the string '.php/', should be handled by a matching file, > you write yet another configuration. > > From what I've read in these mails, I'm afraid that I do not know what > exactly it is that you want. > > For example: does "/foo/index.php/secure/login" refer to exactly that > request; or to any request that starts with "/foo/index.php/;", or to > any request that starts with "/foo/" and does not exactly name a file > on the filesystem; or to something else? > > > Possibly something like > > root /var/www/html; > include fastcgi.conf; > location ^~ /site1/ { > location ~ \.php($|/) { > fastcgi_split_path_info (.*.php)(/.*); > fastcgi_pass unix:php.sock; > } > } > location ^~ /site2/ { > location ~ \.php($|/) { > fastcgi_split_path_info (.*.php)(/.*); > fastcgi_pass unix:php.sock; > } > } > As you suggested I tried the following config: server { listen 80; server_name domainame.com; root /data/www/domainame.com; include fastcgi.conf; location ^~ /gocart/ { location ~ \.php($|/) { fastcgi_split_path_info (.*.php)(/.*); fastcgi_pass 127.0.0.1:9000; #unix:/var/run/php5-fpm.sock; } } } But when I navigate to http://domainame.com/gocart nginx returns 403 Forbidden. 
Basically what I'm trying to do is to run gocart (https://gocartdv.com) and phorum (www.phorum.org) in domainame.com/gocart and domainame.com/forum (they are in /data/www/domainname.com/gocart/ and /data/www/domainame.com/forum). -- Thiago Farina From francis at daoine.org Mon Jun 8 23:26:44 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 9 Jun 2015 00:26:44 +0100 Subject: handling subdirectories location In-Reply-To: References: <20150608224635.GX2957@daoine.org> Message-ID: <20150608232644.GA2957@daoine.org> On Mon, Jun 08, 2015 at 08:08:10PM -0300, Thiago Farina wrote: > On Mon, Jun 8, 2015 at 7:46 PM, Francis Daly wrote: > > On Mon, Jun 08, 2015 at 02:07:54PM -0300, Thiago Farina wrote: Hi there, > server { > listen 80; > server_name domainame.com; > > root /data/www/domainame.com; > include fastcgi.conf; > > location ^~ /gocart/ { > location ~ \.php($|/) { > fastcgi_split_path_info (.*.php)(/.*); > fastcgi_pass 127.0.0.1:9000; #unix:/var/run/php5-fpm.sock; > } > } > } > > But when I navigate to http://domainame.com/gocart nginx returns 403 Forbidden. What response do you want? Be as specific as possible. (I suspect that the answer is "a http redirect to http://domainame.com/gocart/"; but I'm reluctant to guess.) Perhaps adding "index index.html index.php;" at server-level will help? The machine will do exactly what you tell it to. It will only do what you want it to, if you tell it what you want. > Basically what I'm trying to do is to run gocart > (https://gocartdv.com) and phorum (www.phorum.org) in I don't see any obvious "how to install this on a web server, from scratch" documentation on either of those web sites. Perhaps I'm looking in the wrong place. (Or perhaps the authors don't want them to be installed anywhere.) 
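(For illustration, here is the config posted earlier in this thread with the index suggestion folded in at server level — the paths and the fastcgi address are the poster's placeholders, not a tested setup:)

```nginx
server {
    listen 80;
    server_name domainame.com;

    root /data/www/domainame.com;
    index index.html index.php;     # so a request for /gocart/ resolves to a file
    include fastcgi.conf;

    location ^~ /gocart/ {
        location ~ \.php($|/) {
            fastcgi_split_path_info (.*\.php)(/.*);
            fastcgi_pass 127.0.0.1:9000;   # or unix:/var/run/php5-fpm.sock
        }
    }
}
```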
f -- Francis Daly francis at daoine.org From tfransosi at gmail.com Mon Jun 8 23:53:45 2015 From: tfransosi at gmail.com (Thiago Farina) Date: Mon, 8 Jun 2015 20:53:45 -0300 Subject: handling subdirectories location In-Reply-To: <20150608232644.GA2957@daoine.org> References: <20150608224635.GX2957@daoine.org> <20150608232644.GA2957@daoine.org> Message-ID: On Mon, Jun 8, 2015 at 8:26 PM, Francis Daly wrote: > On Mon, Jun 08, 2015 at 08:08:10PM -0300, Thiago Farina wrote: >> On Mon, Jun 8, 2015 at 7:46 PM, Francis Daly wrote: >> > On Mon, Jun 08, 2015 at 02:07:54PM -0300, Thiago Farina wrote: > > Hi there, > >> server { >> listen 80; >> server_name domainame.com; >> >> root /data/www/domainame.com; >> include fastcgi.conf; >> >> location ^~ /gocart/ { >> location ~ \.php($|/) { >> fastcgi_split_path_info (.*.php)(/.*); >> fastcgi_pass 127.0.0.1:9000; #unix:/var/run/php5-fpm.sock; >> } >> } >> } >> >> But when I navigate to http://domainame.com/gocart nginx returns 403 Forbidden. > > What response do you want? > > Be as specific as possible. > > (I suspect that the answer is "a http redirect to > http://domainame.com/gocart/"; but I'm reluctant to guess.) > Isn't obvious what I want? I want the page to load, so if it is "http redirect to http://domainame.com/gocart/", then yes. > Perhaps adding "index index.html index.php;" at server-level will help? > Yes, it helps load the start page. But all the links redirects to http://domainame.com/gocart/. What I'm trying to do is not common? I thought it would be pretty common to do what I'm trying to do, but the solution seems to be very complicated for something that does not look so complicated at glance. Trying once more, what I'm trying to setup is pretty much exactly this https://gist.github.com/LkeMitchll/b6d8aea6c0845e3a341f, http://stackoverflow.com/questions/24820657/nginx-multiple-php-sites-in-sub-directories. > The machine will do exactly what you tell it to. 
It will only do what > you want it to, if you tell it what you want. > >> Basically what I'm trying to do is to run gocart >> (https://gocartdv.com) and phorum (www.phorum.org) in > > I don't see any obvious "how to install this on a web server, from scratch" > documentation on either of those web sites. Perhaps I'm looking in the > wrong place. > That seems unrelated to this thread. But they all have install instructions. -- Thiago Farina From nginx-forum at nginx.us Tue Jun 9 00:13:03 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Mon, 08 Jun 2015 20:13:03 -0400 Subject: fastcgi_pass / error page /error code in HTTP rsp In-Reply-To: References: <20150608192443.GI26357@mdounin.ru> Message-ID: <2b20999f54fd9ae27bfbcfdf72e2c1a8.NginxMailingListEnglish@forum.nginx.org> Hi, I expected fastcgi_intercept_errors to return a static error page AND to include the HTTP error code (e.g. 400) in the HTTP response header. From what I see, it returns the static error page but with 200 OK. Is it the expected behavior? If yes, is there a way to have nginx return the error page and the error code to the client? Thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259436,259456#msg-259456 From steve at greengecko.co.nz Tue Jun 9 00:22:44 2015 From: steve at greengecko.co.nz (steve) Date: Tue, 09 Jun 2015 12:22:44 +1200 Subject: problem : nginx with magento In-Reply-To: References: Message-ID: <557631D4.8060800@greengecko.co.nz> Hi, On 09/06/15 01:34, Prameswar Lal wrote: > hi i am using nginx with magento which use fastCGI . > whenever i type in url http://example.com/index.php then index.php > start downloading . > can anyone help me ?
> location ~ \.php$ { fastcqi_index index.php; fastcgi_pass 127.0.0.1:9000; fastcgi_pass unix:/var/run/php5-fpm.sock; include fastcgi_params; fastcgi_intercept_errors on; # By all means use a different server for the fcgi processes if you need to fastcgi_split_path_info ^(.+.php)(.*)$; } You have 2 fastcgi_pass lines, one to 127.0.0.1:9000 and one to unix:/var/run/php5-fpm.sock Only one of these should be there, the correct one will be defined in your php-fpm configuration, which isn't shown. I use a backend predefined in nginx.conf to identify the php-fpm pool to use. The 2 relevant location blocks in a base install of mine... location / { try_files $uri $uri/ /index.php?$args; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; include fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass backend; } which will work with just about every PHP based CMS out there... well enough to get you started.... Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Tue Jun 9 00:59:40 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 9 Jun 2015 01:59:40 +0100 Subject: handling subdirectories location In-Reply-To: References: <20150608224635.GX2957@daoine.org> <20150608232644.GA2957@daoine.org> Message-ID: <20150609005940.GB2957@daoine.org> On Mon, Jun 08, 2015 at 08:53:45PM -0300, Thiago Farina wrote: > On Mon, Jun 8, 2015 at 8:26 PM, Francis Daly wrote: > > On Mon, Jun 08, 2015 at 08:08:10PM -0300, Thiago Farina wrote: Hi there, > >> server { > >> listen 80; > >> server_name domainame.com; > >> > >> root /data/www/domainame.com; > >> include fastcgi.conf; > >> > >> location ^~ /gocart/ { > >> location ~ \.php($|/) { > >> fastcgi_split_path_info (.*.php)(/.*); > >> fastcgi_pass 127.0.0.1:9000; #unix:/var/run/php5-fpm.sock; > >> } > >> } > >> } > >> > >> But when I navigate to http://domainame.com/gocart nginx returns 403 Forbidden. > > > > What response do you want? > > > > Be as specific as possible. > > > > (I suspect that the answer is "a http redirect to > > http://domainame.com/gocart/"; but I'm reluctant to guess.) > > Isn't obvious what I want? I want the page to load, so if it is "http > redirect to http://domainame.com/gocart/", then yes. It's presumably obvious to you. If you are explicit, it avoids anyone else having to guess. What does "page to load" mean? In this context, consider every question to be "what did you do; what did you see; what did you expect to see?". So: "curl -i http://domainame.com/gocart". Expect http 301 to http://domainame.com/gocart/. Get that? Good. "curl -i http://domainame.com/gocart/". Expect the content of the file /data/www/domainame.com/gocart/index.html? Or the php-processed output of the file /data/www/domainame.com/gocart/index.php? Do you get one of those, or something else? Does a http header indicate that the php server was involved? Type the curl command. If the response is what you expect, report that and be happy. 
If the response is not what you expect, report how it differs. If you do not know what you expect, seek more help -- what is the response when the thing is installed in the author-intended environment? > > Perhaps adding "index index.html index.php;" at server-level will help? > > > Yes, it helps load the start page. But all the links redirects to > http://domainame.com/gocart/. What links are they? Again: be explicit. Spell everything out very slowly. Maybe these links come from something in the php that expects a specific variable to be set to a special value; maybe they come from an application config file; maybe you can adjust your nginx config to make everything work; maybe you need to change something outside of nginx. What curl command do you issue? What response do you get? What response do you want instead? > What I'm trying to do is not common? I thought it would be pretty > common to do what I'm trying to do, but the solution seems to be very > complicated for something that does not look so complicated at glance. What does your favourite search engine return for "install [this application] in a subdirectory on nginx"? If it is not very much, perhaps that is because the config is so trivially obvious that it is not worth documenting; or perhaps no-one has ever tried it before. Which is more likely? It looks like you are a trailblazer. This means you have to do the legwork to be the first person to ever get it working. (So far, I'm seeing about a dozen lines of config, which does not strike me as especially complicated.) > Trying once more, what I'm trying to setup is pretty much exactly this > https://gist.github.com/LkeMitchll/b6d8aea6c0845e3a341f, > http://stackoverflow.com/questions/24820657/nginx-multiple-php-sites-in-sub-directories. The first link confuses me: there's an 84-line config, and I don't see how anything after line 42 will ever be used. The second link doesn't show an obvious problem. 
> > I don't see any obvious "how to install this on a web server, from scratch" > > documentation on either of those web sites. Perhaps I'm looking in the > > wrong place. > > That seems unrelated to this thread. But they all have install instructions. If the authors will describe how they expect things to be configured in their intended environment (web server and whatever supporting infrastructure is required), that may help work out how things should be configured in any other environment (such as nginx). Good luck with it, f -- Francis Daly francis at daoine.org From prameswar.lal at bizruntime.com Tue Jun 9 01:46:14 2015 From: prameswar.lal at bizruntime.com (Prameswar Lal) Date: Tue, 9 Jun 2015 07:16:14 +0530 Subject: problem : nginx with magento In-Reply-To: <557631D4.8060800@greengecko.co.nz> References: <557631D4.8060800@greengecko.co.nz> Message-ID: Hi steve , i have checked with your setting also . its not working . On Tue, Jun 9, 2015 at 5:52 AM, steve wrote: > Hi, > > On 09/06/15 01:34, Prameswar Lal wrote: > > hi i am using nginx with magento which use fastCGI . > whenever i type in url http://example.com/index.php then index.php start > downloading . > can anyone help me ? > > location ~ \.php$ { > > fastcqi_index index.php; > fastcgi_pass 127.0.0.1:9000; > fastcgi_pass unix:/var/run/php5-fpm.sock; > include fastcgi_params; > fastcgi_intercept_errors on; > # By all means use a different server for the fcgi processes if you need to > > fastcgi_split_path_info ^(.+.php)(.*)$; > > } > > You have 2 fastcgi_pass lines, one to 127.0.0.1:9000 and one to unix:/var/run/php5-fpm.sock > > Only one of these should be there, the correct one will be defined in your php-fpm configuration, which isn't shown. > > > I use a backend predefined in nginx.conf to identify the php-fpm pool to use. The 2 relevant location blocks in a base install of mine... 
> > > location / { > try_files $uri $uri/ /index.php?$args; > } > > location ~ \.php$ { > try_files $uri =404; > > fastcgi_split_path_info ^(.+\.php)(/.+)$; > > include fastcgi_params; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > fastcgi_pass backend; > } > > > which will work with just about every PHP based CMS out there... well enough to get you started.... > > > Steve > > -- > Steve Holdoway BSc(Hons) MIITPhttp://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prameswar.lal at bizruntime.com Tue Jun 9 02:00:13 2015 From: prameswar.lal at bizruntime.com (Prameswar Lal) Date: Tue, 9 Jun 2015 07:30:13 +0530 Subject: problem : nginx with magento In-Reply-To: References: <557631D4.8060800@greengecko.co.nz> Message-ID: Hi , i am sending current access log after changing according to steve block . Not problem with index.php . it has problem with all page of .php in magento tool . i am using magento tool so this is file of magento . not written by me . magento use fastcgi programing in it . i have tested this magento with apache it works fine. i have tested on different system centos , ubuntu fresh machine. output of curl is . root at example:/var/log/nginx# curl -I http://example.com/index.php HTTP/1.1 200 OK Server: nginx/1.8.0 Date: Tue, 09 Jun 2015 01:59:02 GMT Content-Type: application/octet-stream Content-Length: 2642 Last-Modified: Wed, 14 May 2014 16:03:36 GMT Connection: keep-alive ETag: "537393d8-a52" Accept-Ranges: bytes On Tue, Jun 9, 2015 at 7:16 AM, Prameswar Lal wrote: > Hi steve , > i have checked with your setting also . its not working . 
> > > On Tue, Jun 9, 2015 at 5:52 AM, steve wrote: > >> Hi, >> >> On 09/06/15 01:34, Prameswar Lal wrote: >> >> hi i am using nginx with magento which use fastCGI . >> whenever i type in url http://example.com/index.php then index.php start >> downloading . >> can anyone help me ? >> >> location ~ \.php$ { >> >> fastcqi_index index.php; >> fastcgi_pass 127.0.0.1:9000; >> fastcgi_pass unix:/var/run/php5-fpm.sock; >> include fastcgi_params; >> fastcgi_intercept_errors on; >> # By all means use a different server for the fcgi processes if you need to >> >> fastcgi_split_path_info ^(.+.php)(.*)$; >> >> } >> >> You have 2 fastcgi_pass lines, one to 127.0.0.1:9000 and one to unix:/var/run/php5-fpm.sock >> >> Only one of these should be there, the correct one will be defined in your php-fpm configuration, which isn't shown. >> >> >> I use a backend predefined in nginx.conf to identify the php-fpm pool to use. The 2 relevant location blocks in a base install of mine... >> >> >> location / { >> try_files $uri $uri/ /index.php?$args; >> } >> >> location ~ \.php$ { >> try_files $uri =404; >> >> fastcgi_split_path_info ^(.+\.php)(/.+)$; >> >> include fastcgi_params; >> fastcgi_index index.php; >> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; >> fastcgi_pass backend; >> } >> >> >> which will work with just about every PHP based CMS out there... well enough to get you started.... >> >> >> Steve >> >> -- >> Steve Holdoway BSc(Hons) MIITPhttp://www.greengecko.co.nz >> Linkedin: http://www.linkedin.com/in/steveholdoway >> Skype: sholdowa >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: access.log Type: application/octet-stream Size: 1687 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error.log Type: application/octet-stream Size: 111 bytes Desc: not available URL: From steve at greengecko.co.nz Tue Jun 9 02:02:50 2015 From: steve at greengecko.co.nz (steve) Date: Tue, 09 Jun 2015 14:02:50 +1200 Subject: problem : nginx with magento In-Reply-To: References: <557631D4.8060800@greengecko.co.nz> Message-ID: <5576494A.3010405@greengecko.co.nz> Hi On 09/06/15 13:46, Prameswar Lal wrote: > Hi steve , > i have checked with your setting also . its not working . > > > On Tue, Jun 9, 2015 at 5:52 AM, steve > wrote: > > Hi, > > On 09/06/15 01:34, Prameswar Lal wrote: >> hi i am using nginx with magento which use fastCGI . >> whenever i type in url http://example.com/index.php then >> index.php start downloading . >> can anyone help me ? >> > location ~ \.php$ { > > fastcqi_index index.php; > fastcgi_pass127.0.0.1:9000 ; > fastcgi_pass unix:/var/run/php5-fpm.sock; > include fastcgi_params; > fastcgi_intercept_errors on; > # By all means use a different server for the fcgi processes if you need to > > fastcgi_split_path_info ^(.+.php)(.*)$; > > } > > You have 2 fastcgi_pass lines, one to127.0.0.1:9000 and one to unix:/var/run/php5-fpm.sock > > Only one of these should be there, the correct one will be defined in your php-fpm configuration, which isn't shown. > > > I use a backend predefined in nginx.conf to identify the php-fpm pool to use. The 2 relevant location blocks in a base install of mine... > > > location / { > try_files $uri $uri/ /index.php?$args; > } > > location ~ \.php$ { > try_files $uri =404; > > fastcgi_split_path_info ^(.+\.php)(/.+)$; > > include fastcgi_params; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > fastcgi_pass backend; > } > > > which will work with just about every PHP based CMS out there... 
well enough to get you started.... > > > Steve > > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Linkedin:http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > If it's just downloading the code, then it's not being passed to your php-fastcgi processes. I assume you're not now getting a 'bad gateway' error, so it's never being asked to do so. Are you sure this is the config file you're actually processing, and there's no default one taking precedence ( note the format of the listen makes a difference )??? -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Tue Jun 9 02:09:53 2015 From: steve at greengecko.co.nz (steve) Date: Tue, 09 Jun 2015 14:09:53 +1200 Subject: problem : nginx with magento In-Reply-To: References: <557631D4.8060800@greengecko.co.nz> Message-ID: <55764AF1.8020309@greengecko.co.nz> On 09/06/15 14:00, Prameswar Lal wrote: > Hi , > i am sending current access log after changing according to steve block . > > Not problem with index.php . it has problem with all page of .php in > magento tool . > i am using magento tool so this is file of magento . not written by me > . magento use fastcgi programing in it . > i have tested this magento with apache it works fine. > i have tested on different system centos , ubuntu fresh machine. > > output of curl is . > root at example:/var/log/nginx# curl -I http://example.com/index.php > HTTP/1.1 200 OK > Server: nginx/1.8.0 > Date: Tue, 09 Jun 2015 01:59:02 GMT > Content-Type: application/octet-stream > Content-Length: 2642 > Last-Modified: Wed, 14 May 2014 16:03:36 GMT > Connection: keep-alive > ETag: "537393d8-a52" > Accept-Ranges: bytes > > > > > > > On Tue, Jun 9, 2015 at 7:16 AM, Prameswar Lal > > > wrote: > > Hi steve , > i have checked with your setting also . its not working . 
> > > On Tue, Jun 9, 2015 at 5:52 AM, steve > wrote: > > Hi, > > On 09/06/15 01:34, Prameswar Lal wrote: >> hi i am using nginx with magento which use fastCGI . >> whenever i type in url http://example.com/index.php then >> index.php start downloading . >> can anyone help me ? >> > location ~ \.php$ { > > fastcqi_index index.php; > fastcgi_pass127.0.0.1:9000 ; > fastcgi_pass unix:/var/run/php5-fpm.sock; > include fastcgi_params; > fastcgi_intercept_errors on; > # By all means use a different server for the fcgi processes if you need to > > fastcgi_split_path_info ^(.+.php)(.*)$; > > } > > You have 2 fastcgi_pass lines, one to127.0.0.1:9000 and one to unix:/var/run/php5-fpm.sock > > Only one of these should be there, the correct one will be defined in your php-fpm configuration, which isn't shown. > > > I use a backend predefined in nginx.conf to identify the php-fpm pool to use. The 2 relevant location blocks in a base install of mine... > > > location / { > try_files $uri $uri/ /index.php?$args; > } > > location ~ \.php$ { > try_files $uri =404; > > fastcgi_split_path_info ^(.+\.php)(/.+)$; > > include fastcgi_params; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > fastcgi_pass backend; > } > > > which will work with just about every PHP based CMS out there... well enough to get you started.... > > > Steve > > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Linkedin:http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx That confirms that you're not attempting to process your php. 
Try removing all host config files apart from this one ( in conf.d and sites-enabled ) -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jun 9 04:29:25 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Tue, 09 Jun 2015 00:29:25 -0400 Subject: fastcgi_pass / error page /error code in HTTP rsp In-Reply-To: <2b20999f54fd9ae27bfbcfdf72e2c1a8.NginxMailingListEnglish@forum.nginx.org> References: <20150608192443.GI26357@mdounin.ru> <2b20999f54fd9ae27bfbcfdf72e2c1a8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8c2999386cf38f69116d19fec9a13173.NginxMailingListEnglish@forum.nginx.org> I also tried fastcgi_pass_header Status; along with the fastcgi_intercept_errors directive. NGINX still returned 200 OK instead of the 400 sent by the fastcgi server. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259436,259464#msg-259464 From francis at daoine.org Tue Jun 9 07:35:31 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 9 Jun 2015 08:35:31 +0100 Subject: fastcgi_pass / error page /error code in HTTP rsp In-Reply-To: <2b20999f54fd9ae27bfbcfdf72e2c1a8.NginxMailingListEnglish@forum.nginx.org> References: <20150608192443.GI26357@mdounin.ru> <2b20999f54fd9ae27bfbcfdf72e2c1a8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150609073531.GC2957@daoine.org> On Mon, Jun 08, 2015 at 08:13:03PM -0400, nginxuser100 wrote: Hi there, > Hi, I expected fastcgi_intercept_errors to return a static error page AND to > have include the HTTP error code (e.g. 400) in the HTTP response header. That's what it does, unless you break it by doing unnecessary things in your error_page directive. > From what I see, it returns the static error page but with 200 OK. error_page 400 = /bad_request.html; What do you think the "=" means?
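(As a hint, the two forms side by side — the paths are the ones from the earlier mails, assuming /bad_request.html resolves to a static file:)

```nginx
# With "=" and no explicit code, nginx takes the status from whatever
# the named URI returns -- 200 for a static file, so the client sees 200:
error_page 400 = /bad_request.html;

# Without "=", the original 400 status is kept on the error page:
error_page 400 /bad_request.html;
```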
What does the documentation say that the "=" means? (http://nginx.org/r/error_page) Can you think of a way of rephrasing the documentation so that it would have been clearer to you, so that the next person will not have the same problem? What happens when you leave out the "=" and reload the config? > Is it the expected behavior? It is what you have configured nginx to do, so yes. > If yes, is there a way to have nginx return the error page and the error > code to the client? Configure it according to the "common" examples in the documentation, not according to the special-case examples. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Jun 9 07:44:45 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 9 Jun 2015 08:44:45 +0100 Subject: problem : nginx with magento In-Reply-To: References: <557631D4.8060800@greengecko.co.nz> Message-ID: <20150609074445.GD2957@daoine.org> On Tue, Jun 09, 2015 at 07:30:13AM +0530, Prameswar Lal wrote: Hi there, > i am sending current access log after changing according to steve block . The config you showed does not write to access.log. If the request you make does write to access.log, then the request you make is not being handled by the config that you showed. Can you start with a small complete config that clearly shows the problem you are reporting? f -- Francis Daly francis at daoine.org From prameswar.lal at bizruntime.com Tue Jun 9 11:46:22 2015 From: prameswar.lal at bizruntime.com (Prameswar Lal) Date: Tue, 9 Jun 2015 17:16:22 +0530 Subject: problem : nginx with magento In-Reply-To: <20150609074445.GD2957@daoine.org> References: <557631D4.8060800@greengecko.co.nz> <20150609074445.GD2957@daoine.org> Message-ID: yes , i had default file in /etc/nginx/sites-available/. that i have remove so , now i getting "unable to connect " . 
curl -I http://example.com/index.php curl: (7) Failed to connect to example.com port 80: Connection refused in my example.com.conf file there is one line " access_log /var/log/nginx/nginx-access.log; " but i am not getting log here with name nginx-access.log . i am attaching my error.log file. and access.log is empty here . please see in my error.log. On Tue, Jun 9, 2015 at 1:14 PM, Francis Daly wrote: > On Tue, Jun 09, 2015 at 07:30:13AM +0530, Prameswar Lal wrote: > > Hi there, > > > i am sending current access log after changing according to steve block > . > > The config you showed does not write to access.log. > > If the request you make does write to access.log, then the request you > make is not being handled by the config that you showed. > > Can you start with a small complete config that clearly shows the problem > you are reporting? > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error.log Type: application/octet-stream Size: 666 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx.conf Type: application/octet-stream Size: 1058 bytes Desc: not available URL: From steve at greengecko.co.nz Tue Jun 9 21:24:50 2015 From: steve at greengecko.co.nz (steve) Date: Wed, 10 Jun 2015 09:24:50 +1200 Subject: problem : nginx with magento In-Reply-To: References: <557631D4.8060800@greengecko.co.nz> <20150609074445.GD2957@daoine.org> Message-ID: <557759A2.4040506@greengecko.co.nz> Hi, On 09/06/15 23:46, Prameswar Lal wrote: > yes , i had default file in /etc/nginx/sites-available/. > that i have remove so , now i getting "unable to connect " . 
> > curl -I http://example.com/index.php > curl: (7) Failed to connect to example.com port > 80: Connection refused > > in my example.com.conf file there is one line " access_log > /var/log/nginx/nginx-access.log; " but i am not getting log here with > name nginx-access.log . > > i am attaching my error.log file. and access.log is empty here . > > please see in my error.log. > > > > On Tue, Jun 9, 2015 at 1:14 PM, Francis Daly > wrote: > > On Tue, Jun 09, 2015 at 07:30:13AM +0530, Prameswar Lal wrote: > > Hi there, > > > i am sending current access log after changing according to steve block . > > The config you showed does not write to access.log. > > If the request you make does write to access.log, then the request you > make is not being handled by the config that you showed. > > Can you start with a small complete config that clearly shows the > problem > you are reporting? > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Look in your nginx.conf. include /etc/nginx/conf.d/*.conf; *THIS* is where your config file should reside. -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jun 10 08:00:40 2015 From: nginx-forum at nginx.us (cobain86) Date: Wed, 10 Jun 2015 04:00:40 -0400 Subject: nginx upstream closed when spdy is active Message-ID: <6a11933b4b96471483f21ab3ac5cb8ca.NginxMailingListEnglish@forum.nginx.org> Hi there, we have a problem downloading files from our website while spdy is enabled: the download won't start, and after a few seconds it runs into a timeout.
If I disable spdy, the download works. What could cause that? The log files show me the following error: 2015/06/10 09:26:14 [error] 2538#0: *6497287 upstream prematurely closed connection while reading upstream, client: x.x.x.x, server: www.x.com, request: "GET /xxx/report.do?action=ApprovedProductsCSV&xxxx&dateFrom=14.04.2015 HTTP/1.1", upstream: "http://xxxx:xxxx/xxx/report.do?action=ApprovedProductsCSV&xxx&dateFrom=14.04.2015", host: "www.xxx.com", referrer: "https://www.xxx.com/xxx/report.do?action=ApprovedProducts" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259482,259482#msg-259482 From black.fledermaus at arcor.de Wed Jun 10 09:46:49 2015 From: black.fledermaus at arcor.de (basti) Date: Wed, 10 Jun 2015 11:46:49 +0200 Subject: SO_REUSEPORT Message-ID: <55780789.5040904@arcor.de> Hello, I have rebuilt nginx 1.9.1 from source to use SO_REUSEPORT on my wheezy install with kernel 3.16 (from backports). (The packages from http://nginx.org/packages/mainline/debian/ do not include SO_REUSEPORT.) Some errors are still present: [emerg] 19351#19351: duplicate listen options for 0.0.0.0:80 in ... Is there a way to use "reuseport" for multiple locations? How can I test whether it works for a specific location? Is a header sent, or something else? Or is the only way to compare a "stress test" with something like siege? Regards, Basti From nginx-forum at nginx.us Wed Jun 10 10:06:22 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 10 Jun 2015 06:06:22 -0400 Subject: SO_REUSEPORT In-Reply-To: <55780789.5040904@arcor.de> References: <55780789.5040904@arcor.de> Message-ID: Try the latest 1.9.2; in 1.9.1 it was added but not working.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259486,259488#msg-259488 From luky-37 at hotmail.com Wed Jun 10 10:09:57 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 10 Jun 2015 12:09:57 +0200 Subject: SO_REUSEPORT In-Reply-To: <55780789.5040904@arcor.de> References: <55780789.5040904@arcor.de> Message-ID: > Some errors are still present: > > [emerg] 19351#19351: duplicate listen options for 0.0.0.0:80 in ... > > Is there a way to use "reuseports" for multiple locations? You have to declare it once and only once. Please read: http://nginx.org/en/docs/http/ngx_http_core_module.html#listen Lukas From black.fledermaus at arcor.de Wed Jun 10 10:11:33 2015 From: black.fledermaus at arcor.de (basti) Date: Wed, 10 Jun 2015 12:11:33 +0200 Subject: SO_REUSEPORT In-Reply-To: References: <55780789.5040904@arcor.de> Message-ID: <55780D55.40404@arcor.de> Where can I find it? The latest tag I have found is 1.9.1. (http://trac.nginx.org/nginx/browser/nginx?order=name) On 10.06.2015 12:06, itpp2012 wrote: > Try the latest 1.9.2, in 191 is was added but not working. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259486,259488#msg-259488 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From maxim at nginx.com Wed Jun 10 10:40:08 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 10 Jun 2015 13:40:08 +0300 Subject: SO_REUSEPORT In-Reply-To: References: <55780789.5040904@arcor.de> Message-ID: <55781408.8040703@nginx.com> On 6/10/15 1:06 PM, itpp2012 wrote: > Try the latest 1.9.2, in 191 is was added but not working. > The http part is fully functional in 1.9.1. The author has a different issue and Lukas Tribus already answered.
-- Maxim Konovalov http://nginx.com From black.fledermaus at arcor.de Wed Jun 10 10:52:00 2015 From: black.fledermaus at arcor.de (basti) Date: Wed, 10 Jun 2015 12:52:00 +0200 Subject: SO_REUSEPORT In-Reply-To: <55781408.8040703@nginx.com> References: <55780789.5040904@arcor.de> <55781408.8040703@nginx.com> Message-ID: <557816D0.50709@arcor.de> Thanks for your answer Maxim On 10.06.2015 12:40, Maxim Konovalov wrote: > On 6/10/15 1:06 PM, itpp2012 wrote: >> Try the latest 1.9.2, in 191 is was added but not working. >> > http part is fully functional in 1.9.1. The author has different > issue and Lukas Tribus already answered. > From mdounin at mdounin.ru Wed Jun 10 12:47:37 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Jun 2015 15:47:37 +0300 Subject: SO_REUSEPORT In-Reply-To: References: <55780789.5040904@arcor.de> Message-ID: <20150610124737.GW26357@mdounin.ru> Hello! On Wed, Jun 10, 2015 at 06:06:22AM -0400, itpp2012 wrote: > Try the latest 1.9.2, in 191 is was added but not working. This is not true. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Jun 10 12:53:16 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Jun 2015 15:53:16 +0300 Subject: SO_REUSEPORT In-Reply-To: <55780789.5040904@arcor.de> References: <55780789.5040904@arcor.de> Message-ID: <20150610125316.GX26357@mdounin.ru> Hello! On Wed, Jun 10, 2015 at 11:46:49AM +0200, basti wrote: > Hello, > Ihave rebuild nginx 1.9.1 from source to use SO_REUSEPORT on my wheezy > install with kernel 3.16 (from backports). > (packages from http://nginx.org/packages/mainline/debian/has not include > SO_REUSEPORT) > > Some errors are still present: > > [emerg] 19351#19351: duplicate listen options for 0.0.0.0:80 in ... > > Is there a way to use "reuseports" for multiple locations? > How can I test if it works for a special location? > Is there a header send or something else? Or is the only way to compare > "stress test" like siege? 
Much like all other listening socket options, "reuseport" has to be specified only once, usually in a default server for the listen socket in question. That is, if you have many servers listening on port 80, you should write something like: server { listen 80 reuseport; server_name default.example.com; ... } server { listen 80; # no options here server_name virtual.example.com; ... } To check if reuseport actually works, just check how many listening sockets were created - normally there will be just one, but with reuseport you'll see multiple listening sockets, one for each of the nginx worker processes. Something like "ss -nlt" should show the listening sockets on Linux. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Jun 10 12:59:00 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 10 Jun 2015 08:59:00 -0400 Subject: SO_REUSEPORT In-Reply-To: <20150610124737.GW26357@mdounin.ru> References: <20150610124737.GW26357@mdounin.ru> Message-ID: <60a6a4aa9ef24498955f2fc39ea9b289.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Wed, Jun 10, 2015 at 06:06:22AM -0400, itpp2012 wrote: > > > Try the latest 1.9.2, in 191 is was added but not working. > > This is not true. I stand corrected; I mistakenly took http://hg.nginx.org/nginx/rev/f654addf0eea for an http issue. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259486,259508#msg-259508 From vbart at nginx.com Wed Jun 10 17:20:18 2015 From: vbart at nginx.com (Valentin V.
Bartenev) Date: Wed, 10 Jun 2015 20:20:18 +0300 Subject: nginx upstream closed when spdy is active In-Reply-To: <6a11933b4b96471483f21ab3ac5cb8ca.NginxMailingListEnglish@forum.nginx.org> References: <6a11933b4b96471483f21ab3ac5cb8ca.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9562546.WK6UfRjOte@vbart-workstation> On Wednesday 10 June 2015 04:00:40 cobain86 wrote: > hi there > we have a problem while downloading files from our website while spdy is > enabled. > > the download wont start and after a few seconds is runs to a timeoute. > if i disable spdy, the download is working. > > what could that be. > The log files show me the following error > > 2015/06/10 09:26:14 [error] 2538#0: *6497287 upstream prematurely closed > connection while reading upstream, client: x.x.x.x, server: www.x.com, > request: "GET > /xxx/report.do?action=ApprovedProductsCSV&xxxx&dateFrom=14.04.2015 > HTTP/1.1", upstream: > "http://xxxx:xxxx/xxx/report.do?action=ApprovedProductsCSV&xxx&dateFrom=14.04.2015", > host: "www.xxx.com", referrer: > "https://www.xxx.com/xxx/report.do?action=ApprovedProducts" > [..] You should check your backend logs to find the reason why it closed the connection. wbr, Valentin V. Bartenev From tfransosi at gmail.com Wed Jun 10 20:00:56 2015 From: tfransosi at gmail.com (Thiago Farina) Date: Wed, 10 Jun 2015 17:00:56 -0300 Subject: handling subdirectories location In-Reply-To: <20150609005940.GB2957@daoine.org> References: <20150608224635.GX2957@daoine.org> <20150608232644.GA2957@daoine.org> <20150609005940.GB2957@daoine.org> Message-ID: Hi Francis, On Mon, Jun 8, 2015 at 9:59 PM, Francis Daly wrote: > On Mon, Jun 08, 2015 at 08:53:45PM -0300, Thiago Farina wrote: >> On Mon, Jun 8, 2015 at 8:26 PM, Francis Daly wrote: >> > On Mon, Jun 08, 2015 at 08:08:10PM -0300, Thiago Farina wrote: > > Hi there, > OK. Based on your response I will hold off on what I was trying to do for now. It seems it is not the way I should go.
-- Thiago Farina From francis at daoine.org Wed Jun 10 21:37:38 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 10 Jun 2015 22:37:38 +0100 Subject: handling subdirectories location In-Reply-To: References: <20150608224635.GX2957@daoine.org> <20150608232644.GA2957@daoine.org> <20150609005940.GB2957@daoine.org> Message-ID: <20150610213738.GE2957@daoine.org> On Wed, Jun 10, 2015 at 05:00:56PM -0300, Thiago Farina wrote: > On Mon, Jun 8, 2015 at 9:59 PM, Francis Daly wrote: Hi there, > OK. Based on your response I will hold on on what I was trying to do for now. > > It seems it is not the way I should go. Ok. I was intrigued to see what was happening, so I fetched the gocart zip file, extracted it to /usr/local/nginx, did mv gocart-GoCart-324eccb gc and looked in the "gc" directory. There seems to be one main php file which is intended to handle everything, so I added the following to nginx.conf: == include fastcgi.conf; index index.php index.html; location ^~ /gc/ { root .; error_page 404 = /gc/index.php; location = /gc/index.php { fastcgi_pass unix:php.sock; } } == reloaded nginx.conf, then pointed my browser at http://localhost:8000/gc/ I see a page which asks for database details in order to do an installation; I give the details and I see that 29 tables are added in my database before I get the http response content of """ Fatal error: Call to undefined function locale_get_default() in /usr/local/nginx/gc/gocart/migrations/003_gocart2_3.php on line 252 """ Rather than solve it properly, I edit that line of that file to hard-code a valid locale, and go back to the /gc/ url. It invites me to login and says my cart is empty. The "login" page invites me to log in, or register, and has a "forgot password" link. All links seem to do something useful. 
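For comparison, the front-controller routing that the `error_page 404 = /gc/index.php` line above achieves is more often written with `try_files`. A minimal equivalent sketch, reusing the same assumed `/gc/` prefix, `fastcgi.conf` include, and `php.sock` socket as the config above (not tested against gocart itself):

```nginx
location ^~ /gc/ {
    root .;
    # hand any URI that does not match a file or directory on disk
    # to the front controller, preserving the original query string
    try_files $uri $uri/ /gc/index.php?$args;

    location = /gc/index.php {
        include fastcgi.conf;
        fastcgi_pass unix:php.sock;
    }
}
```

One practical difference: with try_files the fallback is an internal rewrite that never raises a 404 first, so error_page stays free for real error handling.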
Now there are also ".htaccess" files below the directory "gc", which say "Deny from all"; so presumably this nginx config will allow some files to be fetched that the gocart authors would prefer not be fetched -- extra config will be needed to cater for those. But, unless I'm missing something, it looks like it is working, to the extent that "things that should succeed do succeed". If you're still willing to test, does that work for you? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Jun 11 06:07:43 2015 From: nginx-forum at nginx.us (cobain86) Date: Thu, 11 Jun 2015 02:07:43 -0400 Subject: nginx upstream closed when spdy is active In-Reply-To: <9562546.WK6UfRjOte@vbart-workstation> References: <9562546.WK6UfRjOte@vbart-workstation> Message-ID: <1af7806b64e5c800a2f485da9862b2a1.NginxMailingListEnglish@forum.nginx.org> We have found the problem: our backend system was sending a wrong Content-Length header. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259482,259521#msg-259521 From nginx-forum at nginx.us Thu Jun 11 06:29:26 2015 From: nginx-forum at nginx.us (huakaibird) Date: Thu, 11 Jun 2015 02:29:26 -0400 Subject: nginx plus with ssl on TCP load balance not work Message-ID: <4615911b8ef3c8d01341cbb534a6184f.NginxMailingListEnglish@forum.nginx.org> Hi, I'm using nginx plus with SSL on TCP load balancing, configured as in the documentation, but it does not work. (None of the IPs below are real.) I have web servers behind it; I want SSL offloading, and I chose TCP load balancing: listen on 443 and proxy to the web servers' port 80. Page access always reports ERR_TOO_MANY_REDIRECTS. Error log: 2015/06/11 03:00:32 [error] 8362#0: *361 upstream timed out (110: Connection timed out) while connecting to upstream, client: 10.0.0.1, server: 0.0.0.0:443, upstream: "10.0.0.2:443", bytes from/to client:656/0, bytes from/to upstream:0/0 10.0.0.2 is the nginx server's own IP; why is it being used as the upstream?
The configuration is like this (real IPs removed): server { listen 80 so_keepalive=30m::10; proxy_pass backend; proxy_upstream_buffer 2048k; proxy_downstream_buffer 2048k; } server { listen 443 ssl; proxy_pass backend; #proxy_upstream_buffer 2048k; #proxy_downstream_buffer 2048k; ssl_certificate ssl/chained.crt; #ssl_certificate ssl/4582cfef411bb.crt; ssl_certificate_key ssl/zoomus20140410.key; #ssl_protocols TLSv1 TLSv1.1 TLSv1.2; #ssl_ciphers HIGH:!aNULL:!MD5; ssl_handshake_timeout 3s; #ssl_session_cache shared:SSL:20m; #ssl_session_timeout 4h; } upstream backend { server *.*.*.*:80; server *.*.*.*:80; } nginx -v nginx version: nginx/1.7.11 (nginx-plus-r6-p1) And I'm using Amazon Linux: uname -a Linux ip-*.*.*.* 3.14.35-28.38.amzn1.x86_64 #1 SMP Wed Mar 11 22:50:37 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux BTW, how do I set an access log for TCP? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259522,259522#msg-259522 From arut at nginx.com Thu Jun 11 07:45:08 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 11 Jun 2015 10:45:08 +0300 Subject: nginx plus with ssl on TCP load balance not work In-Reply-To: <4615911b8ef3c8d01341cbb534a6184f.NginxMailingListEnglish@forum.nginx.org> References: <4615911b8ef3c8d01341cbb534a6184f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <79CCB287-2E05-4123-946B-677FBEF02E63@nginx.com> Hi, Could you provide the full config of the nginx/stream balancer? On 11 Jun 2015, at 09:29, huakaibird wrote: > Hi, > > I'm using nginx plus with ssl on TCP load balance, configured like the > documentation, but it not work. (All the IP below is not real-ip) > I have web servers behind, I want to use ssl offloading, and I choose TCP > load balance. listen on 443 and proxy to web server's 80. > > Page access always report ERR_TOO_MANY_REDIRECTS.
> [..] -- Roman Arutyunyan From smith.hua at zoom.us Thu Jun 11 07:49:13 2015 From: smith.hua at zoom.us (smith) Date: Thu, 11 Jun 2015 07:49:13 -0000 Subject: Re: nginx plus with ssl on TCP load balance not work In-Reply-To: <79CCB287-2E05-4123-946B-677FBEF02E63@nginx.com> References: <4615911b8ef3c8d01341cbb534a6184f.NginxMailingListEnglish@forum.nginx.org> <79CCB287-2E05-4123-946B-677FBEF02E63@nginx.com> Message-ID: <008e01d0a41b$1e5bcf70$5b136e50$@zoom.us> Nginx.conf: user nginx; worker_processes auto; worker_rlimit_nofile 65535; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { use epoll; worker_connections 65535; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } stream { include /etc/nginx/xxxx.d/*.conf; } And the content in the previous email is in xxxx.d/xxxx.conf. There is no file under /etc/nginx/conf.d. Thanks. -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Roman Arutyunyan Sent: 11 June 2015 7:45 To: nginx at nginx.org Subject: Re: nginx plus with ssl on TCP load balance not work Hi, Could you provide the full config of the nginx/stream balancer? On 11 Jun 2015, at 09:29, huakaibird wrote: > [..] -- Roman Arutyunyan _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From arut at nginx.com Thu Jun 11 08:25:13 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 11 Jun 2015 11:25:13 +0300 Subject: nginx plus with ssl on TCP load balance not work In-Reply-To: <008e01d0a41b$1e5bcf70$5b136e50$@zoom.us> References: <4615911b8ef3c8d01341cbb534a6184f.NginxMailingListEnglish@forum.nginx.org> <79CCB287-2E05-4123-946B-677FBEF02E63@nginx.com> <008e01d0a41b$1e5bcf70$5b136e50$@zoom.us> Message-ID: What about the 80 port of the stream balancer? Does it proxy the connection normally? PS: no access log is supported in the stream module. Connection information (addresses etc) is logged to the error log at the info loglevel. On 11 Jun 2015, at 10:49, smith wrote: > [..] -- Roman Arutyunyan From smith.hua at zoom.us Thu Jun 11 08:26:41 2015 From: smith.hua at zoom.us (smith) Date: Thu, 11 Jun 2015 08:26:41 -0000 Subject: Re: nginx plus with ssl on TCP load balance not work In-Reply-To: References: <4615911b8ef3c8d01341cbb534a6184f.NginxMailingListEnglish@forum.nginx.org> <79CCB287-2E05-4123-946B-677FBEF02E63@nginx.com> <008e01d0a41b$1e5bcf70$5b136e50$@zoom.us> Message-ID: <009301d0a420$59e9aee0$0dbd0ca0$@zoom.us> Port 80 is normal. -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Roman Arutyunyan Sent: 11 June 2015 8:25 To: nginx at nginx.org Subject: Re: nginx plus with ssl on TCP load balance not work What about the 80 port of the stream balancer? Does it proxy the connection normally? PS: no access log is supported in the stream module. Connection information (addresses etc) is logged to the error log at the info loglevel.
On 11 Jun 2015, at 10:49, smith wrote: > Nginx.conf: > > user nginx; > worker_processes auto; > worker_rlimit_nofile 65535; > > error_log /var/log/nginx/error.log warn; > pid /var/run/nginx.pid; > > > events { > use epoll; > worker_connections 65535; > } > > > http { > include /etc/nginx/mime.types; > default_type application/octet-stream; > > log_format main '$remote_addr - $remote_user [$time_local] "$request" > ' > '$status $body_bytes_sent "$http_referer" ' > '"$http_user_agent" "$http_x_forwarded_for"'; > > access_log /var/log/nginx/access.log main; > > sendfile on; > #tcp_nopush on; > > keepalive_timeout 65; > > #gzip on; > > include /etc/nginx/conf.d/*.conf; > } > > > stream { > > include /etc/nginx/xxxx.d/*.conf; > } > > And the content in previous email is in xxxx.d/xxxx.conf > > There is no file under /etc/nginx/conf.d > > > Thanks. > > > -----????----- > ???: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] ?? Roman > Arutyunyan > ????: 2015?6?11? 7:45 > ???: nginx at nginx.org > ??: Re: nginx plus with ssl on TCP load balance not work > > Hi, > > Could you provide the full config of the nginx/stream balancer? > > On 11 Jun 2015, at 09:29, huakaibird wrote: > >> Hi, >> >> I?m using nginx plus with ssl on TCP load balance, Configured like >> the documentation, but it not work. (All the IP below is not >> real-ip) I have web servers behind, I want to use ssl offloading, and >> I choose TCP load balance. listen on 443 and proxy to web server's 80. >> >> Page access always report ERR_TOO_MANY_REDIRECTS. >> >> Error log >> 2015/06/11 03:00:32 [error] 8362#0: *361 upstream timed out (110: >> Connection timed out) while connecting to upstream, client: 10.0.0.1, > server: >> 0.0.0.0:443, upstream: "10.0.0.2:443", bytes from/to client:656/0, >> bytes from/to upstream:0/0 >> >> 10.0.0.2 this ip is the nginx ip, while it is used as upstream? 
>> >> The configuration is like this, remove the real ip >> >> server { >> listen 80 so_keepalive=30m::10; >> proxy_pass backend; >> proxy_upstream_buffer 2048k; >> proxy_downstream_buffer 2048k; >> >> } >> >> server { >> listen 443 ssl; >> proxy_pass backend; >> #proxy_upstream_buffer 2048k; >> #proxy_downstream_buffer 2048k; >> ssl_certificate ssl/chained.crt; >> #ssl_certificate ssl/4582cfef411bb.crt; >> ssl_certificate_key ssl/zoomus20140410.key; >> #ssl_protocols TLSv1 TLSv1.1 TLSv1.2; >> #ssl_ciphers HIGH:!aNULL:!MD5; >> ssl_handshake_timeout 3s; >> #ssl_session_cache shared:SSL:20m; >> #ssl_session_timeout 4h; >> >> } >> >> >> upstream backend { >> server *.*.*.*:80; >> server *.*.*.*:80; >> } >> >> >> >> nginx -v >> nginx version: nginx/1.7.11 (nginx-plus-r6-p1) >> >> And I?m using amazon linux >> uname -a >> Linux ip-*.*.*.* 3.14.35-28.38.amzn1.x86_64 #1 SMP Wed Mar 11 >> 22:50:37 UTC >> 2015 x86_64 x86_64 x86_64 GNU/Linux >> >> >> BTW, tcp how to set access log? >> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,259522,259522#msg-259522 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Roman Arutyunyan > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From smith.hua at zoom.us Thu Jun 11 08:28:16 2015 From: smith.hua at zoom.us (smith) Date: Thu, 11 Jun 2015 08:28:16 -0000 Subject: =?UTF-8?Q?=E7=AD=94=E5=A4=8D=3A_nginx_plus_with_ssl_on_TCP_load_balance_no?= =?UTF-8?Q?t_work?= In-Reply-To: References: 
<4615911b8ef3c8d01341cbb534a6184f.NginxMailingListEnglish@forum.nginx.org> <79CCB287-2E05-4123-946B-677FBEF02E63@nginx.com> <008e01d0a41b$1e5bcf70$5b136e50$@zoom.us> Message-ID: <009801d0a420$92e3b650$b8ab22f0$@zoom.us> The 80 is normal, And I tried use http ssl, also works. Don't know Why TCP not work. -----????----- ???: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] ?? Roman Arutyunyan ????: 2015?6?11? 8:25 ???: nginx at nginx.org ??: Re: nginx plus with ssl on TCP load balance not work What about the 80 port of the stream balancer? Does it proxy the connection normally? PS: no access log is supported in the stream module. Connection information (addresses etc) is logged to error log with the info loglevel. On 11 Jun 2015, at 10:49, smith wrote: > Nginx.conf: > > user nginx; > worker_processes auto; > worker_rlimit_nofile 65535; > > error_log /var/log/nginx/error.log warn; > pid /var/run/nginx.pid; > > > events { > use epoll; > worker_connections 65535; > } > > > http { > include /etc/nginx/mime.types; > default_type application/octet-stream; > > log_format main '$remote_addr - $remote_user [$time_local] "$request" > ' > '$status $body_bytes_sent "$http_referer" ' > '"$http_user_agent" "$http_x_forwarded_for"'; > > access_log /var/log/nginx/access.log main; > > sendfile on; > #tcp_nopush on; > > keepalive_timeout 65; > > #gzip on; > > include /etc/nginx/conf.d/*.conf; > } > > > stream { > > include /etc/nginx/xxxx.d/*.conf; > } > > And the content in previous email is in xxxx.d/xxxx.conf > > There is no file under /etc/nginx/conf.d > > > Thanks. > > > -----????----- > ???: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] ?? Roman > Arutyunyan > ????: 2015?6?11? 7:45 > ???: nginx at nginx.org > ??: Re: nginx plus with ssl on TCP load balance not work > > Hi, > > Could you provide the full config of the nginx/stream balancer? 
> On 11 Jun 2015, at 09:29, huakaibird wrote:
>
>> Hi,
>>
>> I'm using nginx plus with SSL on TCP load balancing, configured like the
>> documentation, but it does not work. (None of the IPs below are the real
>> ones.) I have web servers behind it, I want to use SSL offloading, and I
>> chose TCP load balancing: listen on 443 and proxy to the web servers' port 80.
>>
>> Page access always reports ERR_TOO_MANY_REDIRECTS.
>>
>> Error log:
>> 2015/06/11 03:00:32 [error] 8362#0: *361 upstream timed out (110:
>> Connection timed out) while connecting to upstream, client: 10.0.0.1, server:
>> 0.0.0.0:443, upstream: "10.0.0.2:443", bytes from/to client:656/0,
>> bytes from/to upstream:0/0
>>
>> 10.0.0.2 is the nginx IP itself -- why is it used as an upstream?
>>
>> The configuration is like this (real IPs removed):
>>
>> server {
>>     listen 80 so_keepalive=30m::10;
>>     proxy_pass backend;
>>     proxy_upstream_buffer 2048k;
>>     proxy_downstream_buffer 2048k;
>> }
>>
>> server {
>>     listen 443 ssl;
>>     proxy_pass backend;
>>     #proxy_upstream_buffer 2048k;
>>     #proxy_downstream_buffer 2048k;
>>     ssl_certificate ssl/chained.crt;
>>     #ssl_certificate ssl/4582cfef411bb.crt;
>>     ssl_certificate_key ssl/zoomus20140410.key;
>>     #ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
>>     #ssl_ciphers HIGH:!aNULL:!MD5;
>>     ssl_handshake_timeout 3s;
>>     #ssl_session_cache shared:SSL:20m;
>>     #ssl_session_timeout 4h;
>> }
>>
>> upstream backend {
>>     server *.*.*.*:80;
>>     server *.*.*.*:80;
>> }
>>
>> nginx -v
>> nginx version: nginx/1.7.11 (nginx-plus-r6-p1)
>>
>> And I'm using Amazon Linux:
>> uname -a
>> Linux ip-*.*.*.* 3.14.35-28.38.amzn1.x86_64 #1 SMP Wed Mar 11 22:50:37 UTC
>> 2015 x86_64 x86_64 x86_64 GNU/Linux
>>
>> BTW, tcp how to set access log?
>> Posted at Nginx Forum:
>> http://forum.nginx.org/read.php?2,259522,259522#msg-259522

--
Roman Arutyunyan

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From smith.hua at zoom.us Thu Jun 11 08:34:44 2015
From: smith.hua at zoom.us (smith)
Date: Thu, 11 Jun 2015 08:34:44 -0000
Subject: Re: nginx plus with ssl on TCP load balance not work
In-Reply-To: <009901d0a420$937fcf40$ba7f6dc0$@zoom.us>
References: <4615911b8ef3c8d01341cbb534a6184f.NginxMailingListEnglish@forum.nginx.org> <79CCB287-2E05-4123-946B-677FBEF02E63@nginx.com> <008e01d0a41b$1e5bcf70$5b136e50$@zoom.us> <009901d0a420$937fcf40$ba7f6dc0$@zoom.us>
Message-ID: <009d01d0a421$7a478d50$6ed6a7f0$@zoom.us>

When I tried HTTP with SSL, I found I needed to set proxy_set_header X-Forwarded-Proto $scheme; in the server block, or it would also run into ERR_TOO_MANY_REDIRECTS.

Does TCP have a similar setting?

-----Original Message-----
From: smith [mailto:smith.hua at zoom.us]
Sent: 11 June 2015 8:28
To: nginx at nginx.org
Subject: Re: nginx plus with ssl on TCP load balance not work

Port 80 works normally, and HTTP with SSL also works when I try it. I don't know why the TCP (stream) setup does not work.

-----Original Message-----
From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Roman Arutyunyan
Sent: 11 June 2015 8:25
To: nginx at nginx.org
Subject: Re: nginx plus with ssl on TCP load balance not work

What about the 80 port of the stream balancer? Does it proxy the connection normally?
PS: no access log is supported in the stream module. Connection information (addresses etc.) is logged to the error log at the info level.

On 11 Jun 2015, at 10:49, smith wrote:

> [quoted nginx.conf and original problem report trimmed; they appear in full earlier in this thread]

--
Roman Arutyunyan

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From arut at nginx.com Thu Jun 11 08:43:10 2015
From: arut at nginx.com (Roman Arutyunyan)
Date: Thu, 11 Jun 2015 11:43:10 +0300
Subject: nginx plus with ssl on TCP load balance not work
In-Reply-To: <009d01d0a421$7a478d50$6ed6a7f0$@zoom.us>
References: <4615911b8ef3c8d01341cbb534a6184f.NginxMailingListEnglish@forum.nginx.org> <79CCB287-2E05-4123-946B-677FBEF02E63@nginx.com> <008e01d0a41b$1e5bcf70$5b136e50$@zoom.us> <009901d0a420$937fcf40$ba7f6dc0$@zoom.us> <009d01d0a421$7a478d50$6ed6a7f0$@zoom.us>
Message-ID:

The stream proxy has no idea what the underlying protocol is. It cannot change anything in it, such as HTTP headers.

On 11 Jun 2015, at 11:34, smith wrote:

> [quoted message trimmed]

--
Roman Arutyunyan

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From smith.hua at zoom.us Thu Jun 11 08:58:34 2015
From: smith.hua at zoom.us (smith)
Date: Thu, 11 Jun 2015 08:58:34 -0000
Subject: Re: nginx plus with ssl on TCP load balance not work
In-Reply-To:
References: <4615911b8ef3c8d01341cbb534a6184f.NginxMailingListEnglish@forum.nginx.org> <79CCB287-2E05-4123-946B-677FBEF02E63@nginx.com> <008e01d0a41b$1e5bcf70$5b136e50$@zoom.us> <009901d0a420$937fcf40$ba7f6dc0$@zoom.us> <009d01d0a421$7a478d50$6ed6a7f0$@zoom.us>
Message-ID: <00a801d0a424$cebbb390$6c331ab0$@zoom.us>

So it's not supported?

-----Original Message-----
From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Roman Arutyunyan
Sent: 11 June 2015 8:43
To: nginx at nginx.org
Subject: Re: nginx plus with ssl on TCP load balance not work

Stream proxy has no idea what the underlying protocol is. It cannot change anything in it like http headers etc.
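[Editor's note: Roman's point above explains why smith's X-Forwarded-Proto fix cannot be applied in stream{}, which forwards raw bytes. A minimal sketch of doing the same TLS offloading with the http module instead, where proxy_set_header is available; the backend addresses are placeholders, not taken from the thread, and the certificate paths simply reuse the ones quoted earlier:]

```nginx
http {
    upstream backend {
        server 10.0.0.10:80;   # placeholder plaintext backends
        server 10.0.0.11:80;
    }

    server {
        listen 443 ssl;
        ssl_certificate     ssl/chained.crt;        # paths as in the quoted config
        ssl_certificate_key ssl/zoomus20140410.key;

        location / {
            proxy_pass http://backend;
            # tell the backend the original request was HTTPS,
            # so it stops issuing the redirect loop smith observed
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```

[With stream{}, by contrast, the backend only ever sees an opaque TCP stream, so the redirect loop has to be solved either this way or in the backend application itself.]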
On 11 Jun 2015, at 11:34, smith wrote:

> When I tried HTTP with SSL, I found I needed to set proxy_set_header X-Forwarded-Proto $scheme; in the server block, or it would also run into ERR_TOO_MANY_REDIRECTS.
>
> Does TCP have a similar setting?
>
> [earlier quoted messages trimmed]

--
Roman Arutyunyan

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From smith.hua at zoom.us Thu Jun 11 09:03:55 2015
From: smith.hua at zoom.us (smith)
Date: Thu, 11 Jun 2015 09:03:55 -0000
Subject: Re: nginx plus with ssl on TCP load balance not work
References: <4615911b8ef3c8d01341cbb534a6184f.NginxMailingListEnglish@forum.nginx.org> <79CCB287-2E05-4123-946B-677FBEF02E63@nginx.com> <008e01d0a41b$1e5bcf70$5b136e50$@zoom.us> <009901d0a420$937fcf40$ba7f6dc0$@zoom.us>
Message-ID: <00aa01d0a425$8e168490$aa438db0$@zoom.us>

With info level log enabled.
Found these:

Port 80's log:
2015/06/11 08:48:18 [info] 12719#0: *449 client 10.0.0.1:1494 connected to 0.0.0.0:80
2015/06/11 08:48:18 [info] 12719#0: *449 proxy 172.31.5.228:17019 connected to 10.0.0.2:80
2015/06/11 08:48:19 [info] 12719#0: *449 upstream disconnected, bytes from/to client:689/7900, bytes from/to upstream:7900/689

That one succeeds.

Port 443's log: I tried several times and it does not work; the page now shows ERR_CONNECTION_CLOSED.

2015/06/11 08:48:28 [info] 12719#0: *451 client 10.0.0.1:1642 connected to 0.0.0.0:80
2015/06/11 08:48:28 [info] 12719#0: *451 proxy 172.31.5.228:26620 connected to 10.0.0.3:80
2015/06/11 08:48:28 [info] 12719#0: *451 upstream disconnected, bytes from/to client:704/452, bytes from/to upstream:452/704
2015/06/11 08:48:28 [info] 12719#0: *453 client 10.0.0.1:1518 connected to 0.0.0.0:443
2015/06/11 08:48:28 [info] 12719#0: *453 proxy 172.31.5.228:17021 connected to 10.0.0.2:80
2015/06/11 08:48:28 [info] 12719#0: *453 upstream disconnected, bytes from/to client:517/0, bytes from/to upstream:0/517
[further 443 connections with the same pattern trimmed -- each sends a few hundred bytes from the client and gets zero bytes back from the upstream]

And from the backend web servers, the request is not correct:
10.0.0.1,[11/Jun/2015:08:57:42 +0000],\x16\x03\x01\x02,/,HTTP/0.9,501,0,2030,-, 10.0.0.1

A normal request looks like:
172.31.11.248,[11/Jun/2015:09:00:30 +0000],GET,/signin,HTTP/1.1,200,5924,211592,Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.60 Safari/537.36,36.7.69.39, 172.31.11.248

So is that a bug?

-----Original Message-----
From: smith [mailto:smith.hua at zoom.us]
Sent: 11 June 2015 8:35
To: 'nginx at nginx.org'
Subject: Re: nginx plus with ssl on TCP load balance not work

[earlier messages quoted in full trimmed]

>> BTW, tcp how to set access log?
>> Posted at Nginx Forum:
>> http://forum.nginx.org/read.php?2,259522,259522#msg-259522

--
Roman Arutyunyan

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From ru at nginx.com Thu Jun 11 10:10:44 2015
From: ru at nginx.com (Ruslan Ermilov)
Date: Thu, 11 Jun 2015 13:10:44 +0300
Subject: Re: nginx plus with ssl on TCP load balance not work
In-Reply-To: <00aa01d0a425$8e168490$aa438db0$@zoom.us>
References: <4615911b8ef3c8d01341cbb534a6184f.NginxMailingListEnglish@forum.nginx.org> <79CCB287-2E05-4123-946B-677FBEF02E63@nginx.com> <008e01d0a41b$1e5bcf70$5b136e50$@zoom.us> <009901d0a420$937fcf40$ba7f6dc0$@zoom.us> <00aa01d0a425$8e168490$aa438db0$@zoom.us>
Message-ID: <20150611101044.GG42406@lo0.su>

On Thu, Jun 11, 2015 at 09:03:55AM -0000, smith wrote:
> With info level log enabled.
>
> Found these:
>
> [quoted log output trimmed -- see the previous message in the thread]
>
> And from the backend web servers, the request is not correct:
> 10.0.0.1,[11/Jun/2015:08:57:42 +0000],\x16\x03\x01\x02,/,HTTP/0.9,501,0,2030,-, 10.0.0.1
>
> A normal request looks like:
> 172.31.11.248,[11/Jun/2015:09:00:30 +0000],GET,/signin,HTTP/1.1,200,5924,211592,Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.60 Safari/537.36,36.7.69.39, 172.31.11.248
>
> So is that a bug?
>
> [earlier quoted messages trimmed]
>
> When I tried HTTP with SSL, I found I needed to set proxy_set_header X-Forwarded-Proto $scheme; in the server block, or it would also run into ERR_TOO_MANY_REDIRECTS.
> > Is TCP has same kind of setting? > > -----Original Message----- > From: smith [mailto:smith.hua at zoom.us] > Sent: 11 June 2015 8:28 > To: nginx at nginx.org > Subject: Re: nginx plus with ssl on TCP load balance not work > > The 80 is normal, And I tried use http ssl, also works. Don't know Why TCP not work. > > -----Original Message----- > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] on behalf of Roman > Arutyunyan > Sent: 11 June 2015 8:25 > To: nginx at nginx.org > Subject: Re: nginx plus with ssl on TCP load balance not work > > What about the 80 port of the stream balancer? > Does it proxy the connection normally? > > PS: no access log is supported in the stream module. > Connection information (addresses etc) is logged to error log with the info loglevel. > > On 11 Jun 2015, at 10:49, smith wrote: > > > Nginx.conf: > > > > user nginx; > > worker_processes auto; > > worker_rlimit_nofile 65535; > > > > error_log /var/log/nginx/error.log warn; > > pid /var/run/nginx.pid; > > > > > > events { > > use epoll; > > worker_connections 65535; > > } > > > > > > http { > > include /etc/nginx/mime.types; > > default_type application/octet-stream; > > > > log_format main '$remote_addr - $remote_user [$time_local] "$request" > > ' > > '$status $body_bytes_sent "$http_referer" ' > > '"$http_user_agent" "$http_x_forwarded_for"'; > > > > access_log /var/log/nginx/access.log main; > > > > sendfile on; > > #tcp_nopush on; > > > > keepalive_timeout 65; > > > > #gzip on; > > > > include /etc/nginx/conf.d/*.conf; > > } > > > > > > stream { > > > > include /etc/nginx/xxxx.d/*.conf; > > } > > > > And the content in previous email is in xxxx.d/xxxx.conf > > > > There is no file under /etc/nginx/conf.d > > > > > > Thanks. > > > > > > -----Original Message----- > > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] on behalf of > > Roman > > Arutyunyan > > Sent: 11 June 2015
7:45 > > To: nginx at nginx.org > > Subject: Re: nginx plus with ssl on TCP load balance not work > > > > Hi, > > > > Could you provide the full config of the nginx/stream balancer? > > > > On 11 Jun 2015, at 09:29, huakaibird wrote: > > > >> Hi, > >> > >> I'm using nginx plus with ssl on TCP load balance, Configured like > >> the documentation, but it not work. (All the IP below is not > >> real-ip) I have web servers behind, I want to use ssl offloading, and > >> I choose TCP load balance. listen on 443 and proxy to web server's 80. > >> > >> Page access always report ERR_TOO_MANY_REDIRECTS. > >> > >> Error log > >> 2015/06/11 03:00:32 [error] 8362#0: *361 upstream timed out (110: > >> Connection timed out) while connecting to upstream, client: 10.0.0.1, > > server: > >> 0.0.0.0:443, upstream: "10.0.0.2:443", bytes from/to client:656/0, > >> bytes from/to upstream:0/0 > >> > >> 10.0.0.2 this ip is the nginx ip, while it is used as upstream? > >> > >> The configuration is like this, remove the real ip > >> > >> server { > >> listen 80 so_keepalive=30m::10; > >> proxy_pass backend; > >> proxy_upstream_buffer 2048k; > >> proxy_downstream_buffer 2048k; > >> > >> } > >> > >> server { > >> listen 443 ssl; > >> proxy_pass backend; > >> #proxy_upstream_buffer 2048k; > >> #proxy_downstream_buffer 2048k; > >> ssl_certificate ssl/chained.crt; > >> #ssl_certificate ssl/4582cfef411bb.crt; > >> ssl_certificate_key ssl/zoomus20140410.key; > >> #ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > >> #ssl_ciphers HIGH:!aNULL:!MD5; > >> ssl_handshake_timeout 3s; > >> #ssl_session_cache shared:SSL:20m; > >> #ssl_session_timeout 4h; > >> > >> } > >> > >> > >> upstream backend { > >> server *.*.*.*:80; > >> server *.*.*.*:80; > >> } It looks like you have "proxy_ssl on;" in the stream{} block, do you?
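A note for readers who hit the same redirect loop: the stream{} module proxies raw TCP bytes, so once nginx terminates TLS there it has no way to inject an HTTP header such as X-Forwarded-Proto, and a backend that redirects plain-HTTP requests to HTTPS will loop exactly as described above. A hedged sketch of the http-level alternative that smith alludes to; the certificate paths and backend addresses below are placeholders, not the poster's real values:

```nginx
# Sketch only: terminate TLS in the http{} block instead of stream{},
# so the backend can tell that the original request was HTTPS.
upstream backend {
    server 10.0.0.2:80;   # placeholder backend addresses
    server 10.0.0.3:80;
}

server {
    listen 443 ssl;
    ssl_certificate     ssl/chained.crt;   # placeholder paths
    ssl_certificate_key ssl/example.key;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        # lets the backend application skip its HTTP-to-HTTPS redirect
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

With this in place the backend sees "X-Forwarded-Proto: https" and can suppress its redirect; the stream{} listener on 443 would be removed in favour of this server block.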
From Tom.deBrouwer at nl.bosch.com Thu Jun 11 11:07:57 2015 From: Tom.deBrouwer at nl.bosch.com (de Brouwer Tom (ST-CO/ENG5.1)) Date: Thu, 11 Jun 2015 11:07:57 +0000 Subject: auth_basic plain password in html Message-ID: <82747f17908748ec9583dcf97ca8c8d8@SI-MBX1029.de.bosch.com> All, I have set up auth_basic on my nginx web server. Whenever I authenticate, the username and password are sent as plain text in the HTTP request from my web browser. Is there an easy solution for this? Or should I switch to the non-default nginx_http_auth_digest module? Thanks, Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jun 11 14:26:01 2015 From: nginx-forum at nginx.us (huakaibird) Date: Thu, 11 Jun 2015 10:26:01 -0400 Subject: Re: Reply: nginx plus with ssl on TCP load balance not work In-Reply-To: <20150611101044.GG42406@lo0.su> References: <20150611101044.GG42406@lo0.su> Message-ID: No, I did not set proxy_ssl on. Is it on by default? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259522,259543#msg-259543 From smith.hua at zoom.us Thu Jun 11 14:42:53 2015 From: smith.hua at zoom.us (smith) Date: Thu, 11 Jun 2015 14:42:53 -0000 Subject: Reply: Reply: nginx plus with ssl on TCP load balance not work In-Reply-To: <20150611101044.GG42406@lo0.su> References: <4615911b8ef3c8d01341cbb534a6184f.NginxMailingListEnglish@forum.nginx.org> <79CCB287-2E05-4123-946B-677FBEF02E63@nginx.com> <008e01d0a41b$1e5bcf70$5b136e50$@zoom.us> <009901d0a420$937fcf40$ba7f6dc0$@zoom.us> <00aa01d0a425$8e168490$aa438db0$@zoom.us> <20150611101044.GG42406@lo0.su> Message-ID: <001201d0a454$f6169e70$e243db50$@zoom.us> No, I did not set proxy_ssl on. Sorry, my mistake: the log from the backend server is normal, but all of the responses are 302, not 200. So there is always a redirect. Why?
10.0.0.2,[11/Jun/2015:14:34:33 +0000],GET,/signin,HTTP/1.1,302,0,12690,Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36, 10.0.0.2
10.0.0.2,[11/Jun/2015:14:34:33 +0000],GET,/signin,HTTP/1.1,302,0,12690,Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36, 10.0.0.2
10.0.0.2,[11/Jun/2015:14:34:33 +0000],GET,/signin,HTTP/1.1,302,0,12690,Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36, 10.0.0.2
10.0.0.2,[11/Jun/2015:14:34:33 +0000],GET,/signin,HTTP/1.1,302,0,12690,Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36, 10.0.0.2
10.0.0.2,[11/Jun/2015:14:34:33 +0000],GET,/signin,HTTP/1.1,302,0,12690,Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36, 10.0.0.2
10.0.0.2,[11/Jun/2015:14:34:33 +0000],GET,/signin,HTTP/1.1,302,0,12690,Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36, 10.0.0.2
And the nginx log still shows many attempts:
2015/06/11 08:48:28 [info] 12719#0: *451 client 10.0.0.1:1642 connected to 0.0.0.0:80
2015/06/11 08:48:28 [info] 12719#0: *451 proxy 172.31.5.228:26620 connected to 10.0.0.3:80
2015/06/11 08:48:28 [info] 12719#0: *451 upstream disconnected, bytes from/to client:704/452, bytes from/to upstream:452/704
2015/06/11 08:48:28 [info] 12719#0: *453 client 10.0.0.1:1518 connected to 0.0.0.0:443
2015/06/11 08:48:28 [info] 12719#0: *453 proxy 172.31.5.228:17021 connected to 10.0.0.2:80
2015/06/11 08:48:28 [info] 12719#0: *453 upstream disconnected, bytes from/to client:517/0, bytes from/to upstream:0/517
2015/06/11 08:48:28 [info] 12719#0: *455 client 10.0.0.1:2943 connected to 0.0.0.0:443
2015/06/11 08:48:28 [info] 12719#0: *455 proxy 172.31.5.228:26622 connected to 10.0.0.3:80
2015/06/11 08:48:28
[info] 12719#0: *455 upstream disconnected, bytes from/to client:221/0, bytes from/to upstream:0/221 2015/06/11 08:48:28 [info] 12719#0: *457 client 10.0.0.1:2187 connected to 0.0.0.0:443 2015/06/11 08:48:28 [info] 12719#0: *457 proxy 172.31.5.228:17023 connected to 10.0.0.2:80 2015/06/11 08:48:28 [info] 12719#0: *457 upstream disconnected, bytes from/to client:174/0, bytes from/to upstream:0/174 2015/06/11 08:48:28 [info] 12719#0: *459 client 10.0.0.1:2346 connected to 0.0.0.0:443 2015/06/11 08:48:28 [info] 12719#0: *459 proxy 172.31.5.228:26624 connected to 10.0.0.3:80 2015/06/11 08:48:28 [info] 12719#0: *459 upstream disconnected, bytes from/to client:174/0, bytes from/to upstream:0/174 2015/06/11 08:48:29 [info] 12719#0: *461 client 10.0.0.1:2495 connected to 0.0.0.0:443 2015/06/11 08:48:29 [info] 12719#0: *461 proxy 172.31.5.228:17025 connected to 10.0.0.2:80 2015/06/11 08:48:29 [info] 12719#0: *461 upstream disconnected, bytes from/to client:517/0, bytes from/to upstream:0/517 2015/06/11 08:48:29 [info] 12719#0: *463 client 10.0.0.1:3742 connected to 0.0.0.0:443 2015/06/11 08:48:29 [info] 12719#0: *463 proxy 172.31.5.228:26626 connected to 10.0.0.3:80 2015/06/11 08:48:29 [info] 12719#0: *463 upstream disconnected, bytes from/to client:221/0, bytes from/to upstream:0/221 2015/06/11 08:48:29 [info] 12719#0: *465 client 10.0.0.1:3743 connected to 0.0.0.0:443 2015/06/11 08:48:29 [info] 12719#0: *465 proxy 172.31.5.228:17027 connected to 10.0.0.2:80 2015/06/11 08:48:29 [info] 12719#0: *465 upstream disconnected, bytes from/to client:174/0, bytes from/to upstream:0/174 2015/06/11 08:48:29 [info] 12719#0: *467 client 10.0.0.1:2343 connected to 0.0.0.0:443 2015/06/11 08:48:29 [info] 12719#0: *467 proxy 172.31.5.228:26628 connected to 10.0.0.3:80 2015/06/11 08:48:29 [info] 12719#0: *467 upstream disconnected, bytes from/to client:174/0, bytes from/to upstream:0/174 -----????----- ???: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] ?? 
Ruslan Ermilov ????: 2015?6?11? 10:11 ???: nginx at nginx.org ??: Re: ??: nginx plus with ssl on TCP load balance not work On Thu, Jun 11, 2015 at 09:03:55AM -0000, smith wrote: > With info level log enabled. > > Found these: > > 80's log: > 2015/06/11 08:48:18 [info] 12719#0: *449 client 10.0.0.1:1494 > connected to 0.0.0.0:80 > 2015/06/11 08:48:18 [info] 12719#0: *449 proxy 172.31.5.228:17019 > connected to 10.0.0.2:80 > 2015/06/11 08:48:19 [info] 12719#0: *449 upstream disconnected, bytes > from/to client:689/7900, bytes from/to upstream:7900/689 > > It's success > > 443's log: tried several times, not work, now page show > ERR_CONNECTION_CLOSED, still not work > > 2015/06/11 08:48:28 [info] 12719#0: *451 client 10.0.0.1:1642 > connected to 0.0.0.0:80 > 2015/06/11 08:48:28 [info] 12719#0: *451 proxy 172.31.5.228:26620 > connected to 10.0.0.3:80 > 2015/06/11 08:48:28 [info] 12719#0: *451 upstream disconnected, bytes > from/to client:704/452, bytes from/to upstream:452/704 > 2015/06/11 08:48:28 [info] 12719#0: *453 client 10.0.0.1:1518 > connected to 0.0.0.0:443 > 2015/06/11 08:48:28 [info] 12719#0: *453 proxy 172.31.5.228:17021 > connected to 10.0.0.2:80 > 2015/06/11 08:48:28 [info] 12719#0: *453 upstream disconnected, bytes > from/to client:517/0, bytes from/to upstream:0/517 > 2015/06/11 08:48:28 [info] 12719#0: *455 client 10.0.0.1:2943 > connected to 0.0.0.0:443 > 2015/06/11 08:48:28 [info] 12719#0: *455 proxy 172.31.5.228:26622 > connected to 10.0.0.3:80 > 2015/06/11 08:48:28 [info] 12719#0: *455 upstream disconnected, bytes > from/to client:221/0, bytes from/to upstream:0/221 > 2015/06/11 08:48:28 [info] 12719#0: *457 client 10.0.0.1:2187 > connected to 0.0.0.0:443 > 2015/06/11 08:48:28 [info] 12719#0: *457 proxy 172.31.5.228:17023 > connected to 10.0.0.2:80 > 2015/06/11 08:48:28 [info] 12719#0: *457 upstream disconnected, bytes > from/to client:174/0, bytes from/to upstream:0/174 > 2015/06/11 08:48:28 [info] 12719#0: *459 client 10.0.0.1:2346 > connected 
to 0.0.0.0:443 > 2015/06/11 08:48:28 [info] 12719#0: *459 proxy 172.31.5.228:26624 > connected to 10.0.0.3:80 > 2015/06/11 08:48:28 [info] 12719#0: *459 upstream disconnected, bytes > from/to client:174/0, bytes from/to upstream:0/174 > 2015/06/11 08:48:29 [info] 12719#0: *461 client 10.0.0.1:2495 > connected to 0.0.0.0:443 > 2015/06/11 08:48:29 [info] 12719#0: *461 proxy 172.31.5.228:17025 > connected to 10.0.0.2:80 > 2015/06/11 08:48:29 [info] 12719#0: *461 upstream disconnected, bytes > from/to client:517/0, bytes from/to upstream:0/517 > 2015/06/11 08:48:29 [info] 12719#0: *463 client 10.0.0.1:3742 > connected to 0.0.0.0:443 > 2015/06/11 08:48:29 [info] 12719#0: *463 proxy 172.31.5.228:26626 > connected to 10.0.0.3:80 > 2015/06/11 08:48:29 [info] 12719#0: *463 upstream disconnected, bytes > from/to client:221/0, bytes from/to upstream:0/221 > 2015/06/11 08:48:29 [info] 12719#0: *465 client 10.0.0.1:3743 > connected to 0.0.0.0:443 > 2015/06/11 08:48:29 [info] 12719#0: *465 proxy 172.31.5.228:17027 > connected to 10.0.0.2:80 > 2015/06/11 08:48:29 [info] 12719#0: *465 upstream disconnected, bytes > from/to client:174/0, bytes from/to upstream:0/174 > 2015/06/11 08:48:29 [info] 12719#0: *467 client 10.0.0.1:2343 > connected to 0.0.0.0:443 > 2015/06/11 08:48:29 [info] 12719#0: *467 proxy 172.31.5.228:26628 > connected to 10.0.0.3:80 > 2015/06/11 08:48:29 [info] 12719#0: *467 upstream disconnected, bytes > from/to client:174/0, bytes from/to upstream:0/174 > > > And from the backend web servers, found request not correct: > 10.0.0.1,[11/Jun/2015:08:57:42 > +0000],\x16\x03\x01\x02,/,HTTP/0.9,501,0,2030,-, 10.0.0.1 > > Normal request should be > 172.31.11.248,[11/Jun/2015:09:00:30 > +0000],GET,/signin,HTTP/1.1,200,5924,211592,Mozilla/5.0 (Windows NT > 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.60 > Safari/537.36,36.7.69.39, 172.31.11.248 > > So it that any bug? > > -----????----- > ???: smith [mailto:smith.hua at zoom.us] > ????: 2015?6?11? 
8:35 > ???: 'nginx at nginx.org' > ??: ??: nginx plus with ssl on TCP load balance not work > > When I'm trying http ssl, I found need to set proxy_set_header X-Forwarded-Proto $scheme; in server block, or it will also encounter ERR_TOO_MANY_REDIRECTS. > > Is TCP has same kind of setting? > > -----????----- > ???: smith [mailto:smith.hua at zoom.us] > ????: 2015?6?11? 8:28 > ???: nginx at nginx.org > ??: ??: nginx plus with ssl on TCP load balance not work > > The 80 is normal, And I tried use http ssl, also works. Don't know Why TCP not work. > > -----????----- > ???: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] ?? Roman > Arutyunyan > ????: 2015?6?11? 8:25 > ???: nginx at nginx.org > ??: Re: nginx plus with ssl on TCP load balance not work > > What about the 80 port of the stream balancer? > Does it proxy the connection normally? > > PS: no access log is supported in the stream module. > Connection information (addresses etc) is logged to error log with the info loglevel. > > On 11 Jun 2015, at 10:49, smith wrote: > > > Nginx.conf: > > > > user nginx; > > worker_processes auto; > > worker_rlimit_nofile 65535; > > > > error_log /var/log/nginx/error.log warn; > > pid /var/run/nginx.pid; > > > > > > events { > > use epoll; > > worker_connections 65535; > > } > > > > > > http { > > include /etc/nginx/mime.types; > > default_type application/octet-stream; > > > > log_format main '$remote_addr - $remote_user [$time_local] "$request" > > ' > > '$status $body_bytes_sent "$http_referer" ' > > '"$http_user_agent" "$http_x_forwarded_for"'; > > > > access_log /var/log/nginx/access.log main; > > > > sendfile on; > > #tcp_nopush on; > > > > keepalive_timeout 65; > > > > #gzip on; > > > > include /etc/nginx/conf.d/*.conf; } > > > > > > stream { > > > > include /etc/nginx/xxxx.d/*.conf; > > } > > > > And the content in previous email is in xxxx.d/xxxx.conf > > > > There is no file under /etc/nginx/conf.d > > > > > > Thanks. 
> > > > > > -----????----- > > ???: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] ?? > > Roman > > Arutyunyan > > ????: 2015?6?11? 7:45 > > ???: nginx at nginx.org > > ??: Re: nginx plus with ssl on TCP load balance not work > > > > Hi, > > > > Could you provide the full config of the nginx/stream balancer? > > > > On 11 Jun 2015, at 09:29, huakaibird wrote: > > > >> Hi, > >> > >> I?m using nginx plus with ssl on TCP load balance, Configured like > >> the documentation, but it not work. (All the IP below is not > >> real-ip) I have web servers behind, I want to use ssl offloading, and > >> I choose TCP load balance. listen on 443 and proxy to web server's 80. > >> > >> Page access always report ERR_TOO_MANY_REDIRECTS. > >> > >> Error log > >> 2015/06/11 03:00:32 [error] 8362#0: *361 upstream timed out (110: > >> Connection timed out) while connecting to upstream, client: 10.0.0.1, > > server: > >> 0.0.0.0:443, upstream: "10.0.0.2:443", bytes from/to client:656/0, > >> bytes from/to upstream:0/0 > >> > >> 10.0.0.2 this ip is the nginx ip, while it is used as upstream? > >> > >> The configuration is like this, remove the real ip > >> > >> server { > >> listen 80 so_keepalive=30m::10; > >> proxy_pass backend; > >> proxy_upstream_buffer 2048k; > >> proxy_downstream_buffer 2048k; > >> > >> } > >> > >> server { > >> listen 443 ssl; > >> proxy_pass backend; > >> #proxy_upstream_buffer 2048k; > >> #proxy_downstream_buffer 2048k; > >> ssl_certificate ssl/chained.crt; > >> #ssl_certificate ssl/4582cfef411bb.crt; > >> ssl_certificate_key ssl/zoomus20140410.key; > >> #ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > >> #ssl_ciphers HIGH:!aNULL:!MD5; > >> ssl_handshake_timeout 3s; > >> #ssl_session_cache shared:SSL:20m; > >> #ssl_session_timeout 4h; > >> > >> } > >> > >> > >> upstream backend { > >> server *.*.*.*:80; > >> server *.*.*.*:80; > >> } It looks like you have "proxy_ssl on;" in the stream{} block, do you? 
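One detail worth decoding from the earlier backend log line `\x16\x03\x01\x02,/,HTTP/0.9,501`: those first bytes are a TLS record header, not an HTTP request line, which is what a backend logs when encrypted traffic reaches it without being terminated. A small check (assuming a POSIX shell with `od` available) prints the same bytes as hex:

```shell
# Hypothetical check: 0x16 is the TLS "handshake" record content type and
# 0x03 0x01 is the record-layer version (TLS 1.0). A plain-HTTP backend that
# receives these bytes cannot parse them as a request line and answers 501.
printf '\026\003\001\002' | od -An -tx1
```

Seeing these bytes at a plain-HTTP port means that, for that particular connection, the TLS layer was bypassed rather than terminated by the balancer.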
_______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From sarah at nginx.com Thu Jun 11 18:25:34 2015 From: sarah at nginx.com (Sarah Novotny) Date: Thu, 11 Jun 2015 11:25:34 -0700 Subject: Upcoming NGINX Events Message-ID: <85F874D1-DF32-43AB-8074-FDA4F17E754C@nginx.com> Hi All! NGINX has lots of upcoming events and we're looking for a variety of talks for them, ranging from beginner to advanced - topics on and around NGINX and its use cases, from APIs, to resilient modern web architecture, to microservice implementations, to well-won war stories. This is your opportunity to share your insights and tell us what you're working on. If you're newer to NGINX and wouldn't feel comfortable speaking, please join us as a Summit attendee. We'll host our NGINX Fundamentals course in the morning at each of the Summits and have guest speakers in the afternoon. Past speakers have included Jeff Kaufman of Google, Andrew Stein of Distil Networks, Vanessa Ramos and Nicolas Grenié of 3Scale, Andrew Fong of Dropbox, John Wason of Disqus, Dustin Whittle of AppDynamics, and Chris Richardson, creator of the original CloudFoundry.com. After all the content, stick around for snacks, drinks, and social time with fellow NGINX users and the NGINX team. The NGINX summit series (CFP here - https://docs.google.com/forms/d/1qwjBzyqVYjoeBKnSpby51cBjX55ss-HWbD3B0J2QUNA/viewform):
Raleigh, NC - July 9 - https://www.eventbrite.com/e/nginx-summit-training-raleigh-tickets-16979353704
Portland, OR - July 19 (co-locating with OSCON, the Sunday before OSCON kicks off) - https://www.eventbrite.com/e/nginx-summit-training-portland-tickets-17031615019
Boston, MA - August 18 - https://www.eventbrite.com/e/nginx-summit-training-boston-tickets-17031867775
New York City, NY - August 20 - https://www.eventbrite.com/e/nginx-summit-training-new-york-city-tickets-17032035276
Chicago, IL - August 25 -
https://www.eventbrite.com/e/nginx-summit-training-chicago-tickets-17032251924
Denver, CO - August 27 (coming soon)
Austin, TX - October 21 - https://www.eventbrite.com/e/nginx-summit-training-austin-tickets-17032667166
Los Angeles, CA - early November (coming soon)
and, please do mark your calendar for the NGINX user conference at Fort Mason, San Francisco, September 22-24. Additional details are on our call for proposals page. https://nginxconf15.busyconf.com/proposals/new I look forward to seeing your submission(s) and meeting you at the events. Sarah From francis at daoine.org Thu Jun 11 19:29:36 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 11 Jun 2015 20:29:36 +0100 Subject: auth_basic plain password in html In-Reply-To: <82747f17908748ec9583dcf97ca8c8d8@SI-MBX1029.de.bosch.com> References: <82747f17908748ec9583dcf97ca8c8d8@SI-MBX1029.de.bosch.com> Message-ID: <20150611192936.GJ2957@daoine.org> On Thu, Jun 11, 2015 at 11:07:57AM +0000, de Brouwer Tom (ST-CO/ENG5.1) wrote: Hi there, > I have setup aut_basic on my nginx webserver, whenever I authenticate the username and password are send as plain text via the html request from my webbrowser, is there an easy solution for this? HTTP Basic Authentication is effectively plain text between the browser and the server. The way to make that not easily readable is to wrap it in TLS - so run an https service instead of an http service. > Or should I switch to the non default nginx_http_auth_digest module? The other option is not to use HTTP Basic Authentication; HTTP Digest Authentication is probably the most familiar alternative for common browsers. f -- Francis Daly francis at daoine.org From steve at greengecko.co.nz Fri Jun 12 01:50:15 2015 From: steve at greengecko.co.nz (steve) Date: Fri, 12 Jun 2015 13:50:15 +1200 Subject: A bit confused... Message-ID: <557A3AD7.1060307@greengecko.co.nz> I'm trying to make some sense out of this and am left a bit cold!
What could cause this: ( I've left out any attempt at anonymising in case I hide something ) From the docroot... $ ls -l images/models/Lapierre/Overvolt* -rw-r--r-- 1 right-bike right-bike 342373 Jun 11 20:09 images/models/Lapierre/Overvolt FS.png -rw-r--r-- 1 right-bike right-bike 318335 Jun 11 20:09 images/models/Lapierre/Overvolt HT.png $ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ FS.png HTTP/1.1 200 OK Server: nginx/1.9.1 Date: Fri, 12 Jun 2015 01:47:14 GMT Content-Type: image/png Last-Modified: Thu, 11 Jun 2015 10:09:52 GMT ETag: "55795e70-53965" Expires: Sat, 13 Jun 2015 01:47:14 GMT Cache-Control: max-age=86400 Accept-Ranges: bytes Content-Length: 342373 Connection: Keep-Alive $ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ HT.png HTTP/1.1 400 Bad Request Server: nginx/1.9.1 Date: Fri, 12 Jun 2015 01:47:05 GMT Content-Type: text/html Content-Length: 172 Connection: close The second one shows no entry at all in the access log but I can't find any reason why they're processed differently at all. Suggestions please! -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From steve at greengecko.co.nz Fri Jun 12 01:52:46 2015 From: steve at greengecko.co.nz (steve) Date: Fri, 12 Jun 2015 13:52:46 +1200 Subject: A bit confused... In-Reply-To: <557A3AD7.1060307@greengecko.co.nz> References: <557A3AD7.1060307@greengecko.co.nz> Message-ID: <557A3B6E.1050900@greengecko.co.nz> Just a quick addition... I've tried it from this office, which is IPv4, and from IPv6 enabled locations. This makes no difference. On 12/06/15 13:50, steve wrote: > I'm tryiong to make some sense out of this and am left a bit cold! > What could cause this: > > ( I've left out any attempt at anonymising in case I hide something ) > > From the docroot... 
> > $ ls -l images/models/Lapierre/Overvolt* > -rw-r--r-- 1 right-bike right-bike 342373 Jun 11 20:09 > images/models/Lapierre/Overvolt FS.png > -rw-r--r-- 1 right-bike right-bike 318335 Jun 11 20:09 > images/models/Lapierre/Overvolt HT.png > > > $ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ > FS.png > HTTP/1.1 200 OK > Server: nginx/1.9.1 > Date: Fri, 12 Jun 2015 01:47:14 GMT > Content-Type: image/png > Last-Modified: Thu, 11 Jun 2015 10:09:52 GMT > ETag: "55795e70-53965" > Expires: Sat, 13 Jun 2015 01:47:14 GMT > Cache-Control: max-age=86400 > Accept-Ranges: bytes > Content-Length: 342373 > Connection: Keep-Alive > > $ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ > HT.png > HTTP/1.1 400 Bad Request > Server: nginx/1.9.1 > Date: Fri, 12 Jun 2015 01:47:05 GMT > Content-Type: text/html > Content-Length: 172 > Connection: close > > The second one shows no entry at all in the access log but I can't > find any reason why they're processed differently at all. > > Suggestions please! > -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From miguelmclara at gmail.com Fri Jun 12 01:56:17 2015 From: miguelmclara at gmail.com (Miguel Clara) Date: Fri, 12 Jun 2015 02:56:17 +0100 Subject: A bit confused... 
In-Reply-To: <557A3B6E.1050900@greengecko.co.nz> References: <557A3AD7.1060307@greengecko.co.nz> <557A3B6E.1050900@greengecko.co.nz> Message-ID: Interesting. I tried on my side and got the same results, but it does work like this: curl -I "http://backend.right.bike/images/models/Lapierre/Overvolt%20HT.png" HTTP/1.1 200 OK Server: nginx/1.9.1 Date: Fri, 12 Jun 2015 01:55:27 GMT Content-Type: image/png Content-Length: 318335 Last-Modified: Thu, 11 Jun 2015 10:09:54 GMT Connection: keep-alive ETag: "55795e72-4db7f" Expires: Sat, 13 Jun 2015 01:55:27 GMT Cache-Control: max-age=86400 Accept-Ranges: bytes Melhores Cumprimentos // Best Regards ----------------------------------------------- Miguel Clara IT - Sys Admin & Developer On Fri, Jun 12, 2015 at 2:52 AM, steve wrote: > Just a quick addition... I've tried it from this office, which is IPv4, and > from IPv6 enabled locations. This makes no difference. > > > > On 12/06/15 13:50, steve wrote: >> >> I'm tryiong to make some sense out of this and am left a bit cold! What >> could cause this: >> >> ( I've left out any attempt at anonymising in case I hide something ) >> >> From the docroot...
>> >> $ ls -l images/models/Lapierre/Overvolt* >> -rw-r--r-- 1 right-bike right-bike 342373 Jun 11 20:09 >> images/models/Lapierre/Overvolt FS.png >> -rw-r--r-- 1 right-bike right-bike 318335 Jun 11 20:09 >> images/models/Lapierre/Overvolt HT.png >> >> >> $ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ >> FS.png >> HTTP/1.1 200 OK >> Server: nginx/1.9.1 >> Date: Fri, 12 Jun 2015 01:47:14 GMT >> Content-Type: image/png >> Last-Modified: Thu, 11 Jun 2015 10:09:52 GMT >> ETag: "55795e70-53965" >> Expires: Sat, 13 Jun 2015 01:47:14 GMT >> Cache-Control: max-age=86400 >> Accept-Ranges: bytes >> Content-Length: 342373 >> Connection: Keep-Alive >> >> $ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ >> HT.png >> HTTP/1.1 400 Bad Request >> Server: nginx/1.9.1 >> Date: Fri, 12 Jun 2015 01:47:05 GMT >> Content-Type: text/html >> Content-Length: 172 >> Connection: close >> >> The second one shows no entry at all in the access log but I can't find >> any reason why they're processed differently at all. >> >> Suggestions please! >> > > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From steve at greengecko.co.nz Fri Jun 12 02:00:03 2015 From: steve at greengecko.co.nz (steve) Date: Fri, 12 Jun 2015 14:00:03 +1200 Subject: A bit confused... In-Reply-To: References: <557A3AD7.1060307@greengecko.co.nz> <557A3B6E.1050900@greengecko.co.nz> Message-ID: <557A3D23.8000302@greengecko.co.nz> Aargh! 
On 12/06/15 13:56, Miguel Clara wrote: > Interesting I tried at my side got the same results but it does work like this: > > curl -I "http://backend.right.bike/images/models/Lapierre/Overvolt%20HT.png" > HTTP/1.1 200 OK > Server: nginx/1.9.1 > Date: Fri, 12 Jun 2015 01:55:27 GMT > Content-Type: image/png > Content-Length: 318335 > Last-Modified: Thu, 11 Jun 2015 10:09:54 GMT > Connection: keep-alive > ETag: "55795e72-4db7f" > Expires: Sat, 13 Jun 2015 01:55:27 GMT > Cache-Control: max-age=86400 > Accept-Ranges: bytes > > > > Melhores Cumprimentos // Best Regards > ----------------------------------------------- > Miguel Clara > IT - Sys Admin & Developer > > > On Fri, Jun 12, 2015 at 2:52 AM, steve wrote: >> Just a quick addition... I've tried it from this office, which is IPv4, and >> from IPv6 enabled locations. This makes no difference. >> >> >> >> On 12/06/15 13:50, steve wrote: >>> I'm tryiong to make some sense out of this and am left a bit cold! What >>> could cause this: >>> >>> ( I've left out any attempt at anonymising in case I hide something ) >>> >>> From the docroot... 
>>> >>> $ ls -l images/models/Lapierre/Overvolt* >>> -rw-r--r-- 1 right-bike right-bike 342373 Jun 11 20:09 >>> images/models/Lapierre/Overvolt FS.png >>> -rw-r--r-- 1 right-bike right-bike 318335 Jun 11 20:09 >>> images/models/Lapierre/Overvolt HT.png >>> >>> >>> $ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ >>> FS.png >>> HTTP/1.1 200 OK >>> Server: nginx/1.9.1 >>> Date: Fri, 12 Jun 2015 01:47:14 GMT >>> Content-Type: image/png >>> Last-Modified: Thu, 11 Jun 2015 10:09:52 GMT >>> ETag: "55795e70-53965" >>> Expires: Sat, 13 Jun 2015 01:47:14 GMT >>> Cache-Control: max-age=86400 >>> Accept-Ranges: bytes >>> Content-Length: 342373 >>> Connection: Keep-Alive >>> >>> $ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ >>> HT.png >>> HTTP/1.1 400 Bad Request >>> Server: nginx/1.9.1 >>> Date: Fri, 12 Jun 2015 01:47:05 GMT >>> Content-Type: text/html >>> Content-Length: 172 >>> Connection: close >>> >>> The second one shows no entry at all in the access log but I can't find >>> any reason why they're processed differently at all. >>> >>> Suggestions please! >>> Well, looks like there's a workaround probably available. This happens to about 20 out of 700 files.... Thanks for the lateral thinking. Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From miguelmclara at gmail.com Fri Jun 12 02:07:25 2015 From: miguelmclara at gmail.com (Miguel Clara) Date: Fri, 12 Jun 2015 03:07:25 +0100 Subject: A bit confused... In-Reply-To: <557A3D23.8000302@greengecko.co.nz> References: <557A3AD7.1060307@greengecko.co.nz> <557A3B6E.1050900@greengecko.co.nz> <557A3D23.8000302@greengecko.co.nz> Message-ID: On Fri, Jun 12, 2015 at 3:00 AM, steve wrote: > Aargh! 
> > > On 12/06/15 13:56, Miguel Clara wrote: >> >> Interesting I tried at my side got the same results but it does work like >> this: >> >> curl -I >> "http://backend.right.bike/images/models/Lapierre/Overvolt%20HT.png" >> HTTP/1.1 200 OK >> Server: nginx/1.9.1 >> Date: Fri, 12 Jun 2015 01:55:27 GMT >> Content-Type: image/png >> Content-Length: 318335 >> Last-Modified: Thu, 11 Jun 2015 10:09:54 GMT >> Connection: keep-alive >> ETag: "55795e72-4db7f" >> Expires: Sat, 13 Jun 2015 01:55:27 GMT >> Cache-Control: max-age=86400 >> Accept-Ranges: bytes >> >> >> >> Melhores Cumprimentos // Best Regards >> ----------------------------------------------- >> Miguel Clara >> IT - Sys Admin & Developer >> >> >> On Fri, Jun 12, 2015 at 2:52 AM, steve wrote: >>> >>> Just a quick addition... I've tried it from this office, which is IPv4, >>> and >>> from IPv6 enabled locations. This makes no difference. >>> >>> >>> >>> On 12/06/15 13:50, steve wrote: >>>> >>>> I'm tryiong to make some sense out of this and am left a bit cold! What >>>> could cause this: >>>> >>>> ( I've left out any attempt at anonymising in case I hide something ) >>>> >>>> From the docroot... 
>>>> $ ls -l images/models/Lapierre/Overvolt* >>>> -rw-r--r-- 1 right-bike right-bike 342373 Jun 11 20:09 >>>> images/models/Lapierre/Overvolt FS.png >>>> -rw-r--r-- 1 right-bike right-bike 318335 Jun 11 20:09 >>>> images/models/Lapierre/Overvolt HT.png >>>> >>>> >>>> $ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ >>>> FS.png >>>> HTTP/1.1 200 OK >>>> Server: nginx/1.9.1 >>>> Date: Fri, 12 Jun 2015 01:47:14 GMT >>>> Content-Type: image/png >>>> Last-Modified: Thu, 11 Jun 2015 10:09:52 GMT >>>> ETag: "55795e70-53965" >>>> Expires: Sat, 13 Jun 2015 01:47:14 GMT >>>> Cache-Control: max-age=86400 >>>> Accept-Ranges: bytes >>>> Content-Length: 342373 >>>> Connection: Keep-Alive >>>> >>>> $ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ >>>> HT.png >>>> HTTP/1.1 400 Bad Request >>>> Server: nginx/1.9.1 >>>> Date: Fri, 12 Jun 2015 01:47:05 GMT >>>> Content-Type: text/html >>>> Content-Length: 172 >>>> Connection: close >>>> >>>> The second one shows no entry at all in the access log but I can't find >>>> any reason why they're processed differently at all. >>>> >>>> Suggestions please! >>>> > Well, looks like there's a workaround probably available. This happens to > about 20 out of 700 files.... > > Thanks for the lateral thinking. > NP. I usually go for %20 because it's what browsers do anyway, but it's indeed interesting that curl works for some and not others. What does the nginx error log tell you? > Steve > > > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From miguelmclara at gmail.com Fri Jun 12 02:15:50 2015 From: miguelmclara at gmail.com (Miguel Clara) Date: Fri, 12 Jun 2015 03:15:50 +0100 Subject: A bit confused...
In-Reply-To: References: <557A3AD7.1060307@greengecko.co.nz> <557A3B6E.1050900@greengecko.co.nz> <557A3D23.8000302@greengecko.co.nz> Message-ID: BTW, I tested a few more URLs, and all others give 404, but anything with "http://backend.right.bike/images/models/Lapierre/Overvolt\ H***" fails with 400 note that only "Overvolt\ H" and "Overvolt H" fail, not "Overvolt\ h" or "Overvolt h" I just have no clue why, maybe something in the config Melhores Cumprimentos // Best Regards ----------------------------------------------- Miguel Clara IT - Sys Admin & Developer From steve at greengecko.co.nz Fri Jun 12 02:31:53 2015 From: steve at greengecko.co.nz (steve) Date: Fri, 12 Jun 2015 14:31:53 +1200 Subject: A bit confused... In-Reply-To: References: <557A3AD7.1060307@greengecko.co.nz> <557A3B6E.1050900@greengecko.co.nz> <557A3D23.8000302@greengecko.co.nz> Message-ID: <557A4499.7080304@greengecko.co.nz> A bit more info... On 12/06/15 14:15, Miguel Clara wrote: > BTW, I tested a few more URLs, and all others give 404, but anything > with "http://backend.right.bike/images/models/Lapierre/Overvolt\ H***" > fails with 400 > > note that only "Overvolt\ H" and "Overvolt H" fail, not "Overvolt\ h" > or "Overvolt h" > > I just have no clue why, maybe something in the config > Melhores Cumprimentos // Best Regards > ----------------------------------------------- > Miguel Clara > IT - Sys Admin & Developer > > It seems to be objecting to the string ' H' in the URL. -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From steve at greengecko.co.nz Fri Jun 12 02:42:03 2015 From: steve at greengecko.co.nz (steve) Date: Fri, 12 Jun 2015 14:42:03 +1200 Subject: A bit confused...
In-Reply-To: <557A4499.7080304@greengecko.co.nz> References: <557A3AD7.1060307@greengecko.co.nz> <557A3B6E.1050900@greengecko.co.nz> <557A3D23.8000302@greengecko.co.nz> <557A4499.7080304@greengecko.co.nz> Message-ID: <557A46FB.7010307@greengecko.co.nz> Hmm... On 12/06/15 14:31, steve wrote: > A bit more info... > > On 12/06/15 14:15, Miguel Clara wrote: >> BTW, I tested a few more URLs, and all others give 404, but anything >> with "http://backend.right.bike/images/models/Lapierre/Overvolt\ H***" >> fails with 400 >> >> note that only "Overvolt\ H" and "Overvolt H" fail, not "Overvolt\ h" >> or "Overvolt h" >> >> I just have no clue why, maybe something in the config >> Melhores Cumprimentos // Best Regards >> ----------------------------------------------- >> Miguel Clara >> IT - Sys Admin & Developer >> >> > > It seems to be objecting to the string ' H' in the URL. > I have tried on a number of different installs, 1.7 to 1.9, by touching then attempting to access 'f H.png'. Most of my configs are for php-based CMSes, but I still get the same 400 code for static, cookie-free setups ( that one was 1.7.1 ). Should I be raising a bug, and if so, can someone point me towards a howto please? Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From oscaretu at gmail.com Fri Jun 12 06:10:43 2015 From: oscaretu at gmail.com (oscaretu .) Date: Fri, 12 Jun 2015 08:10:43 +0200 Subject: A bit confused... In-Reply-To: <557A3AD7.1060307@greengecko.co.nz> References: <557A3AD7.1060307@greengecko.co.nz> Message-ID: Hello Can you try some commands like the ones at http://unix.stackexchange.com/questions/2182/identifying-files-with-special-characters-in-its-name-in-a-terminal to see if you have special chars in some of the filenames? On Fri, Jun 12, 2015 at 3:50 AM, steve wrote: > I'm trying to make some sense out of this and am left a bit cold!
What > could cause this: > > ( I've left out any attempt at anonymising in case I hide something ) > > From the docroot... > > $ ls -l images/models/Lapierre/Overvolt* > -rw-r--r-- 1 right-bike right-bike 342373 Jun 11 20:09 > images/models/Lapierre/Overvolt FS.png > -rw-r--r-- 1 right-bike right-bike 318335 Jun 11 20:09 > images/models/Lapierre/Overvolt HT.png > > > $ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ > FS.png > HTTP/1.1 200 OK > Server: nginx/1.9.1 > Date: Fri, 12 Jun 2015 01:47:14 GMT > Content-Type: image/png > Last-Modified: Thu, 11 Jun 2015 10:09:52 GMT > ETag: "55795e70-53965" > Expires: Sat, 13 Jun 2015 01:47:14 GMT > Cache-Control: max-age=86400 > Accept-Ranges: bytes > Content-Length: 342373 > Connection: Keep-Alive > > $ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ > HT.png > HTTP/1.1 400 Bad Request > Server: nginx/1.9.1 > Date: Fri, 12 Jun 2015 01:47:05 GMT > Content-Type: text/html > Content-Length: 172 > Connection: close > > The second one shows no entry at all in the access log but I can't find > any reason why they're processed differently at all. > > Suggestions please! > > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Oscar Fernandez Sierra oscaretu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Jun 12 06:59:04 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 12 Jun 2015 07:59:04 +0100 Subject: A bit confused... 
In-Reply-To: <557A3AD7.1060307@greengecko.co.nz> References: <557A3AD7.1060307@greengecko.co.nz> Message-ID: <20150612065904.GA3018@daoine.org> On Fri, Jun 12, 2015 at 01:50:15PM +1200, steve wrote: Hi there, > I'm tryiong to make some sense out of this and am left a bit cold! > What could cause this: Both requests are invalid - "space" may not appear in a url. Encode it as %20 and things will work. nginx happens to try one form of "dwim" error recovery when the character after the invalid space(s) is not "H", and does not try it when the character is "H". > $ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ FS.png > HTTP/1.1 200 OK > $ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ HT.png > HTTP/1.1 400 Bad Request > The second one shows no entry at all in the access log but I can't > find any reason why they're processed differently at all. > > Suggestions please! I presume that the nginx request-line parser stops at the whitespace which says "end of url, what follows is the HTTP version", sees that it does not start with "H", and decides "perhaps this is an invalid url; I'll carry on parsing and maybe I can helpfully handle this broken request"; or sees that it does start with "H" and decides "clearly this was the end of the url, I shall now identify the HTTP request version; oh, it's broken, error 400". You could argue that nginx could try an extra level of dwimmery to try to drag something useful out of the second broken request; or you could argue that it should fail the first broken request as well. Or you could accept that the client has broken the protocol, and the server is mostly free to do what it likes in response. 
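The recovery heuristic described above can be modelled in a few lines of Python. This is a toy sketch inferred from the behaviour reported in this thread, not nginx's actual parser:

```python
def accepts(request_line: str) -> bool:
    """Toy model of the lenient request-line handling described above.

    A conforming line is "METHOD URI HTTP/x.y".  When the URI contains a
    raw space, the token after it is committed to as the HTTP version if
    it starts with "H" (and then fails unless it really is "HTTP/...");
    otherwise the space is folded back into the URI and scanning resumes.
    """
    tokens = request_line.split(" ")
    uri, rest = tokens[1], tokens[2:]    # tokens[0] is the method
    for i, tok in enumerate(rest):
        if tok.startswith("H"):
            # Must now be a valid version and the final token, else 400.
            return tok.startswith("HTTP/") and i == len(rest) - 1
        uri = uri + " " + tok            # lenient recovery: keep the space
    return False                         # no version token at all
```

This reproduces the asymmetry seen in the thread: "Overvolt FS.png" is recovered, while "Overvolt HT.png" makes the parser commit to "HT.png" as a (broken) HTTP version and fail with 400.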
I suspect that "fail the first broken request" won't happen, as a practical QoI matter; and "try to accept the second broken request" might happen if someone who cares can provide a low-impact patch -- it's easy for me to say "it's a Simple Matter of Programming", because I don't intend to write the patch ;-) But "don't make invalid requests" is the way to see the bicycle. f -- Francis Daly francis at daoine.org From steve at greengecko.co.nz Fri Jun 12 07:38:19 2015 From: steve at greengecko.co.nz (Steve Holdoway) Date: Fri, 12 Jun 2015 19:38:19 +1200 Subject: A bit confused... In-Reply-To: <20150612065904.GA3018@daoine.org> References: <557A3AD7.1060307@greengecko.co.nz> <20150612065904.GA3018@daoine.org> Message-ID: <557A8C6B.1020509@greengecko.co.nz> Hi, On 12/06/15 18:59, Francis Daly wrote: > On Fri, Jun 12, 2015 at 01:50:15PM +1200, steve wrote: > > Hi there, > >> I'm tryiong to make some sense out of this and am left a bit cold! >> What could cause this: > Both requests are invalid - "space" may not appear in a url. Encode it > as %20 and things will work. > > nginx happens to try one form of "dwim" error recovery when the character > after the invalid space(s) is not "H", and does not try it when the character > is "H". > >> $ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ FS.png >> HTTP/1.1 200 OK >> $ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ HT.png >> HTTP/1.1 400 Bad Request >> The second one shows no entry at all in the access log but I can't >> find any reason why they're processed differently at all. >> >> Suggestions please! 
> I presume that the nginx request-line parser stops at the whitespace which > says "end of url, what follows is the HTTP version", sees that it does > not start with "H", and decides "perhaps this is an invalid url; I'll > carry on parsing and maybe I can helpfully handle this broken request"; > or sees that it does start with "H" and decides "clearly this was the end > of the url, I shall now identify the HTTP request version; oh, it's broken, > error 400". > > You could argue that nginx could try an extra level of dwimmery to try > to drag something useful out of the second broken request; or you could > argue that it should fail the first broken request as well. > > Or you could accept that the client has broken the protocol, and the > server is mostly free to do what it likes in response. > > I suspect that "fail the first broken request" won't happen, as a > practical QoI matter; and "try to accept the second broken request" > might happen if someone who cares can provide a low-impact patch -- > it's easy for me to say "it's a Simple Matter of Programming", because > I don't intend to write the patch ;-) > > But "don't make invalid requests" is the way to see the bicycle. > > f I have 750 image files, many of them have spaces in their names. The example I showed, and the 30 that deliver a 400 bad request status *all* contain a ' H' in the file name. ' h', ' G' and most things similar return a 200 status. No matter what, one passes and one fails. It's not repeatable behaviour. So don't succeed riding the bicycle most of the time is the way I see it. The first time I've ever disagreed with you Francis! Steve From francis at daoine.org Fri Jun 12 08:35:19 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 12 Jun 2015 09:35:19 +0100 Subject: A bit confused... 
In-Reply-To: <557A8C6B.1020509@greengecko.co.nz> References: <557A3AD7.1060307@greengecko.co.nz> <20150612065904.GA3018@daoine.org> <557A8C6B.1020509@greengecko.co.nz> Message-ID: <20150612083519.GB3018@daoine.org> On Fri, Jun 12, 2015 at 07:38:19PM +1200, Steve Holdoway wrote: > On 12/06/15 18:59, Francis Daly wrote: > >On Fri, Jun 12, 2015 at 01:50:15PM +1200, steve wrote: Hi there, > >>$ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ FS.png > >>HTTP/1.1 200 OK > >>$ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ HT.png > >>HTTP/1.1 400 Bad Request > >I suspect that "fail the first broken request" won't happen, as a > >practical QoI matter; and "try to accept the second broken request" > >might happen if someone who cares can provide a low-impact patch -- > >it's easy for me to say "it's a Simple Matter of Programming", because > >I don't intend to write the patch ;-) > > > >But "don't make invalid requests" is the way to see the bicycle. > I have 750 image files, many of them have spaces in their names. The > example I showed, and the 30 that deliver a 400 bad request status > *all* contain a ' H' in the file name. ' h', ' G' and most things > similar return a 200 status. A filename with spaces isn't a problem. A http request (url) with spaces is a problem. Create different files called "50%good", "50%bad", "%", "%25", and "wtf?", and try to access them as if their filenames can be used directly in http requests. You'll see different responses -- errors or not-the-file-you-wanted -- all of which are understandable when you accept that a filename cannot be used directly in a url. You must always url-encode a filename when creating a url. If the filename is restricted to alnum-dot-underscore, then "url-encode" is the identity transform. For the full details, RTFRFC. > No matter what, one passes and one fails. It's not repeatable > behaviour. space-H fails, space-anything-else passes. 
That looks repeatable to me :-) http://trac.nginx.org/nginx/ticket/196 has some background. The short version is that all *should* fail, and all did fail, but to be kind to broken clients, nginx was changed to let most pass. That was a convenient change, but does lead to this confusion. (I think that a subsequent change meant that the response is in HTTP/1 format, rather than the HTTP/0.9 that it originally should have been. That one was a good change.) > So don't succeed riding the bicycle most of the time is > the way I see it. urlpart=urlescape($filename) Then always use $urlpart instead of $filename when you write the link, and it will always work. (This is a http thing, not an nginx thing. Other web servers will have their own error-handling and error-correction, which will probably not be identical to nginx's.) > The first time I've ever disagreed with you Francis! Not a problem. I think the only difference of opinion is whether, given a broken request, most should pass or none should pass. And both opinions are reasonable. I think that the Right Answer is for there to be Yet Another Option so that one can configure "reject_malformed_http1_requests" to make all requests containing space (and possibly all http/0.9 requests, as an implementation-convenience consequence) fail immediately. Or just revert the patch linked from the trac message and hear the users complain. But since I won't be writing any of the code, my vote counts for little. 
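The urlpart=urlescape($filename) rule above can be sketched in Python; the base URL is just the thread's example, and passing safe="" is an assumption made here so that a "/" inside a filename is encoded too:

```python
from urllib.parse import quote

def file_url(base: str, filename: str) -> str:
    # Percent-encode everything outside the unreserved set: a space
    # becomes %20, "%" becomes %25, "?" becomes %3F, and so on.
    return base + quote(filename, safe="")

print(file_url("http://backend.right.bike/images/models/Lapierre/",
               "Overvolt HT.png"))
# -> http://backend.right.bike/images/models/Lapierre/Overvolt%20HT.png
```

Filenames restricted to letters, digits, dot and underscore pass through unchanged, which is why only the space-containing names show the problem.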
Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Jun 12 18:31:22 2015 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 12 Jun 2015 14:31:22 -0400 Subject: [ANN] Windows nginx 1.9.2.1 Lizard Message-ID: <54ea9a3304f88ba35f3b270bb41f16da.NginxMailingListEnglish@forum.nginx.org> 19:19 12-6-2015 nginx 1.9.2.1 Lizard Based on nginx 1.9.2 (9-6-2015) with; + Openssl-1.0.1o (upgraded 12-6-2015) + Naxsi WAF v0.53-3 (upgraded 12-6-2015) + Openssl-1.0.1n (CVE-2015-4000, CVE-2015-1788, CVE-2015-1789, CVE-2015-1790, CVE-2015-1792, CVE-2015-1791) + pcre-8.37b-r1566 (upgraded 10-6-2015, overflow fixes) + nginx-module-vts (fix for 32bit overflow counters including totals) + nginx-auth-ldap (upgraded 9-6-2015) + nginx-module-vts, fixes for 1.9.1 (upgraded 19-5-2015) + LuaJIT-2.0.4 (upgraded 18-5-2015) Tnx to Mike Pall for his hard work! + lua51.dll (upgraded 18-5-2015) DO NOT FORGET TO REPLACE THIS FILE ! + Source changes back ported + Source changes add-on's back ported + Changes for nginx_basic: Source changes back ported * Scheduled release: yes * Additional specifications: see 'Feature list' Builds can be found here: http://nginx-win.ecsds.eu/ Follow releases https://twitter.com/nginx4Windows Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259589,259589#msg-259589 From kris at cs.ucsb.edu Fri Jun 12 19:28:33 2015 From: kris at cs.ucsb.edu (kristian kvilekval) Date: Fri, 12 Jun 2015 12:28:33 -0700 Subject: config parsing (fastouter) Message-ID: I am trying to configure fastrouter through environment variable and running into trouble. 1. A blank loop still seems to run.. expect that no subscription would take place? [uwsgi] ... fastrouter_keys= fastrouter_ip= fastrouter_port= # Subscribe this instance to a fastrouter for=%(fastrouter_keys) subscribe-to=%(fastrouter_ip):%(fastrouter_port):%(_) endfor= and the log ... 
subscribing to :: send_subscription()/sendto(): Invalid argument [core/subscription.c line 665] send_subscription()/sendto(): Invalid argument [core/subscription.c line 665] send_subscription()/sendto(): Invalid argument [core/subscription.c line 665] 2. A list of values is treated single? export FASTROUTER_KEYS="a b c" [uwsgi] fastrouter_keys=$(FASTROUTER_KEYS) fastrouter_ip=... fastrouter_port=... # Subscribe this instance to a fastrouter for=%(fastrouter_keys) subscribe-to=%(fastrouter_ip):%(fastrouter_port):%(_) endfor= And logs on fastrouter [uwsgi-subscription for pid 5] new pool: a b c (hash key: 3007) fastrouter_1 | [uwsgi-subscription for pid 5] a b c => new node: 172.17.1.37:56481 I was expecting to see three separate subscribes. Any help appreciated. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jun 12 23:13:08 2015 From: nginx-forum at nginx.us (iamnginxer) Date: Fri, 12 Jun 2015 19:13:08 -0400 Subject: error_page at http context Message-ID: <55432a787cd31109877856a1336569d0.NginxMailingListEnglish@forum.nginx.org> I can't get error_page override default error pages when using it in HTTP context (rather than server {} / location {}). Can someone share a real-world working example? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259592,259592#msg-259592 From abhi at littlewiki.in Sat Jun 13 18:33:47 2015 From: abhi at littlewiki.in (Abhi) Date: Sun, 14 Jun 2015 00:03:47 +0530 Subject: Help secure my location block Message-ID: <557C778B.7000103@littlewiki.in> I have files that are served by the backend web app at |/xxx/File?file=yyy.png|. These files are stored at |/storage/files| on the server. So, I wrote a location block to serve these files from storage directly from the web server. 
Here is my first take: |location /xxx/File { if ($request_method = POST ) { proxy_pass http://backend; } alias /storage/files/; try_files $arg_file =404; } | The issue is I can do something like |/xxx/File?file=../../etc/foo.bar| and nginx will serve the foo.bar file for me. So, I switched to the following: |location /xxx/File { if ($request_method = POST ) { proxy_pass http://backend; } if ($arg_file ~ \.\.) { return 403; } alias /storage/files/$arg_file; } | Can someone point me to any corner cases that can be exploited and what is the best practice for situations like these? -- Abhi -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Jun 14 00:53:54 2015 From: nginx-forum at nginx.us (huakaibird) Date: Sat, 13 Jun 2015 20:53:54 -0400 Subject: Re: Reply: Reply: nginx plus with ssl on TCP load balance not work In-Reply-To: <001201d0a454$f6169e70$e243db50$@zoom.us> References: <001201d0a454$f6169e70$e243db50$@zoom.us> Message-ID: Any help on this? not working Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259522,259599#msg-259599 From agentzh at gmail.com Sun Jun 14 02:54:39 2015 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sun, 14 Jun 2015 10:54:39 +0800 Subject: Nginx LUA In-Reply-To: References: Message-ID: Hello! On Sun, Jun 7, 2015 at 10:41 PM, nginxsantos wrote: > Can anyone please help me with a lua configuration which I can embed into > nginx.conf to send the following separately in the access log. > > user_agent_os > user_agent_browser > user_agent_version > > At present all these fields are embedded in http_user_agent and I am writing a > parser to parse at the receiver end. I am looking for some input where I can > send them separately as different fields from Nginx itself. > You need to parse the User-Agent header value yourself in Lua with regexes or something.
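As a rough sketch of what that could look like (this assumes the third-party ngx_lua module is compiled in; the variable names and the deliberately naive patterns are invented for illustration, not a tested parser):

```nginx
http {
    log_format split_ua '$remote_addr [$time_local] "$request" '
                        '"$ua_os" "$ua_browser" "$ua_version"';

    server {
        listen 80;
        access_log logs/access.log split_ua;

        # Each set_by_lua runs per request; these Lua string patterns
        # cover only a couple of browsers and platforms for brevity.
        set_by_lua $ua_os '
            local ua = ngx.var.http_user_agent or ""
            return ua:match("Windows NT [%d%.]+") or ua:match("Linux")
                or ua:match("Mac OS X [%d_%.]+") or "-"
        ';
        set_by_lua $ua_browser '
            local ua = ngx.var.http_user_agent or ""
            return ua:match("(Firefox)/") or ua:match("(Chrome)/") or "-"
        ';
        set_by_lua $ua_version '
            local ua = ngx.var.http_user_agent or ""
            return ua:match("Firefox/([%d%.]+)")
                or ua:match("Chrome/([%d%.]+)") or "-"
        ';

        location / { return 200; }
    }
}
```

Real user-agent sniffing needs far more patterns than this, which is why a maintained library or an offline log parser is usually the better choice.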
Not sure if there's a ready-to-use 3rd-party Lua library that can already parse that for you. BTW, you're recommended to post ngx_lua related questions to the openresty-en mailing list instead. Please see https://openresty.org/#Community for more details. Thanks. Best regards, -agentzh From arut at nginx.com Sun Jun 14 10:18:37 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Sun, 14 Jun 2015 13:18:37 +0300 Subject: nginx plus with ssl on TCP load balance not work In-Reply-To: References: <001201d0a454$f6169e70$e243db50$@zoom.us> Message-ID: <5D666E63-159F-4A97-99B1-5D02549BEC44@nginx.com> If you proxy http with a tcp proxy to an http backend, and receive the 302 code, then IMHO you should look for problems in your http backend. On 14 Jun 2015, at 03:53, huakaibird wrote: > Any help on this? not working > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259522,259599#msg-259599 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Roman Arutyunyan From nginx-forum at nginx.us Sun Jun 14 11:21:42 2015 From: nginx-forum at nginx.us (Replace) Date: Sun, 14 Jun 2015 07:21:42 -0400 Subject: Dynamic configuration Message-ID: Hi there, these days I reinstalled my Windows and ... discovered hot water, with Vagrant. I really like this stuff: I installed Ubuntu with nginx, php5-fpm and many other things, but I'm still missing a basic configuration. In my project directory I have many projects - symfony2, wordpress, my own framework, a facebook app and simple test files. With my current nginx configuration these don't actually work, because (I think) of the root directory. I want every project to open in a separate subdirectory (ex: /localhost/projects/wordpress1, or /localhost/development/symfony2/web/).
Is it possible with nginx? :S Can you give me some example configuration like this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259602,259602#msg-259602 From nginx-forum at nginx.us Sun Jun 14 13:52:26 2015 From: nginx-forum at nginx.us (Floris) Date: Sun, 14 Jun 2015 09:52:26 -0400 Subject: sendfile_max_chunk breaking unbuffered php-fcgi Message-ID: <44833ed19b6fdf803980322073176e62.NginxMailingListEnglish@forum.nginx.org> Hi, I was having the problem that if a single client on the local LAN is downloading a large static file, the download effectively monopolizes nginx, and no other requests are handled simultaneously. Reading the manual I came across the sendfile_max_chunk option that sounded like it might fix it: == Syntax: sendfile_max_chunk size; Default: sendfile_max_chunk 0; Context: http, server, location When set to a non-zero value, limits the amount of data that can be transferred in a single sendfile() call. Without the limit, one fast connection may seize the worker process entirely. == However I noticed that if I enable that, PHP scripts running without buffering suddenly no longer work properly. nginx.conf: == events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; server { listen 80; server_name $hostname; sendfile on; sendfile_max_chunk 8192; root /var/www; location / { index index.php index.html index.htm; } location ~ \.php$ { try_files $uri =404; fastcgi_buffering off; fastcgi_pass unix:/var/run/php-fpm.sock; include fastcgi.conf; } } } == t2.php for testing purposes: == Hi, We're using Nginx to serve videos on one of our Storage servers (contains mp4 videos) and due to the high amount of requests we're planning to have a separate caching Node based on fast SSD drives to serve "Hot" content in order to reduce load from Storage.
We're planning to have following method for caching : If there are exceeding 1K requests for http://storage.domain.com/test.mp4 , nginx should construct a Redirect URL for rest of the requests related to test.mp4 i.e http://cache.domain.com/test.mp4 and entertain the rest of requests for test.mp4 from Caching Node while long tail would still be served from storage. So, can we achieve this approach with nginx or other like varnish ? Thanks in advance. Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Sun Jun 14 21:20:56 2015 From: steve at greengecko.co.nz (steve) Date: Mon, 15 Jun 2015 09:20:56 +1200 Subject: Redirect on specific threshold !! In-Reply-To: References: Message-ID: <557DF038.9050909@greengecko.co.nz> Hi, On 15/06/15 05:12, shahzaib shahzaib wrote: > Hi, > > We're using Nginx to serve videos on one of our Storage > server(contains mp4 videos) and due to high amount of requests we're > planning to have a separate caching Node based on Fast SSD drives to > serve "Hot" content in order to reduce load from Storage. We're > planning to have following method for caching : > > If there are exceeding 1K requests for > http://storage.domain.com/test.mp4 , nginx should construct a > Redirect URL for rest of the requests related to test.mp4 i.e > http://cache.domain.com/test.mp4 and entertain the rest of requests > for test.mp4 from Caching Node while long tail would still be served > from storage. > > So, can we achieve this approach with nginx or other like varnish ? > > Thanks in advance. > > Regards. > Shahzaib > > > > On the assumption that you're hosting with a linux infrastructure, this is simply done by just adding more memory! By default any spare memory will be used to cache common files. If you want more control over it, then set your cache area up on a tmpfs backed partition. However, you'll then have to manage what you cache yourself. 
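A sketch of the tmpfs suggestion above; the size, mode and mount point are placeholders, and note that a tmpfs cache is emptied on every reboot:

```
# one-off:
#   mount -t tmpfs -o size=4g,mode=0755 tmpfs /var/cache/nginx
# or persistently, in /etc/fstab:
tmpfs  /var/cache/nginx  tmpfs  rw,size=4g,mode=0755  0  0
```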
With setups like this, it's normally the bandwidth of the network that becomes the bottleneck. Maybe a bit of round-robin DNS would help with this? Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From ryd994 at 163.com Mon Jun 15 03:55:43 2015 From: ryd994 at 163.com (ryd994) Date: Mon, 15 Jun 2015 03:55:43 +0000 Subject: Nginx LUA In-Reply-To: References: Message-ID: I would suggest using some log parser. You can find some by searching. Those "for Apache" ones would also work. If you know some python or shell scripting, writing one yourself won't take more than a few minutes. Analyzing the access log on the fly just doesn't make much sense. On Sun, Jun 7, 2015 at 10:41 AM nginxsantos wrote: > Can anyone please help me with a lua configuration which I can embed > into > nginx.conf to send the following separately in the access log. > > user_agent_os > user_agent_browser > user_agent_version > > At present all these fields are embedded in http_user_agent and I am > writing a > parser to parse at the receiver end. I am looking for some input where I > can > send them separately as different fields from Nginx itself. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,259422,259422#msg-259422 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryd994 at 163.com Mon Jun 15 04:07:44 2015 From: ryd994 at 163.com (ryd994) Date: Mon, 15 Jun 2015 04:07:44 +0000 Subject: Dynamic configuration In-Reply-To: References: Message-ID: IMHO, it's too much to explain in a few lines. Did you read http://nginx.org/en/docs/beginners_guide.html ? Then you might want to read about the rewrite module, fcgi and php, and configure your application to output correct links or use the subs module.
If you know how to use the hosts file, using it with an nginx virtual site is a good idea for testing. If all you need is a working environment, asking a friend who knows nginx to spend an hour for you will be much easier. Sorry I can't just "write a config". On Sun, Jun 14, 2015 at 7:21 AM Replace wrote: > Hi there, these days I reinstalled my Windows and ... discovered hot water, with > Vagrant. I really like this stuff: I installed Ubuntu with nginx, > php5-fpm and many other things, but I'm still missing a basic configuration. > In my project directory I have many projects - symfony2, wordpress, my own > framework, a facebook app and simple test files. With my current nginx configuration, > these don't actually work, because (I think) of the root directory. > I want every project to open in a separate subdirectory (ex: > /localhost/projects/wordpress1, or /localhost/development/symfony2/web/). > Is > it possible with nginx? :S > Can you give me some example configuration like this? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,259602,259602#msg-259602 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryd994 at 163.com Mon Jun 15 04:13:11 2015 From: ryd994 at 163.com (ryd994) Date: Mon, 15 Jun 2015 04:13:11 +0000 Subject: Redirect on specific threshold !! In-Reply-To: References: Message-ID: Does an nginx reverse proxy with cache fit your need? Client -> Caching server (with SSD and nginx proxy cache configured) -> Storage server(s) (Slow) You can add even more storage servers by utilizing the nginx upstream module.
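The layout suggested here could look roughly like the following minimal sketch. The hostnames, paths and sizes are placeholders, and proxy_cache_min_uses is the closest built-in to the "only cache after 1K requests" idea from the original post:

```nginx
http {
    # Cache area on the SSD node; an entry is created only after a URI
    # has been requested proxy_cache_min_uses times.
    proxy_cache_path /ssd/nginx-cache levels=1:2 keys_zone=hot:100m
                     max_size=900g inactive=7d;

    upstream storage {
        server storage1.example.com;
        server storage2.example.com;
    }

    server {
        listen 80;
        server_name cache.example.com;

        location / {
            proxy_pass http://storage;
            proxy_cache hot;
            proxy_cache_min_uses 1000;
            proxy_cache_valid 200 206 1d;
        }
    }
}
```

With this shape there is no redirect step at all: clients always hit the caching node, cold files are streamed through from storage, and hot files are answered from the SSD cache.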
On Sun, Jun 14, 2015 at 1:12 PM shahzaib shahzaib wrote: > Hi, > > We're using Nginx to serve videos on one of our Storage server(contains > mp4 videos) and due to high amount of requests we're planning to have a > separate caching Node based on Fast SSD drives to serve "Hot" content in > order to reduce load from Storage. We're planning to have following method > for caching : > > If there are exceeding 1K requests for http://storage.domain.com/test.mp4 > , nginx should construct a Redirect URL for rest of the requests related > to test.mp4 i.e http://cache.domain.com/test.mp4 and entertain the rest > of requests for test.mp4 from Caching Node while long tail would still be > served from storage. > > So, can we achieve this approach with nginx or other like varnish ? > > Thanks in advance. > > Regards. > Shahzaib > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryd994 at 163.com Mon Jun 15 04:21:05 2015 From: ryd994 at 163.com (ryd994) Date: Mon, 15 Jun 2015 04:21:05 +0000 Subject: =?UTF-8?B?UmU6IOetlOWkjTog562U5aSNOiBuZ2lueCBwbHVzIHdpdGggc3NsIG9uIFRDUCBs?= =?UTF-8?B?b2FkIGJhbGFuY2Ugbm90IHdvcms=?= In-Reply-To: References: <001201d0a454$f6169e70$e243db50$@zoom.us> Message-ID: As long as you get something from your backend, I don't see anything could be wrong on the proxy. Backend get the connection, that's all. I guess it was because your backend get frustrated for all incoming requests being from frontend. HTTP proxy can pass client IP info by "X-Forwarded-For". That doesn't work for TCP proxy. For testing, I suggest you to start with some basic http service on your backend. On Sat, Jun 13, 2015 at 8:53 PM huakaibird wrote: > Any help on this? 
not working > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,259522,259599#msg-259599 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smith.hua at zoom.us Mon Jun 15 05:33:37 2015 From: smith.hua at zoom.us (smith) Date: Mon, 15 Jun 2015 05:33:37 -0000 Subject: Reply: Reply: Reply: nginx plus with ssl on TCP load balance not work In-Reply-To: References: <001201d0a454$f6169e70$e243db50$@zoom.us> Message-ID: <017701d0a72c$d6515b40$82f411c0$@zoom.us> Thanks, does this mean a TCP proxy is not suitable for web usage, since web usage may involve many redirects? I'm going to use an http/https proxy instead From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of ryd994 Sent: 15 June 2015 4:21 To: nginx at nginx.org Subject: Re: Reply: Reply: nginx plus with ssl on TCP load balance not work As long as you get something from your backend, I don't see anything could be wrong on the proxy. Backend get the connection, that's all. I guess it was because your backend get frustrated for all incoming requests being from frontend. HTTP proxy can pass client IP info by "X-Forwarded-For". That doesn't work for TCP proxy. For testing, I suggest you to start with some basic http service on your backend. On Sat, Jun 13, 2015 at 8:53 PM huakaibird wrote: Any help on this? not working Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259522,259599#msg-259599 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Mon Jun 15 06:29:59 2015 From: nginx-forum at nginx.us (mex) Date: Mon, 15 Jun 2015 02:29:59 -0400 Subject: TCP-Loadbalancer and allow/deny Message-ID: <0deac393c4e144dc3d1643788e948b86.NginxMailingListEnglish@forum.nginx.org> Hello, happily testing the stream{} - feature and loadbalancing-mechanism with nginx 1.9 and it works very smoothly; looks like we can use nginx as http-lb as well as tcp-lb in production very soon; thank you, nginx-team! is there something like allow/deny planned for the stream {} - method? http://nginx.org/en/docs/http/ngx_http_access_module.html#allow atm we use a packetfilter, but having this feature in nginx - stream {} would be a great addition. thanx in advance, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259613,259613#msg-259613 From nginx-forum at nginx.us Mon Jun 15 06:36:10 2015 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 15 Jun 2015 02:36:10 -0400 Subject: TCP-Loadbalancer and allow/deny In-Reply-To: <0deac393c4e144dc3d1643788e948b86.NginxMailingListEnglish@forum.nginx.org> References: <0deac393c4e144dc3d1643788e948b86.NginxMailingListEnglish@forum.nginx.org> Message-ID: It's recently been ported over, http://hg.nginx.org/nginx/rev/8807a2369b1a Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259613,259615#msg-259615 From nginx-forum at nginx.us Mon Jun 15 07:05:46 2015 From: nginx-forum at nginx.us (mex) Date: Mon, 15 Jun 2015 03:05:46 -0400 Subject: TCP-Loadbalancer and allow/deny In-Reply-To: References: <0deac393c4e144dc3d1643788e948b86.NginxMailingListEnglish@forum.nginx.org> Message-ID: thank you very much, looks promising! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259613,259617#msg-259617 From shahzaib.cb at gmail.com Mon Jun 15 07:07:18 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 15 Jun 2015 12:07:18 +0500 Subject: Redirect on specific threshold !! In-Reply-To: References: Message-ID: Hi, Thanks for the help guys.
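For reference, the allow/deny directives ported to the stream module in the changeset itpp2012 links (released in nginx 1.9.2) follow the same syntax as the http access module. A minimal sketch only — the network, port, and backend name below are hypothetical:

```nginx
stream {
    server {
        listen 3306;

        # same semantics as in the http access module:
        # rules are checked in order, first match wins
        allow 192.168.0.0/24;
        deny  all;

        proxy_pass db.example.com:3306;
    }
}
```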
Regarding @ryd994's suggestion, the reason we don't want to deploy this structure is that the caching node would have to respond to each client's request, and even if it only proxies most of the requests (without caching them), high I/O would still be required to serve the big proxied files (700MB mp4) to the user; that way the caching node would eventually become the bottleneck between user and storage node, wouldn't it? @steve thanks for the tmpfs point, but we're using a caching node with 1TB+ of SSD storage and prefer an SSD cache over RAM (RAM is faster but not as big as the SSDs). Using a redirect URL would, we believe, point only specific requests towards the caching node, and then this node would fetch the requested file using proxy_cache. Regards. Shahzaib. On Mon, Jun 15, 2015 at 9:13 AM, ryd994 wrote: > Does a nginx reverse proxy with cache fit you need? > > Client -> Caching server (with SSD and nginx proxy cache configured) -> > Storage server(s) (Slow) > > You can add even more storage server by utilizing nginx upstream module. > > On Sun, Jun 14, 2015 at 1:12 PM shahzaib shahzaib > wrote: > >> Hi, >> >> We're using Nginx to serve videos on one of our Storage >> server(contains mp4 videos) and due to high amount of requests we're >> planning to have a separate caching Node based on Fast SSD drives to serve >> "Hot" content in order to reduce load from Storage. We're planning to have >> following method for caching : >> >> If there are exceeding 1K requests for http://storage.domain.com/test.mp4 >> , nginx should construct a Redirect URL for rest of the requests related >> to test.mp4 i.e http://cache.domain.com/test.mp4 and entertain the rest >> of requests for test.mp4 from Caching Node while long tail would still be >> served from storage. >> >> So, can we achieve this approach with nginx or other like varnish ? >> >> Thanks in advance. >> >> Regards.
>> Shahzaib >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Mon Jun 15 09:50:09 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 15 Jun 2015 12:50:09 +0300 Subject: nginx plus with ssl on TCP load balance not work In-Reply-To: <017701d0a72c$d6515b40$82f411c0$@zoom.us> References: <001201d0a454$f6169e70$e243db50$@zoom.us> <017701d0a72c$d6515b40$82f411c0$@zoom.us> Message-ID: <18E1B4E6-F30D-4224-B943-59269CF6B036@nginx.com> Redirect usage is not related to the TCP proxy. Please search for problems in your backend. A TCP proxy can be used for proxying any protocol unless you need to change the data bytes (alter HTTP headers, change the method, etc.) > On 15 Jun 2015, at 08:33, smith wrote: > > Thanks; does this mean a TCP proxy is not suitable for web usage, since web usage may involve many redirects? > > I'm going to use an http/https proxy instead > > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of ryd994 > Sent: 15 June 2015 4:21 > To: nginx at nginx.org > Subject: Re: Re: Re: nginx plus with ssl on TCP load balance not work > > As long as you get something from your backend, I don't see how anything could be wrong on the proxy side. The backend gets the connection, and that's all.
not working >> >> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259522,259599#msg-259599 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From smith.hua at zoom.us Mon Jun 15 09:53:25 2015 From: smith.hua at zoom.us (smith) Date: Mon, 15 Jun 2015 09:53:25 -0000 Subject: Re: nginx plus with ssl on TCP load balance not work In-Reply-To: <18E1B4E6-F30D-4224-B943-59269CF6B036@nginx.com> References: <001201d0a454$f6169e70$e243db50$@zoom.us> <017701d0a72c$d6515b40$82f411c0$@zoom.us> <18E1B4E6-F30D-4224-B943-59269CF6B036@nginx.com> Message-ID: <01d801d0a751$220f6c10$662e4430$@zoom.us> My backend works normally under the nginx http/https load balancer, and also works under the Amazon Elastic Load Balancer. -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Roman Arutyunyan Sent: 15 June 2015 9:50 To: nginx at nginx.org Subject: Re: nginx plus with ssl on TCP load balance not work Redirect usage is not related to the TCP proxy. Please search for problems in your backend. A TCP proxy can be used for proxying any protocol unless you need to change the data bytes (alter HTTP headers, change the method, etc.) > On 15 Jun 2015, at 08:33, smith wrote: > > Thanks; does this mean a TCP proxy is not suitable for web usage, since web usage may involve many redirects? > > I'm going to use an http/https proxy instead > > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of > ryd994 > Sent: 15 June 2015 4:21 > To: nginx at nginx.org > Subject: Re: Re: Re: nginx plus with ssl on TCP load balance not work > > As long as you get something from your backend, I don't see how anything could be wrong on the proxy side. The backend gets the connection, and that's all.
> > I guess your backend gets confused because all incoming requests appear to come from the frontend. An HTTP proxy can pass the client IP in "X-Forwarded-For"; that doesn't work for a TCP proxy. > > For testing, I suggest you start with some basic HTTP service on your backend. > > On Sat, Jun 13, 2015 at 8:53 PM huakaibird wrote: >> Any help on this? not working >> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,259522,259599#msg-259599 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Jun 15 09:59:30 2015 From: nginx-forum at nginx.us (ajjH6) Date: Mon, 15 Jun 2015 05:59:30 -0400 Subject: Deploying newly compiled nginx from test server to production Message-ID: <4138318be071de0395862ae71660d192.NginxMailingListEnglish@forum.nginx.org> Hello, What is a good method for deploying a newly compiled nginx binary with an extra module (mod_security)? I can get it all to compile OK. However, I do not want to compile on my production server. There are too many dependencies (i.e. HTTPD for mod_sec). In the case of mod_security, it seems only the Apache Portable Runtime (apr-util) is required if I manually move the binary over. I tried building my own RPM but hit some issues. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259622,259622#msg-259622 From vbart at nginx.com Mon Jun 15 10:39:30 2015 From: vbart at nginx.com (Valentin V.
Bartenev) Date: Mon, 15 Jun 2015 13:39:30 +0300 Subject: sendfile_max_chunk breaking unbufferred php-fcgi In-Reply-To: <44833ed19b6fdf803980322073176e62.NginxMailingListEnglish@forum.nginx.org> References: <44833ed19b6fdf803980322073176e62.NginxMailingListEnglish@forum.nginx.org> Message-ID: <23137751.KFC3XChhbO@vbart-workstation> On Sunday 14 June 2015 09:52:26 Floris wrote: > Hi, > > I was having the problem that if a single client on the local LAN is > downloading a large static file, the download is effectively monopolizing > nginx, and no other requests are handled simultaneously. > Reading the manual I came across the sendfile_max_chunk option that sounded > like that may fix it: > > == > Syntax: sendfile_max_chunk size; > Default: sendfile_max_chunk 0; > Context: http, server, location > > When set to a non-zero value, limits the amount of data that can be > transferred in a single sendfile() call. Without the limit, one fast > connection may seize the worker process entirely. > == > > > However I noticed that if I enable that, PHP scripts running without > buffering suddenly no longer work properly. > > > nginx.conf: > > == > events { > worker_connections 1024; > } > > http { > include mime.types; > default_type application/octet-stream; > > server { > listen 80; > server_name $hostname; > sendfile on; > sendfile_max_chunk 8192; > > root /var/www; > > location / { > index index.php index.html index.htm; > } > > location ~ \.php$ { > try_files $uri =404; > > fastcgi_buffering off; > fastcgi_pass unix:/var/run/php-fpm.sock; > include fastcgi.conf; > } > } > } > == > > > t2.php for testing purposes: > > == > > for ($i = 0; $i < 10; $i++) > { > echo "test!\n"; > flush(); > sleep(1); > } > == > > When retrieving that, the connection stalls after the first flush: > > == > $ telnet 192.168.178.26 80 > Trying 192.168.178.26... > Connected to 192.168.178.26. > Escape character is '^]'. 
> GET /t2.php HTTP/1.0 > > HTTP/1.1 200 OK > Server: nginx/1.6.3 > Date: Sun, 14 Jun 2015 13:21:53 GMT > Content-Type: text/html; charset=UTF-8 > Connection: close > X-Powered-By: PHP/5.6.9 > > test! > == > > If I remove either the "sendfile_max_chunk 8192;" or "fastcgi_buffering > off;" line it does work, and I do get all 10 test! messages: > > == > telnet 192.168.178.26 80 > Trying 192.168.178.26... > Connected to 192.168.178.26. > Escape character is '^]'. > GET /t2.php HTTP/1.0 > > HTTP/1.1 200 OK > Server: nginx/1.6.3 > Date: Sun, 14 Jun 2015 13:22:23 GMT > Content-Type: text/html; charset=UTF-8 > Connection: close > X-Powered-By: PHP/5.6.9 > > test! > test! > test! > test! > test! > test! > test! > test! > test! > test! > Connection closed by foreign host. > == > > Am I doing something wrong, or is this a bug? > Yes, it's a bug. You should set "sendfile_max_chunk 0;" in the location with unbuffered fastcgi. Or you can try the following patch: diff -r c041f1e0655f src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Wed Jun 10 19:18:20 2015 +0300 +++ b/src/http/ngx_http_upstream.c Mon Jun 15 13:32:55 2015 +0300 @@ -3303,6 +3303,9 @@ ngx_http_upstream_process_non_buffered_r downstream = r->connection; upstream = u->peer.connection; + /* workaround for sendfile_max_chunk */ + downstream->write->delayed = 0; + b = &u->buffer; do_write = do_write || u->length == 0; From vbart at nginx.com Mon Jun 15 10:45:42 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 15 Jun 2015 13:45:42 +0300 Subject: Redirect on specific threshold !! In-Reply-To: References: Message-ID: <2182722.jmPLUFHago@vbart-workstation> On Sunday 14 June 2015 22:12:37 shahzaib shahzaib wrote: > Hi, > > We're using Nginx to serve videos on one of our Storage server(contains > mp4 videos) and due to high amount of requests we're planning to have a > separate caching Node based on Fast SSD drives to serve "Hot" content in > order to reduce load from Storage. 
We're planning to have following method > for caching : > > If there are exceeding 1K requests for http://storage.domain.com/test.mp4 , > nginx should construct a Redirect URL for rest of the requests related to > test.mp4 i.e http://cache.domain.com/test.mp4 and entertain the rest of > requests for test.mp4 from Caching Node while long tail would still be > served from storage. > > So, can we achieve this approach with nginx or other like varnish ? > [..] You can use limit_conn and limit_req modules to set limits: http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html http://nginx.org/en/docs/http/ngx_http_limit_req_module.html and the error_page directive to construct the redirect. wbr, Valentin V. Bartenev From ryd994 at 163.com Mon Jun 15 12:37:12 2015 From: ryd994 at 163.com (ryd994) Date: Mon, 15 Jun 2015 12:37:12 +0000 Subject: Deploying newly compiled nginx from test server to production In-Reply-To: <4138318be071de0395862ae71660d192.NginxMailingListEnglish@forum.nginx.org> References: <4138318be071de0395862ae71660d192.NginxMailingListEnglish@forum.nginx.org> Message-ID: I would prefer RPM. Just patching the official one. RPM patching process is pretty standardized and shouldn't take more than a few minutes. Could you explain what problem you have? On Mon, Jun 15, 2015, 05:59 ajjH6 wrote: > Hello > > What is a good method for deploying a newly compiled nginx binary with an > extra module? (mod_security) > > I can get all to compile ok. However, I do not want to compile on my > production server. There are two many dependencies (ie HTTPD for mod_sec). > > In the case of mod_security, it seems only the Apache Portable Runtime > (apr-util) is required if I manually move the binary over. > > I tried building my own RPM but hit some issues. 
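Valentin's limit_req + error_page idea above could look roughly like the sketch below. This is only an illustration — the zone name, size, rate, and burst are hypothetical values, and it approximates "1K requests" as a request-rate threshold per URI rather than a lifetime hit counter (which, as discussed later in the thread, would need log processing outside nginx):

```nginx
http {
    # one shared-memory counter slot per URI; excess requests fill the burst
    limit_req_zone $uri zone=hot:10m rate=1r/m;

    server {
        listen 80;
        server_name storage.domain.com;
        root /data/storage;

        location ~ \.mp4$ {
            limit_req zone=hot burst=1000 nodelay;
            limit_req_status 429;        # rejected = "popular enough"
            error_page 429 = @to_cache;  # turn the rejection into a redirect
        }

        location @to_cache {
            return 302 http://cache.domain.com$request_uri;
        }
    }
}
```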
> > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,259622,259622#msg-259622 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jun 15 14:31:53 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Jun 2015 17:31:53 +0300 Subject: config parsing (fastouter) In-Reply-To: References: Message-ID: <20150615143153.GN26357@mdounin.ru> Hello! On Fri, Jun 12, 2015 at 12:28:33PM -0700, kristian kvilekval wrote: > I am trying to configure fastrouter through environment variable and > running into trouble. > > 1. A blank loop still seems to run.. expect that no subscription would > take place? > > [uwsgi] > ... > fastrouter_keys= > fastrouter_ip= > fastrouter_port= [...] This looks unrelated to nginx. You may have better luck asking in uWSGI mailing lists instead. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Jun 16 03:59:01 2015 From: nginx-forum at nginx.us (ajjH6) Date: Mon, 15 Jun 2015 23:59:01 -0400 Subject: Deploying newly compiled nginx from test server to production In-Reply-To: References: Message-ID: Thanks ryd994. I eventually built the RPM OK. I am attempting to build a stripped-down nginx RPM with minimal modules, but also with modsec.
I found a suggested config at - https://www.digitalocean.com/community/tutorials/how-to-compile-nginx-from-source-on-a-centos-6-4-x64-vps ./configure \ --user=nginx \ --group=nginx \ --prefix=/etc/nginx \ --sbin-path=/usr/sbin/nginx \ --conf-path=/etc/nginx/nginx.conf \ --pid-path=/var/run/nginx.pid \ --lock-path=/var/run/nginx.lock \ --error-log-path=/var/log/nginx/error.log \ --http-log-path=/var/log/nginx/access.log \ --with-http_gzip_static_module \ --with-http_stub_status_module \ --with-http_ssl_module \ --with-pcre \ --with-file-aio \ --with-http_realip_module \ --without-http_scgi_module \ --without-http_uwsgi_module \ --without-http_fastcgi_module I was able to build the RPM with the following in the nginx.spec file - %build ./configure \ --prefix=%{_sysconfdir}/nginx \ --sbin-path=%{_sbindir}/nginx \ --conf-path=%{_sysconfdir}/nginx/nginx.conf \ --error-log-path=%{_localstatedir}/log/nginx/error.log \ --http-log-path=%{_localstatedir}/log/nginx/access.log \ --pid-path=%{_localstatedir}/run/nginx.pid \ --lock-path=%{_localstatedir}/run/nginx.lock \ --user=%{nginx_user} \ --group=%{nginx_group} \ --with-http_gzip_static_module \ --with-http_stub_status_module \ --with-http_ssl_module \ --with-pcre \ --with-file-aio \ --with-http_realip_module \ --without-http_scgi_module \ --without-http_uwsgi_module \ --without-http_fastcgi_module \ %{?with_spdy:--with-http_spdy_module} \ --with-cc-opt="%{optflags} $(pcre-config --cflags)" \ --add-module=%{_builddir}/%{name}-%{version}/modsecurity-2.9.0/nginx/modsecurity $* I am unsure about a couple of lines at the bottom - %{?with_spdy:--with-http_spdy_module} \ --with-cc-opt="%{optflags} $(pcre-config --cflags)" \ The RPM installs fine (apr-devel rpm dependency).
Version outputs the following - nginx -V nginx version: nginx/1.8.0 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --user=nginx --group=nginx --with-http_gzip_static_module --with-http_stub_status_module --with-http_ssl_module --with-pcre --with-file-aio --with-http_realip_module --without-http_scgi_module --without-http_uwsgi_module --without-http_fastcgi_module --with-http_spdy_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --add-module=/home/test/rpmbuild/BUILD/nginx-1.8.0/modsecurity-2.9.0/nginx/modsecurity Basically I want a minimal nginx install to serve static files over SSL. Might you have any suggestions to improve this? I also found a separate issue when modsec is compiled - "configure: WARNING: APR util was not compiled with crypto support. SecRemoteRule will not support the parameter 'crypto'" Basically the rhel6 apr-devel rpm does not have crypto support. Trying to determine what the ramifications are here. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259622,259636#msg-259636 From nginx-forum at nginx.us Tue Jun 16 07:35:21 2015 From: nginx-forum at nginx.us (patjomkin) Date: Tue, 16 Jun 2015 03:35:21 -0400 Subject: Page with ssl doesn't open from safari Message-ID: Good afternoon. A UCC SSL certificate was bought from GoDaddy for 5 domain names. One of the sites (hosted on a separate server) doesn't open from Safari: "Safari can't open the page https://sendy.mysite.com because the server unexpectedly dropped the connection. This sometimes occurs when the server is busy.
Wait for a few minutes, and then try again." At the same time, it opens normally from all other browsers. Debug log of nginx (nginx 1.8.0, ubuntu 14.04): 2015/06/15 09:48:27 [debug] 15611#0: *6 SSL NPN advertised 2015/06/15 09:48:27 [debug] 15611#0: *6 SSL_do_handshake: -1 2015/06/15 09:48:27 [debug] 15611#0: *6 SSL_get_error: 2 2015/06/15 09:48:27 [debug] 15611#0: *6 reusable connection: 0 2015/06/15 09:48:27 [debug] 15611#0: *6 SSL handshake handler: 0 2015/06/15 09:48:30 [debug] 29320#0: *7 SSL_do_handshake: -1 2015/06/15 09:48:30 [debug] 29320#0: *7 SSL_get_error: 2 2015/06/15 09:48:30 [debug] 29320#0: *7 reusable connection: 0 2015/06/15 09:48:31 [debug] 29320#0: *7 SSL handshake handler: 0 2015/06/15 09:48:33 [debug] 29322#0: *8 SSL_do_handshake: -1 2015/06/15 09:48:33 [debug] 29322#0: *8 SSL_get_error: 2 2015/06/15 09:48:33 [debug] 29322#0: *8 reusable connection: 0 2015/06/15 09:48:33 [debug] 29322#0: *8 SSL handshake handler: 0 Config vhost: server { listen 80; server_name sendy.mysite.com; location / { rewrite ^(.*) https://sendy.mysite.com$1 permanent; } } server { listen 443; server_name sendy.mysite.com; ssl on; ssl_certificate /etc/nginx/ssl/www.mysite2.com.crt; ssl_certificate_key /etc/nginx/ssl/www.mysite2.com.key; index index.php index.html; root /home/ubuntu/sendy; access_log /var/log/nginx/sendy.access.log; error_log /var/log/nginx/sendy.error.log debug; proxy_buffers 8 32k; proxy_buffer_size 64k; fastcgi_buffers 16 16k; fastcgi_buffer_size 32k; location = / { index index.php; } location / { if (!-f $request_filename){ rewrite ^/([a-zA-Z0-9-]+)$ /$1.php last;} } location /l/ { rewrite ^/l/([a-zA-Z0-9/]+)$ /l.php?i=$1 last; } location /t/ { rewrite ^/t/([a-zA-Z0-9/]+)$ /t.php?i=$1 last; } location /w/ { rewrite ^/w/([a-zA-Z0-9/]+)$ /w.php?i=$1 last; } location /unsubscribe/ { rewrite ^/unsubscribe/(.*)$ /unsubscribe.php?i=$1 last; } location /subscribe/ { rewrite ^/subscribe/(.*)$ /subscribe.php?i=$1 last; } location ~*
\.(ico|css|js|gif|jpe?g|png)(\?[0-9]+)?$ { expires max; log_not_found off; } location ~ \.php { fastcgi_index index.php; include fastcgi_params; keepalive_timeout 0; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass unix:/var/run/php5-fpm.sock; } } All other sites/domains are located on another server with the same UCC certificate, and they open in Safari without any problems. What is the cause of the problem? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259639,259639#msg-259639 From mdounin at mdounin.ru Tue Jun 16 15:26:58 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Jun 2015 18:26:58 +0300 Subject: nginx-1.9.2 Message-ID: <20150616152658.GX26357@mdounin.ru> Changes with nginx 1.9.2 16 Jun 2015 *) Feature: the "backlog" parameter of the "listen" directives of the mail proxy and stream modules. *) Feature: the "allow" and "deny" directives in the stream module. *) Feature: the "proxy_bind" directive in the stream module. *) Feature: the "proxy_protocol" directive in the stream module. *) Feature: the -T switch. *) Feature: the REQUEST_SCHEME parameter added to the fastcgi.conf, fastcgi_params, scgi_params, and uwsgi_params standard configuration files. *) Bugfix: the "reuseport" parameter of the "listen" directive of the stream module did not work. *) Bugfix: OCSP stapling might return an expired OCSP response in some cases. -- Maxim Dounin http://nginx.org/ From ryd994 at 163.com Tue Jun 16 14:42:27 2015 From: ryd994 at 163.com (ryd994) Date: Tue, 16 Jun 2015 14:42:27 +0000 Subject: Deploying newly compiled nginx from test server to production In-Reply-To: References: Message-ID: Congratulations on getting the RPM built. If you search the spec file for "with_spdy", you should find that tag (think of it as a variable) nearby. I'm not quite sure about --with-cc-opt="%{optflags} $(pcre-config --cflags)". It seems to be there to keep nginx compiled with the same options as the pcre lib.
If all you need is a static file server, I guess you can remove the following: --with-http_gzip_static_module (pre-compressed files) --with-http_stub_status_module (stub page might be used by some monitoring tools, like longview) --with-http_realip_module (parse X-Forwarded-For) --with-pcre (regex) If you want to strip out more, you can try adding --without-* options: http://wiki.nginx.org/Modules The wiki is somewhat outdated; if you get invalid options, that module might already be excluded from the default build, so don't worry about that. Also, you can always recompile if you cut off too much; it shouldn't take long. I never used modsec before, so I can't help with the APR issue. Maybe you should rebuild and install apr first. Regards, On Mon, Jun 15, 2015 at 11:59 PM ajjH6 wrote: > Thanks ryd994. > > I eventually build the RPM ok. > > I am attempting to build a stripped down nginx RPM with minimal modules, > but > also with modsec. I found a suggested config at - > > > https://www.digitalocean.com/community/tutorials/how-to-compile-nginx-from-source-on-a-centos-6-4-x64-vps > > ./configure \ > --user=nginx \ > --group=nginx \ > --prefix=/etc/nginx \ > --sbin-path=/usr/sbin/nginx \ > --conf-path=/etc/nginx/nginx.conf \ > --pid-path=/var/run/nginx.pid \ > --lock-path=/var/run/nginx.lock \ > --error-log-path=/var/log/nginx/error.log \ > --http-log-path=/var/log/nginx/access.log \ >
--pid-path=%{_localstatedir}/run/nginx.pid \ > --lock-path=%{_localstatedir}/run/nginx.lock \ > --user=%{nginx_user} \ > --group=%{nginx_group} \ > --with-http_gzip_static_module \ > --with-http_stub_status_module \ > --with-http_ssl_module \ > --with-pcre \ > --with-file-aio \ > --with-http_realip_module \ > --without-http_scgi_module \ > --without-http_uwsgi_module \ > --without-http_fastcgi_module \ > %{?with_spdy:--with-http_spdy_module} \ > --with-cc-opt="%{optflags} $(pcre-config --cflags)" \ > > > --add-module=%{_builddir}/%{name}-%{version}/modsecurity-2.9.0/nginx/modsecurity > $* > > > I am unsure on the a couple of lines at the bottom - > > %{?with_spdy:--with-http_spdy_module} \ > --with-cc-opt="%{optflags} $(pcre-config --cflags)" \ > > > The RPM installs fine (apr-devel rpm dependency). > > Version outputs the following - > > nginx -V > nginx version: nginx/1.8.0 > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) > built with OpenSSL 1.0.1e-fips 11 Feb 2013 > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > --lock-path=/var/run/nginx.lock --user=nginx --group=nginx > --with-http_gzip_static_module --with-http_stub_status_module > --with-http_ssl_module --with-pcre --with-file-aio > --with-http_realip_module > --without-http_scgi_module --without-http_uwsgi_module > --without-http_fastcgi_module --with-http_spdy_module --with-cc-opt='-O2 -g > -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m64 -mtune=generic' > > --add-module=/home/test/rpmbuild/BUILD/nginx-1.8.0/modsecurity-2.9.0/nginx/modsecurity > > > Basically I want a minimal nginx install to serve static files over SSL. > Might you have any suggestions to improve this? 
> > > I also found a separate issue which I discovered when modsec is compiled - > > "configure: WARNING: APR util was not compiled with crypto support. > SecRemoteRule will not support the parameter 'crypto'" > > Basically the rhel6 apr-devel rpm does not have crypto support. Trying to > determine what are the ramifications are here. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,259622,259636#msg-259636 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Tue Jun 16 18:27:06 2015 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 16 Jun 2015 14:27:06 -0400 Subject: nginx-1.9.2 In-Reply-To: <20150616152658.GX26357@mdounin.ru> References: <20150616152658.GX26357@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.9.2 for Windows http://goo.gl/UjOXx8 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Jun 16, 2015 at 11:26 AM, Maxim Dounin wrote: > Changes with nginx 1.9.2 16 Jun > 2015 > > *) Feature: the "backlog" parameter of the "listen" directives of the > mail proxy and stream modules. > > *) Feature: the "allow" and "deny" directives in the stream module. > > *) Feature: the "proxy_bind" directive in the stream module. > > *) Feature: the "proxy_protocol" directive in the stream module. > > *) Feature: the -T switch. 
> > *) Feature: the REQUEST_SCHEME parameter added to the fastcgi.conf, > fastcgi_params, scgi_params, and uwsgi_params standard configuration > files. > > *) Bugfix: the "reuseport" parameter of the "listen" directive of the > stream module did not work. > > *) Bugfix: OCSP stapling might return an expired OCSP response in some > cases. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Jun 16 20:47:32 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 16 Jun 2015 21:47:32 +0100 Subject: error_page at http context In-Reply-To: <55432a787cd31109877856a1336569d0.NginxMailingListEnglish@forum.nginx.org> References: <55432a787cd31109877856a1336569d0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150616204732.GB23844@daoine.org> On Fri, Jun 12, 2015 at 07:13:08PM -0400, iamnginxer wrote: Hi there, > I can't get error_page override default error pages when using it in HTTP > context (rather than server {} / location {}). What did you try? > Can someone share a real-world working example? http://nginx.org/r/error_page == http { error_page 404 /404.html; server { listen 8080; } } == $ echo my-404 > html/404.html $ curl http://127.0.0.1:8080/not-there my-404 f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Jun 16 21:30:30 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 16 Jun 2015 22:30:30 +0100 Subject: Redirect on specific threshold !! In-Reply-To: <2182722.jmPLUFHago@vbart-workstation> References: <2182722.jmPLUFHago@vbart-workstation> Message-ID: <20150616213030.GC23844@daoine.org> On Mon, Jun 15, 2015 at 01:45:42PM +0300, Valentin V. 
Bartenev wrote: > On Sunday 14 June 2015 22:12:37 shahzaib shahzaib wrote: Hi there, > > If there are exceeding 1K requests for http://storage.domain.com/test.mp4 , > > nginx should construct a Redirect URL for rest of the requests related to > > test.mp4 i.e http://cache.domain.com/test.mp4 and entertain the rest of > > requests for test.mp4 from Caching Node while long tail would still be > > served from storage. > You can use limit_conn and limit_req modules to set limits: > http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html > http://nginx.org/en/docs/http/ngx_http_limit_req_module.html > > and the error_page directive to construct the redirect. limit_conn and limit_req are the right answer if you care about concurrent requests. (For example: rate=1r/m with burst=1000 might do most of what you want, without too much work on your part.) I think you might care about historical requests, instead -- so if a URL is ever accessed 1K times, then it is "popular" and future requests should be redirected. To do that, you probably will find it simpler to do it outside of nginx, at least initially. Have something read the recent-enough log files[*], and whenever there are more than 1K requests for the same resource, add a fragment like location = /test.mp4 { return 301 http://cache.domain.com/test.mp4; } to nginx.conf (and remove similar fragments that are no longer currently popular-enough, if appropriate), and do a no-downtime config reload. You can probably come up with a module or a code config that does the same thing, but I think it would take me longer to do that. [*] or accesses the statistics by a method of your choice f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Jun 17 09:14:36 2015 From: nginx-forum at nginx.us (Replace) Date: Wed, 17 Jun 2015 05:14:36 -0400 Subject: Dynamic configuration In-Reply-To: References: Message-ID: Thanks for the reply @ryd994.
I've read the documentation and am still reading, but did not find a way :) Never mind, I decided every project will have its own server block, for easier debugging. That's also nice, but now I have a small problem moving some old projects with very large .htaccess files :( Thanks again! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259602,259679#msg-259679 From senko.rasic at gmail.com Wed Jun 17 09:25:33 2015 From: senko.rasic at gmail.com (Senko Rasic) Date: Wed, 17 Jun 2015 11:25:33 +0200 Subject: Writing a new auth module - request for comments Message-ID: Hi, I'm writing a new module (out-of-tree) for supporting authentication using Stormpath's user management API (https://stormpath.com/). Basically, the module makes one or more HTTP requests to the Stormpath API to determine if the client request should be authorized to access a location or not. Since this is somewhat different from other modules I could learn from, and since all my knowledge about nginx internals is from looking at how other modules & the core are written, I'm wondering if anyone could comment on how I designed the module and raise any issues if I did anything problematic, wrong or weird. For reference, the work-in-progress code for the module is available here: https://github.com/stormpath/stormpath-nginx-module Since I have to contact the external API I'm using the upstream module to do it. But I don't want the users (admins) to have to define an upstream block in nginx.conf, so my module creates and configures an upstream configuration internally instead. https://github.com/stormpath/stormpath-nginx-module/blob/master/src/ngx_http_auth_stormpath_module.c#L864 I haven't seen any other module do that, but I don't see that it's possible to avoid users having to define the upstream manually otherwise. For the above reasons (wanting to handle everything invisibly to the user), I'm not using nginx_http_proxy_module, but implement the upstream handler (create_request & friends) myself.
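For context, the manual configuration Senko wants to spare users (a named upstream block plus a proxied internal location) would look roughly like this; all names and the backend address are illustrative, not part of his module:

```nginx
upstream stormpath_api {
    server api.stormpath.com:443;
    keepalive 8;
}

server {
    location = /_stormpath_check {
        internal;
        proxy_pass https://stormpath_api;
    }
}
```

Building this upstream configuration programmatically inside the module removes these two blocks from the admin's nginx.conf, at the cost of reimplementing parts of the proxy machinery, as the rest of the post discusses.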
But since I have to construct an HTTP request, parse the status line, parse headers, and parse the body (e.g. if it's chunked transfer-encoding), I end up duplicating a lot of functionality already in http proxy (although greatly simplified, because I know exactly how to talk to the upstream server and what to expect in return). One example is that I parse the headers manually, because I haven't found a way to init the http_upstream header parser hash and to reuse the parser (originally the init is done in ngx_http_upstream_init_main_conf). https://github.com/stormpath/stormpath-nginx-module/blob/master/src/ngx_http_auth_stormpath_module.c#L304 (I'll also hit similar problems with caching the requests to the upstream. I'd like to reuse the caching functionality already in nginx, but it seems to me like http_proxy_module does a lot of manual heavy lifting in that regard that I'd have to reimplement (or *shudder* copy-paste) to support it?) Does the above make sense? Is there an obvious way to do it differently that I've missed? Are there any guides or documentation on how this should be done (besides Evan Miller's obsolete-but-useful guides I went through already)? Any comments, suggestions, warnings or flames are welcome. Thanks, Senko From nginx-forum at nginx.us Wed Jun 17 10:16:15 2015 From: nginx-forum at nginx.us (Replace) Date: Wed, 17 Jun 2015 06:16:15 -0400 Subject: Move old apache project Message-ID: <407274f22062fca2338177390cd959d3.NginxMailingListEnglish@forum.nginx.org> Hi again, guys. I have another problem with my own old projects based on large .htaccess rewrite rules. Here is my configuration for one site's server block: http://pastebin.com/7JCmiaSm . Everything works fine except when opening a URL without "index.php" in it: nginx returns "Access denied", and I can't understand why.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259684,259684#msg-259684 From nginx-forum at nginx.us Wed Jun 17 10:43:14 2015 From: nginx-forum at nginx.us (Replace) Date: Wed, 17 Jun 2015 06:43:14 -0400 Subject: Move old apache project In-Reply-To: <407274f22062fca2338177390cd959d3.NginxMailingListEnglish@forum.nginx.org> References: <407274f22062fca2338177390cd959d3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2ec0d385b80d4f1e62aa3a5124b39319.NginxMailingListEnglish@forum.nginx.org> Sorry, I figured it out :) Adding these missing params to the PHP-FPM fastcgi configuration made it work like a charm: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259684,259685#msg-259685 From vbart at nginx.com Wed Jun 17 15:29:34 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 17 Jun 2015 18:29:34 +0300 Subject: Writing a new auth module - request for comments In-Reply-To: References: Message-ID: <7284840.z1S8qMRbDI@vbart-workstation> On Wednesday 17 June 2015 11:25:33 Senko Rasic wrote: > Hi, > > I'm writing a new module (out-of-tree) for supporting authentication > using Stormpath's user management API (https://stormpath.com/). > > Basically, the module makes one or more HTTP requests to the > Stormpath API to determine if the client request should be authorized > to access a location or not. > [..] Have you checked the auth_request module? See: http://nginx.org/en/docs/http/ngx_http_auth_request_module.html wbr, Valentin V. Bartenev From senko.rasic at gmail.com Thu Jun 18 07:42:07 2015 From: senko.rasic at gmail.com (Senko Rasic) Date: Thu, 18 Jun 2015 09:42:07 +0200 Subject: Writing a new auth module - request for comments In-Reply-To: <7284840.z1S8qMRbDI@vbart-workstation> References: <7284840.z1S8qMRbDI@vbart-workstation> Message-ID: Hi, thanks for your reply Valentin.
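For readers following along, the auth_request setup Valentin points to is typically wired up as below. This is a minimal sketch; the subrequest location name and backend address are illustrative, and the auth backend must answer 2xx to allow, or 401/403 to deny:

```nginx
location /private/ {
    auth_request /_auth_check;
    # ... normal content handling for the protected location ...
}

location = /_auth_check {
    internal;
    proxy_pass              http://127.0.0.1:9000/check;
    # The subrequest carries only headers, not the client body:
    proxy_pass_request_body off;
    proxy_set_header        Content-Length "";
    proxy_set_header        X-Original-URI $request_uri;
}
```

The limitations Senko raises in his reply below (fixed status-code semantics, one subrequest per client request) are visible in this shape: the module only interprets the subrequest's status, nothing else.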
I have checked the auth_request module; in fact, the module I'm writing started as modifications to auth_request. To clarify, I'm not trying to do a one-off setup for my server using the Stormpath API. The idea is to provide a module so any Stormpath user can easily integrate the two. Specifically, the reasons why auth_request wasn't enough: * It requires another location on the local server to be provided (that location can be proxied using http_proxy_module, but still has to be added) to which it'll make the requests. I wanted to avoid forcing the users to add another location block and proxy_pass directives to the external API (felt like a hack). * It requires specific semantics regarding the response (200, 401, 403 are interpreted as usual, everything else is a server error). Stormpath's API has different semantics so it wouldn't work anyway. * You can't do more than one auth request per client request. In some cases, I need two - first to authenticate the client, then to check if the user is in a specific group (and to be able to do this, I need to parse the response body). So it looks like the auth_request module would be ideal if the users provide a small authorization web service that does whatever auth logic is needed, and then responds according to auth_request semantics. If I just wanted to implement the integration for my (one) specific use-case, I'd likely do that. But the motivation for the module is to avoid forcing users to write these one-off auth services, and instead just compile in and use a module that provides this. Best, Senko On Wed, Jun 17, 2015 at 5:29 PM, Valentin V. Bartenev wrote: > On Wednesday 17 June 2015 11:25:33 Senko Rasic wrote: >> Hi, >> >> I'm writing a new module (out-of-tree) for supporting authentication >> using Stormpath's user management API (https://stormpath.com/).
>> >> Basically, the module makes one or more HTTP requests to the >> Stormpath API to determine if the client request should be authorized >> to access a location or not. >> > [..] > > Have you checked the auth_request module? > > See: http://nginx.org/en/docs/http/ngx_http_auth_request_module.html > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Senko Rasic From ahutchings at nginx.com Thu Jun 18 10:33:23 2015 From: ahutchings at nginx.com (Andrew Hutchings) Date: Thu, 18 Jun 2015 13:33:23 +0300 Subject: Writing a new auth module - request for comments In-Reply-To: References: Message-ID: <3B338000-AB72-4363-8D9D-4F9C0DB67D84@nginx.com> Hi Senko, I am Andrew Hutchings, a Developer Advocate for Nginx. Part of my job is to help the community write such modules and to aid communication between the community and Nginx's internal teams. It is worth noting that at some point in the 1.9.x mainline release we will be adding dynamic modules, which will let you compile modules out-of-tree and load them on starting Nginx. I'll reply as best I can inline. > On 17 Jun 2015, at 12:25, Senko Rasic wrote: > > Hi, > > I'm writing a new module (out-of-tree) for supporting authentication > using Stormpath's user management API (https://stormpath.com/). > > Basically, the module makes one or more HTTP requests to the > Stormpath API to determine if the client request should be authorized > to access a location or not. Excellent :) > Since this is somewhat different than other modules I could learn from, and > since all my knowledge about nginx internals is from looking at how other > modules & core is written, I'm wondering if anyone could comment on how I > designed the module and raise any issues if I did anything problematic, > wrong or weird. I have scanned through some of the module so far and there doesn't appear to be anything wrong or weird.
> For reference, the work-in-progress code for the module is available > here: https://github.com/stormpath/stormpath-nginx-module > > Since I have to contact the external API I'm using the upstream module to > do it. But I don't want the users (admins) to have to define an upstream > block in nginx.conf so my module creates and configures an upstrem > configuration internally instead. > > https://github.com/stormpath/stormpath-nginx-module/blob/master/src/ngx_http_auth_stormpath_module.c#L864 > > I haven't seen any other module do that, but I don't see that > it's possible to avoid users having to define upstream manually otherwise. I agree I have not seen anyone else do this so far. That isn't to say it is a bad thing. It is an interesting take on it. > For the above reasons (wanting to handle everything invisible to the user), > I'm not using nginx_http_proxy_module, but implement the upstream handler > (create_request & friends) myself. But since I have to construct a HTTP > request, parse status line, parse headers, parse body (eg. if it's chunked > transfer-encoding), I end up duplicating a lot of functionality already > in http proxy (although greatly simplified because I know exactly how to > talk to the upstream server and what to expect in return). > > One example is I parse the headers manually, because I haven't found a way > to init the http_upstream header parser hash, and to reuse the parser > (originally the init is done in ngx_http_upstream_init_main_conf). > > https://github.com/stormpath/stormpath-nginx-module/blob/master/src/ngx_http_auth_stormpath_module.c#L304 This appears to be the correct way to do it as far as I can see, but I'm happy to defer to the main Nginx developers on this. > (I'll also hit similar problems with caching the requests to the upstream. 
> I'd like to reuse the caching functionality already in nginx, but it seems > to me like http_proxy_module does a lot of manual heavy lifting in that > regard that I'd have to reimplement (or *shudder* copy-paste) to > support it?) > > Does the above make sense? Is there an obvious way to do it differently that > I've missed? Are there any guides or documentation on how this should be > done (besides Evan Miller's obsolete-but-useful guides I went through > already)? At the moment there isn't much beyond Evan Miller's guides and the Nginx Wiki. I will be creating more up-to-date documentation as we head towards the dynamic modules feature being released. > Any comments, suggestions, warnings or flames are welcome. If you have any questions feel free to contact me directly. Kind Regards -- Andrew Hutchings (LinuxJedi) Senior Developer Advocate Nginx Inc. From jsfrerot at ludia.com Thu Jun 18 13:18:35 2015 From: jsfrerot at ludia.com (=?UTF-8?Q?Jean=2DS=C3=A9bastien_Frerot?=) Date: Thu, 18 Jun 2015 09:18:35 -0400 Subject: Trying to mirror nginx repository for centos6/7 Message-ID: Hi, I'm trying to sync nginx repositories to my local mirror but all the old package are no longer available. See this log from reposync command below. Would it be possible to either update the package list from the repo to no longer include the packages that are not available or to put back the old packages in your repositry? Thank you. 
for centos7 /usr/bin/reposync --repoid=nginx7 --norepopath -p /opt/data/repos/nginx/7/ nginx-1.6.0-2.el7.ngx.x86_64.r FAILED nginx-1.6.1-1.el7.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.6.2-1.el7.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.6.3-1.el7.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.6.0-2.el7.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.6.1-1.el7.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.6.2-1.el7.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.6.3-1.el7.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debuginfo-1.6.0-2.el7.ng FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debuginfo-1.6.1-1.el7.ng FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debuginfo-1.6.2-1.el7.ng FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debuginfo-1.6.3-1.el7.ng FAILED ] 0.0 B/s | 0 B --:--:-- ETA 1:nginx-debug-1.6.3-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.6.0-2.el7.ngx.x86_64: [Errno 256] No more mirrors to try. 1:nginx-debuginfo-1.6.2-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debuginfo-1.6.0-2.el7.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debuginfo-1.6.1-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.6.1-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.6.0-2.el7.ngx.x86_64: [Errno 256] No more mirrors to try. 1:nginx-1.6.2-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. 1:nginx-debug-1.6.2-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. 1:nginx-debuginfo-1.6.3-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. 1:nginx-1.6.3-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.6.1-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. 
and for centos6 /usr/bin/reposync --repoid=nginx6 --norepopath -p /opt/data/repos/nginx/6/ nginx-1.0.5-1.el6.ngx.x86_64.r FAILED nginx-1.0.6-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.0.7-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.0.8-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.0.8-2.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.0.9-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.0.10-1.el6.ngx.x86_64. FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.0.11-1.el6.ngx.x86_64. FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.0.12-1.el6.ngx.x86_64. FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.0.13-1.el6.ngx.x86_64. FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.0.14-1.el6.ngx.x86_64. FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.0.15-1.el6.ngx.x86_64. FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.2.0-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.2.1-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.2.2-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.2.3-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.2.4-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.2.5-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.2.6-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.2.7-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.2.8-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.4.0-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.4.1-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.4.2-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.4.3-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.4.4-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.4.5-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.4.6-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.4.7-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B 
--:--:-- ETA nginx-1.6.0-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.6.0-2.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.6.1-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.6.2-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.6.3-1.el6.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.0.9-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.0.10-1.el6.ngx.x FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.0.11-1.el6.ngx.x FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.0.12-1.el6.ngx.x FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.0.13-1.el6.ngx.x FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.0.14-1.el6.ngx.x FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.0.15-1.el6.ngx.x FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.2.0-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.2.1-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.2.2-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.2.3-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.2.4-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.2.5-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.2.6-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.2.7-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.2.8-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.4.0-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.4.1-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.4.2-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.4.3-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.4.4-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.4.5-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.4.6-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.4.7-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.6.0-1.el6.ngx.x8 FAILED ] 0.0 
B/s | 0 B --:--:-- ETA nginx-debug-1.6.0-2.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.6.1-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.6.2-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.6.3-1.el6.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debuginfo-1.6.0-1.el6.ng FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debuginfo-1.6.0-2.el6.ng FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debuginfo-1.6.1-1.el6.ng FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debuginfo-1.6.2-1.el6.ng FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debuginfo-1.6.3-1.el6.ng FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.2.1-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.6.3-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.0.12-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.6.1-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.6.0-2.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.2.0-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.4.3-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.2.8-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debuginfo-1.6.0-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.4.1-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.2.0-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debuginfo-1.6.1-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.2.6-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.0.13-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.4.5-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debuginfo-1.6.3-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.2.3-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.0.6-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. 
nginx-debug-1.2.3-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.4.6-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.4.2-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.2.1-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.4.4-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.6.2-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.4.5-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.2.4-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.6.2-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.4.7-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debuginfo-1.6.0-2.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.0.11-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.2.5-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.0.13-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.0.15-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.4.2-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.2.2-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.6.3-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.0.10-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.0.10-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.0.11-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.0.12-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.0.14-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.2.7-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.4.4-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.4.7-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.2.4-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.2.2-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. 
nginx-debug-1.0.9-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.4.6-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.0.8-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.6.0-2.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.2.8-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.4.0-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.0.9-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.2.5-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.0.14-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.4.3-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.0.7-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.2.6-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.0.15-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.6.0-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.0.5-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debuginfo-1.6.2-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.4.1-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.6.0-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.0.8-2.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.6.1-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.4.0-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.2.7-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try. -- *Jean-S?bastien Frerot* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahutchings at nginx.com Thu Jun 18 13:53:28 2015 From: ahutchings at nginx.com (Andrew Hutchings) Date: Thu, 18 Jun 2015 16:53:28 +0300 Subject: Trying to mirror nginx repository for centos6/7 In-Reply-To: References: Message-ID: Hi Jean-S?bastien, Many thanks for reporting this, I've passed it on to our systems engineering team. 
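One possible interim workaround for the mirror side (an editorial suggestion, not something proposed in the thread): reposync's -n/--newest-only option restricts the sync to the latest version of each package, which sidesteps the FAILED downloads for historical packages that were removed upstream. A dry-run sketch using the repo IDs and paths from the post:

```shell
#!/bin/sh
# Print the reposync commands with --newest-only so only the latest package
# versions are fetched. Drop the leading "echo" in sync_repo to actually run.
set -eu

sync_repo() {
    # $1 = repo id, $2 = destination directory
    echo /usr/bin/reposync --repoid="$1" --newest-only --norepopath -p "$2"
}

sync_repo nginx7 /opt/data/repos/nginx/7/
sync_repo nginx6 /opt/data/repos/nginx/6/
```

The trade-off is that the local mirror no longer carries old releases, which may matter if clients pin specific package versions.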
Kind Regards Andrew > On 18 Jun 2015, at 16:18, Jean-Sébastien Frerot wrote: > > Hi, > I'm trying to sync nginx repositories to my local mirror, but all the old packages are no longer available. See the log from the reposync command below. > > Would it be possible to either update the package list from the repo to no longer include the packages that are not available, or to put back the old packages in your repository? > > Thank you. > > [quoted CentOS 6 and 7 reposync failure logs snipped; they appear in full in the previous message] > > -- > Jean-Sébastien Frerot > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Andrew Hutchings (LinuxJedi) Senior Developer Advocate Nginx Inc. From nginx-forum at nginx.us Thu Jun 18 14:25:39 2015 From: nginx-forum at nginx.us (zilog80) Date: Thu, 18 Jun 2015 10:25:39 -0400 Subject: long wait configtest In-Reply-To: <72d1db7dc233819103a46c23413f549c.NginxMailingListEnglish@forum.nginx.org> References: <6597777.HjAC5QEC5v@vbart-workstation> <72d1db7dc233819103a46c23413f549c.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi all, I solved my problem. All the configuration was correct. The problem was the IPv6 configuration; after disabling it, DNS responses became fast! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258721,259726#msg-259726 From cj.wijtmans at gmail.com Thu Jun 18 15:04:16 2015 From: cj.wijtmans at gmail.com (Christ-Jan Wijtmans) Date: Thu, 18 Jun 2015 17:04:16 +0200 Subject: do not fail when ssl cert not present.
Message-ID: 

I tried to not fail the nginx server if ssl cert is not available. However the directive is not even allowed inside a statement.

if (-f /var/www/x/etc/ssl.crt)
{
    ssl_certificate /var/www/x/etc/ssl.crt;
    ssl_certificate_key /var/www/x/etc/ssl.key;
}

Also i do not believe its proper to fail the entire server if one server block fails.

From vader8765 at gmail.com Thu Jun 18 15:29:55 2015
From: vader8765 at gmail.com (Vader Mader)
Date: Thu, 18 Jun 2015 11:29:55 -0400
Subject: conditionally setting a cookie help
Message-ID: 

Hi All,

I'm having trouble setting a cookie conditionally based upon an upstream variable. The hope is to cache an auth token in an encrypted session and only go to the backend auth token generator once. I have something like this, but it seems Set-Cookie happens no matter what, so I alternate between 'my_login=1848430=' and 'my_login='.

location = /auth {
    set_decode_base32 $b32 $cookie_my_login;
    set_decrypt_session $auth_tok $b32;
    if ($auth_tok != '') {
        return 200;
    }
    include fastcgi_params;
    fastcgi_pass unix:/tmp/fcgi_auth_tok_gen.sock;
}

location / {
    root /var/www;
    index index.html index.htm;
    auth_request /auth;
    auth_request_set $new_auth_tok $upstream_http_auth_tok;
    if ($new_auth_tok != false) {
        set_encrypt_session $enc_auth_tok $new_auth_tok;
        set_encode_base32 $b32 $enc_auth_tok;
        add_header Set-Cookie 'my_login=$b32';
    }
}

Ideas?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From luky-37 at hotmail.com Thu Jun 18 16:02:42 2015
From: luky-37 at hotmail.com (Lukas Tribus)
Date: Thu, 18 Jun 2015 18:02:42 +0200
Subject: do not fail when ssl cert not present.
In-Reply-To: 
References: 
Message-ID: 

Hi,

> I tried to not fail the nginx server if ssl cert is not available.

You do that by checking the config first (nginx -t); if successful, then you reload. This is the proper way to do it.

> Also i do not believe its proper to fail the entire server if one
> server block fails.

It is.
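The check-then-reload pattern described above can be wrapped in a small helper. This is a minimal sketch, not part of nginx itself: the `safe_reload` function and the `NGINX_BIN` override are illustrative, so the pattern can be exercised with a stand-in binary.

```shell
# Sketch of the check-then-reload pattern: only signal a reload when the
# configuration passes "nginx -t", so a broken config never takes effect.
# NGINX_BIN is a hypothetical override for testing; in practice it is
# just "nginx".
NGINX_BIN="${NGINX_BIN:-nginx}"

safe_reload() {
    if "$NGINX_BIN" -t; then
        "$NGINX_BIN" -s reload
    else
        echo "config test failed; keeping the running config" >&2
        return 1
    fi
}
```

With a real nginx binary this is equivalent to `nginx -t && nginx -s reload`, which is what the reply above recommends.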
From ahutchings at nginx.com Thu Jun 18 16:32:16 2015
From: ahutchings at nginx.com (Andrew Hutchings)
Date: Thu, 18 Jun 2015 19:32:16 +0300
Subject: Trying to mirror nginx repository for centos6/7
In-Reply-To: 
References: 
Message-ID: <603AB4E2-8249-4A03-9575-EE56A750FCF4@nginx.com>

Hi Jean-Sébastien,

> On 18 Jun 2015, at 16:18, Jean-Sébastien Frerot wrote:
>
> I'm trying to sync nginx repositories to my local mirror but all the old packages are no longer available. See this log from reposync command below.
>
> Would it be possible to either update the package list from the repo to no longer include the packages that are not available, or to put back the old packages in your repository?

This should be fixed now, please let us know if you have any more problems with it.

Kind Regards
-- 
Andrew Hutchings (LinuxJedi)
Senior Developer Advocate
Nginx Inc.

From matt+forums at ustyme.com Thu Jun 18 17:13:18 2015
From: matt+forums at ustyme.com (Matt)
Date: Thu, 18 Jun 2015 10:13:18 -0700
Subject: Fwd: How can i have multiple nginx plus servers route to the same app servers with sticky sessions on?
In-Reply-To: 
References: 
Message-ID: 

I have multiple nginx instances behind an AWS elastic load balancer. In the nginx config files, I am using ip_hash to force sticky sessions when connecting upstream. Is there a way to sync the route tables between the multiple nginx servers, so that no matter which nginx server handles the request, the traffic is sent to the same backend application server?

When I first set this scenario up, I had no problems. But after heavy testing with multiple clients from different parts of the world, I was able to verify that the multiple nginx servers were not choosing the same backend application servers to route to.

I attached a drawing that explains the architecture visually.

Matt
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: elbnginxdrawingforsupportforums.png
Type: image/png
Size: 33399 bytes
Desc: not available
URL: 

From jsfrerot at ludia.com Thu Jun 18 17:17:38 2015
From: jsfrerot at ludia.com (=?ISO-8859-1?Q?Jean-S=E9bastien?= Frerot)
Date: Thu, 18 Jun 2015 13:17:38 -0400
Subject: Trying to mirror nginx repository for centos6/7
In-Reply-To: <603AB4E2-8249-4A03-9575-EE56A750FCF4@nginx.com>
References: <603AB4E2-8249-4A03-9575-EE56A750FCF4@nginx.com>
Message-ID: <1434647858.11213.2.camel@ludia.com>

Still the same issue. Here are my repo files:

[nginx6]
name=Nginx Repositories for Centos 6
baseurl=http://nginx.org/packages/centos/6/$basearch/
enabled=0
gpgcheck=0

[nginx7]
name=Nginx Repositories for Centos 7
baseurl=http://nginx.org/packages/centos/7/$basearch/
enabled=0
gpgcheck=0

-- 
Jean-Sébastien Frerot

-----Original Message-----
From: Andrew Hutchings
Reply-to: nginx at nginx.org
To: nginx at nginx.org
Subject: Re: Trying to mirror nginx repository for centos6/7
Date: Thu, 18 Jun 2015 19:32:16 +0300

Hi Jean-Sébastien,

> On 18 Jun 2015, at 16:18, Jean-Sébastien Frerot wrote:
>
> I'm trying to sync nginx repositories to my local mirror but all the old packages are no longer available. See this log from reposync command below.
>
> Would it be possible to either update the package list from the repo to no longer include the packages that are not available, or to put back the old packages in your repository?

This should be fixed now, please let us know if you have any more problems with it.

Kind Regards

From mdounin at mdounin.ru Thu Jun 18 17:24:56 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 18 Jun 2015 20:24:56 +0300
Subject: do not fail when ssl cert not present.
In-Reply-To: 
References: 
Message-ID: <20150618172456.GT26357@mdounin.ru>

Hello!

On Thu, Jun 18, 2015 at 05:04:16PM +0200, Christ-Jan Wijtmans wrote:

> I tried to not fail the nginx server if ssl cert is not available.
> However the directive is not even allowed inside a statement.
>
> if (-f /var/www/x/etc/ssl.crt)
> {
>     ssl_certificate /var/www/x/etc/ssl.crt;
>     ssl_certificate_key /var/www/x/etc/ssl.key;
> }

This won't work, as nginx loads certificates and keys while parsing the configuration, but "if" is a directive of the rewrite module and it is executed during request processing, see http://nginx.org/r/if.

If you want nginx to only load existing certificates, you'll have to teach it to do so by only using the appropriate directives when certificates and keys are actually available. The "include" directive may help if you want to automate this, see http://nginx.org/r/include.

> Also i do not believe its proper to fail the entire server if one
> server block fails.

Current approach is as follows: if there is a problem with a configuration, nginx will refuse to use it. This way, if you make a typo in your configuration and ask nginx to reload the configuration, nginx will just refuse to load the bad configuration and will continue to work with the old one. This makes sure that nginx won't suddenly become half-working due to a typo which can be easily detected.

This may not be very familiar if you are used to just restarting daemons with a new configuration, but this is how nginx works. Basically, you never restart it at all - you either reconfigure nginx, or upgrade it to a new version by changing the executable on the fly. And it's working all the time. See some details on how to control nginx at http://nginx.org/en/docs/control.html.

-- 
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Thu Jun 18 17:25:45 2015
From: nginx-forum at nginx.us (pinkboi)
Date: Thu, 18 Jun 2015 13:25:45 -0400
Subject: Nginx is killing my threads
Message-ID: <3395c08770b659645d19e36ad3e14cbd.NginxMailingListEnglish@forum.nginx.org>

I have an application in C++ we originally made for Windows using Microsoft's http lib, that I have ported to be cross-platform using nginx.
To keep it working mostly the same (rewriting as little as possible), I made it as a lib that gets loaded by a module I made for nginx that communicates with it via a C interface with simple functions (startup, shutdown, callEndpoint). It works for any calls that don't depend on threads, but anything that depends on the worker threads that we create (not using nginx's apis) doesn't work. When I attach to either of the nginx processes with gdb and do `info threads`, I only see one thread each. I set our function that loads the lib, then calls startup(), which launches these worker threads, as the "init module" function. The handle for the lib remains so many of the calls work just fine. But the threads are gone, so anything that depends on these worker threads doesn't function (it freezes actually). Is there a way to let my module start threads without having to use nginx's api or otherwise drastically change my architecture? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259737,259737#msg-259737 From sb at nginx.com Thu Jun 18 17:37:59 2015 From: sb at nginx.com (Sergey Budnevitch) Date: Thu, 18 Jun 2015 20:37:59 +0300 Subject: Trying to mirror nginx repository for centos6/7 In-Reply-To: <1434647858.11213.2.camel@ludia.com> References: <603AB4E2-8249-4A03-9575-EE56A750FCF4@nginx.com> <1434647858.11213.2.camel@ludia.com> Message-ID: <5577AE8D-704C-47ED-BBAC-9508A858391D@nginx.com> > On 18 Jun 2015, at 20:17, Jean-S?bastien Frerot wrote: > > Still the same issue. It works for me. Please try to purge yum?s caches with yum clean all (read description before). 
> > Here are my repo files: > [nginx6] > name=Nginx Repositories for Centos 6 > baseurl=http://nginx.org/packages/centos/6/$basearch/ > enabled=0 > gpgcheck=0 > > [nginx7] > name=Nginx Repositories for Centos 7 > baseurl=http://nginx.org/packages/centos/7/$basearch/ > enabled=0 > gpgcheck=0 > > -- > Jean-S?bastien Frerot > > -----Original Message-----From: Andrew Hutchings > Reply-to: nginx at nginx.org > To: nginx at nginx.org > Subject: Re: Trying to mirror nginx repository for centos6/7 > Date: Thu, 18 Jun 2015 19:32:16 +0300 > > Hi Jean-S?bastien, > >> On 18 Jun 2015, at 16:18, Jean-S?bastien Frerot wrote: >> >> I'm trying to sync nginx repositories to my local mirror but all the old package are no longer available. See this log from reposync command below. >> >> Would it be possible to either update the package list from the repo to no longer include the packages that are not available or to put back the old packages in your repositry? > > This should be fixed now, please let us know if you have any more problems with it. 
> > Kind Regards > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From jsfrerot at ludia.com Thu Jun 18 17:40:23 2015 From: jsfrerot at ludia.com (=?ISO-8859-1?Q?Jean-S=E9bastien?= Frerot) Date: Thu, 18 Jun 2015 13:40:23 -0400 Subject: Trying to mirror nginx repository for centos6/7 In-Reply-To: <5577AE8D-704C-47ED-BBAC-9508A858391D@nginx.com> References: <603AB4E2-8249-4A03-9575-EE56A750FCF4@nginx.com> <1434647858.11213.2.camel@ludia.com> <5577AE8D-704C-47ED-BBAC-9508A858391D@nginx.com> Message-ID: <1434649223.11213.4.camel@ludia.com> Still not working [root tmp]# yum clean all Loaded plugins: fastestmirror Cleaning repos: base extras sensu updates Cleaning up everything Cleaning up list of fastest mirrors [root tmp]# /usr/bin/reposync --repoid=nginx7 --norepopath -p /opt/data/repos/nginx/7/ base | 3.6 kB 00:00:00 extras | 3.4 kB 00:00:00 sensu | 951 B 00:00:00 updates | 3.4 kB 00:00:00 (1/4): extras/7/x86_64/primary_db | 54 kB 00:00:00 (2/4): base/7/x86_64/group_gz | 154 kB 00:00:00 (3/4): updates/7/x86_64/primary_db | 1.8 MB 00:00:00 (4/4): base/7/x86_64/primary_db | 5.1 MB 00:00:07 sensu/7/x86_64/primary | 31 kB 00:00:00 nginx-1.6.0-2.el7.ngx.x86_64.r FAILED nginx-1.6.1-1.el7.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.6.2-1.el7.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-1.6.3-1.el7.ngx.x86_64.r FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.6.0-2.el7.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.6.1-1.el7.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.6.2-1.el7.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.6.3-1.el7.ngx.x8 FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debuginfo-1.6.0-2.el7.ng FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debuginfo-1.6.1-1.el7.ng FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debuginfo-1.6.2-1.el7.ng FAILED ] 0.0 B/s | 0 B --:--:-- ETA nginx-debuginfo-1.6.3-1.el7.ng FAILED ] 0.0 B/s | 0 B 
--:--:-- ETA 1:nginx-debug-1.6.3-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. ] 0.0 B/s | 0 B --:--:-- ETA nginx-debug-1.6.0-2.el7.ngx.x86_64: [Errno 256] No more mirrors to try. 1:nginx-debuginfo-1.6.2-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debuginfo-1.6.0-2.el7.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debuginfo-1.6.1-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.6.1-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-1.6.0-2.el7.ngx.x86_64: [Errno 256] No more mirrors to try. 1:nginx-1.6.2-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. 1:nginx-debug-1.6.2-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. 1:nginx-debuginfo-1.6.3-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. 1:nginx-1.6.3-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. nginx-debug-1.6.1-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. -- Jean-S?bastien Frerot -----Original Message-----From: Sergey Budnevitch Reply-to: nginx at nginx.org To: nginx at nginx.org Subject: Re: Trying to mirror nginx repository for centos6/7 Date: Thu, 18 Jun 2015 20:37:59 +0300 > On 18 Jun 2015, at 20:17, Jean-S?bastien Frerot wrote: > > Still the same issue. It works for me. Please try to purge yum?s caches with yum clean all (read description before). 
> > Here are my repo files: > [nginx6] > name=Nginx Repositories for Centos 6 > baseurl=http://nginx.org/packages/centos/6/$basearch/ > enabled=0 > gpgcheck=0 > > [nginx7] > name=Nginx Repositories for Centos 7 > baseurl=http://nginx.org/packages/centos/7/$basearch/ > enabled=0 > gpgcheck=0 > > -- > Jean-S?bastien Frerot > > -----Original Message-----From: Andrew Hutchings > Reply-to: nginx at nginx.org > To: nginx at nginx.org > Subject: Re: Trying to mirror nginx repository for centos6/7 > Date: Thu, 18 Jun 2015 19:32:16 +0300 > > Hi Jean-S?bastien, > >> On 18 Jun 2015, at 16:18, Jean-S?bastien Frerot wrote: >> >> I'm trying to sync nginx repositories to my local mirror but all the old package are no longer available. See this log from reposync command below. >> >> Would it be possible to either update the package list from the repo to no longer include the packages that are not available or to put back the old packages in your repositry? > > This should be fixed now, please let us know if you have any more problems with it. > > Kind Regards > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Thu Jun 18 18:11:34 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Jun 2015 21:11:34 +0300 Subject: Fwd: How can i have multiple nginx plus servers route to the same app servers with sticky sessions on? In-Reply-To: References: Message-ID: <20150618181134.GV26357@mdounin.ru> Hello! On Thu, Jun 18, 2015 at 10:13:18AM -0700, Matt wrote: > I have multiple nginx instances behind an AWS elastic load balancer. In the > nginx config files, I am using ip_hash to force sticky sessions when > connecting upstream. 
Is there a way to sync the route tables between the > multiple nginx servers, so that no matter which nginx server handles the > request, the traffic is sent to the same backend application server. > > When I first set this scenario up, I had no problems. But after heavy > testing with multiple clients from different parts of the world, I was able > to verify that the multiple nginx servers were not choosing the same > backend application servers to route to. First of all, as you use AWS, make sure all nginx instances properly see client addresses (and not addresses of Amazon ELB). If nginx sees ELB addresses instead, you have to configure the realip module appropriately, see http://nginx.org/en/docs/http/ngx_http_realip_module.html. An additional problem which may hurt your case is upstream server errors. See this message for a detailed explanation: http://mailman.nginx.org/pipermail/nginx/2015-May/047590.html -- Maxim Dounin http://nginx.org/ From sb at nginx.com Thu Jun 18 18:19:39 2015 From: sb at nginx.com (Sergey Budnevitch) Date: Thu, 18 Jun 2015 21:19:39 +0300 Subject: Trying to mirror nginx repository for centos6/7 In-Reply-To: <1434649223.11213.4.camel@ludia.com> References: <603AB4E2-8249-4A03-9575-EE56A750FCF4@nginx.com> <1434647858.11213.2.camel@ludia.com> <5577AE8D-704C-47ED-BBAC-9508A858391D@nginx.com> <1434649223.11213.4.camel@ludia.com> Message-ID: <1A7CB633-EBD7-4661-A91A-B71CACA2F9DF@nginx.com> > On 18 Jun 2015, at 20:40, Jean-S?bastien Frerot wrote: > > Still not working They are disabled in repo files, so you need to clean it explicitly: yum --enablerepo=nginx7 clean all > > [root tmp]# yum clean all > Loaded plugins: fastestmirror > Cleaning repos: base extras sensu updates > Cleaning up everything > Cleaning up list of fastest mirrors > [root tmp]# /usr/bin/reposync --repoid=nginx7 --norepopath > -p /opt/data/repos/nginx/7/ > base > | 3.6 kB 00:00:00 > extras > | 3.4 kB 00:00:00 > sensu > | 951 B 00:00:00 > updates > | 3.4 kB 00:00:00 > 
(1/4): extras/7/x86_64/primary_db > | 54 kB 00:00:00 > (2/4): base/7/x86_64/group_gz > | 154 kB 00:00:00 > (3/4): updates/7/x86_64/primary_db > | 1.8 MB 00:00:00 > (4/4): base/7/x86_64/primary_db > | 5.1 MB 00:00:07 > sensu/7/x86_64/primary > | 31 kB 00:00:00 > nginx-1.6.0-2.el7.ngx.x86_64.r > FAILED > nginx-1.6.1-1.el7.ngx.x86_64.r FAILED > ] 0.0 B/s | 0 B --:--:-- ETA > nginx-1.6.2-1.el7.ngx.x86_64.r FAILED > ] 0.0 B/s | 0 B --:--:-- ETA > nginx-1.6.3-1.el7.ngx.x86_64.r FAILED > ] 0.0 B/s | 0 B --:--:-- ETA > nginx-debug-1.6.0-2.el7.ngx.x8 FAILED > ] 0.0 B/s | 0 B --:--:-- ETA > nginx-debug-1.6.1-1.el7.ngx.x8 FAILED > ] 0.0 B/s | 0 B --:--:-- ETA > nginx-debug-1.6.2-1.el7.ngx.x8 FAILED > ] 0.0 B/s | 0 B --:--:-- ETA > nginx-debug-1.6.3-1.el7.ngx.x8 FAILED > ] 0.0 B/s | 0 B --:--:-- ETA > nginx-debuginfo-1.6.0-2.el7.ng FAILED > ] 0.0 B/s | 0 B --:--:-- ETA > nginx-debuginfo-1.6.1-1.el7.ng FAILED > ] 0.0 B/s | 0 B --:--:-- ETA > nginx-debuginfo-1.6.2-1.el7.ng FAILED > ] 0.0 B/s | 0 B --:--:-- ETA > nginx-debuginfo-1.6.3-1.el7.ng FAILED > ] 0.0 B/s | 0 B --:--:-- ETA > 1:nginx-debug-1.6.3-1.el7.ngx.x86_64: [Errno 256] No more mirrors to > try. ] 0.0 B/s | 0 B --:--:-- ETA > nginx-debug-1.6.0-2.el7.ngx.x86_64: [Errno 256] No more mirrors to try. > 1:nginx-debuginfo-1.6.2-1.el7.ngx.x86_64: [Errno 256] No more mirrors to > try. > nginx-debuginfo-1.6.0-2.el7.ngx.x86_64: [Errno 256] No more mirrors to > try. > nginx-debuginfo-1.6.1-1.el7.ngx.x86_64: [Errno 256] No more mirrors to > try. > nginx-1.6.1-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. > nginx-1.6.0-2.el7.ngx.x86_64: [Errno 256] No more mirrors to try. > 1:nginx-1.6.2-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. > 1:nginx-debug-1.6.2-1.el7.ngx.x86_64: [Errno 256] No more mirrors to > try. > 1:nginx-debuginfo-1.6.3-1.el7.ngx.x86_64: [Errno 256] No more mirrors to > try. > 1:nginx-1.6.3-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. 
> nginx-debug-1.6.1-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try. > > > -- > Jean-S?bastien Frerot > > -----Original Message-----From: Sergey Budnevitch > Reply-to: nginx at nginx.org > To: nginx at nginx.org > Subject: Re: Trying to mirror nginx repository for centos6/7 > Date: Thu, 18 Jun 2015 20:37:59 +0300 > >> On 18 Jun 2015, at 20:17, Jean-S?bastien Frerot wrote: >> >> Still the same issue. > > It works for me. > Please try to purge yum?s caches with yum clean all (read description before). > >> >> Here are my repo files: >> [nginx6] >> name=Nginx Repositories for Centos 6 >> baseurl=http://nginx.org/packages/centos/6/$basearch/ >> enabled=0 >> gpgcheck=0 >> >> [nginx7] >> name=Nginx Repositories for Centos 7 >> baseurl=http://nginx.org/packages/centos/7/$basearch/ >> enabled=0 >> gpgcheck=0 >> >> -- >> Jean-S?bastien Frerot >> >> -----Original Message-----From: Andrew Hutchings >> Reply-to: nginx at nginx.org >> To: nginx at nginx.org >> Subject: Re: Trying to mirror nginx repository for centos6/7 >> Date: Thu, 18 Jun 2015 19:32:16 +0300 >> >> Hi Jean-S?bastien, >> >>> On 18 Jun 2015, at 16:18, Jean-S?bastien Frerot wrote: >>> >>> I'm trying to sync nginx repositories to my local mirror but all the old package are no longer available. See this log from reposync command below. >>> >>> Would it be possible to either update the package list from the repo to no longer include the packages that are not available or to put back the old packages in your repositry? >> >> This should be fixed now, please let us know if you have any more problems with it. 
>> >> Kind Regards >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From jsfrerot at ludia.com Thu Jun 18 18:26:03 2015 From: jsfrerot at ludia.com (=?UTF-8?Q?Jean=2DS=C3=A9bastien_Frerot?=) Date: Thu, 18 Jun 2015 14:26:03 -0400 Subject: Trying to mirror nginx repository for centos6/7 In-Reply-To: <1A7CB633-EBD7-4661-A91A-B71CACA2F9DF@nginx.com> References: <603AB4E2-8249-4A03-9575-EE56A750FCF4@nginx.com> <1434647858.11213.2.camel@ludia.com> <5577AE8D-704C-47ED-BBAC-9508A858391D@nginx.com> <1434649223.11213.4.camel@ludia.com> <1A7CB633-EBD7-4661-A91A-B71CACA2F9DF@nginx.com> Message-ID: Thank you ! Sorry for that was obvious. It works now, but I see that we lost all the old versions of the package. I guess this was expected. 2015-06-18 14:19 GMT-04:00 Sergey Budnevitch : > > > On 18 Jun 2015, at 20:40, Jean-S?bastien Frerot > wrote: > > > > Still not working > > They are disabled in repo files, so you need to clean it explicitly: > > yum --enablerepo=nginx7 clean all > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Jean-S?bastien Frerot* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ahutchings at nginx.com Thu Jun 18 21:02:43 2015 From: ahutchings at nginx.com (Andrew Hutchings) Date: Fri, 19 Jun 2015 00:02:43 +0300 Subject: Trying to mirror nginx repository for centos6/7 In-Reply-To: References: <603AB4E2-8249-4A03-9575-EE56A750FCF4@nginx.com> <1434647858.11213.2.camel@ludia.com> <5577AE8D-704C-47ED-BBAC-9508A858391D@nginx.com> <1434649223.11213.4.camel@ludia.com> <1A7CB633-EBD7-4661-A91A-B71CACA2F9DF@nginx.com> Message-ID: <859228A5-7245-45AF-B01B-99E38E356C44@nginx.com> Hi, > On 18 Jun 2015, at 21:26, Jean-S?bastien Frerot wrote: > > Thank you ! Sorry for that was obvious. > > It works now, but I see that we lost all the old versions of the package. I guess this was expected. If you need a copy of the old RPMs for anything they are in this directory: http://nginx.org/packages/old/rhel/ Unfortunately they are not in a repo format but they are still there for archival purposes. Kind Regards -- Andrew Hutchings (LinuxJedi) Senior Developer Advocate Nginx Inc. From cj.wijtmans at gmail.com Thu Jun 18 21:22:27 2015 From: cj.wijtmans at gmail.com (Christ-Jan Wijtmans) Date: Thu, 18 Jun 2015 23:22:27 +0200 Subject: do not fail when ssl cert not present. In-Reply-To: <20150618172456.GT26357@mdounin.ru> References: <20150618172456.GT26357@mdounin.ru> Message-ID: > If you want nginx to only load existing certificates, you'll have > to teach it to do so by only using appropriate directives when > certificates and keys are actually available. The "include" > directive may help if you want to automate this, see > http://nginx.org/r/include. I dont see how include here helps. Basically currently there is no certificate. And i want to give the user control over the certificate which is why i placed in ~/etc/. Which means when the user deletes it the server wont restart. >> Also i do not believe its proper to fail the entire server if one >> server block fails. 
> > Current approach is as follows: if there is a problem with a > configuration, nginx will refuse to use it. This way, if you'll > make an typo in your configuration and ask nginx to reload the > configuration, nginx will just refuse to load bad configuration > and will continue to work with old one. This makes sure that > nginx won't suddenly become half-working due to a typo which can > be easily detected. The server config didnt fail. There was no typo. From nginx-forum at nginx.us Fri Jun 19 02:55:30 2015 From: nginx-forum at nginx.us (kecorbin) Date: Thu, 18 Jun 2015 22:55:30 -0400 Subject: Trouble with stream directive Message-ID: <515896ab55846e140c67630b066abe87.NginxMailingListEnglish@forum.nginx.org> I'm trying to test the TCP load balancing function in 1.9.2, but am having some problems getting a very basic configuration working. TIA ubuntu at dev:~$ cat /etc/nginx/stream.d/test.conf stream { server { listen 12345; proxy_pass mybackend:12345; } } ubuntu at dev:~$ sudo nginx -t nginx: [emerg] "stream" directive is not allowed here in /etc/nginx/stream.d/test.conf:1 nginx: configuration file /etc/nginx/nginx.conf test failed ubuntu at dev:~$ nginx -V nginx version: nginx/1.9.2 built by gcc 4.8.2 (Ubuntu 4.8.2-19ubuntu1) built with OpenSSL 1.0.1f 6 Jan 2014 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module 
--with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-threads --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-file-aio --with-http_spdy_module --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,--as-needed' --with-ipv6

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259748,259748#msg-259748

From arut at nginx.com Fri Jun 19 04:21:24 2015
From: arut at nginx.com (Roman Arutyunyan)
Date: Fri, 19 Jun 2015 07:21:24 +0300
Subject: Trouble with stream directive
In-Reply-To: <515896ab55846e140c67630b066abe87.NginxMailingListEnglish@forum.nginx.org>
References: <515896ab55846e140c67630b066abe87.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Please watch the top-level config (nginx.conf) where this file is included. Most probably it already has the 'stream' directive surrounding the include.

> On 19 Jun 2015, at 05:55, kecorbin wrote:
>
> I'm trying to test the TCP load balancing function in 1.9.2, but am having
> some problems getting a very basic configuration working.
TIA > > ubuntu at dev:~$ cat /etc/nginx/stream.d/test.conf > stream { > > server { > listen 12345; > proxy_pass mybackend:12345; > } > } > ubuntu at dev:~$ sudo nginx -t > nginx: [emerg] "stream" directive is not allowed here in > /etc/nginx/stream.d/test.conf:1 > nginx: configuration file /etc/nginx/nginx.conf test failed > > > ubuntu at dev:~$ nginx -V > nginx version: nginx/1.9.2 > built by gcc 4.8.2 (Ubuntu 4.8.2-19ubuntu1) > built with OpenSSL 1.0.1f 6 Jan 2014 > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx > --with-http_ssl_module --with-http_realip_module --with-http_addition_module > --with-http_sub_module --with-http_dav_module --with-http_flv_module > --with-http_mp4_module --with-http_gunzip_module > --with-http_gzip_static_module --with-http_random_index_module > --with-http_secure_link_module --with-http_stub_status_module > --with-http_auth_request_module --with-threads --with-stream > --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-file-aio > --with-http_spdy_module --with-cc-opt='-g -O2 -fstack-protector > --param=ssp-buffer-size=4 -Wformat -Werror=format-security > -Wp,-D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions > -Wl,-z,relro -Wl,--as-needed' --with-ipv6 > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259748,259748#msg-259748 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- 
Roman Arutyunyan From nginx-forum at nginx.us Fri Jun 19 10:45:35 2015 From: nginx-forum at nginx.us (kecorbin) Date: Fri, 19 Jun 2015 06:45:35 -0400 Subject: Trouble with stream directive In-Reply-To: References: Message-ID: <845e5cc84b49f30bcd7b03c48fa2173c.NginxMailingListEnglish@forum.nginx.org> You nailed it. Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259748,259755#msg-259755 From emailbuilder88 at yahoo.com Fri Jun 19 12:53:46 2015 From: emailbuilder88 at yahoo.com (E.B.) Date: Fri, 19 Jun 2015 05:53:46 -0700 Subject: Server info in SSL handshake? Message-ID: <1434718426.90964.YahooMailBasic@web142405.mail.bf1.yahoo.com> Hi, today I was watching traffic on port 443 for other reasons and I saw a line go by that had unusual information, looking a little bit like server info headers (I saw "nginx" and the version number and a couple other info I think maybe "REMOTE_ADDR" or something like it). Its port 443 so it surprised me -- does some server info leak out during the SSL handshake? I saw it at least twice but now it isn't coming back and I wasn't able to capture it. :( From emailbuilder88 at yahoo.com Fri Jun 19 13:10:32 2015 From: emailbuilder88 at yahoo.com (E.B.) Date: Fri, 19 Jun 2015 06:10:32 -0700 Subject: Server info in SSL handshake? In-Reply-To: <1434718426.90964.YahooMailBasic@web142405.mail.bf1.yahoo.com> Message-ID: <1434719432.96961.YahooMailBasic@web142405.mail.bf1.yahoo.com> Sorry, I think false alarm. I think it happened when I removed the port so listening on all ports and it caught traffic to php-fpm immediately below the traffic from the outside to nginx so it looked like the server info was on port 443 but i think it was not. -------------------------------------------- Subject: Server info in SSL handshake? 
To: nginx at nginx.org Hi, today I was watching traffic on port 443 for other reasons and I saw a line go by that had unusual information, looking a little bit like server info headers (I saw "nginx" and the version number and a couple other info I think maybe "REMOTE_ADDR" or something like it). Its port 443 so it surprised me -- does some server info leak out during the SSL handshake? I saw it at least twice but now it isn't coming back and I wasn't able to capture it. :( From vader8765 at gmail.com Fri Jun 19 14:20:43 2015 From: vader8765 at gmail.com (Vader Mader) Date: Fri, 19 Jun 2015 10:20:43 -0400 Subject: set_encrypt_session after access phase? Message-ID: On Thu, Jun 18, 2015 at 11:29 AM, Vader Mader wrote: > > I'm having trouble setting a cookie conditionally based upon > an upstream variable. The hope is to cache an auth token in an > encrypted session and only go to the backend auth token generator once. I managed to figure out how to use map to set the cookie: map $new_auth_tok $cond_cookie_k { '' ''; default "my_login="; } map $new_auth_tok $cond_cookie_v { '' ''; default $b32_session; } add_header Set-Cookie $cond_cookie_k$cond_cookie_v; However, my problem is that set_encrypt_session actually runs in the rewrite phase before my authentication back end like this: location / { root /var/www; index index.html index.htm; set_encrypt_session $enc_auth_tok $new_auth_tok; set_encode_base32 $b32 $enc_auth_tok; auth_request /auth; auth_request_set $new_auth_tok $upstream_http_auth_tok; add_header Set-Cookie $cond_cookie_k$cond_cookie_v; } Is there any way to encrypt after the access phase? From mdounin at mdounin.ru Fri Jun 19 14:38:11 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 19 Jun 2015 17:38:11 +0300 Subject: do not fail when ssl cert not present. In-Reply-To: References: <20150618172456.GT26357@mdounin.ru> Message-ID: <20150619143811.GZ26357@mdounin.ru> Hello!
On Thu, Jun 18, 2015 at 11:22:27PM +0200, Christ-Jan Wijtmans wrote: > > If you want nginx to only load existing certificates, you'll have > > to teach it to do so by only using appropriate directives when > > certificates and keys are actually available. The "include" > > directive may help if you want to automate this, see > > http://nginx.org/r/include. > > I dont see how include here helps. Basically currently there is no > certificate. And i want to give the user control over the certificate > which is why i placed in ~/etc/. Which means when the user deletes it > the server wont restart. You'll have to write a script to automate checking if a user placed a certificate or not, and update nginx config appropriately. Generating a single include file is usually easier than re-generating the whole config. > >> Also i do not believe its proper to fail the entire server if one > >> server block fails. > > > > Current approach is as follows: if there is a problem with a > > configuration, nginx will refuse to use it. This way, if you'll > > make an typo in your configuration and ask nginx to reload the > > configuration, nginx will just refuse to load bad configuration > > and will continue to work with old one. This makes sure that > > nginx won't suddenly become half-working due to a typo which can > > be easily detected. > > The server config didnt fail. There was no typo. You've asked nginx to load a non-existing file. That's an obvious error which is easy to detect. The above paragraph tries to explain why the nginx behaviour in such a situation is to reject the configuration, and why this behaviour won't be changed.
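To make the include-file approach described above concrete, here is a minimal sketch (all file names and paths are hypothetical examples, not taken from the thread): the server block always includes a small file, and an external deploy script regenerates that file — filled in when the user's certificate files exist, empty otherwise — before asking nginx to reload, so nginx never references a missing certificate.

```nginx
# /etc/nginx/sites-enabled/example.conf -- hypothetical layout
server {
    listen 443;
    server_name example.com;

    # user-ssl.inc is rewritten by a deploy script before each reload:
    # it contains "ssl on; ssl_certificate ...; ssl_certificate_key ...;"
    # when the user has installed a certificate, and is left empty
    # otherwise, so "nginx -t" always passes.
    include /etc/nginx/conf.d/user-ssl.inc;
}
```

With an empty include file the server block simply answers plain HTTP on that port; only the one generated file changes between the two states, not the whole configuration.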
-- Maxim Dounin http://nginx.org/ From ka76115 at gmail.com Fri Jun 19 17:29:00 2015 From: ka76115 at gmail.com (Ashish k) Date: Fri, 19 Jun 2015 22:59:00 +0530 Subject: Nginx URL Fallback setup with internal redirection in reverse-proxy settings Message-ID: Hi List Members Gurus I have installed and configured a virtual host using NGINX in Ubuntu 14.04 Now I want to add couple of functionalities like-- Set up Nginx config so that any URL address which is not found, goes to a fallback path with internal redirection (NO change or redirection in browser URL). Assuming the following: Fallback path needed for configuration: http://myvirtualhost/fallback_directory/fallbackhandler.php Anything typed after http://localhost when not found, should hit the fallback path (internal redirection, meaning NO change or redirection in browser address bar). The fallback path is given above (http://app_servers/fallback_directory/fallbackhandler.php) which needs to be setup in Nginx config. For example, when i visits www.test.com/not_existing_directory/ and not_existing_directory doesn't exist, it should hit the fallback path while still retaining www.test.com/not_existing_directory/ in browser address bar. Please point me to NGINX resources and a steps that will be required so that I can grasp NGINX quickly to do the above task. Thanks in advance Ashish From francis at daoine.org Fri Jun 19 19:53:15 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 19 Jun 2015 20:53:15 +0100 Subject: Nginx URL Fallback setup with internal redirection in reverse-proxy settings In-Reply-To: References: Message-ID: <20150619195315.GE23844@daoine.org> On Fri, Jun 19, 2015 at 10:59:00PM +0530, Ashish k wrote: Hi there, > Set up Nginx config so that any URL address which is not found, goes > to a fallback path with internal redirection (NO change or redirection > in browser URL). 
> Please point me to NGINX resources and a steps that will be required > so that I can grasp NGINX quickly to do the above task. It sounds like you want your own handler for 404 errors. So error_page 404 @404; or error_page 404 = @404; See http://nginx.org/r/error_page for details and for the difference between the two lines. Then in "location @404 {}", do what you want - proxy_pass or fastcgi_pass to something external with appropriate other settings, most likely. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sat Jun 20 06:29:59 2015 From: nginx-forum at nginx.us (ErikDubbelboer) Date: Sat, 20 Jun 2015 02:29:59 -0400 Subject: [ANNOUNCE] nginx-http-stub-requests Message-ID: <48eeca137d91bee44d0b7139cda3d400.NginxMailingListEnglish@forum.nginx.org> Hi all, I'm pleased to announce a new module called stub-requests. It allows you to see a table of all the currently running requests including duration, ip, uri and more. The source, install instructions and an example can be found on: https://github.com/atomx/nginx-http-stub-requests Cheers, Erik Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259769,259769#msg-259769 From funwithnginx at yahoo.com Sat Jun 20 06:43:57 2015 From: funwithnginx at yahoo.com (John Smith) Date: Sat, 20 Jun 2015 06:43:57 +0000 (UTC) Subject: Request limit calculation Message-ID: <431287242.2706978.1434782637422.JavaMail.yahoo@mail.yahoo.com> Hello, I'm John and I'm a nginx noob. I was wondering how the request limit reach is calculated when using limit_req_zone and limit_req. My problem is that, in development, I'm not concatenating static files such as .js and .css files. And so the browser does about 27 requests when the first page is loaded. I've set up a rate of 50r/s, but out of 27, about 18 requests receive a 503 response and I don't understand why, since the rate isn't exceeded. My config looks something like this. I have a link to this from the sites-enabled folder.
limit_req_zone $binary_remote_addr zone=one:10m rate=50r/s;

server {
    listen 443 ssl;

    ssl_certificate ...;
    ssl_certificate_key ...;
    server_name localhost;

    server_tokens off;

    gzip_types *;
    root ...;

    limit_req zone=one;

    location = / {
        index index.html;
    }

    location = /index.html {
        ...
    }

    location / {
        ...
    }
}

If I use limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s; with limit_req zone=one burst=50 nodelay; it works OK. I was wondering why I would have to specify a burst in order for this to work. Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.holway at otternetworks.de Sun Jun 21 12:19:06 2015 From: andrew.holway at otternetworks.de (Andrew Holway) Date: Sun, 21 Jun 2015 14:19:06 +0200 Subject: Nginx not logging to socket. Message-ID: Hallo! Using rsyslog I have set up a logging socket and confirmed that its working by piping in some stuff to "logger -u /dev/log" nginx/1.8.0 does not seem to be dumping in logs however. The nginx config is below.. I'm probably doing something dumb. This is the first time I set this up.
Cheers, Andrew

# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes 4;

#error_log /var/log/nginx/error.log debug;
error_log syslog:server=unix:/dev/log debug;

pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    # Log to Rsyslog socket
    log_format syslog '$remote_addr $host:$server_port "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';
    access_log syslog:server=unix:/dev/log syslog;

    # log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                 '$status $body_bytes_sent "$http_referer" '
    #                 '"$http_user_agent" "$http_x_forwarded_for"';
    #
    # access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
}

--
Otter Networks UG
http://otternetworks.de
fon: +49 30 54 88 5197
Gotenstraße 17
10829 Berlin

-------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Jun 21 12:58:53 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 21 Jun 2015 15:58:53 +0300 Subject: Request limit calculation In-Reply-To: <431287242.2706978.1434782637422.JavaMail.yahoo@mail.yahoo.com> References: <431287242.2706978.1434782637422.JavaMail.yahoo@mail.yahoo.com> Message-ID: <20150621125852.GF26357@mdounin.ru> Hello! On Sat, Jun 20, 2015 at 06:43:57AM +0000, John Smith wrote: > Hello, I'm John and I'm a nginx noob.
> I was wondering how the request limit reach is calculated when using limit_req_zone and limit_req. My problem is that, in development, I'm not concatenating static files such as .js and .css files. And so the browser does about 27 requests when the first page is loaded. I've set up a rate of 50r/s, but out of 27, about 18 requests receive a 503 response and I don't understand why, since the rate isn't exceeded.
> My config looks something like this. I have a link to this from the sites-enabled folder.
> limit_req_zone $binary_remote_addr zone=one:10m rate=50r/s;
> server {
>     listen 443 ssl;
>     ssl_certificate ...;
>     ssl_certificate_key ...;
>     server_name localhost;
>     server_tokens off;
>     gzip_types *;
>     root ...;
>     limit_req zone=one;
>     location = / {
>         index index.html;
>     }
>     location = /index.html {
>         ...
>     }
>     location / {
>         ...
>     }
> }
> If I use limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s; with limit_req zone=one burst=50 nodelay;
> it works OK.
> I was wondering why I would have to specify a burst in order for this to work.

The 50r/s rate means no more than one request per 20 milliseconds. If a client will try to do 2 requests at the same time (with less than 20 milliseconds difference from nginx point of view), the 503 error will be returned to the second request. To handle such cases there is the burst parameter, and that's why you should use it instead of trying to set an enormous rate to accommodate bursts. Some additional details may be found in docs and in the Wikipedia article about the algorithm used: http://nginx.org/en/docs/http/ngx_http_limit_req_module.html https://en.wikipedia.org/wiki/Leaky_bucket -- Maxim Dounin http://nginx.org/ From emailbuilder88 at yahoo.com Mon Jun 22 10:05:05 2015 From: emailbuilder88 at yahoo.com (E.B.)
Date: Mon, 22 Jun 2015 03:05:05 -0700 Subject: Understanding alias (used as rewrite) Message-ID: <1434967505.62578.YahooMailBasic@web142404.mail.bf1.yahoo.com> Hi, I'm confused about the details of "alias" used as a kind of rewrite (which should be more efficient as I understand it, as long as its appropriately used). I found I can do this: location = /path/number/one.html { alias /some/other/path/script.php; include fastcgi.conf; } So I was confused why this is not working: location ^~ /my-long-prefix-goes-here { alias /another/different/path/anotherscript.php; include fastcgi.conf; } In other words, alias of exact location match does a cheap "rewrite" perfectly. But now I want to match addresses like: /my-long-prefix-goes-here /my-long-prefix-goes-herexxx /my-long-prefix-goes-here/ /my-long-prefix-goes-here/filename Only the first one works, the others are 404. Is Nginx adding the tail end of the matched prefix to the aliased location? I tried to make my alias: alias /another/different/path/anotehrscript.php?; so the stuff on the end turns into a query arg which php can ignore. But that didn't work. I also tried to use regex to match the location: location ~ ^/my-long-prefix-goes-here { But now NONE of the addresses work - even the exact match is 404. Why?? I found this was the only way to make it work: root /another/different/path; rewrite ^(.*)$ /anotehrscript.php break; In this situation is rewrite the only solution? From nginx-forum at nginx.us Mon Jun 22 10:24:20 2015 From: nginx-forum at nginx.us (smsmaddy1981) Date: Mon, 22 Jun 2015 06:24:20 -0400 Subject: DNS configuration to invoke complete URL In-Reply-To: <20150608224940.GY2957@daoine.org> References: <20150608224940.GY2957@daoine.org> Message-ID: Thanks everyone and sorry for the late reply here...due to OS upgrade from IT team.
I tried using return directive to invoke the absolute URL and below case is true : -- if they are two different servers, then on the workspace one: location = / { return 301 http://workspace.corp.no/workspace/agentLogin; } This resulted in an error message during nginx restart "unknown directive return". And working nginx version is #1.6.3. FYI, nginx installation was successful installing pcre, pcre-devel packages (latest) Not sure, how to resolve this (return directive issue)??? ----------------------------------------------- Later I attempted to configure below way and IS WORKING... server { listen 80; server_name workspace.corp.no; location / { proxy_pass http://workspace123/workspace/agentLogin/; } } NOT SURE, IF THIS IS THE RIGHT APPROACH? //Maddy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258982,259782#msg-259782 From nginx-forum at nginx.us Mon Jun 22 10:33:58 2015 From: nginx-forum at nginx.us (smsmaddy1981) Date: Mon, 22 Jun 2015 06:33:58 -0400 Subject: PCRE Message-ID: Hi Experts, I am working on NGINX-1.6.3 version. Installation is successful using latest pcre, pcre-devel packages too.. While accessing the service, static contents were not loaded. As suggested in google, I tried configuring as below where the static contents are available @ "http://livetests123/livetest/WEB-INF/classes/static/" server { listen 80; server_name livetest.corp.com; location ~"*\.(js|jpg|png|css)$" { root http://livetests123/livetest/; expires 30d; } location / { proxy_pass http://livetest123/livetest/login/; } } On the first step while starting Nginx, I could see below message about PCRE nginx: [emerg] using regex ""*\.(js|jpg|png|css)$"" requires PCRE library I have confirmed again with yum install PCRE that the "latest version is already installed and nothing to do" message in return. Pls.
assist for the below queries: (1) How to fix the issue - nginx: [emerg] using regex ""*\.(js|jpg|png|css)$"" requires PCRE library (2) Post which, how to configure in achieving static content available @ "http://livetests123/livetest/WEB-INF/classes/static/"? Best regards, Maddy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259783,259783#msg-259783 From mdounin at mdounin.ru Mon Jun 22 11:09:35 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 22 Jun 2015 14:09:35 +0300 Subject: PCRE In-Reply-To: References: Message-ID: <20150622110935.GI26357@mdounin.ru> Hello! On Mon, Jun 22, 2015 at 06:33:58AM -0400, smsmaddy1981 wrote: > Hi Experts, > > I am working on NGINX-1.6.3 version. Installation is successful using latest > pcre, pcre-devel packages too.. > > While accessing the service, static contents were not loaded. > As suggested in google, I tried configuring as below where the static > contents are available @ > "http://livetests123/livetest/WEB-INF/classes/static/" > > server { > listen 80; > server_name livetest.corp.com; > > location ~"*\.(js|jpg|png|css)$" { > root http://livetests123/livetest/; The "root" directive is to configure filesystem paths. If you want nginx to proxy requests to another URL, you have to use proxy_pass. See details at http://nginx.org/r/proxy_pass. > expires 30d; > } > > location /{ > proxy_pass http://livetest123/livetest/login/; > } > } > > On the first step while starting Nginx, I could see below message about > PCRE > nginx: [emerg] using regex ""*\.(js|jpg|png|css)$"" requires PCRE library The error message suggests that your nginx binary was compiled without the PCRE library. You have to recompile nginx if you want it to support regexes. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Jun 22 12:35:32 2015 From: nginx-forum at nginx.us (smsmaddy1981) Date: Mon, 22 Jun 2015 08:35:32 -0400 Subject: Static Content on Different Server Isn't Loaded??
Message-ID: <1dbbc401fdb9832dc1f16163992f0090.NginxMailingListEnglish@forum.nginx.org> Hi Team, I have static content available on the remote server (say NODE 02) and PATH: ../livetest/WEB-INF/static/classes/ under which I have /image, /js, /styles folders I have installed nginx-1.8.0 on server (Say Node 01). While accessing the application, the static contents are not getting loaded. Pls. suggest? I tried below option: location ~*\.(js|jpg|png|css)$ { root /WEB-INF/classes/static/; http:///livetest/WEB-INF/classes/static/classes; expires 30d; } The resultant in access.log shows that the path is weird /var/gvp/Nginx/nginx-1.8.0/http:///livetest/WEB-INF/classes/static/classes/livetest/.... Best regards, Maddy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259786,259786#msg-259786 From me at myconan.net Mon Jun 22 18:21:02 2015 From: me at myconan.net (Edho Arief) Date: Tue, 23 Jun 2015 03:21:02 +0900 Subject: Understanding alias (used as rewrite) In-Reply-To: <1434967505.62578.YahooMailBasic@web142404.mail.bf1.yahoo.com> References: <1434967505.62578.YahooMailBasic@web142404.mail.bf1.yahoo.com> Message-ID: On Mon, Jun 22, 2015 at 7:05 PM, E.B. wrote: > Hi, I'm confused about the details of "alias" used as > a kind of rewrite (which should be more efficient as > I understand it, as long as its appropriately used). > > I found I can do this: > > location = /path/number/one.html { > alias /some/other/path/script.php; > include fastcgi.conf; > } > > So I was confucsed why this not working: > > location ^~ /my-long-prefix-goes-here { > alias /another/different/path/anotherscript.php; > include fastcgi.conf; > } > > In other words, alias of exact location match does > a cheap "rewrite" perfectly. But now I want to match > addresses like: > > /my-long-prefix-goes-here > /my-long-prefix-goes-herexxx > /my-long-prefix-goes-here/ > /my-long-prefix-goes-here/filename > > Only the first one works, the others are 404. 
Is > Nginx adding the tail end of the matched prefix > to the aliased location? I tried to make my alias: > > alias /another/different/path/anotehrscript.php?; > > so the stuff on the end turns into a query arg which > php can ignore. But that didn't work. > > I also tried to use regex to match the location: > > location ~ ^/my-long-prefix-goes-here { > > But now NONE of the addresses work - even the > exact match is 404. Why?? > > I found this was the only way to make it work: > > root /another/different/path; > rewrite ^(.*)$ /anotehrscript.php break; > > In this situation is rewrite the only solution? > You're probably looking for this fastcgi_param SCRIPT_FILENAME /another/different/path/anotehrscript.php; From emailbuilder88 at yahoo.com Mon Jun 22 20:32:17 2015 From: emailbuilder88 at yahoo.com (E.B.) Date: Mon, 22 Jun 2015 13:32:17 -0700 Subject: Understanding alias (used as rewrite) In-Reply-To: Message-ID: <1435005137.71617.YahooMailBasic@web142405.mail.bf1.yahoo.com> > > So I was confucsed why this not working: > > > > location ^~ /my-long-prefix-goes-here { > > alias /another/different/path/anotherscript.php; > > include fastcgi.conf; > > } > > > > In other words, alias of exact location match does > > a cheap "rewrite" perfectly. But now I want to match > > addresses like: > > > > /my-long-prefix-goes-here > > /my-long-prefix-goes-herexxx > > /my-long-prefix-goes-here/ > > /my-long-prefix-goes-here/filename > > > > Only the first one works, the others are 404. Is > > Nginx adding the tail end of the matched prefix > > to the aliased location? I tried to make my alias: > > > > alias /another/different/path/anotehrscript.php?; > > > > so the stuff on the end turns into a query arg which > > php can ignore. But that didn't work. > > > > I also tried to use regex to match the location: > > > > location ~ ^/my-long-prefix-goes-here { > > > > But now NONE of the addresses work - even the > > exact match is 404. Why?? 
> > > > I found this was the only way to make it work: > > > > root /another/different/path; > > rewrite ^(.*)$ /anotehrscript.php break; > > > > In this situation is rewrite the only solution? > > > > > You're probably looking for this > > fastcgi_param SCRIPT_FILENAME /another/different/path/anotehrscript.php; Excellent point! Thanks you! However, what if the alias was NOT to a php file? Is using rewrite the only solution - alias not able to working? What is alias doing to cause 404? From rusdso at gmail.com Mon Jun 22 23:00:51 2015 From: rusdso at gmail.com (Russel D'Souza) Date: Mon, 22 Jun 2015 16:00:51 -0700 Subject: NTLM or HTTP Digest authentication to Parent proxy Message-ID: Hi, I need to configure a proxy on my machine to use another proxy (installed on another machine) as the parent proxy. The parent proxy requires HTTP Digest or NTLM authorization. I want to to set up a local proxy which deals with the parent proxy's authorization details and provides authorization free access to programs that access the network through my machine. [image: Inline image 1] Basically, as in the above figure the user has to access the Internet going through an NGINX Proxy (with no auth). Next the NGINX proxy must add the DIGEST/NTLM usename:password to the message and pass it on to the DIGEST/NTLM Server and give the results back to the user. Please let me know if there is any way to resolve this issue using NGINX and what keywords should I search for to find the relevant steps/docs to do this. If NGINX does not have this feature, what is the best way to achieve this ? Thanks, Russel -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 23927 bytes Desc: not available URL: From nginx-forum at nginx.us Mon Jun 22 23:01:11 2015 From: nginx-forum at nginx.us (smsmaddy1981) Date: Mon, 22 Jun 2015 19:01:11 -0400 Subject: PCRE In-Reply-To: <20150622110935.GI26357@mdounin.ru> References: <20150622110935.GI26357@mdounin.ru> Message-ID: <3cfd4902fb202f03f22950a12ef2cf2d.NginxMailingListEnglish@forum.nginx.org> I have installed 1.8.0 and configure is successful and no PCRE exceptions are thrown now. But, I face issues in loading static content available on different servers. How to achieve this, pls. suggest? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259783,259795#msg-259795 From me at myconan.net Tue Jun 23 03:17:24 2015 From: me at myconan.net (Edho Arief) Date: Tue, 23 Jun 2015 12:17:24 +0900 Subject: Understanding alias (used as rewrite) In-Reply-To: <1435005137.71617.YahooMailBasic@web142405.mail.bf1.yahoo.com> References: <1435005137.71617.YahooMailBasic@web142405.mail.bf1.yahoo.com> Message-ID: On Tue, Jun 23, 2015 at 5:32 AM, E.B. wrote: >> >> You're probably looking for this >> >> fastcgi_param SCRIPT_FILENAME /another/different/path/anotehrscript.php; > > Excellent point! Thanks you! > However, what if the alias was NOT to a php file? Is using > rewrite the only solution - alias not able to working? What > is alias doing to cause 404? This config works for me. location ~ ^/test { alias /data/public_html/somefile.php; include fastcgi.conf; fastcgi_pass 127.0.0.1:8900; } Check what your error log says to find out why it returns 404. Also the question mark you put will become part of the file name PHP looks for (it'll look for 'file.php?' instead of 'file.php').
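To round out this alias thread: with a prefix location, "alias" substitutes only the matched prefix, so the remainder of the URI is appended to the alias path (e.g. /my-long-prefix-goes-here/filename is looked up as the alias target plus "/filename"), which is why only the exact-match request found the file. When every URI under a prefix should map to one non-PHP file, the rewrite form already shown in the thread is the usual pattern; here is a minimal sketch (all paths are hypothetical examples):

```nginx
# Hypothetical: answer every URI under the prefix with one static file.
location ^~ /my-long-prefix-goes-here {
    root /another/different/path;
    # "break" stops rewrite processing and keeps the request in this
    # location, so nginx serves $document_root/landing.html no matter
    # what followed the prefix in the original URI.
    rewrite ^ /landing.html break;
}
```

The query-string trick from the thread does not help here because everything after the matched prefix is treated as part of the path, not as arguments.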
From nginx-forum at nginx.us Tue Jun 23 06:06:48 2015 From: nginx-forum at nginx.us (pascual.lin) Date: Tue, 23 Jun 2015 02:06:48 -0400 Subject: solaris use eventport proxy upstream bug Message-ID: environment: solaris11 + http proxy + upstream + keepalive + method get first request ok second request will hangup, after eventport del event there is no event add NGX_READ_EVENT or NGX_WRITE_EVENT here is my patch

--- ngx_event_connect.c.src 2015-06-23 12:00:49.232424329 +0800
+++ ngx_event_connect.c 2015-06-23 12:01:17.644539000 +0800
@@ -24,6 +24,11 @@
     rc = pc->get(pc, pc->data);
     if (rc != NGX_OK) {
+        c = pc->connection;
+        rev = c->read;
+        wev = c->write;
+        rc = -1;
+        goto register_event;
         return rc;
     }
@@ -195,6 +200,8 @@
         return NGX_OK;
     }
+register_event:
+
     if (ngx_event_flags & NGX_USE_CLEAR_EVENT) {

Sorry for my poor English Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259798,259798#msg-259798 From office at 5hosting.com Tue Jun 23 09:14:44 2015 From: office at 5hosting.com (5hosting GmbH) Date: Tue, 23 Jun 2015 11:14:44 +0200 Subject: High load due to reload Message-ID: <011201d0ad95$0b79ac00$226d0400$@5hosting.com> Hi guys, I have a small problem with a nginx system that acts as a loadbalancing proxy. We do have lots of vhosts and ssl certificates and each time we do a /etc/init.d/nginx reload, the load of our server goes up to 20 due to swapping. Is there any other way to reload nginx to get aware of ssl or vhost changes without getting high loads? Jürgen -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6079 bytes Desc: not available URL: From smith.hua at zoom.us Tue Jun 23 09:20:15 2015 From: smith.hua at zoom.us (smith) Date: Tue, 23 Jun 2015 09:20:15 -0000 Subject: when work processor will auto restart?
Message-ID: <019d01d0ad95$d3396230$79ac2690$@zoom.us> Hi, I found that sometimes the nginx work processor will auto restart with no reload/restart command executed. When the nginx run some days, found that the pid of work processor changed but master pid not changed. I want to know when nginx will auto restart the work processor? Best regards Smith -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Jun 23 13:14:14 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 23 Jun 2015 16:14:14 +0300 Subject: when work processor will auto restart? In-Reply-To: <019d01d0ad95$d3396230$79ac2690$@zoom.us> References: <019d01d0ad95$d3396230$79ac2690$@zoom.us> Message-ID: <2551797.c0KeoyBSGi@vbart-workstation> On Tuesday 23 June 2015 09:20:15 smith wrote: > Hi, > > > > I found that sometimes the nginx work processor will auto restart with no > reload/restart command executed. > > When the nginx run some days, found that the pid of work processor changed > but master pid not changed. > > I want to know when nginx will auto restart the work processor? > When a worker process is crashed. You should check the error log. wbr, Valentin V. Bartenev From lists at der-ingo.de Tue Jun 23 14:43:23 2015 From: lists at der-ingo.de (Ingo Lafrenz) Date: Tue, 23 Jun 2015 16:43:23 +0200 Subject: SSL on/off on same port and IP Message-ID: <5589708B.9020607@der-ingo.de> Hi, consider the following very simple nginx config: http { server { listen 127.0.0.1:123; server_name abc; } server { listen 127.0.0.1:123 ssl; server_name xyz; ssl_certificate...; } } In words: I instruct nginx to listen on the same port and IP, one time without ssl, one time with ssl. IMHO this is a broken config, however nginx accepts it. What would you say? Should nginx reject such a config? Right now you only get an error at request time.
It gets even worse, if the 2nd server is configured with the ssl directive instead of "listen ssl": server { listen 127.0.0.1:123; server_name xyz; ssl on; ssl_certificate...; } In that case you don't even see an error in the logs anymore and clients can't connect via https anymore. Cheers, Ingo =;-> From nginx-forum at nginx.us Tue Jun 23 14:59:44 2015 From: nginx-forum at nginx.us (sergiofm) Date: Tue, 23 Jun 2015 10:59:44 -0400 Subject: Nginx and Apach2 Message-ID: <928b7b5d16290f42b55a111d4dc9f6d3.NginxMailingListEnglish@forum.nginx.org> I have nginx and apache2 in the same Ubuntu Server 14.04 and i want to use nginx as proxy server. My idea is when people write www.mysite.com, nginx redirect to apache and i can do that, but the problem is that apache is in port 81 and when nginx redirect, the url show www.mysite.com:81/mysite. I just want people to see www.mysite.com, without :81/mysite. Thanks, Sérgio Marques Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259809,259809#msg-259809 From nginx-forum at nginx.us Tue Jun 23 15:21:13 2015 From: nginx-forum at nginx.us (nngin) Date: Tue, 23 Jun 2015 11:21:13 -0400 Subject: Convert Apache .htaccess rewrite to nginx Message-ID: <88e0ad2d1117b67e776b6e11383a1a81.NginxMailingListEnglish@forum.nginx.org> Hello, I am new to nginx and I am having trouble converting htaccess rewrite to nginx rewrite. Please help me convert the following mod_rewrites with a brief explanation. And should I put the nginx rewrite back into the .htaccess file or is there a designated config file to put the nginx location blocks? Options -MultiViews RewriteEngine On # Redirect Trailing Slashes... RewriteRule ^(.*)/$ /$1 [L,R=301] # Handle Front Controller...
RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^ index.php [L] Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259810,259810#msg-259810 From wandenberg at gmail.com Tue Jun 23 15:25:02 2015 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Tue, 23 Jun 2015 12:25:02 -0300 Subject: Nginx and Apache2 In-Reply-To: <928b7b5d16290f42b55a111d4dc9f6d3.NginxMailingListEnglish@forum.nginx.org> References: <928b7b5d16290f42b55a111d4dc9f6d3.NginxMailingListEnglish@forum.nginx.org> Message-ID: Take a look at the port_in_redirect and proxy_pass configuration. You probably have to proxy_pass to Apache and ensure that the port number is removed from the Apache response when there is a redirect. On Tue, Jun 23, 2015 at 11:59 AM, sergiofm wrote: > I have nginx and apache2 on the same Ubuntu Server 14.04 and I want to use > nginx as a proxy server. My idea is that when people type www.mysite.com, nginx > redirects to Apache, and I can do that, but the problem is that Apache is on > port 81, and when nginx redirects, the URL shows www.mysite.com:81/mysite. I > just want people to see www.mysite.com, without :81/mysite. > > Thanks, > Sérgio Marques > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,259809,259809#msg-259809 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxcvbn4038 at gmail.com Tue Jun 23 20:19:48 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Tue, 23 Jun 2015 16:19:48 -0400 Subject: Caching fastcgi url Message-ID: Hello, I am looking for advice. I am using nginx to terminate SSL and forward the request to php via fastcgi. Of all the requests I am forwarding to fastcgi there is one particular URL that I want to cache, hopefully bypassing communication with the fastcgi and php processes altogether.
- Would I need to define a separate location stanza for the URL I want to cache and duplicate all of the fastcgi configuration that is normally required? Or is there a way to indicate that of all the fastcgi requests only the one matching /xyz is to be cached? - If multiple request for the same URL arrive at around the same time, and the cache is stale, they will all wait on the one request that is refreshing the cache, correct? So I should only see one request for the cached location per worker per minute on the backend? - Since my one URI is fairly small, can I indicate that no file backing is needed? -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Jun 23 21:36:48 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 23 Jun 2015 22:36:48 +0100 Subject: NTLM or HTTP Digest authentication to Parent proxy In-Reply-To: References: Message-ID: <20150623213648.GG23844@daoine.org> On Mon, Jun 22, 2015 at 04:00:51PM -0700, Russel D'Souza wrote: Hi there, > I need to configure a proxy on my machine to use another proxy (installed > on another machine) as the parent proxy. nginx is not a proxy server, and does not talk to proxy servers. > Please let me know if there is any way to resolve this issue using NGINX No. Search for proxy servers; they'll have a better idea of what they can do. f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Jun 23 21:48:46 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 23 Jun 2015 22:48:46 +0100 Subject: Static Content on Different Server Isn't Loaded?? 
In-Reply-To: <1dbbc401fdb9832dc1f16163992f0090.NginxMailingListEnglish@forum.nginx.org> References: <1dbbc401fdb9832dc1f16163992f0090.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150623214846.GH23844@daoine.org> On Mon, Jun 22, 2015 at 08:35:32AM -0400, smsmaddy1981 wrote: Hi there, > I have static content available on the remote server (say NODE 02) and PATH: > ../livetest/WEB-INF/static/classes/ > under which I have /image, /js, /styles folders nginx doesn't know about static content. nginx knows about requests that should be served from this filesystem -- directives root (http://nginx.org/r/root) or alias (http://nginx.org/r/alias) are probably most interesting -- and requests that should be proxy_pass'ed to another web server (http://nginx.org/r/proxy_pass) and requests that should be fastcgi_pass'ed to a fastcgi server (http://nginx.org/r/fastcgi_pass) and a few other things. > While accessing the application, the static contents are not getting > loaded. Can you show one request for one thing that you consider to be some static content? "curl" is usually a good command for showing the request made and the response received. What do you want nginx to do with this request? The answer should (probably) be "serve *this named file* from the local filesystem", or "tell the client to go and request *this other url* from this or another web server", or "fetch *this specific url* from *this other web server*, and send it to the client". > location ~*\.(js|jpg|png|css)$ { > root /WEB-INF/classes/static/; > http:///livetest/WEB-INF/classes/static/classes; > expires 30d; > } I get nginx: [emerg] unknown directive "http:///livetest/WEB-INF/classes/static/classes" in /usr/local/nginx/conf/nginx.conf:15 If you are going to show config you use, please copy-paste and do not re-type. After you describe what exactly you want nginx to do, if the answer is not already clear to you, possibly someone will be able to help. 
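A minimal sketch of the "serve this named file from the local filesystem" option described above; the root path here is a hypothetical stand-in for wherever the static tree actually lives:

```nginx
location ~* \.(js|jpg|png|css)$ {
    # With root, the full request URI is appended to the path, so a
    # request for /styles/site.css is served from
    # /srv/livetest/WEB-INF/static/classes/styles/site.css
    root /srv/livetest/WEB-INF/static/classes;
    expires 30d;
}
```

With root the whole URI is appended to the given path, which is the usual choice for whole directories of static files; alias replaces the matched part of the URI instead.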
f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Jun 23 22:07:49 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 23 Jun 2015 23:07:49 +0100 Subject: Caching fastcgi url In-Reply-To: References: Message-ID: <20150623220749.GI23844@daoine.org> On Tue, Jun 23, 2015 at 04:19:48PM -0400, CJ Ess wrote: Hi there, > - Would I need to define a separate location stanza for the URL I want to > cache and duplicate all of the fastcgi configuration that is normally > required? Or is there a way to indicate that of all the fastcgi requests > only the one matching /xyz is to be cached? fastcgi caching is handled by the fastcgi_cache directive, documented at http://nginx.org/r/fastcgi_cache It is set per-location. See also directives like fastcgi_cache_bypass and fastcgi_no_cache. It is probably simplest to have one exact-match location for this URL and not worry about the no_cache side of things. "all the configuration that is normally required" is typically four lines -- one "include" of common stuff; one or two extra fastcgi_param values, and a fastcgi_pass. > - If multiple requests for the same URL arrive at around the same time, and > the cache is stale, they will all wait on the one request that is > refreshing the cache, correct? So I should only see one request for the > cached location per worker per minute on the backend? If that's what you want, you can probably configure it. http://nginx.org/r/fastcgi_cache_use_stale http://nginx.org/r/fastcgi_cache_lock > - Since my one URI is fairly small, can I indicate that no file backing is > needed? I don't think so. But you can have fastcgi_cache_path set to a ramdisk, I think.
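Put together, the exact-match cached location sketched in the answer above might look like this; the cache zone name, paths, and fastcgi_pass target are all assumptions, not details from the original post:

```nginx
# http{} level: the cache store; the path could point at a ramdisk mount
fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2 keys_zone=phpcache:10m;

server {
    # Only this exact URL is ever cached
    location = /xyz {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /var/www/app/xyz.php;
        fastcgi_pass unix:/run/php-fpm.sock;

        fastcgi_cache phpcache;
        fastcgi_cache_key $scheme$request_method$host$request_uri;
        fastcgi_cache_valid 200 1m;
        fastcgi_cache_lock on;            # collapse concurrent cache misses
        fastcgi_cache_use_stale updating; # serve stale while one request refreshes
    }
}
```

fastcgi_cache_lock plus fastcgi_cache_use_stale updating is what gives the "one backend request per refresh" behaviour asked about in the question.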
f -- Francis Daly francis at daoine.org From zxcvbn4038 at gmail.com Tue Jun 23 22:27:30 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Tue, 23 Jun 2015 18:27:30 -0400 Subject: Caching fastcgi url In-Reply-To: <20150623220749.GI23844@daoine.org> References: <20150623220749.GI23844@daoine.org> Message-ID: So it looks like you're saying the best way to do it is to create a separate location, duplicate the fastcgi setup in that location, and add the fastcgi_cache directives. I can work with that; however, I came across this example while googling ( https://gist.github.com/magnetikonline/10450786) that uses "if" to set a variable which I could use to match on the URL and trigger fastcgi_cache_bypass for everything not matching. Is "if" so toxic that I shouldn't consider doing it this way? On Tue, Jun 23, 2015 at 6:07 PM, Francis Daly wrote: > On Tue, Jun 23, 2015 at 04:19:48PM -0400, CJ Ess wrote: > > Hi there, > > > - Would I need to define a separate location stanza for the URL I want to > > cache and duplicate all of the fastcgi configuration that is normally > > required? Or is there a way to indicate that of all the fastcgi requests > > only the one matching /xyz is to be cached? > > fastcgi caching is handled by the fastcgi_cache directive, documented > at http://nginx.org/r/fastcgi_cache > > It is set per-location. > > See also directives like fastcgi_cache_bypass and fastcgi_no_cache. > > It is probably simplest to have one exact-match location for this URL > and not worry about the no_cache side of things. > > "all the configuration that is normally required" is typically four lines > -- one "include" of common stuff; one or two extra fastcgi_param values, > and a fastcgi_pass. > > > - If multiple requests for the same URL arrive at around the same time, > and > > the cache is stale, they will all wait on the one request that is > > refreshing the cache, correct? So I should only see one request for the > > cached location per worker per minute on the backend?
> > If that's what you want, you can probably configure it. > > http://nginx.org/r/fastcgi_cache_use_stale > http://nginx.org/r/fastcgi_cache_lock > > > - Since my one URI is fairly small, can I indicate that no file backing > is > > needed? > > I don't think so. But you can have fastcgi_cache_path set to a ramdisk, > I think. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Jun 23 22:39:18 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 23 Jun 2015 23:39:18 +0100 Subject: Caching fastcgi url In-Reply-To: References: <20150623220749.GI23844@daoine.org> Message-ID: <20150623223918.GJ23844@daoine.org> On Tue, Jun 23, 2015 at 06:27:30PM -0400, CJ Ess wrote: Hi there, > So looks like your saying the best way to do it is to do a separate > location and duplicate the fastcgi setup in that location and add the > fastcgi_cache stuff. It strikes me as clearer to have one extra location{} than to have a map{} or an if() to set a variable and then use fastcgi_no_cache and/or fastcgi_cache_bypass with that variable. But either way should work. I'm not going to be the one trying to understand it in a year's time. > I can work with that, however I came across this example while googling ( > https://gist.github.com/magnetikonline/10450786) that uses "if" to set a > variable which I could use to match on the URL and trigger > fastcgi_cache_bypass for everything not matching. Is "if" so toxic that I > shouldn't consider doing it this way? "if" seems fine for that, so long as it is not inside location{}. I'd probably use "map", though, if I were going to do that. f -- Francis Daly francis at daoine.org From emailbuilder88 at yahoo.com Wed Jun 24 06:44:33 2015 From: emailbuilder88 at yahoo.com (E.B.) 
Date: Tue, 23 Jun 2015 23:44:33 -0700 Subject: Understanding alias (used as rewrite) In-Reply-To: Message-ID: <1435128273.71376.YahooMailBasic@web142404.mail.bf1.yahoo.com> Thanks for your ongoing help! I hope someone can advise further > >> You're probably looking for this > >> > >> fastcgi_param SCRIPT_FILENAME /another/different/path/anotehrscript.php; > > > > Excellent point! Thank you! > > However, what if the alias was NOT to a php file? Is using > > rewrite the only solution - is alias not able to work? What > > is alias doing to cause the 404? > > This config works for me. > > location ~ ^/test { > alias /data/public_html/somefile.php; > > include fastcgi.conf; > > fastcgi_pass 127.0.0.1:8900; > } Yes, I had also gotten something similar to work, but only for the exact-match URI -- the first in my list of possible URIs that must work: /my-long-prefix-goes-here /my-long-prefix-goes-herexxx /my-long-prefix-goes-here/ /my-long-prefix-goes-here/filename I still get 404 for the last 3. That's why I think it was appending the end of the original URI to the alias target, but I'm not sure. (I used a prefix match, not a regex match, but I have now tried your version with regex and get the same result.) > Check what your error log says to find out why it returns 404. I'm not getting anything in my log, and I don't know why (the access log shows the requested URI, but I can't find the "file not found" error). I checked the PHP log too, and the basic nginx error log. I have an access and error log set for this domain. Is that where the "file not found" error should be going? > Also the question mark you put will become part of the file name PHP looks > for (it'll look for 'file.php?' instead of 'file.php'). I was using the ? to try to avoid unknown file lookups, because I thought the part of the URI after the matching location prefix was being added to the aliased location. I thought I could pass it as a query string to PHP so I could ignore it that way.
But you helped clarify that it won't work that way, so I partly understand why I am getting the 404, thank you. Part of what is not clear is if/when alias will have the rest of the URI after the matched prefix added to it. The doc: http://nginx.org/en/docs/http/ngx_http_core_module.html#alias says nothing, but it begins to appear to me that if you end the alias value with / then it will have the rest of the original URI appended, but if you don't end the alias with / then it will be taken without changes. If that's not right, can someone explain? And if that is right, why isn't it documented? (A simple and important key feature like that should be documented!) From djczaski at gmail.com Wed Jun 24 10:12:49 2015 From: djczaski at gmail.com (Danomi Czaski) Date: Wed, 24 Jun 2015 06:12:49 -0400 Subject: Nginx not logging to socket. In-Reply-To: References: Message-ID: Hello, On Sun, Jun 21, 2015 at 8:19 AM, Andrew Holway wrote: > Hallo! > > Using rsyslog I have set up a logging socket and confirmed that its working > by piping in some stuff to "logger -u /dev/log" > nginx/1.8.0 does not seem to be dumping in logs however. The nginx config is > below.. Any luck? I'm seeing the same problem on 1.7.12. From nginx-forum at nginx.us Wed Jun 24 10:45:38 2015 From: nginx-forum at nginx.us (mudgil.gaurav) Date: Wed, 24 Jun 2015 06:45:38 -0400 Subject: 502 Error Issue Message-ID: <45db5a9287fd6171d6abe4841b249ee8.NginxMailingListEnglish@forum.nginx.org> Hi All, We recently migrated from Apache to nginx. OS - CentOS 6 Nginx - 1.6.2 (4 CPU 8 GB RAM) PHP-FPM (PHP 5.4.37) (4 CPU 8 GB RAM) APC (code cache, APC 3.1.13 beta) Memcache (data cache) I have an upstream of 4 PHP servers for the php-fpm service. I am facing two issues: 1. I am getting a 502 status code in the access log, i.e. nginx is generating a 502 Bad Gateway error. 2.
Connections randomly pile up on nginx (up to 2500), while on the php-fpm servers we do not see any load in top; only 1-2 processes seem to be running during that time. After 2-3 minutes it starts working normally again and connections drop back to normal, all without any restart. Please check the nginx configuration inline user nguser; worker_processes 4; pid /var/run/nginx.pid; worker_rlimit_nofile 70000; events { worker_connections 1024; multi_accept on; use epoll; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_requests 1000; keepalive_timeout 65; send_timeout 15; types_hash_max_size 2048; server_tokens off; client_max_body_size 50M; client_body_buffer_size 1m; client_body_timeout 15; client_header_timeout 15; server_names_hash_bucket_size 64; server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## #access_log on; access_log /var/log/nginx/access.log combined; #error_log /dev/null crit; error_log /var/log/nginx/error.log error; #gzip on; gzip on; gzip_static on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; gzip_min_length 512; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/css text/javascript text/xml text/plain text/x-component application/javascript application/x-javascript application/json application/xml application/rss+xml font/truetype application/x-font-ttf font/opentype application/vnd.ms-fontobject image/svg+xml; open_file_cache max=2000 inactive=20s; open_file_cache_valid 60s; open_file_cache_min_uses 5; open_file_cache_errors off; fastcgi_buffers 256 16k; fastcgi_buffer_size 128k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_connect_timeout 300; fastcgi_send_timeout 300; fastcgi_read_timeout 300; fastcgi_intercept_errors on; reset_timedout_connection on; upstream fastcgiservers { least_conn; server xxx.xxx.xx.xx:9000; server xxx.xxx.xx.xx:9000; server
xxx.xxx.xx.xx:9000; server xxx.xxx.xx.xx:9000; } include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } location ~ \.php$ { try_files $uri =404; include fastcgi_params; fastcgi_pass fastcgiservers; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } ---------------------------------------------------------PHP-FPM Settings ------------------------------------------------------------------ pm.max_children = 150 pm.start_servers = 90 pm.min_spare_servers = 70 pm.max_spare_servers = 100 pm.max_requests = 1500 Any help or suggestion would be greatly appreciated. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259833,259833#msg-259833 From me at myconan.net Wed Jun 24 10:50:25 2015 From: me at myconan.net (Edho Arief) Date: Wed, 24 Jun 2015 19:50:25 +0900 Subject: Understanding alias (used as rewrite) In-Reply-To: <1435128273.71376.YahooMailBasic@web142404.mail.bf1.yahoo.com> References: <1435128273.71376.YahooMailBasic@web142404.mail.bf1.yahoo.com> Message-ID: On Wed, Jun 24, 2015 at 3:44 PM, E.B. wrote: > Thanks for your ongoing helps! I hope someone > can advise further > >> >> You're probably looking for this >> >> >> >> fastcgi_param SCRIPT_FILENAME /another/different/path/anotehrscript.php; >> > >> > Excellent point! Thanks you! >> > However, what if the alias was NOT to a php file? Is using >> > rewrite the only solution - alias not able to working? What >> > is alias doing to cause 404? >> >> This config works for me. >> >> location ~ ^/test { >> alias /data/public_html/somefile.php; >> >> include fastcgi.conf; >> >> fastcgi_pass 127.0.0.1:8900; >> } > > Yes, I had also got similar to work, but > only for the exact match uri-- the first > in my list of possible uris that must work: > > /my-long-prefix-goes-here > /my-long-prefix-goes-herexxx > /my-long-prefix-goes-here/ > /my-long-prefix-goes-here/filename > > I still get 404 for the last 3. 
That's why > I think it was appending the end of the original > URI to the alias target, but I'm not sure. > you need the regexp-based alias (as in my example). > (I used a prefix match, not a regex match, but I > have now tried your version with regex and get the same > result.) > >> Check what your error log says to find out why it returns 404. > > I'm not getting anything in my log, and I don't know > why (the access log shows the requested URI, but > I can't find the "file not found" error). I checked the > PHP log too, and the basic nginx error log. I have an > access and error log set for this domain. Is > that where the "file not found" error should be going? > >> Also the question mark you put will become part of the file name PHP looks >> for (it'll look for 'file.php?' instead of 'file.php'). > > I was using the ? to try to avoid unknown file > lookups, because I thought the part of the URI > after the matching location prefix was being > added to the aliased location. I thought I could > pass it as a query string to PHP so I could ignore > it that way. > > But you helped clarify that it won't work that way, > so I partly understand why I am getting the 404, thank > you. > > Part of what is not clear is if/when alias will have > the rest of the URI after the matched prefix added to > it. The doc: > > http://nginx.org/en/docs/http/ngx_http_core_module.html#alias > > says nothing, but it begins to appear to me that if > you end the alias value with / then it will have > the rest of the original URI appended, but if you > don't end the alias with / then it will be taken without > changes. > > If that's not right, can someone explain? > it depends on the type of location. If it's a regexp-based alias (location ~ ^/some/(regexp)), the full path is replaced with whatever is in the alias parameter, but otherwise the trailing part of the request URI (the part after the path specified in the location) is appended to the alias.
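The two cases described above can be put side by side; the paths and the fastcgi_pass target are hypothetical, and the second block mirrors the regex example quoted earlier in the thread:

```nginx
# Prefix location: the part of the URI after /static/ is appended to
# the alias, so /static/a.js maps to /data/files/a.js
location /static/ {
    alias /data/files/;
}

# Regex location: the alias value is used as-is and nothing is appended,
# so every URI matching the pattern maps to the same file
location ~ ^/my-long-prefix-goes-here {
    alias /data/public_html/somefile.php;
    include fastcgi.conf;
    fastcgi_pass unix:/run/php-fpm.sock;
}
```

This is why the regex form handles /my-long-prefix-goes-here, /my-long-prefix-goes-herexxx, and /my-long-prefix-goes-here/filename alike, while the prefix form tries to append the trailing part and 404s.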
From vl at nginx.com Wed Jun 24 11:03:18 2015 From: vl at nginx.com (Vladimir Homutov) Date: Wed, 24 Jun 2015 14:03:18 +0300 Subject: Nginx not logging to socket In-Reply-To: References: Message-ID: <20150624110318.GA4347@vlpc.nginx.com> On Wed, Jun 24, 2015 at 06:12:49AM -0400, Danomi Czaski wrote: > Hello, > > On Sun, Jun 21, 2015 at 8:19 AM, Andrew Holway > wrote: > > Hallo! > > > > Using rsyslog I have set up a logging socket and confirmed that its working > > by piping in some stuff to "logger -u /dev/log" > > nginx/1.8.0 does not seem to be dumping in logs however. The nginx config is > > below.. > > Any luck? I'm seeing the same problem on 1.7.12. > do you see some errors in the local error log? If nginx is unable to send data to socket for some reasons (check socket permissions, selinux and similar), you will see errors in the local log file. From nginx-forum at nginx.us Wed Jun 24 12:11:28 2015 From: nginx-forum at nginx.us (bagas) Date: Wed, 24 Jun 2015 08:11:28 -0400 Subject: unknown directive "thread_pool" In-Reply-To: <6C4D3983-F9E9-47AF-B9A0-9576C76BAB1F@nginx.com> References: <6C4D3983-F9E9-47AF-B9A0-9576C76BAB1F@nginx.com> Message-ID: <38649b7e498a796bed0ee4db7821d477.NginxMailingListEnglish@forum.nginx.org> ?????? ????. ?? ??????? ???, ???? ???????? ????????? ?? ?????. # uname -orm FreeBSD 10.1-RELEASE-p12 amd64 nginx -v nginx version: nginx/1.8.0 nginx ?????? ? ????? --with-threads ?????. nginx.conf ? ?????? http ???????: thread_pool one threads=32; thread_pool cen threads=32; ????? ????? ????????. # nginx -t nginx: [emerg] "thread_pool" directive is not allowed here in /usr/local/etc/nginx/nginx.conf:46 nginx: configuration file /usr/local/etc/nginx/nginx.conf test failed ??? ? ?????????? ???????, ??? ? ?????? http ??????????? ????????? ?? ?????, ? ????? ???????? ??????? ??? ?????????, ??? ????? theread_pool ??? ?? ?????. ??? ???? ? ???? ??????. ???? ??????????? ?????? ???????? ?? ????? ?? ??????????? ????????. ? ???? ?? ????? ???????????? ???? 
?? nginx ?? ??????? FreeBSD 10.1 ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259166,259836#msg-259836 From miguelmclara at gmail.com Wed Jun 24 12:18:59 2015 From: miguelmclara at gmail.com (Miguel Clara) Date: Wed, 24 Jun 2015 12:18:59 +0000 Subject: 502 Error Issue In-Reply-To: <45db5a9287fd6171d6abe4841b249ee8.NginxMailingListEnglish@forum.nginx.org> References: <45db5a9287fd6171d6abe4841b249ee8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <746B4E15-FDFD-4975-81AC-8A2795C112AC@gmail.com> On June 24, 2015 11:45:38 AM GMT+01:00, "mudgil.gaurav" wrote: >Hi All, > >We recently migrated from apache to nginx. > >OS - CetnOS 6 >Nginx - 1.62 (4 CPU 8 GB RAM) >PHP-FPM (php 5.4.37) (4 CPU 8 GB RAM) >APC (Code Cache APC 3.1.13 beta) >Memcache (data cache) > >I have upstream of 4 php servers for php-fpm service. > >I am facing two issues: > >1. I am getting 502 status code in access log means nginx generating >502 bad >gateway error > >2. Randomly connections got pile up on nginx up to 2500 and on php-fpm >servers we do not see any load in top only 1-2 process seems to be >running >during that time and without taking any restart after 2-3 minutes it >starts >working normally and connections goes down to normal without taking any >restart. 
> > >Please check nginx configuration in-line > >user nguser; >worker_processes 4; > >pid /var/run/nginx.pid; >worker_rlimit_nofile 70000; > >events { > worker_connections 1024; > multi_accept on; > use epoll; >} > >http { > ## > # Basic Settings > ## > > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_requests 1000; > keepalive_timeout 65; > send_timeout 15; > types_hash_max_size 2048; > server_tokens off; > client_max_body_size 50M; > client_body_buffer_size 1m; > client_body_timeout 15; > client_header_timeout 15; > server_names_hash_bucket_size 64; > server_name_in_redirect off; > > include /etc/nginx/mime.types; > default_type application/octet-stream; > > ## > # Logging Settings >## > #access_log on; > > access_log /var/log/nginx/access.log combined; > #error_log /dev/null crit; > error_log /var/log/nginx/error.log error; > > > #gzip on; > gzip on; > gzip_static on; > gzip_disable "msie6"; > gzip_vary on; > gzip_proxied any; > gzip_comp_level 6; > gzip_min_length 512; > gzip_buffers 16 8k; > gzip_http_version 1.1; > gzip_types text/css text/javascript text/xml text/plain >text/x-component > application/javascript application/x-javascript application/json > application/xml application/rss+xml font/truetype >application/x-font-ttf > font/opentype application/vnd.ms-fontobject image/svg+xml; > > open_file_cache max=2000 inactive=20s; > open_file_cache_valid 60s; > open_file_cache_min_uses 5; > open_file_cache_errors off; > > fastcgi_buffers 256 16k; > fastcgi_buffer_size 128k; > fastcgi_busy_buffers_size 256k; > fastcgi_temp_file_write_size 256k; > fastcgi_connect_timeout 300; > fastcgi_send_timeout 300; > fastcgi_read_timeout 300; > fastcgi_intercept_errors on; >reset_timedout_connection on; > > upstream fastcgiservers { > least_conn; > server xxx.xxx.xx.xx:9000; > server xxx.xxx.xx.xx:9000; > server xxx.xxx.xx.xx:9000; > server xxx.xxx.xx.xx:9000; > } > > include /etc/nginx/conf.d/*.conf; > include /etc/nginx/sites-enabled/*; >} > >location ~ 
\.php$ { > try_files $uri =404; > include fastcgi_params; > fastcgi_pass fastcgiservers; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > } > >---------------------------------------------------------PHP-FPM >Settings >------------------------------------------------------------------ > >pm.max_children = 150 >pm.start_servers = 90 >pm.min_spare_servers = 70 >pm.max_spare_servers = 100 >pm.max_requests = 1500 > The PHP logs would be useful, but I can at least see that the value for pm.max_requests is too low. I made the same mistake the first few times I set up php-fpm servers. IMHO you should definitely set a value, but such a low value means PHP respawns its workers a lot in a very short period of time... I would recommend something higher than 10000... Also, I've been getting better results with the ondemand process manager; that might be related to a lot of different things, but you might give it a try as well. Hope this helps. >Any help or suggestion would be greatly appreciated. > >Thanks > >Posted at Nginx Forum: >http://forum.nginx.org/read.php?2,259833,259833#msg-259833 > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -- Sent from my Android device with K-9 Mail. Please excuse my brevity. From djczaski at gmail.com Wed Jun 24 12:31:31 2015 From: djczaski at gmail.com (Danomi Czaski) Date: Wed, 24 Jun 2015 08:31:31 -0400 Subject: Nginx not logging to socket In-Reply-To: <20150624110318.GA4347@vlpc.nginx.com> References: <20150624110318.GA4347@vlpc.nginx.com> Message-ID: On Wed, Jun 24, 2015 at 7:03 AM, Vladimir Homutov wrote: > On Wed, Jun 24, 2015 at 06:12:49AM -0400, Danomi Czaski wrote: >> Hello, >> >> On Sun, Jun 21, 2015 at 8:19 AM, Andrew Holway >> wrote: >> > Hallo! >> > >> > Using rsyslog I have set up a logging socket and confirmed that its working >> > by piping in some stuff to "logger -u /dev/log" >> > nginx/1.8.0 does not seem to be dumping in logs however. The nginx config is >> > below.. >> >> Any luck? I'm seeing the same problem on 1.7.12. >> > > do you see some errors in the local error log? If nginx is unable to > send data to socket for some reasons (check socket permissions, selinux > and similar), you will see errors in the local log file.
>> > >> > Using rsyslog I have set up a logging socket and confirmed that its working >> > by piping in some stuff to "logger -u /dev/log" >> > nginx/1.8.0 does not seem to be dumping in logs however. The nginx config is >> > below.. >> >> Any luck? I'm seeing the same problem on 1.7.12. >> > > do you see some errors in the local error log? If nginx is unable to > send data to socket for some reasons (check socket permissions, selinux > and similar), you will see errors in the local log file. My config looks like: error_log syslog:server=unix:/dev/log; The only error I see is that nginx can't open /var/log/nginx/error.log. $ nginx -t nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (2: Unknown error) nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test failed Of course /var/log/nginx isn't there because I'm trying to use syslog. If I create /var/log/nginx, nginx starts and I'll see debugging logs there but nothing related to syslog problems. The permissions on /dev/log look fine: $ ls -l /dev/log srw-rw-rw- 1 root root 0 Jun 24 12:41 /dev/log= From a.marinov at ucdn.com Wed Jun 24 13:06:23 2015 From: a.marinov at ucdn.com (Anatoli Marinov) Date: Wed, 24 Jun 2015 16:06:23 +0300 Subject: unknown directive "thread_pool" In-Reply-To: <38649b7e498a796bed0ee4db7821d477.NginxMailingListEnglish@forum.nginx.org> References: <6C4D3983-F9E9-47AF-B9A0-9576C76BAB1F@nginx.com> <38649b7e498a796bed0ee4db7821d477.NginxMailingListEnglish@forum.nginx.org> Message-ID: I am not sure you need threads with freebsd because it has native aio support. 2015-06-24 15:11 GMT+03:00 bagas : > ?????? ????. > ?? ??????? ???, ???? ???????? ????????? ?? ?????. > # uname -orm > FreeBSD 10.1-RELEASE-p12 amd64 > nginx -v > nginx version: nginx/1.8.0 > nginx ?????? ? ????? --with-threads > ?????. > nginx.conf ? ?????? http ???????: > thread_pool one threads=32; > thread_pool cen threads=32; > ????? 
????? ????????. > # nginx -t > nginx: [emerg] "thread_pool" directive is not allowed here in > /usr/local/etc/nginx/nginx.conf:46 > nginx: configuration file /usr/local/etc/nginx/nginx.conf test failed > ??? ? ?????????? ???????, ??? ? ?????? http ??????????? ????????? ?? ?????, > ? ????? ???????? ??????? ??? ?????????, ??? ????? theread_pool ??? ?? > ?????. > ??? ???? ? ???? ??????. > ???? ??????????? ?????? ???????? ?? ????? ?? ??????????? ????????. > ? ???? ?? ????? ???????????? ???? ?? nginx ?? ??????? FreeBSD 10.1 ? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,259166,259836#msg-259836 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jun 24 13:30:23 2015 From: nginx-forum at nginx.us (ajjH6) Date: Wed, 24 Jun 2015 09:30:23 -0400 Subject: ModSecurity compile, ""WARNING: APR util was not compiled with crypto support." Message-ID: Hi, When compiling modsec, I came across the following - "configure: WARNING: APR util was not compiled with crypto support. SecRemoteRule will not support the parameter 'crypto'" Basically the rhel6 apr-devel rpm does not have crypto support. Trying to determine what are the ramifications are here. Might anyone know what this means? Am having difficulty finding what this SecRemoteRule is. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259844,259844#msg-259844 From nginx-forum at nginx.us Wed Jun 24 13:32:45 2015 From: nginx-forum at nginx.us (bagas) Date: Wed, 24 Jun 2015 09:32:45 -0400 Subject: unknown directive "thread_pool" In-Reply-To: References: Message-ID: ??-????, ?? ??????? FreeBSD ?????? ??? ???????????? ??? ??????? ?? nginx? ??? ??? ?? nginx ???? ?????????? ? ?????????. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259166,259845#msg-259845 From a.marinov at ucdn.com Wed Jun 24 13:39:16 2015 From: a.marinov at ucdn.com (Anatoli Marinov) Date: Wed, 24 Jun 2015 16:39:16 +0300 Subject: unknown directive "thread_pool" In-Reply-To: References: Message-ID: http://serverfault.com/questions/476765/how-do-i-enable-aio-on-nginx-on-freebsd Just add 'aio on;' instead of 'aio threads;'. Also you should compile nginx without threads. 2015-06-24 16:32 GMT+03:00 bagas : > ??-????, ?? ??????? FreeBSD ?????? ??? ???????????? ??? ??????? ?? nginx? > ??? ??? ?? nginx ???? ?????????? ? ?????????. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,259166,259845#msg-259845 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.marinov at ucdn.com Wed Jun 24 13:40:37 2015 From: a.marinov at ucdn.com (Anatoli Marinov) Date: Wed, 24 Jun 2015 16:40:37 +0300 Subject: unknown directive "thread_pool" In-Reply-To: References: Message-ID: RTFM :) http://nginx.org/en/docs/http/ngx_http_core_module.html#aio 2015-06-24 16:39 GMT+03:00 Anatoli Marinov : > > http://serverfault.com/questions/476765/how-do-i-enable-aio-on-nginx-on-freebsd > Just add 'aio on;' instead of 'aio threads;'. > Also you should compile nginx without threads. > > > 2015-06-24 16:32 GMT+03:00 bagas : > >> ??-????, ?? ??????? FreeBSD ?????? ??? ???????????? ??? ??????? ?? nginx? >> ??? ??? ?? nginx ???? ?????????? ? ?????????. >> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,259166,259845#msg-259845 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vl at nginx.com Wed Jun 24 14:00:57 2015 From: vl at nginx.com (Vladimir Homutov) Date: Wed, 24 Jun 2015 17:00:57 +0300 Subject: Nginx not logging to socket In-Reply-To: References: <20150624110318.GA4347@vlpc.nginx.com> Message-ID: <20150624140056.GB4347@vlpc.nginx.com> On Wed, Jun 24, 2015 at 08:31:31AM -0400, Danomi Czaski wrote: > On Wed, Jun 24, 2015 at 7:03 AM, Vladimir Homutov wrote: > > On Wed, Jun 24, 2015 at 06:12:49AM -0400, Danomi Czaski wrote: > >> Hello, > >> > >> On Sun, Jun 21, 2015 at 8:19 AM, Andrew Holway > >> wrote: > >> > Hallo! > >> > > >> > Using rsyslog I have set up a logging socket and confirmed that its working > >> > by piping in some stuff to "logger -u /dev/log" > >> > nginx/1.8.0 does not seem to be dumping in logs however. The nginx config is > >> > below.. > >> > >> Any luck? I'm seeing the same problem on 1.7.12. > >> > > > > do you see some errors in the local error log? If nginx is unable to > > send data to socket for some reasons (check socket permissions, selinux > > and similar), you will see errors in the local log file. > > My config looks like: > > error_log syslog:server=unix:/dev/log; > > The only error I see is that nginx can't open /var/log/nginx/error.log. > > $ nginx -t > nginx: [alert] could not open error log file: open() > "/var/log/nginx/error.log" failed (2: Unknown error) > nginx: the configuration file /etc/nginx/nginx.conf syntax is ok > nginx: configuration file /etc/nginx/nginx.conf test failed > > Of course /var/log/nginx isn't there because I'm trying to use syslog. > If I create /var/log/nginx, nginx starts and I'll see debugging logs > there but nothing related to syslog problems. > > The permissions on /dev/log look fine: > > $ ls -l /dev/log > srw-rw-rw- 1 root root 0 Jun 24 12:41 /dev/log= > did you try increasing log level? If there are no errors, nginx will not write anything to log in your case. 
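For example, a minimal sketch of the syslog target with an explicit severity (the socket path is the one from this thread; the chosen severity is illustrative):

```nginx
# error_log defaults to the "error" severity, so a bare
# "error_log syslog:server=...;" writes nothing until a real error occurs;
# an explicit lower severity such as "info" makes routine messages visible
error_log syslog:server=unix:/dev/log info;
```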
you can add one more error_log directive and point it to some local file with write permissions to check there for possible errors. From nginx-forum at nginx.us Wed Jun 24 14:03:19 2015 From: nginx-forum at nginx.us (bagas) Date: Wed, 24 Jun 2015 10:03:19 -0400 Subject: unknown directive "thread_pool" In-Reply-To: References: Message-ID: <0fa81d734a3a4ab450de85b786dc05f5.NginxMailingListEnglish@forum.nginx.org> ? ???? aio ????????, ??? ??????? ?? ????????? ?? 1,9 ?????. ? ?????????? ? ????????? ???? aio threads; ?? ??? ??????????, ???????? ??? ??????? ??????? ??????? ???? ???. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259166,259851#msg-259851 From djczaski at gmail.com Wed Jun 24 14:21:06 2015 From: djczaski at gmail.com (Danomi Czaski) Date: Wed, 24 Jun 2015 10:21:06 -0400 Subject: Nginx not logging to socket In-Reply-To: <20150624140056.GB4347@vlpc.nginx.com> References: <20150624110318.GA4347@vlpc.nginx.com> <20150624140056.GB4347@vlpc.nginx.com> Message-ID: On Wed, Jun 24, 2015 at 10:00 AM, Vladimir Homutov wrote: > On Wed, Jun 24, 2015 at 08:31:31AM -0400, Danomi Czaski wrote: >> On Wed, Jun 24, 2015 at 7:03 AM, Vladimir Homutov wrote: >> > On Wed, Jun 24, 2015 at 06:12:49AM -0400, Danomi Czaski wrote: >> >> Hello, >> >> >> >> On Sun, Jun 21, 2015 at 8:19 AM, Andrew Holway >> >> wrote: >> >> > Hallo! >> >> > >> >> > Using rsyslog I have set up a logging socket and confirmed that its working >> >> > by piping in some stuff to "logger -u /dev/log" >> >> > nginx/1.8.0 does not seem to be dumping in logs however. The nginx config is >> >> > below.. >> >> >> >> Any luck? I'm seeing the same problem on 1.7.12. >> >> >> > >> > do you see some errors in the local error log? If nginx is unable to >> > send data to socket for some reasons (check socket permissions, selinux >> > and similar), you will see errors in the local log file. 
>> >> My config looks like: >> >> error_log syslog:server=unix:/dev/log; >> >> The only error I see is that nginx can't open /var/log/nginx/error.log. >> >> $ nginx -t >> nginx: [alert] could not open error log file: open() >> "/var/log/nginx/error.log" failed (2: Unknown error) >> nginx: the configuration file /etc/nginx/nginx.conf syntax is ok >> nginx: configuration file /etc/nginx/nginx.conf test failed >> >> Of course /var/log/nginx isn't there because I'm trying to use syslog. >> If I create /var/log/nginx, nginx starts and I'll see debugging logs >> there but nothing related to syslog problems. >> >> The permissions on /dev/log look fine: >> >> $ ls -l /dev/log >> srw-rw-rw- 1 root root 0 Jun 24 12:41 /dev/log= >> > > did you try increasing log level? If there are no errors, nginx will > not write anything to log in your case. > > you can add one more error_log directive and point it to some local > file with write permissions to check there for possible errors. Okay, I see messages going to syslog, I had to increase the log level as you said. Thanks. It seems like there _must_ be a file logger or nginx won't start. 
If I don't want any log file it looks like I have to do something like: error_log /dev/null emerg; error_log syslog:server=unix:/dev/log debug; From vl at nginx.com Wed Jun 24 14:45:30 2015 From: vl at nginx.com (Vladimir Homutov) Date: Wed, 24 Jun 2015 17:45:30 +0300 Subject: Nginx not logging to socket In-Reply-To: References: <20150624110318.GA4347@vlpc.nginx.com> <20150624140056.GB4347@vlpc.nginx.com> Message-ID: <20150624144529.GC4347@vlpc.nginx.com> On Wed, Jun 24, 2015 at 10:21:06AM -0400, Danomi Czaski wrote: > On Wed, Jun 24, 2015 at 10:00 AM, Vladimir Homutov wrote: > > On Wed, Jun 24, 2015 at 08:31:31AM -0400, Danomi Czaski wrote: > >> On Wed, Jun 24, 2015 at 7:03 AM, Vladimir Homutov wrote: > >> > On Wed, Jun 24, 2015 at 06:12:49AM -0400, Danomi Czaski wrote: > >> >> Hello, > >> >> > >> >> On Sun, Jun 21, 2015 at 8:19 AM, Andrew Holway > >> >> wrote: > >> >> > Hallo! > >> >> > > >> >> > Using rsyslog I have set up a logging socket and confirmed that its working > >> >> > by piping in some stuff to "logger -u /dev/log" > >> >> > nginx/1.8.0 does not seem to be dumping in logs however. The nginx config is > >> >> > below.. > >> >> > >> >> Any luck? I'm seeing the same problem on 1.7.12. > >> >> > >> > > >> > do you see some errors in the local error log? If nginx is unable to > >> > send data to socket for some reasons (check socket permissions, selinux > >> > and similar), you will see errors in the local log file. > >> > >> My config looks like: > >> > >> error_log syslog:server=unix:/dev/log; > >> > >> The only error I see is that nginx can't open /var/log/nginx/error.log. > >> > >> $ nginx -t > >> nginx: [alert] could not open error log file: open() > >> "/var/log/nginx/error.log" failed (2: Unknown error) > >> nginx: the configuration file /etc/nginx/nginx.conf syntax is ok > >> nginx: configuration file /etc/nginx/nginx.conf test failed > >> > >> Of course /var/log/nginx isn't there because I'm trying to use syslog. 
> >> If I create /var/log/nginx, nginx starts and I'll see debugging logs > >> there but nothing related to syslog problems. > >> > >> The permissions on /dev/log look fine: > >> > >> $ ls -l /dev/log > >> srw-rw-rw- 1 root root 0 Jun 24 12:41 /dev/log= > >> > > > > did you try increasing log level? If there are no errors, nginx will > > not write anything to log in your case. > > > > you can add one more error_log directive and point it to some local > > file with write permissions to check there for possible errors. > > Okay, I see messages going to syslog, I had to increase the log level > as you said. Thanks. > > It seems like there _must_ be a file logger or nginx won't start. If I > don't want any log file it looks like I have to do something like: > > error_log /dev/null emerg; > error_log syslog:server=unix:/dev/log debug; > This is intentional: syslog is not reliable and nginx by default will write logs to files. And there is a simple workaround - you just found it yourself. From francis at daoine.org Wed Jun 24 15:36:54 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 24 Jun 2015 16:36:54 +0100 Subject: unknown directive "thread_pool" In-Reply-To: <38649b7e498a796bed0ee4db7821d477.NginxMailingListEnglish@forum.nginx.org> References: <6C4D3983-F9E9-47AF-B9A0-9576C76BAB1F@nginx.com> <38649b7e498a796bed0ee4db7821d477.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150624153654.GK23844@daoine.org> On Wed, Jun 24, 2015 at 08:11:28AM -0400, bagas wrote: > ?????? ????. "unknown directive" != "directive is not allowed here" > ?? ??????? ???, ???? ???????? ????????? ?? ?????.
http://nginx.org/ru/docs/ngx_core_module.html#thread_pool """ ????????: main """ nginx.conf: == thread_pool ...; events { } http { server { } } == f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Jun 24 15:42:38 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 24 Jun 2015 16:42:38 +0100 Subject: Understanding alias (used as rewrite) In-Reply-To: <1435128273.71376.YahooMailBasic@web142404.mail.bf1.yahoo.com> References: <1435128273.71376.YahooMailBasic@web142404.mail.bf1.yahoo.com> Message-ID: <20150624154238.GL23844@daoine.org> On Tue, Jun 23, 2015 at 11:44:33PM -0700, E.B. wrote: Hi there, > Thanks for your ongoing helps! I hope someone > can advise further you seem to keep referring to "alias used as rewrite". I do not know what you mean by that. Could you explain? When you do, it may be that it becomes clear where your mental model of what "alias" (or maybe "rewrite") does, breaks down. "alias" is, as you noted, documented at http://nginx.org/r/alias. It is used to identify a filename that should be used to handle the request. In short: there are two relevant kinds of location: prefix and regex. ("exact" is a special case of "prefix".) In a prefix location, the starting part of the request that matches the "location" value is replaced with the "alias" value, and what results becomes $request_filename. (There is more to it than that, but that should be enough for now.) In a regex location, the entire request is replaced with the "alias" value, and what results becomes $request_filename. In your example configs, if you replace the "include" line with something like return 200 "location #1: request $request_uri -> file $request_filename\n"; (change the #1 to something that will identify the location to you each time), then you can use "curl" to make your various requests and see the responses. Does that show you how each configuration was used? 
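A minimal sketch of the two cases, using the return-200 debugging trick described above (the paths and prefixes are illustrative):

```nginx
# prefix location: the matched prefix is replaced by the alias value,
# so a request for "/img/a.png" maps to the file "/data/images/a.png"
location /img/ {
    alias /data/images/;
    return 200 "prefix: request $request_uri -> file $request_filename\n";
}

# regex location: the entire request is replaced by the alias value,
# so "/test", "/test/", and "/test/anything" all map to the same file
location ~ ^/test {
    alias /data/public_html/somefile.php;
    return 200 "regex: request $request_uri -> file $request_filename\n";
}
```

Requesting a few URIs with curl against each location shows the resulting $request_filename directly.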
If $request_filename identifies a file that does exist, it will probably be used; if it identifies a file that does not exist, there will probably be an error. But it all depends on the configuration that is not yet shown. And "rewrite" does something different, documented at http://nginx.org/r/rewrite; it involves uris, not filenames. f -- Francis Daly francis at daoine.org From maxim at nginx.com Wed Jun 24 15:55:05 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 24 Jun 2015 18:55:05 +0300 Subject: unknown directive "thread_pool" In-Reply-To: <20150624153654.GK23844@daoine.org> References: <6C4D3983-F9E9-47AF-B9A0-9576C76BAB1F@nginx.com> <38649b7e498a796bed0ee4db7821d477.NginxMailingListEnglish@forum.nginx.org> <20150624153654.GK23844@daoine.org> Message-ID: <558AD2D9.5040104@nginx.com> Hi Francis, it seems that you start to learn Russian! :-) On 6/24/15 6:36 PM, Francis Daly wrote: > On Wed, Jun 24, 2015 at 08:11:28AM -0400, bagas wrote: >> ?????? ????. > > "unknown directive" != "directive is not allowed here" > >> ?? ??????? ???, ???? ???????? ????????? ?? ?????. > > http://nginx.org/ru/docs/ngx_core_module.html#thread_pool > > """ > ????????: main > """ > > nginx.conf: > > == > thread_pool ...; > events { > } > http { > server { > } > } > == > > f > Valentin wrote a good blog post about threads: http://nginx.com/blog/thread-pools-boost-performance-9x/ There is a complete and simple configuration in the "Configuring Thread Pools" section, near the end of the post. Also, there is a russian version of this article on habrahabr.ru. -- Maxim Konovalov http://nginx.com From vbart at nginx.com Wed Jun 24 16:03:48 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 24 Jun 2015 19:03:48 +0300 Subject: unknown directive "thread_pool" In-Reply-To: References: Message-ID: <2378964.BLz9nRiqpU@vbart-workstation> On Wednesday 24 June 2015 09:32:45 bagas wrote: > ??-????, ?? ??????? FreeBSD ?????? ??? ???????????? ??? ??????? ?? nginx? > ??? ??? ?? 
nginx ???? ?????????? ? ?????????. > You should use native AIO on FreeBSD instead. See: http://nginx.com/blog/thread-pools-boost-performance-9x/ or in Russian: http://habrahabr.ru/post/260669/ wbr, Valentin V. Bartenev From ryd994 at 163.com Wed Jun 24 16:37:55 2015 From: ryd994 at 163.com (ryd994) Date: Wed, 24 Jun 2015 16:37:55 +0000 Subject: Caching fastcgi url In-Reply-To: References: <20150623220749.GI23844@daoine.org> Message-ID: On Tue, Jun 23, 2015 at 6:27 PM CJ Ess wrote: > So looks like your saying the best way to do it is to do a separate > location and duplicate the fastcgi setup in that location and add the > fastcgi_cache stuff. > > I can work with that, however I came across this example while googling ( > https://gist.github.com/magnetikonline/10450786) that uses "if" to set a > variable which I could use to match on the URL and trigger > fastcgi_cache_bypass for everything not matching. Is "if" so toxic that I > shouldn't consider doing it this way? > > > On Tue, Jun 23, 2015 at 6:07 PM, Francis Daly wrote: > >> On Tue, Jun 23, 2015 at 04:19:48PM -0400, CJ Ess wrote: >> >> Hi there, >> >> > - Would I need to define a separate location stanza for the URL I want >> to >> > cache and duplicate all of the fastcgi configuration that is normally >> > required? Or is there a way to indicate that of all the fastcgi requests >> > only the one matching /xyz is to be cached? >> >> fastcgi caching is handled by the fastcgi_cache directive, documented >> at http://nginx.org/r/fastcgi_cache >> >> It is set per-location. >> >> See also directives like fastcgi_cache_bypass and fastcgi_no_cache. >> >> It is probably simplest to have on exact-match location for this url >> and not worry about the no_cache side of things. >> >> "all the configuration that is normally required" is typically four lines >> -- one "include" of common stuff; one or two extra fastcgi_param values, >> and a fastcgi_pass. 
>> >> > - If multiple request for the same URL arrive at around the same time, >> and >> > the cache is stale, they will all wait on the one request that is >> > refreshing the cache, correct? So I should only see one request for the >> > cached location per worker per minute on the backend? >> >> If that's what you want, you can probably configure it. >> >> http://nginx.org/r/fastcgi_cache_use_stale >> http://nginx.org/r/fastcgi_cache_lock >> >> > - Since my one URI is fairly small, can I indicate that no file backing >> is >> > needed? >> >> I don't think so. But you can have fastcgi_cache_path set to a ramdisk, >> I think. >> >> f >> -- >> Francis Daly francis at daoine.org >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx For short, if is evil. If you are not sure, don't use. If you don't have other fastcgi locations, you can do fastcgi setup in server block. It's fine to set fastcgi parameters, they won't affect static requests until you do fastcgi_pass. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jun 24 17:18:02 2015 From: nginx-forum at nginx.us (bagas) Date: Wed, 24 Jun 2015 13:18:02 -0400 Subject: unknown directive "thread_pool" In-Reply-To: <2378964.BLz9nRiqpU@vbart-workstation> References: <2378964.BLz9nRiqpU@vbart-workstation> Message-ID: ???????. ?? ????? ?? ? ? ???????? ???? ??????, ?????? ??? ? ???? ? ? ????? ???. ) ???? ?????? ????????? ?????, ????? ???????????? ?????? ???????. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259166,259871#msg-259871 From nginx-forum at nginx.us Wed Jun 24 17:19:16 2015 From: nginx-forum at nginx.us (bagas) Date: Wed, 24 Jun 2015 13:19:16 -0400 Subject: unknown directive "thread_pool" In-Reply-To: <20150624153654.GK23844@daoine.org> References: <20150624153654.GK23844@daoine.org> Message-ID: ??? ? ??? ???????. ??????? ?? ??????????. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259166,259872#msg-259872 From zxcvbn4038 at gmail.com Wed Jun 24 17:46:25 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Wed, 24 Jun 2015 13:46:25 -0400 Subject: Caching fastcgi url In-Reply-To: References: <20150623220749.GI23844@daoine.org> Message-ID: Everything is fastcgi, my question is how best to treat one single fastcgi URL differently (caching it instead of forwarding every request to the backend). On Wed, Jun 24, 2015 at 12:37 PM, ryd994 wrote: > > > On Tue, Jun 23, 2015 at 6:27 PM CJ Ess wrote: > >> So looks like your saying the best way to do it is to do a separate >> location and duplicate the fastcgi setup in that location and add the >> fastcgi_cache stuff. >> >> I can work with that, however I came across this example while googling ( >> https://gist.github.com/magnetikonline/10450786) that uses "if" to set a >> variable which I could use to match on the URL and trigger >> fastcgi_cache_bypass for everything not matching. Is "if" so toxic that I >> shouldn't consider doing it this way? >> >> >> On Tue, Jun 23, 2015 at 6:07 PM, Francis Daly wrote: >> >>> On Tue, Jun 23, 2015 at 04:19:48PM -0400, CJ Ess wrote: >>> >>> Hi there, >>> >>> > - Would I need to define a separate location stanza for the URL I want >>> to >>> > cache and duplicate all of the fastcgi configuration that is normally >>> > required? Or is there a way to indicate that of all the fastcgi >>> requests >>> > only the one matching /xyz is to be cached? 
>>> >>> fastcgi caching is handled by the fastcgi_cache directive, documented >>> at http://nginx.org/r/fastcgi_cache >>> >>> It is set per-location. >>> >>> See also directives like fastcgi_cache_bypass and fastcgi_no_cache. >>> >>> It is probably simplest to have on exact-match location for this url >>> and not worry about the no_cache side of things. >>> >>> "all the configuration that is normally required" is typically four lines >>> -- one "include" of common stuff; one or two extra fastcgi_param values, >>> and a fastcgi_pass. >>> >>> > - If multiple request for the same URL arrive at around the same time, >>> and >>> > the cache is stale, they will all wait on the one request that is >>> > refreshing the cache, correct? So I should only see one request for the >>> > cached location per worker per minute on the backend? >>> >>> If that's what you want, you can probably configure it. >>> >>> http://nginx.org/r/fastcgi_cache_use_stale >>> http://nginx.org/r/fastcgi_cache_lock >>> >>> > - Since my one URI is fairly small, can I indicate that no file >>> backing is >>> > needed? >>> >>> I don't think so. But you can have fastcgi_cache_path set to a ramdisk, >>> I think. >>> >>> f >>> -- >>> Francis Daly francis at daoine.org >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > For short, if is evil. If you are not sure, don't use. > > If you don't have other fastcgi locations, you can do fastcgi setup in > server > block. It's fine to set fastcgi parameters, they won't affect static > requests until you do fastcgi_pass. 
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jun 24 17:55:54 2015 From: nginx-forum at nginx.us (zimmerle) Date: Wed, 24 Jun 2015 13:55:54 -0400 Subject: ModSecurity compile, ""WARNING: APR util was not compiled with crypto support." In-Reply-To: References: Message-ID: <747b968653f3520419a1d852cd8eaf82.NginxMailingListEnglish@forum.nginx.org> Hi, It should not be a problem. The parameter "crypto" is optional and should not affect ModSecurity behavior. Check the SecRemoteRules description on ModSecurity reference manual, at: https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual#secremoterules Br, Z. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259844,259874#msg-259874 From nginx-forum at nginx.us Wed Jun 24 19:31:18 2015 From: nginx-forum at nginx.us (nngin) Date: Wed, 24 Jun 2015 15:31:18 -0400 Subject: Convert Apache .htaccess rewrite to nginx In-Reply-To: <88e0ad2d1117b67e776b6e11383a1a81.NginxMailingListEnglish@forum.nginx.org> References: <88e0ad2d1117b67e776b6e11383a1a81.NginxMailingListEnglish@forum.nginx.org> Message-ID: <167b6a0dc8c96afc2ea37fee2dd485ac.NginxMailingListEnglish@forum.nginx.org> searching the site, i found that this same question goes unanswered. Please point me in the right direction of where i would be able to get help. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259810,259876#msg-259876 From djczaski at gmail.com Wed Jun 24 23:05:31 2015 From: djczaski at gmail.com (djczaski at gmail.com) Date: Wed, 24 Jun 2015 19:05:31 -0400 Subject: Nginx not logging to socket In-Reply-To: <20150624144529.GC4347@vlpc.nginx.com> References: <20150624110318.GA4347@vlpc.nginx.com> <20150624140056.GB4347@vlpc.nginx.com> <20150624144529.GC4347@vlpc.nginx.com> Message-ID: > On Jun 24, 2015, at 10:45 AM, Vladimir Homutov wrote: > >> On Wed, Jun 24, 2015 at 10:21:06AM -0400, Danomi Czaski wrote: >>> On Wed, Jun 24, 2015 at 10:00 AM, Vladimir Homutov wrote: >>>> On Wed, Jun 24, 2015 at 08:31:31AM -0400, Danomi Czaski wrote: >>>>> On Wed, Jun 24, 2015 at 7:03 AM, Vladimir Homutov wrote: >>>>>> On Wed, Jun 24, 2015 at 06:12:49AM -0400, Danomi Czaski wrote: >>>>>> Hello, >>>>>> >>>>>> On Sun, Jun 21, 2015 at 8:19 AM, Andrew Holway >>>>>> wrote: >>>>>>> Hallo! >>>>>>> >>>>>>> Using rsyslog I have set up a logging socket and confirmed that its working >>>>>>> by piping in some stuff to "logger -u /dev/log" >>>>>>> nginx/1.8.0 does not seem to be dumping in logs however. The nginx config is >>>>>>> below.. >>>>>> >>>>>> Any luck? I'm seeing the same problem on 1.7.12. >>>>> >>>>> do you see some errors in the local error log? If nginx is unable to >>>>> send data to socket for some reasons (check socket permissions, selinux >>>>> and similar), you will see errors in the local log file. >>>> >>>> My config looks like: >>>> >>>> error_log syslog:server=unix:/dev/log; >>>> >>>> The only error I see is that nginx can't open /var/log/nginx/error.log. >>>> >>>> $ nginx -t >>>> nginx: [alert] could not open error log file: open() >>>> "/var/log/nginx/error.log" failed (2: Unknown error) >>>> nginx: the configuration file /etc/nginx/nginx.conf syntax is ok >>>> nginx: configuration file /etc/nginx/nginx.conf test failed >>>> >>>> Of course /var/log/nginx isn't there because I'm trying to use syslog. 
>>>> If I create /var/log/nginx, nginx starts and I'll see debugging logs >>>> there but nothing related to syslog problems. >>>> >>>> The permissions on /dev/log look fine: >>>> >>>> $ ls -l /dev/log >>>> srw-rw-rw- 1 root root 0 Jun 24 12:41 /dev/log= >>> >>> did you try increasing log level? If there are no errors, nginx will >>> not write anything to log in your case. >>> >>> you can add one more error_log directive and point it to some local >>> file with write permissions to check there for possible errors. >> >> Okay, I see messages going to syslog, I had to increase the log level >> as you said. Thanks. >> >> It seems like there _must_ be a file logger or nginx won't start. If I >> don't want any log file it looks like I have to do something like: >> >> error_log /dev/null emerg; >> error_log syslog:server=unix:/dev/log debug; > > This is intentional: syslog is not reliable and nginx by default will > write logs to files. And there is a simple workaround - you Thank you for your help. From shahzaib.cb at gmail.com Wed Jun 24 23:17:19 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 25 Jun 2015 04:17:19 +0500 Subject: Nginx support for Weedfs !! Message-ID: Hi, We're deploying the WeedFS distributed filesystem for thumbnail storage and scalability. WeedFS is composed of two layers (Master, Volume). The master server does all the metadata mapping to track the volume server corresponding to a requested file, whereas the volume server is the actual storage that serves those requested files back to the user via HTTP. Currently, the WeedFS default webserver is being used for HTTP, but it would be better to have the nginx webserver on the volume servers for its low footprint, stability and robust response time for static .jpg files. So we need to know: can we use nginx with WeedFS? Following is the GitHub project we found, but we need to confirm whether it will fulfill our needs: https://github.com/medcl/lua-resty-weedfs Thanks in advance. Regards.
Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From emailbuilder88 at yahoo.com Thu Jun 25 00:07:06 2015 From: emailbuilder88 at yahoo.com (E.B.) Date: Wed, 24 Jun 2015 17:07:06 -0700 Subject: Understanding alias (used as rewrite) In-Reply-To: Message-ID: <1435190826.75511.YahooMailBasic@web142403.mail.bf1.yahoo.com> > >> This config works for me. > >> > >> location ~ ^/test { > >> alias /data/public_html/somefile.php; > >> > >> include fastcgi.conf; > >> > >> fastcgi_pass 127.0.0.1:8900; > >> } > > > > Yes, I had also got similar to work, but > > only for the exact match uri-- the first > > in my list of possible uris that must work: > > > > /my-long-prefix-goes-here > > /my-long-prefix-goes-herexxx > > /my-long-prefix-goes-here/ > > /my-long-prefix-goes-here/filename > > > > I still get 404 for the last 3. That's why > > I was thinking it was adding the end of the original > > uri to the alias redirect. But I'm not sure. > > you need the regexp-based alias (as in my example). Well, in the very next sentence I told you I already tried your regex version with no luck. But with some helpful explanations from you and Francis I understand better now, and I also found a bug. Next mail for more. > it depends on the type of location. If it's regexp-based alias > (location ~ ^/some/(regexp)), the full path is replaced with whatever > in alias params but otherwise the trailing request uri (the one after > path specified in location) will be appended to the alias. OK. It wasn't working that way for me, but I discovered that was because of a config bug. So now I understand this, but WHY ISN'T THIS DOCUMENTED? From emailbuilder88 at yahoo.com Thu Jun 25 00:19:52 2015 From: emailbuilder88 at yahoo.com (E.B.) Date: Wed, 24 Jun 2015 17:19:52 -0700 Subject: Understanding alias (used as rewrite) In-Reply-To: <20150624154238.GL23844@daoine.org> Message-ID: <1435191592.10836.YahooMailBasic@web142405.mail.bf1.yahoo.com> First, big appreciation to you and Edho for helping!!
Especially for things not documented. :) > you seem to keep referring to "alias used as rewrite". I do not know > what you mean by that. > > Could you explain? Sure. I wanted to take a prefix: /my-long-base-path and make sure all URIs that match that prefix are served by the same file. I got a rewrite to do that perfectly, but I see a lot of talk that rewrites are what you used in Apache, and are slow and not the nginx way of thinking. And it makes sense that, because the target file is fixed, avoiding the rewrite engine and just pointing to a static aliased file will be faster. I digress, but hearing your opinion on this is interesting to me. > "alias" is, as you noted, documented at http://nginx.org/r/alias. It is > used to identify a filename that should be used to handle the request. I'm still wondering why the critical part you kindly explain below is not documented. Isn't this a basic feature and important info to know? > In short: there are two relevant kinds of location: prefix and > regex. ("exact" is a special case of "prefix".) > > In a prefix location, the starting part of the request that matches the > "location" value is replaced with the "alias" value, and what results > becomes $request_filename. (There is more to it than that, but that > should be enough for now.) > > In a regex location, the entire request is replaced with the "alias" > value, and what results becomes $request_filename. Thanks. It might have caused less trouble on the mailing list if this was known from the docs. :) > In your example configs, if you replace the "include" line with something > like > > return 200 "location #1: request $request_uri -> file $request_filename\n"; this was so helpful it is amazing! removing all guesswork! this should be included in the docs too! I am very thankful to you for this! > (change the #1 to something that will identify the location to you each > time), then you can use "curl" to make your various requests and see > the responses.
> Does that show you how each configuration was used? I discovered that in the fastcgi conf file there was a: try_files $uri =404 so it overrode the alias and caused the problem. I guess this was a security measure to prevent sneaking around the filesystem for PHP requests. Is there a better way to effect the same protection? try_files $request_filename =404???? From nginx-forum at nginx.us Thu Jun 25 01:04:58 2015 From: nginx-forum at nginx.us (ajjH6) Date: Wed, 24 Jun 2015 21:04:58 -0400 Subject: uWSGI - upstream prematurely closed connection while reading response header from upstream Message-ID: <452c44c7dcdac9c27c0d31af3688fdb9.NginxMailingListEnglish@forum.nginx.org> Hi, I have a script which runs for 70 seconds. I have NGINX connecting to it via uWSGI. I have set "uwsgi_read_timeout 90;". However, NGINX drops the connection at exactly 60 seconds - "upstream prematurely closed connection while reading response header from upstream". My script continues to run and completes after 70 seconds, but my browser connection has died before then (502 error). The option "uwsgi_read_timeout" does its job for anything less than 60 seconds (i.e. uwsgi_read_timeout 30;) and terminates with a 504 error as expected. I don't quite understand what is catching the 502 bad gateway error even though I have instructed nginx to permit a uwsgi read timeout of 90 seconds. Also, I have set "keepalive_timeout 300 300;". Any ideas as to the cause?
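For reference, the nginx side described in the message amounts to something like this (only uwsgi_read_timeout is taken from the message; the location and upstream address are illustrative):

```nginx
location / {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:3031;   # illustrative upstream address
    uwsgi_read_timeout 90;       # should allow responses up to 90 seconds
}
```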
I see numerous posts on the internet advising to use "uwsgi_read_timeout" Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259882,259882#msg-259882 From nginx-forum at nginx.us Thu Jun 25 01:07:11 2015 From: nginx-forum at nginx.us (ajjH6) Date: Wed, 24 Jun 2015 21:07:11 -0400 Subject: uWSGI - upstream prematurely closed connection while reading response header from upstream In-Reply-To: <452c44c7dcdac9c27c0d31af3688fdb9.NginxMailingListEnglish@forum.nginx.org> References: <452c44c7dcdac9c27c0d31af3688fdb9.NginxMailingListEnglish@forum.nginx.org> Message-ID: BTW - this is uWSGI HTTP Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259882,259883#msg-259883 From me at myconan.net Thu Jun 25 09:16:42 2015 From: me at myconan.net (Edho Arief) Date: Thu, 25 Jun 2015 18:16:42 +0900 Subject: trac.nginx.org incorrect https Message-ID: I noticed that trac.nginx.org has https/SNI configured for the host but no actual ssl configuration (how do you even do that): $ openssl s_client -connect trac.nginx.org:443 -servername trac.nginx.org CONNECTED(00000003) 140010415498912:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:770: --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 7 bytes and written 318 bytes --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE --- Relevant (which is how I noticed it in the first place): https://github.com/EFForg/https-everywhere/pull/1993 From francis at daoine.org Thu Jun 25 17:21:04 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 25 Jun 2015 18:21:04 +0100 Subject: unknown directive "thread_pool" In-Reply-To: <558AD2D9.5040104@nginx.com> References: <6C4D3983-F9E9-47AF-B9A0-9576C76BAB1F@nginx.com> <38649b7e498a796bed0ee4db7821d477.NginxMailingListEnglish@forum.nginx.org> <20150624153654.GK23844@daoine.org> <558AD2D9.5040104@nginx.com> Message-ID: 
<20150625172104.GM23844@daoine.org> On Wed, Jun 24, 2015 at 06:55:05PM +0300, Maxim Konovalov wrote: Hi there, > it seems that you start to learn Russian! :-) That's the joy of machine translation :-) I hope I didn't end up answering the wrong question. > Valentin wrote a good blog post about threads: > > http://nginx.com/blog/thread-pools-boost-performance-9x/ > > There is a complete and simple configuration in the "Configuring > Thread Pools" section, near the end of the post. Thanks for the pointer; I've had a read through it. It looks generally good -- but it does show the thread_pool directive at http level: possibly either that or the documentation at http://nginx.org/en/docs/ngx_core_module.html#thread_pool should be updated to be consistent? I think the original poster has the answers wanted -- thread_pool goes outside http, and probably isn't needed on FreeBSD for file reads -- so it all looks good. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Jun 25 17:53:08 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 25 Jun 2015 18:53:08 +0100 Subject: Understanding alias (used as rewrite) In-Reply-To: <1435191592.10836.YahooMailBasic@web142405.mail.bf1.yahoo.com> References: <20150624154238.GL23844@daoine.org> <1435191592.10836.YahooMailBasic@web142405.mail.bf1.yahoo.com> Message-ID: <20150625175308.GN23844@daoine.org> On Wed, Jun 24, 2015 at 05:19:52PM -0700, E.B. wrote: Hi there, > > Could you explain? > > Sure I wanted to take a prefix: > > /my-long-base-path > > and make sure all uris that match > that prefix are served to the same > file. For this mail thread, that's probably fine. But for a general solution, you'll want to make sure that you are clear about what "served to the same file" means. If it is "serve a file from filesystem", then alias and/or try_files may be useful. If it is "send to an upstream such as via fastcgi_pass", then alias may not be needed at all. 
> I got a rewrite to do that perfectly > but I see a lot of talk that rewrites > are what you used in apache and are > slow and not the nginx way of thinking. > and it make sense that because the > target file is fixed that avoiding > the rewrite engine will be faster just > pointing to a static aliased file. > > I digress but hearing your opinion on > this is interesting to me. Generally, if I can avoid rewrite but achieve the same result simply, that's what I'd tend to do. But without knowing what the original rewrite was doing, it was hard to guess. > > "alias" is, as you noted, documented at http://nginx.org/r/alias. It is > > used to identify a filename that should be used to handle the request. > > Im still wondering why the critical part > you kindly explaining below is not documented. > Isn't this basic feature and important info > to know? My suspicion is that what is documented is clear enough to the writer; and no reader had previously pointed out any problems. (I think that my explanation does follow from the documented words; but I can see how it may not be immediately obvious. So if the nginx documentation team is reading: this thread has some suggested alternate/additional words for the documentation of "alias".) > i discovered in the fastcgi conf file there > was a: > try_files $uri =404 > > so it overrided the alias and causeding > the problem. Yes; "alias" and "try_files" may not work together as you may immediately expect. It doesn't actually *override* the alias; but it does do (and fail) a test that you probably do not want it to do. (And that try_files line is part of "the configuration that is not yet shown". The "include" file could have almost anything in it which would change the way that the configuration is understood.) > i guess this was a security > measure to prevent sneaking around the > filesystem for php requests. I confess I've never been quite sure of the point of that line. 
I can see what it does, and I think that it might be useful in some limited circumstances which include "...and my php is configured badly and I won't change it..."; but I've tried to avoid those circumstances. > is there a > better way to effect same protection? If you can specify what you consider the "same protection" to be, then maybe. And kudos for correct use of the verb "to effect" ;-) > try_files $request_filename =404???? That won't do what you want because of how try_files handles its not-last arguments. Possibly in this one specific case -- so not in fastcgi.conf that is included elsewhere -- try_files "" =404; would do it. But you know that you are sending SCRIPT_FILENAME (or whatever your fastcgi server honours) set to one specific filename only, and you know that the matching file exists. So what is the test doing that would be bad if it were not done? Cheers, f -- Francis Daly francis at daoine.org From ryd994 at 163.com Thu Jun 25 23:10:45 2015 From: ryd994 at 163.com (ryd994) Date: Thu, 25 Jun 2015 23:10:45 +0000 Subject: Convert Apache .htaccess rewrite to nginx In-Reply-To: <167b6a0dc8c96afc2ea37fee2dd485ac.NginxMailingListEnglish@forum.nginx.org> References: <88e0ad2d1117b67e776b6e11383a1a81.NginxMailingListEnglish@forum.nginx.org> <167b6a0dc8c96afc2ea37fee2dd485ac.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Wed, Jun 24, 2015, 15:31 nngin wrote: searching the site, i found that this same question goes unanswered. Please point me in the right direction of where i would be able to get help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259810,259876#msg-259876 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx Not quite familiar with apache rewrite rules, but I think you just need: rewrite ^(.*)/$ $1 permanent; try_files $uri /index.php; Not tested, apologies if I'm wrong. 
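Placed in a server block, that suggestion would sit roughly like this (an untested sketch matching the lines above; the listen/server_name values are illustrative, and "(.+)" is used instead of "(.*)" so the bare "/" URI is not rewritten to an empty one):

```nginx
server {
    listen 80;
    server_name example.com;

    # strip a single trailing slash with a permanent (301) redirect
    rewrite ^(.+)/$ $1 permanent;

    location / {
        # serve the file if it exists, else fall back to the front controller
        try_files $uri /index.php;
    }
}
```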
-------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Fri Jun 26 07:22:59 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 26 Jun 2015 10:22:59 +0300 Subject: unknown directive "thread_pool" In-Reply-To: <20150625172104.GM23844@daoine.org> References: <6C4D3983-F9E9-47AF-B9A0-9576C76BAB1F@nginx.com> <38649b7e498a796bed0ee4db7821d477.NginxMailingListEnglish@forum.nginx.org> <20150624153654.GK23844@daoine.org> <558AD2D9.5040104@nginx.com> <20150625172104.GM23844@daoine.org> Message-ID: <558CFDD3.7050206@nginx.com> On 6/25/15 8:21 PM, Francis Daly wrote: > On Wed, Jun 24, 2015 at 06:55:05PM +0300, Maxim Konovalov wrote: > > Hi there, > >> it seems that you start to learn Russian! :-) > > That's the joy of machine translation :-) > > I hope I didn't end up answering the wrong question. > >> Valentin wrote a good blog post about threads: >> >> http://nginx.com/blog/thread-pools-boost-performance-9x/ >> >> There is a complete and simple configuration in the "Configuring >> Thread Pools" section, near the end of the post. > > Thanks for the pointer; I've had a read through it. It looks > generally good -- but it does show the thread_pool directive > at http level: possibly either that or the documentation at > http://nginx.org/en/docs/ngx_core_module.html#thread_pool should be > updated to be consistent? > Good catch. It should be fixed. Thanks. > I think the original poster has the answers wanted -- thread_pool goes > outside http, and probably isn't needed on FreeBSD for file reads -- > so it all looks good. > > Cheers, > > f > -- Maxim Konovalov http://nginx.com From cj.wijtmans at gmail.com Fri Jun 26 08:46:35 2015 From: cj.wijtmans at gmail.com (Christ-Jan Wijtmans) Date: Fri, 26 Jun 2015 10:46:35 +0200 Subject: Dash in request url messes up regex? Message-ID: I am requesting the url /magento-check.php and it gives me the php code instead of running it through fpm. Other php files work just fine. 
Seems like the dash is screwing with the php regex location and going through the root location with try_files, serving the php code.

location ~* \.php(/.*)?$ {
    if (!-e $request_filename) {
        return 404;
    }
    fastcgi_pass unix:/var/run/php-fpm/blah.sock;
    fastcgi_split_path_info ^(.*\.php)(/.*)?$;
    include fastcgi.conf;
    expires off;
}

location / {
    try_files $uri $uri/ =404;
    expires 28d;
}

Live long and prosper, Christ-Jan Wijtmans https://github.com/cjwijtmans http://facebook.com/cj.wijtmans http://twitter.com/cjwijtmans From nginx-forum at nginx.us Fri Jun 26 10:08:54 2015 From: nginx-forum at nginx.us (klinem2112) Date: Fri, 26 Jun 2015 06:08:54 -0400 Subject: Using threads and poll in nginx module Message-ID: <3a457dd5b8294fe2b30709180afe4d53.NginxMailingListEnglish@forum.nginx.org> Dear all, I ran into the following problem when writing a module for nginx under Linux. Within my module I have to use a library which internally uses multiple threads (pthreads) as well as poll. When using/calling methods of this library in the main initialization handler of my nginx module, everything works fine. Problem: but when I try to execute the same code within other handler methods, the library no longer works, and method calls that internally use poll and threads seem to hang forever. So my questions are: 1) Is it possible to use a library in an nginx module that internally uses poll and multiple threads? 2) If not, is there any approach to using such a library within an nginx module? Thanks for your help. Best regards Kline Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259897,259897#msg-259897 From nginx-forum at nginx.us Fri Jun 26 14:13:04 2015 From: nginx-forum at nginx.us (vgallissot) Date: Fri, 26 Jun 2015 10:13:04 -0400 Subject: Requests are not concurrent with SPDY and proxy_cache modules Message-ID: Hi all, I'm experiencing a strange behaviour in my SPDY tests. I use nginx as a reverse proxy for a Node.js application.
I enabled spdy module and proxy_cache (nginx.conf file bellow) and I actually see no difference in requests speed. Requests are well handed by SPDY protocol (one TCP connection and multiple streams catched with tcpdump), but they should be concurrent and they actually are sequentials like without SPDY support. Here are two screenshots of the waterfalls to see the behaviour 1) With proxy_cache support and SPDY not enabled : https://lut.im/9QKXlT5T/LVFNWqxN 2) With proxy_cache support and SPDY enabled : https://lut.im/DmJ2wqFp/43ZOyhTE Here is the configuration file I use to test on my laptop : 8------------------------------------------------------------------------------------------------------------------------------------> ... ####### tests SPDY keepalive_requests 1000; keepalive_timeout 10; client_body_timeout 10; client_header_timeout 10; types_hash_max_size 2048; server_tokens off; server_names_hash_bucket_size 128; server_name_in_redirect off; proxy_http_version 1.1; fastcgi_buffers 256 4k; gzip_static on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_vary on; gzip_http_version 1.1; ssl_session_timeout 5m; ssl_session_cache shared:SSL:20m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128:AES256:AES:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK'; ssl_prefer_server_ciphers on; ssl_ecdh_curve secp384r1; ssl_dhparam /etc/nginx/ssl/dhparam.pem; upstream node_servers { keepalive 64; server node-server1; server node-server2; } # 
cache options proxy_buffering on; proxy_cache_path /etc/nginx/cache/ levels=1:1:2 keys_zone=nodes:256m inactive=2048m max_size=4096m; proxy_ignore_client_abort off; proxy_intercept_errors on; proxy_ignore_headers X-Accel-Expires Expires Cache-Control; proxy_cache_methods GET; spdy_keepalive_timeout 10s; # inactivity timeout after which the SPDY connection is closed spdy_recv_timeout 4s; # timeout if nginx is currently expecting data from the client but nothing arrives server { listen 127.0.0.1:443 ssl spdy; listen [::1]:443 ssl spdy; server_name beta.mydomain.com; ssl_certificate /etc/nginx/ssl/cert.pem; ssl_certificate_key /etc/nginx/ssl/key.pem; add_header Alternate-Protocol "443:npn-spdy/3.1"; location / { proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_redirect off; proxy_pass http://node_servers; proxy_cache nodes; proxy_cache_valid any 1h; proxy_cache_min_uses 10; add_header X-Nginx-Cached $upstream_cache_status; } } ... <------------------------------------------------------------------------------------------------------------------------------------8 More infos : nginx-1.8.0-4.fc22.x86_64 So my question is : can those two modules work together or am I missing something in the conf ? Thank you Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259906,259906#msg-259906 From vbart at nginx.com Fri Jun 26 15:02:36 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 26 Jun 2015 18:02:36 +0300 Subject: Requests are not concurrent with SPDY and proxy_cache modules In-Reply-To: References: Message-ID: <3503647.RysauILhMo@vbart-workstation> On Friday 26 June 2015 10:13:04 vgallissot wrote: > Hi all, > > I'm experiencing a strange behaviour on my SPDY tests. > > I use nginx as a reverse proxy of a node.Js application. 
> > I enabled spdy module and proxy_cache (nginx.conf file bellow) and I > actually see no difference in requests speed. SPDY (and HTTP/2.0 as well) has no performance benefits for low-latency connections. > Requests are well handed by > SPDY protocol (one TCP connection and multiple streams catched with > tcpdump), but they should be concurrent and they actually are sequentials > like without SPDY support. > > Here are two screenshots of the waterfalls to see the behaviour > 1) With proxy_cache support and SPDY not enabled : > https://lut.im/9QKXlT5T/LVFNWqxN > > 2) With proxy_cache support and SPDY enabled : > https://lut.im/DmJ2wqFp/43ZOyhTE > [..] You're testing on loopback interface with about zero latency. So nginx had time and throughput to respond on the request before starting processing the next one, since most of your resources are just a few kilobytes. wbr, Valentin V. Bartenev From francis at daoine.org Fri Jun 26 17:45:07 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 26 Jun 2015 18:45:07 +0100 Subject: Dash in request url messes up regex? In-Reply-To: References: Message-ID: <20150626174507.GO23844@daoine.org> On Fri, Jun 26, 2015 at 10:46:35AM +0200, Christ-Jan Wijtmans wrote: Hi there, > I am requesting the url /magento-check.php and it gives me the php > code instead of running it through fpm. Other php files work just > fine. Does "curl -i http://your-server/magento-check.php" show a http header which indicates that PHP is processing the file? Is there any difference in response if you add " Message-ID: <1435388667.8456.YahooMailBasic@web142406.mail.bf1.yahoo.com> > > i guess this was a security > > measure to prevent sneaking around the > > filesystem for php requests. > > I confess I've never been quite sure of the point of that line. 
> > I can see what it does, and I think that it might be useful in some > limited circumstances which include "...and my php is configured badly > and I won't change it..."; but I've tried to avoid those circumstances. > > > is there a > > better way to effect same protection? > > If you can specify what you consider the "same protection" to be, > then maybe. > > And kudos for correct use of the verb "to effect" ;-) > > > try_files $request_filename =404???? > > That won't do what you want because of how try_files handles its not-last > arguments. > > Possibly in this one specific case -- so not in fastcgi.conf that is > included elsewhere -- > > try_files "" =404; > > would do it. But you know that you are sending SCRIPT_FILENAME (or > whatever your fastcgi server honours) set to one specific filename only, > and you know that the matching file exists. So what is the test doing > that would be bad if it were not done? I read more about what the "security protection" could have been aiming at, and I think it was as you suspect: coverage for a bad php config. To answer your last question: php in some cases could execute code hidden in a .gif file if the .php path didn't exist ("http://example.org/test.gif/test.php"), so the test was trying to verify whether test.php exists or not. I think it's not the best way to protect against this. Thanks for your kind and helpful response! From nginx-forum at nginx.us Sat Jun 27 13:45:29 2015 From: nginx-forum at nginx.us (guillefar) Date: Sat, 27 Jun 2015 09:45:29 -0400 Subject: Malware in /tmp/nginx_client Message-ID: The software maldet discovered some malware in the /tmp/nginx_client directory, like this: > {HEX}php.cmdshell.unclassed.357 : /tmp/nginx_client/0050030641 > {HEX}php.cmdshell.unclassed.357 : /tmp/nginx_client/0060442670 I did some research, and found that there was indeed some malicious code in them.
I did an extensive search of the sites, and nothing malicious was found, including the code that appeared in the tmp files. Around the time the files were created, there were similar requests to non-existent WordPress plugins, and to a file of the WordPress backend. Digging up a little, I found this: blog.inurl.com.br/2015/03/wordpress-revslider-exploit-0day-inurl.html Basically an exploit for a WordPress plugin vulnerability (it doesn't affect my sites, though) that makes similar requests to the ones I found. One of those is a POST request that includes an attacker's PHP file that, thanks to this vulnerability, will be uploaded to the site and can be run by the attacker. So what seems to be happening is that nginx is caching POST requests with malicious code, which is later found by the antimalware software. Could this be the case? I read that nginx doesn't cache POST requests by default, so it seems odd. Is there a way to know whether those tmp files are caching internal or external content? I will be thankful for any info about it. Nginx is working as a reverse proxy only.
This is a bit of another file that was marked as malware: > > --13530703071348311 > Content-Disposition: form-data; name="uploader_url" > > http:/MISITE/wp-content/plugins/wp-symposium/server/php/ > --13530703071348311 > Content-Disposition: form-data; name="uploader_uid" > 1 > --13530703071348311 > Content-Disposition: form-data; name="uploader_dir" > > ./NgzaJG > --13530703071348311 > Content-Disposition: form-data; name="files[]"; filename="SFAlTDrV.php" > Content-Type: application/octet-stream Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259948,259948#msg-259948 From lucas at slcoding.com Sat Jun 27 14:35:20 2015 From: lucas at slcoding.com (Lucas Rolff) Date: Sat, 27 Jun 2015 16:35:20 +0200 Subject: Malware in /tmp/nginx_client In-Reply-To: References: Message-ID: <558EB4A8.2010601@slcoding.com> It's not harmful that they're there, but you could simply exclude the /tmp/nginx_client folder from maldet, It's due to the option client_body_in_file_only being set to on in your nginx.conf (Sounds like you're using http://www.nginxcp.com/ for cpanel) > guillefar > 27 Jun 2015 15:45 > The software maldet, discovered some malware in the the /tmp/nginx_client > directory, like this: > >> {HEX}php.cmdshell.unclassed.357 : /tmp/nginx_client/0050030641 >> {HEX}php.cmdshell.unclassed.357 : /tmp/nginx_client/0060442670 > > I did some research, and found out that indeed, there were some malicious > code in them. > > I did a extensive search in the sites, and nothing malicious was found, > including the code that appeared in the tmp files. > > Around the time the files were created, there were similar requests, to non > existent Worpress plugins, and to a file of the Worpres backend. > > Digging up a little, I found this: > blog.inurl.com.br/2015/03/wordpress-revslider-exploit-0day-inurl.html > > Basically an exploit for a Wordpress plugin vulnerability, (it doesn't > affect my sites, though), that do similar requests to the ones I found. 
> > One of those, is a post request that includes an attacker's php, file that > thanks to this vulnerability will be uploaded to the site and it can be run > by the attacker. > > So what it seems to be happenning is that nxing is caching post requests > with malicious code, that later is found by the antimalware software. > > Could this be the case? I read and seems that Nginx does't cache post > request by default, so it seems odd. > > Is there a way to know if that tmp files are caching internal or external > content? > > I will be thankful for any info about it. > > Nginx is working as reverse proxy only. > > > This is a bit of another file that was marked as malware: > >> --13530703071348311 >> Content-Disposition: form-data; name="uploader_url" >> >> http:/MISITE/wp-content/plugins/wp-symposium/server/php/ >> --13530703071348311 >> Content-Disposition: form-data; name="uploader_uid" > >> 1 >> --13530703071348311 >> Content-Disposition: form-data; name="uploader_dir" >> >> ./NgzaJG >> --13530703071348311 >> Content-Disposition: form-data; name="files[]"; filename="SFAlTDrV.php" >> Content-Type: application/octet-stream > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259948,259948#msg-259948 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From rvrv7575 at yahoo.com Sun Jun 28 15:12:23 2015 From: rvrv7575 at yahoo.com (Rv Rv) Date: Sun, 28 Jun 2015 15:12:23 +0000 (UTC) Subject: Documentation of buf struct Message-ID: <1127573755.783251.1435504343438.JavaMail.yahoo@mail.yahoo.com> I am not very clear on the purpose of different data members within the buf structure.(appended below) After looking through the code, I can figure out the purpose of? - pos,last (sliding window)? 
- file_pos, file_last, start, end (data start and end)
- tag (which module owns this buf)
- file (name of the file, if any, associated with the data)
- memory (cannot be released by any module that processes the buf)
- mmap (buf is a memory map)
- last_in_chain (last in the chain of bufs)
- last_buf (last in the response)
- For temporary: can the temporary buffer be released by any module that processes it, or can it be released only by the module that owns it as indicated in the tag?

It would be good if the purpose of the other data members were described also. Thanks for any inputs.

struct ngx_buf_s {
    u_char          *pos;
    u_char          *last;
    off_t            file_pos;
    off_t            file_last;

    u_char          *start;         /* start of buffer */
    u_char          *end;           /* end of buffer */
    ngx_buf_tag_t    tag;
    ngx_file_t      *file;
    ngx_buf_t       *shadow;

    /* the buf's content could be changed */
    unsigned         temporary:1;

    /*
     * the buf's content is in a memory cache or in a read only memory
     * and must not be changed
     */
    unsigned         memory:1;

    /* the buf's content is mmap()ed and must not be changed */
    unsigned         mmap:1;

    unsigned         recycled:1;
    unsigned         in_file:1;
    unsigned         flush:1;
    unsigned         sync:1;
    unsigned         last_buf:1;
    unsigned         last_in_chain:1;

    unsigned         last_shadow:1;
    unsigned         temp_file:1;

    /* STUB */ int   num;
};

-------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Jun 28 16:20:06 2015 From: nginx-forum at nginx.us (prozit) Date: Sun, 28 Jun 2015 12:20:06 -0400 Subject: OCSP stapling for client certificates In-Reply-To: <20140827165554.GW1849@mdounin.ru> References: <20140827165554.GW1849@mdounin.ru> Message-ID: Hi, Actually, I had the same questions.
Is this something that's available by now, or is it in the pipeline of any new release of Nginx or will it never be? I'm just asking since I believe this might be a good feature to add since CRL's could get very big when lots of certificate have been revoked, and since it is not a realtime updating mechanism. By using a OCSP, there is a little overhead of contacting the OCSP for checking each client certificate that is being validated... I believe this to be much more efficient than regularly downloading/uploading a CRL and reloading Nginx. This process can fail on multiple locations which makes it harder to track and a big disadvantage of the CRL's is that they are not realtime updated, which is the case for OCSP's. This way revoking a certificate will cause it to immediately retract the access to client certificate secured applications (for all new sessions). Is it already supported in some version of Nginx or is it planned somewhere in the future? Many thanks, Kind regards, Francis Claessens. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252893,259954#msg-259954 From nginx-forum at nginx.us Mon Jun 29 09:08:25 2015 From: nginx-forum at nginx.us (guillefar) Date: Mon, 29 Jun 2015 05:08:25 -0400 Subject: Malware in /tmp/nginx_client In-Reply-To: <558EB4A8.2010601@slcoding.com> References: <558EB4A8.2010601@slcoding.com> Message-ID: Yes, I think I will do that. So they are indeed cached requests? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259948,259960#msg-259960 From lucas at slcoding.com Mon Jun 29 09:12:22 2015 From: lucas at slcoding.com (Lucas Rolff) Date: Mon, 29 Jun 2015 11:12:22 +0200 Subject: Malware in /tmp/nginx_client In-Reply-To: References: <558EB4A8.2010601@slcoding.com> Message-ID: <55910BF6.70605@slcoding.com> It's not cached requests: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_in_file_only It's just temporary files that isn't deleted, if you set it to 'clean' instead of 'on' - I think you can even put it off as default... I don't know why it's actually enabled. > guillefar > 29 Jun 2015 11:08 > Yes, I think I will do that. So they are indeed cached requests? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,259948,259960#msg-259960 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Lucas Rolff > 27 Jun 2015 16:35 > It's not harmful that they're there, but you could simply exclude the > /tmp/nginx_client folder from maldet, > > It's due to the option client_body_in_file_only being set to on in > your nginx.conf (Sounds like you're using http://www.nginxcp.com/ for > cpanel) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prakash.prax at gmail.com Mon Jun 29 10:29:14 2015 From: prakash.prax at gmail.com (Prakash Premkumar) Date: Mon, 29 Jun 2015 15:59:14 +0530 Subject: Nginx and fastcgi with c Message-ID: Where can I find good tutorials on writing fastcgi applications with nginx and c ? ( Googling didn't help much. ) How do I handle HTTP requests received by the nginx server with c code ? Please point out some resources towards learning the same . Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From krebs.seb at gmail.com Mon Jun 29 10:35:17 2015 From: krebs.seb at gmail.com (Sebastian Krebs) Date: Mon, 29 Jun 2015 12:35:17 +0200 Subject: Nginx and fastcgi with c In-Reply-To: References: Message-ID: Hi, The specs helped me write a client, so they should help with writing a server as well. Maybe there's also a ready-to-use library out there. Worth mentioning that this is independent of nginx. Theoretically you just need to listen on a socket, parse the request per the specs, *do something* and return the response, again per the specs. See http://www.fastcgi.com/devkit/doc/fcgi-spec.html 2015-06-29 12:29 GMT+02:00 Prakash Premkumar : > Where can I find good tutorials on writing fastcgi applications with nginx > and c ? ( Googling didn't help much. ) > > How do I handle HTTP requests received by the nginx server with c code ? > > Please point out some resources towards learning the same . > > Thanks > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- github.com/KingCrunch -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Jun 29 13:05:10 2015 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 29 Jun 2015 09:05:10 -0400 Subject: Small bug in src/stream/ngx_stream_proxy_module.c Message-ID: (1066) : warning C4244: '=' : conversion from 'off_t' to 'size_t', possible loss of data

diff at line 1066:

 if (size > (size_t) limit) {
-    size = limit;
+    size = (size_t) limit;
 }

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259969,259969#msg-259969 From shahzaib.cb at gmail.com Mon Jun 29 23:28:32 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 30 Jun 2015 04:28:32 +0500 Subject: Nginx-1.9.2 fatal code 2 !!
Message-ID: Hi, We've just compiled latest nginx-1.9.2 on Debian wheezy 7 in order to utilize aio threads directive for our storage but nginx started to crash since we enabled aio threads on it. Following is the compiled options and log about the crash : root at archive3:/usr/local/nginx/conf/vhosts# nginx -V nginx version: nginx/1.9.2 built by gcc 4.7.2 (Debian 4.7.2-5) configure arguments: --sbin-path=/usr/local/sbin/nginx --with-http_flv_module --with-http_mp4_module --with-threads --with-stream --with-debug error_log : 2015/06/30 04:14:07 [alert] 32076#32076: worker process 11097 exited with fatal code 2 and cannot be respawned 2015/06/30 04:14:03 [alert] 32079#32079: pthread_create() failed (11: Resource temporarily unavailable) 2015/06/30 04:14:07 [alert] 32076#32076: worker process 17232 exited with fatal code 2 and cannot be respawned 2015/06/30 04:14:07 [alert] 32076#32076: worker process 18584 exited with fatal code 2 and cannot be respawned 2015/06/30 04:14:07 [alert] 32076#32076: worker process 595 exited with fatal code 2 and cannot be respawned 2015/06/30 04:14:07 [alert] 32076#32076: worker process 32121 exited with fatal code 2 and cannot be respawned 2015/06/30 04:14:07 [alert] 32076#32076: worker process 7557 exited with fatal code 2 and cannot be respawned 2015/06/30 04:14:07 [alert] 32076#32076: worker process 16852 exited with fatal code 2 and cannot be respawned 2015/06/30 04:14:07 [alert] 32076#32076: worker process 32083 exited with fatal code 2 and cannot be respawned 2015/06/30 04:14:07 [alert] 32076#32076: worker process 5933 exited with fatal code 2 and cannot be respawned 2015/06/30 04:14:07 [alert] 32076#32076: worker process 32079 exited with fatal code 2 and cannot be respawned 2015/06/30 04:14:03 [alert] 25360#25360: pthread_create() failed (11: Resource temporarily unavailable) 2015/06/30 04:14:03 [alert] 18540#18540: pthread_create() failed (11: Resource temporarily unavailable) 2015/06/30 04:14:03 [alert] 11093#11093: 
pthread_create() failed (11: Resource temporarily unavailable) 2015/06/30 04:14:03 [alert] 23953#23953: pthread_create() failed (11: Resource temporarily unavailable) Thanks in advance. Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From cherian.in at gmail.com Tue Jun 30 01:27:11 2015 From: cherian.in at gmail.com (Cherian Thomas) Date: Mon, 29 Jun 2015 18:27:11 -0700 Subject: Serving from cache even when the origin server goes down Message-ID: Is it possible to configure Nginx to serve from cache even when the origin server is not accessible? I am trying to figure out if I can use a replicated Nginx instance that has cache files rsynced (lsyncd ) from the primary instance and serve from the replicated instance (DNS switch) if the primary goes down. - Cherian -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jun 30 02:34:53 2015 From: nginx-forum at nginx.us (Nousfeed) Date: Mon, 29 Jun 2015 22:34:53 -0400 Subject: Reverse proxy setup problem Message-ID: I have created a reverse proxy for 2 web servers both are running about 10 sites each, they use to be on seperate external IP's but now need to be behind the same one. Using Nginx I setup a reverse proxy on a seperate VM, Its seems to work but one site is refusing to go to the correct server. I have nothing in /conf.d/ I am using /sites-available and /sites-enabled, each site has its own config file. Im using proxy_pass with the servers IP address. What am I doing wrong? 
server {
    listen 10.0.0.125:80;
    server_name .example.com;

    location / {
        proxy_pass http://10.0.0.110/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_max_temp_file_size 0;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259982,259982#msg-259982

From lucas at slcoding.com  Tue Jun 30 05:01:06 2015
From: lucas at slcoding.com (Lucas Rolff)
Date: Tue, 30 Jun 2015 07:01:06 +0200
Subject: Serving from cache even when the origin server goes down
In-Reply-To: 
References: 
Message-ID: <55922292.9030207@slcoding.com>

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_use_stale

You can use multiple values; e.g. the below is probably a good start:

proxy_cache_use_stale error timeout invalid_header updating;

> Cherian Thomas
> 30 Jun 2015 03:27
>
> Is it possible to configure Nginx to serve from cache even when the
> origin server is not accessible?
>
> I am trying to figure out if I can use a replicated Nginx instance
> that has cache files rsynced (lsyncd)
> from the primary instance and serve from the replicated instance (DNS
> switch) if the primary goes down.
>
> - Cherian
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From oyljerry at gmail.com  Tue Jun 30 06:02:53 2015
From: oyljerry at gmail.com (Jerry OELoo)
Date: Tue, 30 Jun 2015 14:02:53 +0800
Subject: Can thread pool improve performance for such scenario
Message-ID: 

Hi All:
I am using Nginx as a reverse proxy which provides a web API (HTTP GET) to clients,
and the backend application gets the request from nginx, does some time-consuming processing (1-2 seconds), then returns the result to nginx, and nginx returns the result to the client. I think this is a synchronous operation.

As I know, nginx has introduced the thread pools feature, so is it useful for my scenario, and will it improve performance?

-- 
Rejoice, I Desire!

From pturrr at gmail.com  Tue Jun 30 06:41:42 2015
From: pturrr at gmail.com (alexey ptushkin)
Date: Tue, 30 Jun 2015 09:41:42 +0300
Subject: slow response times
Message-ID: <55923A26.6080800@gmail.com>

We are currently using nginx/1.7.10 @ Ubuntu 12.04 for serving a high-load webapp with about 12-15k QPS, mostly tiny requests in HTTP/1.1 keep-alive mode. Nginx is used as a proxy for http backends. From time to time we notice traffic gaps, and during this time nginx answers very slowly (up to 20-25s) across all virtual hosts. There are no error messages in the general error log, and the server also has plenty of resources available.

The nature of the service is that the http backends have a very low read/connect proxy timeout value (40ms), and nginx should be able to send a static file during error interception; those timeouts happen very often (2-3k per second). But unfortunately we can't make it stable enough to provide a fast static answer during backend slowness periods, since it hangs and answers very slowly.

Location configuration:

    access_log off;
    error_page 500 501 502 503 504 408 404 =200 /file.dat;
    include proxy_params;
    proxy_intercept_errors on;
    proxy_cache off;
    proxy_redirect off;
    proxy_pass_request_body on;
    proxy_pass_request_headers on;
    proxy_next_upstream off;
    proxy_read_timeout 40ms;
    proxy_connect_timeout 40ms;
    proxy_send_timeout 40ms;
    set $args src=g2;
    proxy_pass http://upstream/;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}

Stub_status during normal operation:

Active connections: 831
server accepts handled requests
 919758 919758 945641607
Reading: 0 Writing: 44 Waiting: 787

I can share any needed debug/stats info for sure.
Thank you for your cooperation.

From shahzaib.cb at gmail.com  Tue Jun 30 07:52:47 2015
From: shahzaib.cb at gmail.com (shahzaib shahzaib)
Date: Tue, 30 Jun 2015 12:52:47 +0500
Subject: Can thread pool improve performance for such scenario
In-Reply-To: 
References: 
Message-ID: 

Hi,

This is what they have to say about threads:

    Offloading read operations to the thread pool is a technique applicable
    to very specific tasks. It is most useful where the volume of frequently
    requested content doesn't fit into the operating system's VM cache. This
    might be the case with, for instance, a heavily loaded NGINX-based
    streaming media server. This is the situation we've simulated in our
    benchmark.

---------------------------------------------------------------------------

So if you have storage which doesn't fit into RAM, such as 2 TB of storage with 32 GB of RAM, threads could be useful in that case. Otherwise, nginx is already robust enough at serving concurrent requests for standard file types such as jpeg, css, html and many more. You can read more about threads at the following link:

http://nginx.com/blog/thread-pools-boost-performance-9x/

Regards.
Shahzaib

On Tue, Jun 30, 2015 at 11:02 AM, Jerry OELoo wrote:

> Hi All:
> I am using Nginx as a reverse proxy which provide a web API (HTTP GET
> ) to client.
> and the backend application will get request from nginx and do some
> time-consuming processing (1-2 seconds) then response result to nginx,
> Nginx return result to client.
> I think this is synchronize operation.
>
> As I know, Nginx import thread pools feature, so is it useful for my
> scenario and improve performance?
>
> --
> Rejoice,I Desire!
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
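[Editor's note] The scenario Shahzaib describes (storage much larger than RAM) maps to a small configuration along these lines; the pool name, thread counts and paths below are illustrative, not taken from the thread:

```nginx
# main (top-level) context: define a named pool of worker threads
thread_pool storage threads=32 max_queue=65536;

http {
    server {
        listen 80;

        location /videos/ {
            root /data;            # store assumed to be far larger than RAM
            aio  threads=storage;  # blocking disk reads go to the "storage" pool
        }
    }
}
```

If no pool name is given, `aio threads;` uses the built-in pool named `default` (32 threads, queue of 65536).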
URL: 

From luky-37 at hotmail.com  Tue Jun 30 09:17:33 2015
From: luky-37 at hotmail.com (Lukas Tribus)
Date: Tue, 30 Jun 2015 11:17:33 +0200
Subject: Can thread pool improve performance for such scenario
In-Reply-To: 
References: 
Message-ID: 

> Hi All:
> I am using Nginx as a reverse proxy which provide a web API (HTTP GET
> ) to client.
> and the backend application will get request from nginx and do some
> time-consuming processing (1-2 seconds) then response result to nginx,
> Nginx return result to client.
> I think this is synchronize operation.

What backend protocol? HTTP/TCP? That's not blocking at all, no need to improve anything. What the thread pool currently improves is sendfile() I/O operations.

Lukas

From arut at nginx.com  Tue Jun 30 10:00:06 2015
From: arut at nginx.com (Roman Arutyunyan)
Date: Tue, 30 Jun 2015 13:00:06 +0300
Subject: Small bug in src/stream/ngx_stream_proxy_module.c
In-Reply-To: 
References: 
Message-ID: 

Hello,

> On 29 Jun 2015, at 16:05, itpp2012 wrote:
>
> (1066) : warning C4244: '=' : conversion from 'off_t' to 'size_t', possible
> loss of data
>
> diff line 1066:
>     if (size > (size_t) limit) {
> -       size = limit;
> +       size = (size_t) limit;
>     }
>

What compiler do you have?

-- 
Roman Arutyunyan

From oyljerry at gmail.com  Tue Jun 30 11:19:01 2015
From: oyljerry at gmail.com (Jerry OELoo)
Date: Tue, 30 Jun 2015 19:19:01 +0800
Subject: Can thread pool improve performance for such scenario
In-Reply-To: 
References: 
Message-ID: 

Backend is uwsgi over TCP. Thanks for the clarification. I understand now; the thread pool would not be useful to improve my case.

On Tue, Jun 30, 2015 at 5:17 PM, Lukas Tribus wrote:
>> Hi All:
>> I am using Nginx as a reverse proxy which provide a web API (HTTP GET
>> ) to client.
>> and the backend application will get request from nginx and do some
>> time-consuming processing (1-2 seconds) then response result to nginx,
>> Nginx return result to client.
>> I think this is synchronize operation.
>
> What backend protocol? HTTP/TCP? That's not blocking at all, no need
> to improve anything. What the thread pool currently improves is
> sendfile() I/O operations.
>
>
> Lukas
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-- 
Rejoice, I Desire!

From nginx-forum at nginx.us  Tue Jun 30 11:19:55 2015
From: nginx-forum at nginx.us (smsmaddy1981)
Date: Tue, 30 Jun 2015 07:19:55 -0400
Subject: Static content
Message-ID: 

Hi,
I have NGinx 1.8.0 installed successfully and configured NGinx with the upstream servers provided. NGinx and the services are deployed on separate machines. Now, when a request is made via NGinx, the service is invoked, resulting in a UI with no static content loaded.

I have tried root, rewrite directives... to specify the static content path on another server (workspace.corp.test.no), which didn't help. I might be wrong with the configuration. Pls. assist.

server {
    listen 80;
    server_name workspace.corp.test.no;

    location ~* \.(js|jpg|png|css)$ {
        root /workspace/WEB-INF/classes/static;
        expires 30d;
    }

    location / {
        proxy_pass http://workspace.corp.test.no/workspace/agentLogin/;
    }
}

The above configuration expects the static content on the server where NGinx is installed when a request is made. But I expect the static content to come from the remote server.

Best regards,
Maddy

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259989,259989#msg-259989

From nginx-forum at nginx.us  Tue Jun 30 12:01:38 2015
From: nginx-forum at nginx.us (itpp2012)
Date: Tue, 30 Jun 2015 08:01:38 -0400
Subject: Small bug in src/stream/ngx_stream_proxy_module.c
In-Reply-To: 
References: 
Message-ID: <92db01b216f52cda24a7b7b2cdd468a3.NginxMailingListEnglish@forum.nginx.org>

Roman Arutyunyan Wrote:
-------------------------------------------------------
> What compiler do you have?
A proper one :) vc++

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259969,259991#msg-259991

From arut at nginx.com  Tue Jun 30 12:19:30 2015
From: arut at nginx.com (Roman Arutyunyan)
Date: Tue, 30 Jun 2015 15:19:30 +0300
Subject: Small bug in src/stream/ngx_stream_proxy_module.c
In-Reply-To: <92db01b216f52cda24a7b7b2cdd468a3.NginxMailingListEnglish@forum.nginx.org>
References: <92db01b216f52cda24a7b7b2cdd468a3.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

> On 30 Jun 2015, at 15:01, itpp2012 wrote:
>
> Roman Arutyunyan Wrote:
> -------------------------------------------------------
>> What compiler do you have?
>
> A proper one :) vc++

version?

-- 
Roman Arutyunyan

From nginx-forum at nginx.us  Tue Jun 30 12:38:43 2015
From: nginx-forum at nginx.us (itpp2012)
Date: Tue, 30 Jun 2015 08:38:43 -0400
Subject: Small bug in src/stream/ngx_stream_proxy_module.c
In-Reply-To: 
References: 
Message-ID: <4afa7aa7beae7fd58fec794f011b7c93.NginxMailingListEnglish@forum.nginx.org>

Roman Arutyunyan Wrote:
-------------------------------------------------------
> > On 30 Jun 2015, at 15:01, itpp2012 wrote:
> >
> > Roman Arutyunyan Wrote:
> > -------------------------------------------------------
> >> What compiler do you have?
> >
> > A proper one :) vc++
>
> version?

2010, 2013, 2015, all the same.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259969,259993#msg-259993

From vbart at nginx.com  Tue Jun 30 12:47:33 2015
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Tue, 30 Jun 2015 15:47:33 +0300
Subject: Nginx-1.9.2 fatal code 2 !!
In-Reply-To: 
References: 
Message-ID: <1896857.dCqYHIGOEE@vbart-workstation>

On Tuesday 30 June 2015 04:28:32 shahzaib shahzaib wrote:
> Hi,
>
> We've just compiled latest nginx-1.9.2 on Debian wheezy 7 in order to
> utilize aio threads directive for our storage but nginx started to crash
> since we enabled aio threads on it.
> Following is the compiled options and
> log about the crash :
>
> root at archive3:/usr/local/nginx/conf/vhosts# nginx -V
> nginx version: nginx/1.9.2
> built by gcc 4.7.2 (Debian 4.7.2-5)
> configure arguments: --sbin-path=/usr/local/sbin/nginx
> --with-http_flv_module --with-http_mp4_module --with-threads --with-stream
> --with-debug
>
> error_log :
>
> 2015/06/30 04:14:07 [alert] 32076#32076: worker process 11097 exited with
> fatal code 2 and cannot be respawned
> 2015/06/30 04:14:03 [alert] 32079#32079: pthread_create() failed (11:
> Resource temporarily unavailable)
> 2015/06/30 04:14:07 [alert] 32076#32076: worker process 17232 exited with
> fatal code 2 and cannot be respawned
> 2015/06/30 04:14:07 [alert] 32076#32076: worker process 18584 exited with
> fatal code 2 and cannot be respawned
> 2015/06/30 04:14:07 [alert] 32076#32076: worker process 595 exited with
> fatal code 2 and cannot be respawned
> 2015/06/30 04:14:07 [alert] 32076#32076: worker process 32121 exited with
> fatal code 2 and cannot be respawned
> 2015/06/30 04:14:07 [alert] 32076#32076: worker process 7557 exited with
> fatal code 2 and cannot be respawned
> 2015/06/30 04:14:07 [alert] 32076#32076: worker process 16852 exited with
> fatal code 2 and cannot be respawned
> 2015/06/30 04:14:07 [alert] 32076#32076: worker process 32083 exited with
> fatal code 2 and cannot be respawned
> 2015/06/30 04:14:07 [alert] 32076#32076: worker process 5933 exited with
> fatal code 2 and cannot be respawned
> 2015/06/30 04:14:07 [alert] 32076#32076: worker process 32079 exited with
> fatal code 2 and cannot be respawned
> 2015/06/30 04:14:03 [alert] 25360#25360: pthread_create() failed (11:
> Resource temporarily unavailable)
> 2015/06/30 04:14:03 [alert] 18540#18540: pthread_create() failed (11:
> Resource temporarily unavailable)
> 2015/06/30 04:14:03 [alert] 11093#11093: pthread_create()
> failed (11:
> Resource temporarily unavailable)
>
> Thanks in advance.
>

You should check the NPROC limit on the maximum number of processes/threads, and configure it according to the actual number of threads you're going to use.

wbr, Valentin V. Bartenev

From miguelmclara at gmail.com  Tue Jun 30 14:35:38 2015
From: miguelmclara at gmail.com (Miguel C)
Date: Tue, 30 Jun 2015 15:35:38 +0100
Subject: Static content
In-Reply-To: 
References: 
Message-ID: 

On Tue, Jun 30, 2015 at 12:19 PM, smsmaddy1981 wrote:
> Hi,
> I have NGinx 1.8.0 installed successfully and configured NGinx... with
> upstream servers provided. NGinx and Services are deployed on separate
> machines. Now, when an request is made via NGinx, the service is invoked
> resulting in UI with no static content loaded.
>
> I have tried root, rewrite directives... to specify the static content path
> on another server (workspace.corp.test.no), which didn't helped. I might be
> wrong with configuration. Pls. assist
>
>
> server {
>     listen 80;
>     server_name workspace.corp.test.no;
>
>     location ~* \.(js|jpg|png|css)$ {
>         root /workspace/WEB-INF/classes/static;
>         expires 30d;
>     }
>
>     location / {
>         proxy_pass http://workspace.corp.test.no/workspace/agentLogin/;
>     }
> }
>
>
> And, the above configuration is expecting static content on the server where
> NGinx is installed, when an request is made. But, I expect the static
> content to be expected from remote server.
>

That is the problem! You're telling it the static content is in the local system at '/workspace/WEB-INF/classes/static' when it's not...

Assuming the "frontend" is proxying to another nginx server, you should only define the "location ~* \.(js|jpg|png|css)$" block in the 'inner' (for lack of a better term) nginx server.
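[Editor's note] A minimal sketch of the split Miguel suggests, assuming both machines run nginx; all addresses and ports here are hypothetical:

```nginx
# Front proxy (has no local copy of the static files): forward everything
server {
    listen 80;
    server_name workspace.corp.test.no;

    location / {
        proxy_pass http://10.0.0.50;       # hypothetical "inner" server that holds the files
        proxy_set_header Host $host;
    }
}

# "Inner" server (where /workspace/WEB-INF/classes/static actually exists)
server {
    listen 80;

    location ~* \.(js|jpg|png|css)$ {
        root /workspace/WEB-INF/classes/static;
        expires 30d;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;  # hypothetical application backend
    }
}
```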
>
>
> Best regards,
> Maddy
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259989,259989#msg-259989
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From vbart at nginx.com  Tue Jun 30 14:53:17 2015
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Tue, 30 Jun 2015 17:53:17 +0300
Subject: unknown directive "thread_pool"
In-Reply-To: <558CFDD3.7050206@nginx.com>
References: <6C4D3983-F9E9-47AF-B9A0-9576C76BAB1F@nginx.com> <20150625172104.GM23844@daoine.org> <558CFDD3.7050206@nginx.com>
Message-ID: <4560540.mmj4MUFNpK@vbart-workstation>

On Friday 26 June 2015 10:22:59 Maxim Konovalov wrote:
> On 6/25/15 8:21 PM, Francis Daly wrote:
> > On Wed, Jun 24, 2015 at 06:55:05PM +0300, Maxim Konovalov wrote:
> >
> > Hi there,
> >
> >> it seems that you start to learn Russian! :-)
> >
> > That's the joy of machine translation :-)
> >
> > I hope I didn't end up answering the wrong question.
> >
> >> Valentin wrote a good blog post about threads:
> >>
> >> http://nginx.com/blog/thread-pools-boost-performance-9x/
> >>
> >> There is a complete and simple configuration in the "Configuring
> >> Thread Pools" section, near the end of the post.
> >
> > Thanks for the pointer; I've had a read through it. It looks
> > generally good -- but it does show the thread_pool directive
> > at http level: possibly either that or the documentation at
> > http://nginx.org/en/docs/ngx_core_module.html#thread_pool should be
> > updated to be consistent?
> >
> Good catch. It should be fixed. Thanks.
>

Fixed. Thank you, Francis.

wbr, Valentin V. Bartenev

From cherian.in at gmail.com  Tue Jun 30 15:36:06 2015
From: cherian.in at gmail.com (Cherian Thomas)
Date: Tue, 30 Jun 2015 08:36:06 -0700
Subject: Serving from cache even when the origin server goes down
In-Reply-To: <55922292.9030207@slcoding.com>
References: <55922292.9030207@slcoding.com>
Message-ID: 

Thanks a lot. This is perfect.
- Cherian

On Mon, Jun 29, 2015 at 10:01 PM, Lucas Rolff wrote:
>
> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_use_stale
>
> You can use multiple values e.g. the below is probably a good start:
>
> proxy_cache_use_stale error timeout invalid_header updating;
>
> Cherian Thomas
> 30 Jun 2015 03:27
>
> Is it possible to configure Nginx to serve from cache even when the origin
> server is not accessible?
>
> I am trying to figure out if I can use a replicated Nginx instance that
> has cache files rsynced (lsyncd)
> from the primary instance and serve from the replicated instance (DNS
> switch) if the primary goes down.
> - Cherian
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us  Tue Jun 30 21:04:24 2015
From: nginx-forum at nginx.us (smsmaddy1981)
Date: Tue, 30 Jun 2015 17:04:24 -0400
Subject: Static content
In-Reply-To: 
References: 
Message-ID: 

Hi Mike,

Thanks for your reply. I did not understand your suggestion. Can you please elaborate with an example?

Regards,
Maddy

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259989,260003#msg-260003

From miguelmclara at gmail.com  Tue Jun 30 21:49:41 2015
From: miguelmclara at gmail.com (Miguel C)
Date: Tue, 30 Jun 2015 22:49:41 +0100
Subject: Static content
In-Reply-To: 
References: 
Message-ID: 

You said it yourself:

"And, the above configuration is expecting static content on the server where NGinx is installed, when an request is made. But, I expect the static content to be expected from remote server."
you need to either make them available in that server, in the path you're specifing or just "drop" that location block in this server. Are you're upstream server other nginx servers? Melhores Cumprimentos // Best Regards ----------------------------------------------- Miguel Clara IT - Sys Admin & Developer