From albert at nn.iij4u.or.jp Mon Oct 1 06:51:46 2012 From: albert at nn.iij4u.or.jp (shin fukuda) Date: Mon, 1 Oct 2012 15:51:46 +0900 Subject: If-Modified-Since to upstream servers Message-ID: <20121001155146.ce12af3c99c61d93e8b74db7@nn.iij4u.or.jp> Hi. I'd like to know the status of "1.3 Planned If-Modified-Since to upstream servers". Thank you. -- shin fukuda From roedie at roedie.nl Mon Oct 1 07:29:33 2012 From: roedie at roedie.nl (Sander Klein) Date: Mon, 01 Oct 2012 09:29:33 +0200 Subject: joomla and zend Message-ID: <429260f75b22cc070ff87c0e68d72773@roedie.nl> Hi, I'm trying to figure out how to use a zend application within a joomla website when it is not in the document root of the website. The website is located in /www/customer/some.name and the zend application is located in /usr/local/share/web-app (actually /usr/local/share/web-app/public). Now what I want is the web-app to be accessible using http://some.name/web-app/. I figured I should use something like: location ~ ^/web-app/ { alias /usr/local/share/web-app/public; } But I cannot get this working. I tried using root instead of alias within the location blocks but that didn't seem to help. Can anyone give me a push in the right direction? The config I use for the joomla site is below. server { listen 80; listen [::]:80; server_name some.name; root /www/customer/some.name; access_log /var/log/nginx/access_some.log pic buffer=32k; error_log /var/log/nginx/error_some.log; location / { try_files $uri $uri/ /index.php?$query_string; } location ~ \.php { include /etc/nginx/includes.d/php-fpm_www; include /etc/nginx/includes.d/fastcgi_params; fastcgi_param PHP_VALUE "include_path=."; } } greets, Sander From mdounin at mdounin.ru Mon Oct 1 10:47:19 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Oct 2012 14:47:19 +0400 Subject: Link to the latest stable release of nginx. In-Reply-To: References: Message-ID: <20121001104718.GU40452@mdounin.ru> Hello!
On Fri, Sep 28, 2012 at 02:14:42PM +0200, Jan Wrobel wrote: > Hi, > > Can you put a symbolic link in http://nginx.org/download/ that would > point to the latest stable release of nginx and that would be updated > with each release? > > Something like: http://nginx.org/download/nginx-current.tar.gz? > > It would help with automating the process of compiling the latest nginx. > > Also, there is a lot of documentation on the web that explains how to > get and compile nginx from sources and all these docs become obsolete > over time because they point to the nginx that was available during > the time of writing the doc. I don't like this approach as it makes it impossible to determine the version from a downloaded file name, and makes it impossible to determine what will be overridden on archive extraction without looking into the archive contents. And one would need a version number for automated processing anyway, as it's embedded into the directory structure in release archives. Various "latest" nginx versions are listed here: http://nginx.org/en/download.html It's believed to be enough for most uses. -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Mon Oct 1 16:22:56 2012 From: nginx-forum at nginx.us (hsrmmr) Date: Mon, 01 Oct 2012 12:22:56 -0400 Subject: Nginx to server secure and not secure traffic Message-ID: <7106f602d66263a9c720db9ad5cdbeec.NginxMailingListEnglish@forum.nginx.org> We have secure and non-secure domains for our website, e.g. secure.xyz.com and xyz.com. I used the following link to make a single server handle both port 80 and 443 traffic: http://nginx.org/en/docs/http/configuring_https_servers.html#single_http_https_server server { listen 80; listen 443 ssl; server_name secure.xyz.com xyz.com; .... ssl_certificate secure.xyz.com.crt; ssl_certificate_key secure.xyz.com.key; ... } Everything works fine except that the $_SERVER variable 'SERVER_NAME' is set to 'secure.xyz.com'. My questions are: 1.
Does Nginx always pick the first server name from the config, irrespective of what the client has requested, and pass it to the proxy (php-fpm)? 2. We have a lot of rules, so if we create two separate server blocks (as per the following), do I need to copy the rules in both places? Is there any maintainable way, like 'include /common_rules.conf'? server { listen 443; server_name secure.xyz.com; ssl on; ssl_certificate secure.xyz.com.crt; ... include common_rules.conf; ===>??? } server { listen 80; server_name xyz.com; ... include common_rules.conf; ===>??? } Any help is highly appreciated. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231295,231295#msg-231295 From agentzh at gmail.com Mon Oct 1 18:41:21 2012 From: agentzh at gmail.com (agentzh) Date: Mon, 1 Oct 2012 11:41:21 -0700 Subject: [ANN] ngx_openresty devel version 1.2.3.5 released In-Reply-To: References: Message-ID: Hello, folks! I am happy to announce the new development version of ngx_openresty, 1.2.3.5: http://openresty.org/#Download Special thanks go to all our contributors and users for helping make this happen! Below is the complete change log for this release, as compared to the last (development) release, 1.2.3.3: * upgraded LuaNginxModule to 0.6.8. * bugfix: ngx.re.gmatch might loop infinitely when the pattern matches an empty string. thanks Lance Li and xingxing for tracking this issue down. * bugfix: pattern matching an empty substring in ngx.re.gmatch did not match at the end of the subject string. * bugfix: ngx.re.gsub might enter infinite loops because it could not handle patterns matching empty strings properly. * bugfix: ngx.re.gsub incorrectly skipped matching altogether when the subject string was empty. OpenResty (aka. ngx_openresty) is a full-fledged web application server built by bundling the standard Nginx core, lots of 3rd-party Nginx modules, as well as most of their external dependencies.
See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster to ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Enjoy! -agentzh From nginx-forum at nginx.us Mon Oct 1 19:33:30 2012 From: nginx-forum at nginx.us (Priority1) Date: Mon, 01 Oct 2012 15:33:30 -0400 Subject: Nginx to server secure and not secure traffic In-Reply-To: <7106f602d66263a9c720db9ad5cdbeec.NginxMailingListEnglish@forum.nginx.org> References: <7106f602d66263a9c720db9ad5cdbeec.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5e13b7eda6c66e8f410105a89e346263.NginxMailingListEnglish@forum.nginx.org> Hi. hsrmmr Wrote: ------------------------------------------------------- > My question is : > 1. Does Nginx always picks the first server from the config ... > irrespective of what client has requested and passes to proxy > (php-fpm)? The first name is the actual server name; the others are just aliases. If you want to send the requested hostname to the backend, use $host instead. > 2. We have a lot of rules, so if we create two separate server (as per > following) do I need to copy the rules in both places? It there any > maintainable way, like 'include /common_rules.conf'? Yes, you can. Just use "include conf/share_config.conf;" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231295,231297#msg-231297 From pr1 at pr1.ru Mon Oct 1 19:50:57 2012 From: pr1 at pr1.ru (Andrey Feldman) Date: Mon, 01 Oct 2012 23:50:57 +0400 Subject: max file size for proxy_cache_path Message-ID: <5069F421.5000701@pr1.ru> Hi. Is there any way to limit the size of a file that can be cached into proxy_cache_path?
For example, we have: proxy_cache_path /var/nginx/cache/html levels=1:2 keys_zone=html-cache:18m max_size=5g inactive=60m; So, if someone places a 2G file on the backend and it gets cached, a big part of the useful cached data will be evicted by the cache manager. And a second problem: we can't be sure that proxy_temp_path and proxy_cache_path will not overflow. From nginx-forum at nginx.us Tue Oct 2 04:30:21 2012 From: nginx-forum at nginx.us (hsrmmr) Date: Tue, 02 Oct 2012 00:30:21 -0400 Subject: Nginx to server secure and not secure traffic In-Reply-To: <5e13b7eda6c66e8f410105a89e346263.NginxMailingListEnglish@forum.nginx.org> References: <7106f602d66263a9c720db9ad5cdbeec.NginxMailingListEnglish@forum.nginx.org> <5e13b7eda6c66e8f410105a89e346263.NginxMailingListEnglish@forum.nginx.org> Message-ID: <865946fad4c1453a2dc2d6b18d53bbee.NginxMailingListEnglish@forum.nginx.org> Yes you can. Just "include conf/share_config.conf;" ==> I created conf files with the common locations, however after including them Nginx gives an error saying "'location' directive is not allowed here". Is there something I am missing? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231295,231303#msg-231303 From nginx-forum at nginx.us Tue Oct 2 05:03:10 2012 From: nginx-forum at nginx.us (hsrmmr) Date: Tue, 02 Oct 2012 01:03:10 -0400 Subject: Nginx to server secure and not secure traffic In-Reply-To: <865946fad4c1453a2dc2d6b18d53bbee.NginxMailingListEnglish@forum.nginx.org> References: <7106f602d66263a9c720db9ad5cdbeec.NginxMailingListEnglish@forum.nginx.org> <5e13b7eda6c66e8f410105a89e346263.NginxMailingListEnglish@forum.nginx.org> <865946fad4c1453a2dc2d6b18d53bbee.NginxMailingListEnglish@forum.nginx.org> Message-ID: Include common_rules.conf works properly now. The problem was that the .conf file was also getting included in the 'http' context, so it was complaining about the location directive being invalid. Thanks!
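The include pattern that resolved this thread can be sketched as follows (a minimal sketch, not the poster's actual config; the file path, certificate names, and the PHP-FPM address are assumptions). The shared file contains only location-level directives, so it must be included from inside each server block, never at the http level:

```nginx
# /etc/nginx/common_rules.conf -- location blocks only, so this file
# may be included only from within a server{} context
location / {
    try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;   # assumed PHP-FPM address
}

# nginx.conf (http context): both virtual hosts share the same rules
server {
    listen 443 ssl;
    server_name secure.xyz.com;
    ssl_certificate     secure.xyz.com.crt;
    ssl_certificate_key secure.xyz.com.key;
    include /etc/nginx/common_rules.conf;
}
server {
    listen 80;
    server_name xyz.com;
    include /etc/nginx/common_rules.conf;
}
```

Including the same file at the http level as well produces exactly the error seen above, because location is not valid in the http context.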
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231295,231304#msg-231304 From nginx-forum at nginx.us Tue Oct 2 12:01:32 2012 From: nginx-forum at nginx.us (w00t) Date: Tue, 02 Oct 2012 08:01:32 -0400 Subject: X-Accel-Redirect testing In-Reply-To: <876271noku.wl%appa@perusio.net> References: <876271noku.wl%appa@perusio.net> Message-ID: <753148b4d27d494d8ad39236336ebbd7.NginxMailingListEnglish@forum.nginx.org> Ant?nio P. P. Almeida Wrote: ------------------------------------------------------- > XSendfile in Nginx is called X-Accel-Redirect. > > The way X-Accel-Redirect works is that your app/backend sends a > "special" header with the location of the file that is to be served by > Nginx. Usually the application case is for speeding up the delivery of > protected/private files. Let's say you have a ecommerce site that > sells digital downloads. Then these files will be protected and for > speeding up the delivery your application sends an X-Accel-Redirect > header with the location of the file. Then the file is delivered by > Nginx as a regular static file. > > Example: I request to download: > > http://myshop.com/downloads/big_hit.flac > > The file is stocked at /path/to/the/flac_files/big_hit.flac > > your ecommerce sends an header: > > X-Accel-Redirect: /path/to/the/flac_files/big_hit.flac > > when Nginx sees this he knows that now the file is to be served > directly from the file system as a regular static file. Note that the > real location is hidden from the client. > > Note that the real location must be protected from direct access by > the client always, be it from your app, be it from your server, for > example using the internal keyword: > > http://nginx.org/en/docs/http/ngx_http_core_module.html#internal > > In a nutshell that's how it works. I ended up using nginx as a reverse proxy, although this is what I wanted to avoid in the first place (bandwidth reasons). 
Main traffic passes through nginx which forwards it to apache by proxy_pass, and apache sends back the X-Accel-Redirect and nginx responds with the file (protected with internal; ). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231059,231311#msg-231311 From appa at perusio.net Tue Oct 2 12:19:33 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Tue, 02 Oct 2012 14:19:33 +0200 Subject: X-Accel-Redirect testing In-Reply-To: <753148b4d27d494d8ad39236336ebbd7.NginxMailingListEnglish@forum.nginx.org> References: <876271noku.wl%appa@perusio.net> <753148b4d27d494d8ad39236336ebbd7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <87391xukve.wl%appa@perusio.net> On 2 Out 2012 14h01 CEST, nginx-forum at nginx.us wrote: > Ant?nio P. P. Almeida Wrote: > ------------------------------------------------------- > >> XSendfile in Nginx is called X-Accel-Redirect. >> >> The way X-Accel-Redirect works is that your app/backend sends a >> "special" header with the location of the file that is to be served >> by Nginx. Usually the application case is for speeding up the >> delivery of protected/private files. Let's say you have a ecommerce >> site that sells digital downloads. Then these files will be >> protected and for speeding up the delivery your application sends >> an X-Accel-Redirect header with the location of the file. Then the >> file is delivered by Nginx as a regular static file. >> >> Example: I request to download: >> >> http://myshop.com/downloads/big_hit.flac >> >> The file is stocked at /path/to/the/flac_files/big_hit.flac >> >> your ecommerce sends an header: >> >> X-Accel-Redirect: /path/to/the/flac_files/big_hit.flac >> >> when Nginx sees this he knows that now the file is to be served >> directly from the file system as a regular static file. Note that >> the real location is hidden from the client. 
>> >> Note that the real location must be protected from direct access by >> the client always, be it from your app, be it from your server, for >> example using the internal keyword: >> >> http://nginx.org/en/docs/http/ngx_http_core_module.html#internal >> >> In a nutshell that's how it works. > > I ended up using nginx as a reverse proxy, although this is what I > wanted to avoid in the first place (bandwidth reasons). Main > traffic passes through nginx which forwards it to apache by > proxy_pass, and apache sends back the X-Accel-Redirect and nginx > responds with the file (protected with internal; ). Instead of Apache you can make your app return that header. For example, on drupal there's a module that sends that header for protected file systems. --- appa > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,231059,231311#msg-231311 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From gautier.difolco at gmail.com Tue Oct 2 13:29:54 2012 From: gautier.difolco at gmail.com (Gautier DI FOLCO) Date: Tue, 2 Oct 2012 15:29:54 +0200 Subject: fastcgi and $document_uri Message-ID: Hi all, I'm trying to setup nginx as bellow: location ~ ^/git(.*)$ { include /etc/nginx/fastcgi.conf; fastcgi_param PATH_INFO $1; fastcgi_pass 127.0.0.1:8010; } For information, /etc/nginx/fastcgi.conf: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; 
fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; It works well; for example, for /git/repo1.git, the PATH_INFO is /repo1.git. I think using a regex is slow, so I tried this: location /git { include /etc/nginx/fastcgi.conf; fastcgi_param PATH_INFO $document_uri; fastcgi_pass 127.0.0.1:8010; } But /git/repo1.git gives me /git/repo1.git instead of the expected /repo1.git. Why? Is there a way to avoid using a regex, or to make it faster? Thanks in advance for your help. From nginx-forum at nginx.us Tue Oct 2 13:44:36 2012 From: nginx-forum at nginx.us (samueleast) Date: Tue, 02 Oct 2012 09:44:36 -0400 Subject: 504 Gateway Time-out media temple Message-ID: <360b4050d6f229330584a8c34693db21.NginxMailingListEnglish@forum.nginx.org> I am constantly getting 504 gateway errors when my php script needs to run for longer than 60 secs. I am on media temple on a dedicated server. I have contacted media temple and they have been helpful but none of their suggestions seem to work for me. I was told to edit this file, /etc/httpd/conf.d/fcgid.conf, which I have changed to the below: LoadModule fcgid_module modules/mod_fcgid.so AddHandler fcgid-script fcg fcgi fpl FcgidIPCDir /var/run/mod_fcgid/sock FcgidProcessTableFile /var/run/mod_fcgid/fcgid_shm FcgidIdleTimeout 300 FcgidMaxRequestLen 1073741824 FcgidProcessLifeTime 10000 FcgidMaxProcesses 64 FcgidMaxProcessesPerClass 15 FcgidMinProcessesPerClass 0 FcgidConnectTimeout 600 FcgidIOTimeout 600 FcgidInitialEnv RAILS_ENV production FcgidIdleScanInterval 600 So I have tried to raise everything as much as I can. To test this I am just running the function below. function test504(){ @set_time_limit(0); sleep(60); echo "true"; } sleep() works for any value below 60 seconds, returning true, but at 60 I get the 504 gateway error.
my phpinfo(); outputs max_execution_time 600 max_input_time 180 I have seen a few posts on increasing fastcgi_connect_timeout but have no idea where to find this on media temple. Can anyone help? Thanks. UPDATE: STILL CAN'T FIX THIS. After chatting with support I have been told I need to edit nginx.conf, and was directed to this post: http://blog.secaserver.com/2011/10/nginx-gateway-time-out/ I can't find any of the values on my hosting: client_header_timeout client_body_timeout send_timeout fastcgi_read_timeout My nginx.conf file looks like this: #error_log /var/log/nginx/error.log info; #pid /var/run/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 120; #tcp_nodelay on; #gzip on; #gzip_disable "MSIE [1-6]\.(?!.*SV1)"; server_tokens off; include /etc/nginx/conf.d/*.conf; } This is driving me crazy, any suggestions? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231318,231318#msg-231318 From marcin.deranek at booking.com Tue Oct 2 13:44:38 2012 From: marcin.deranek at booking.com (Marcin Deranek) Date: Tue, 2 Oct 2012 15:44:38 +0200 Subject: nginx & upstream_response_time Message-ID: <20121002154438.4f1b3690@booking.com> Hi, Currently I'm running nginx 1.2.4 with a uwsgi backend. To my understanding $upstream_response_time should represent the time taken to deliver content by the upstream (in my case the uwsgi backend). It looks like that's not the case for me. uwsgi specific snippet: server { ... uwsgi_buffering off; uwsgi_buffer_size 1M; uwsgi_buffers 2 1M; uwsgi_busy_buffers_size 1M; location / { uwsgi_pass unix:/var/run/uwsgi/uwsgi.sock; include uwsgi_params; uwsgi_modifier1 5; } } On backend I'm running uWSGI with 1 worker only.
Backend app (PSGI) generates 900kB output then waits for 10s and finishes response: my $app = sub { my $env = shift; return sub { my $respond = shift; my $writer = $respond->([200, ['Content-Type', 'text/html']]); for (1..900) { my $dt = localtime; $writer->write("[ $dt ]: " . "x" x 1024 . "\n"); } sleep 10; $writer->close(); }; }; When I use "slow" client connecting to nginx (eg. socat TCP:127.0.0.1:80,rcvbuf=128 STDIO) I can see the following hapening: Backend server gets busy only for ~10s (this is what I expect). If I issue 2 concurrent requests one is served immediately and 2nd one after ~10s. This behaviour would indicate that backend was able to deliver content in ~10s (whole response was buffered as buffer size is big enough to accommodate full response and we have only 1 worked at the backend). Unfortunately access log disagrees with that as it makes $upstream_response_time almost equal to $request_time (eg. ~1000s vs ~10s of expected). Is this an expected behaviour ? Regards, Marcin From mdounin at mdounin.ru Tue Oct 2 13:58:39 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Oct 2012 17:58:39 +0400 Subject: nginx-1.3.7 Message-ID: <20121002135838.GE40452@mdounin.ru> Changes with nginx 1.3.7 02 Oct 2012 *) Feature: OCSP stapling support. Thanks to Comodo, DigiCert and GlobalSign for sponsoring this work. *) Feature: the "ssl_trusted_certificate" directive. *) Feature: resolver now randomly rotates addresses returned from cache. Thanks to Anton Jouline. *) Bugfix: OpenSSL 0.9.7 compatibility. -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Tue Oct 2 14:06:08 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Oct 2012 18:06:08 +0400 Subject: 504 Gateway Time-out media temple In-Reply-To: <360b4050d6f229330584a8c34693db21.NginxMailingListEnglish@forum.nginx.org> References: <360b4050d6f229330584a8c34693db21.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121002140607.GI40452@mdounin.ru> Hello! 
On Tue, Oct 02, 2012 at 09:44:36AM -0400, samueleast wrote: > I am constantly getting 504 gateway errors when my php script needs to run > for longer than 60 secs. Depending on module you use to pass request to backends, i.e. proxy_pass or fastcgi_pass in your nginx config, you should tune either proxy_read_timeout or fastcgi_read_timeout. Both defaults to 60s. See here for more details: http://nginx.org/r/proxy_read_timeout http://nginx.org/r/fastcgi_read_timeout [...] -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Tue Oct 2 14:30:07 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Oct 2012 18:30:07 +0400 Subject: nginx & upstream_response_time In-Reply-To: <20121002154438.4f1b3690@booking.com> References: <20121002154438.4f1b3690@booking.com> Message-ID: <20121002143007.GK40452@mdounin.ru> Hello! On Tue, Oct 02, 2012 at 03:44:38PM +0200, Marcin Deranek wrote: > Hi, > > Currently I'm running nginx 1.2.4 with uwsgi backend. To my > understanding $upstream_response_time should represent time taken > to deliver content by upsteam (in my case uwsgi backend). It looks like > it't not the case for myself. > > uwsgi specific snippet: > > server { > ... > uwsgi_buffering off; [...] > When I use "slow" client connecting to nginx (eg. socat > TCP:127.0.0.1:80,rcvbuf=128 STDIO) I can see the following hapening: > Backend server gets busy only for ~10s (this is what I expect). If I > issue 2 concurrent requests one is served immediately and 2nd one after > ~10s. This behaviour would indicate that backend was able to deliver > content in ~10s (whole response was buffered as buffer size is big > enough to accommodate full response and we have only 1 worked at the > backend). Unfortunately access log disagrees with that as it makes > $upstream_response_time almost equal to $request_time (eg. ~1000s vs > ~10s of expected). Is this an expected behaviour ? 
You asked nginx to work in unbuffered mode, and in this mode nginx doesn't pay much attention to what happens with the backend connection if it isn't able to write the data it already has to the client. In particular it won't detect a connection close by the backend (and stop counting $upstream_response_time). This could probably be somewhat enhanced, but if you care about $upstream_response_time it most likely means you don't need "uwsgi_buffering off", and vice versa. -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Tue Oct 2 14:56:58 2012 From: nginx-forum at nginx.us (samueleast) Date: Tue, 02 Oct 2012 10:56:58 -0400 Subject: 504 Gateway Time-out media temple In-Reply-To: <20121002140607.GI40452@mdounin.ru> References: <20121002140607.GI40452@mdounin.ru> Message-ID: <420eb49d8fd5335b62a56ecb648be953.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, thanks for the reply. I can't find where these files and variables are on my hosting to edit. I have checked all the files in my nginx folder and I can't see those options in there anywhere. Will there just be a file with proxy_pass fastcgi_pass proxy_read_timeout fastcgi_read_timeout? I tried editing the nginx.conf file to http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 120; #tcp_nodelay on; #gzip on; #gzip_disable "MSIE [1-6]\.(?!.*SV1)"; server_tokens off; server { location / { fastcgi_read_timeout 900s; # 15 minutes } } include /etc/nginx/conf.d/*.conf; } Am I doing this correctly?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231318,231326#msg-231326 From marcin.deranek at booking.com Tue Oct 2 14:59:16 2012 From: marcin.deranek at booking.com (Marcin Deranek) Date: Tue, 2 Oct 2012 16:59:16 +0200 Subject: nginx & upstream_response_time In-Reply-To: <20121002143007.GK40452@mdounin.ru> References: <20121002154438.4f1b3690@booking.com> <20121002143007.GK40452@mdounin.ru> Message-ID: <20121002165916.2b9d1544@booking.com> Hi Maxim, On Tue, 2 Oct 2012 18:30:07 +0400 Maxim Dounin wrote: > You asked nginx to work in unbuffered mode, and in this mode > doesn't pay much attention to what happens with backend connection > if it isn't able to write data it already has to a client. In > particular it won't detect connection close by the backend (and > stop counting $upstream_response_time). I see. > This is probably could be somewhat enhanced, but if you care about > $upstream_response_time it most likely means you don't need > "uwsgi_buffering off" and vice versa. Well, I don't completely agree. As it was explained at http://forum.nginx.org/read.php?2,193347,193446#msg-193446 some sort of buffering still takes place even though we work in unbuffered mode. The only reason why we went for unbuffered mode is latency. Regards, Marcin From mdounin at mdounin.ru Tue Oct 2 15:11:58 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Oct 2012 19:11:58 +0400 Subject: 504 Gateway Time-out media temple In-Reply-To: <420eb49d8fd5335b62a56ecb648be953.NginxMailingListEnglish@forum.nginx.org> References: <20121002140607.GI40452@mdounin.ru> <420eb49d8fd5335b62a56ecb648be953.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121002151157.GL40452@mdounin.ru> Hello! On Tue, Oct 02, 2012 at 10:56:58AM -0400, samueleast wrote: > hi maxim thanks for the reply > > i cant find out where these files and variables are on my hosting to edit i > have checked all files in my nginx folder and i cant see those options in > there anywhere. 
> > will there just be a file with > > proxy_pass > fastcgi_pass > proxy_read_timeout > fastcgi_read_timeout. > > i tried editing the nginx.conf file to > > http { > include mime.types; > default_type application/octet-stream; > > #log_format main '$remote_addr - $remote_user [$time_local] "$request" > ' > # '$status $body_bytes_sent "$http_referer" ' > # '"$http_user_agent" "$http_x_forwarded_for"'; > > #access_log /var/log/nginx/access.log main; > > sendfile on; > #tcp_nopush on; > #keepalive_timeout 0; > keepalive_timeout 120; > #tcp_nodelay on; > > #gzip on; > #gzip_disable "MSIE [1-6]\.(?!.*SV1)"; > > server_tokens off; > > server { > location / { > fastcgi_read_timeout 900s; # 15 minutes > } > } > include /etc/nginx/conf.d/*.conf; > } > > am i doing this correctly? The "include /etc/nginx/conf.d/*.conf;" in your config suggests servers are configured somewhere in /etc/nginx/conf.d/. The way you added fastcgi_read_timeout won't work as it affects only the server{} block you've added. You may try adding fastcgi_read_timeout and/or proxy_read_timeout at http level instead, i.e. http { ... fastcgi_read_timeout 900s; proxy_read_timeout 900s; include /etc/nginx/conf.d/*.conf; } This will work as long as they aren't redefined in your server configs in /etc/nginx/conf.d/. 
-- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Tue Oct 2 15:23:41 2012 From: nginx-forum at nginx.us (samueleast) Date: Tue, 02 Oct 2012 11:23:41 -0400 Subject: 504 Gateway Time-out media temple In-Reply-To: <20121002151157.GL40452@mdounin.ru> References: <20121002151157.GL40452@mdounin.ru> Message-ID: YOU ARE AMAZING this has been doing my head in for nearly to days i have hounded media temple took about 20mins on this forum, thank you so much ;) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231318,231331#msg-231331 From wrr at mixedbit.org Tue Oct 2 17:12:44 2012 From: wrr at mixedbit.org (Jan Wrobel) Date: Tue, 2 Oct 2012 19:12:44 +0200 Subject: Link to the latest stable release of nginx. In-Reply-To: <20121001104718.GU40452@mdounin.ru> References: <20121001104718.GU40452@mdounin.ru> Message-ID: On Mon, Oct 1, 2012 at 12:47 PM, Maxim Dounin wrote: > Hello! > > On Fri, Sep 28, 2012 at 02:14:42PM +0200, Jan Wrobel wrote: > >> Hi, >> >> Can you put a symbolic link in http://nginx.org/download/ that would >> point to the latest stable release of nginx and that would be updated >> with each release? >> >> Something like: http://nginx.org/download/nginx-current.tar.gz? >> >> It would help with automating the process of compiling the latest nginx. >> >> Also, there is a lot of documentation on the web that explains how to >> get and compile nginx from sources and all these docs become obsolete >> over time because they point to the nginx that was available during >> the time of writing the doc. > > I don't like this aproach as it makes impossible to determine > version from a downloaded file name, and makes it impossible to > determine what will be overridden on archive extraction without > looking into archive contents. And one would need a version > number for automated processing anyway, as it's embedded into > directory structure in release archives. 
You would still be able to download the archive with the version number (the current way). Here is how you can use such link for automated processing: download nginx-current.tar.gz to an empty directory, extract it (nothing is overridden, the directory is empty), cd to nginx* (the only operation that uses the version number); compile and install (no version number needed) > Various "latest" nginx versions are listed here: > > http://nginx.org/en/download.html > > It's believed to be enough for most uses. Except for automation. Jan From kworthington at gmail.com Tue Oct 2 18:00:57 2012 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 2 Oct 2012 14:00:57 -0400 Subject: nginx-1.3.7 In-Reply-To: <20121002135838.GE40452@mdounin.ru> References: <20121002135838.GE40452@mdounin.ru> Message-ID: Hello Nginx Users, Now available: Nginx 1.3.7 For Windows http://goo.gl/G6PYX (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream (http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Oct 2, 2012 at 9:58 AM, Maxim Dounin wrote: > Changes with nginx 1.3.7 02 Oct 2012 > > *) Feature: OCSP stapling support. > Thanks to Comodo, DigiCert and GlobalSign for sponsoring this work. > > *) Feature: the "ssl_trusted_certificate" directive. > > *) Feature: resolver now randomly rotates addresses returned from cache. > Thanks to Anton Jouline. > > *) Bugfix: OpenSSL 0.9.7 compatibility. 
> > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Tue Oct 2 20:44:00 2012 From: nginx-forum at nginx.us (jwxie) Date: Tue, 02 Oct 2012 16:44:00 -0400 Subject: Nginx https rewrite turns POST to GET Message-ID: <0c11badd65ce6648f74f9c734853bc0a.NginxMailingListEnglish@forum.nginx.org> My proxy server runs on ip A and this is how people access my web service. The nginx configuration will redirect to a virtual machine on ip B. For the proxy server on IP A, I have this in my sites-available server { listen 443; ssl on; ssl_certificate nginx.pem; ssl_certificate_key nginx.key; client_max_body_size 200M; server_name localhost 127.0.0.1; server_name_in_redirect off; location / { proxy_pass http://10.10.0.59:80; proxy_redirect http://10.10.0.59:80/ /; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } server { listen 80; rewrite ^(.*) https://$http_host$1 permanent; server_name localhost 127.0.0.1; server_name_in_redirect off; location / { proxy_pass http://10.10.0.59:80; proxy_redirect http://10.10.0.59:80/ /; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } The proxy_redirect was taken from how do I get nginx to forward HTTP POST requests via rewrite? Everything that hits the public IP will hit 443 because of the rewrite. Internally, we are forwarding to 80 on the virtual machine. But when I run a python script such as the one below to test our configuration import requests data = {'username': '....', 'password': '.....'} url = 'http://IP_A/api/service/signup' res = requests.post(url, data=data, verify=False) print res print res.json print res.status_code print res.headers I am getting a 405 Method Not Allowed. 
In nginx we found that when it hit the internal server, the internal nginx was getting a GET request, even though in the original header we did a POST (this was shown in the Python script). So it seems like the rewrite has a problem. Any idea how to fix this? When I commented out the rewrite, it hit port 80 for sure, and the request went through. Since the rewrite was able to talk to our internal server, the rewrite itself has no connectivity issue; it just dropped the POST to a GET. Thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231339,231339#msg-231339 From contact at jpluscplusm.com Tue Oct 2 21:09:25 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 2 Oct 2012 22:09:25 +0100 Subject: Nginx https rewrite turns POST to GET In-Reply-To: <0c11badd65ce6648f74f9c734853bc0a.NginxMailingListEnglish@forum.nginx.org> References: <0c11badd65ce6648f74f9c734853bc0a.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Oct 2, 2012 9:44 PM, "jwxie" wrote: > > My proxy server runs on ip A and this is how people access my web service. > The nginx configuration will redirect to a virtual machine on ip B.
> > For the proxy server on IP A, I have this in my sites-available > > server { > listen 443; > ssl on; > ssl_certificate nginx.pem; > ssl_certificate_key nginx.key; > > client_max_body_size 200M; > server_name localhost 127.0.0.1; > server_name_in_redirect off; > > location / { > proxy_pass http://10.10.0.59:80; > proxy_redirect http://10.10.0.59:80/ /; > > proxy_set_header Host $http_host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > } > > } > > server { > listen 80; > rewrite ^(.*) https://$http_host$1 permanent; > server_name localhost 127.0.0.1; > server_name_in_redirect off; > location / { > proxy_pass http://10.10.0.59:80; > proxy_redirect http://10.10.0.59:80/ /; > proxy_set_header Host $http_host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > } > } > > The proxy_redirect was taken from how do I get nginx to forward HTTP POST > requests via rewrite? > > Everything that hits the public IP will hit 443 because of the rewrite. > Internally, we are forwarding to 80 on the virtual machine. > > But when I run a python script such as the one below to test our > configuration > > import requests > > data = {'username': '....', 'password': '.....'} > url = 'http://IP_A/api/service/signup' > > res = requests.post(url, data=data, verify=False) > print res > print res.json > print res.status_code > print res.headers > > I am getting a 405 Method Not Allowed. In nginx we found that when it hit > the internal server, the internal nginx was getting a GET request, even > though in the original header we did a POST (this was shown in the Python > script). > > So it seems like rewrite has problem. Rewrite has no problem. It doesn't dictate a verb that the client should use. The user-agent you're using is choosing to do this, possibly correctly, upon receipt of the 301 response. 
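Jonathan's point (it is the client, not the rewrite, that chooses the verb for the replayed request) can be reproduced with nothing but the Python standard library. In this sketch the `/old` and `/new` paths are invented for illustration: a tiny server answers POST /old with a 301 pointing at /new, and urllib's default redirect handling reissues the request as a GET.

```python
import http.server
import threading
import urllib.request

seen = []  # (method, path) for every request the server receives

class Handler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        seen.append(("POST", self.path))
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(301)               # permanent redirect, like "rewrite ... permanent"
        self.send_header("Location", "/new")
        self.send_header("Content-Length", "0")
        self.end_headers()

    def do_GET(self):
        seen.append(("GET", self.path))
        payload = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):             # keep the demo quiet
        pass

srv = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=srv.serve_forever, daemon=True).start()

# POST to /old; urllib follows the 301 and, per long-standing browser
# behavior, replays it as a GET without the request body.
url = "http://127.0.0.1:%d/old" % srv.server_address[1]
body = urllib.request.urlopen(url, data=b"username=x").read()
srv.shutdown()

print(seen)  # [('POST', '/old'), ('GET', '/new')]
```

curl does the same by default; its --post301/--post302 options exist precisely to keep the method across such redirects, and 307 was defined to make method-preserving redirects explicit.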
Have a read of the 301, 302, 303, 307 and 308 sections here for more information: http://en.wikipedia.org/wiki/List_of_HTTP_status_codes#3xx_Redirection Your options appear to be to use an experimental response code, or to enforce that clients POST to the correct port. I'd choose the latter if at all possible. Jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Oct 2 21:13:48 2012 From: nginx-forum at nginx.us (jwxie) Date: Tue, 02 Oct 2012 17:13:48 -0400 Subject: Nginx https rewrite turns POST to GET In-Reply-To: References: Message-ID: <38fec28d60d93845399e3058c20f67a3.NginxMailingListEnglish@forum.nginx.org> Hi Jonathan, Thanks for helping! It's a critical blocker. I understand the HTTP spec there. To make it short, how do I make the client POST (and eventually PUT, DELETE, GET) correctly? I mean, this is not an unusual situation. Pretty sure many production servers are running in a similar config (user hits a public ip on 443 and the request is then forwarded to an internal server). If you are interested: without the first layer (the public proxy), using the same configuration directly on the internal server and talking to that server directly works fine. In other words, making that server public works fine even with the rewrite rule. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231339,231341#msg-231341 From nginx-forum at nginx.us Tue Oct 2 22:07:39 2012 From: nginx-forum at nginx.us (jwxie) Date: Tue, 02 Oct 2012 18:07:39 -0400 Subject: Nginx https rewrite turns POST to GET In-Reply-To: References: Message-ID: <07d0b4abbf64edaa44a9a2dcf2eaa970.NginxMailingListEnglish@forum.nginx.org> I figured I was very wrong from the beginning. If a user hits http it would be a GET by nature, and hence forced to redirect to a viewable page that runs on HTTPS. You are right. My script would work in the test machine only because I did something special. Ah.
Thanks. Forget about this :P Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231339,231342#msg-231342 From nginx-forum at nginx.us Wed Oct 3 00:42:57 2012 From: nginx-forum at nginx.us (passani) Date: Tue, 02 Oct 2012 20:42:57 -0400 Subject: Mobile browser detection at server level In-Reply-To: <28646c8afe0455c277b2e3a8a7b0c004.squirrel@www.digitalhit.com> References: <28646c8afe0455c277b2e3a8a7b0c004.squirrel@www.digitalhit.com> Message-ID: Hi, my name is Luca Passani and I am the WURFL creator. Pardon my jumping into an old thread, but the NGINX module that ScientiaMobile (my company) released today nicely addresses the original poster's question. More information is available here: http://scientiamobile.com/blog/post/view/id/25/title/HTTP-and-Mobile%3A-The-Missing-Header- Thanks Luca Passani CTO @ScientiaMobile Posted at Nginx Forum: http://forum.nginx.org/read.php?2,183827,231343#msg-231343 From nginx-forum at nginx.us Wed Oct 3 01:13:29 2012 From: nginx-forum at nginx.us (sdee) Date: Tue, 02 Oct 2012 21:13:29 -0400 Subject: mod_zip usage - invalid file list from upstream In-Reply-To: <98355537628a1a187a7730ba51df8849.NginxMailingListEnglish@forum.nginx.org> References: <98355537628a1a187a7730ba51df8849.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1507b6c37da7ef2f515425ea7e063b08.NginxMailingListEnglish@forum.nginx.org> I have now solved this issue, so for anyone else's benefit, the problem was due to the issue reported in the link below. http://code.google.com/p/mod-zip/issues/detail?id=5 Basically, if you use a browser (Chrome/Firefox etc.) it sets the "Accept-Encoding: gzip,deflate,sdch" HTTP header, which nginx forwards when proxying to the upstream Apache server that is generating the zip file list in PHP. This causes Apache to return the zip file list gzip-encoded, so mod_zip can't read it. When testing with curl or wget it works without issue, as these command-line tools don't set an "Accept-Encoding" HTTP header.
To fix this problem I added the proxy_set_header option to the nginx config file, as seen below.

location ~ \.php$ {
    proxy_pass http://74.112.172.198:82;
    proxy_redirect off;
    proxy_set_header Accept-Encoding identity;
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230510,231345#msg-231345 From lists at ruby-forum.com Wed Oct 3 08:27:35 2012 From: lists at ruby-forum.com (Brian F.) Date: Wed, 03 Oct 2012 10:27:35 +0200 Subject: nginx behind load balancer In-Reply-To: References: <20111121123822.GW95664@mdounin.ru> <20111121124602.GX95664@mdounin.ru> <20111125171440.GX95664@mdounin.ru> Message-ID: <5598a8b8aa0b4bbc946a9913c582ac30@ruby-forum.com> Rami Essaid wrote in post #1034227: > Thanks for the note and the clever workaround. We were able to tweak > this > to work, but then it still left a lot of the other functions we were > using such as deny/allow, limit_con, etc. not working. Instead we went > back > to amazon and it turns out they were able to correct the behavior of > their > load balancer. > > I wanted to report back on the performance of putting nginx behind an > elb. > We compared elb to haproxy and on amazon's cloud we got better > performance > through the elb than we did an haproxy instance. There is minimal impact > on > end user performance for adding this extra step. We did this for > redundancy to allow us to automatically fail over to another zone if the > current zone or instance go down. > > Happy to answer any questions about the setup. > > Rami Hi Rami, We are looking to implement this on our web servers - did you have to change the default configuration of the ELB? Would you be able to copy and paste the final configuration of nginx that works? Your help is much appreciated, Brian -- Posted via http://www.ruby-forum.com/.
From andrew at nginx.com Wed Oct 3 10:10:26 2012 From: andrew at nginx.com (Andrew Alexeev) Date: Wed, 3 Oct 2012 14:10:26 +0400 Subject: If-Modified-Since to upstream servers In-Reply-To: <20121001155146.ce12af3c99c61d93e8b74db7@nn.iij4u.or.jp> References: <20121001155146.ce12af3c99c61d93e8b74db7@nn.iij4u.or.jp> Message-ID: <17AB315F-0184-4350-9AEF-5EB5DC02BB73@nginx.com> Probably to appear around Jan 2013, maybe sooner. On Oct 1, 2012, at 10:51 AM, shin fukuda wrote: > Hi. > > I'd like to know status of "1.3 Planned If-Modified-Since to upstream servers". > > Thank you. > > > -- > shin fukuda > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From markus.jelsma at openindex.io Wed Oct 3 11:50:17 2012 From: markus.jelsma at openindex.io (Markus Jelsma) Date: Wed, 3 Oct 2012 11:50:17 +0000 Subject: Flush log buffers on reload Message-ID: Hi, It seems that Nginx does not flush the log buffers to disk on reload, but it would be really helpful if it could. Is there any trick to flush the buffers to disk on reload? Thanks, Markus From mdounin at mdounin.ru Wed Oct 3 12:37:01 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 3 Oct 2012 16:37:01 +0400 Subject: Flush log buffers on reload In-Reply-To: References: Message-ID: <20121003123701.GV40452@mdounin.ru> Hello! On Wed, Oct 03, 2012 at 11:50:17AM +0000, Markus Jelsma wrote: > It seems that Nginx does not flush the log buffers to disk on > reload but it would be really helpful if it could. Is there any > trick to flush the buffers to disk on reload? Log buffers are flushed by a shutting-down nginx worker process once there are no more requests in progress and it's going to exit. If that's not enough in your use case, you may use the logrotate signal (USR1) to flush log buffers at any time you want.
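The signal-driven flush Maxim describes can be sketched in plain Python (POSIX-only; this toy is not nginx internals, it just shows a buffered log being flushed when USR1 arrives, with a temp file standing in for the access log):

```python
import os
import signal
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "access.log")
log = open(log_path, "w", buffering=1024 * 64)  # large buffer: entries sit in memory

def on_usr1(signum, frame):
    log.flush()  # push buffered entries to disk, the way nginx does on USR1

signal.signal(signal.SIGUSR1, on_usr1)

log.write("GET / HTTP/1.1 200\n")      # buffered write, not on disk yet
size_before = os.path.getsize(log_path)

os.kill(os.getpid(), signal.SIGUSR1)   # deliver the signal to ourselves
size_after = os.path.getsize(log_path)

print(size_before, size_after)         # 0 before the signal, >0 after
```

Against a running nginx the equivalent is along the lines of `kill -USR1 $(cat /var/run/nginx.pid)`; the pid file location depends on your build and configuration.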
-- Maxim Dounin http://nginx.com/support.html From markus.jelsma at openindex.io Wed Oct 3 12:45:55 2012 From: markus.jelsma at openindex.io (Markus Jelsma) Date: Wed, 3 Oct 2012 12:45:55 +0000 Subject: Flush log buffers on reload In-Reply-To: <20121003123701.GV40452@mdounin.ru> References: <20121003123701.GV40452@mdounin.ru> Message-ID: Thanks! -----Original message----- > From: Maxim Dounin > Sent: Wed 03-Oct-2012 14:41 > To: nginx at nginx.org > Subject: Re: Flush log buffers on reload > > Hello! > > On Wed, Oct 03, 2012 at 11:50:17AM +0000, Markus Jelsma wrote: > > > It seems that Nginx does not flush the log buffers to disk on > > reload but it would be really helpful if it could. Is there any > > trick to flush the buffers to disk on reload? > > Log buffers are flushed by a shutting down nginx worker process > once there are no more requests in progress and it's going to > exit. If it's not enough in your use case, you may use the logrotate > signal (USR1) to flush log buffers at any time you want. > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From cmfileds at gmail.com Wed Oct 3 20:25:47 2012 From: cmfileds at gmail.com (CM Fields) Date: Wed, 3 Oct 2012 16:25:47 -0400 Subject: OCSP response: no response sent Message-ID: I am trying to get OCSP Stapling working in Nginx 1.3.7 with SPDY patch.spdy-52.txt built against OpenSSL 1.0.1c. SSL and SPDY connections to the server work fine. Let me explain what I have done so far; perhaps someone can point me in the right direction or spot where I have made a mistake. The OCSP section of the nginx.conf under the SSL config looks like this. The full certificate chain is in the "ssl_certificate /ssl_keys/domain_ssl.crt" file and clients connect without issue.
## SSL Certs
ssl on;
ssl_session_cache shared:SSL:10m;
ssl_certificate /ssl_keys/domain_ssl.crt;
ssl_certificate_key /ssl_keys/domain_ssl.key;
ssl_ecdh_curve secp521r1;

## OCSP Stapling
resolver 127.0.0.1;
ssl_stapling on;
#ssl_stapling_verify on;
ssl_stapling_file /ssl_keys/domain.staple;
#ssl_trusted_certificate /ssl_keys/domain_issuer.crt;
#ssl_stapling_responder http://ocsp.comodoca.com;

According to the Nginx documentation I need to make a DER file for the "ssl_stapling_file" directive in order to send out the OCSP stapling response as part of the first connection. The domain.staple file was made like so. Special thanks to the group over at https://calomel.org/nginx.html for getting me this far and allowing me to use their server for testing against.

# Collect all the certificates and put them into separate files: level0 is the domain cert, level1 the certificate authority, and level2 the root over the CA.
openssl s_client -showcerts -connect calomel.org:443 < /dev/null | awk -v c=-1 '/-----BEGIN CERTIFICATE-----/{inc=1;c++} inc {print > ("level" c ".crt")} /-----END CERTIFICATE-----/{inc=0}'

# Look at the certificates and check that they are in the correct format.
for i in level?.crt; do openssl x509 -noout -serial -subject -issuer -in "$i"; echo; done

# Put all of the publicly available certs into a bundle.
cat level{0,1,2}.crt > CAbundle.crt

# Collect the OCSP response and make the DER domain.staple file. Make sure "Cert Status: good" and "Response verify OK".
openssl ocsp -text -no_nonce -issuer level1.crt -CAfile CAbundle.crt -cert level0.crt -VAfile level1.crt -url http://ocsp.comodoca.com -respout domain.staple

At this point I _believe_ I have done everything correctly and the domain.staple DER-formatted file is right. When I test my server with the same steps as above, but with my own domain name instead of calomel.org, I still get "OCSP response: no response sent" when I test with the openssl client.
This is the openssl client line I used for testing to see what an OCSP server response would look like. I tested two servers.

# this server's OCSP stapling response seems to work
openssl s_client -connect login.live.com:443 -tls1 -tlsextdebug -status
...
OCSP response:
======================================
OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
...

# calomel.org does not support OCSP stapling (yet) and I get the same result on my server's domain...
openssl s_client -connect calomel.org:443 -tls1 -tlsextdebug -status -CAfile /usr/lib/ssl/certs/AddTrust_External_Root.pem
...
OCSP response: no response sent
...

Sorry for the long email, but I wanted to be as clear as I could. Any help would be appreciated. Thanks! From nginx-forum at nginx.us Wed Oct 3 21:00:39 2012 From: nginx-forum at nginx.us (wurb32) Date: Wed, 03 Oct 2012 17:00:39 -0400 Subject: How to tell Proxy module to retrieve secondary URL if primary URL doenst exist Message-ID: <24cdbc7236ea260afe79fc9503a82de9.NginxMailingListEnglish@forum.nginx.org> Hi all, I am using nginx's Proxy module to retrieve images from Amazon S3. I need to be able to tell the Proxy module to retrieve a smaller-sized file if a bigger-sized file doesn't exist. How can I do that? For instance, my nginx configuration:

location / {
    proxy_pass https://s3.amazonaws.com/my_bucket/;
    error_page 415 = /empty;
}

Say I have image1_100.jpg on S3; then http://localhost/image1_100.jpg works fine. However, http://localhost/image1_300.jpg won't work. I want to be able to ask nginx to request the https://s3.amazonaws.com/my_bucket/image1_300.jpg file and, if it doesn't exist, request the image1_100.jpg file instead. Thanks!
Message-ID: Hello, I have developed this module which performs some changes to webpages. But performance is not as good as I expected, because a reverse proxy node between the client and the server introduces quite some latency (I'm seeing around a 15% drop in performance here). I believe this is because the NGINX box has to terminate all the TCP connections from the clients and make new ones to the web server. I wonder if it is possible to run NGINX over two bridged interfaces. That way, the NGINX module would change the traffic while it is bridged towards the web server, but it would not need to establish new TCP connections. Does that make sense? Thanks in advance, A. Santos Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231367,231367#msg-231367 From nginx-forum at nginx.us Thu Oct 4 04:21:56 2012 From: nginx-forum at nginx.us (otherjohn) Date: Thu, 04 Oct 2012 00:21:56 -0400 Subject: Please help with optimization Message-ID: Hello All, I am looking to optimize my nginx setup but am new to nginx and some of the settings. Currently the webserver is set to just default settings, and while using a forum import script to migrate our forum from another software package I am getting "upstream timed out (110: Connection timed out) while reading response header from upstream". Could someone give me some suggestions?

I am running an Amazon ec2 Ubuntu Precise 64 server with the following hardware setup:
7.5 GB memory
4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)
850 GB instance storage
64-bit platform
I/O Performance: High

I am running nginx 1.2.4 and PHP 5.3.10-1ubuntu3.4 (fpm-fcgi). The website will be running a forum with on average 2000 visitors at any given time, over 60k members and 4 million posts. I need help with the following settings that currently only have the defaults set:
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231369,231369#msg-231369 From stef at scaleengine.com Thu Oct 4 09:17:03 2012 From: stef at scaleengine.com (Stefan Caunter) Date: Thu, 4 Oct 2012 05:17:03 -0400 Subject: Please help with optimization In-Reply-To: References: Message-ID: Performance issues with large forum migrations are usually database i/o specific; nginx is good because it gets out of the way and sends the php responses from fpm. Standard php configuration for nginx will get it up and running, but you will want to tune php to suit the forum software, and devote most of the system resources to database, which will consume most of them. When that happens, php waits, fastcgis pile up, and nginx will hit timeouts waiting for php, which is waiting for mysql. Make sure mysql has good write performance and plenty of memory. Beware of expensive forum plugins (use slow logging to spot horrible queries that make php wait). On Thu, Oct 4, 2012 at 12:21 AM, otherjohn wrote: > Hello All, > I am looking to optimize my nginx setup but am new to nginx and some of the > settings. Currently the webserver is set to just default settings and on > using an forum import script to migrate our forum from another softward I am > getting "upstream timed out (110: Connection timed out) while reading > response header from upstream". > Could someone give me some suggestions? > > I am running an Amazon ec2 Ubuntu Precise 64 server with the folowing > hardware setup > 7.5 GB memory > 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each) > 850 GB instance storage > 64-bit platform > I/O Performance: High > > I am running nginx 1.2.4 and PHP 5.3.10-1ubuntu3.4 (fpm-fcgi) > > The website will be running a forum with on average 2000 visitors at any > given time, over 60k members and 4 million posts. > I need help with the following setttings that currently only have the > defaults set. 
> php-fpm's php.ini > php-fpm's pool configuration > nginx.conf > anything else you can suggest. > > Thanks so much! > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231369,231369#msg-231369 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Thu Oct 4 09:45:44 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 4 Oct 2012 13:45:44 +0400 Subject: How to tell Proxy module to retrieve secondary URL if primary URL doenst exist In-Reply-To: <24cdbc7236ea260afe79fc9503a82de9.NginxMailingListEnglish@forum.nginx.org> References: <24cdbc7236ea260afe79fc9503a82de9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121004094544.GA40452@mdounin.ru> Hello! On Wed, Oct 03, 2012 at 05:00:39PM -0400, wurb32 wrote: > Hi all, > > I am using nginx's Proxy module to retrieve images from Amazon S3. I need to > be able to tell the Proxy module to retrieve a smaller sized file if a > bigger sized file doesnt exist. How can I do that? > > For instance, my nginx configuration: > > location / { > proxy_pass https://s3.amazonaws.com/my_bucket/; > error_page 415 = /empty; > } > > Say I have image1_100.jpg on S3 then http://localhost/image1_100.jpg works > fine. However http://localhost/image1_300.jpg wont work. I want to be able > to ask nginx to request the > https://s3.amazonaws.com/my_bucket/image1_300.jpg file and if it doesnt > exist, request for iamge1_100.jpg file instead. Use error_page 404 and proxy_intercept_errors, see http://nginx.org/r/proxy_intercept_errors. 
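Maxim's suggestion could be sketched like this (untested; the `@smaller` named location and the `_300` to `_100` rewrite are illustrative assumptions based on the poster's file names):

```nginx
location / {
    proxy_pass https://s3.amazonaws.com/my_bucket/;
    proxy_intercept_errors on;      # let error_page see the upstream 404
    error_page 404 = @smaller;
}

location @smaller {
    # _300 variants fall back to the _100 variant; anything else is retried as-is
    rewrite ^(.*)_300\.jpg$ /my_bucket$1_100.jpg break;
    rewrite ^(.*)$ /my_bucket$1 break;
    proxy_pass https://s3.amazonaws.com;
}
```

Note the bucket is prepended in the rewrite rather than in proxy_pass, since proxy_pass inside a named location cannot carry a URI part.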
-- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Thu Oct 4 10:07:19 2012 From: nginx-forum at nginx.us (Piki) Date: Thu, 04 Oct 2012 06:07:19 -0400 Subject: nginx erroneously redirecting to https Message-ID: I am running nginx+php-fpm on localhost for testing some sites before uploading to a live server (which I have done with nginx in the past), and have run into the first issue which Google can't seem to help me with: I have decided to try three different forum softwares. They are installed under separate subdirectories within my web root (e.g. /srv/www/localhost/html/{forum1,forum2,forum3}). On two of the forums, however, whenever I try to use anything that requires a password, it switches from http to https, and on one of the forums it attempts to continue using port 80. On the forum where it attempts to use port 80, I get the following error message:

----------code snippet----------
Secure Connection Failed

An error occurred during a connection to localhost:80.
SSL received a record that exceeded the maximum permissible length.
(Error code: ssl_error_rx_record_too_long)

The page you are trying to view cannot be shown because the authenticity of the received data could not be verified. Please contact the website owners to inform them of this problem. Alternatively, use the command found in the help menu to report this broken site.
----------end code snippet----------

I do not have https or ssl configured within nginx or the vhost, nor any ssl certs.
Here is the vhost config file:

----------code snippet----------
server {
    server_name localhost;
    listen 80;
    root /srv/www/localhost/html;
    index index.php index.html index.htm;
    ssi on;

    location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_index index.php;
        fastcgi_pass 127.0.0.1:9000;
    }
}
----------end code snippet----------

This very same configuration has worked fine on a live (online) server (save for the fact that I had to use a different server_name and root, for obvious reasons), and has never before produced any issues with ssl or switching to https, and I have used it with several other forums without it producing this strange behavior. In case it matters, the two forum softwares I am having issues with are punbb and usebb. Punbb is the one that attempts https on port 80 and produces the error. Usebb doesn't attempt port 80 (it allows the browser to attempt port 443 as is default for https), which causes my browser to not be able to connect to the server since it's not even configured for port 443. The third forum I have installed (Vanilla) doesn't attempt to use 443. However, since 2/3 of the forum softwares are doing this, I have to assume that the issue is nginx, but I am a bit too stupid in the area of servers to be able to troubleshoot the issue on my own, and Google seems to think that I'm trying to use https instead of trying to avoid https (the latter is the case). I would much appreciate some help.
(I apologize for being a bit sloppy with posting the code snippets, but it seems that bbcode is disabled (at least the code tag is), which I guess I should've expected when this is joined with a mailing list) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231379,231379#msg-231379 From nginx-forum at nginx.us Thu Oct 4 11:00:13 2012 From: nginx-forum at nginx.us (Piki) Date: Thu, 04 Oct 2012 07:00:13 -0400 Subject: nginx erroneously redirecting to https In-Reply-To: References: Message-ID: <40e34082420f4cc468585fbff088c1d6.NginxMailingListEnglish@forum.nginx.org> Just coming back to post more specific info on my config (I'm usually much better about that): OS: Debian 6.0 "Squeeze" with dotdeb repository (http://www.dotdeb.org/) nginx: 1.2.4 PHP & php-fpm: 5.4.7

nginx.conf:

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##
    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##
    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

php-fpm.conf:

;;;;;;;;;;;;;;;;;;;;;
; FPM Configuration ;
;;;;;;;;;;;;;;;;;;;;; ; All relative paths in this configuration file are relative to PHP's install ; prefix (/usr). This prefix can be dynamicaly changed by using the ; '-p' argument from the command line. ; Include one or more files. If glob(3) exists, it is used to include a bunch of ; files from a glob(3) pattern. This directive can be used everywhere in the ; file. ; Relative path can also be used. They will be prefixed by: ; - the global prefix if it's been set (-p arguement) ; - /usr otherwise ;include=/etc/php5/fpm/*.conf ;;;;;;;;;;;;;;;;;; ; Global Options ; ;;;;;;;;;;;;;;;;;; [global] ; Pid file ; Note: the default prefix is /var ; Default Value: none pid = /var/run/php5-fpm.pid ; Error log file ; If it's set to "syslog", log is sent to syslogd instead of being written ; in a local file. ; Note: the default prefix is /var ; Default Value: log/php-fpm.log error_log = /var/log/php5-fpm.log ; syslog_facility is used to specify what type of program is logging the ; message. This lets syslogd specify that messages from different facilities ; will be handled differently. ; See syslog(3) for possible values (ex daemon equiv LOG_DAEMON) ; Default Value: daemon ;syslog.facility = daemon ; syslog_ident is prepended to every message. If you have multiple FPM ; instances running on the same server, you can change the default value ; which must suit common needs. ; Default Value: php-fpm ;syslog.ident = php-fpm ; Log level ; Possible Values: alert, error, warning, notice, debug ; Default Value: notice ;log_level = notice ; If this number of child processes exit with SIGSEGV or SIGBUS within the time ; interval set by emergency_restart_interval then FPM will restart. A value ; of '0' means 'Off'. ; Default Value: 0 ;emergency_restart_threshold = 0 ; Interval of time used by emergency_restart_interval to determine when ; a graceful restart will be initiated. This can be useful to work around ; accidental corruptions in an accelerator's shared memory. 
; Available Units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;emergency_restart_interval = 0 ; Time limit for child processes to wait for a reaction on signals from master. ; Available units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;process_control_timeout = 0 ; The maximum number of processes FPM will fork. This has been design to control ; the global number of processes when using dynamic PM within a lot of pools. ; Use it with caution. ; Note: A value of 0 indicates no limit ; Default Value: 0 process.max = 4 ; Specify the nice(2) priority to apply to the master process (only if set) ; The value can vary from -19 (highest priority) to 20 (lower priority) ; Note: - It will only work if the FPM master process is launched as root ; - The pool process will inherit the master process priority ; unless it specified otherwise ; Default Value: no set ; process.priority = -19 ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging. ; Default Value: yes daemonize = yes ; Set open file descriptor rlimit for the master process. ; Default Value: system defined value ;rlimit_files = 1024 ; Set max core size rlimit for the master process. ; Possible Values: 'unlimited' or an integer greater or equal to 0 ; Default Value: system defined value ;rlimit_core = 0 ; Specify the event mechanism FPM will use. The following is available: ; - select (any POSIX os) ; - poll (any POSIX os) ; - epoll (linux >= 2.5.44) ; - kqueue (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0) ; - /dev/poll (Solaris >= 7) ; - port (Solaris >= 10) ; Default Value: not set (auto detection) ; events.mechanism = epoll ;;;;;;;;;;;;;;;;;;;; ; Pool Definitions ; ;;;;;;;;;;;;;;;;;;;; ; Multiple pools of child processes may be started with different listening ; ports and different management options. The name of the pool will be ; used in logs and stats. 
There is no limitation on the number of pools which ; FPM can handle. Your system will tell you anyway :) ; To configure the pools it is recommended to have one .conf file per ; pool in the following directory: include=/etc/php5/fpm/pool.d/*.conf Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231379,231382#msg-231382 From mdounin at mdounin.ru Thu Oct 4 11:13:34 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 4 Oct 2012 15:13:34 +0400 Subject: OCSP response: no response sent In-Reply-To: References: Message-ID: <20121004111334.GB40452@mdounin.ru> Hello! On Wed, Oct 03, 2012 at 04:25:47PM -0400, CM Fields wrote: > I am trying to get OCSP Stapling working in Nginx 1.3.7 with SPDY > patch.spdy-52.txt built against OpenSSL 1.0.1c. SSL and SPDY > connections to the server work fine. > > Let me explain what I have done so far and perhaps someone can point > me in the right direction or if I have made a mistake somewhere. > > The OCSP section of the nginx.conf under the SSL config looks like > this. The full certificate chain is in the "ssl_certificate > /ssl_keys/domain_ssl.crt" file and clients connect without issue. > > ## SSL Certs > ssl on; > ssl_session_cache shared:SSL:10m; > ssl_certificate /ssl_keys/domain_ssl.crt; > ssl_certificate_key /ssl_keys/domain_ssl.key; > ssl_ecdh_curve secp521r1; > > ## OCSP Stapling > resolver 127.0.0.1; > ssl_stapling on; > #ssl_stapling_verify on; > ssl_stapling_file /ssl_keys/domain.staple; > #ssl_trusted_certificate /ssl_keys/domain_issuer.crt; > #ssl_stapling_responder http://ocsp.comodoca.com; Just a side note: in most cases just switching on ssl_stapling and configuring a resolver is enough; nginx will do everything else. If it isn't able to, it will complain at "warn" level in the error log. The ssl_stapling_file is mostly intended for debugging.
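As a sketch, the "just switch it on" configuration Maxim describes reduces to two lines inside the ssl server block (the resolver address is whatever the box can actually reach):

```nginx
resolver 127.0.0.1;   # any resolver nginx can query for the OCSP responder's hostname
ssl_stapling on;      # nginx fetches, caches, and staples the OCSP response itself
```

Fetch failures show up in the error log at "warn" level, which is the first place to look when "no response sent" persists.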
> According to the Nginx documentation I need to make a DER file for the
> "ssl_stapling_file" directive in order to send out the OCSP stapling
> response as part of the first connection. The domain.staple file was

As stapling is an optimization mechanism, you probably don't care
much about the first connection. The first connection will initiate an
OCSP request from nginx, and as soon as a response is available it
will be stapled.

> made like so. Special thanks to the group over at
> https://calomel.org/nginx.html for getting me this far and allowing me
> to use their server for testing against.
>
> # collect all the certificates and put them into separate files.
> level0 is the domain cert, level1 certificate authority and level2 is
> the root over the CA.
> openssl s_client -showcerts -connect calomel.org:443 < /dev/null | awk
> -v c=-1 '/-----BEGIN CERTIFICATE-----/{inc=1;c++} inc {print >
> ("level" c ".crt")} /-----END CERTIFICATE-----/{inc=0}'
>
> # Look at the certificates and check that they look like the correct format.
> for i in level?.crt; do openssl x509 -noout -serial -subject -issuer
> -in "$i"; echo; done
>
> # Put all of the publicly available certs into a bundle
> cat level{0,1,2}.crt > CAbundle.crt
>
> # Collect the OCSP response and make the DER domain.staple file. Make
> sure "Cert Status: good" and "Response verify OK"
> openssl ocsp -text -no_nonce -issuer level1.crt -CAfile CAbundle.crt
> -cert level0.crt -VAfile level1.crt -url http://ocsp.comodoca.com
> -respout domain.staple
>
>
> At this point I _believe_ I have done everything correctly and the
> domain.staple DER formatted file is right. When I test my server with
> the same steps as above, but with my own domain name instead of
> calomel.org, I still get "OCSP response: no response sent" when I test
> with the openssl client.
>
> This is the openssl client line I used for testing to see what an OCSP
> server response would look like. I tested two servers.
>
> # this server's OCSP stapling response seems to work
> openssl s_client -connect login.live.com:443 -tls1 -tlsextdebug -status
> ...
> OCSP response:
> ======================================
> OCSP Response Data:
> OCSP Response Status: successful (0x0)
> Response Type: Basic OCSP Response
> ...
>
> # calomel.org does not support OCSP stapling (yet) and I get the same
> result on my server's domain...
> openssl s_client -connect calomel.org:443 -tls1 -tlsextdebug -status
> -CAfile /usr/lib/ssl/certs/AddTrust_External_Root.pem
> ...
> OCSP response: no response sent
> ...

The main question is: in which server have you configured stapling?
I.e. are you using a dedicated ip/port, or trying to use name-based
virtual hosts instead?

Note that with SSL it's not that easy to do virtual hosts
correctly, even if SNI is supported by many clients as of now. In
particular, the above openssl command won't set the servername and
hence will hit the default server.

Additionally, while looking into this I've found that due to an
OpenSSL bug the OCSP stapling won't work at all if it's not
enabled in the default server.

-- 
Maxim Dounin
http://nginx.com/support.html

From nginx-forum at nginx.us Thu Oct 4 11:32:19 2012
From: nginx-forum at nginx.us (otherjohn)
Date: Thu, 04 Oct 2012 07:32:19 -0400
Subject: Please help with optimization
In-Reply-To: 
References: 
Message-ID: <5602689fd9083c55c6e8c0a63b5b876f.NginxMailingListEnglish@forum.nginx.org>

Thanks Stephen,

Our DB setup is an external RDS MySQL Large DB instance: 7.5 GB memory,
4 ECUs (2 virtual cores with 2 ECUs each), 64-bit platform, High I/O
Capacity. There is not much I can do to increase the performance of
this, but I don't think it's what's causing the problem. The web server
has no traffic (except for me) and the DB is only being used for one
other site. Based on its reports, it looks like even if we did this
migration at peak traffic time, it still wouldn't touch 80% of DB usage.
John

Stefan Caunter Wrote:
-------------------------------------------------------
> Performance issues with large forum migrations are usually database
> i/o specific; nginx is good because it gets out of the way and sends
> the php responses from fpm. Standard php configuration for nginx will
> get it up and running, but you will want to tune php to suit the forum
> software, and devote most of the system resources to the database, which
> will consume most of them. When that happens, php waits, fastcgis pile
> up, and nginx will hit timeouts waiting for php, which is waiting for
> mysql. Make sure mysql has good write performance and plenty of
> memory. Beware of expensive forum plugins (use slow logging to spot
> horrible queries that make php wait).
>
> On Thu, Oct 4, 2012 at 12:21 AM, otherjohn
> wrote:
> > Hello All,
> > I am looking to optimize my nginx setup but am new to nginx and some
> > of the settings. Currently the webserver is set to just default
> > settings, and when using a forum import script to migrate our forum
> > from another software package I am getting "upstream timed out
> > (110: Connection timed out) while reading response header from upstream".
> > Could someone give me some suggestions?
> >
> > I am running an Amazon ec2 Ubuntu Precise 64 server with the following
> > hardware setup:
> > 7.5 GB memory
> > 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)
> > 850 GB instance storage
> > 64-bit platform
> > I/O Performance: High
> >
> > I am running nginx 1.2.4 and PHP 5.3.10-1ubuntu3.4 (fpm-fcgi)
> >
> > The website will be running a forum with on average 2000 visitors at
> > any given time, over 60k members and 4 million posts.
> > I need help with the following settings that currently only have the
> > defaults set:
> > php-fpm's php.ini
> > php-fpm's pool configuration
> > nginx.conf
> > anything else you can suggest.
> >
> > Thanks so much!
> >
> > Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,231369,231369#msg-231369
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231369,231384#msg-231384

From nginx-forum at nginx.us Thu Oct 4 17:12:34 2012
From: nginx-forum at nginx.us (Piki)
Date: Thu, 04 Oct 2012 13:12:34 -0400
Subject: Please help with optimization
In-Reply-To: <5602689fd9083c55c6e8c0a63b5b876f.NginxMailingListEnglish@forum.nginx.org>
References: <5602689fd9083c55c6e8c0a63b5b876f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <77da8da5e44a369535f8be0c57d58587.NginxMailingListEnglish@forum.nginx.org>

My best suggestion would be to look through your nginx and php settings
to limit the number of persistent connections and total connections
(both to mysql and to your site(s)). The more persistent connections
you allow, the more server processes and visiting browsers can each
hold a connection open; limiting them forces anything inactive or
taking too long to be cut off once the limit is reached.
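As a rough sketch of where such limits live (the numbers below are illustrative starting points to tune, not recommendations from this thread), the PHP worker count is bounded in the php-fpm pool configuration:

```ini
; php-fpm pool: cap concurrent PHP workers so requests queue in FPM
; instead of piling up connections to MySQL (values are starting points)
pm = dynamic
pm.max_children = 20         ; hard cap on concurrent PHP processes
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
pm.max_requests = 500        ; recycle workers periodically
;request_terminate_timeout = 30s  ; cut off requests stuck waiting on the DB
```

On the nginx side, a lower keepalive_timeout in nginx.conf releases idle client connections sooner.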
Unfortunately I don't have the experience to come close to suggesting
good values for you. My best advice would be to check for general
recommendations on Google et al., then pick what you feel is safe and
adjust over time until you feel you have hit the right values.

Another option would be to enable gzip within nginx (add "gzip on;"
without quotes to nginx.conf), then use Google to research which
browsers don't support gzip. You can blacklist those browsers by adding
something similar to:

gzip_disable "msie6";

to nginx.conf, placing all the browser UA strings within quotes. That
way, nginx will automatically use gzip compression for browsers that
are not blacklisted, which will save bandwidth. There are also various
options you can add to nginx.conf (I think the default config has them
commented out) to tweak this further.

There are plenty of other things you can do to optimize things, most of
which you can find quite easily in the nginx wiki and via Google et al.
Search for things like "nginx optimization", "nginx and php-fpm
optimization", and "nginx and php-fpm and mysql optimization".

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231369,231391#msg-231391

From aweber at comcast.net Thu Oct 4 18:20:30 2012
From: aweber at comcast.net (AJ Weber)
Date: Thu, 04 Oct 2012 14:20:30 -0400
Subject: ignore part of URL to force caching?
Message-ID: <506DD36E.6090307@comcast.net>

I would like to "override" the intent of the app server that is
basically disabling any caching of the backend file. For example, they
are embedding a "noCache=#######" parameter at the end of the URL
(there are other parameters following, but if I can check the URL up to
the "?" that would suit me fine).

This is actually a dynamically generated SWF file, but the file is then
constant for a reasonable amount of time, such that I'd like to cache
it for a few minutes.
Is there a way in a specific "location" to tell nginx to ignore the
parameters (or any portion of the URL) when determining the cached
object for that URL? In other words, tell nginx to cache content for
that location, say only 5min, and ignore all parameters when
determining whether to cache and how to match cached content?

If I'm not explaining myself properly, please let me know and I'll try
another route. :)

Thanks in advance,
AJ

From cmfileds at gmail.com Thu Oct 4 18:31:41 2012
From: cmfileds at gmail.com (CM Fields)
Date: Thu, 4 Oct 2012 14:31:41 -0400
Subject: OCSP response: no response sent
In-Reply-To: <20121004111334.GB40452@mdounin.ru>
References: <20121004111334.GB40452@mdounin.ru>
Message-ID: 

Maxim,

Thank you. I was using virtual hosts. Once I switched my conf over to
using a default ssl server block, with "server_name _;", OCSP stapling
worked with the openssl client test. This is perfectly fine in my
situation. All that was needed was "ssl_stapling on;" and the resolver
line, just like you mentioned.

Question:

I noticed the OCSP Response Data has an update time and a "next" update time.

Cert Status: good
This Update: Oct 4 00:00:37 2012 GMT
Next Update: Oct 8 00:00:37 2012 GMT

Am I correct in assuming nginx will cache the OCSP Response Data at
least until the "Next Update" time, thus reducing the number of OCSP
requests going to the CA?

Finally, just a heads up. If I incorrectly put "ssl_stapling on;" in
the parent http{} area, Nginx 1.3.7 will crash/dump.

Again, thanks for a great web server.

On Thu, Oct 4, 2012 at 7:13 AM, Maxim Dounin wrote:
> Hello!
>
> On Wed, Oct 03, 2012 at 04:25:47PM -0400, CM Fields wrote:
>
>> I am trying to get OCSP Stapling working in Nginx 1.3.7 with SPDY
>> patch.spdy-52.txt built against OpenSSL 1.0.1c. SSL and SPDY
>> connections to the server work fine.
>>
>> Let me explain what I have done so far and perhaps someone can point
>> me in the right direction or if I have made a mistake somewhere.
>> >> The OCSP section of the nginx.conf under the SSL config looks like >> this. The full certificate chain is in the "ssl_certificate >> /ssl_keys/domain_ssl.crt" file and clients connect without issue. >> >> ## SSL Certs >> ssl on; >> ssl_session_cache shared:SSL:10m; >> ssl_certificate /ssl_keys/domain_ssl.crt; >> ssl_certificate_key /ssl_keys/domain_ssl.key; >> ssl_ecdh_curve secp521r1; >> >> ## OCSP Stapling >> resolver 127.0.0.1; >> ssl_stapling on; >> #ssl_stapling_verify on; >> ssl_stapling_file /ssl_keys/domain.staple; >> #ssl_trusted_certificate /ssl_keys/domain_issuer.crt; >> #ssl_stapling_responder http://ocsp.comodoca.com; > > Just a side note: in most cases just switching on ssl_stapling and > configuring resolver is enough, nginx will do anything else. If > it won't be able to, it will complain at "warn" level to error > log. The ssl_stapling_file is mostly intended for debugging. > >> According to the Nginx documentation I need to make a DER file for the >> "ssl_stapling_file" directive in order to send out the OCSP stapling >> response as part of the first connection. The domain.staple file was > > As stapling is an optimization mechanism, you probably don't care > much about the first connection. First connection will initiate a > OCSP request from nginx, and as soon as response is available it > will be stapled. > >> made like so. Special thanks to the group over at >> https://calomel.org/nginx.html for getting me this far and allowing me >> to use their server for testing against. >> >> # collect all the certificates and put them into separate files. >> level0 is the domain cert, level1 certificate authority and level2 is >> the root over the CA. >> openssl s_client -showcerts -connect calomel.org:443 < /dev/null | awk >> -v c=-1 '/-----BEGIN CERTIFICATE-----/{inc=1;c++} inc {print > >> ("level" c ".crt")} /---END CERTIFICATE-----/{inc=0}' >> >> # Look at the certificates and that they look like the correct format. 
>> for i in level?.crt; do openssl x509 -noout -serial -subject -issuer >> -in "$i"; echo; done >> >> # Put all of the publicly available certs into a bundle >> cat level{0,1,2}.crt > CAbundle.crt >> >> # Collect the OCSP response and make the DER domain.staple file. Make >> sure "Cert Status: good" and "Response verify OK" >> openssl ocsp -text -no_nonce -issuer level1.crt -CAfile CAbundle.crt >> -cert level0.crt -VAfile level1.crt -url http://ocsp.comodoca.com >> -respout domain.staple >> >> >> >> At this point I _believe_ have done everything correctly and the >> domain.staple DER formatted file is right. When I test my server with >> the same steps as above, but with my own domain name instead of >> calomel.org, I still get "OCSP response: no response sent" when I test >> with openssl client. >> >> This is the openssl client line I used for testing to see what a OCSP >> server response would look like. I tested two servers. >> >> # this server's OCSP stapling response seems to work >> openssl s_client -connect login.live.com:443 -tls1 -tlsextdebug -status >> ... >> OCSP response: >> ====================================== >> OCSP Response Data: >> OCSP Response Status: successful (0x0) >> Response Type: Basic OCSP Response >> ... >> >> # calomel..org does not support OSCP stapling (yet) and I get the same >> result on my server's domain... >> openssl s_client -connect calomel.org:443 -tls1 -tlsextdebug -status >> -CAfile /usr/lib/ssl/certs/AddTrust_External_Root.pem >> ... >> OCSP response: no response sent >> ... > > The main question is: in which server you've configured stapling? > I.e. are you using dedicated ip/port, or try to use name-based > virtualhosts instead? > > Note that with SSL it's not that easy to do virtualhosts > correctly, even if SNI is supported by many clients as of now. In > particular the above openssl command won't set servername and > hence will hit default server. 
>
> Additionally, while looking into this I've found that due to an
> OpenSSL bug the OCSP stapling won't work at all if it's not
> enabled in the default server.
>
> --
> Maxim Dounin
> http://nginx.com/support.html
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From alansilva at acm.org Thu Oct 4 19:37:09 2012
From: alansilva at acm.org (Alan Silva)
Date: Thu, 4 Oct 2012 16:37:09 -0300
Subject: WAF Recommendations?
In-Reply-To: <70168872bc656160b18c5394e6c94f9e@schug.net>
References: <746c1e0ead5b74a7c4258bd94b9ac216.NginxMailingListEnglish@forum.nginx.org>
 <70168872bc656160b18c5394e6c94f9e@schug.net>
Message-ID: 

Hi,

I recommend trying ModSecurity for nginx; with some adaptations, the
CRS (a set of ModSecurity rules) now works with this module.

Instructions: http://www.modsecurity.org/projects/modsecurity/nginx/index.html

Regards,

Alan

On Tuesday, September 25, 2012 at 3:40 AM, Christoph Schug wrote:
> On 2012-09-25 03:47, Listjj wrote:
> > May I ask where I can download the source of ngx_lua?
>
> Speaking of lua-nginx-module, it's hosted on GitHub
>
> https://github.com/chaoslawful/lua-nginx-module
> https://github.com/chaoslawful/lua-nginx-module/tags
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org (mailto:nginx at nginx.org)
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From aweber at comcast.net Thu Oct 4 19:59:01 2012
From: aweber at comcast.net (Aaron)
Date: Thu, 04 Oct 2012 15:59:01 -0400
Subject: WAF Recommendations?
In-Reply-To: 
References: <746c1e0ead5b74a7c4258bd94b9ac216.NginxMailingListEnglish@forum.nginx.org>
 <70168872bc656160b18c5394e6c94f9e@schug.net>
Message-ID: 

My reservation is whether I need to compile it, and how.
I think I would like to try it if someone can tell me the necessary steps (or goes ahead and builds it for centos 6). -Aaron Alan Silva wrote: >Hi, > >I recommend you to try use of modsecurity for NGINX, with some >adaptions, >the CRS (a set for modsecurity rules) working now with this module. > >Instructions: >http://www.modsecurity.org/projects/modsecurity/nginx/index.html > >Regards, > >Alan > >On Tuesday, September 25, 2012 at 3:40 AM, Christoph Schug wrote: > >> On 2012-09-25 03:47, Listjj wrote: >> > May i ask where can i download the source of ngx_lua? >> >> >> Speaking of lua-nginx-module, it's hosted on GitHub >> >> https://github.com/chaoslawful/lua-nginx-module >> https://github.com/chaoslawful/lua-nginx-module/tags >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org (mailto:nginx at nginx.org) >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> > > > > >------------------------------------------------------------------------ > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From alansilva at acm.org Thu Oct 4 21:33:30 2012 From: alansilva at acm.org (Alan Silva) Date: Thu, 4 Oct 2012 18:33:30 -0300 Subject: WAF Recommendations? In-Reply-To: References: <746c1e0ead5b74a7c4258bd94b9ac216.NginxMailingListEnglish@forum.nginx.org> <70168872bc656160b18c5394e6c94f9e@schug.net> Message-ID: <7B8BDFD24ACF403ABDF26C86E7076D7E@gmail.com> Hi Aaron, In instructions have a step-by-step to package build, but you have more specific doubts about module, I recommend you to subscribe and ask in modsecurity-users list. But I think today modsecurity is a good and usual alternative for WAF in NGINX. Regards, Alan On Thursday, October 4, 2012 at 4:59 PM, Aaron wrote: > My reservation is whether I need to compile it, and how. 
Can nginx use shared libraries or do I have to recompile that from source too? > > I think I would like to try it if someone can tell me the necessary steps (or goes ahead and builds it for centos 6). -------------- next part -------------- An HTML attachment was scrubbed... URL: From albert at nn.iij4u.or.jp Fri Oct 5 01:10:38 2012 From: albert at nn.iij4u.or.jp (shin fukuda) Date: Fri, 5 Oct 2012 10:10:38 +0900 Subject: If-Modified-Since to upstream servers In-Reply-To: <17AB315F-0184-4350-9AEF-5EB5DC02BB73@nginx.com> References: <20121001155146.ce12af3c99c61d93e8b74db7@nn.iij4u.or.jp> <17AB315F-0184-4350-9AEF-5EB5DC02BB73@nginx.com> Message-ID: <20121005101038.e99fdb7ad5fc7788d3cea96b@nn.iij4u.or.jp> Hi. That's good news! On Wed, 3 Oct 2012 14:10:26 +0400 Andrew Alexeev wrote: > Probably to appear around Jan 2013, maybe sooner > > On Oct 1, 2012, at 10:51 AM, shin fukuda wrote: > > > Hi. > > > > I'd like to know status of "1.3 Planned If-Modified-Since to upstream servers". > > > > Thank you. > > > > > > -- > > shin fukuda > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- shin fukuda From nginx-forum at nginx.us Fri Oct 5 02:19:29 2012 From: nginx-forum at nginx.us (dullnicker) Date: Thu, 04 Oct 2012 22:19:29 -0400 Subject: nginx: [alert] mmap(MAP_ANON|MAP_SHARED, 2097152) failed (28: No space left on device) Message-ID: <8c7d9548f2c26eb7b20c3267d0ff0fb4.NginxMailingListEnglish@forum.nginx.org> Dear all, I must kindly ask for your help again. I have installed nginx on dozens of VPS and never came across the problem that I am facing now on my newest installation. 
When I try to (re-)start nginx, I get the following error message:

----
nginx: [alert] mmap(MAP_ANON|MAP_SHARED, 2097152) failed (28: No space left on device)
----

Here are some values from my /etc/sysctl.conf:

----
# Controls the maximum size of a message, in bytes
# kernel.msgmnb = 65536

# Controls the default maximum size of a message queue
# kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
# kernel.shmall = 4294967296
----

Needless to say, I have 900 MB RAM and 49 GB disk space free. OS is
CentOS 6.

What am I missing? I am really pulling my hair out right now! Thank you
very much for any hint in advance!

Kind regards
-A

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231404,231404#msg-231404

From sb at waeme.net Fri Oct 5 05:17:01 2012
From: sb at waeme.net (Sergey Budnevitch)
Date: Fri, 5 Oct 2012 09:17:01 +0400
Subject: nginx: [alert] mmap(MAP_ANON|MAP_SHARED, 2097152) failed (28: No space left on device)
In-Reply-To: <8c7d9548f2c26eb7b20c3267d0ff0fb4.NginxMailingListEnglish@forum.nginx.org>
References: <8c7d9548f2c26eb7b20c3267d0ff0fb4.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <03D3ACED-EC03-48D4-B8E3-3217A7557DEF@waeme.net>

On 5 Oct 2012, at 06:19, dullnicker wrote:
>
> Needless to say, I have 900 MB RAM and 49 GB disk space free. OS is
> CentOS 6.
>
> What am I missing? I am really pulling my hair out right now! Thank you very
> much for any hint in advance!

Please show sysctl vm.max_map_count and ulimit -a

From nginx-forum at nginx.us Fri Oct 5 07:03:23 2012
From: nginx-forum at nginx.us (dullnicker)
Date: Fri, 05 Oct 2012 03:03:23 -0400
Subject: nginx: [alert] mmap(MAP_ANON|MAP_SHARED, 2097152) failed (28: No space left on device)
In-Reply-To: <03D3ACED-EC03-48D4-B8E3-3217A7557DEF@waeme.net>
References: <03D3ACED-EC03-48D4-B8E3-3217A7557DEF@waeme.net>
Message-ID: 

Dear Sergey,

thank you for helping out!
Here are the outputs:

[root at c3 ~]# sysctl vm.max_map_count
vm.max_map_count = 65530

[root at c3 ~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 127163
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 256
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231404,231406#msg-231406

From appa at perusio.net Fri Oct 5 09:49:12 2012
From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida)
Date: Fri, 05 Oct 2012 11:49:12 +0200
Subject: ignore part of URL to force caching?
In-Reply-To: <506DD36E.6090307@comcast.net>
References: <506DD36E.6090307@comcast.net>
Message-ID: <87fw5tuu3r.wl%appa@perusio.net>

On 4 Out 2012 20h20 CEST, aweber at comcast.net wrote:

> I would like to "override" the intent of the app server that is
> basically disabling any caching of the backend file. For example,
> they are embedding a "noCache=#######" parameter at the end of the
> URL (there are other parameters following, but if I can check the
> url up-to the "?" that would suit me fine).
>
> This is actually a dynamically generated SWF file, but the file is
> then constant for a reasonable amount of time such that I'd like to
> cache it for a few minutes.

> Is there a way in a specific "location" to tell nginx to ignore the
> parameters (or any portion of the URL) when determining the cached
> object for that URL? In other words, tell nginx to cache content
> for that location, say only 5min, and ignore all parameters when
> determining whether to cache and how to match cached content?

If I understand correctly.
Inside your main cached location you can do:

if ($request_uri ~* noCache) {
    return 418;
}

error_page 418 =200 @other-cached-location;

And add @other-cached-location:

location @other-cached-location {
    # Set your cache parameters here and serve the content.
}

--- appa

From sb at waeme.net Fri Oct 5 09:50:06 2012
From: sb at waeme.net (Sergey Budnevitch)
Date: Fri, 5 Oct 2012 13:50:06 +0400
Subject: nginx: [alert] mmap(MAP_ANON|MAP_SHARED, 2097152) failed (28: No space left on device)
In-Reply-To: 
References: <03D3ACED-EC03-48D4-B8E3-3217A7557DEF@waeme.net>
Message-ID: 

On 5 Oct 2012, at 11:03, dullnicker wrote:

> Dear Sergey,
>
> thank you for helping out! Here are the outputs:

Nothing unusual here. It seems mmap failed because of the vm
hypervisor's settings/limits. Try to compile (gcc -o mmap_test mmap_test.c)
and run (./mmap_test) the simple test in the attachment - it should not
exit with errors.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: mmap_test.c
Type: application/octet-stream
Size: 657 bytes
Desc: not available
URL: 

From igor at sysoev.ru Fri Oct 5 10:08:20 2012
From: igor at sysoev.ru (Igor Sysoev)
Date: Fri, 5 Oct 2012 14:08:20 +0400
Subject: ignore part of URL to force caching?
In-Reply-To: <506DD36E.6090307@comcast.net>
References: <506DD36E.6090307@comcast.net>
Message-ID: <20121005100820.GA16818@nginx.com>

On Thu, Oct 04, 2012 at 02:20:30PM -0400, AJ Weber wrote:
> I would like to "override" the intent of the app server that is
> basically disabling any caching of the backend file. For example, they
> are embedding a "noCache=#######" parameter at the end of the URL (there
> are other parameters following, but if I can check the url up-to the "?"
> that would suit me fine).
>
> This is actually a dynamically generated SWF file, but the file is then
> constant for a reasonable amount of time such that I'd like to cache it
> for a few minutes.
>
> Is there a way in a specific "location" to tell nginx to ignore the
> parameters (or any portion of the URL) when determining the cached
> object for that URL? In other words, tell nginx to cache content for
> that location, say only 5min, and ignore all parameters when determining
> whether to cache and how to match cached content?
>
> If I'm not explaining myself properly, please let me know and I'll try
> another route. :)

You can use "proxy_cache_key":
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_key

For example, to completely ignore query strings:

location /swf/ {
    ...
    proxy_cache ...
    proxy_cache_valid 5m;
    proxy_cache_key $proxy_host$uri;
}

or to take the query string parameters ONE and TWO into account:

location /swf/ {
    ...
    proxy_cache ...
    proxy_cache_valid 5m;
    proxy_cache_key $proxy_host$uri?$arg_ONE&$arg_TWO;
}

-- 
Igor Sysoev
http://nginx.com/support.html

From igor at sysoev.ru Fri Oct 5 10:09:08 2012
From: igor at sysoev.ru (Igor Sysoev)
Date: Fri, 5 Oct 2012 14:09:08 +0400
Subject: ignore part of URL to force caching?
In-Reply-To: <87fw5tuu3r.wl%appa@perusio.net>
References: <506DD36E.6090307@comcast.net>
 <87fw5tuu3r.wl%appa@perusio.net>
Message-ID: <20121005100908.GB16818@nginx.com>

On Fri, Oct 05, 2012 at 11:49:12AM +0200, Ant?nio P. P. Almeida wrote:
> On 4 Out 2012 20h20 CEST, aweber at comcast.net wrote:
>
> > I would like to "override" the intent of the app server that is
> > basically disabling any caching of the backend file. For example,
> > they are embedding a "noCache=#######" parameter at the end of the
> > URL (there are other parameters following, but if I can check the
> > url up-to the "?" that would suit me fine).
> >
> > This is actually a dynamically generated SWF file, but the file is
> > then constant for a reasonable amount of time such that I'd like to
> > cache it for a few minutes.
> > > Is there a way in a specific "location" to tell nginx to ignore the > > parameters (or any portion of the URL) when determining the cached > > object for that URL? In other words, tell nginx to cache content > > for that location, say only 5min, and ignore all parameters when > > determining whether to cache and how to match cached content? > > If I understand correctly. Inside your main cached location you can do: > > if ($request_uri ~* noCache) { > return 418; > } > > error_page 418 =200 @other-cached-location; > > And add @other-cached-location: > > location @other-cached-location { > # Set your cache parameters here and serve the content. > } There is proxy_cache_key for this. -- Igor Sysoev http://nginx.com/support.html From appa at perusio.net Fri Oct 5 10:20:00 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Fri, 05 Oct 2012 12:20:00 +0200 Subject: ignore part of URL to force caching? In-Reply-To: <20121005100908.GB16818@nginx.com> References: <506DD36E.6090307@comcast.net> <87fw5tuu3r.wl%appa@perusio.net> <20121005100908.GB16818@nginx.com> Message-ID: <87ehldusof.wl%appa@perusio.net> On 5 Out 2012 12h09 CEST, igor at sysoev.ru wrote: > On Fri, Oct 05, 2012 at 11:49:12AM +0200, Ant?nio P. P. Almeida > wrote: >> On 4 Out 2012 20h20 CEST, aweber at comcast.net wrote: >> >>> I would like to "override" the intent of the app server that is >>> basically disabling any caching of the backend file. For example, >>> they are embedding a "noCache=#######" parameter at the end of the >>> URL (there are other parameters following, but if I can check the >>> url up-to the "?" that would suit me fine). >>> >>> This is actually a dynamically generated SWF file, but the file is >>> then constant for a reasonable amount of time such that I'd like >>> to cache it for a few minutes. 
>> >>> Is there a way in a specific "location" to tell nginx to ignore
>>> the parameters (or any portion of the URL) when determining the
>>> cached object for that URL? In other words, tell nginx to cache
>>> content for that location, say only 5min, and ignore all
>>> parameters when determining whether to cache and how to match
>>> cached content?
>>
>> If I understand correctly. Inside your main cached location you can
>> do:
>>
>> if ($request_uri ~* noCache) {
>> return 418;
>> }
>>
>> error_page 418 =200 @other-cached-location;
>>
>> And add @other-cached-location:
>>
>> location @other-cached-location {
>> # Set your cache parameters here and serve the content.
>> }
>
> There is proxy_cache_key for this.

I was under the impression that there wasn't a specific location for
the swf files. If there is, in fact, then the proxy module already has
all you need to do it much more easily.

--- appa

From nginx-forum at nginx.us Fri Oct 5 10:43:06 2012
From: nginx-forum at nginx.us (dullnicker)
Date: Fri, 05 Oct 2012 06:43:06 -0400
Subject: nginx: [alert] mmap(MAP_ANON|MAP_SHARED, 2097152) failed (28: No space left on device)
In-Reply-To: 
References: 
Message-ID: <592355dd3cb18b901e7ff5b7998d2f40.NginxMailingListEnglish@forum.nginx.org>

Sergey Budnevitch Wrote:
-------------------------------------------------------
> Nothing unusual here. It seems mmap failed because of
> vm hypervisor settings/limits. Try to compile (gcc -o mmap_test
> mmap_test.c)
> and run (./mmap_test) simple test in attachment - it should not exit
> with
> errors.

Thank you very much, Sergey! I compiled and ran the test and here is
the result:

----
[root at c3 test]# ./mmap_test
65536 bytes mmaped
131072 bytes mmaped
262144 bytes mmaped
524288 bytes mmaped
mmap(MAP_ANON|MAP_SHARED, 1048576) failed, errno = 28
----

So there is an error. I guess I will have to contact the VPS provider
then...

Again, thank you for your help and support!
Kind regards -A Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231404,231415#msg-231415 From mdounin at mdounin.ru Fri Oct 5 11:10:34 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Oct 2012 15:10:34 +0400 Subject: OCSP response: no response sent In-Reply-To: References: <20121004111334.GB40452@mdounin.ru> Message-ID: <20121005111033.GM40452@mdounin.ru> Hello! On Thu, Oct 04, 2012 at 02:31:41PM -0400, CM Fields wrote: > Maxim, > > Thank you. I was using virtual hosts. Once I switched my conf over to > using a default ssl server block, with "server _;" ocsp stapling > worked with the openssl client test. This is perfectly fine in my > situation. All that was needed is "ssl_stapling on;" and the resolver > line just like you mentioned. > > > Question: > > I noticed the OCSP Response Data has an update time and a "next" update time. > > Cert Status: good > This Update: Oct 4 00:00:37 2012 GMT > Next Update: Oct 8 00:00:37 2012 GMT > > Am I correct in assuming nginx will cache the OSCP Response Data at > least till "Next Update" time thus reducing the amount of OCSP > requests going to the CA? Not exactly. As of now, nginx will cache valid responses for 1 hour, and errors for 5 mins. > Finally, just a heads up. If I incorrectly put "ssl_stapling on;" in > the parent http{} area Nginx 1.3.7 will crash/dump. Ooops, thank you for report. Fix: --- a/src/http/modules/ngx_http_ssl_module.c +++ b/src/http/modules/ngx_http_ssl_module.c @@ -737,7 +737,7 @@ ngx_http_ssl_init(ngx_conf_t *cf) sscf = cscfp[s]->ctx->srv_conf[ngx_http_ssl_module.ctx_index]; - if (!sscf->stapling) { + if (sscf->ssl.ctx == NULL || !sscf->stapling) { continue; } (Committed, see http://trac.nginx.org/nginx/changeset/4888/nginx) > > Again, thanks for a great web server. > > On Thu, Oct 4, 2012 at 7:13 AM, Maxim Dounin wrote: > > Hello! 
> > > > On Wed, Oct 03, 2012 at 04:25:47PM -0400, CM Fields wrote: > > > >> I am trying to get OCSP Stapling working in Nginx 1.3.7 with SPDY > >> patch.spdy-52.txt built against OpenSSL 1.0.1c. SSL and SPDY > >> connections to the server work fine. > >> > >> Let me explain what I have done so far and perhaps someone can point > >> me in the right direction or if I have made a mistake somewhere. > >> > >> The OCSP section of the nginx.conf under the SSL config looks like > >> this. The full certificate chain is in the "ssl_certificate > >> /ssl_keys/domain_ssl.crt" file and clients connect without issue. > >> > >> ## SSL Certs > >> ssl on; > >> ssl_session_cache shared:SSL:10m; > >> ssl_certificate /ssl_keys/domain_ssl.crt; > >> ssl_certificate_key /ssl_keys/domain_ssl.key; > >> ssl_ecdh_curve secp521r1; > >> > >> ## OCSP Stapling > >> resolver 127.0.0.1; > >> ssl_stapling on; > >> #ssl_stapling_verify on; > >> ssl_stapling_file /ssl_keys/domain.staple; > >> #ssl_trusted_certificate /ssl_keys/domain_issuer.crt; > >> #ssl_stapling_responder http://ocsp.comodoca.com; > > > > Just a side note: in most cases just switching on ssl_stapling and > > configuring resolver is enough, nginx will do anything else. If > > it won't be able to, it will complain at "warn" level to error > > log. The ssl_stapling_file is mostly intended for debugging. > > > >> According to the Nginx documentation I need to make a DER file for the > >> "ssl_stapling_file" directive in order to send out the OCSP stapling > >> response as part of the first connection. The domain.staple file was > > > > As stapling is an optimization mechanism, you probably don't care > > much about the first connection. First connection will initiate a > > OCSP request from nginx, and as soon as response is available it > > will be stapled. > > > >> made like so. Special thanks to the group over at > >> https://calomel.org/nginx.html for getting me this far and allowing me > >> to use their server for testing against. 
> >> > >> # collect all the certificates and put them into separate files. > >> level0 is the domain cert, level1 certificate authority and level2 is > >> the root over the CA. > >> openssl s_client -showcerts -connect calomel.org:443 < /dev/null | awk > >> -v c=-1 '/-----BEGIN CERTIFICATE-----/{inc=1;c++} inc {print > > >> ("level" c ".crt")} /---END CERTIFICATE-----/{inc=0}' > >> > >> # Look at the certificates and that they look like the correct format. > >> for i in level?.crt; do openssl x509 -noout -serial -subject -issuer > >> -in "$i"; echo; done > >> > >> # Put all of the publicly available certs into a bundle > >> cat level{0,1,2}.crt > CAbundle.crt > >> > >> # Collect the OCSP response and make the DER domain.staple file. Make > >> sure "Cert Status: good" and "Response verify OK" > >> openssl ocsp -text -no_nonce -issuer level1.crt -CAfile CAbundle.crt > >> -cert level0.crt -VAfile level1.crt -url http://ocsp.comodoca.com > >> -respout domain.staple > >> > >> > >> > >> At this point I _believe_ have done everything correctly and the > >> domain.staple DER formatted file is right. When I test my server with > >> the same steps as above, but with my own domain name instead of > >> calomel.org, I still get "OCSP response: no response sent" when I test > >> with openssl client. > >> > >> This is the openssl client line I used for testing to see what a OCSP > >> server response would look like. I tested two servers. > >> > >> # this server's OCSP stapling response seems to work > >> openssl s_client -connect login.live.com:443 -tls1 -tlsextdebug -status > >> ... > >> OCSP response: > >> ====================================== > >> OCSP Response Data: > >> OCSP Response Status: successful (0x0) > >> Response Type: Basic OCSP Response > >> ... > >> > >> # calomel..org does not support OSCP stapling (yet) and I get the same > >> result on my server's domain... 
> >> openssl s_client -connect calomel.org:443 -tls1 -tlsextdebug -status > >> -CAfile /usr/lib/ssl/certs/AddTrust_External_Root.pem > >> ... > >> OCSP response: no response sent > >> ... > > > > The main question is: in which server you've configured stapling? > > I.e. are you using dedicated ip/port, or try to use name-based > > virtualhosts instead? > > > > Note that with SSL it's not that easy to do virtualhosts > > correctly, even if SNI is supported by many clients as of now. In > > particular the above openssl command won't set servername and > > hence will hit default server. > > > > Additionally, while looking into this I've found that due to > > OpenSSL bug the OCSP stapling won't work at all if it's not > > enabled in the default server. > > > > -- > > Maxim Dounin > > http://nginx.com/support.html > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Fri Oct 5 12:33:20 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Oct 2012 16:33:20 +0400 Subject: How to tell Proxy module to retrieve secondary URL if primary URL doenst exist In-Reply-To: <646ee7f832059440fc3f08f8b3d3cc10.NginxMailingListEnglish@forum.nginx.org> References: <20121004094544.GA40452@mdounin.ru> <646ee7f832059440fc3f08f8b3d3cc10.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121005123320.GN40452@mdounin.ru> Hello! On Thu, Oct 04, 2012 at 12:08:55PM -0400, wurb32 wrote: > Hi , > > I tried with error_page and proxy_intercept_errors and it is not working for > me. Do you mind if you can give me an example configuration? 
Trivial config with fallback to a predefined static file would look like: location / { error_page 404 = /fallback.jpg; proxy_pass http://upstream; proxy_intercept_errors on; } location = /fallback.jpg { # serve static file } If you want rewrite from /something_300.jpg to /something_100.jpg to happen automatically, for all possible values of "something", then config like this should work, using named location and rewrite: location / { error_page 404 = @fallback; proxy_pass http://upstream; proxy_intercept_errors on; } location @fallback { # if request ends with _300.jpg - rewrite to _100.jpg # and stop processing of rewrite rules, i.e. continue # with proxy_pass; else return 404 rewrite ^(.*)_300.jpg$ $1_100.jpg break; return 404; proxy_pass http://upstream; } Documentation: http://nginx.org/r/proxy_intercept_errors http://nginx.org/r/error_page http://nginx.org/r/location http://nginx.org/r/rewrite -- Maxim Dounin http://nginx.com/support.html From aweber at comcast.net Fri Oct 5 14:55:34 2012 From: aweber at comcast.net (AJ Weber) Date: Fri, 05 Oct 2012 10:55:34 -0400 Subject: webdav support Message-ID: <506EF4E6.6050205@comcast.net> I will be setting up a "server" for webdav access. Basically for SSL session caching only, not much else but that and proxy back to the actual app server. Do I need the webdav and the ngx-dav-ext-module to support all the WEBDAV http methods just to pass-thru? Or are those modules just needed if I want to filter and/or parse the http traffic? 
Thanks, AJ From nginx-forum at nginx.us Fri Oct 5 14:57:49 2012 From: nginx-forum at nginx.us (dullnicker) Date: Fri, 05 Oct 2012 10:57:49 -0400 Subject: nginx: [alert] mmap(MAP_ANON|MAP_SHARED, 2097152) failed (28: No space left on device) In-Reply-To: <8c7d9548f2c26eb7b20c3267d0ff0fb4.NginxMailingListEnglish@forum.nginx.org> References: <8c7d9548f2c26eb7b20c3267d0ff0fb4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <122a271f341d616f83e98204b68afde1.NginxMailingListEnglish@forum.nginx.org> Just for future reference: "shmpages size" had to get raised by the VPS provider. Works fine now. Thanks again for the assistance! Kind regards -A Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231404,231428#msg-231428 From aweber at comcast.net Fri Oct 5 15:45:04 2012 From: aweber at comcast.net (AJ Weber) Date: Fri, 05 Oct 2012 11:45:04 -0400 Subject: ignore part of URL to force caching? In-Reply-To: <20121005100820.GA16818@nginx.com> References: <506DD36E.6090307@comcast.net> <20121005100820.GA16818@nginx.com> Message-ID: <506F0080.7060703@comcast.net> This worked exactly as Igor described. Thank you! -AJ On 10/5/2012 6:08 AM, Igor Sysoev wrote: > On Thu, Oct 04, 2012 at 02:20:30PM -0400, AJ Weber wrote: >> I would like to "override" the intent of the app server that is >> basically disabling any caching of the backend file. For example, they >> are embedding a "noCache=#######" parameter at the end of the URL (there >> are other parameters following, but if I can check the url up-to the "?" >> that would suit me fine). >> >> This is actually a dynamically generated SWF file, but the file is then >> constant for a reasonable amount of time such that I'd like to cache it >> for a few minutes. >> >> Is there a way in a specific "location" to tell nginx to ignore the >> parameters (or any portion of the URL) when determining the cached >> object for that URL? 
In other words, tell nginx to cache content for >> that location, say only 5min, and ignore all parameters when determining >> whether to cache and how to match cached content? >> >> If I'm not explaining myself properly, please let me know and I'll try >> another route. :) > You can use "proxy_cache_key": > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_key > > For example, to completely ignore query strings: > > location /swf/ { > ... > proxy_cache ... > proxy_cache_valid 5m; > proxy_cache_key $proxy_host$uri; > } > > or to account for query string parameters ONE and TWO: > > location /swf/ { > ... > proxy_cache ... > proxy_cache_valid 5m; > proxy_cache_key $proxy_host$uri?$arg_ONE&$arg_TWO; > } > > From mdounin at mdounin.ru Fri Oct 5 15:53:02 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Oct 2012 19:53:02 +0400 Subject: webdav support In-Reply-To: <506EF4E6.6050205@comcast.net> References: <506EF4E6.6050205@comcast.net> Message-ID: <20121005155302.GU40452@mdounin.ru> Hello! On Fri, Oct 05, 2012 at 10:55:34AM -0400, AJ Weber wrote: > I will be setting up a "server" for webdav access. Basically for > SSL session caching only, not much else but that and proxy back to > the actual app server. > > Do I need the webdav and the ngx-dav-ext-module to support all the > WEBDAV http methods just to pass-thru? Or are those modules just > needed if I want to filter and/or parse the http traffic? These modules are only needed if you want nginx to handle webdav requests by itself. You don't need them to proxy requests to an upstream server. -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Fri Oct 5 19:53:48 2012 From: nginx-forum at nginx.us (tatsumi) Date: Fri, 05 Oct 2012 15:53:48 -0400 Subject: deamon off option is not recognized Message-ID: hello, I wrote nginx.conf first line. deamon off; but demon off option is not recognized why? 
my nginx version: nginx/1.2.1 please help me # /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf nginx: [emerg] unknown directive "deamon" in /usr/local/nginx/conf/nginx.conf:1 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231437,231437#msg-231437 From edho at myconan.net Fri Oct 5 19:54:51 2012 From: edho at myconan.net (Edho Arief) Date: Sat, 6 Oct 2012 02:54:51 +0700 Subject: deamon off option is not recognized In-Reply-To: References: Message-ID: On Sat, Oct 6, 2012 at 2:53 AM, tatsumi wrote: > hello, > > I wrote nginx.conf first line. > > deamon off; > > but demon off option is not recognized why? > my nginx version: nginx/1.2.1 > > please help me > See closer, you wrote "deamon" while it should be "daemon". From nginx-forum at nginx.us Sat Oct 6 05:53:54 2012 From: nginx-forum at nginx.us (iceman) Date: Sat, 06 Oct 2012 01:53:54 -0400 Subject: imap_client_buffer not being obeyed Message-ID: <960cadc7de0cdd2f8b02bb42fe6c52d1.NginxMailingListEnglish@forum.nginx.org> I have imap_client_buffer set to 64k(as required by imap protocol) in the nginx.conf file However when an imap client sends a very long command, post authentication, the length gets truncated at 4k(the default page size of the linux operating system) how can i debug this problem? i have stepped through the code using gdb. 
as far as i can see, in mail_proxy module, the value of the conf file(120000 for testing) was correctly seen gdb) p p->upstream.connection->data $24 = (void *) 0x9e4bd48 (gdb) p s $25 = (ngx_mail_session_t *) 0x9e4bd48 (gdb) p p->buffer->end Cannot access memory at address 0x1c (gdb) p s->buffer->end - s->buffer->last $26 = 120000 (gdb) p s $27 = (ngx_mail_session_t *) 0x9e4bd48 (gdb) n 205 pcf = ngx_mail_get_module_srv_conf(s, ngx_mail_proxy_module); (gdb) n 207 s->proxy->buffer = ngx_create_temp_buf(s->connection->pool, (gdb) p pcf $28 = (ngx_mail_proxy_conf_t *) 0x9e3c480 (gdb) p *pcf $29 = {enable = 1, pass_error_message = 1, xclient = 1, buffer_size = 120000, timeout = 86400000, upstream_quick_retries = 0, upstream_retries = 0, min_initial_retry_delay = 3000, max_initial_retry_delay = 3000, max_retry_delay = 60000} When the command below is sent using telnet, only 4 k of data is accepted, then nginx hangs until i hit enter on keyboard......after which the truncated command is sent to the upstream imap server. I am using nginx 0.78, is this a known issue? 
This is the command sent HP1L UID FETCH 344990,344996,344998,345004,345006,345010:345011,345015,345020,345043,345046,345049:345050,345053,345057:345059,345071,345080,345083,345085,345090,345092:345093,345096,345101:345102,345106,345112,345117,345136,345140,345142:345144,345146:345147,345150,345161,345163,345167,345174:345176,345195,345197,345203,345205,345207:345209,345214,345218,345221,345224,345229,345231,345233,345236,345239,345248,345264,345267,345272,345285,345290,345292,345301,345305,345308,345316,345320,345322,345324,345327,345358,345375,345384,345386,345391,345409,345427:345428,345430:345432,345434:345435,345437,345439,345443,345448,345450,345463,345468:345470,345492,345494,345499,345501,345504,345506,345515:345519,345522,345525,345535,345563,345568,345574,345577,345580,345582,345599,345622,345626,345630,345632:345633,345637,345640,345647:345648,345675,345684,345686:345687,345703:345704,345714:345717,345720,345722:345724,345726,345730,345734:345737,345749,345756,345759,345783,345785:345787,345790,345806:345807,345812,345816,345720,345722:345724,345726,345730,345734:345737,345749,345756,345759,345783,345785:345787,345790,345806:345807,345812,345817,345902,345919,345953,345978,345981,345984,345990,345997,346004,346008:346009,346011:346012,346022,346039,346044,346050,346061:346062,346066:346067,346075:346076,346081,346088,346090,346093,346096,346098:346099,346110,346140,346170,346172:346174,346187,346189,346193:346194,346197,346204,346212,346225,346241,346244,346259,346323,346325:346328,346331:346332,346337:346338,346342,346346,346353,346361:346362,346364,346420,346426,346432,346447,346450:346451,346453:346454,346456:346457,346459:346460,346466:346468,346474,346476,346479,346483,346496,346498:346501,346504,346530,346539,346546,346576,346589:346590,346594:346595,346601,346607:346609,346612,346614:346615,346617:346618,346625,346631,346638,346641,346654,346657,346661,346665,346671,346687:346688,346693,346695,346734:346735,346741,346747:346748,346755,346757,346777,
346779,346786:346788,346791,346793,346801,346815,346821:346822,346825,346828,346837,346839,346843,346848,346857:346858,346860,346862:346863,346866,346868:346869,346877,346883,346895:346897,346899,346923,346927,346945,346948,346961,346964:346966,346968,346970,346974,346987,346989:346990,346992,347000,347003,347008:347011,347013,347021,347028,347032:347034,347036,347049,347051,347058,347076,347079,347081,347083,347085,347092,347096,347108,347130,347145:347148,347150,347155:347158,347161,347163:347164,347181,347184,347187:347189,347204,347210:347211,347215,347217:347220,347227:347228,347234,347244,347246,347251,347253,347263:347264,347266,347268,347275,347292,347294,347304,347308,347317:347320,347322,347325:347327,347340:347341,347346,347352:347353,347357,347360:347361,347375,347379,347382:347386,347389,347392,347402,347405:347406,347411,347433:347434,347438,347440:347441,347443:347444,347448,347459:347460,347465,347468:347469,347476:347479,347490,347497,347506,347526,347530,347545,347547,347555:347556,347601:347605,347632,347634,347641,347643:347646,347649,347653,347660,347668,347676,347707,347719,347722,347724,347727:347732,347735,347746,347754,347756:347757,347761,347776,347779,347791,347798,347800,347805,347816:347817,347822,347837,347841,347843,347846,347848,347851,347879,347885,347892:347894,347903,347907:347911,347915:347916,347918,347950,347952:347953,347981,347986:347988,348001,348037:348038,348049,348052,348056:348058,348061,348072,348074,348077:348078,348080,348082,348100,348105,348109,348111:348116,348119:348123,348131:348132,348138,348150:348151,348153,348157,348161:348163,348166,348168:348169,348171,348173,348176,348178,348180:348181,348201,348204,348208,348218:348219,348222,348226,348229:348230,348235,348238:348240,348244:348247,348249,348251:348253,348256:348257,348263,348285,348288:348289,348293,348298:348299,348301:348302,348305:348306,348310,348327,348332:348337,348340,348342,348344,348348,348351:348353,348356:348357,348360,348366,348377,348386,34839
0,348398,348400:348401,348406:348407,348419,348422,348424,348427:348428,348430,348432:348433,348439,348444,348447:348448,348450:348451,348454,348456,348459:348460,348473,348493,348497:348498,348504,348506,348508,348516,348520,348527,348530,348532,348546:348547,348551,348560:348563,348567,348570:348572,348574,348577,348581,348588,348595,348610,348632,348636,348642,348646,348667,348672:348673,348679,348703,348713:348714,348716,348718:348722,348728:348729,348731,348735,348743:348745,348749,348751:348752,348759,348768,348773,348780:348781,348784:348791 (UID FLAGS) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231439,231439#msg-231439 From nginx-forum at nginx.us Sat Oct 6 05:55:35 2012 From: nginx-forum at nginx.us (iceman) Date: Sat, 06 Oct 2012 01:55:35 -0400 Subject: imap_client_buffer not being obeyed In-Reply-To: <960cadc7de0cdd2f8b02bb42fe6c52d1.NginxMailingListEnglish@forum.nginx.org> References: <960cadc7de0cdd2f8b02bb42fe6c52d1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0e73c842c9308d518e6940443108d8fb.NginxMailingListEnglish@forum.nginx.org>
HP1L UID FETCH
344990,344996,344998,345004,345006,345010:345011,345015,345020,345043,345046,345049:345050,345053,345057:345059,345071,345080,345083,345085,345090,345092:345093,345096,345101:345102,345106,345112,345117,345136,345140,345142:345144,345146:345147,345150,345161,345163,345167,345174:345176,345195,345197,345203,345205,345207:345209,345214,345218,345221,345224,345229,345231,345233,345236,345239,345248,345264,345267,345272,345285,345290,345292,345301,345305,345308,345316,345320,345322,345324,345327,345358,345375,345384,345386,345391,345409,345427:345428,345430:345432,345434:345435,345437,345439,345443,345448,345450,345463,345468:345470,345492,345494,345499,345501,345504,345506,345515:345519,345522,345525,345535,345563,345568,345574,345577,345580,345582,345599,345622,345626,345630,345632:345633,345637,345640,345647:345648,345675,345684,345686:345687,345703:345704,345714:345717,345720,345722:345724,345726,345730,345734:345737,345749,345756,345759,345783,345785:345787,345790,345806:345807,345812,345816,345720,345722:345724,345726,345730,345734:345737,345749,345756,345759,345783,345785:345787,345790,345806:345807,345812,345817,345902,345919,345953,345978,345981,345984,345990,345997,346004,346008:346009,346011:346012,346022,346039,346044,346050,346061:346062,346066:346067,346075:346076,346081,346088,346090,346093,346096,346098:346099,346110,346140,346170,346172:346174,346187,346189,346193:346194,346197,346204,346212,346225,346241,346244,346259,346323,346325:346328,346331:346332,346337:346338,346342,346346,346353,346361:346362,346364,346420,346426,346432,346447,346450:346451,346453:346454,346456:346457,346459:346460,346466:346468,346474,346476,346479,346483,346496,346498:346501,346504,346530,346539,346546,346576,346589:346590,346594:346595,346601,346607:346609,346612,346614:346615,346617:346618,346625,346631,346638,346641,346654,346657,346661,346665,346671,346687:346688,346693,346695,346734:346735,346741,346747:346748,346755,346757,346777,346779,346786:346788,346791,346793,34680
1,346815,346821:346822,346825,346828,346837,346839,346843,346848,346857:346858,346860,346862:346863,346866,346868:346869,346877,346883,346895:346897,346899,346923,346927,346945,346948,346961,346964:346966,346968,346970,346974,346987,346989:346990,346992,347000,347003,347008:347011,347013,347021,347028,347032:347034,347036,347049,347051,347058,347076,347079,347081,347083,347085,347092,347096,347108,347130,347145:347148,347150,347155:347158,347161,347163:347164,347181,347184,347187:347189,347204,347210:347211,347215,347217:347220,347227:347228,347234,347244,347246,347251,347253,347263:347264,347266,347268,347275,347292,347294,347304,347308,347317:347320,347322,347325:347327,347340:347341,347346,347352:347353,347357,347360:347361,347375,347379,347382:347386,347389,347392,347402,347405:347406,347411,347433:347434,347438,347440:347441,347443:347444,347448,347459:347460,347465,347468:347469,347476:347479,347490,347497,347506,347526,347530,347545,347547,347555:347556,347601:347605,347632,347634,347641,347643:347646,347649,347653,347660,347668,347676,347707,347719,347722,347724,347727:347732,347735,347746,347754,347756:347757,347761,347776,347779,347791,347798,347800,347805,347816:347817,347822,347837,347841,347843,347846,347848,347851,347879,347885,347892:347894,347903,347907:347911,347915:347916,347918,347950,347952:347953,347981,347986:347988,348001,348037:348038,348049,348052,348056:348058,348061,348072,348074,348077:348078,348080,348082,348100,348105,348109,348111:348116,348119:348123,348131:348132,348138,348150:348151,348153,348157,348161:348163,348166,348168:348169,348171,348173,348176,348178,348180:348181,348201,348204,348208,348218:348219,348222,348226,348229:348230,348235,348238:348240,348244:348247,348249,348251:348253,348256:348257,348263,348285,348288:348289,348293,348298:348299,348301:348302,348305:348306,348310,348327,348332:348337,348340,348342,348344,348348,348351:348353,348356:348357,348360,348366,348377,348386,348390,348398,348400:348401,348406:348407,348
419,348422,348424,348427:348428,348430,348432:348433,348439,348444,348447:348448,348450:348451,348454,348456,348459:348460,348473,348493,348497:348498,348504,348506,348508,348516,348520,348527,348530,348532,348546:348547,348551,348560:348563,348567,348570:348572,348574,348577,348581,348588,348595,348610,348632,348636,348642,348646,348667,348672:348673,348679,348703,348713:348714,348716,348718:348722,348728:348729,348731,348735,348743:348745,348749,348751:348752,348759,348768,348773,348780:348781,348784:348791
(UID FLAGS) 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231439,231440#msg-231440 From nginx-forum at nginx.us Sat Oct 6 05:57:12 2012 From: nginx-forum at nginx.us (iceman) Date: Sat, 06 Oct 2012 01:57:12 -0400 Subject: imap_client_buffer not being obeyed In-Reply-To: <0e73c842c9308d518e6940443108d8fb.NginxMailingListEnglish@forum.nginx.org> References: <960cadc7de0cdd2f8b02bb42fe6c52d1.NginxMailingListEnglish@forum.nginx.org> <0e73c842c9308d518e6940443108d8fb.NginxMailingListEnglish@forum.nginx.org> Message-ID: the entire command does not show up on the webpage, copy and paste the snippet in an editor Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231439,231441#msg-231441 From nginx-forum at nginx.us Sat Oct 6 10:35:34 2012 From: nginx-forum at nginx.us (tatsumi) Date: Sat, 06 Oct 2012 06:35:34 -0400 Subject: deamon off option is not recognized In-Reply-To: References: Message-ID: <48313e204c0b9a449e8dd8655e951e6e.NginxMailingListEnglish@forum.nginx.org> OH!! very thanks!!! I can do it.. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231437,231442#msg-231442 From agentzh at gmail.com Sat Oct 6 19:28:23 2012 From: agentzh at gmail.com (agentzh) Date: Sat, 6 Oct 2012 12:28:23 -0700 Subject: [ANN] ngx_openresty devel version 1.2.3.7 released In-Reply-To: References: Message-ID: Hello, folks! I am happy to announce the new development version of ngx_openresty, 1.2.3.7: http://openresty.org/#Download Special thanks go to all our contributors and users for helping make this happen! Below is the complete change log for this release, as compared to the last (development) release, 1.2.3.5: * upgraded LuaNginxModule to 0.6.10. * feature: now ngx.req.get_headers() returns a Lua table with keys in the all-lower-case form by default. thanks James Hurst and Matthieu Tourne for the feature request. 
* feature: now ngx.req.get_headers() adds an "__index" metamethod to the resulting Lua table by default, which will automatically normalize the lookup key by converting upper-case letters and underscores in case of a lookup miss. thanks James Hurst and Matthieu Tourne for suggesting this feature. * feature: now ngx.req.get_headers() accepts a second (optional) argument, "raw", for controlling whether to return the original form of the header names (that is, the original letter-case). * feature: added public C API functions "ngx_http_shared_dict_get" and "ngx_http_lua_find_zone" to allow other Nginx C modules or a patched Nginx core to directly access the shared memory dictionaries created by LuaNginxModule. thanks Piotr Sikora for requesting this feature. * bugfix: fixed a compilation warning in the TCP/stream cosocket codebase when using (at least) gcc 3.4.6 for MIPS. thanks Dirk Feytons for reporting this as GitHub issue #162. The HTML version of the change log with some helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1002003 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster and ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Have fun! 
-agentzh From nginx-forum at nginx.us Sun Oct 7 13:42:37 2012 From: nginx-forum at nginx.us (Piki) Date: Sun, 07 Oct 2012 09:42:37 -0400 Subject: nginx erroneously redirecting to https In-Reply-To: References: Message-ID: <0361fe28d5e279a54c76dc04e4608c83.NginxMailingListEnglish@forum.nginx.org> Piki Wrote: ------------------------------------------------------- > I am running nginx+php-fpm on localhost for testing some sites > before uploading to a live server (which I have done with nginx in the > past), and have run into the first issue which Google can't seem to > help me with: > > I have decided to try three different forum softwares. They are > installed under separate subdirectories within my web root (e.g. > /srv/www/localhost/html/{forum1,forum2,forum3}). On two of the forums, > however, whenever I try to use anything that requires a password, it > switches from http to https, and on one of the forums, attempts to > continue using port 80. On the forum that it attempts to use port 80, > I get the following error message: > > ----------code snippet---------- > Secure Connection Failed > > An error occurred during a connection to localhost:80. > > SSL received a record that exceeded the maximum permissible length. > > (Error code: ssl_error_rx_record_too_long) > > The page you are trying to view cannot be shown because the > authenticity of the received data could not be verified. > Please contact the website owners to inform them of this problem. > Alternatively, use the command found in the help menu to report this > broken site. > ----------end code snippet---------- > > I do not have https or ssl configured within nginx or the vhost, nor > any ssl certs. 
> > Here is the vhost config file: > ----------code snippet---------- > server { > server_name localhost; > listen 80; > root /srv/www/localhost/html; > index index.php index.html index.htm; > ssi on; > > location ~ \.php$ { > include /etc/nginx/fastcgi_params; > fastcgi_index index.php; > fastcgi_pass 127.0.0.1:9000; > } > } > ----------end code snippet---------- This issue seems to have fixed itself after I added the "ssl off;" directive (without quotes) to the above server block. After I restarted nginx, I cleared out and reinstalled the affected forum software, and the issue hasn't reappeared yet, not even after clearing my browser cache, rebooting the computer, then revisiting the forums. I thought "ssl off;" was supposed to be the default if the ssl setting isn't specified? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231379,231455#msg-231455 From nginx-forum at nginx.us Mon Oct 8 02:30:37 2012 From: nginx-forum at nginx.us (mlybrand) Date: Sun, 07 Oct 2012 22:30:37 -0400 Subject: Redirect loop problem with SSL Message-ID: <71bcebc4c8f22c0284cc2e8550fa1295.NginxMailingListEnglish@forum.nginx.org> Hello, I am having a problem understanding exactly how to configure my nginx server to use SSL with Linode, RoR, Unicorn and spree ecommerce. This post will be kind of long, but this is just so you can see all the steps I have taken thus far. The first section will include all the steps I have taken to set up my linode instance. In this setup, the site displays, but I get the Redirect Loop when I try to log in. Someone suggested that I needed to set up separate server blocks for port 80 and port 443. After I did this (the nginx.conf for that set up will be at the end), every page gives me the Redirect loop. So, I clearly do not understand what I am doing and need a little nudge. Please help me. --- error message --- This webpage has a redirect loop The webpage at https://50.116.18.21/login has resulted in too many redirects. 
Clearing your cookies for this site or allowing third-party cookies may fix the problem. If not, it is possibly a server configuration issue and not a problem with your computer. --- end error message --- --- original steps --- 1. create server Ubuntu 10.04 LTS (with defaults) 2. boot server 3. ssh root at 50.116.18.21 4. apt-get -y update 5. apt-get -y install curl git-core python-software-properties 6. add-apt-repository ppa:nginx/stable 7. apt-get -y update 8. apt-get -y install nginx /*** Can I do the SSL stuff here??? ***/ #. mkdir /srv/ssl/ #. cd /srv/ssl #. openssl req -new -x509 -days 365 -nodes -out /srv/ssl/nginx.pem -keyout /srv/ssl/nginx.key 9. service nginx start 10. apt-get -y install imagemagick 11. add-apt-repository ppa:pitti/postgresql 12. apt-get -y update 13. apt-get -y install postgresql libpq-dev 14. sudo -u postgres psql 15. \password 16. create user mayorio with password 'secret'; 17. create database mayorio_production owner mayorio; 18. \q 19. apt-get -y install telnet postfix 20. add-apt-repository ppa:chris-lea/node.js 21. apt-get -y update 22. apt-get -y install nodejs 23. apt-get -y install libxslt-dev libxml2-dev 24. adduser deployer --ingroup admin 25. su deployer 26. cd 27. curl -L https://raw.github.com/fesplugas/rbenv-installer/master/bin/rbenv-installer | bash 28. vim ~/.bashrc # add rbenv to the top 29. put the following in the top: --- begin snippet --- export RBENV_ROOT="${HOME}/.rbenv" if [ -d "${RBENV_ROOT}" ]; then export PATH="${RBENV_ROOT}/bin:${PATH}" eval "$(rbenv init -)" fi --- end snippet --- 30. . ~/.bashrc 31. rbenv bootstrap-ubuntu-10-04 32. rbenv install 1.9.3-p125 33. rbenv global 1.9.3-p125 34. gem install bundler --no-ri --no-rdoc 35. rbenv rehash 36. ssh git at github.com 37. switch back to local box 38. rails new mayorio -d postgresql 39. cd mayorio *** 40 and 41 no longer necessary. they must have fixed it :) 40. 
edit Gemfile to add "gem 'cocaine', :git => 'git://github.com/thoughtbot/cocaine.git'" (make sure to do to newlines after for spree) 41. bundle install *** see above *** 42. sudo -u postgres psql 43. create user mayorio with password 'secret'; 44. create database mayorio_development owner mayorio; 45. \q 46. edit config/database.yml to add password 47. spree install (accept all defaults) 48. bundle exec rake assets:precompile:nondigest *** consider adding public/assets to .gitignore *** 49. edit .gitignore and add "/config/database.yml" 50. cp config/database.yml config/database.example.yml 51. git init 52. git add . 53. git commit -m "initial commit" 54. create github repo "mayorio" 55. git remote add origin git at github.com:mlybrand/mayorio.git 56. git push origin master 57. edit Gemfile to uncomment unicorn and capistrano lines 58. bundle 59. capify . 60. edit Capfile to uncomment load assets line 61. edit config/deploy.rb to be the following: --- begin code snippet --- require "bundler/capistrano" server "50.116.18.21", :web, :app, :db, primary: true set :application, "mayorio" set :user, "deployer" set :deploy_to, "/home/#{user}/apps/#{application}" set :deploy_via, :remote_cache set :use_sudo, false set :scm, "git" set :repository, "git at github.com:mlybrand/#{application}.git" set :branch, "master" default_run_options[:pty] = true ssh_options[:forward_agent] = true after "deploy", "deploy:cleanup" # keep only the last 5 releases namespace :deploy do %w[start stop restart].each do |command| desc "#{command} unicorn server" task command, roles: :app, except: {no_release: true} do run "/etc/init.d/unicorn_#{application} #{command}" end end task :setup_config, roles: :app do sudo "ln -nfs #{current_path}/config/nginx.conf /etc/nginx/sites-enabled/#{application}" sudo "ln -nfs #{current_path}/config/unicorn_init.sh /etc/init.d/unicorn_#{application}" run "mkdir -p #{shared_path}/config" put File.read("config/database.example.yml"), 
"#{shared_path}/config/database.yml" puts "Now edit the config files in #{shared_path}." end after "deploy:setup", "deploy:setup_config" task :symlink_config, roles: :app do run "ln -nfs #{shared_path}/config/database.yml #{release_path}/config/database.yml" end after "deploy:finalize_update", "deploy:symlink_config" desc "Make sure local git is in sync with remote." task :check_revision, roles: :web do unless `git rev-parse HEAD` == `git rev-parse origin/master` puts "WARNING: HEAD is not the same as origin/master" puts "Run `git push` to sync changes." exit end end before "deploy", "deploy:check_revision" end --- end code snippet --- 62. edit config/nginx.conf --- begin code snippet --- upstream unicorn { server unix:/tmp/unicorn.mayorio.sock fail_timeout=0; } server { listen 80; # default deferred # server_name example.com listen 443 ssl; ssl_certificate /srv/ssl/nginx.pem; ssl_certificate_key /srv/ssl/nginx.key; # server_name example.com; root /home/deployer/apps/mayorio/current/public; location ^~ /assets/ { gzip_static on; expires max; add_header Cache-Control public; } try_files $uri/index.html $uri @unicorn; location @unicorn { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://unicorn; } error_page 500 502 503 504 /500.html; client_max_body_size 4G; keepalive_timeout 10; } --- end code snippet --- 63. edit config/unicorn.rb --- begin code snippet --- root = "/home/deployer/apps/mayorio/current" working_directory root pid "#{root}/tmp/pids/unicorn.pid" stderr_path "#{root}/log/unicorn.log" stdout_path "#{root}/log/unicorn.log" listen "/tmp/unicorn.mayorio.sock" worker_processes 2 timeout 30 --- end code snippet --- 64. 
edit config/unicorn_init.sh --- begin code snippet --- #!/bin/sh ### BEGIN INIT INFO # Provides: unicorn # Required-Start: $remote_fs $syslog # Required-Stop: $remote_fs $syslog # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Manage unicorn server # Description: Start, stop, restart unicorn server for a specific application. ### END INIT INFO set -e # Feel free to change any of the following variables for your app: TIMEOUT=${TIMEOUT-60} APP_ROOT=/home/deployer/apps/mayorio/current PID=$APP_ROOT/tmp/pids/unicorn.pid CMD="cd $APP_ROOT; bundle exec unicorn -D -c $APP_ROOT/config/unicorn.rb -E production" AS_USER=deployer set -u OLD_PIN="$PID.oldbin" sig () { test -s "$PID" && kill -$1 `cat $PID` } oldsig () { test -s $OLD_PIN && kill -$1 `cat $OLD_PIN` } run () { if [ "$(id -un)" = "$AS_USER" ]; then eval $1 else su -c "$1" - $AS_USER fi } case "$1" in start) sig 0 && echo >&2 "Already running" && exit 0 run "$CMD" ;; stop) sig QUIT && exit 0 echo >&2 "Not running" ;; force-stop) sig TERM && exit 0 echo >&2 "Not running" ;; restart|reload) sig HUP && echo reloaded OK && exit 0 echo >&2 "Couldn't reload, starting '$CMD' instead" run "$CMD" ;; upgrade) if sig USR2 && sleep 2 && sig 0 && oldsig QUIT then n=$TIMEOUT while test -s $OLD_PIN && test $n -ge 0 do printf '.' && sleep 1 && n=$(( $n - 1 )) done echo if test $n -lt 0 && test -s $OLD_PIN then echo >&2 "$OLD_PIN still exists after $TIMEOUT seconds" exit 1 fi exit 0 fi echo >&2 "Couldn't upgrade, starting '$CMD' instead" run "$CMD" ;; reopen-logs) sig USR1 ;; *) echo >&2 "Usage: $0 " exit 1 ;; esac --- end code snippet --- 65. chmod +x config/unicorn_init.sh 66. git add . 67. git commit -m "deployment configs" 68. git push 69. cap deploy:setup 70. ssh deployer at 50.116.18.21 71. cd apps/mayorio/shared/config 72. vim database.yml 73. remove everything except production 74. add password for production 75. add "host: localhost" 76. exit 77. cap deploy:cold 78. ssh deployer at 50.116.18.21 79. 
sudo rm /etc/nginx/sites-enabled/default 80. sudo service nginx restart 81. sudo update-rc.d unicorn_mayorio defaults --- end original steps --- --- final nginx.conf used --- upstream unicorn { server unix:/tmp/unicorn.mayorio.sock fail_timeout=0; } server { listen 80; # default deferred # server_name example.com rewrite ^(.*) https://$host$1 permanent; location ~ \.(php|html)$ { deny all; } access_log /dev/null; error_log /dev/null; } server { ssl on; # server_name example.com listen 443 ssl; ssl_certificate /srv/ssl/nginx.pem; ssl_certificate_key /srv/ssl/nginx.key; # server_name example.com; root /home/deployer/apps/mayorio/current/public; location ^~ /assets/ { gzip_static on; expires max; add_header Cache-Control public; } try_files $uri/index.html $uri @unicorn; location @unicorn { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://unicorn; } error_page 500 502 503 504 /500.html; client_max_body_size 4G; keepalive_timeout 10; } --- end final nginx.conf --- Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231461,231461#msg-231461 From nginx-forum at nginx.us Mon Oct 8 05:41:34 2012 From: nginx-forum at nginx.us (iceman) Date: Mon, 08 Oct 2012 01:41:34 -0400 Subject: imap_client_buffer not being obeyed In-Reply-To: <960cadc7de0cdd2f8b02bb42fe6c52d1.NginxMailingListEnglish@forum.nginx.org> References: <960cadc7de0cdd2f8b02bb42fe6c52d1.NginxMailingListEnglish@forum.nginx.org> Message-ID: guys, any ideas? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231439,231463#msg-231463 From roedie at roedie.nl Mon Oct 8 08:07:59 2012 From: roedie at roedie.nl (Sander Klein) Date: Mon, 08 Oct 2012 10:07:59 +0200 Subject: joomla and zend In-Reply-To: <429260f75b22cc070ff87c0e68d72773@roedie.nl> References: <429260f75b22cc070ff87c0e68d72773@roedie.nl> Message-ID: Just one more time,... nobody who can help me with the following? 
On 01.10.2012 09:29, Sander Klein wrote: > Hi, > > I'm trying to figure out how use a zend application within a joomla > website which is not in the document root of a website. > > The website is located in /www/customer/some.name and the zend > application is located in /usr/local/share/web-app (actually > /usr/local/share/web-app/public). Now what I want is the web-app to > be > accessible using http://some.name/web-app/. I figures I should use > something like: > > location ~ ^/web-app/ { > alias /usr/local/share/web-app/public; > } > > But I cannot get this working. I tried using root instead of alias > within the location tags but that didn't seem to help. Can anyone > give > me a push in the right direction? The config I use for the joomla > site > is below. > > server { > listen 80; > listen [::]:80; > > server_name some.name; > root /www/customer/some.name; > > access_log /var/log/nginx/access_some.log pic buffer=32k; > error_log /var/log/nginx/error_some.log; > > location / { > try_files $uri $uri/ /index.php?$query_string; > } > > location ~ \.php { > include /etc/nginx/includes.d/php-fpm_www; > include /etc/nginx/includes.d/fastcgi_params; > fastcgi_param PHP_VALUE "include_path=."; > } > } > > greets, > > Sander > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From edho at myconan.net Mon Oct 8 08:15:16 2012 From: edho at myconan.net (Edho Arief) Date: Mon, 8 Oct 2012 15:15:16 +0700 Subject: joomla and zend In-Reply-To: References: <429260f75b22cc070ff87c0e68d72773@roedie.nl> Message-ID: On Mon, Oct 8, 2012 at 3:07 PM, Sander Klein wrote: > Just one more time,... nobody who can help me with the following? > I think you're not clear enough what problem you encountered. Additionally, using alias is painful. 
I'm not familiar enough with Zend but I guess something like this: location ^~ /web-app/ { alias /usr/local/share/web-app/public/; location ~ \.php$ { } } > On 01.10.2012 09:29, Sander Klein wrote: >> >> Hi, >> >> I'm trying to figure out how use a zend application within a joomla >> website which is not in the document root of a website. >> >> The website is located in /www/customer/some.name and the zend >> application is located in /usr/local/share/web-app (actually >> /usr/local/share/web-app/public). Now what I want is the web-app to be >> accessible using http://some.name/web-app/. I figures I should use >> something like: >> >> location ~ ^/web-app/ { >> alias /usr/local/share/web-app/public; >> } >> >> But I cannot get this working. I tried using root instead of alias >> within the location tags but that didn't seem to help. Can anyone give >> me a push in the right direction? The config I use for the joomla site >> is below. >> >> server { >> listen 80; >> listen [::]:80; >> >> server_name some.name; >> root /www/customer/some.name; >> >> access_log /var/log/nginx/access_some.log pic buffer=32k; >> error_log /var/log/nginx/error_some.log; >> >> location / { >> try_files $uri $uri/ /index.php?$query_string; >> } >> >> location ~ \.php { >> include /etc/nginx/includes.d/php-fpm_www; >> include /etc/nginx/includes.d/fastcgi_params; >> fastcgi_param PHP_VALUE "include_path=."; >> } >> } >> >> greets, >> >> Sander >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From roedie at roedie.nl Mon Oct 8 08:49:27 2012 From: roedie at roedie.nl (Sander Klein) Date: Mon, 08 Oct 2012 10:49:27 +0200 Subject: joomla and zend In-Reply-To: References: <429260f75b22cc070ff87c0e68d72773@roedie.nl> Message-ID: 
<0592338026dcad627ddcfad7c5c9cebd@roedie.nl>

On 08.10.2012 10:15, Edho Arief wrote:
> On Mon, Oct 8, 2012 at 3:07 PM, Sander Klein
> wrote:
>> Just one more time,... nobody who can help me with the following?
>>
>
> I think you're not clear enough what problem you encountered.
> Additionally, using alias is painful.
>
> I'm not familiar enough with Zend but I guess something like this:
>
> location ^~ /web-app/ {
> alias /usr/local/share/web-app/public/;
>
> location ~ \.php$ {
>
> }
> }

Thanks for your answer. I'm indeed not sure how to explain it clearly. I do have an apache config which 'just works' (tm). Would that help, so somebody could translate that config over to an nginx conf?

Sander

From roedie at roedie.nl Mon Oct 8 09:06:50 2012
From: roedie at roedie.nl (Sander Klein)
Date: Mon, 08 Oct 2012 11:06:50 +0200
Subject: joomla and zend
In-Reply-To: <0592338026dcad627ddcfad7c5c9cebd@roedie.nl>
References: <429260f75b22cc070ff87c0e68d72773@roedie.nl> <0592338026dcad627ddcfad7c5c9cebd@roedie.nl>
Message-ID:

Hi,

On 08.10.2012 10:49, Sander Klein wrote:
> On 08.10.2012 10:15, Edho Arief wrote:
>> On Mon, Oct 8, 2012 at 3:07 PM, Sander Klein
>> wrote:
>>> Just one more time,... nobody who can help me with the following?
>>>
>>
>> I think you're not clear enough what problem you encountered.
>> Additionally, using alias is painful.
>>
>> I'm not familiar enough with Zend but I guess something like this:
>>
>> location ^~ /web-app/ {
>> alias /usr/local/share/web-app/public/;
>>
>> location ~ \.php$ {
>>
>> }
>> }
>

The apache config I use is:

ServerName some.website.xx
DocumentRoot /some/path/to/site
Alias /webapp /other/path/to/webapp/public
SetEnv APPLICATION_ENV env_for_webapp

In the document root there is a joomla website which I got working with the config used here: http://docs.joomla.org/Nginx

Now I only need to get the webapp working, which is a zend application.
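For what it's worth, the Apache Alias above might translate roughly like the following nginx block — an untested sketch building on Edho's nested-location suggestion. The include file names are taken from the Joomla vhost earlier in the thread (and are assumed to contain the fastcgi_pass), and the front-controller rewrite is an assumption about how the Zend app routes requests:

```nginx
# Untested sketch: expose the Zend app under /webapp/ via alias.
location ^~ /webapp/ {
    alias /other/path/to/webapp/public/;
    index index.php;

    # Zend front controller: send requests for non-existent files to
    # index.php (try_files combined with alias was unreliable in nginx
    # builds of this era, hence the rewrite).
    if (!-e $request_filename) {
        rewrite ^ /webapp/index.php last;
    }

    location ~ \.php$ {
        include /etc/nginx/includes.d/php-fpm_www;
        include /etc/nginx/includes.d/fastcgi_params;
        # With alias, $request_filename already resolves to the aliased file.
        fastcgi_param SCRIPT_FILENAME $request_filename;
        # Rough equivalent of Apache's "SetEnv APPLICATION_ENV ..."
        fastcgi_param APPLICATION_ENV env_for_webapp;
    }
}
```

Because the prefix location uses `^~`, .php requests under /webapp/ are handled by the nested block and never fall through to the Joomla site's generic `location ~ \.php` handler.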
So, if you go http://some.website.xx/ you get the joomla site, but if you go to http://some.website.xx/webapp/ you need to get the webapp. I just don't know how to get that working. Sander From mdounin at mdounin.ru Mon Oct 8 10:57:31 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 8 Oct 2012 14:57:31 +0400 Subject: imap_client_buffer not being obeyed In-Reply-To: <960cadc7de0cdd2f8b02bb42fe6c52d1.NginxMailingListEnglish@forum.nginx.org> References: <960cadc7de0cdd2f8b02bb42fe6c52d1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121008105731.GB40452@mdounin.ru> Hello! On Sat, Oct 06, 2012 at 01:53:54AM -0400, iceman wrote: > I have imap_client_buffer set to 64k(as required by imap protocol) in the > nginx.conf file > However when an imap client sends a very long command, post authentication, > the length gets truncated at 4k(the default page size of the linux operating > system) > > how can i debug this problem? i have stepped through the code using gdb. as > far as i can see, in mail_proxy module, the value of the conf file(120000 > for testing) was correctly seen > > gdb) p p->upstream.connection->data > $24 = (void *) 0x9e4bd48 > (gdb) p s > $25 = (ngx_mail_session_t *) 0x9e4bd48 > (gdb) p p->buffer->end > Cannot access memory at address 0x1c > (gdb) p s->buffer->end - s->buffer->last > $26 = 120000 > (gdb) p s > $27 = (ngx_mail_session_t *) 0x9e4bd48 > (gdb) n > 205 pcf = ngx_mail_get_module_srv_conf(s, ngx_mail_proxy_module); > (gdb) n > 207 s->proxy->buffer = ngx_create_temp_buf(s->connection->pool, > (gdb) p pcf > $28 = (ngx_mail_proxy_conf_t *) 0x9e3c480 > (gdb) p *pcf > $29 = {enable = 1, pass_error_message = 1, xclient = 1, buffer_size = > 120000, > timeout = 86400000, upstream_quick_retries = 0, upstream_retries = 0, > min_initial_retry_delay = 3000, max_initial_retry_delay = 3000, > max_retry_delay = 60000} > > When the command below is sent using telnet, only 4 k of data is accepted, > then nginx hangs until i hit enter on 
keyboard......after which the truncated command is sent to the upstream imap server.
>
> I am using nginx 0.78, is this a known issue?

I would recommend the following:

a) Test with something better than telnet. While telnet is generally usable for testing, it might have issues with testing things like buffer sizes, as telnet usually has its own buffers. Moreover, telnet usually works in line-at-a-time mode, which means it is telnet that is responsible for handling the command until you hit enter.

b) If you still see the issue, upgrade to at least the latest stable version of nginx, 1.2.4, before further testing. There is no such version "0.78", but as long as it starts with "0." it's really old.

--
Maxim Dounin
http://nginx.com/support.html

From t.hipp at raumobil.de Mon Oct 8 15:00:08 2012
From: t.hipp at raumobil.de (Tobias Hipp)
Date: Mon, 08 Oct 2012 17:00:08 +0200
Subject: Tomcat behind Nginx resulting in jsessionids in url
Message-ID: <5072EA78.9030601@raumobil.de>

Hello everybody,

I'm trying to set up nginx as a reverse proxy and software load balancer. The backend is built from some Tomcat servers. Using apache as a reverse proxy everything goes well, but using nginx the following occurs:

1.) With cookies turned off in the browser, links and urls look like this:

http://subdomain.example.com/mobile/details%3Bjsessionid%3DF42AFDC7C01E7295B29B143ED6F0772F;jsessionid=F42AFDC7C01E7295B29B143ED6F0772F?someParams=someValues

Somehow, the jsessionid is there twice, but once it is URL-encoded. With cookies enabled, links and urls look like this:

http://subdomain.example.com/mobile/details;jsessionid=F42AFDC7C01E7295B29B143ED6F0772F?someParams=someValues

Why is this happening?

2.) Some modules of the site are shown at least twice, again only while using nginx as reverse proxy. How did I achieve that?
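As an isolation step (an assumption about where to look, not a diagnosis): the percent-encoded copy suggests the URI is being rebuilt somewhere between the client and Tomcat. Temporarily proxying without any rewriting shows whether the rewrite/capture chain is the culprit, since proxy_pass without a URI part forwards the client's request URI unchanged:

```nginx
# Debugging sketch: no rewrite, no captures, no URI part on proxy_pass,
# so the original request URI (including any ";jsessionid=..." path
# parameter) is forwarded to the upstream byte-for-byte.
location / {
    proxy_pass http://service1;
}
```

If the double jsessionid disappears with this minimal block, then the URI rebuilding in the full config is what encodes the semicolon.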
Here is my configuration:

nginx.conf:

user www-data www-data;
worker_processes 6;
worker_priority 0;
error_log /var/log/nginx/error.log;
#log_not_found off;
worker_cpu_affinity 000001 000010 000100 001000 010000 100000;

events {
    multi_accept off;
    worker_connections 6144; # May need adjusting to the hardware. Currently allows 6 * 6144 concurrent requests.
}

http {
    log_format test_debug '$remote_addr - $remote_user [$time_local] ' "$request " $status ' Arguments: $args';
    include /my/config/nginx/upstreams.conf;
    include mime.types;
    default_type application/octet-stream;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;

        location /nginx_status {
            stub_status on;
            access_log off;
            allow 127.0.0.1;
            deny all;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

    # include /etc/nginx/conf.d/*.conf;
    include /my/config/nginx/sites/*.conf;
}

proxy.conf:

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_max_temp_file_size 0;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
proxy_pass_header Set-Cookie;

upstreams.conf:

upstream service1 {
    ip_hash;
    server 192.168.1.117:8080 weight=1 max_fails=10 fail_timeout=60s;
    server 192.168.1.115:8080 weight=1 max_fails=10 fail_timeout=60s;
}

upstream service2 {
    ip_hash;
    server 192.168.1.117:7080 weight=1 max_fails=10 fail_timeout=60s;
    server 192.168.1.115:7080 weight=1 max_fails=10 fail_timeout=60s;
}

/my/config/nginx/sites/service1.conf:

server {
    error_log subdomain_error.log;
    access_log subdomain_access.log test_debug;
    rewrite_log on;
    include /my/config/nginx/proxy.conf;
    listen 80;
    server_name subdomain.example.com;

    # used to kick out the wrong jsessionid before passing the request, avoiding a 404
    rewrite "^/([^%]+)[^;]*([^?]+)?(\?{0,1}.*)" /$1$2$3;

    if ($http_cookie ~* "jsessionid=([^;]+)(?:;|$)") {
        set $co "jsessionid=$1";
    }

    location = / {
        proxy_set_header Cookie $co;
        proxy_pass http://service1/?param=value;
    }

    location ~* "^/(images|css|js|userimages)/(.*)$" {
        proxy_pass http://service1/$1/$2;
    }

    location ~* "^/(.*\.(jpg|gif|png))$" {
        proxy_pass http://service1/$1?;
    }

    location / {
        proxy_set_header Cookie $co;
        proxy_pass http://service1;
    }
}

server {
    error_log /var/log/nginx/subdomain_ssl_error.log;
    rewrite_log on;
    listen 443 ssl;
    server_name subdomain.example.com;
    ssl_certificate /raumo/config/nginx/zertifikate/subdomain.example.com.crt;
    ssl_certificate_key /raumo/config/nginx/zertifikate/subdomain.example.com.key;

    if ($args_param !~ "value") {
        set $args_param "value";
    }

    location = / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://service1;
    }

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://service1;
    }
}

server {
    error_log /var/log/nginx/subdomain_redir_error.log;
    listen 80;
    server_name example.com;
    location / {
        return 301 http://subdomain.example.com;
    }
}

server {
    listen 80;
    server_name www.subdomain.com;
    location / {
        return 301 http://subdomain.example.com;
    }
}

Would appreciate any help. Thank you all in advance

T.Hipp

From eddy at sdf.org Mon Oct 8 16:29:15 2012
From: eddy at sdf.org (eddy at sdf.org)
Date: Mon, 8 Oct 2012 16:29:15 -0000
Subject: Redirect loop problem with SSL
In-Reply-To: <71bcebc4c8f22c0284cc2e8550fa1295.NginxMailingListEnglish@forum.nginx.org>
References: <71bcebc4c8f22c0284cc2e8550fa1295.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <80d8d733ea161ab6b373d1c8f279378c.squirrel@wm.sdf.org>

hey mlybrand,

You can check the redirecting behaviour with curl.
Use:

$ curl -I -k http://example.org # get only the response header and don't validate the SSL cert

Also, maybe try "return 301 https://$host$request_uri;" over "rewrite ^(.*) https://$host$1 permanent;", and check again with curl.

Good luck and have a nice week!

~ed

From agentzh at gmail.com Mon Oct 8 22:19:55 2012
From: agentzh at gmail.com (agentzh)
Date: Mon, 8 Oct 2012 15:19:55 -0700
Subject: [ANN] ngx_openresty stable version 1.2.3.8 released
In-Reply-To:
References:
Message-ID:

Hello, folks!

I am happy to announce the new stable version of ngx_openresty, 1.2.3.8:

http://openresty.org/#Download

This release is exactly the same as the last development release, 1.2.3.7. This is the final release based on the Nginx 1.2.3 core. The next (development) release will be based on Nginx 1.2.4 (or later if one exists).

Also, I'll start the new 0.7.x release series for the ngx_lua module, which features the new "light threads" API developed on the git "thread" branch of the ngx_lua project.
You can see more on ngx_lua's "light threads" in the following post on the openresty-en mailing list: http://groups.google.com/group/openresty-en/browse_thread/thread/c14e27a459964056 You're still very welcome to comment on this new feature :) The following components are bundled with this release: * LuaJIT-2.0.0-beta10 * array-var-nginx-module-0.03rc1 * auth-request-nginx-module-0.2 * drizzle-nginx-module-0.1.4 * echo-nginx-module-0.41 * encrypted-session-nginx-module-0.02 * form-input-nginx-module-0.07rc5 * headers-more-nginx-module-0.18 * iconv-nginx-module-0.10rc7 * lua-5.1.5 * lua-cjson-1.0.3 * lua-rds-parser-0.05 * lua-redis-parser-0.10 * lua-resty-dns-0.08 * lua-resty-memcached-0.08 * lua-resty-mysql-0.10 * lua-resty-redis-0.14 * lua-resty-string-0.06 * lua-resty-upload-0.03 * memc-nginx-module-0.13rc3 * nginx-1.2.3 * ngx_coolkit-0.2rc1 * ngx_devel_kit-0.2.17 * ngx_lua-0.6.10 * ngx_postgres-1.0rc2 * rds-csv-nginx-module-0.05rc2 * rds-json-nginx-module-0.12rc10 * redis-nginx-module-0.3.6 * redis2-nginx-module-0.09 * set-misc-nginx-module-0.22rc8 * srcache-nginx-module-0.16 * xss-nginx-module-0.03rc9 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster and ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Enjoy! -agentzh From ianevans at digitalhit.com Tue Oct 9 01:49:27 2012 From: ianevans at digitalhit.com (Ian M. Evans) Date: Mon, 8 Oct 2012 21:49:27 -0400 Subject: Getting built in 404 instead of my custom one Message-ID: Just noticed that my custom 404 page isn't getting served in all circumstances and I'm getting the bare internal 404 at times. 
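For context, a typical shape for a PHP-generated custom error page looks something like the following — a sketch only, with the fastcgi_pass address and the exact fastcgi includes assumed rather than taken from the real config:

```nginx
# Sketch (assumptions: PHP-FPM listens on 127.0.0.1:9000, and the error
# page is itself a PHP script served at /dhe404.shtml).
error_page 404 /dhe404.shtml;

location = /dhe404.shtml {
    internal;  # reachable only via the error_page internal redirect
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
    # If the backend itself answers with an error for this script, nginx
    # falls back to its built-in page; recursive_error_pages and any
    # fastcgi_cache settings are worth checking in that case.
}
```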
Looking at the debug, it seems to be choosing my 404, but somehow not displaying it: 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http finalize request: 404, "/zeep.shtml?" a:1, c:1 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http special response: 404, "/zeep.shtml?" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 internal redirect: "/dhe404.shtml?" If it's redirecting to the dhe404.shtml file, why isn't it serving it? Could it be an issue with the fastcgi cache? (shtml files are actually php) From ianevans at digitalhit.com Tue Oct 9 02:16:33 2012 From: ianevans at digitalhit.com (Ian M. Evans) Date: Mon, 8 Oct 2012 22:16:33 -0400 Subject: Getting built in 404 instead of my custom one In-Reply-To: References: Message-ID: On Mon, October 8, 2012 9:49 pm, Ian M. Evans wrote: > Just noticed that my custom 404 page isn't getting served in all > circumstances and I'm getting the bare internal 404 at times. Looking at > the debug, it seems to be choosing my 404, but somehow not displaying it: here's a longer bit of the debug: 2012/10/08 21:09:34 [debug] 13419#0: *6056489 accept: 24.212.205.68 fd:5 2012/10/08 21:09:34 [debug] 13419#0: *6056489 event timer add: 5: 60000:1125303069 2012/10/08 21:09:34 [debug] 13419#0: *6056489 epoll add event: fd:5 op:1 ev:80000001 2012/10/08 21:09:34 [debug] 13419#0: *6056489 post event 0920CD1C 2012/10/08 21:09:34 [debug] 13419#0: *6056489 delete posted event 0920CD1C 2012/10/08 21:09:34 [debug] 13419#0: *6056489 malloc: 0922DA80:660 2012/10/08 21:09:34 [debug] 13419#0: *6056489 malloc: 09220E70:1024 2012/10/08 21:09:34 [debug] 13419#0: *6056489 posix_memalign: 091E7FE0:4096 @16 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http process request line 2012/10/08 21:09:34 [debug] 13419#0: *6056489 recv: fd:5 1024 of 1024 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http request line: "GET /zeep.shtml HTTP/1.1" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http uri: "/zeep.shtml" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http args: "" 
2012/10/08 21:09:34 [debug] 13419#0: *6056489 http exten: "shtml" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http process request header line 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http header: "Host: www.example.com" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http header: "User-Agent: Mozilla/5.0 (Windows NT 6.0; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http header: "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http header: "Accept-Language: en-us,en;q=0.5" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http header: "Accept-Encoding: gzip, deflate" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http header: "DNT: 1" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http header: "Connection: keep-alive" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http alloc large header buffer 2012/10/08 21:09:34 [debug] 13419#0: *6056489 posix_memalign: 091EE230:256 @16 2012/10/08 21:09:34 [debug] 13419#0: *6056489 malloc: 09219C88:8192 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http large header alloc: 09219C88 8192 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http large header copy: 714 2012/10/08 21:09:34 [debug] 13419#0: *6056489 recv: fd:5 388 of 7478 2012/10/08 21:09:34 [debug] 13419#0: *6056489 recv: fd:5 -1 of 7090 2012/10/08 21:09:34 [debug] 13419#0: *6056489 recv() not ready (11: Resource temporarily unavailable) 2012/10/08 21:09:34 [debug] 13419#0: *6056489 post event 0920CD1C 2012/10/08 21:09:34 [debug] 13419#0: *6056489 delete posted event 0920CD1C 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http process request header line 2012/10/08 21:09:34 [debug] 13419#0: *6056489 recv: fd:5 356 of 7090 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http header done 2012/10/08 21:09:34 [debug] 13419#0: *6056489 event timer del: 5: 1125303069 2012/10/08 21:09:34 [debug] 13419#0: *6056489 rewrite phase: 0 2012/10/08 21:09:34 [debug] 13419#0: 
*6056489 http script complex value 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "/usr/local/nginx/htdocs" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "0" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http map started 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "/zeep.shtml" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http map: "/zeep.shtml" "0" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "0" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 add cleanup: 091E8B48 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http file cache exists: -5 e:0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 cache file: "/var/lib/nginx/fastcgicache/e/43/f170708cf22b5b20fc1f7ca3cfcf943e" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http file cache lock u:1 wt:0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http upstream cache: -5 2012/10/08 21:09:34 [debug] 13419#0: *6056489 posix_memalign: 091C5650:4096 @16 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "GATEWAY_INTERFACE" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "CGI/1.1" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "GATEWAY_INTERFACE: CGI/1.1" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "SERVER_SOFTWARE" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "nginx" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "SERVER_SOFTWARE: nginx" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "QUERY_STRING" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "QUERY_STRING: " 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "REQUEST_METHOD" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "GET" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "REQUEST_METHOD: GET" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "CONTENT_TYPE" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: 
"CONTENT_TYPE: " 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "CONTENT_LENGTH" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "CONTENT_LENGTH: " 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "SCRIPT_FILENAME" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "/usr/local/nginx/htdocs" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "/zeep.shtml" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "SCRIPT_FILENAME: /usr/local/nginx/htdocs/zeep.shtml" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "PATH_TRANSLATED" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "/usr/local/nginx/htdocs" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "/zeep.shtml" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "PATH_TRANSLATED: /usr/local/nginx/htdocs/zeep.shtml" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "REQUEST_URI" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "/zeep.shtml" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "REQUEST_URI: /zeep.shtml" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "DOCUMENT_URI" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "/zeep.shtml" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "DOCUMENT_URI: /zeep.shtml" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "DOCUMENT_ROOT" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "/usr/local/nginx/htdocs" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "DOCUMENT_ROOT: /usr/local/nginx/htdocs" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "SERVER_PROTOCOL" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "HTTP/1.1" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "SERVER_PROTOCOL: HTTP/1.1" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "REMOTE_ADDR" 2012/10/08 
21:09:34 [debug] 13419#0: *6056489 http script var: "24.212.205.68" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "REMOTE_ADDR: 24.212.205.68" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "REMOTE_PORT" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "54558" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "REMOTE_PORT: 54558" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "SERVER_ADDR" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "67.19.142.122" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "SERVER_ADDR: 67.19.142.122" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "SERVER_PORT" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "80" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "SERVER_PORT: 80" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "SERVER_NAME" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "www.example.com" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "SERVER_NAME: www.example.com" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "HTTP_HOST: www.example.com" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "HTTP_USER_AGENT: Mozilla/5.0 (Windows NT 6.0; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "HTTP_ACCEPT: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "HTTP_ACCEPT_LANGUAGE: en-us,en;q=0.5" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "HTTP_ACCEPT_ENCODING: gzip, deflate" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "HTTP_DNT: 1" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 fastcgi param: "HTTP_CONNECTION: keep-alive" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http cleanup add: 091E8C54 2012/10/08 21:09:34 [debug] 13419#0: *6056489 get rr peer, try: 
1 2012/10/08 21:09:34 [debug] 13419#0: *6056489 socket 10 2012/10/08 21:09:34 [debug] 13419#0: *6056489 epoll add connection: fd:10 ev:80000005 2012/10/08 21:09:34 [debug] 13419#0: *6056489 connect to 127.0.0.1:10004, fd:10 #6056490 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http upstream connect: -2 2012/10/08 21:09:34 [debug] 13419#0: *6056489 posix_memalign: 091C0330:128 @16 2012/10/08 21:09:34 [debug] 13419#0: *6056489 event timer add: 10: 60000:1125303437 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http finalize request: -4, "/zeep.shtml?" a:1, c:2 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http request count:2 blk:0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 post event 09264228 2012/10/08 21:09:34 [debug] 13419#0: *6056489 post event 092641F4 2012/10/08 21:09:34 [debug] 13419#0: *6056489 delete posted event 092641F4 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http run request: "/zeep.shtml?" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http upstream check client, write event:1, "/zeep.shtml" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http upstream recv(): -1 (11: Resource temporarily unavailable) 2012/10/08 21:09:34 [debug] 13419#0: *6056489 delete posted event 09264228 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http upstream request: "/zeep.shtml?" 
2012/10/08 21:09:34 [debug] 13419#0: *6056489 http upstream send request handler 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http upstream send request 2012/10/08 21:09:34 [debug] 13419#0: *6056489 chain writer buf fl:0 s:2256 2012/10/08 21:09:34 [debug] 13419#0: *6056489 chain writer in: 091E8C70 2012/10/08 21:09:34 [debug] 13419#0: *6056489 writev: 2256 2012/10/08 21:09:34 [debug] 13419#0: *6056489 chain writer out: 00000000 2012/10/08 21:09:34 [debug] 13419#0: *6056489 event timer del: 10: 1125303437 2012/10/08 21:09:34 [debug] 13419#0: *6056489 event timer add: 10: 60000:1125303439 2012/10/08 21:09:34 [debug] 13419#0: *6056489 post event 0920CD50 2012/10/08 21:09:34 [debug] 13419#0: *6056489 post event 09264228 2012/10/08 21:09:34 [debug] 13419#0: *6056489 delete posted event 09264228 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http upstream request: "/zeep.shtml?" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http upstream dummy handler 2012/10/08 21:09:34 [debug] 13419#0: *6056489 delete posted event 0920CD50 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http upstream request: "/zeep.shtml?" 
2012/10/08 21:09:34 [debug] 13419#0: *6056489 http upstream process header 2012/10/08 21:09:34 [debug] 13419#0: *6056489 malloc: 091C6658:4096 2012/10/08 21:09:34 [debug] 13419#0: *6056489 recv: fd:10 128 of 4029 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http fastcgi record byte: 01 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http fastcgi record byte: 06 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http fastcgi record byte: 00 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http fastcgi record byte: 01 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http fastcgi record byte: 00 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http fastcgi record byte: 64 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http fastcgi record byte: 04 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http fastcgi record byte: 00 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http fastcgi record length: 100 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http fastcgi parser: 0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http fastcgi header: "Status: 404 Not Found" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http fastcgi parser: 0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http fastcgi header: "X-Powered-By: PHP/5.3.6" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http fastcgi parser: 0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http fastcgi header: "Content-type: text/html" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http fastcgi parser: 1 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http fastcgi header done 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http file cache free, fd: -1 2012/10/08 21:09:34 [debug] 13419#0: *6056489 finalize http upstream request: 404 2012/10/08 21:09:34 [debug] 13419#0: *6056489 finalize http fastcgi request 2012/10/08 21:09:34 [debug] 13419#0: *6056489 free rr peer 1 0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 close http upstream connection: 10 2012/10/08 21:09:34 [debug] 13419#0: *6056489 free: 091C0330, unused: 88 2012/10/08 21:09:34 [debug] 
13419#0: *6056489 event timer del: 10: 1125303439 2012/10/08 21:09:34 [debug] 13419#0: *6056489 reusable connection: 0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http finalize request: 404, "/zeep.shtml?" a:1, c:1 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http special response: 404, "/zeep.shtml?" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 internal redirect: "/dhe404.shtml?" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 rewrite phase: 0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script complex value 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "/usr/local/nginx/htdocs" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script copy: "/maintenance.html4 [debug] 13419#0: *6056489 generic phase: 4 2012/10/08 21:09:34 [debug] 13419#0: *6056489 generic phase: 5 2012/10/08 21:09:34 [debug] 13419#0: *6056489 access phase: 6 2012/10/08 21:09:34 [debug] 13419#0: *6056489 access phase: 7 2012/10/08 21:09:34 [debug] 13419#0: *6056489 post access phase: 8 2012/10/08 21:09:34 [debug] 13419#0: *6056489 try files phase: 9 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http init upstream, client timer: 0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "http" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "GET" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "www.example.com" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "/zeep.shtml" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http cache key: "httpGETwww.example.com/zeep.shtml" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "0" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http script var: "0" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 add cleanup: 091E8F50 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http file cache exists: 0 e:0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http upstream cache: 404 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http finalize request: 404, "/dhe404.shtml?" 
a:1, c:3 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http special response: 404, "/dhe404.shtml?" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http set discard body 2012/10/08 21:09:34 [debug] 13419#0: *6056489 charset: "" > "ISO-8859-1" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 HTTP/1.1 404 Not Found 2012/10/08 21:09:34 [debug] 13419#0: *6056489 write new buf t:1 f:0 0921CAE4, pos 0921CAE4, size: 174 file: 0, size: 0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http write filter: l:0 f:0 s:174 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http output filter "/dhe404.shtml?" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http copy filter: "/dhe404.shtml?" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http postpone filter "/dhe404.shtml?" 091E8FD8 2012/10/08 21:09:34 [debug] 13419#0: *6056489 write old buf t:1 f:0 0921CAE4, pos 0921CAE4, size: 174 file: 0, size: 0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 write new buf t:0 f:0 00000000, pos 080CA180, size: 116 file: 0, size: 0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 write new buf t:0 f:0 00000000, pos 080C9AE0, size: 52 file: 0, size: 0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http write filter: l:1 f:0 s:342 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http write filter limit 0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 writev: 342 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http write filter 00000000 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http copy filter: 0 "/dhe404.shtml?" 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http finalize request: 0, "/dhe404.shtml?" a:1, c:3 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http request count:3 blk:0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http finalize request: -4, "/dhe404.shtml?" a:1, c:2 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http request count:2 blk:0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http finalize request: -4, "/dhe404.shtml?" 
a:1, c:1 2012/10/08 21:09:34 [debug] 13419#0: *6056489 set http keepalive handler 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http close request 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http log handler 2012/10/08 21:09:34 [debug] 13419#0: *6056489 run cleanup: 091E8F50 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http file cache cleanup 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http file cache free, fd: -1 2012/10/08 21:09:34 [debug] 13419#0: *6056489 run cleanup: 091E8B48 2012/10/08 21:09:34 [debug] 13419#0: *6056489 free: 091C6658 2012/10/08 21:09:34 [debug] 13419#0: *6056489 free: 091E7FE0, unused: 0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 free: 0921BC90, unused: 104 2012/10/08 21:09:34 [debug] 13419#0: *6056489 free: 091C5650, unused: 1636 2012/10/08 21:09:34 [debug] 13419#0: *6056489 event timer add: 5: 5000:1125248440 2012/10/08 21:09:34 [debug] 13419#0: *6056489 free: 0922DA80 2012/10/08 21:09:34 [debug] 13419#0: *6056489 free: 09220E70 2012/10/08 21:09:34 [debug] 13419#0: *6056489 hc free: 00000000 0 2012/10/08 21:09:34 [debug] 13419#0: *6056489 hc busy: 09237D08 1 2012/10/08 21:09:34 [debug] 13419#0: *6056489 free: 09219C88 2012/10/08 21:09:34 [debug] 13419#0: *6056489 tcp_nodelay 2012/10/08 21:09:34 [debug] 13419#0: *6056489 reusable connection: 1 2012/10/08 21:09:34 [debug] 13419#0: *6056489 post event 0920CD1C 2012/10/08 21:09:34 [debug] 13419#0: *6056489 delete posted event 0920CD1C 2012/10/08 21:09:34 [debug] 13419#0: *6056489 http keepalive handler 2012/10/08 21:09:34 [debug] 13419#0: *6056489 malloc: 09220E70:1024 2012/10/08 21:09:34 [debug] 13419#0: *6056489 recv: fd:5 -1 of 1024 2012/10/08 21:09:34 [debug] 13419#0: *6056489 recv() not ready (11: Resource temporarily unavailable) 2012/10/08 21:09:39 [debug] 13419#0: *6056489 event timer del: 5: 1125248440 2012/10/08 21:09:39 [debug] 13419#0: *6056489 http keepalive handler 2012/10/08 21:09:39 [debug] 13419#0: *6056489 close http connection: 5 2012/10/08 21:09:39 [debug] 13419#0: 
*6056489 reusable connection: 0 2012/10/08 21:09:39 [debug] 13419#0: *6056489 free: 09220E70 2012/10/08 21:09:39 [debug] 13419#0: *6056489 free: 00000000 2012/10/08 21:09:39 [debug] 13419#0: *6056489 free: 00000000 2012/10/08 21:09:39 [debug] 13419#0: *6056489 free: 09237C40, unused: 16 2012/10/08 21:09:39 [debug] 13419#0: *6056489 free: 091EE230, unused: 188 From ianevans at digitalhit.com Tue Oct 9 02:55:57 2012 From: ianevans at digitalhit.com (Ian M. Evans) Date: Mon, 8 Oct 2012 22:55:57 -0400 Subject: Getting built in 404 instead of my custom one In-Reply-To: References: Message-ID: On Mon, October 8, 2012 10:16 pm, Ian M. Evans wrote: > On Mon, October 8, 2012 9:49 pm, Ian M. Evans wrote: >> Just noticed that my custom 404 page isn't getting served in all >> circumstances and I'm getting the bare internal 404 at times. Looking at >> the debug, it seems to be choosing my 404, but somehow not displaying >> it: > > here's a longer bit of the debug: hmm, disabled the fastcgi cache and it served my error file okay. enabled the cache and it served the nginx internal 404. Here's our cache setup: fastcgi_cache_path /var/lib/nginx/fastcgicache levels=1:2 keys_zone=MYCACHE:5m inactive=2h max_size=1g loader_files=1000 loader_threshold=2000; map $http_cookie $no_cache { default 0; ~SESS 1; } fastcgi_cache_key "$scheme$request_method$host$request_uri"; add_header X-My-Cache $upstream_cache_status; map $uri $no_cache_dirs { default 0; ~^/(?:phpMyAdmin|rather|poll|webmail|skewed|blogs|galleries|pixcache) 1; } Should I toss the custom 404 into a non-cached subdir? From ianevans at digitalhit.com Tue Oct 9 03:51:21 2012 From: ianevans at digitalhit.com (Ian M. Evans) Date: Mon, 8 Oct 2012 23:51:21 -0400 Subject: Getting built in 404 instead of my custom one In-Reply-To: References: Message-ID: <650f077d451161833f9d29dd9907781a.squirrel@www.digitalhit.com> On Mon, October 8, 2012 10:55 pm, Ian M. 
Evans wrote: [snip] > hmm, disabled the fastcgi cache and it served my error file okay. enabled > the cache and it served the nginx internal 404. > > Here's our cache setup: > > fastcgi_cache_path /var/lib/nginx/fastcgicache levels=1:2 > keys_zone=MYCACHE:5m inactive=2h max_size=1g loader_files=1000 > loader_threshold=2000; > map $http_cookie $no_cache { default 0; ~SESS 1; } > fastcgi_cache_key "$scheme$request_method$host$request_uri"; > add_header X-My-Cache $upstream_cache_status; > > map $uri $no_cache_dirs { > default 0; > ~^/(?:phpMyAdmin|rather|poll|webmail|skewed|blogs|galleries|pixcache) > 1; > } > > Should I toss the custom 404 into a non-cached subdir? > that didn't work. It's definitely the fastcgi cache interfering with the php-parsed custom error page. I'm scratching my head here. My fastcgi.conf contains this additional cache info: fastcgi_cache MYCACHE; fastcgi_keep_conn on; fastcgi_cache_bypass $no_cache $no_cache_dirs; fastcgi_no_cache $no_cache $no_cache_dirs; fastcgi_cache_valid 200 301 5m; fastcgi_cache_valid 302 5m; fastcgi_cache_valid 404 1m; fastcgi_cache_use_stale error timeout invalid_header updating http_500; fastcgi_ignore_headers Cache-Control Expires; expires epoch; fastcgi_cache_lock on; I thought adding: map $uri $no_cache { default 0; /dhe404.shtml 1; } in my nginx.conf would do the trick, but nope. How can I stop the fastcgi_cache from handling the custom error page? Thanks. From igor at sysoev.ru Tue Oct 9 07:57:39 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 9 Oct 2012 11:57:39 +0400 Subject: Getting built in 404 instead of my custom one In-Reply-To: <650f077d451161833f9d29dd9907781a.squirrel@www.digitalhit.com> References: <650f077d451161833f9d29dd9907781a.squirrel@www.digitalhit.com> Message-ID: <20121009075739.GA28203@nginx.com> On Mon, Oct 08, 2012 at 11:51:21PM -0400, Ian M. Evans wrote: > On Mon, October 8, 2012 10:55 pm, Ian M. 
Evans wrote: > [snip] > > > hmm, disabled the fastcgi cache and it served my error file okay. enabled > > the cache and it served the nginx internal 404. > > > > Here's our cache setup: > > > > fastcgi_cache_path /var/lib/nginx/fastcgicache levels=1:2 > > keys_zone=MYCACHE:5m inactive=2h max_size=1g loader_files=1000 > > loader_threshold=2000; > > map $http_cookie $no_cache { default 0; ~SESS 1; } > > fastcgi_cache_key "$scheme$request_method$host$request_uri"; > > add_header X-My-Cache $upstream_cache_status; > > > > map $uri $no_cache_dirs { > > default 0; > > ~^/(?:phpMyAdmin|rather|poll|webmail|skewed|blogs|galleries|pixcache) > > 1; > > } > > > > Should I toss the custom 404 into a non-cached subdir? > > > > that didn't work. > > It's definitely the fastcgi cache interfering with the php-parsed custom > error page. > > I'm scratching my head here. > > My fastcgi.conf contains this additional cache info: > > fastcgi_cache MYCACHE; > fastcgi_keep_conn on; > fastcgi_cache_bypass $no_cache $no_cache_dirs; > fastcgi_no_cache $no_cache $no_cache_dirs; > fastcgi_cache_valid 200 301 5m; > fastcgi_cache_valid 302 5m; > fastcgi_cache_valid 404 1m; > fastcgi_cache_use_stale error timeout invalid_header updating http_500; > fastcgi_ignore_headers Cache-Control Expires; > expires epoch; > fastcgi_cache_lock on; > > I thought adding: > > map $uri $no_cache { default 0; > /dhe404.shtml 1; > } > > in my nginx.conf would do the trick, but nope. How can I stop the > fastcgi_cache from handling the custom error page? The issue is in fastcgi_cache_key: fastcgi_cache_key "$scheme$request_method$host$request_uri"; It always uses client original $request_uri. Try: fastcgi_cache_key "$scheme$request_method$host$uri?$args"; -- Igor Sysoev http://nginx.com/support.html From ianevans at digitalhit.com Tue Oct 9 08:26:31 2012 From: ianevans at digitalhit.com (Ian M. 
Evans) Date: Tue, 9 Oct 2012 04:26:31 -0400 Subject: Getting built in 404 instead of my custom one In-Reply-To: <20121009075739.GA28203@nginx.com> References: <650f077d451161833f9d29dd9907781a.squirrel@www.digitalhit.com> <20121009075739.GA28203@nginx.com> Message-ID: <887128ffa5bd6632380c71c8a1decf23.squirrel@www.digitalhit.com> On Tue, October 9, 2012 3:57 am, Igor Sysoev wrote: > On Mon, Oct 08, 2012 at 11:51:21PM -0400, Ian M. Evans wrote: >> On Mon, October 8, 2012 10:55 pm, Ian M. Evans wrote: >> [snip] >> >> > hmm, disabled the fastcgi cache and it served my error file okay. >> enabled >> > the cache and it served the nginx internal 404. >> > >> > Here's our cache setup: >> > >> > fastcgi_cache_path /var/lib/nginx/fastcgicache levels=1:2 >> > keys_zone=MYCACHE:5m inactive=2h max_size=1g loader_files=1000 >> > loader_threshold=2000; >> > map $http_cookie $no_cache { default 0; ~SESS 1; } >> > fastcgi_cache_key "$scheme$request_method$host$request_uri"; >> > add_header X-My-Cache $upstream_cache_status; >> > >> > map $uri $no_cache_dirs { >> > default 0; >> > ~^/(?:phpMyAdmin|rather|poll|webmail|skewed|blogs|galleries|pixcache) >> > 1; >> > } >> > >> > Should I toss the custom 404 into a non-cached subdir? >> > >> >> that didn't work. >> >> It's definitely the fastcgi cache interfering with the php-parsed custom >> error page. >> >> I'm scratching my head here. 
>> >> My fastcgi.conf contains this additional cache info:
>> >>
>> fastcgi_cache MYCACHE;
>> fastcgi_keep_conn on;
>> fastcgi_cache_bypass $no_cache $no_cache_dirs;
>> fastcgi_no_cache $no_cache $no_cache_dirs;
>> fastcgi_cache_valid 200 301 5m;
>> fastcgi_cache_valid 302 5m;
>> fastcgi_cache_valid 404 1m;
>> fastcgi_cache_use_stale error timeout invalid_header updating http_500;
>> fastcgi_ignore_headers Cache-Control Expires;
>> expires epoch;
>> fastcgi_cache_lock on;
>>
>> I thought adding:
>>
>> map $uri $no_cache { default 0;
>> /dhe404.shtml 1;
>> }
>>
>> in my nginx.conf would do the trick, but nope. How can I stop the
>> fastcgi_cache from handling the custom error page?
>
> The issue is in fastcgi_cache_key:
> fastcgi_cache_key "$scheme$request_method$host$request_uri";
>
> It always uses client original $request_uri. Try:
> fastcgi_cache_key "$scheme$request_method$host$uri?$args";
>

Thanks, Igor. It sort of worked. I'm now getting the custom 404, but it's a cached version of the custom 404 PHP file. That is, my custom 404 tells the visitor: "Hmmmm the page you're looking for [ /zap.shtml ] doesn't seem to be on our site. [etc, etc]" If I then do another file name that doesn't exist, say, test1.shtml, I get a cached version of the 404: instead of saying test1.shtml, it still says "Hmmmm the page you're looking for [ /zap.shtml ] doesn't seem to be on our site." So, though it's now serving the custom 404, it's still serving a cached version. I guess I need a way to turn off caching for the custom 404 file. Just tried another one and, of course, got a cached 404 that someone else had just tried. From ianevans at digitalhit.com Tue Oct 9 08:34:19 2012 From: ianevans at digitalhit.com (Ian M.
Evans) Date: Tue, 9 Oct 2012 04:34:19 -0400 Subject: Getting built in 404 instead of my custom one In-Reply-To: <887128ffa5bd6632380c71c8a1decf23.squirrel@www.digitalhit.com> References: <650f077d451161833f9d29dd9907781a.squirrel@www.digitalhit.com> <20121009075739.GA28203@nginx.com> <887128ffa5bd6632380c71c8a1decf23.squirrel@www.digitalhit.com> Message-ID: On Tue, October 9, 2012 4:26 am, Ian M. Evans wrote: [snip] > So, though it's now serving the custom 404, it's still serving a cached > version. So I guess I need a way to turn off caching for the custom 404 > file. Just tried another one and, of course, got a cached 404 that someone > else had just tried. Think I solved the cached issue. Just created the location: location = /dhe404.shtml { fastcgi_pass 127.0.0.1:10004; fastcgi_cache_valid 0s; } and made the cache valid 0. Seems to work. From igor at sysoev.ru Tue Oct 9 08:46:51 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 9 Oct 2012 12:46:51 +0400 Subject: Getting built in 404 instead of my custom one In-Reply-To: References: <650f077d451161833f9d29dd9907781a.squirrel@www.digitalhit.com> <20121009075739.GA28203@nginx.com> <887128ffa5bd6632380c71c8a1decf23.squirrel@www.digitalhit.com> Message-ID: On Oct 9, 2012, at 12:34 , Ian M. Evans wrote: > On Tue, October 9, 2012 4:26 am, Ian M. Evans wrote: > [snip] >> So, though it's now serving the custom 404, it's still serving a cached >> version. So I guess I need a way to turn off caching for the custom 404 >> file. Just tried another one and, of course, got a cached 404 that someone >> else had just tried. > > Think I solved the cached issue. > > Just created the location: > > location = /dhe404.shtml { > fastcgi_pass 127.0.0.1:10004; > fastcgi_cache_valid 0s; > } > > and made the cache valid 0. Seems to work. location = /dhe404.shtml { fastcgi_pass ... 
fastcgi_cache off; } -- Igor Sysoev http://nginx.com/support.html From ianevans at digitalhit.com Tue Oct 9 08:51:50 2012 From: ianevans at digitalhit.com (Ian M. Evans) Date: Tue, 9 Oct 2012 04:51:50 -0400 Subject: Getting built in 404 instead of my custom one In-Reply-To: References: <650f077d451161833f9d29dd9907781a.squirrel@www.digitalhit.com> <20121009075739.GA28203@nginx.com> <887128ffa5bd6632380c71c8a1decf23.squirrel@www.digitalhit.com> Message-ID: <9a6b58ca1e8a37537b734968fd68f2e8.squirrel@www.digitalhit.com> On Tue, October 9, 2012 4:46 am, Igor Sysoev wrote: > On Oct 9, 2012, at 12:34 , Ian M. Evans wrote: > >> On Tue, October 9, 2012 4:26 am, Ian M. Evans wrote: >> [snip] >>> So, though it's now serving the custom 404, it's still serving a cached >>> version. So I guess I need a way to turn off caching for the custom 404 >>> file. Just tried another one and, of course, got a cached 404 that >>> someone >>> else had just tried. >> >> Think I solved the cached issue. >> >> Just created the location: >> >> location = /dhe404.shtml { >> fastcgi_pass 127.0.0.1:10004; >> fastcgi_cache_valid 0s; >> } >> >> and made the cache valid 0. Seems to work. > > location = /dhe404.shtml { > fastcgi_pass ... > fastcgi_cache off; > } that's better, thanks. From nginx-forum at nginx.us Tue Oct 9 09:20:00 2012 From: nginx-forum at nginx.us (zildjohn01) Date: Tue, 09 Oct 2012 05:20:00 -0400 Subject: limit_req seems to have no effect, but I would prefer it did Message-ID: I'm attempting to rate limit requests, and I'm unable to make the limit_req directive have any effect. I've trimmed it down to a minimal test case. 
Here's my complete nginx.conf (with only the server_name changed to protect the innocent): ---------------------------------------------------------------------- worker_processes 2; events { worker_connections 8192; } http { #keepalive_timeout 0s; #keepalive_requests 0; #limit_conn_zone $binary_remote_addr zone=conn_res:10m; #limit_conn conn_res 1; limit_req_zone $binary_remote_addr zone=req_res:10m rate=1r/s; limit_req zone=req_res; server { listen 80; server_name example.com *.example.com; location / { return 410; } } } ---------------------------------------------------------------------- I've tried various combinations of burst=2, nodelay, 1r/s or 1r/m, with and without limit_conn, with and without keepalive, with and without "location /", etc... and requests are never being limited, as shown by the access.log entries below: ---------------------------------------------------------------------- while true; do curl 55.55.55.55 -H'Host: test.example.com' done 12.34.56.78 - - [09/Oct/2012:08:47:03 +0000] "GET / HTTP/1.1" 410 158 "-" "curl/7.21.1 (i686-pc-mingw32) libcurl/7.21.1 OpenSSL/0.9.8r zlib/1.2.3" 12.34.56.78 - - [09/Oct/2012:08:47:03 +0000] "GET / HTTP/1.1" 410 158 "-" "curl/7.21.1 (i686-pc-mingw32) libcurl/7.21.1 OpenSSL/0.9.8r zlib/1.2.3" 12.34.56.78 - - [09/Oct/2012:08:47:03 +0000] "GET / HTTP/1.1" 410 158 "-" "curl/7.21.1 (i686-pc-mingw32) libcurl/7.21.1 OpenSSL/0.9.8r zlib/1.2.3" 12.34.56.78 - - [09/Oct/2012:08:47:04 +0000] "GET / HTTP/1.1" 410 158 "-" "curl/7.21.1 (i686-pc-mingw32) libcurl/7.21.1 OpenSSL/0.9.8r zlib/1.2.3" etc... ---------------------------------------------------------------------- The error.log file is empty. I'm running nginx 1.3.7, compiled from source, on an Amazon EC2 micro instance with the default image. (For kicks, I also tried 1.0.15, with no luck.) 
Here is /proc/version:

Linux version 3.2.21-1.32.6.amzn1.x86_64 (mockbuild at gobi-build-31004) (gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) ) #1 SMP Sat Jun 23 02:32:15 UTC 2012

Am I missing something obvious here?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231530,231530#msg-231530

From ne at vbart.ru Tue Oct 9 10:14:45 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Tue, 9 Oct 2012 14:14:45 +0400 Subject: limit_req seems to have no effect, but I would prefer it did In-Reply-To: References: Message-ID: <201210091414.45724.ne@vbart.ru> On Tuesday 09 October 2012 13:20:00 zildjohn01 wrote:
> I'm attempting to rate limit requests, and I'm unable to make the limit_req
> directive have any effect. I've trimmed it down to a minimal test case.
> Here's my complete nginx.conf (with only the server_name changed to protect
> the innocent):
> [...]
> limit_req_zone $binary_remote_addr zone=req_res:10m rate=1r/s;
> limit_req zone=req_res;
>
> server {
> listen 80;
> server_name example.com *.example.com;
> location / {
> return 410;
> }
> } [...]
>
> Am I missing something obvious here?
>

"return" is a directive from the rewrite module and runs during the rewrite phase. "limit_req" runs during the preaccess phase, which comes later; since "return" finalizes the request right in the rewrite phase, the request never reaches the limit_req module.

wbr, Valentin V. Bartenev

--
http://nginx.com/support.html

From tmartincpp at gmail.com Tue Oct 9 12:03:40 2012 From: tmartincpp at gmail.com (Thomas Martin) Date: Tue, 9 Oct 2012 14:03:40 +0200 Subject: Rewrite and FastCGI. Message-ID: Hello everyone! I'm trying to use rewrites and FastCGI with Nginx 1.2.1 (from the Debian Wheezy package).
My root directory looks like this:

/www/dir1
/www/dir2

In my server's file I have this:

root /www/;

# dir1
location /dir1 {
    rewrite ^(.*) https://$host$1 permanent;
}

# php5-fpm
location ~ \.(php|phtml)$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
    fastcgi_param SCRIPT_FILENAME /www/$fastcgi_script_name;
}
}

The rewrite is working great for an HTML page (for example) but not for a PHP page. Same results with:

root /www/;

location /dir1 {
    rewrite ^(.*) https://$host$1 permanent;

    # php5-fpm
    location ~ \.(php|phtml)$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME /www/$fastcgi_script_name;
    }
}

I need PHP's FastCGI for both dir1 and dir2, but only dir1 should be redirected to HTTPS. To be honest I tried many settings, but without any result. It seems that the fastcgi (or upstream) location always takes priority. A solution could be to add a rewrite in the PHP location too, to get something like this (but that seems really ugly to me):

root /www/;

location /dir1 {
    rewrite ^(.*) https://$host$1 permanent;

    # php5-fpm
    location ~ \.(php|phtml)$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME /www/$fastcgi_script_name;
    }
}

# php5-fpm
location ~ \.(php|phtml)$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
    fastcgi_param SCRIPT_FILENAME /www/$fastcgi_script_name;
}
}

Any help would be really appreciated! Thanks for reading anyway.

Regards.

From francis at daoine.org Tue Oct 9 12:53:11 2012 From: francis at daoine.org (Francis Daly) Date: Tue, 9 Oct 2012 13:53:11 +0100 Subject: Rewrite and FastCGI.
In-Reply-To: References: Message-ID: <20121009125311.GM17159@craic.sysops.org> On Tue, Oct 09, 2012 at 02:03:40PM +0200, Thomas Martin wrote:

Hi there,

> # dir1
> location /dir1 {
> rewrite ^(.*) https://$host$1 permanent;
> }
>
> # php5-fpm
> location ~ \.(php|phtml)$ {
> include /etc/nginx/fastcgi_params;
> fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
> fastcgi_param SCRIPT_FILENAME
> /www/$fastcgi_script_name;
> }
> }

> The rewrite is working great for an HTML page (for example) but not for
> a PHP page.

One request is handled in one location. You have

location /dir1 {}
location ~ \.(php|phtml)$ {}

Possibly what you want is

location ^~ /dir1 {}
location ~ \.(php|phtml)$ {}

or maybe

location /dir1 {}
location / {
    location ~ \.(php|phtml)$ {}
}

See http://nginx.org/r/location for details.

(Possibly what you want is some other configuration: the important thing to keep in mind is: which one location do you wish to handle this request? Then configure the locations accordingly.)

f

--
Francis Daly francis at daoine.org

From tmartincpp at gmail.com Tue Oct 9 14:50:32 2012 From: tmartincpp at gmail.com (Thomas Martin) Date: Tue, 9 Oct 2012 16:50:32 +0200 Subject: Rewrite and FastCGI. In-Reply-To: <20121009125311.GM17159@craic.sysops.org> References: <20121009125311.GM17159@craic.sysops.org> Message-ID: Hi Francis. FYI I read the documentation before sending my first email, but I misunderstood the part about "^~" and also didn't clearly realize that "One request is handled in one location".
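To double-check my understanding of "^~", here is my own sketch (based on the location documentation, not on Francis's exact words) of what the modifier changes for my setup:

# A plain prefix match can lose to the regex location for /dir1/x.php,
# so PHP requests under /dir1 skip the rewrite:
location /dir1 {
    rewrite ^(.*) https://$host$1 permanent;
}

# With ^~, a matching prefix location is final and regex locations are
# not checked, so every request under /dir1 (PHP included) is redirected:
location ^~ /dir1 {
    rewrite ^(.*) https://$host$1 permanent;
}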
So with your clarification I was able to make something working: # DocumentRoot root /www/; # dir1 location /dir1 { rewrite ^(.*) https://$host$1 permanent; } # dir2 location /dir2 { # php5-fpm location ~ \.(php|phtml)$ { fastcgi_param PHP_VALUE "include_path=/www/dir2:/www/dir2/common/include:."; fastcgi_param ENVIRONMENT_NAME devs; fastcgi_param DB_CONF_DIRECTORY /etc/itf/db/devs/; include /etc/nginx/sites-conf.d/php-fpm; } } # All others location / { # php5-fpm location ~ \.(php|phtml)$ { fastcgi_param ENVIRONMENT_NAME devs; fastcgi_param DB_CONF_DIRECTORY /etc/db/devs/; include /etc/nginx/sites-conf.d/php-fpm; } } It seems to work as expected; I guess I could use "^~" too but I didn't tried yet. At first I wanted to avoid repetition of the php-fpm's part but I didn't realized that it wasn't doable. Thanks for your help and again this is really appreciated! Regards. 2012/10/9 Francis Daly : > On Tue, Oct 09, 2012 at 02:03:40PM +0200, Thomas Martin wrote: > > Hi there, > >> # dir1 >> location /dir1 { >> rewrite ^(.*) https://$host$1 permanent; >> } >> >> # php5-fpm >> location ~ \.(php|phtml)$ { >> include /etc/nginx/fastcgi_params; >> fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; >> fastcgi_param SCRIPT_FILENAME >> /www/$fastcgi_script_name; >> } >> } > >> The rewrite is working great for a html page (for example) but not for >> a php page. > > One request is handled in one location. > > You have > > location /dir1 {} > location ~ \.(php|phtml)$ {} > > Possibly what you want is > > location ^~ /dir1 {} > location ~ \.(php|phtml)$ {} > > or maybe > > location /dir1 {} > location / { > location ~ \.(php|phtml)$ {} > } > > See http://nginx.org/r/location for details. > > (Possibly what you want is some other configuration: the important thing > to keep in mind is: for this request, which one location do you wish to > handle it? Then configure the locations accordingly.) 
> > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From pr1 at pr1.ru Tue Oct 9 14:51:21 2012 From: pr1 at pr1.ru (Andrey Feldman) Date: Tue, 9 Oct 2012 18:51:21 +0400 Subject: zero size buf in output(Bug?) Message-ID: Hi. After updating from 1.0.14 to 1.2.3, strange alerts started to appear in error.log: 2012/10/08 23:44:07 [alert] 25551#0: *79037 zero size buf in output t:0 r:0 f:0 000000000D31F3F0 000000000D31F3F0-000000000D3273F0 0000000000000000 0-0 while sending to client, client: 91.138.121.71, server: www.example.ru, request: "HEAD /images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22 HTTP/1.1", upstream: "http://10.23.23.15:6644/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22", host: "www.example.ru" We can't reproduce it in a test environment (without real load). It appears on different servers with different CentOS versions. In most cases the error appears after HEAD requests (by the Google bot, for example). We use proxy_cache; config: proxy_cache_key "$request_method|$host|$uri?$args"; proxy_cache_path /var/nginx/cache/html levels=1:2 keys_zone=html-cache:18m max_size=5g inactive=60m; proxy_cache html-cache; proxy_cache_valid 200 302 301 10m; proxy_cache_valid 304 10m; proxy_cache_valid 404 1m; proxy_cache_use_stale updating timeout error http_502 http_503 http_500; So, when there's no file in the cache path and we make, for example, a HEAD request, in some cases it generates the error. Example with full debug in attachment.
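Worth noting from the config above: $request_method is part of proxy_cache_key, so a HEAD and a GET for the same URI are cached under different keys. A minimal sketch of how the key expands for the request in the alert (a hand-rolled illustration; expand_cache_key is a hypothetical helper, not anything in nginx, and the values are taken from the debug log below):

```python
# Illustrative only: expand nginx's proxy_cache_key template by hand for the
# HEAD request shown in the alert. expand_cache_key is a hypothetical helper,
# not part of nginx; the variable values come from the attached debug log.
def expand_cache_key(template, variables):
    # Substitute longer variable names first so that e.g. $request_method
    # is never partially matched by a shorter name.
    for name in sorted(variables, key=len, reverse=True):
        template = template.replace("$" + name, variables[name])
    return template

key = expand_cache_key(
    "$request_method|$host|$uri?$args",
    {
        "request_method": "HEAD",
        "host": "www.example.ru",
        "uri": "/images/one/video_player.swf",
        "args": "file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22",
    },
)
print(key)
# HEAD|www.example.ru|/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22
```

This matches the "http cache key:" line in the debug output, and a GET for the same URI would produce a separate cache entry.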
# uname -a Linux fe1 2.6.18-194.17.1.el5 #1 SMP Wed Sep 29 12:50:31 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux # nginx -V nginx version: nginx/1.2.3 TLS SNI support disabled configure arguments: --prefix=/var/nginx/ --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/nginx/logs/error.log --http-log-path=/var/nginx/logs/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/nginx/tmp/client_body --http-proxy-temp-path=/var/nginx/tmp/proxy_temp --http-fastcgi-temp-path=/var/nginx/tmp/fastcgi_temp --http-uwsgi-temp-path=/var/nginx/tmp/uwsgi_temp --http-scgi-temp-path=/var/nginx/tmp/scgi_temp --user=_nginx --group=_nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-file-aio --with-ipv6 --add-module=nginx_upstream_hash-0.3.1 --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' -- -- Andrey Feldman -------------- next part -------------- 2012/10/08 23:44:07 [debug] 25551#0: *79037 generic phase: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 rewrite phase: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script var 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script var: "lwp-request/6.03 libwww-perl/6.03" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script if 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script if: false 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script var 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script var: "HEAD" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script if 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script if: false 2012/10/08 23:44:07 [debug] 25551#0: *79037 using configuration "/" 2012/10/08 
23:44:07 [debug] 25551#0: *79037 http cl:-1 max:7340032 2012/10/08 23:44:07 [debug] 25551#0: *79037 rewrite phase: 3 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script var 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script var: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script if 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script if: false 2012/10/08 23:44:07 [debug] 25551#0: *79037 post rewrite phase: 4 2012/10/08 23:44:07 [debug] 25551#0: *79037 generic phase: 5 2012/10/08 23:44:07 [debug] 25551#0: *79037 generic phase: 6 2012/10/08 23:44:07 [debug] 25551#0: *79037 limit_req[0]: 0 0.000 2012/10/08 23:44:07 [debug] 25551#0: *79037 generic phase: 7 2012/10/08 23:44:07 [debug] 25551#0: *79037 limit conn: 9569EE24 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 add cleanup: 000000000D242058 2012/10/08 23:44:07 [debug] 25551#0: *79037 access phase: 8 2012/10/08 23:44:07 [debug] 25551#0: *79037 access phase: 9 2012/10/08 23:44:07 [debug] 25551#0: *79037 post access phase: 10 2012/10/08 23:44:07 [debug] 25551#0: *79037 posix_memalign: 000000000D242390:4096 @16 2012/10/08 23:44:07 [debug] 25551#0: *79037 http init upstream, client timer: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 epoll add event: fd:146 op:1 ev:80000004 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script var: "HEAD" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script copy: "|" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script var: "www.example.ru" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script copy: "|" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script var: "/images/one/video_player.swf" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script copy: "?" 
2012/10/08 23:44:07 [debug] 25551#0: *79037 http script var: "file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http cache key: "HEAD|www.example.ru|/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 add cleanup: 000000000D242318 2012/10/08 23:44:07 [debug] 25551#0: *79037 http file cache exists: -5 e:0 2012/10/08 23:44:07 [debug] 25551#0: *79037 cache file: "/var/nginx/cache/html/6/63/ebe6688e7cc6dc7ce112182eb764d636" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream cache: -5 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script copy: "X-Forwarded-For: " 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script var: "91.138.121.71" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script copy: " 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script copy: "Host: " 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script var: "www.example.ru" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script copy: " 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script copy: "X-Real-IP: " 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script var: "91.138.121.71" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script copy: " 2012/10/08 23:44:07 [debug] 25551#0: *79037 http script copy: "Connection: close 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy header: "TE: deflate,gzip;q=0.3" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy header: "User-Agent: lwp-request/6.03 libwww-perl/6.03" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy header: 2012/10/08 23:44:07 [debug] 25551#0: *79037 http cleanup add: 000000000D242AF0 2012/10/08 23:44:07 [debug] 25551#0: *79037 get rr peer, try: 2 2012/10/08 23:44:07 [debug] 25551#0: *79037 get rr peer, current: 0 -2 2012/10/08 23:44:07 [debug] 25551#0: *79037 socket 147 2012/10/08 23:44:07 [debug] 25551#0: *79037 epoll add connection: fd:147 ev:80000005 
2012/10/08 23:44:07 [debug] 25551#0: *79037 connect to 10.23.23.15:6644, fd:147 #79038 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream connect: -2 2012/10/08 23:44:07 [debug] 25551#0: *79037 posix_memalign: 000000000D23B560:128 @16 2012/10/08 23:44:07 [debug] 25551#0: *79037 event timer add: 147: 200000:1349725647874 2012/10/08 23:44:07 [debug] 25551#0: *79037 http finalize request: -4, "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" a:1, c:2 2012/10/08 23:44:07 [debug] 25551#0: *79037 http request count:2 blk:0 2012/10/08 23:44:07 [debug] 25551#0: *79037 http run request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream check client, write event:1, "/images/one/video_player.swf" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream recv(): -1 (11: Resource temporarily unavailable) 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream send request handler 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream send request 2012/10/08 23:44:07 [debug] 25551#0: *79037 chain writer buf fl:1 s:297 2012/10/08 23:44:07 [debug] 25551#0: *79037 chain writer in: 000000000D242B28 2012/10/08 23:44:07 [debug] 25551#0: *79037 writev: 297 2012/10/08 23:44:07 [debug] 25551#0: *79037 chain writer out: 0000000000000000 2012/10/08 23:44:07 [debug] 25551#0: *79037 event timer del: 147: 1349725647874 2012/10/08 23:44:07 [debug] 25551#0: *79037 event timer add: 147: 300000:1349725747875 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream process header 2012/10/08 23:44:07 [debug] 
25551#0: *79037 malloc: 000000000D31F3F0:32768 2012/10/08 23:44:07 [debug] 25551#0: *79037 recv: fd:147 4344 of 32591 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy status 200 "200 OK" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy header: "Date: Mon, 08 Oct 2012 19:44:07 GMT" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy header: "Server: Oracle-Application-Server-11g" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy header: "Last-Modified: Tue, 05 Jun 2012 10:52:38 GMT" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy header: "ETag: "2ba8080-21db7-4c1b772eaa980"" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy header: "Accept-Ranges: bytes" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy header: "Content-Length: 138679" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy header: "Cache-Control: max-age=3600" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy header: "Expires: Mon, 08 Oct 2012 20:44:07 GMT" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy header: "X-BE-NODE: 1" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy header: "Connection: close" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy header: "Content-Type: application/x-shockwave-flash" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy header: "Content-Language: en" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy header done 2012/10/08 23:44:07 [debug] 25551#0: *79037 HTTP/1.1 200 OK 2012/10/08 23:44:07 [debug] 25551#0: *79037 write new buf t:1 f:0 000000000D243118, pos 000000000D243118, size: 392 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 http write filter: l:1 f:0 s:392 2012/10/08 23:44:07 [debug] 25551#0: *79037 http write filter limit 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 writev: 392 2012/10/08 23:44:07 [debug] 25551#0: *79037 http write filter 0000000000000000 2012/10/08 23:44:07 [debug] 25551#0: *79037 http file cache set header 2012/10/08 23:44:07 [debug] 25551#0: *79037 http 
cacheable: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 posix_memalign: 000000000D31C3C0:4096 @16 2012/10/08 23:44:07 [debug] 25551#0: *79037 http proxy filter init s:200 h:0 c:0 l:138679 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream process upstream 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe preread: 3951 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 1:28247 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: 1448 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 1:26799 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv() not ready (11: Resource temporarily unavailable) 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: -2 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F62A, size: 5399 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 138679 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write downstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write busy: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write: out:0000000000000000, f:0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F62A, size: 5399 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 138679 2012/10/08 23:44:07 [debug] 25551#0: *79037 event timer: 147, old: 1349725747875, new: 1349725747877 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream dummy handler 2012/10/08 23:44:07 [debug] 25551#0: *79037 post event 000000000D2B5458 2012/10/08 23:44:07 [debug] 25551#0: *79037 delete posted event 000000000D2B5458 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: 
"/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream dummy handler 2012/10/08 23:44:07 [debug] 25551#0: *79037 post event 000000000D24D448 2012/10/08 23:44:07 [debug] 25551#0: *79037 post event 000000000D2B5458 2012/10/08 23:44:07 [debug] 25551#0: *79037 delete posted event 000000000D2B5458 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream dummy handler 2012/10/08 23:44:07 [debug] 25551#0: *79037 delete posted event 000000000D24D448 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream process upstream 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 1:26799 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: 8688 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 1:18111 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv() not ready (11: Resource temporarily unavailable) 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: -2 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F62A, size: 14087 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 138679 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write downstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write busy: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write: out:0000000000000000, f:0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F62A, size: 14087 file: 0, size: 0 2012/10/08 
23:44:07 [debug] 25551#0: *79037 pipe length: 138679 2012/10/08 23:44:07 [debug] 25551#0: *79037 event timer: 147, old: 1349725747875, new: 1349725747877 2012/10/08 23:44:07 [debug] 25551#0: *79037 post event 000000000D24D448 2012/10/08 23:44:07 [debug] 25551#0: *79037 post event 000000000D2B5458 2012/10/08 23:44:07 [debug] 25551#0: *79037 delete posted event 000000000D2B5458 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream dummy handler 2012/10/08 23:44:07 [debug] 25551#0: *79037 delete posted event 000000000D24D448 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream process upstream 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 1:18111 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: 11584 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 1:6527 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv() not ready (11: Resource temporarily unavailable) 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: -2 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F62A, size: 25671 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 138679 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write downstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write busy: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write: out:0000000000000000, f:0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F62A, size: 25671 file: 0, size: 0 2012/10/08 23:44:07 [debug] 
25551#0: *79037 pipe length: 138679 2012/10/08 23:44:07 [debug] 25551#0: *79037 event timer: 147, old: 1349725747875, new: 1349725747878 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream dummy handler 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream process upstream 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 1:6527 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: 6527 2012/10/08 23:44:07 [debug] 25551#0: *79037 input buf #0 2012/10/08 23:44:07 [debug] 25551#0: *79037 malloc: 000000000D22D330:8192 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 1:8192 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: 3609 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 1:4583 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv() not ready (11: Resource temporarily unavailable) 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: -2 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf in s:1 t:1 f:0 000000000D31F3F0, pos 000000000D31F62A, size: 32198 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D22D330, pos 000000000D22D330, size: 3609 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 106481 2012/10/08 23:44:07 [debug] 25551#0: *79037 add cleanup: 000000000D31C558 2012/10/08 23:44:07 [debug] 25551#0: *79037 hashed path: /var/nginx/tmp/proxy_temp/0000004711 2012/10/08 23:44:07 [debug] 25551#0: *79037 temp fd:162 2012/10/08 23:44:07 [debug] 25551#0: *79037 write: 162, 000000000D31F3F0, 32768, 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write downstream: 1 
2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write busy: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write: out:000000000D243380, f:0 2012/10/08 23:44:07 [debug] 25551#0: *79037 http output filter "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http copy filter: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 read: 162, 000000000D31C6B8, 1024, 570 2012/10/08 23:44:07 [debug] 25551#0: *79037 http postpone filter "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 000000000D31CAB8 2012/10/08 23:44:07 [debug] 25551#0: *79037 http copy filter: -1 "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D22D330, pos 000000000D22D330, size: 3609 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F3F0, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 106481 2012/10/08 23:44:07 [debug] 25551#0: *79037 event timer: 147, old: 1349725747875, new: 1349725747878 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream downstream error 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream dummy handler 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream process upstream 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read 
upstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 2:32768 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: 14480 2012/10/08 23:44:07 [debug] 25551#0: *79037 input buf #1 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 1:22871 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv() not ready (11: Resource temporarily unavailable) 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: -2 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf in s:1 t:1 f:0 000000000D22D330, pos 000000000D22D330, size: 8192 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F3F0, size: 9897 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 98289 2012/10/08 23:44:07 [debug] 25551#0: *79037 write: 162, 000000000D22D330, 8192, 32768 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write downstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F3F0, size: 9897 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D22D330, pos 000000000D22D330, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 98289 2012/10/08 23:44:07 [debug] 25551#0: *79037 event timer: 147, old: 1349725747875, new: 1349725747878 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream downstream error 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream dummy handler 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream process upstream 2012/10/08 23:44:07 [debug] 
25551#0: *79037 pipe read upstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 2:8192 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: 13032 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 2:8192 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv() not ready (11: Resource temporarily unavailable) 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: -2 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F3F0, size: 22929 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D22D330, pos 000000000D22D330, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 98289 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write downstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F3F0, size: 22929 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D22D330, pos 000000000D22D330, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 98289 2012/10/08 23:44:07 [debug] 25551#0: *79037 event timer: 147, old: 1349725747875, new: 1349725747878 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream downstream error 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream dummy handler 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream process upstream 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 2:8192 2012/10/08 23:44:07 [debug] 
25551#0: *79037 pipe recv chain: 2896 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 2:8192 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv() not ready (11: Resource temporarily unavailable) 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: -2 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F3F0, size: 25825 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D22D330, pos 000000000D22D330, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 98289 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write downstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F3F0, size: 25825 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D22D330, pos 000000000D22D330, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 98289 2012/10/08 23:44:07 [debug] 25551#0: *79037 event timer: 147, old: 1349725747875, new: 1349725747878 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream downstream error 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream dummy handler 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream process upstream 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 2:8192 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: 15135 2012/10/08 23:44:07 [debug] 25551#0: *79037 input buf #2 2012/10/08 23:44:07 [debug] 
25551#0: *79037 input buf #3 2012/10/08 23:44:07 [debug] 25551#0: *79037 malloc: 000000000D200270:8192 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 1:8192 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: 3689 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 1:4503 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv() not ready (11: Resource temporarily unavailable) 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: -2 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf in s:1 t:1 f:0 000000000D31F3F0, pos 000000000D31F3F0, size: 32768 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf in s:1 t:1 f:0 000000000D22D330, pos 000000000D22D330, size: 8192 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D200270, pos 000000000D200270, size: 3689 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 57329 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write downstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D200270, pos 000000000D200270, size: 3689 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F3F0, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D22D330, pos 000000000D22D330, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 57329 2012/10/08 23:44:07 [debug] 25551#0: *79037 event timer: 147, old: 1349725747875, new: 1349725747878 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream downstream error 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream dummy handler 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream 
request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream process upstream 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 3:8192 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: 13032 2012/10/08 23:44:07 [debug] 25551#0: *79037 input buf #4 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 2:8192 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv() not ready (11: Resource temporarily unavailable) 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: -2 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf in s:1 t:1 f:0 000000000D200270, pos 000000000D200270, size: 8192 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F3F0, size: 8529 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D22D330, pos 000000000D22D330, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 49137 2012/10/08 23:44:07 [debug] 25551#0: *79037 write: 162, 000000000D200270, 8192, 81920 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write downstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F3F0, size: 8529 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D22D330, pos 000000000D22D330, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D200270, pos 000000000D200270, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 49137 2012/10/08 23:44:07 [debug] 25551#0: *79037 event timer: 147, old: 1349725747875, new: 1349725747879 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream downstream error 
2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream dummy handler 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream process upstream 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 3:8192 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: 18824 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 3:8192 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv() not ready (11: Resource temporarily unavailable) 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: -2 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F3F0, size: 27353 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D22D330, pos 000000000D22D330, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D200270, pos 000000000D200270, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 49137 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write downstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F3F0, size: 27353 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D22D330, pos 000000000D22D330, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D200270, pos 000000000D200270, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 49137 2012/10/08 23:44:07 [debug] 
25551#0: *79037 event timer: 147, old: 1349725747875, new: 1349725747879 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream downstream error 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream dummy handler 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream process upstream 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 3:8192 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: 17376 2012/10/08 23:44:07 [debug] 25551#0: *79037 input buf #5 2012/10/08 23:44:07 [debug] 25551#0: *79037 input buf #6 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 1:4423 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv() not ready (11: Resource temporarily unavailable) 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: -2 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf in s:1 t:1 f:0 000000000D31F3F0, pos 000000000D31F3F0, size: 32768 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf in s:1 t:1 f:0 000000000D22D330, pos 000000000D22D330, size: 8192 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D200270, pos 000000000D200270, size: 3769 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 8177 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write downstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D200270, pos 000000000D200270, size: 3769 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 
000000000D31F3F0, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D22D330, pos 000000000D22D330, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 8177 2012/10/08 23:44:07 [debug] 25551#0: *79037 event timer: 147, old: 1349725747875, new: 1349725747879 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream downstream error 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream dummy handler 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream dummy handler 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream process upstream 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe read upstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 3:8192 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: 4408 2012/10/08 23:44:07 [debug] 25551#0: *79037 readv: 3:8192 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe recv chain: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D200270, pos 000000000D200270, size: 8177 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D31F3F0, pos 000000000D31F3F0, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe buf free s:0 t:1 f:0 000000000D22D330, pos 000000000D22D330, size: 0 file: 0, size: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe length: 8177 2012/10/08 23:44:07 [debug] 25551#0: *79037 input buf #7 2012/10/08 23:44:07 [debug] 
25551#0: *79037 free: 000000000D22D330 2012/10/08 23:44:07 [debug] 25551#0: *79037 write: 162, 000000000D200270, 8177, 131072 2012/10/08 23:44:07 [debug] 25551#0: *79037 pipe write downstream: 1 2012/10/08 23:44:07 [debug] 25551#0: *79037 event timer: 147, old: 1349725747875, new: 1349725747879 2012/10/08 23:44:07 [debug] 25551#0: *79037 http file cache update 2012/10/08 23:44:07 [debug] 25551#0: *79037 http file cache rename: "/var/nginx/tmp/proxy_temp/0000004711" to "/var/nginx/cache/html/6/63/ebe6688e7cc6dc7ce112182eb764d636" 2012/10/08 23:44:07 [debug] 25551#0: *79037 malloc: 000000000D23B5F0:71 2012/10/08 23:44:07 [debug] 25551#0: *79037 malloc: 000000000D327400:65536 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream exit: 0000000000000000 2012/10/08 23:44:07 [debug] 25551#0: *79037 finalize http upstream request: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 finalize http proxy request 2012/10/08 23:44:07 [debug] 25551#0: *79037 free rr peer 2 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 close http upstream connection: 147 2012/10/08 23:44:07 [debug] 25551#0: *79037 free: 000000000D23B560, unused: 48 2012/10/08 23:44:07 [debug] 25551#0: *79037 event timer del: 147: 1349725747875 2012/10/08 23:44:07 [debug] 25551#0: *79037 reusable connection: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 http upstream temp fd: 162 2012/10/08 23:44:07 [debug] 25551#0: *79037 http output filter "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http copy filter: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [alert] 25551#0: *79037 zero size buf in output t:0 r:0 f:0 000000000D31F3F0 000000000D31F3F0-000000000D3273F0 0000000000000000 0-0 while sending to client, client: 91.138.121.71, server: www.example.ru, request: "HEAD 
/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22 HTTP/1.1", upstream: "http://10.23.23.15:6644/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22", host: "www.example.ru" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http postpone filter "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 000000000D31CBE8 2012/10/08 23:44:07 [debug] 25551#0: *79037 http copy filter: -1 "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http finalize request: -1, "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" a:1, c:1 2012/10/08 23:44:07 [debug] 25551#0: *79037 http terminate request count:1 2012/10/08 23:44:07 [debug] 25551#0: *79037 http terminate cleanup count:1 blk:0 2012/10/08 23:44:07 [debug] 25551#0: *79037 http posted request: "/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22" 2012/10/08 23:44:07 [debug] 25551#0: *79037 http terminate handler count:1 2012/10/08 23:44:07 [debug] 25551#0: *79037 http request count:1 blk:0 2012/10/08 23:44:07 [debug] 25551#0: *79037 http close request 2012/10/08 23:44:07 [debug] 25551#0: *79037 http log handler 2012/10/08 23:44:07 [debug] 25551#0: *79037 run cleanup: 000000000D31C558 2012/10/08 23:44:07 [debug] 25551#0: *79037 file cleanup: fd:162 2012/10/08 23:44:07 [debug] 25551#0: *79037 run cleanup: 000000000D242318 2012/10/08 23:44:07 [debug] 25551#0: *79037 run cleanup: 000000000D242058 2012/10/08 23:44:07 [debug] 25551#0: *79037 free: 000000000D200270 2012/10/08 23:44:07 [debug] 25551#0: *79037 free: 0000000000000000 2012/10/08 23:44:07 [debug] 25551#0: *79037 free: 000000000D31F3F0 2012/10/08 23:44:07 [debug] 25551#0: *79037 free: 000000000D241380, unused: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 free: 000000000D242390, 
unused: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 free: 000000000D31C3C0, unused: 1719 2012/10/08 23:44:07 [debug] 25551#0: *79037 close http connection: 146 2012/10/08 23:44:07 [debug] 25551#0: *79037 reusable connection: 0 2012/10/08 23:44:07 [debug] 25551#0: *79037 free: 000000000D215960, unused: 440 From john at disqus.com Tue Oct 9 19:10:03 2012 From: john at disqus.com (John Watson) Date: Tue, 9 Oct 2012 12:10:03 -0700 Subject: Handling 500k concurrent connections on Linux Message-ID: <20121009191003.GA26608@asylum.local> I was wondering if anyone had some tips/guidelines for scaling Nginx on Linux to >500k concurrent connections. Playing with the nginx_http_push_stream module in streaming mode. Noticing periodic slow accept and/or response headers. I've scoured the Internet looking/learning ways to tune Nginx/Linux but I think I've exhausted my abilities. Any help would be appreciated. Hardware Dual Nehalem 5520 24G RAM Intel 82576 (igb) Ubuntu 12.04.1 (3.2.0-31-generic x86_64) Thank You, John W -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 834 bytes Desc: not available URL: From mdounin at mdounin.ru Tue Oct 9 19:36:54 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 9 Oct 2012 23:36:54 +0400 Subject: zero size buf in output(Bug?) In-Reply-To: References: Message-ID: <20121009193654.GE40452@mdounin.ru> Hello! On Tue, Oct 09, 2012 at 06:51:21PM +0400, Andrey Feldman wrote: > Hi. 
> After update from 1.0.14 to 1.2.3, strange alerts started to appear in > error.log: > 2012/10/08 23:44:07 [alert] 25551#0: *79037 zero size buf in output > t:0 r:0 f:0 000000000D31F3F0 000000000D31F3F0-000000000D3273F0 > 0000000000000000 0-0 while sending to client, client: 91.138.121.71, > server: www.example.ru, request: "HEAD > /images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22 > HTTP/1.1", upstream: > "http://10.23.23.15:6644/images/one/video_player.swf?file=http://www.example.ru/test/generate_xml%3fpid=51559%26one=22", > host: "www.example.ru" > > We can't reproduce it in a test environment (without real load). It > appears on different servers with different CentOS versions. > In most cases the error appears after HEAD requests (by the Google bot, for example). > > We use proxy_cache for these requests; config: > proxy_cache_key "$request_method|$host|$uri?$args"; > proxy_cache_path /var/nginx/cache/html levels=1:2 > keys_zone=html-cache:18m max_size=5g inactive=60m; > proxy_cache html-cache; > proxy_cache_valid 200 302 301 10m; > proxy_cache_valid 304 10m; > proxy_cache_valid 404 1m; > proxy_cache_use_stale updating timeout error http_502 http_503 http_500; > > So, when there's no file in the cache path and we make, for example, a > HEAD request, in some cases it generates an error. > An example with a full debug log is attached. Thank you for the report. From the debug log it's more or less clear what goes on here; it indeed affects HEAD (as well as other header-only) requests while loading a cache entry. I'm able to reproduce it here with the following config: location = /proxy { proxy_pass http://127.0.0.1:8080/10m; proxy_cache one; proxy_cache_valid any 5s; sendfile off; output_buffers 1 1024; } It's more or less harmless (i.e. no bad things happen; the worst is the log entry).
A quick fix would be to do something like this: --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -2075,6 +2075,8 @@ ngx_http_upstream_send_response(ngx_http r->write_event_handler = ngx_http_request_empty_handler; c->error = 1; + u->pipe->downstream_error = 1; + } else { ngx_http_upstream_finalize_request(r, u, rc); return; Though it probably needs more attention. I'll take a look as time permits. -- Maxim Dounin http://nginx.com/support.html From wandenberg at gmail.com Tue Oct 9 20:46:04 2012 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Tue, 9 Oct 2012 17:46:04 -0300 Subject: Help with cache manager Message-ID: Hi, I'm having trouble with the cache manager: it is not controlling the max_size of the cache. I have 22 workers on a 24-core server and 3 cache paths, one with 600g and the other two with 200g of max_size. The cache path with 600g usually passes this value and reaches 640g, for example. The files served from this server are no more than 300MB each. The other 200g cache paths don't have this problem. Do you know of any problem with the max_size limit? Any suggestion on where I can investigate the problem? The configuration is like this: proxy_cache_path /path1 levels=1:2 keys_zone=c1:1024m inactive=10d max_size=600g; proxy_cache_path /path2 levels=1:2 keys_zone=c2:1024m inactive=10d max_size=200g; proxy_cache_path /path3 levels=1:2 keys_zone=c3:1024m inactive=10d max_size=200g; When I restart the server the cache goes back to the configured size. Nginx: 1.2.0 with the cache_purge module Regards, Wandenberg -------------- next part -------------- An HTML attachment was scrubbed... URL: From pr1 at pr1.ru Tue Oct 9 22:07:43 2012 From: pr1 at pr1.ru (Andrey Feldman) Date: Wed, 10 Oct 2012 02:07:43 +0400 Subject: zero size buf in output(Bug?) In-Reply-To: <20121009193654.GE40452@mdounin.ru> References: <20121009193654.GE40452@mdounin.ru> Message-ID: <5074A02F.9040006@pr1.ru> I tried your patch, but after a few minutes the workers start crashing.
GDB: Core was generated by `nginx: worker process '. Program terminated with signal 11, Segmentation fault. #0 0x000000000040ab19 in ngx_vslprintf (buf=0x7fff9e5f2785
, last=0x7fff9e5f2f50
, fmt=, args=0x7fff9e5f2730) at src/core/ngx_string.c:178 178 while (*fmt >= '0' && *fmt <= '9') { On 10/09/2012 11:36 PM, Maxim Dounin wrote: > Thank you for report. From debug log it's more or less clear what > goes on here, it indeed affects HEAD (as well as other header > only) requests while loading cache entry. > > I'm able to reproduce it here with the following config: > > location = /proxy { > proxy_pass http://127.0.0.1:8080/10m; > proxy_cache one; > proxy_cache_valid any 5s; > sendfile off; > output_buffers 1 1024; > } > > It's more or less harmless (i.e. no bad things happen, worst one > is log entry). > > Quick fix would be to do something like this: > > --- a/src/http/ngx_http_upstream.c > +++ b/src/http/ngx_http_upstream.c > @@ -2075,6 +2075,8 @@ ngx_http_upstream_send_response(ngx_http > r->write_event_handler = ngx_http_request_empty_handler; > c->error = 1; > > + u->pipe->downstream_error = 1; > + > } else { > ngx_http_upstream_finalize_request(r, u, rc); > return; > > Though it probably needs more attention. I'll take a look as time > permits. > From yaoweibin at gmail.com Wed Oct 10 08:05:38 2012 From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Wed, 10 Oct 2012 16:05:38 +0800 Subject: [ANNOUNCE] Tengine-1.4.1 Message-ID: Hi folks, We are glad to announce that Tengine-1.4.1 (development version) has been released. You can either checkout the source code from github: https://github.com/taobao/tengine or download the tarball directly: http://tengine.taobao.org/download/tengine-1.4.1.tar.gz The changelog of this release is as follows: *) Feature: added jemalloc library support. (fanjizhao) *) Feature: added a new variable '$dollar', whose value is the dollar sign ('$'). (zhuzhaoyuan) *) Feature: added the option 'off' to 'worker_cpu_affinity' directive. (cfsego) *) Change: disable CPU affinity when a new worker process is forked as an old one exits abnormally. 
(cfsego) *) Bugfix: fixed compile error with shared Lua module when using LuaJIT in Mac OS. (monadbobo) *) Bugfix: fixed the wrong module execution order with the third party shared filter module. (monadbobo) For those who don't know Tengine, it is a free and open source distribution of Nginx with some advanced features. See our website for more details: http://tengine.taobao.org Have fun! Regards, -- Weibin Yao Developer @ Server Platform Team of Taobao From andrew at nginx.com Wed Oct 10 09:05:05 2012 From: andrew at nginx.com (Andrew Alexeev) Date: Wed, 10 Oct 2012 13:05:05 +0400 Subject: Handling 500k concurrent connections on Linux In-Reply-To: <20121009191003.GA26608@asylum.local> References: <20121009191003.GA26608@asylum.local> Message-ID: <80E7F6B1-CBE0-4E72-943D-5B915ECF5DC0@nginx.com> John, On Oct 9, 2012, at 11:10 PM, John Watson wrote: > I was wondering if anyone had some tips/guidelines for scaling Nginx on > Linux to >500k concurrent connections. Playing with the > nginx_http_push_stream module in streaming mode. Noticing periodic slow > accept and/or response headers. I've scoured the Internet > looking/learning ways to tune Nginx/Linux but I think I've exhausted my > abilities. > > Any help would be appreciated. > > Hardware > Dual Nehalem 5520 > 24G RAM > Intel 82576 (igb) > Ubuntu 12.04.1 (3.2.0-31-generic x86_64) > > Thank You, > > John W I'd assume you've already checked/fixed the following, right? 1) Error logs - anything wrong seen in there? 2) http://nginx.org/en/docs/ngx_core_module.html#multi_accept and http://nginx.org/en/docs/ngx_core_module.html#accept_mutex - did you try it on/off? 
3) file descriptors limits (cat /proc/sys/fs/file-max, sudo - nginx && ulimit, worker_rlimit_nofile) 4) sysctl net.ipv4.ip_local_port_range (if you're aiming at proxying all those connections to upstreams) Additional information about what's happening in all those 500k connections might be helpful, as well as the relevant configuration section :) Hope this helps -- AA @ nginx http://nginx.com/support.html From ryanchan404 at gmail.com Wed Oct 10 09:24:28 2012 From: ryanchan404 at gmail.com (Ryan Chan) Date: Wed, 10 Oct 2012 17:24:28 +0800 Subject: Starter/HelloWorld nginx module Message-ID: Hello, I want to develop a simple nginx module, so I am looking for a sample module; I searched on Google and found: "http://www.evanmiller.org/nginx-modules-guide.html" While the content is amazingly detailed, I am just thinking: why not provide a starter or even hello-world module with the source, so we can start every new module from that starter? Even better would be official docs on module development, but at least an official hello-world module would be great, wouldn't it? Ryan From nginx-forum at nginx.us Wed Oct 10 09:32:24 2012 From: nginx-forum at nginx.us (tysonlee) Date: Wed, 10 Oct 2012 05:32:24 -0400 Subject: Rewrite and FastCGI. In-Reply-To: References: Message-ID: <172e8ed601e57d9901b90801adb5c5db.NginxMailingListEnglish@forum.nginx.org> I've tried various combinations of burst=2, nodelay, 1r/s or 1r/m, with and without limit_conn, with and without keepalive, with and without "location /", etc...
and requests are never being limited, as shown by the access.log entries below: Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231534,231568#msg-231568 From andrew at nginx.com Wed Oct 10 09:43:32 2012 From: andrew at nginx.com (Andrew Alexeev) Date: Wed, 10 Oct 2012 13:43:32 +0400 Subject: Starter/HelloWorld nginx module In-Reply-To: References: Message-ID: <4A2C7484-8216-44A3-B9E2-FECC1288B2E0@nginx.com> On Oct 10, 2012, at 1:24 PM, Ryan Chan wrote: > Hello, > > I want to develop a simple nginx module so I am looking for some > sample module, and I searched in Google and got: > "http://www.evanmiller.org/nginx-modules-guide.html" > > While the content is amazing in detail but I am just thinking, why not > a starter module or even hello world provided from the source so we > can start new module from this starter every time? > > Even better if have official doc on module development but at least an > official hello world module would be very great, isn't? > > > > Ryan Thanks, we'll definitely ponder adding something like this to the distro. From yaoweibin at gmail.com Wed Oct 10 10:06:34 2012 From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Wed, 10 Oct 2012 18:06:34 +0800 Subject: Handling 500k concurrent connections on Linux In-Reply-To: <20121009191003.GA26608@asylum.local> References: <20121009191003.GA26608@asylum.local> Message-ID: Hi, My colleague has done similar work to yours. He has successfully established 2000k connections on one Nginx server. You can read his blog post with Google Translate: http://rdc.taobao.com/blog/cs/?p=1062 . It's written in Chinese. Hope this helps you. 2012/10/10 John Watson : > I was wondering if anyone had some tips/guidelines for scaling Nginx on > Linux to >500k concurrent connections. Playing with the > nginx_http_push_stream module in streaming mode. Noticing periodic slow > accept and/or response headers.
I've scoured the Internet > looking/learning ways to tune Nginx/Linux but I think I've exhausted my > abilities. > > Any help would be appreciated. > > Hardware > Dual Nehalem 5520 > 24G RAM > Intel 82576 (igb) > Ubuntu 12.04.1 (3.2.0-31-generic x86_64) > > Thank You, > > John W > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 330.gif Type: image/gif Size: 96 bytes Desc: not available URL: From mdounin at mdounin.ru Wed Oct 10 11:01:14 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Oct 2012 15:01:14 +0400 Subject: zero size buf in output(Bug?) In-Reply-To: <5074A02F.9040006@pr1.ru> References: <20121009193654.GE40452@mdounin.ru> <5074A02F.9040006@pr1.ru> Message-ID: <20121010110114.GH40452@mdounin.ru> Hello! On Wed, Oct 10, 2012 at 02:07:43AM +0400, Andrey Feldman wrote: > Tried your patch, but after few minutes workers starts crashing. > GDB: > > Core was generated by `nginx: worker process '. > Program terminated with signal 11, Segmentation fault. > #0 0x000000000040ab19 in ngx_vslprintf (buf=0x7fff9e5f2785
0x7fff9e5f2785 out of bounds>, > last=0x7fff9e5f2f50
, fmt=, args=0x7fff9e5f2730) at > src/core/ngx_string.c:178 > 178 while (*fmt >= '0' && *fmt <= '9') { Strange. What's in the backtrace? > > > > > On 10/09/2012 11:36 PM, Maxim Dounin wrote: > >Thank you for report. From debug log it's more or less clear what > >goes on here, it indeed affects HEAD (as well as other header > >only) requests while loading cache entry. > > > >I'm able to reproduce it here with the following config: > > > > location = /proxy { > > proxy_pass http://127.0.0.1:8080/10m; > > proxy_cache one; > > proxy_cache_valid any 5s; > > sendfile off; > > output_buffers 1 1024; > > } > > > >It's more or less harmless (i.e. no bad things happen, worst one > >is log entry). > > > >Quick fix would be to do something like this: > > > >--- a/src/http/ngx_http_upstream.c > >+++ b/src/http/ngx_http_upstream.c > >@@ -2075,6 +2075,8 @@ ngx_http_upstream_send_response(ngx_http > > r->write_event_handler = ngx_http_request_empty_handler; > > c->error = 1; > > > >+ u->pipe->downstream_error = 1; > >+ > > } else { > > ngx_http_upstream_finalize_request(r, u, rc); > > return; > > > >Though it probably needs more attention. I'll take a look as time > >permits. > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.com/support.html From francis at daoine.org Wed Oct 10 12:36:16 2012 From: francis at daoine.org (Francis Daly) Date: Wed, 10 Oct 2012 13:36:16 +0100 Subject: Rewrite and FastCGI. In-Reply-To: References: <20121009125311.GM17159@craic.sysops.org> Message-ID: <20121010123616.GN17159@craic.sysops.org> On Tue, Oct 09, 2012 at 04:50:32PM +0200, Thomas Martin wrote: Hi there, great that you found a configuration that works. > FYI I read the documentation before sending my first email but I > misunderstood the part about "^~" and also I didn't clearly realize
Having re-read the docs, I can see that the interpretation could be made clearer. Can you suggest any wording that might have helped you understand then, what you do now? Maybe it can become easier for the next person ;-) Would adding a link to http://nginx.org/en/docs/http/request_processing.html#simple_php_site_configuration have helped, do you think? It is an example rather than complete documentation, and it leaves out the "^~" thing, so maybe it wouldn't have been directly useful. > So with your clarification I was able to make something working: > location /dir1 { > rewrite ^(.*) https://$host$1 permanent; > } > location /dir2 { > # php5-fpm > location ~ \.(php|phtml)$ { > fastcgi_param PHP_VALUE "include_path=/www/dir2:/www/dir2/common/include:."; > fastcgi_param ENVIRONMENT_NAME devs; > fastcgi_param DB_CONF_DIRECTORY /etc/itf/db/devs/; > > include /etc/nginx/sites-conf.d/php-fpm; > } > } > location / { > # php5-fpm > location ~ \.(php|phtml)$ { > fastcgi_param ENVIRONMENT_NAME devs; > fastcgi_param DB_CONF_DIRECTORY /etc/db/devs/; > > include /etc/nginx/sites-conf.d/php-fpm; > } > } > > It seems to work as expected; I guess I could use "^~" too but I > didn't tried yet. That looks reasonable. "^~" only matters if you have (top level) regex matches -- which here, you don't. (Avoiding top level regex matches makes it very easy to know which location{} will match any particular request. That's usually considered a Good Thing.) > At first I wanted to avoid repetition of the php-fpm's part but I > didn't realized that it wasn't doable. In this case, you want different fastcgi_param lines for different php scripts -- so repeating the common config (by using "include") and adding the specific parts, like you have done, is probably the best way. > Thanks for your help and again this is really appreciated! You're welcome. 
All the best, f -- Francis Daly francis at daoine.org From pr1 at pr1.ru Wed Oct 10 13:43:20 2012 From: pr1 at pr1.ru (Andrey Feldman) Date: Wed, 10 Oct 2012 17:43:20 +0400 Subject: zero size buf in output(Bug?) In-Reply-To: <20121010110114.GH40452@mdounin.ru> References: <20121009193654.GE40452@mdounin.ru> <5074A02F.9040006@pr1.ru> <20121010110114.GH40452@mdounin.ru> Message-ID: On Wed, Oct 10, 2012 at 3:01 PM, Maxim Dounin wrote: > Hello! > > On Wed, Oct 10, 2012 at 02:07:43AM +0400, Andrey Feldman wrote: > >> Tried your patch, but after few minutes workers starts crashing. >> GDB: >> >> Core was generated by `nginx: worker process '. >> Program terminated with signal 11, Segmentation fault. >> #0 0x000000000040ab19 in ngx_vslprintf (buf=0x7fff9e5f2785
> 0x7fff9e5f2785 out of bounds>, >> last=0x7fff9e5f2f50
, >> fmt=, args=0x7fff9e5f2730) at >> src/core/ngx_string.c:178 >> 178 while (*fmt >= '0' && *fmt <= '9') { > > Strange. What's in backtrace? Backtrace in attach. -- -- Andrey Feldman -------------- next part -------------- Core was generated by `nginx: worker process '. Program terminated with signal 11, Segmentation fault. #0 0x000000000040ab19 in ngx_vslprintf (buf=0x7fffefda6605 "", last=0x7fffefda6dd0 "\020", fmt=, args=0x7fffefda65b0) at src/core/ngx_string.c:178 178 while (*fmt >= '0' && *fmt <= '9') { (gdb) bt #0 0x000000000040ab19 in ngx_vslprintf (buf=0x7fffefda6605 "", last=0x7fffefda6dd0 "\020", fmt=, args=0x7fffefda65b0) at src/core/ngx_string.c:178 #1 0x000000000040692e in ngx_log_error_core (level=3, log=0x14143930, err=14, fmt=0x473e4a "\307", ) at src/core/ngx_log.c:108 #2 0x000000000040dfb2 in ngx_conf_set_path_slot (cf=0x14224fe8, cmd=, conf=) at src/core/ngx_file.c:267 #3 0x00000000141439b8 in ?? () #4 0x0000000014225850 in ?? () #5 0x00002b87720e8250 in ?? () #6 0x0000000014154770 in ?? () #7 0x0000000000449e6f in ngx_http_upstream_send_response (r=0x0, u=0x1) at src/http/ngx_http_upstream.c:2124 #8 ngx_http_upstream_process_header (r=0x0, u=0x1) at src/http/ngx_http_upstream.c:1644 #9 0x00000000141439b8 in ?? () #10 0x0000000000000078 in ?? () #11 0x4a8da000006b3500 in ?? () #12 0x0000000050757962 in ?? () #13 0x00000000141439b8 in ?? () #14 0x00000000141439b8 in ?? () #15 0xfffffffffffffffe in ?? () #16 0x0000000000000078 in ?? () #17 0x00000000141d7828 in ?? () #18 0x0000000050757962 in ?? () #19 0x000000000045106a in ngx_http_file_cache_exists (cache=0x4742a1, c=0x40aa1c) at src/http/ngx_http_file_cache.c:690 #20 0x0000000300000010 in ?? () #21 0x0000000014143930 in ?? () #22 0x000000000068c2c0 in ?? () #23 0x00000000141439b8 in ?? () #24 0x0000000014224c80 in ?? () #25 0x00000000141439b8 in ?? () #26 0x0000000014224a00 in ?? () #27 0x00002b87720e8250 in ?? () #28 0x00000000141d6798 in ?? () #29 0x0000000000000001 in ?? 
() #30 0x00000000004409d6 in ngx_http_variable_sent_keep_alive (r=, v=0x78, data=) at src/http/ngx_http_variables.c:1654 #31 0x000000000000012e in ?? () #32 0x0000000014224c80 in ?? () #33 0x0000000014224c80 in ?? () #34 0x00000000141439b8 in ?? () #35 0x00000000141439b8 in ?? () #36 0x00002b87720e8250 in ?? () #37 0x00000000141d6798 in ?? () #38 0x0000000014224a00 in ?? () #39 0x0000000014224c80 in ?? () #40 0x00000000141439b8 in ?? () #41 0x00002b87720e8250 in ?? () #42 0x00000000141d6798 in ?? () #43 0x000000000044321c in ngx_http_script_regex_start_code (e=0x1) at src/http/ngx_http_script.c:974 #44 0x00000000140610d0 in ?? () #45 0x00000000004762c9 in ngx_http_modern_browser (cf=0x7fffefda6605, cmd=0x0, conf=0x7fffefda6e00) at src/http/modules/ngx_http_browser_module.c:553 #46 0x0000000000000089 in ?? () ---Type to continue, or q to quit--- #47 0x00000000000019b0 in ?? () #48 0xfffffffffffffffc in ?? () #49 0x000000000041fee0 in ngx_event_pipe_read_upstream (p=0x0, do_write=) at src/event/ngx_event_pipe.c:382 #50 ngx_event_pipe (p=0x0, do_write=) at src/event/ngx_event_pipe.c:49 #51 0x0000000000001260 in ?? () #52 0x0000000000000062 in ?? () #53 0x00000000004220e9 in ngx_signal_handler (signo=25593) at src/os/unix/ngx_process.c:316 #54 0x0000000000000000 in ?? () From black.fledermaus at arcor.de Wed Oct 10 14:48:47 2012 From: black.fledermaus at arcor.de (basti) Date: Wed, 10 Oct 2012 16:48:47 +0200 Subject: nginx reverse proxy + wordpress in subdir Message-ID: <50758ACF.7080304@arcor.de> Hello @all, I have the URL test.example.com/blog. In nginx I have defined a reverse proxy: location /blog { proxy_pass http://blog.example.com/; proxy_redirect default; } All this is running very well. In the directory I want to install WordPress, but the link to the WordPress installer looks like http://test.example.com/wp-admin/setup-config.php instead of http://test.example.com/blog/wp-admin/setup-config.php. Is it possible to fix this on the nginx side?
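For the question above, one possible direction (a sketch under assumptions, not a verified fix for this exact setup): proxy_redirect only rewrites the Location and Refresh response headers, so redirects that the backend issues for "/" can be mapped back under "/blog/", but links WordPress writes into the HTML body come from its own configured site URL and are never touched by the proxy. A hypothetical variant of the location block:

```nginx
# Sketch only: assumes the blog backend is reachable as http://blog.example.com/
# and should appear under /blog/ on the front-end host.
location /blog/ {
    proxy_pass http://blog.example.com/;

    # Map backend redirects for "/" back under "/blog/".
    # proxy_redirect rewrites only the Location/Refresh headers,
    # not URLs embedded in the response body.
    proxy_redirect http://blog.example.com/ /blog/;

    proxy_set_header Host $host;
}
```

Links inside the generated pages would still need WordPress itself to be configured with http://test.example.com/blog as its site/home URL (the siteurl and home options), since nginx does not rewrite response bodies here.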
From nginx-forum at nginx.us Wed Oct 10 14:50:28 2012 From: nginx-forum at nginx.us (izghitu) Date: Wed, 10 Oct 2012 10:50:28 -0400 Subject: nginx upstream not balancing properly Message-ID: <4c0cbf1cbbc618ca4c5e2dbdcf55ffef.NginxMailingListEnglish@forum.nginx.org> Hi, I am running the latest nginx and using it for load balancing. I am using the upstream fair module. I have 3 backend servers running Apache. The problem is that when I start a load test the balancing is not done correctly: the first server in the upstream gets most of the traffic and the other 2 get very little traffic. If I restart nginx then all 3 backend servers start to get the same amount of traffic. Am I doing something wrong? Please advise Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231585,231585#msg-231585 From andrew at nginx.com Wed Oct 10 15:01:25 2012 From: andrew at nginx.com (Andrew Alexeev) Date: Wed, 10 Oct 2012 19:01:25 +0400 Subject: nginx upstream not balancing properly In-Reply-To: <4c0cbf1cbbc618ca4c5e2dbdcf55ffef.NginxMailingListEnglish@forum.nginx.org> References: <4c0cbf1cbbc618ca4c5e2dbdcf55ffef.NginxMailingListEnglish@forum.nginx.org> Message-ID: <35DA530C-290A-48C9-8ACF-4644A2A08A10@nginx.com> On Oct 10, 2012, at 6:50 PM, izghitu wrote: > Hi, > > I am running the latest nginx and using it for load balancing. I am using the > upstream fair module. > > I have 3 backend servers running Apache. The problem is that when I start a > load test the balancing is not done correctly: the first server in the > upstream gets most of the traffic and the other 2 get very little traffic. If I > restart nginx then all 3 backend servers start to get the same amount of > traffic. > > Am I doing something wrong? > > Please advise First of all, do you mean our standard least_conn functionality or some 3rd party module?
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#least_conn > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231585,231585#msg-231585 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Oct 10 15:24:53 2012 From: nginx-forum at nginx.us (izghitu) Date: Wed, 10 Oct 2012 11:24:53 -0400 Subject: nginx upstream not balancing properly In-Reply-To: <35DA530C-290A-48C9-8ACF-4644A2A08A10@nginx.com> References: <35DA530C-290A-48C9-8ACF-4644A2A08A10@nginx.com> Message-ID: Hi, This is what I am using http://wiki.nginx.org/HttpUpstreamFairModule Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231585,231587#msg-231587 From andrew at nginx.com Wed Oct 10 15:35:06 2012 From: andrew at nginx.com (Andrew Alexeev) Date: Wed, 10 Oct 2012 19:35:06 +0400 Subject: nginx upstream not balancing properly In-Reply-To: References: <35DA530C-290A-48C9-8ACF-4644A2A08A10@nginx.com> Message-ID: <6F7DEE69-F239-4023-9F95-BC3C6244DCA7@nginx.com> On Oct 10, 2012, at 7:24 PM, izghitu wrote: > Hi, > > This is what I am using > http://wiki.nginx.org/HttpUpstreamFairModule Can you try your setup with http://nginx.org/en/docs/http/ngx_http_upstream_module.html#least_conn which is part of standard distributions since 1.3.1 and 1.2.2-stable? > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231585,231587#msg-231587 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at nginx.us Wed Oct 10 15:42:16 2012 From: nginx-forum at nginx.us (izghitu) Date: Wed, 10 Oct 2012 11:42:16 -0400 Subject: nginx upstream not balancing properly In-Reply-To: <6F7DEE69-F239-4023-9F95-BC3C6244DCA7@nginx.com> References: <6F7DEE69-F239-4023-9F95-BC3C6244DCA7@nginx.com> Message-ID: Thanks for your input. 
I will test with that one and let you know how it goes. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231585,231591#msg-231591 From nginx-forum at nginx.us Wed Oct 10 16:41:07 2012 From: nginx-forum at nginx.us ((mt) Drew) Date: Wed, 10 Oct 2012 12:41:07 -0400 Subject: 504 Gateway Time-out media temple In-Reply-To: References: <20121002151157.GL40452@mdounin.ru> Message-ID: <8a47b859908153926e0b34a8aafdb160.NginxMailingListEnglish@forum.nginx.org> samueleast Wrote: ------------------------------------------------------- > YOU ARE AMAZING this has been doing my head in for nearly two days i > have hounded media temple took about 20mins on this forum, > > thank you so much ;) Hey there! I'm glad you were able to get your issue resolved. I'm not exactly sure what happened when you tried calling in to (mt) Media Temple but if the problem you call about is within our scope of support we will always be able to help you. With that said, we are always happy to help with out-of-scope issues like this one, but it depends on whether or not the agent is familiar with the issue you are calling about. I do apologize that you were unable to receive help on this when you called in. If you have any questions for us, we're available 24/7 by phone, chat and Twitter. Drew J (mt) Media Temple @MediaTemple Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231318,231592#msg-231592 From mdounin at mdounin.ru Wed Oct 10 16:51:22 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Oct 2012 20:51:22 +0400 Subject: zero size buf in output(Bug?) In-Reply-To: References: <20121009193654.GE40452@mdounin.ru> <5074A02F.9040006@pr1.ru> <20121010110114.GH40452@mdounin.ru> Message-ID: <20121010165122.GL40452@mdounin.ru> Hello! On Wed, Oct 10, 2012 at 05:43:20PM +0400, Andrey Feldman wrote: > On Wed, Oct 10, 2012 at 3:01 PM, Maxim Dounin wrote: > > Hello!
> > > > On Wed, Oct 10, 2012 at 02:07:43AM +0400, Andrey Feldman wrote: > > > >> Tried your patch, but after a few minutes workers start crashing. > >> GDB: > >> > >> Core was generated by `nginx: worker process '. > >> Program terminated with signal 11, Segmentation fault. > >> #0 0x000000000040ab19 in ngx_vslprintf (buf=0x7fff9e5f2785
<Address 0x7fff9e5f2785 out of bounds>, > >> last=0x7fff9e5f2f50 <Address 0x7fff9e5f2f50 out of bounds>, > >> fmt=, args=0x7fff9e5f2730) at > >> src/core/ngx_string.c:178 > >> 178 while (*fmt >= '0' && *fmt <= '9') { > > > > Strange. What's in backtrace? > > Backtrace in attach. Unfortunately, backtrace is completely unreadable. Not sure if it's a result of memory corruption, a result of optimization used, or just a core vs. binary mismatch. -- Maxim Dounin http://nginx.com/support.html From ryanchan404 at gmail.com Wed Oct 10 17:17:17 2012 From: ryanchan404 at gmail.com (Ryan Chan) Date: Thu, 11 Oct 2012 01:17:17 +0800 Subject: Starter/HelloWorld nginx module In-Reply-To: <4A2C7484-8216-44A3-B9E2-FECC1288B2E0@nginx.com> References: <4A2C7484-8216-44A3-B9E2-FECC1288B2E0@nginx.com> Message-ID: On Wed, Oct 10, 2012 at 5:43 PM, Andrew Alexeev wrote: > On Oct 10, 2012, at 1:24 PM, Ryan Chan wrote: > Thanks, we'll definitely ponder adding something like this to the distro. > Thanks, that will definitely help other people contribute more modules to nginx and make it more useful! Keep up the good work! Ryan. From andrejaenisch at googlemail.com Wed Oct 10 18:07:27 2012 From: andrejaenisch at googlemail.com (Andre Jaenisch) Date: Wed, 10 Oct 2012 20:07:27 +0200 Subject: zero size buf in output(Bug?) In-Reply-To: <20121010165122.GL40452@mdounin.ru> References: <20121009193654.GE40452@mdounin.ru> <5074A02F.9040006@pr1.ru> <20121010110114.GH40452@mdounin.ru> <20121010165122.GL40452@mdounin.ru> Message-ID: 2012/10/10 Maxim Dounin : > Hello! > > Unfortunately, backtrace is completely unreadable. GooglePlus shows the following as attachment: Core was generated by `nginx: worker process '. Program terminated with signal 11, Segmentation fault.
#0 0x000000000040ab19 in ngx_vslprintf (buf=0x7fffefda6605 "", last=0x7fffefda6dd0 "\020", fmt=, args=0x7fffefda65b0) at src/core/ngx_string.c:178 178 while (*fmt >= '0' && *fmt <= '9') { (gdb) bt #0 0x000000000040ab19 in ngx_vslprintf (buf=0x7fffefda6605 "", last=0x7fffefda6dd0 "\020", fmt=, args=0x7fffefda65b0) at src/core/ngx_string.c:178 #1 0x000000000040692e in ngx_log_error_core (level=3, log=0x14143930, err=14, fmt=0x473e4a "\307", ) at src/core/ngx_log.c:108 #2 0x000000000040dfb2 in ngx_conf_set_path_slot (cf=0x14224fe8, cmd=, conf=) at src/core/ngx_file.c:267 #3 0x00000000141439b8 in ?? () #4 0x0000000014225850 in ?? () #5 0x00002b87720e8250 in ?? () #6 0x0000000014154770 in ?? () #7 0x0000000000449e6f in ngx_http_upstream_send_response (r=0x0, u=0x1) at src/http/ngx_http_upstream.c:2124 #8 ngx_http_upstream_process_header (r=0x0, u=0x1) at src/http/ngx_http_upstream.c:1644 #9 0x00000000141439b8 in ?? () #10 0x0000000000000078 in ?? () #11 0x4a8da000006b3500 in ?? () #12 0x0000000050757962 in ?? () #13 0x00000000141439b8 in ?? () #14 0x00000000141439b8 in ?? () #15 0xfffffffffffffffe in ?? () #16 0x0000000000000078 in ?? () #17 0x00000000141d7828 in ?? () #18 0x0000000050757962 in ?? () #19 0x000000000045106a in ngx_http_file_cache_exists (cache=0x4742a1, c=0x40aa1c) at src/http/ngx_http_file_cache.c:690 #20 0x0000000300000010 in ?? () #21 0x0000000014143930 in ?? () #22 0x000000000068c2c0 in ?? () #23 0x00000000141439b8 in ?? () #24 0x0000000014224c80 in ?? () #25 0x00000000141439b8 in ?? () #26 0x0000000014224a00 in ?? () #27 0x00002b87720e8250 in ?? () #28 0x00000000141d6798 in ?? () #29 0x0000000000000001 in ?? () #30 0x00000000004409d6 in ngx_http_variable_sent_keep_alive (r=, v=0x78, data=) at src/http/ngx_http_variables.c:1654 #31 0x000000000000012e in ?? () #32 0x0000000014224c80 in ?? () #33 0x0000000014224c80 in ?? () #34 0x00000000141439b8 in ?? () #35 0x00000000141439b8 in ?? () #36 0x00002b87720e8250 in ?? () #37 0x00000000141d6798 in ?? 
() #38 0x0000000014224a00 in ?? () #39 0x0000000014224c80 in ?? () #40 0x00000000141439b8 in ?? () #41 0x00002b87720e8250 in ?? () #42 0x00000000141d6798 in ?? () #43 0x000000000044321c in ngx_http_script_regex_start_code (e=0x1) at src/http/ngx_http_script.c:974 #44 0x00000000140610d0 in ?? () #45 0x00000000004762c9 in ngx_http_modern_browser (cf=0x7fffefda6605, cmd=0x0, conf=0x7fffefda6e00) at src/http/modules/ngx_http_browser_module.c:553 #46 0x0000000000000089 in ?? () ---Type to continue, or q to quit--- #47 0x00000000000019b0 in ?? () #48 0xfffffffffffffffc in ?? () #49 0x000000000041fee0 in ngx_event_pipe_read_upstream (p=0x0, do_write=) at src/event/ngx_event_pipe.c:382 #50 ngx_event_pipe (p=0x0, do_write=) at src/event/ngx_event_pipe.c:49 #51 0x0000000000001260 in ?? () #52 0x0000000000000062 in ?? () #53 0x00000000004220e9 in ngx_signal_handler (signo=25593) at src/os/unix/ngx_process.c:316 #54 0x0000000000000000 in ?? () Not unreadable, but a little bit confusing to me. Andre Jaenisch From aweber at comcast.net Wed Oct 10 18:14:37 2012 From: aweber at comcast.net (AJ Weber) Date: Wed, 10 Oct 2012 14:14:37 -0400 Subject: this may be a dumb ssl question, but here goes... Message-ID: <5075BB0D.8030201@comcast.net> Can I install and configure nginx to use a "public"/global CA's SSL Certificate like Verisign, AND force (require) the use of client SSL certificates, AND allow those client/browser-certificates to be from a different CA/root? For example, openca or some self-signed setup that I use to just distribute client certificates to my registered users? Let me know if I am not asking the question correctly. Thanks, AJ From mdounin at mdounin.ru Wed Oct 10 18:27:55 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Oct 2012 22:27:55 +0400 Subject: zero size buf in output(Bug?) 
In-Reply-To: References: <20121009193654.GE40452@mdounin.ru> <5074A02F.9040006@pr1.ru> <20121010110114.GH40452@mdounin.ru> <20121010165122.GL40452@mdounin.ru> Message-ID: <20121010182755.GO40452@mdounin.ru> Hello! On Wed, Oct 10, 2012 at 08:07:27PM +0200, Andre Jaenisch wrote: > 2012/10/10 Maxim Dounin : > > Hello! > > > > Unfortunately, backtrace is completely unreadable. > > GooglePlus shows the following as attachment: > > > Core was generated by `nginx: worker process '. > Program terminated with signal 11, Segmentation fault. > #0 0x000000000040ab19 in ngx_vslprintf (buf=0x7fffefda6605 "", > last=0x7fffefda6dd0 "\020", fmt=, > args=0x7fffefda65b0) > at src/core/ngx_string.c:178 > 178 while (*fmt >= '0' && *fmt <= '9') { > (gdb) bt > #0 0x000000000040ab19 in ngx_vslprintf (buf=0x7fffefda6605 "", > last=0x7fffefda6dd0 "\020", fmt=, > args=0x7fffefda65b0) > at src/core/ngx_string.c:178 > #1 0x000000000040692e in ngx_log_error_core (level=3, log=0x14143930, > err=14, fmt=0x473e4a "\307", ) > at src/core/ngx_log.c:108 > #2 0x000000000040dfb2 in ngx_conf_set_path_slot (cf=0x14224fe8, > cmd=, conf=) > at src/core/ngx_file.c:267 > #3 0x00000000141439b8 in ?? () > #4 0x0000000014225850 in ?? () > #5 0x00002b87720e8250 in ?? () > #6 0x0000000014154770 in ?? () [...] > Not unreadable, but a little bit confusing to me. I can see gdb output in the attachment, thank you. :) But it's not something I can read (and understand). Obviously backtrace is invalid/corrupted, and I already named several possible reasons. -- Maxim Dounin http://nginx.com/support.html From ne at vbart.ru Wed Oct 10 18:27:49 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 10 Oct 2012 22:27:49 +0400 Subject: zero size buf in output(Bug?) In-Reply-To: References: <20121010165122.GL40452@mdounin.ru> Message-ID: <201210102227.49345.ne@vbart.ru> On Wednesday 10 October 2012 22:07:27 Andre Jaenisch wrote: > 2012/10/10 Maxim Dounin : > > Hello! 
> > > > Unfortunately, backtrace is completely unreadable. > > GooglePlus shows the following as attachment: > [...] I believe Maxim meant in terms of informativeness. The backtrace looks broken. wbr, Valentin V. Bartenev -- http://nginx.com/support.html From pr1 at pr1.ru Wed Oct 10 18:30:46 2012 From: pr1 at pr1.ru (Andrey Feldman) Date: Wed, 10 Oct 2012 22:30:46 +0400 Subject: zero size buf in output(Bug?) In-Reply-To: <20121010182755.GO40452@mdounin.ru> References: <20121009193654.GE40452@mdounin.ru> <5074A02F.9040006@pr1.ru> <20121010110114.GH40452@mdounin.ru> <20121010165122.GL40452@mdounin.ru> <20121010182755.GO40452@mdounin.ru> Message-ID: <5075BED6.90109@pr1.ru> On 10/10/2012 10:27 PM, Maxim Dounin wrote: > But it's not something I can read (and understand). Obviously > backtrace is invalid/corrupted, and I already named several > possible reasons. I made a new build, will try it now and give you a normal bt, I hope :) From pr1 at pr1.ru Wed Oct 10 19:05:13 2012 From: pr1 at pr1.ru (Andrey Feldman) Date: Wed, 10 Oct 2012 23:05:13 +0400 Subject: zero size buf in output(Bug?) In-Reply-To: <5075BED6.90109@pr1.ru> References: <20121009193654.GE40452@mdounin.ru> <5074A02F.9040006@pr1.ru> <20121010110114.GH40452@mdounin.ru> <20121010165122.GL40452@mdounin.ru> <20121010182755.GO40452@mdounin.ru> <5075BED6.90109@pr1.ru> Message-ID: <5075C6E9.5060003@pr1.ru> In the attachment. This one looks interesting. On 10/10/2012 10:30 PM, Andrey Feldman wrote: > On 10/10/2012 10:27 PM, Maxim Dounin wrote: >> But it's not something I can read (and understand). Obviously >> backtrace is invalid/corrupted, and I already named several >> possible reasons. > > I made a new build, will try it now and give you a normal bt, I hope :) > -------------- next part -------------- Core was generated by `nginx: worker process '. Program terminated with signal 11, Segmentation fault.
#0 0x000000000040af99 in ngx_vslprintf (buf=0x7fffce908963 "\316\002", last=0x7fffce909130 "", fmt=, args=0x7fffce908910) at src/core/ngx_string.c:254 254 while (*p && buf < last) { (gdb) bt #0 0x000000000040af99 in ngx_vslprintf (buf=0x7fffce908963 "\316\002", last=0x7fffce909130 "", fmt=, args=0x7fffce908910) at src/core/ngx_string.c:254 #1 0x0000000000406bfe in ngx_log_error_core (level=3, log=0x1bcbf960, err=14, fmt=0x47f5c0 "chmod() \"%s\" failed") at src/core/ngx_log.c:120 #2 0x000000000040e562 in ngx_ext_rename_file (src=0x1bdb2540, to=0x1bdb1c98, ext=0x7fffce909330) at src/core/ngx_file.c:557 #3 0x000000000045153a in ngx_http_file_cache_update (r=0x1bcbf9e8, tf=0x1bdb2538) at src/http/ngx_http_file_cache.c:938 #4 0x0000000000446ed6 in ngx_http_upstream_process_request (r=0x1bcbf9e8) at src/http/ngx_http_upstream.c:2686 #5 0x000000000044a068 in ngx_http_upstream_send_response (r=0x1bcbf9e8, u=0x1bdb1930) at src/http/ngx_http_upstream.c:2342 #6 ngx_http_upstream_process_header (r=0x1bcbf9e8, u=0x1bdb1930) at src/http/ngx_http_upstream.c:1644 #7 0x00000000004471d9 in ngx_http_upstream_handler (ev=0x1bcd83f0) at src/http/ngx_http_upstream.c:935 #8 0x000000000041d7b4 in ngx_event_process_posted (cycle=0x1bbca0c0, posted=0x6aec68) at src/event/ngx_event_posted.c:40 #9 0x0000000000424646 in ngx_worker_process_cycle (cycle=0x1bbca0c0, data=) at src/os/unix/ngx_process_cycle.c:808 #10 0x0000000000422bfd in ngx_spawn_process (cycle=0x1bbca0c0, proc=0x424580 , data=0x0, name=0x4823ce "worker process", respawn=-3) at src/os/unix/ngx_process.c:198 #11 0x0000000000423b7c in ngx_start_worker_processes (cycle=0x1bbca0c0, n=96, type=-3) at src/os/unix/ngx_process_cycle.c:365 #12 0x0000000000424e14 in ngx_master_process_cycle (cycle=0x1bbca0c0) at src/os/unix/ngx_process_cycle.c:137 #13 0x0000000000406795 in main (argc=-829384832, argv=) at src/core/nginx.c:410 From john at disqus.com Wed Oct 10 20:11:42 2012 From: john at disqus.com (John Watson) Date: Wed, 10 Oct 2012 
13:11:42 -0700 Subject: Handling 500k concurrent connections on Linux In-Reply-To: <80E7F6B1-CBE0-4E72-943D-5B915ECF5DC0@nginx.com> References: <20121009191003.GA26608@asylum.local> <80E7F6B1-CBE0-4E72-943D-5B915ECF5DC0@nginx.com> Message-ID: <20121010201142.GA21197@asylum.local> 1) Error logs are clean (except for some 404s) 2) nginx.conf and sysctl.conf: https://gist.github.com/0b3b52050254e273ff11 Set TX/RX descriptors to 4096/4096 (maximum): ethtool -G eth1 tx 4096 rx 4096 Disabled irqbalanced and pinned IRQs to CPU0-7 for NIC Don't know exact amount, but a good majority of the connections are sitting idle for 90s before being closed. Some graphs on the network interface for past couple days: https://www.dropbox.com/s/0bl304ulhqp6a4n/push_stream_network.png Thank you, John W On Wed, Oct 10, 2012 at 01:05:05PM +0400, Andrew Alexeev wrote: > John, > > On Oct 9, 2012, at 11:10 PM, John Watson wrote: > > > I was wondering if anyone had some tips/guidelines for scaling Nginx on > > Linux to >500k concurrent connections. Playing with the > > nginx_http_push_stream module in streaming mode. Noticing periodic slow > > accept and/or response headers. I've scoured the Internet > > looking/learning ways to tune Nginx/Linux but I think I've exhausted my > > abilities. > > > > Any help would be appreciated. > > > > Hardware > > Dual Nehalem 5520 > > 24G RAM > > Intel 82576 (igb) > > Ubuntu 12.04.1 (3.2.0-31-generic x86_64) > > > > Thank You, > > > > John W > > I'd assume you've already checked/fixed the following, right? > > 1) Error logs - anything wrong seen in there? > > 2) http://nginx.org/en/docs/ngx_core_module.html#multi_accept and http://nginx.org/en/docs/ngx_core_module.html#accept_mutex - did you try it on/off? 
> > 3) file descriptors limits (cat /proc/sys/fs/file-max, sudo - nginx && ulimit, worker_rlimit_nofile) > > 4) sysctl net.ipv4.ip_local_port_range (if you're aiming at proxying all those connections to upstreams) > > Additional information about what's happening in all those 500k connections might be helpful, as well as the relevant configuration section :) > > Hope this helps > > > -- > AA @ nginx > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 834 bytes Desc: not available URL: From aweber at comcast.net Wed Oct 10 21:16:12 2012 From: aweber at comcast.net (AJ Weber) Date: Wed, 10 Oct 2012 17:16:12 -0400 Subject: this may be a dumb ssl question, but here goes... In-Reply-To: <5075BB0D.8030201@comcast.net> References: <5075BB0D.8030201@comcast.net> Message-ID: <5075E59C.2030704@comcast.net> I think I might have found my answer to this. I can generate my own (or use any different) CA and add that in ssl_client_certificate ; And then set ssl_verify_client on; This appears to work in initial testing. So my follow-up is: 1) Does this sound like the way to make my original question work? 2) Can I revoke certificates, and will nginx check a revocation list of some kind? Thanks again, AJ On 10/10/2012 2:14 PM, AJ Weber wrote: > Can I install and configure nginx to use a "public"/global CA's SSL > Certificate like Verisign, AND force (require) the use of client SSL > certificates, AND allow those client/browser-certificates to be from a > different CA/root? For example, openca or some self-signed setup that > I use to just distribute client certificates to my registered users? > > Let me know if I am not asking the question correctly. 
> > Thanks, > AJ > From mdounin at mdounin.ru Wed Oct 10 21:16:54 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Oct 2012 01:16:54 +0400 Subject: zero size buf in output(Bug?) In-Reply-To: <5075C6E9.5060003@pr1.ru> References: <20121009193654.GE40452@mdounin.ru> <5074A02F.9040006@pr1.ru> <20121010110114.GH40452@mdounin.ru> <20121010165122.GL40452@mdounin.ru> <20121010182755.GO40452@mdounin.ru> <5075BED6.90109@pr1.ru> <5075C6E9.5060003@pr1.ru> Message-ID: <20121010211654.GQ40452@mdounin.ru> Hello! On Wed, Oct 10, 2012 at 11:05:13PM +0400, Andrey Feldman wrote: > In attach. > This one looks interesting. > > > On 10/10/2012 10:30 PM, Andrey Feldman wrote: > >On 10/10/2012 10:27 PM, Maxim Dounin wrote: > >>But it's not something I can read (and understand). Obviously > >>backtrace is invalid/corrupted, and I already named several > >>possible reasons. > > > >I made a new build, will try it now and give you normal bt, i hope:) > > > Core was generated by `nginx: worker process '. > Program terminated with signal 11, Segmentation fault. > #0 0x000000000040af99 in ngx_vslprintf (buf=0x7fffce908963 "\316\002", last=0x7fffce909130 "", fmt=, > args=0x7fffce908910) at src/core/ngx_string.c:254 > 254 while (*p && buf < last) { > (gdb) bt > #0 0x000000000040af99 in ngx_vslprintf (buf=0x7fffce908963 "\316\002", last=0x7fffce909130 "", fmt=, > args=0x7fffce908910) at src/core/ngx_string.c:254 > #1 0x0000000000406bfe in ngx_log_error_core (level=3, log=0x1bcbf960, err=14, fmt=0x47f5c0 "chmod() \"%s\" failed") > at src/core/ngx_log.c:120 > #2 0x000000000040e562 in ngx_ext_rename_file (src=0x1bdb2540, to=0x1bdb1c98, ext=0x7fffce909330) at src/core/ngx_file.c:557 > #3 0x000000000045153a in ngx_http_file_cache_update (r=0x1bcbf9e8, tf=0x1bdb2538) at src/http/ngx_http_file_cache.c:938 > #4 0x0000000000446ed6 in ngx_http_upstream_process_request (r=0x1bcbf9e8) at src/http/ngx_http_upstream.c:2686 Ok, thanx, I see what goes on. 
The patch is indeed wrong, it broke small HEAD requests. Disregard it. -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Wed Oct 10 22:18:59 2012 From: nginx-forum at nginx.us (jkl) Date: Wed, 10 Oct 2012 18:18:59 -0400 Subject: apparent deadlock with fastcgi Message-ID: <0d249f10bcc747055b10829848a5e7ae.NginxMailingListEnglish@forum.nginx.org> I have a simple fastcgi responder that converts a JSON post to a CSV download. It works in a streaming fashion, writing the response before finishing reading the request. According to the FastCGI specification, such operation is allowed: http://www.fastcgi.com/devkit/doc/fcgi-spec.html "The Responder application sends CGI/1.1 stdout data to the Web server over FCGI_STDOUT, and CGI/1.1 stderr data over FCGI_STDERR. The application sends these concurrently, not one after the other. The application must wait to finish reading FCGI_PARAMS before it begins writing FCGI_STDOUT and FCGI_STDERR, but it needn't finish reading from FCGI_STDIN before it begins writing these two streams." Using a debugger I observe that my responder blocks while reading from FCGI_STDIN before I have received the whole request, which I think implies there is a bug on the nginx side. If I buffer the entire response in memory before sending it to back to nginx, I don't see the problem with the blocking read on the request. I only observe this problem when the request is "large enough", but I'm not sure what exact size triggers the problem. Does nginx support this sort of behavior for fastcgi responders? If not, is that fact documented somewhere that I missed? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231607,231607#msg-231607 From mdounin at mdounin.ru Wed Oct 10 22:48:46 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Oct 2012 02:48:46 +0400 Subject: apparent deadlock with fastcgi In-Reply-To: <0d249f10bcc747055b10829848a5e7ae.NginxMailingListEnglish@forum.nginx.org> References: <0d249f10bcc747055b10829848a5e7ae.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121010224846.GS40452@mdounin.ru> Hello! On Wed, Oct 10, 2012 at 06:18:59PM -0400, jkl wrote: > I have a simple fastcgi responder that converts a JSON post to a CSV > download. It works in a streaming fashion, writing the response before > finishing reading the request. According to the FastCGI specification, such > operation is allowed: > > http://www.fastcgi.com/devkit/doc/fcgi-spec.html > > "The Responder application sends CGI/1.1 stdout data to the Web server over > FCGI_STDOUT, and CGI/1.1 stderr data over FCGI_STDERR. The application sends > these concurrently, not one after the other. The application must wait to > finish reading FCGI_PARAMS before it begins writing FCGI_STDOUT and > FCGI_STDERR, but it needn't finish reading from FCGI_STDIN before it begins > writing these two streams." > > Using a debugger I observe that my responder blocks while reading from > FCGI_STDIN before I have received the whole request, which I think implies > there is a bug on the nginx side. > > If I buffer the entire response in memory before sending it to back to > nginx, I don't see the problem with the blocking read on the request. > > I only observe this problem when the request is "large enough", but I'm not > sure what exact size triggers the problem. > > Does nginx support this sort of behavior for fastcgi responders? If not, is > that fact documented somewhere that I missed? It's not supported. As long as nginx sees full response headers - it considers rest of request body unneeded and stops sending it. 
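Concretely, then, a responder that must work behind nginx has to drain FCGI_STDIN completely before emitting its response headers. A minimal sketch of that buffer-first shape, in plain Python with in-memory streams standing in for the real FastCGI plumbing (the function and names here are illustrative, not part of any FastCGI library):

```python
import io

def respond(environ, stdin, stdout):
    # Drain the whole request body FIRST: once nginx has seen complete
    # response headers it stops forwarding the remaining FCGI_STDIN data,
    # so a responder that streams its output while still reading can block.
    length = int(environ.get("CONTENT_LENGTH", "0") or 0)
    body = stdin.read(length)

    # Only after the body is fully buffered, produce the response.
    payload = body.upper()  # stand-in for the real JSON -> CSV conversion
    stdout.write("Content-Type: text/csv\r\n\r\n")
    stdout.write(payload)

# Example run with in-memory streams in place of the FastCGI ones:
out = io.StringIO()
respond({"CONTENT_LENGTH": "5"}, io.StringIO("ab,cd"), out)
```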
-- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Wed Oct 10 22:51:26 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Oct 2012 02:51:26 +0400 Subject: this may be a dumb ssl question, but here goes... In-Reply-To: <5075E59C.2030704@comcast.net> References: <5075BB0D.8030201@comcast.net> <5075E59C.2030704@comcast.net> Message-ID: <20121010225126.GT40452@mdounin.ru> Hello! On Wed, Oct 10, 2012 at 05:16:12PM -0400, AJ Weber wrote: > I think I might have found my answer to this. > > I can generate my own (or use any different) CA and add that in > ssl_client_certificate ; > And then set ssl_verify_client on; > > This appears to work in initial testing. So my follow-up is: > 1) Does this sound like the way to make my original question work? Yes. > 2) Can I revoke certificates, and will nginx check a revocation list > of some kind? http://nginx.org/r/ssl_crl > > Thanks again, > AJ > > > On 10/10/2012 2:14 PM, AJ Weber wrote: > >Can I install and configure nginx to use a "public"/global CA's > >SSL Certificate like Verisign, AND force (require) the use of > >client SSL certificates, AND allow those > >client/browser-certificates to be from a different CA/root? For > >example, openca or some self-signed setup that I use to just > >distribute client certificates to my registered users? > > > >Let me know if I am not asking the question correctly. > > > >Thanks, > >AJ > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.com/support.html From aweber at comcast.net Wed Oct 10 23:16:03 2012 From: aweber at comcast.net (AJ Weber) Date: Wed, 10 Oct 2012 19:16:03 -0400 Subject: this may be a dumb ssl question, but here goes... 
In-Reply-To: <20121010225126.GT40452@mdounin.ru> References: <5075BB0D.8030201@comcast.net> <5075E59C.2030704@comcast.net> <20121010225126.GT40452@mdounin.ru> Message-ID: <507601B3.9010104@comcast.net> So far, I am loving nginx. :) Thanks! On 10/10/2012 6:51 PM, Maxim Dounin wrote: > Hello! > > On Wed, Oct 10, 2012 at 05:16:12PM -0400, AJ Weber wrote: > >> I think I might have found my answer to this. >> >> I can generate my own (or use any different) CA and add that in >> ssl_client_certificate; >> And then set ssl_verify_client on; >> >> This appears to work in initial testing. So my follow-up is: >> 1) Does this sound like the way to make my original question work? > Yes. > >> 2) Can I revoke certificates, and will nginx check a revocation list >> of some kind? > http://nginx.org/r/ssl_crl > >> Thanks again, >> AJ >> >> >> On 10/10/2012 2:14 PM, AJ Weber wrote: >>> Can I install and configure nginx to use a "public"/global CA's >>> SSL Certificate like Verisign, AND force (require) the use of >>> client SSL certificates, AND allow those >>> client/browser-certificates to be from a different CA/root? For >>> example, openca or some self-signed setup that I use to just >>> distribute client certificates to my registered users? >>> >>> Let me know if I am not asking the question correctly. >>> >>> Thanks, >>> AJ >>> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Thu Oct 11 01:17:19 2012 From: nginx-forum at nginx.us (zildjohn01) Date: Wed, 10 Oct 2012 21:17:19 -0400 Subject: limit_req seems to have no effect, but I would prefer it did In-Reply-To: <201210091414.45724.ne@vbart.ru> References: <201210091414.45724.ne@vbart.ru> Message-ID: <9c8506f2332faaacfac121d1563568f9.NginxMailingListEnglish@forum.nginx.org> That definitely explains the behavior I was seeing. But to me, any way to bypass the rate limiter seems like a security hole. 
Is there any way to change the phase/order of these two directives, or to otherwise cause rewritten requests to be rate limited? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231530,231611#msg-231611 From quintinpar at gmail.com Thu Oct 11 05:50:17 2012 From: quintinpar at gmail.com (Quintin Par) Date: Wed, 10 Oct 2012 22:50:17 -0700 Subject: Deny ips, and pick ips from a file. Message-ID: Hi all, I need to deny users by IP. I assume we need to do something like this: location / { # block one workstation deny 192.168.1.1; # allow anyone in 192.168.1.0/24 allow 192.168.1.0/24; # drop rest of the world deny all; } But how can I pass in the list of IPs from a file? A file which will get updated from time to time. Can I pass the IPs something like this: deny /tmp/iplist.txt; Will Nginx refresh the IP list in memory if the file gets changed? -Quintin -------------- next part -------------- An HTML attachment was scrubbed... URL: From tmartincpp at gmail.com Thu Oct 11 07:47:50 2012 From: tmartincpp at gmail.com (Thomas Martin) Date: Thu, 11 Oct 2012 09:47:50 +0200 Subject: Rewrite and FastCGI. In-Reply-To: <20121010123616.GN17159@craic.sysops.org> References: <20121009125311.GM17159@craic.sysops.org> <20121010123616.GN17159@craic.sysops.org> Message-ID: Hi! 2012/10/10 Francis Daly : > Having re-read the docs, I can see that the interpretation could be > made clearer. Can you suggest any wording that might have helped you > understand then, what you do now? Maybe it can become easier for the > next person ;-) > > Would adding a link to > http://nginx.org/en/docs/http/request_processing.html#simple_php_site_configuration > have helped, do you think? It is an example rather than complete > documentation, and it leaves out the "^~" thing, so maybe it wouldn't > have been directly useful. > English is not my native language and I'm really bad at it so this can be the explanation of my misunderstanding.
:/ Anyway maybe the section about location could be reorganized a bit to explain possibilities ([ = | ~ | ~* | ^~ ]) in a dedicated part. > > That looks reasonable. "^~" only matters if you have (top level) regex > matches -- which here, you don't. > > (Avoiding top level regex matches makes it very easy to know which > location{} will match any particular request. That's usually considered > a Good Thing.) > Ok, good to know. > In this case, you want different fastcgi_param lines for different php > scripts -- so repeating the common config (by using "include") and adding > the specific parts, like you have done, is probably the best way. Thanks for this confirmation. From ne at vbart.ru Thu Oct 11 09:31:42 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Thu, 11 Oct 2012 13:31:42 +0400 Subject: limit_req seems to have no effect, but I would prefer it did In-Reply-To: <9c8506f2332faaacfac121d1563568f9.NginxMailingListEnglish@forum.nginx.org> References: <201210091414.45724.ne@vbart.ru> <9c8506f2332faaacfac121d1563568f9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201210111331.42466.ne@vbart.ru> On Thursday 11 October 2012 05:17:19 zildjohn01 wrote: > That definitely explains the behavior I was seeing. But to me, any way to > bypass the rate limiter seems like a security hole. Just "return 410;" is much cheaper than the whole request limitation thing. And that's a good reason to save resources and not do any limiting at all in this case. The limit modules should be used to limit access to resource-consuming tasks, not trivial ones. Limiting for "return 410;" seems pointless to me. > Is there any way to change the phase/order of these two directives, No, there is no way. > or to otherwise cause rewritten requests to be rate limited? You can try some workaround like this: location / { try_files /410 @410; } location @410 { return 410; } wbr, Valentin V.
Bartenev -- http://nginx.com/support.html From citrin at citrin.ru Thu Oct 11 10:03:09 2012 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Thu, 11 Oct 2012 14:03:09 +0400 Subject: Deny ips, and pick ips from a file. In-Reply-To: References: Message-ID: <5076995D.6080303@citrin.ru> On 11.10.2012 09:50, Quintin Par wrote: > I need to deny users by ip. I assume we need to do something like this > > location / { > > # block one workstation > > deny 192.168.1.1; > > # allow anyone in 192.168.1.0/24 > > allow 192.168.1.0/24 ; > > # drop rest of the world > > deny all; > > } > > But how can I pass on the list of ips from a file? A file which will get updated > from time to time. > > Can I pass the ips something like this > > deny /tmp/iplist.txt; If the list of IPs to block is really big, then it is better to use the geo module instead of allow/deny: http://nginx.org/en/docs/http/ngx_http_geo_module.html geo $denyed_host { default 1; include /tmp/iplist.txt; } ... if ($denyed_host) { return 403; } iplist.txt should contain lines like: 192.168.1.0/24 0; 192.168.1.1/32 1; After updating /tmp/iplist.txt you should reload nginx (e.g. run nginx -s reload). -- Anton Yuzhaninov From nginx-forum at nginx.us Thu Oct 11 10:55:13 2012 From: nginx-forum at nginx.us (Wolfsrudel) Date: Thu, 11 Oct 2012 06:55:13 -0400 Subject: Deny ips, and pick ips from a file.
In-Reply-To: References: <5076995D.6080303@citrin.ru> Message-ID: <874nm1uv9c.wl%appa@perusio.net> On 11 Out 2012 12h55 CEST, nginx-forum at nginx.us wrote: > http://bash.cyberciti.biz/web-server/nginx-shell-script-to-block-spamhaus-lasso-drop-spam-ip-address/ Also a shameless plug - I leave the server handling to be done à la carte :) https://github.com/perusio/nginx-spamhaus-drop This creates a file to be used by the geo directive. --- appa > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,231617,231642#msg-231642 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From andrejaenisch at googlemail.com Thu Oct 11 11:13:04 2012 From: andrejaenisch at googlemail.com (Andre Jaenisch) Date: Thu, 11 Oct 2012 13:13:04 +0200 Subject: Rewrite and FastCGI. In-Reply-To: References: <20121009125311.GM17159@craic.sysops.org> <20121010123616.GN17159@craic.sysops.org> Message-ID: 2012/10/11 Thomas Martin : > Hi! Hello. > English is not my native language and I'm really bad at it so this can > be the explanation of my misunderstanding. :/ > Anyway maybe the section about location could be reorganized a bit to > explain possibilities ([ = | ~ | ~* | ^~ ]) in a dedicated part. English isn't my native language either, but regular expressions ("regex" resp. "RegExp") are such a widespread topic that you can read about the basics on Wikipedia, too -> http://en.wikipedia.org/wiki/Regular_expression Choose the translation in your mother tongue in the left-hand sidebar ;-) Best regards, Andre From tmartincpp at gmail.com Thu Oct 11 11:20:03 2012 From: tmartincpp at gmail.com (Thomas Martin) Date: Thu, 11 Oct 2012 13:20:03 +0200 Subject: Rewrite and FastCGI. In-Reply-To: References: <20121009125311.GM17159@craic.sysops.org> <20121010123616.GN17159@craic.sysops.org> Message-ID: Hello Andre. 2012/10/11 Andre Jaenisch : > 2012/10/11 Thomas Martin : >> Hi! > > Hello.
>> English is not my native language and I'm really bad at it so this can >> be the explanation of my misunderstanding. :/ >> Anyway maybe the section about location could be reorganized a bit to >> explain possibilities ([ = | ~ | ~* | ^~ ]) in a dedicated part. > > English isn't my native language either, but regular expressions > ("regex" resp. "RegExp") are such a widespread topic that you can > read about the basics on Wikipedia, too -> > http://en.wikipedia.org/wiki/Regular_expression > Choose the translation in your mother tongue in the left-hand sidebar ;-) > > Best regards, Andre > Indeed, you are right. Regards. From jrpozo at conclase.net Thu Oct 11 12:03:07 2012 From: jrpozo at conclase.net (Juan R. Pozo) Date: Thu, 11 Oct 2012 14:03:07 +0200 Subject: Does nginx have a directive analogous to Apache's <Directory>? Message-ID: <343379795.20121011140307@conclase.net> Hello, I'm trying to set up an nginx server to replace our current setup based on Apache. Our users have password-protected directories (with directives in .htaccess files) which we need to keep protected in the new setup. As far as I can see, nginx doesn't have a Directory directive, but only a Location directive, which refers to URIs instead of file system paths. This means that if a directory is reachable through more than one URL, I have to include them all in one or more Location directives. For example: domain.com, root: /home/user/public_html sub.domain.com, root: /home/user/public_html/sub If a user protects sub.domain.com/admin (directory /home/user/public_html/sub/admin) I must make sure that both /admin in sub.domain.com and /sub/admin in domain.com are protected with the same password file. I'd rather protect the directory itself, and not every URL through which visitors can access its contents. So, does nginx have any mechanism that allows referring to file system paths in configuration files, like Apache's <Directory> blocks do? Thank you. Regards. -- Juan R.
Pozo - http://html.conclase.net/ From nginx-forum at nginx.us Thu Oct 11 13:19:23 2012 From: nginx-forum at nginx.us (mrtn) Date: Thu, 11 Oct 2012 09:19:23 -0400 Subject: URI re-mapping using try_files Message-ID: Hello, I am new to nginx, and am still learning my way. I use nginx to serve static html pages to users, and in particular, I want users who visit 'www.example.com/public/doc/abc123?para=data' to be served with the file '/home/www/example/public/doc/abc123/abc123.html' on the server, but 'www.example.com/public/doc/abc123?para=data' stays in the user's browser address bar. So there are two issues here: 1. map URI pattern '/public/doc/blah' to file '/home/www/example/public/doc/blah/blah.html' on the server; 2. keep query string '?para=data' in the address bar. I wonder if these can be solved by using `try_files` within a location block. For example, take the '/blah' part out of $uri, and append it to the $uri followed by '.html'. And possibly append $args after '.html'? How exactly do we do this in nginx? Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231672,231672#msg-231672 From aweber at comcast.net Thu Oct 11 15:35:16 2012 From: aweber at comcast.net (AJ Weber) Date: Thu, 11 Oct 2012 11:35:16 -0400 Subject: this may be a dumb ssl question, but here goes... In-Reply-To: <20121010225126.GT40452@mdounin.ru> References: <5075BB0D.8030201@comcast.net> <5075E59C.2030704@comcast.net> <20121010225126.GT40452@mdounin.ru> Message-ID: <5076E734.4090000@comcast.net> I didn't double-check yet, but it looks like if I set this up, and the client does not have a client-side certificate, nginx is returning either a 400 (or more likely a 403)? Is there any way I can be entirely "rude" and re-map the return code if you do not have a client certificate to 444? Thanks again, AJ On 10/10/2012 6:51 PM, Maxim Dounin wrote: > Hello! > > On Wed, Oct 10, 2012 at 05:16:12PM -0400, AJ Weber wrote: > >> I think I might have found my answer to this.
>> >> I can generate my own (or use any different) CA and add that in >> ssl_client_certificate; >> And then set ssl_verify_client on; >> >> This appears to work in initial testing. So my follow-up is: >> 1) Does this sound like the way to make my original question work? > Yes. > >> 2) Can I revoke certificates, and will nginx check a revocation list >> of some kind? > http://nginx.org/r/ssl_crl > >> Thanks again, >> AJ >> >> >> On 10/10/2012 2:14 PM, AJ Weber wrote: >>> Can I install and configure nginx to use a "public"/global CA's >>> SSL Certificate like Verisign, AND force (require) the use of >>> client SSL certificates, AND allow those >>> client/browser-certificates to be from a different CA/root? For >>> example, openca or some self-signed setup that I use to just >>> distribute client certificates to my registered users? >>> >>> Let me know if I am not asking the question correctly. >>> >>> Thanks, >>> AJ >>> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Thu Oct 11 16:57:47 2012 From: francis at daoine.org (Francis Daly) Date: Thu, 11 Oct 2012 17:57:47 +0100 Subject: URI re-mapping using try_files In-Reply-To: References: Message-ID: <20121011165747.GO17159@craic.sysops.org> On Thu, Oct 11, 2012 at 09:19:23AM -0400, mrtn wrote: Hi there, > I use nginx to serve static html pages to users, and in particular, I want > users who visit 'www.example.com/public/doc/abc123?para=data' to be served > with the file '/home/www/example/public/doc/abc123/abc123.html' on the > server, but 'www.example.com/public/doc/abc123?para=data' stays in the > user's browser address bar. For this sort of thing, "rewrite" (http://nginx.org/r/rewrite) is probably the directive you want. rewrite (.*)/(.*) $1/$2/$2.html break; can do what you want in this case. 
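Francis's one-liner can be previewed outside nginx: Python's re module treats this pattern the same way nginx's PCRE engine does here, so a short sketch (the helper name and sample URIs are illustrative, not from the thread) shows which file each URI ends up mapped to:

```python
import re

# Mirrors: rewrite (.*)/(.*) $1/$2/$2.html break;
# Both (.*) groups are greedy, so the split happens at the LAST slash.
PATTERN = re.compile(r"(.*)/(.*)")

def rewritten(uri: str) -> str:
    """Return the URI as the rewrite rule above would produce it."""
    m = PATTERN.search(uri)
    return m.expand(r"\1/\2/\2.html") if m else uri

print(rewritten("/public/doc/abc123"))   # /public/doc/abc123/abc123.html
print(rewritten("/public/doc/one/two"))  # /public/doc/one/two/two.html
```

Note that this only mirrors the regex substitution; nginx applies the rewrite to the request URI without the query string, which is why the ?para=data part survives untouched in the address bar.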
What do you want to happen if the user asks for any of: http://www.example.com/public/doc/abc123 http://www.example.com/public/doc/abc123?para=nodata http://www.example.com/public/doc/abc123/abc123.html http://www.example.com/public/doc/one/two The rewrite directive above may or may not do what you want for those. > So there are two issues here: 1. map URI pattern '/public/doc/blah' to file > '/home/www/example/public/doc/blah/blah.html' on the server; 2. keep query > string '?parap=data' in the address bar. 1) is in the rewrite (or any similar configuration which maps url to file) 2) is by virtue of not doing an external rewrite, which is a redirection. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Oct 11 19:59:24 2012 From: nginx-forum at nginx.us (mrtn) Date: Thu, 11 Oct 2012 15:59:24 -0400 Subject: URI re-mapping using try_files In-Reply-To: <20121011165747.GO17159@craic.sysops.org> References: <20121011165747.GO17159@craic.sysops.org> Message-ID: <0254b931f75ce74e227d789dabc84d23.NginxMailingListEnglish@forum.nginx.org> Hi Francis, Thanks for introducing me to rewrite directive. Just to confirm, this is how I should use your rewrite: root /home/www/example; location /public/doc/ { rewrite (.*)/(.*) $1/$2/$2.html break; } Ideally, for the other cases you raised, I want the following to happen: >http://www.example.com/public/doc/abc123 >http://www.example.com/public/doc/abc123?para=nodata when the query string (e.g. ?para=blah) part is missing or incomplete, I want to serve a generic error page (e.g. /error.html) >http://www.example.com/public/doc/abc123/abc123.html when the user tries to access the actual html page directly, I want to block it by either returning a 404 or serving a generic error page as above >http://www.example.com/public/doc/one/two when the user queries an URI that has no corresponding .html file on the server, I want to simply return a 404. Can all these be implemented using rewrite only? Thanks. 
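Taken together, the three behaviours requested above could be combined roughly as follows — an untested sketch assuming the parameter really is named para and that /error.html exists under the same root:

```nginx
location /public/doc/ {
    # 1. missing or empty ?para= -> generic error page
    if ($arg_para = "") {
        rewrite ^ /error.html last;
    }
    # 2. map /public/doc/a/b to /public/doc/a/b/b.html; the address bar
    #    is unchanged because this is an internal rewrite ("break")
    # 3. a direct request for .../b/b.html is itself rewritten to
    #    .../b/b.html/b.html.html, which should not exist -> 404
    rewrite ^(.*)/([^/]+)$ $1/$2/$2.html break;
}
```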
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231672,231682#msg-231682 From francis at daoine.org Thu Oct 11 20:32:20 2012 From: francis at daoine.org (Francis Daly) Date: Thu, 11 Oct 2012 21:32:20 +0100 Subject: URI re-mapping using try_files In-Reply-To: <0254b931f75ce74e227d789dabc84d23.NginxMailingListEnglish@forum.nginx.org> References: <20121011165747.GO17159@craic.sysops.org> <0254b931f75ce74e227d789dabc84d23.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121011203220.GP17159@craic.sysops.org> On Thu, Oct 11, 2012 at 03:59:24PM -0400, mrtn wrote: Hi there, > Thanks for introducing me to rewrite directive. Just to confirm, this is how > I should use your rewrite: > > root /home/www/example; > > location /public/doc/ { > rewrite (.*)/(.*) $1/$2/$2.html break; > } It can work outside of all locations, or else in the one location that handles this request. So this config can work. > Ideally, for the other cases you raised, I want the following to happen: > > >http://www.example.com/public/doc/abc123 > >http://www.example.com/public/doc/abc123?para=nodata > > when the query string (e.g. ?para=blah) part is missing or incomplete, I > want to serve a generic error page (e.g. /error.html) The above rewrite pays no attention to query strings. So you'll want to do something based on $arg_para -- maybe an "if" or something involving "map". I guess (without testing): if ($arg_para != data) { return 404; } inside that location{} would probably work. > >http://www.example.com/public/doc/abc123/abc123.html > > when the user tries to access the actual html page directly, I want to block > it by either returning a 404 or serving a generic error page as above The above rewrite does that (assuming that the "rewritten" file is absent). > >http://www.example.com/public/doc/one/two > > when the user queries an URI that has no corresponding .html file on the > server, I want to simply return a 404. 
The above rewrite does that; but which html file should it look for here? /home/www/example/public/doc/one/two/two.html, or /home/www/example/public/doc/one/two/one/two.html? (As in: do you repeat everything after /public/doc, or do you repeat just the final after-slash part?) > Can all these be implemented using rewrite only? Thanks. With the extra "if" above, I think so. Your testing should be able to show any problems, or unexpected behaviour. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Oct 11 20:37:20 2012 From: nginx-forum at nginx.us (Ensiferous) Date: Thu, 11 Oct 2012 16:37:20 -0400 Subject: Does nginx have a directive analogous to Apache's ? In-Reply-To: <343379795.20121011140307@conclase.net> References: <343379795.20121011140307@conclase.net> Message-ID: No it doesn't. You will need to specify all the locations. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231660,231684#msg-231684 From nginx-forum at nginx.us Thu Oct 11 21:22:54 2012 From: nginx-forum at nginx.us (mrtn) Date: Thu, 11 Oct 2012 17:22:54 -0400 Subject: URI re-mapping using try_files In-Reply-To: <20121011203220.GP17159@craic.sysops.org> References: <20121011203220.GP17159@craic.sysops.org> Message-ID: <62deab814d29f100fdc67dc3e30bf471.NginxMailingListEnglish@forum.nginx.org> Hi Francis, >I guess (without testing): > >if ($arg_para != data) { return 404; } > >inside that location{} would probably work Hmm, I read on nginx website and elsewhere that if statement may not work consistently within a location directive, and is generally discouraged. Should I worry in this case? >The above rewrite does that; but which html file should it look for here? > >/home/www/example/public/doc/one/two/two.html, or >/home/www/example/public/doc/one/two/one/two.html? > >(As in: do you repeat everything after /public/doc, or do you repeat >just the final after-slash part?) 
I only want to repeat the final after-slash part, so any '/public/doc/blahA/blahB' should only look for '/public/doc/blahA/blahB/blahB.html'. Would that change anything in the config proposed here? Thanks again! Mrtn Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231672,231685#msg-231685 From mdounin at mdounin.ru Thu Oct 11 21:53:42 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 12 Oct 2012 01:53:42 +0400 Subject: this may be a dumb ssl question, but here goes... In-Reply-To: <5076E734.4090000@comcast.net> References: <5075BB0D.8030201@comcast.net> <5075E59C.2030704@comcast.net> <20121010225126.GT40452@mdounin.ru> <5076E734.4090000@comcast.net> Message-ID: <20121011215341.GW40452@mdounin.ru> Hello! On Thu, Oct 11, 2012 at 11:35:16AM -0400, AJ Weber wrote: > I didn't double-check yet, but it looks like if I set this up, and > the client does not have a client-side certificate, nginx is > returning either a 400 (or more likely a 403)? Is there any way I > can be entirely "rude" and re-map the return code if you do not have > a client certificate to 444? The answer is on the very same page: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#errors -- Maxim Dounin http://nginx.com/support.html From aweber at comcast.net Thu Oct 11 22:07:44 2012 From: aweber at comcast.net (Aaron) Date: Thu, 11 Oct 2012 18:07:44 -0400 Subject: this may be a dumb ssl question, but here goes... In-Reply-To: <20121011215341.GW40452@mdounin.ru> References: <5075BB0D.8030201@comcast.net> <5075E59C.2030704@comcast.net> <20121010225126.GT40452@mdounin.ru> <5076E734.4090000@comcast.net> <20121011215341.GW40452@mdounin.ru> Message-ID: <8c09ed52-368e-4fb0-8b4b-b229a87bb2ff@email.android.com> I noticed that, but it appears to require a page / uri. I think the special 444 should not return content, if I am reading its design correctly. -Aaron Maxim Dounin wrote: >Hello! 
> >On Thu, Oct 11, 2012 at 11:35:16AM -0400, AJ Weber wrote: > >> I didn't double-check yet, but it looks like if I set this up, and >> the client does not have a client-side certificate, nginx is >> returning either a 400 (or more likely a 403)? Is there any way I >> can be entirely "rude" and re-map the return code if you do not have >> a client certificate to 444? > >The answer is on the very same page: >http://nginx.org/en/docs/http/ngx_http_ssl_module.html#errors > >-- >Maxim Dounin >http://nginx.com/support.html > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Oct 11 22:12:37 2012 From: francis at daoine.org (Francis Daly) Date: Thu, 11 Oct 2012 23:12:37 +0100 Subject: URI re-mapping using try_files In-Reply-To: <62deab814d29f100fdc67dc3e30bf471.NginxMailingListEnglish@forum.nginx.org> References: <20121011203220.GP17159@craic.sysops.org> <62deab814d29f100fdc67dc3e30bf471.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121011221237.GQ17159@craic.sysops.org> On Thu, Oct 11, 2012 at 05:22:54PM -0400, mrtn wrote: Hi there, > >I guess (without testing): > > > >if ($arg_para != data) { return 404; } > > > >inside that location{} would probably work > > Hmm, I read on nginx website and elsewhere that if statement may not work > consistently within a location directive, and is generally discouraged. > Should I worry in this case? What precisely did you read? And does it apply here? (I think I know the answers to those; but there's no reason for you to believe my words instead of words written elsewhere. If the words are all consistent, then you can choose to believe them all. If the words are contradictory, then you must choose which source to consider incorrect.) 
> >(As in: do you repeat everything after /public/doc, or do you repeat > >just the final after-slash part?) > > I only want to repeat the final after-slash part, so any > '/public/doc/blahA/blahB' should only look for > '/public/doc/blahA/blahB/blahB.html'. Would that change anything in the > config proposed here? What happened when you tried it? mkdir -p /home/www/example/public/doc/blahA/blahB/blahA echo This is in blahB > /home/www/example/public/doc/blahA/blahB/blahB.html echo This is in blahA > /home/www/example/public/doc/blahA/blahB/blahA/blahB.html curl -i http://www.example.com/public/doc/blahA/blahB As it happens, the rewrite suggested should end up fetching the "in blahB" file, because it only repeats the part after the last slash. When you understand *why* it does that, you'll have a better chance of writing your own regex-based rewrites. Good luck with it! f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Thu Oct 11 22:21:40 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 12 Oct 2012 02:21:40 +0400 Subject: this may be a dumb ssl question, but here goes... In-Reply-To: <8c09ed52-368e-4fb0-8b4b-b229a87bb2ff@email.android.com> References: <5075BB0D.8030201@comcast.net> <5075E59C.2030704@comcast.net> <20121010225126.GT40452@mdounin.ru> <5076E734.4090000@comcast.net> <20121011215341.GW40452@mdounin.ru> <8c09ed52-368e-4fb0-8b4b-b229a87bb2ff@email.android.com> Message-ID: <20121011222139.GY40452@mdounin.ru> Hello! On Thu, Oct 11, 2012 at 06:07:44PM -0400, Aaron wrote: > I noticed that, but it appears to require a page / uri. I think > the special 444 should not return content, if I am reading its > design correctly. This is because anything in nginx requires an uri. 
But it's up to you to not return content for the uri, like this: error_page 496 = /nocert; location = /nocert { return 444; } See here for details: http://nginx.org/r/error_page http://nginx.org/r/location http://nginx.org/r/return -- Maxim Dounin http://nginx.com/support.html From quintinpar at gmail.com Fri Oct 12 00:08:55 2012 From: quintinpar at gmail.com (Quintin Par) Date: Thu, 11 Oct 2012 17:08:55 -0700 Subject: Compiling Nginx on production. How to do it without down time. Message-ID: Hi all, I run vanilla builds of Nginx from the Nginx centos repo. But recently I've started to realize the need for additional modules in Nginx like Nginx status, openresty, lua etc. Now this will mean that my vanilla Nginx with nginx version: nginx/1.0.12 built by gcc 4.4.4 20100726 (Red Hat 4.4.4-13) (GCC) TLS SNI support enabled configure arguments: --prefix=/etc/nginx/ --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g' will have to be recompiled. How can I do it with minimal downtime on my prod machine? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Fri Oct 12 00:26:51 2012 From: francis at daoine.org (Francis Daly) Date: Fri, 12 Oct 2012 01:26:51 +0100 Subject: Compiling Nginx on production. How to do it without down time. In-Reply-To: References: Message-ID: <20121012002651.GR17159@craic.sysops.org> On Thu, Oct 11, 2012 at 05:08:55PM -0700, Quintin Par wrote: Hi there, > How can I do it with minimal downtime on my prod machine? http://nginx.org/en/docs/control.html ? f -- Francis Daly francis at daoine.org From electronixtar at gmail.com Fri Oct 12 01:08:28 2012 From: electronixtar at gmail.com (est) Date: Fri, 12 Oct 2012 09:08:28 +0800 Subject: Get conf location of a running nginx process Message-ID: Hi list, Is there a general way to extract the nginx config file path, in a third party program? I see the sudo strace -s reload has this line open("/etc/nginx/nginx.conf", O_RDONLY|O_LARGEFILE) = 4 but how can I get the string /etc/nginx/nginx.conf ? I presume I have three ways: 1. ps aux perhaps there is a command line like nginx -c /etc/my.conf 2. nginx -V will list the default conf path 3. If #1 and #2 fails, could it be done reading nginx's process image in memory? Are there any other scenarios? How do I achieve the #3 way? Thanks in advance! est -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Oct 12 02:14:13 2012 From: nginx-forum at nginx.us (brokentwig) Date: Thu, 11 Oct 2012 22:14:13 -0400 Subject: Keep getting "does not match " Message-ID: <478b60ac5b46819e456e303ff615c261.NginxMailingListEnglish@forum.nginx.org> I have a complicated setup where I'm running WordPress in the /blog/ subfolder, and then Vanilla Forums 2 in the /forums/ folder under root. The config below works fine for almost all pages except all images under the /blog/wp-content/gallery/ folder. In the log I keep getting "/forum/" does not match "/blog/wp-content/gallery/something_something...". Shouldn't "/blog/wp-content/....." 
be matched on the "location /" entry? I have confirmed the path is accurate for the images, so the try_files under the main "location /" should catch it. Any help would be appreciated: server { #error_log /var/log/nginx/rewrite_log notice; rewrite_log on; listen 80; server_name domain_name.com www.domain_name.com; root /var/www/domain_name.com/blog; index index.php index.html; # Main site is the blog, so last condition rewrites for WP location / { try_files $uri $uri/ @blog; } location @blog { try_files $uri $uri/ /index.php?p=$uri$is_args$args; } location ~ \.php$ { include /etc/nginx/fastcgi_params; # Set default script path to WP blog set $php_root /var/www/domain_name.com/blog; # If this is a forum request, change the root if ($request_uri ~ /forum/) { set $php_root /var/www/domain_name.com; } fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param SCRIPT_FILENAME $php_root$fastcgi_script_name; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_read_timeout 300; fastcgi_max_temp_file_size 0; } location /forum/ { root /var/www/domain_name.com; if (!-e $request_filename) { rewrite ^/forum/(.*)$ /forum/index.php?p=$1 last; break; } } location ~ /\.ht { access_log off; log_not_found off; deny all; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231694,231694#msg-231694 From goelvivek2011 at gmail.com Fri Oct 12 06:00:57 2012 From: goelvivek2011 at gmail.com (Vivek Goel) Date: Fri, 12 Oct 2012 11:30:57 +0530 Subject: Getting waitpid error Message-ID: Hi, While calling nginx stop we are getting the following error: [alert] 25563#0: waitpid() failed (10: No child processes) What could be the reason for this problem? How can we solve this? Or can we ignore this error? regards Vivek Goel -------------- next part -------------- An HTML attachment was scrubbed...
URL: From igor at sysoev.ru Fri Oct 12 07:05:52 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 12 Oct 2012 11:05:52 +0400 Subject: Getting waitpid error In-Reply-To: References: Message-ID: <4EB5EAF2-6A63-4B91-8356-5241DA731693@sysoev.ru> On Oct 12, 2012, at 10:00 , Vivek Goel wrote: > Hi, > While calling nginx stop we are getting the following error: > [alert] 25563#0: waitpid() failed (10: No child processes) > > What could be the reason for this problem? > How can we solve this? Or can we ignore this error? Yes, this error is already logged at "info" level for Solaris and FreeBSD and probably should not be logged at all. What OS do you use? -- Igor Sysoev http://nginx.com/support.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Oct 12 10:38:55 2012 From: nginx-forum at nginx.us (goelvivek) Date: Fri, 12 Oct 2012 06:38:55 -0400 Subject: Getting waitpid error In-Reply-To: References: Message-ID: <8ff586f6dae14aa469afb4e5fe962a15.NginxMailingListEnglish@forum.nginx.org> I am using Amazon Linux. uname -a gives the following result: Linux 3.2.22-35.60.amzn1.x86_64 #1 SMP Thu Jul 5 14:07:24 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231695,231707#msg-231707 From nginx-forum at nginx.us Fri Oct 12 12:26:38 2012 From: nginx-forum at nginx.us (rahul286) Date: Fri, 12 Oct 2012 08:26:38 -0400 Subject: Nginx Map Multiple Parameter Input Message-ID: <5cb05ab8d2e8ffb7ddd69ff8ffde78a6.NginxMailingListEnglish@forum.nginx.org> I have a case where I need to use Nginx map in a slightly different way. I use the following to create a variable $blogpath `map $uri $blogname{ ~^(?P<blogpath>/[_0-9a-zA-Z-]+/)files/(.*) $blogpath ; }` Next, I want to run another map using: `map $http_host$blogpath $blogid{ #map lines }` Problem is - map doesn't support 2 input parameters. It throws the error: "nginx: [emerg] invalid number of the map parameters ..."
As map is outside the server{} block, I cannot use set to create a temporary variable to combine the value of "$http_host$blogpath" The goal is to get "domain-name.com/first-dir" from the URL. If any other nginx variable can give me the entire URL as seen in the browser, it will also work. Please suggest a workaround! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231715,231715#msg-231715 From igor at sysoev.ru Fri Oct 12 12:56:46 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 12 Oct 2012 16:56:46 +0400 Subject: Nginx Map Multiple Parameter Input In-Reply-To: <5cb05ab8d2e8ffb7ddd69ff8ffde78a6.NginxMailingListEnglish@forum.nginx.org> References: <5cb05ab8d2e8ffb7ddd69ff8ffde78a6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <289E8D94-2BF4-4308-984E-598831F9249C@sysoev.ru> On Oct 12, 2012, at 16:26 , rahul286 wrote: > I have a case where I need to use Nginx map in a slightly different way. > > I use the following to create a variable $blogpath > > > `map $uri $blogname{ > ~^(?P<blogpath>/[_0-9a-zA-Z-]+/)files/(.*) $blogpath ; > }` > > Next, I want to run another map using: > > > `map $http_host$blogpath $blogid{ > #map lines > }` > > > Problem is - map doesn't support 2 input parameters. It throws the error: > "nginx: [emerg] invalid number of the map parameters ..." > > As map is outside the server{} block, I cannot use set to create a temporary > variable to combine the value of "$http_host$blogpath" > > The goal is to get "domain-name.com/first-dir" from the URL. If any other nginx > variable can give me the entire URL as seen in the browser, it will also work. > > Please suggest a workaround! What version do you use? Changes with nginx 0.9.0 29 Nov 2010 ... *) Feature: the "map" directive supports expressions as the first parameter.
-- Igor Sysoev http://nginx.com/support.html From nginx-forum at nginx.us Fri Oct 12 13:14:45 2012 From: nginx-forum at nginx.us (rahul286) Date: Fri, 12 Oct 2012 09:14:45 -0400 Subject: Nginx Map Multiple Parameter Input In-Reply-To: <289E8D94-2BF4-4308-984E-598831F9249C@sysoev.ru> References: <289E8D94-2BF4-4308-984E-598831F9249C@sysoev.ru> Message-ID: <129a160fe4fefbc4a5a9ce815d8354aa.NginxMailingListEnglish@forum.nginx.org> I am using Nginx 1.2.4 So a line like "map $http_host$uri $blogid" is a valid one? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231715,231720#msg-231720 From igor at sysoev.ru Fri Oct 12 13:47:08 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 12 Oct 2012 17:47:08 +0400 Subject: Nginx Map Multiple Parameter Input In-Reply-To: <129a160fe4fefbc4a5a9ce815d8354aa.NginxMailingListEnglish@forum.nginx.org> References: <289E8D94-2BF4-4308-984E-598831F9249C@sysoev.ru> <129a160fe4fefbc4a5a9ce815d8354aa.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5D76344F-8610-427E-A172-9498AC380098@sysoev.ru> On Oct 12, 2012, at 17:14 , rahul286 wrote: > I am using Nginx 1.2.4 > > So a line like "map $http_host$uri $blogid" is a valid one? Yes. This message is issued for internal map parameters. Probably you have a space in the first or second parameter. -- Igor Sysoev http://nginx.com/support.html From nginx-forum at nginx.us Fri Oct 12 14:21:09 2012 From: nginx-forum at nginx.us (rahul286) Date: Fri, 12 Oct 2012 10:21:09 -0400 Subject: Nginx Map Multiple Parameter Input In-Reply-To: <5D76344F-8610-427E-A172-9498AC380098@sysoev.ru> References: <5D76344F-8610-427E-A172-9498AC380098@sysoev.ru> Message-ID: <4d5800536294145074f53c61ba034fed.NginxMailingListEnglish@forum.nginx.org> I will debug my config again. Thanks. By the way, does map support a regex in the input parameter? Now or in the future?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231715,231722#msg-231722 From igor at sysoev.ru Fri Oct 12 14:24:35 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 12 Oct 2012 18:24:35 +0400 Subject: Nginx Map Multiple Parameter Input In-Reply-To: <4d5800536294145074f53c61ba034fed.NginxMailingListEnglish@forum.nginx.org> References: <5D76344F-8610-427E-A172-9498AC380098@sysoev.ru> <4d5800536294145074f53c61ba034fed.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9EDB723F-2F92-4E5B-9C0D-E471E5282684@sysoev.ru> On Oct 12, 2012, at 18:21 , rahul286 wrote: > I will debug my config again. Thanks. > > By the way, does map support regex in the input parameter? Now or in future? Changes with nginx 0.9.6 21 Mar 2011 *) Feature: the "map" directive supports regular expressions as value of the first parameter. http://nginx.org/en/docs/http/ngx_http_map_module.html map $http_user_agent $mobile { default 0; "~Opera Mini" 1; } -- Igor Sysoev http://nginx.com/support.html From nginx-forum at nginx.us Fri Oct 12 14:37:28 2012 From: nginx-forum at nginx.us (rahul286) Date: Fri, 12 Oct 2012 10:37:28 -0400 Subject: Nginx Map Multiple Parameter Input In-Reply-To: <9EDB723F-2F92-4E5B-9C0D-E471E5282684@sysoev.ru> References: <9EDB723F-2F92-4E5B-9C0D-E471E5282684@sysoev.ru> Message-ID: <4bab81090e0ee806279c6d6158d4333a.NginxMailingListEnglish@forum.nginx.org> Sorry for the wrong question. I wanted to ask... map "~ ^/~([^/]*)/.*$" $userhome{ } style regex. Where the map input string will be $1. === Apart from that, inside map: map $uri $blogname{ ~^(?P<blogpath>/[^/]+/)files/(.*)$ $blogpath ; } I am not able to use $1. I always have to use a variable like $blogpath. Is it by design or a mistake on my end?
I get the error: "nginx: [emerg] unknown "1" variable" when I try: map $uri $blogname{ ~^(/[^/]+/)files/(.*)$ $1 ; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231715,231725#msg-231725 From igor at sysoev.ru Fri Oct 12 14:43:47 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 12 Oct 2012 18:43:47 +0400 Subject: Nginx Map Multiple Parameter Input In-Reply-To: <4bab81090e0ee806279c6d6158d4333a.NginxMailingListEnglish@forum.nginx.org> References: <9EDB723F-2F92-4E5B-9C0D-E471E5282684@sysoev.ru> <4bab81090e0ee806279c6d6158d4333a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <31EB4126-B699-4145-B1A8-2DDAA83C882B@sysoev.ru> On Oct 12, 2012, at 18:37 , rahul286 wrote: > Sorry for the wrong question. I wanted to ask... > > map "~ ^/~([^/]*)/.*$" $userhome{ > } > > style regex. Where the map input string will be $1. No. "map" is not "location". > === > > Apart from that, inside map: > > map $uri $blogname{ > ~^(?P<blogpath>/[^/]+/)files/(.*)$ $blogpath ; > } > > I am not able to use $1. I always have to use a variable like $blogpath. Is > it by design or a mistake on my end? > > I get the error: "nginx: [emerg] unknown "1" variable" when I try: > > map $uri $blogname{ > ~^(/[^/]+/)files/(.*)$ $1 ; > } Digit captures are not supported in map. You have to use a named capture. -- Igor Sysoev http://nginx.com/support.html From nginx-forum at nginx.us Fri Oct 12 14:51:04 2012 From: nginx-forum at nginx.us (rahul286) Date: Fri, 12 Oct 2012 10:51:04 -0400 Subject: Nginx Map Multiple Parameter Input In-Reply-To: <31EB4126-B699-4145-B1A8-2DDAA83C882B@sysoev.ru> References: <31EB4126-B699-4145-B1A8-2DDAA83C882B@sysoev.ru> Message-ID: Igor Sysoev Wrote: ------------------------------------------------------- > Digit captures are not supported in map. You have to use a named capture. Thanks for the clarification :-) Is map an exception, or are there other directives that also do not support digit captures? I ran into issues in an if-location block (mostly an If-evil case).
I switched to a named capture and it worked nicely. Thanks again. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231715,231728#msg-231728 From igor at sysoev.ru Fri Oct 12 14:55:36 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 12 Oct 2012 18:55:36 +0400 Subject: Nginx Map Multiple Parameter Input In-Reply-To: References: <31EB4126-B699-4145-B1A8-2DDAA83C882B@sysoev.ru> Message-ID: <9A4ED69F-64B8-480F-AF70-DFAB7FD479C9@sysoev.ru> On Oct 12, 2012, at 18:51 , rahul286 wrote: > Igor Sysoev Wrote: > ------------------------------------------------------- >> Digit captures are not supported in map. You have to use a named capture. > > Thanks for the clarification :-) > > Is map an exception, or are there other directives that also do not support > digit captures? > > I ran into issues in an if-location block (mostly an If-evil case). I switched > to a named capture and it worked nicely. Some directives support them, but digit captures can be implicitly overwritten. With named captures you have explicit control. -- Igor Sysoev http://nginx.com/support.html From nginx-forum at nginx.us Fri Oct 12 15:14:57 2012 From: nginx-forum at nginx.us (rahul286) Date: Fri, 12 Oct 2012 11:14:57 -0400 Subject: Nginx Map Multiple Parameter Input In-Reply-To: <9A4ED69F-64B8-480F-AF70-DFAB7FD479C9@sysoev.ru> References: <9A4ED69F-64B8-480F-AF70-DFAB7FD479C9@sysoev.ru> Message-ID: Yep. When using values across lines, I noticed overwriting. One last question: I think using "P" in a named capture, e.g. "?P<name>", is the old style. But even on nginx 1.2 some people get the error: "pcre_compile() failed: unrecognized character after". It goes away when they update the PCRE lib. For better compatibility, is it a good idea to use "P", or is there any advantage without "P"?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231715,231734#msg-231734 From igor at sysoev.ru Fri Oct 12 17:18:39 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 12 Oct 2012 21:18:39 +0400 Subject: Nginx Map Multiple Parameter Input In-Reply-To: References: <9A4ED69F-64B8-480F-AF70-DFAB7FD479C9@sysoev.ru> Message-ID: <94089F05-70EA-4CAA-8BA6-A9007589CE02@sysoev.ru> On Oct 12, 2012, at 19:14 , rahul286 wrote: > Yep. When using values across lines, I noticed overwriting. > > One last question: > > I think using "P" in a named capture, e.g. "?P<name>", is the old style. But > even on nginx 1.2 some people get the error: "pcre_compile() failed: > unrecognized character after". It goes away when they update the PCRE lib. > > For better compatibility, is it a good idea to use "P", or is there any > advantage without "P"? No advantage, just readability. ?<name> Perl 5.10 compatible syntax, supported since PCRE-7.0 ?'name' Perl 5.10 compatible syntax, supported since PCRE-7.0 ?P<name> Python compatible syntax, supported since PCRE-4.0 BTW, PCRE-7.0 was released on 19 December 2006, almost 6 years ago. -- Igor Sysoev http://nginx.com/support.html From francis at daoine.org Fri Oct 12 17:55:12 2012 From: francis at daoine.org (Francis Daly) Date: Fri, 12 Oct 2012 18:55:12 +0100 Subject: Keep getting "does not match " In-Reply-To: <478b60ac5b46819e456e303ff615c261.NginxMailingListEnglish@forum.nginx.org> References: <478b60ac5b46819e456e303ff615c261.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121012175512.GS17159@craic.sysops.org> On Thu, Oct 11, 2012 at 10:14:13PM -0400, brokentwig wrote: Hi there, > The > config below works fine for almost all pages except all images under the > /blog/wp-content/gallery/ folder. The uri-matching location directives you have are: > location / { > location ~ \.php$ { > location /forum/ { > location ~ /\.ht { What http request is made that leads to unexpected behaviour?
f -- Francis Daly francis at daoine.org From quintinpar at gmail.com Fri Oct 12 20:29:14 2012 From: quintinpar at gmail.com (Quintin Par) Date: Fri, 12 Oct 2012 13:29:14 -0700 Subject: Compiling Nginx on production. How to do it without down time. In-Reply-To: <20121012002651.GR17159@craic.sysops.org> References: <20121012002651.GR17159@craic.sysops.org> Message-ID: Thanks for this. This looks a bit complicated. I'd assume that "make install" will overwrite the executable and that will ensure everything. Or should I just go ahead and do service nginx restart On Thu, Oct 11, 2012 at 5:26 PM, Francis Daly wrote: > On Thu, Oct 11, 2012 at 05:08:55PM -0700, Quintin Par wrote: > > Hi there, > > > How can I do it with minimal downtime on my prod machine? > > http://nginx.org/en/docs/control.html ? > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pr1 at pr1.ru Fri Oct 12 20:44:28 2012 From: pr1 at pr1.ru (Andrey Feldman) Date: Sat, 13 Oct 2012 00:44:28 +0400 Subject: Compiling Nginx on production. How to do it without down time. In-Reply-To: References: <20121012002651.GR17159@craic.sysops.org> Message-ID: <5078812C.9070503@pr1.ru> Check 'nginx -V'; if it matches the new version, just run /etc/init.d/nginx upgrade (if you're on Linux). On 10/13/2012 12:29 AM, Quintin Par wrote: > Thanks for this. This looks a bit complicated. > > I'd assume that "make install" will overwrite the executable and that > will ensure everything. Or should I just go ahead and do service nginx > restart > > > On Thu, Oct 11, 2012 at 5:26 PM, Francis Daly > wrote: > > On Thu, Oct 11, 2012 at 05:08:55PM -0700, Quintin Par wrote: > > Hi there, > > > How can I do it with minimal downtime on my prod machine? > > http://nginx.org/en/docs/control.html ?
> > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From francis at daoine.org Fri Oct 12 21:27:23 2012 From: francis at daoine.org (Francis Daly) Date: Fri, 12 Oct 2012 22:27:23 +0100 Subject: Compiling Nginx on production. How to do it without down time. In-Reply-To: References: <20121012002651.GR17159@craic.sysops.org> Message-ID: <20121012212723.GT17159@craic.sysops.org> On Fri, Oct 12, 2012 at 01:29:14PM -0700, Quintin Par wrote: Hi there, > Thanks for this. This looks a bit complicated. > > I'd assume that "make install" will overwrite the executable and that will > ensure everything. Or should I just go ahead and do service nginx restart "make install" will replace the binary. It won't run the new binary. After replacing the binary, you'll want to do something like kill -USR2 $(cat logs/nginx.pid) And after testing that things are working as expected, then kill -WINCH $(cat logs/nginx.pid.oldbin) followed eventually by kill -QUIT $(cat logs/nginx.pid.oldbin) The "control.html" page has more details about how to handle problems. "service nginx restart" probably doesn't do that sequence, and so probably won't be 0-downtime. But you can use your test system to find a sequence that works well enough for you. f -- Francis Daly francis at daoine.org From ianevans at digitalhit.com Fri Oct 12 21:48:00 2012 From: ianevans at digitalhit.com (Ian M. Evans) Date: Fri, 12 Oct 2012 17:48:00 -0400 Subject: Flushing fastcgi cache Message-ID: Don't know if I'm just having a caffeine-deficiency moment here, but how does one clear the fastcgi cache on either a site-wide or per-file basis? Thanks. From farseas at gmail.com Sat Oct 13 00:25:36 2012 From: farseas at gmail.com (Bob S.)
Date: Fri, 12 Oct 2012 20:25:36 -0400 Subject: Compiling Nginx on production. How to do it without down time. In-Reply-To: <20121012212723.GT17159@craic.sysops.org> References: <20121012002651.GR17159@craic.sysops.org> <20121012212723.GT17159@craic.sysops.org> Message-ID: I don't see how this can work. Trying to replace the nginx executable with a new version of nginx, while nginx is running, will produce: cp: cannot create regular file `nginx': Text file busy Or am I missing something? This is standard behavior on any running executable in Linux. On Fri, Oct 12, 2012 at 5:27 PM, Francis Daly wrote: > On Fri, Oct 12, 2012 at 01:29:14PM -0700, Quintin Par wrote: > > Hi there, > > > Thanks for this. This looks a bit complicated. > > > > I'd assume that "make install" will overwrite the executable and that > will > > ensure everything. Or should I just go ahead and do service nginx restart > > "make install" will replace the binary. It won't run the new binary. > > After replacing the binary, you'll want to do something like > > kill -USR2 $(cat logs/nginx.pid) > > And after testing that things are working as expected, then > > kill -WINCH $(cat logs/nginx.pid.oldbin) > > followed eventually by > > kill -QUIT $(cat logs/nginx.pid.oldbin) > > The "control.html" page has more details about how to handle problems. > > "service nginx restart" probably doesn't do that sequence, and so probably > won't be 0-downtime. But you can use your test system to find a sequence > that works well enough for you. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bwellsnc at gmail.com Sat Oct 13 04:50:47 2012 From: bwellsnc at gmail.com (bwellsnc) Date: Sat, 13 Oct 2012 00:50:47 -0400 Subject: Compiling Nginx on production. How to do it without down time.
In-Reply-To: References: <20121012002651.GR17159@craic.sysops.org> <20121012212723.GT17159@craic.sysops.org> Message-ID: Since you are using CentOS, look into rebuilding the rpm against the src rpm. http://nginx.org/packages/centos/6/SRPMS/ I took the src rpm and enabled the modules that I need and added the modules that I wanted from the addons. I then built the rpm and created my own repo server inside my network to distribute it to my internal servers. When you build the RPM and install it, it will restart nginx but that should only be a few seconds of downtime. Brent On Fri, Oct 12, 2012 at 8:25 PM, Bob S. wrote: > I don't see how this can work. > > Trying to replace the nginx executable with a new version of nginx, while > nginx is running, will produce: > > cp: cannot create regular file `nginx': Text file busy > > Or am I missing something? > > This is standard behavior on any running executable in Linux. > > > > On Fri, Oct 12, 2012 at 5:27 PM, Francis Daly wrote: >> >> On Fri, Oct 12, 2012 at 01:29:14PM -0700, Quintin Par wrote: >> >> Hi there, >> >> > Thanks for this. This looks a bit complicated. >> > >> > I'd assume that "make install" will overwrite the executable and that >> > will >> > ensure everything. Or should I just go ahead and do service nginx >> > restart >> >> "make install" will replace the binary. It won't run the new binary. >> >> After replacing the binary, you'll want to do something like >> >> kill -USR2 $(cat logs/nginx.pid) >> >> And after testing that things are working as expected, then >> >> kill -WINCH $(cat logs/nginx.pid.oldbin) >> >> followed eventually by >> >> kill -QUIT $(cat logs/nginx.pid.oldbin) >> >> The "control.html" page has more details about how to handle problems. >> >> "service nginx restart" probably doesn't do that sequence, and so probably >> won't be 0-downtime. But you can use your test system to find a sequence >> that works well enough for you.
>> f >> -- >> Francis Daly francis at daoine.org >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From quintinpar at gmail.com Sat Oct 13 05:21:59 2012 From: quintinpar at gmail.com (Quintin Par) Date: Fri, 12 Oct 2012 22:21:59 -0700 Subject: Deny ips, and pick ips from a file. In-Reply-To: <874nm1uv9c.wl%appa@perusio.net> References: <5076995D.6080303@citrin.ru> <874nm1uv9c.wl%appa@perusio.net> Message-ID: Thanks Antonio. This bonus is so good. On Thu, Oct 11, 2012 at 4:02 AM, António P. P. Almeida wrote: > On 11 Out 2012 12h55 CEST, nginx-forum at nginx.us wrote: > > > > http://bash.cyberciti.biz/web-server/nginx-shell-script-to-block-spamhaus-lasso-drop-spam-ip-address/ > > Also a shameless plug - I leave the server handling to be done à la > carte :) > > https://github.com/perusio/nginx-spamhaus-drop > > This creates a file to be used by the geo directive. > > --- appa > > > Posted at Nginx Forum: > > http://forum.nginx.org/read.php?2,231617,231642#msg-231642 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Sat Oct 13 09:18:08 2012 From: nginx-forum at nginx.us (arun) Date: Sat, 13 Oct 2012 05:18:08 -0400 Subject: Getting "MyIP - - [13/Oct/2012:23:17:13 +0530] "GET devices HTTP/1.1" 400 172 "-" "-"" Message-ID: <63aa0342d17bf868622348c8dfc11ba3.NginxMailingListEnglish@forum.nginx.org> I'm running an NginX server and sending HTTP requests from cUrl, a browser, and a python httplib client. When I try to send any request through the python httplib client, it gets a bad request response, but the same request succeeds with cUrl and the browser. The error I'm getting is MyIP - - [13/Oct/2012:23:17:13 +0530] "GET devices HTTP/1.1" 400 172 "-" "-" Regards, Kumar Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231754,231754#msg-231754 From igor at sysoev.ru Sat Oct 13 09:20:21 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Sat, 13 Oct 2012 13:20:21 +0400 Subject: Getting "MyIP - - [13/Oct/2012:23:17:13 +0530] "GET devices HTTP/1.1" 400 172 "-" "-"" In-Reply-To: <63aa0342d17bf868622348c8dfc11ba3.NginxMailingListEnglish@forum.nginx.org> References: <63aa0342d17bf868622348c8dfc11ba3.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Oct 13, 2012, at 13:18 , arun wrote: > I'm running an NginX server and sending HTTP requests from cUrl, a browser, and > a python httplib client. > When I try to send any request through the python httplib client, it gets > a bad request response, but the same request succeeds with cUrl and > the browser. > > The error I'm getting is > > MyIP - - [13/Oct/2012:23:17:13 +0530] "GET devices HTTP/1.1" 400 172 "-" > "-" "devices" is a wrong URL; it must start with a slash. -- Igor Sysoev http://nginx.com/support.html From appa at perusio.net Sat Oct 13 11:44:10 2012 From: appa at perusio.net (António P. P. Almeida) Date: Sat, 13 Oct 2012 13:44:10 +0200 Subject: Compiling Nginx on production. How to do it without down time.
In-Reply-To: References: <20121012002651.GR17159@craic.sysops.org> Message-ID: <87zk3qtx4l.wl%appa@perusio.net> On 12 Out 2012 22h29 CEST, quintinpar at gmail.com wrote: > Thanks for this. This looks a bit complicated. You could just steal Debian's postinst script also: https://gist.github.com/3884272 Of course, there are Debian specificities. Maybe you could adapt it or see if there's a similar thing in the CentOS package. --- appa From ne at vbart.ru Sat Oct 13 12:10:48 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Sat, 13 Oct 2012 16:10:48 +0400 Subject: Compiling Nginx on production. How to do it without down time. In-Reply-To: References: <20121012212723.GT17159@craic.sysops.org> Message-ID: <201210131610.48838.ne@vbart.ru> On Saturday 13 October 2012 04:25:36 Bob S. wrote: > I don't see how this can work. > > Trying to replace the nginx executable with a new version of nginx, while > nginx is running, will produce: > > cp: cannot create regular file `nginx': Text file busy > > Or am I missing something? > > This is standard behavior on any running executable in Linux. This is just a warning produced by "cp"; you can force it with the -f flag. wbr, Valentin V. Bartenev -- http://nginx.com/support.html From ne at vbart.ru Sat Oct 13 12:21:17 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Sat, 13 Oct 2012 16:21:17 +0400 Subject: Compiling Nginx on production. How to do it without down time. In-Reply-To: <201210131610.48838.ne@vbart.ru> References: <201210131610.48838.ne@vbart.ru> Message-ID: <201210131621.17995.ne@vbart.ru> On Saturday 13 October 2012 16:10:48 Valentin V. Bartenev wrote: > On Saturday 13 October 2012 04:25:36 Bob S. wrote: > > I don't see how this can work. > > > > Trying to replace the nginx executable with a new version of nginx, while > > nginx is running, will produce: > > > > cp: cannot create regular file `nginx': Text file busy > > > > Or am I missing something?
> > > > This is standard behavior on any running executable in Linux. > > This is just a warning produced by "cp"; you can force it with the -f flag. > But anyway, it would be a much better idea to "mv" or "rm" the old binary instead of overwriting it. wbr, Valentin V. Bartenev -- http://nginx.com/support.html From francis at daoine.org Sat Oct 13 12:51:44 2012 From: francis at daoine.org (Francis Daly) Date: Sat, 13 Oct 2012 13:51:44 +0100 Subject: Compiling Nginx on production. How to do it without down time. In-Reply-To: References: <20121012002651.GR17159@craic.sysops.org> <20121012212723.GT17159@craic.sysops.org> Message-ID: <20121013125144.GU17159@craic.sysops.org> On Fri, Oct 12, 2012 at 08:25:36PM -0400, Bob S. wrote: Hi there, > I don't see how this can work. And yet, it does. > Trying to replace the nginx executable with a new version of nginx, while > nginx is running, will produce: > > cp: cannot create regular file `nginx': Text file busy I've never seen that error message. That's not to say that it doesn't happen; but that it is by no means universal. > Or am I missing something? It is possible to move, remove, rewrite, or replace a file, independent of whether it is being used by another process. > This is standard behavior on any running executable in Linux. I'd suggest that your Linux is not the same as my Linux. "standard behavior" is that "file being used" does not block "file being modified". All that said, if what "make install" does by default is not what you want, you can always "cp" or "mv" the old executable; then "cp", "mv", "cat", or otherwise, put the new executable in place. And then send the special nginx signals to ensure upgrade with zero downtime. (And it is possible that your system init script does exactly this.) All the best, f -- Francis Daly francis at daoine.org From bwellsnc at gmail.com Sat Oct 13 14:10:12 2012 From: bwellsnc at gmail.com (bwellsnc) Date: Sat, 13 Oct 2012 10:10:12 -0400 Subject: Compiling Nginx on production.
How to do it without down time. In-Reply-To: <20121013125144.GU17159@craic.sysops.org> References: <20121012002651.GR17159@craic.sysops.org> <20121012212723.GT17159@craic.sysops.org> <20121013125144.GU17159@craic.sysops.org> Message-ID: To add, if you require 0 downtime then you really need to reevaluate your environment. Even in the Linux world, things need to be restarted or rebooted. If you cannot be down for more than a few seconds to allow for a software update, then you need to look into setting up a clustered/load balanced environment. I have 2 front end load balancers with 4 backend web servers to keep my environment up and running. This allows me to run software updates and system reboots with little to no downtime. In the end, it's keeping the perception that you are up and running to your customers. Just my 2 cents Brent On Sat, Oct 13, 2012 at 8:51 AM, Francis Daly wrote: > On Fri, Oct 12, 2012 at 08:25:36PM -0400, Bob S. wrote: > > Hi there, > >> I don't see how this can work. > > And yet, it does. > >> Trying to replace the nginx executable with a new version of nginx, while >> nginx is running, will produce: >> >> cp: cannot create regular file `nginx': Text file busy > > I've never seen that error message. > > That's not to say that it doesn't happen; but that it is by no means > universal. > >> Or am I missing something? > > It is possible to move, remove, rewrite, or replace a file, independent > of whether it is being used by another process. > >> This is standard behavior on any running executable in Linux. > > I'd suggest that your Linux is not the same as my Linux. > > "standard behavior" is that "file being used" does not block "file > being modified". > > > All that said, if what "make install" does by default is not what you > want, you can always "cp" or "mv" the old executable; then "cp", "mv", > "cat", or otherwise, put the new executable in place. > > And then send the special nginx signals to ensure upgrade with zero > downtime.
> > (And it is possible that your system init script does exactly this.) > > All the best, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sat Oct 13 16:17:16 2012 From: nginx-forum at nginx.us (rahul286) Date: Sat, 13 Oct 2012 12:17:16 -0400 Subject: Nginx Map Multiple Parameter Input In-Reply-To: <94089F05-70EA-4CAA-8BA6-A9007589CE02@sysoev.ru> References: <94089F05-70EA-4CAA-8BA6-A9007589CE02@sysoev.ru> Message-ID: Thanks again for more details. For better compatibility, I will use "P" everywhere. :-) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231715,231765#msg-231765 From highclass99 at gmail.com Sun Oct 14 10:03:55 2012 From: highclass99 at gmail.com (highclass99) Date: Sun, 14 Oct 2012 19:03:55 +0900 Subject: fastcgi_next_upstream settings and "Connection reset by peer" errors Message-ID: I have a nginx setup with a fastcgi upstream of 5 servers 10.1.1.1 to 10.1.1.5 When a PHP has a long loop (which is due to bad php coding which is difficult to fix at this moment where the PHP process itself crashes), The nginx fastcgi request loops through all the servers, for example the error logs show: 2012/10/14 18:33:52 [error] 4478#0: *900244 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 122.32.55.139, server: mydomain.com, request: "GET /132727767 HTTP/1.1", upstream: "fastcgi://10.1.1.1:9000", host: "www.mydomain.com" 2012/10/14 18:34:06 [error] 4478#0: *900244 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 122.32.55.139, server: mydomain.com, request: "GET /132727767 HTTP/1.1", upstream: "fastcgi://10.1.1.2:9000", host: "www.mydomain.com" 2012/10/14 18:34:23 [error] 4478#0: *900244 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 
122.32.55.139, server: mydomain.com, request: "GET /132727767 HTTP/1.1", upstream: "fastcgi://10.1.1.3:9000", host: "www.mydomain.com" 2012/10/14 18:34:36 [error] 4478#0: *900244 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 122.32.55.139, server: mydomain.com, request: "GET /132727767 HTTP/1.1", upstream: "fastcgi://10.1.1.4:9000", host: "www.mydomain.com" 2012/10/14 18:34:52 [error] 4478#0: *900244 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 122.32.55.139, server: mydomain.com, request: "GET /132727767 HTTP/1.1", upstream: "fastcgi://10.1.1.5:9000", host: "www.mydomain.com" Therefore I have tried fastcgi_next_upstream error; and fastcgi_next_upstream off; But this does not seem to help... Am I missing something? -------------- next part -------------- An HTML attachment was scrubbed... URL: From farseas at gmail.com Sun Oct 14 17:15:54 2012 From: farseas at gmail.com (Bob S.) Date: Sun, 14 Oct 2012 13:15:54 -0400 Subject: Compiling Nginx on production. How to do it without down time. In-Reply-To: References: <20121012002651.GR17159@craic.sysops.org> <20121012212723.GT17159@craic.sysops.org> <20121013125144.GU17159@craic.sysops.org> Message-ID: Some of the best advice yet. Anybody in serious production mode should have no single points of failure. On Sat, Oct 13, 2012 at 10:10 AM, bwellsnc wrote: > To add, if you require 0 downtime then you really need to reevaluate > your environment. Even in the linux world, things need to be > restarted or rebooted. If you cannot be down for more than a few > seconds to allow for a software update, then you need to look into > setting up a clustered/load balanced environment. I have 2 front end > load balancers with 4 backend web servers so to keep my environment up > and running. This allows me to run software updates and system > reboots with little to no downtime. 
At then end, it's keeping the > perception that you are up and running to your customers. > > Just my 2 cents > > Brent > > > On Sat, Oct 13, 2012 at 8:51 AM, Francis Daly wrote: > > On Fri, Oct 12, 2012 at 08:25:36PM -0400, Bob S. wrote: > > > > Hi there, > > > >> I don't see how this can work. > > > > And yet, it does. > > > >> Trying to replace the nginx executable with a new version of nginx, > while > >> nginx is running, will produce: > >> > >> cp: cannot create regular file `nginx': Text file busy > > > > I've never seen that error message. > > > > That's not to say that it doesn't happen; but that it is by no means > > universal. > > > >> Or am I missing something? > > > > It is possible to move, remove, rewrite, or replace a file, independent > > of whether it is being used by another process. > > > >> This is standard behavior on any running executable in Linux. > > > > I'd suggest that your Linux is not the same as my Linux. > > > > "standard behavior" is that "file being used" does not block "file > > being modified". > > > > > > All that said, if what "make install" does by default is not what you > > want, you can always "cp" or "mv" the old executable; then "cp", "mv", > > "cat", or otherwise, put the new executable in place. > > > > And then send the special nginx signals to ensure upgrade with zero > > downtime. > > > > (And it is possible that your system init script does exactly this.) > > > > All the best, > > > > f > > -- > > Francis Daly francis at daoine.org > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From quintinpar at gmail.com Sun Oct 14 18:11:45 2012 From: quintinpar at gmail.com (Quintin Par) Date: Sun, 14 Oct 2012 11:11:45 -0700 Subject: Status module – cache expiration, invalidation & poisoning. Message-ID: Hi all, I have a website that's heavily cached – or intended to be heavily cached via Nginx. But somehow, looking at the page response time, I feel there is a very high eviction/invalidation rate. Now is there a way I can look at the cache status in Nginx for different. I don't use the purge module. The only way I check cache status is through add_header X-Cache-Status $upstream_cache_status; I want to o/p the cache status to a RRD tool and look at the metrics. Is this possible? -------------- next part -------------- An HTML attachment was scrubbed... URL: From quintinpar at gmail.com Sun Oct 14 18:30:22 2012 From: quintinpar at gmail.com (Quintin Par) Date: Sun, 14 Oct 2012 11:30:22 -0700 Subject: Override a set_empty variable from http params Message-ID: Hi all, I set some params to my backend server as shown below set_if_empty $country_code $http_x_country_code; proxy_set_header X-Country-Code $country_code; Now the http_x_country_code comes via the GeoIP module. Is there a way here I can override this based on an HTTP param I can set? Say site.com/country=US, where the country variable has greater precedence over http_x_country_code. -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Sun Oct 14 18:32:27 2012 From: appa at perusio.net (António P. P. Almeida) Date: Sun, 14 Oct 2012 20:32:27 +0200 Subject: Re: Status module – cache expiration, invalidation & poisoning. In-Reply-To: References: Message-ID: <87y5j8ucp0.wl%appa@perusio.net> On 14 Out 2012 20h11 CEST, quintinpar at gmail.com wrote: > Hi all, > > I have a website that's heavily cached – or intended to be heavily > cached via Nginx.
> > But somehow, looking at the page response time, I feel there is a > very high eviction/invalidation rate. > > Now is there a way I can look at the cache status in Nginx for > different. I don't use the purge module. > > The only way I check cache status is through > > add_header X-Cache-Status $upstream_cache_status; > > I want to o/p the cache status to an RRD tool and look at the > metrics. > > Is this possible? With Lua it shouldn't be that difficult. You just inspect the value of $upstream_cache_status. ngx.var.upstream_cache_status paired with http://wiki.nginx.org/HttpLuaModule#ngx.location.capture to use a specific location to write to RRDtool. Just an idea, --- appa From quintinpar at gmail.com Sun Oct 14 19:06:51 2012 From: quintinpar at gmail.com (Quintin Par) Date: Sun, 14 Oct 2012 12:06:51 -0700 Subject: =?UTF-8?Q?Re=3A_Status_module_=E2=80=93_cache_expiration=2C_invalidation_?= =?UTF-8?Q?=26_poisoning=2E?= In-Reply-To: <87y5j8ucp0.wl%appa@perusio.net> References: <87y5j8ucp0.wl%appa@perusio.net> Message-ID: Hi Antonio, Can you help me a bit more? I have never written in Lua. Say for every hit I want to hit an RRD proxy like this echo "$location_url:1" | nc -w 1 -u host.com 8125 how do I go about doing this in Lua? On Sun, Oct 14, 2012 at 11:32 AM, António P. P. Almeida wrote: > On 14 Out 2012 20h11 CEST, quintinpar at gmail.com wrote: > > Hi all, > > > > I have a website that's heavily cached – or intended to be heavily > > cached via Nginx. > > > > But somehow, looking at the page response time, I feel there is a > > very high eviction/invalidation rate. > > > > Now is there a way I can look at the cache status in Nginx for > > different. I don't use the purge module. > > > > The only way I check cache status is through > > > > add_header X-Cache-Status $upstream_cache_status; > > > > I want to o/p the cache status to an RRD tool and look at the > > metrics. > > > > Is this possible? > > With Lua it shouldn't be that difficult. 
You just inspect the value of > $upstream_cache_status. > > ngx.var.upstream_cache_status paired with > http://wiki.nginx.org/HttpLuaModule#ngx.location.capture > to use a specific location to write to RRDtool. > > Just an idea, > --- appa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Sun Oct 14 19:56:43 2012 From: agentzh at gmail.com (agentzh) Date: Sun, 14 Oct 2012 12:56:43 -0700 Subject: [ANN] ngx_openresty devel version 1.2.4.1 released In-Reply-To: References: Message-ID: Hi, guys! I am happy to announce the new development version of ngx_openresty, 1.2.4.1: http://openresty.org/#Download Special thanks go to all our contributors and users for helping make this happen! Below is the complete change log for this release, as compared to the last (stable) release, 1.2.3.8: * upgraded the Nginx core to 1.2.4. * see for changes. * upgraded LuaNginxModule to 0.7.1. * feature: implemented the "light threads" API, which allows asynchronous concurrent processing within a single Nginx request handler, based on automatically-scheduled Lua coroutines. thanks Lee Holloway for requesting this feature. * * * bugfix: ngx.re.gsub() might throw out the exception "attempt to call a string value" when the "replace" argument was a Lua function and the subject string was large. thanks Zhu Maohai for reporting this issue. * bugfix: older gcc versions might issue warnings like "variable 'nrets' might be clobbered by 'longjmp' or 'vfork'", like gcc 3.4.3 (for Solaris 11) and gcc 4.1.2 (for Red Hat Linux). thanks Wenhua Zhang for reporting this issue. * docs: added a warning for ngx.var.VARIABLE that memory is allocated in the per-request memory pool. thanks lilydjwg. * docs: made it clear why "return" is recommended to be used with ngx.exit(). thanks Antoine. 
* docs: massive wording improvements from Dayo. * now we add SrcacheNginxModule before both LuaNginxModule and HeadersMoreNginxModule so that the former's output filter runs *after* those of the latter's. The HTML version of the change log with some helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1002004 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster and ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Enjoy! -agentzh From appa at perusio.net Sun Oct 14 21:12:33 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Sun, 14 Oct 2012 23:12:33 +0200 Subject: =?UTF-8?Q?Re=3A_Status_module_=E2=80=93_cache_expiration=2C_invalidation_?= =?UTF-8?Q?=26_poisoning=2E?= In-Reply-To: References: <87y5j8ucp0.wl%appa@perusio.net> Message-ID: <87wqysu5a6.wl%appa@perusio.net> On 14 Out 2012 21h06 CEST, quintinpar at gmail.com wrote: > Hi Antonio, > > Can you help me a bit more? I have never written in lua > > Say for every hit I want to hit a rrd proxy like this > > echo "$location_url:1" | nc -w 1 -u host.com 8125 This is not the ideal option because AFAIK (agentzh can chime in to clarify things) the adding of the data to RRDtool will "hang" your reply. Ideally you should have a "frontend" location that issues two subrequests one to write to RRDtool and another to get the response. I don't think that in this particular case it will be dramatic. Try it and see if you can live with it. > how do I go about doing this in lua? 
Try: location /whatever-is-your-location { header_filter_by_lua ' if ngx.var.upstream_cache_status == "HIT" and os.execute("echo " .. ngx.var.location_url .. ":1 | nc -w 1 -u host.com 8125") == 0 then ngx.header["X-RRDtool"] = "YES" end '; } This adds a header X-RRDtool if successful. I stress that this is not the most performant way to do this. The Lua module can create sockets, so the best option would be to use that facility and get rid of netcat IMO. This is more of a hack than anything else. Perhaps it is performant enough for your application. YMMV, --- appa From nginx-forum at nginx.us Sun Oct 14 23:21:46 2012 From: nginx-forum at nginx.us (tvaughan) Date: Sun, 14 Oct 2012 19:21:46 -0400 Subject: $request_uri greater than 8 characters causes core dump Message-ID: Nginx dumps core if the $request_uri is not found and is more than 8 characters. For example: $ GET -Sed http://localhost/en/12345 GET http://localhost/en/12345 200 OK Connection: close Date: Sun, 14 Oct 2012 23:13:40 GMT Accept-Ranges: bytes Server: nginx/1.1.19 Content-Length: 4464 Content-Type: text/html Last-Modified: Sun, 14 Oct 2012 22:21:45 GMT Client-Date: Sun, 14 Oct 2012 23:13:40 GMT Client-Peer: 127.0.0.1:80 Client-Response-Num: 1 Title: Page Not Found :( X-Meta-Charset: utf-8 $ GET -Sed http://localhost/en/123456 GET http://localhost/en/123456 500 Server closed connection without sending any data back Content-Type: text/plain Client-Date: Sun, 14 Oct 2012 23:13:43 GMT Client-Warning: Internal response But a longer $request_uri that is found is OK. 
For example: $ GET -Sed http://localhost/en/about/index.html GET http://localhost/en/about/index.html 200 OK Connection: close Date: Sun, 14 Oct 2012 23:15:20 GMT Accept-Ranges: bytes Server: nginx/1.1.19 Content-Length: 3050 Content-Type: text/html Last-Modified: Sun, 14 Oct 2012 19:37:32 GMT Client-Date: Sun, 14 Oct 2012 23:15:20 GMT Client-Peer: 127.0.0.1:80 Client-Response-Num: 1 Title: X-Meta-Author: X-Meta-Charset: utf-8 This is on Ubuntu 12.04 x86 64-bit with nginx 1.1.19 (the `nginx-light` package). This is my setup: server { listen 80; server_name localhost; root /srv/www/site; index index.html; error_page 404 = @404; location @404 { try_files /404.html /404/index.html; } location ~ ^/(404|css|js|font|img|en|es) { try_files $uri $uri/ =404; } location / { rewrite ^ $scheme://$server_name:$server_port/en$request_uri? redirect; } } What in the world? Thanks. -Tom Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231787,231787#msg-231787 From sahmed1020 at gmail.com Mon Oct 15 03:56:44 2012 From: sahmed1020 at gmail.com (S Ahmed) Date: Sun, 14 Oct 2012 23:56:44 -0400 Subject: redirect urls Message-ID: I'm porting a .asp application over to something else, and I need to re-write the following url pattern: www.example.com/posts/get_post.asp?post_id=123 to www.example.com/posts/123 How can I do that? -------------- next part -------------- An HTML attachment was scrubbed... URL: From aps2891 at gmail.com Mon Oct 15 04:29:06 2012 From: aps2891 at gmail.com (Aparna Bhat) Date: Mon, 15 Oct 2012 09:59:06 +0530 Subject: Load Balancing Algorithm Message-ID: Hi All, Is it possible for me to change the load balancing algorithm used in Nginx.?? Thank you in advance for the reply, aps. 
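As a reference point for the load-balancing question above: in stock nginx the balancing method is selected per upstream block, with round-robin as the default. A minimal sketch (the backend addresses here are made up):

```nginx
# Default: round-robin, no directive needed.
upstream app_rr {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

# Send each request to the server with the fewest active connections.
upstream app_least {
    least_conn;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

# Pin clients to servers based on the client IP address.
upstream app_sticky {
    ip_hash;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}
```

A server block then refers to one of these, e.g. proxy_pass http://app_least;.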
From edho at myconan.net Mon Oct 15 04:44:17 2012 From: edho at myconan.net (Edho Arief) Date: Mon, 15 Oct 2012 11:44:17 +0700 Subject: redirect urls In-Reply-To: References: Message-ID: On Mon, Oct 15, 2012 at 10:56 AM, S Ahmed wrote: > I'm porting a .asp application over to something else, and I need to > re-write the following url pattern: > > www.example.com/posts/get_post.asp?post_id=123 > > to > > www.example.com/posts/123 > location = /posts/get_post.asp { return 301 /posts/$arg_post_id; } From nginx-forum at nginx.us Mon Oct 15 05:56:25 2012 From: nginx-forum at nginx.us (justin) Date: Mon, 15 Oct 2012 01:56:25 -0400 Subject: Dynamic IP Whitelisting From A Database Message-ID: <18d537963704cb95affce1d02561fc00.NginxMailingListEnglish@forum.nginx.org> Hi, We have a software as a service product where each user get's their own hosted subdomain like (me.domain.com). We would like to implement whitelisting, where users can specify certain IP addresses that are allowed to view their site. This is implemented like: location / { allow 1.2.3.4; allow 2.3.4.5; deny all; } The problem is that we don't want to deal with the application code touching/writing the nginx configuration when a user makes changes to the whitelist. I was thinking it would be awesome if nginx could read this from a database (MySQL or SQLite), and auto populate the allow list from that. Is this possible? If not, what is the best way to implement whitelisting, without the application code reading and writing the nginx configuration file. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231793,231793#msg-231793 From ne at vbart.ru Mon Oct 15 07:40:56 2012 From: ne at vbart.ru (Valentin V. 
Bartenev) Date: Mon, 15 Oct 2012 11:40:56 +0400 Subject: $request_uri greater than 8 characters causes core dump In-Reply-To: References: Message-ID: <201210151140.56126.ne@vbart.ru> On Monday 15 October 2012 03:21:46 tvaughan wrote: > Nginx dumps core if the $request_uri is not found and is more than 8 > characters. For example: > [...] > > This is on Ubuntu 12.04 x86 64-bit with nginx 1.1.19 (the `nginx-light` > package). > [...] > > What in the world? Thanks. > You use an old nginx version from development branch with well-known bug. wbr, Valentin V. Bartenev -- http://nginx.com/support.html From igor at sysoev.ru Mon Oct 15 08:05:45 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 15 Oct 2012 12:05:45 +0400 Subject: Override a set_empty variable from http params In-Reply-To: References: Message-ID: <20121015080544.GA41590@nginx.com> On Sun, Oct 14, 2012 at 11:30:22AM -0700, Quintin Par wrote: > Hi all, > > I set the some params to my backend server as shown below > > set_if_empty $country_code $http_x_country_code; > > proxy_set_header X-Country-Code $country_code; > > Now the http_x_country_code comes via the GeoIP module. > > Is there a way here I can override this based on an HTTP param I can set. > > Say site.com/country=US and country variable has greater precedence over > the http_x_country_code. Using "map" you can define: http { map $arg_country $country_code { "" $header_country; default $arg_country; } map $http_x_country_code $header_country { "" $geoip_country_code; default $http_x_country_code; } ... location / { proxy_set_header X-Country-Code $country_code; ... -- Igor Sysoev http://nginx.com/support.html From mdounin at mdounin.ru Mon Oct 15 10:02:22 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Oct 2012 14:02:22 +0400 Subject: Load Balancing Algorithm In-Reply-To: References: Message-ID: <20121015100222.GE40452@mdounin.ru> Hello! 
On Mon, Oct 15, 2012 at 09:59:06AM +0530, Aparna Bhat wrote: > Is it possible for me to change the load balancing algorithm used in Nginx.?? Yes. As of vanilla nginx, the following balancing algorithms are available: - round-robin (default) - least conn, see http://nginx.org/r/least_conn - ip hash, see http://nginx.org/r/ip_hash It is also possible to use 3rd party balancing modules (there are number of them available, see http://wiki.nginx.org/3rdPartyModules). And obviously you may write your own. -- Maxim Dounin http://nginx.com/support.html From sahmed1020 at gmail.com Mon Oct 15 14:26:06 2012 From: sahmed1020 at gmail.com (S Ahmed) Date: Mon, 15 Oct 2012 10:26:06 -0400 Subject: redirect urls In-Reply-To: References: Message-ID: So when nginx responds with a 301, is that an additional rountrip for the user? Could I somehow maintain the url for the client, but internally re-write it to the updated url so my web application can handle it correctly? On Mon, Oct 15, 2012 at 12:44 AM, Edho Arief wrote: > On Mon, Oct 15, 2012 at 10:56 AM, S Ahmed wrote: > > I'm porting a .asp application over to something else, and I need to > > re-write the following url pattern: > > > > www.example.com/posts/get_post.asp?post_id=123 > > > > to > > > > www.example.com/posts/123 > > > > location = /posts/get_post.asp { > return 301 /posts/$arg_post_id; > } > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From highclass99 at gmail.com Mon Oct 15 15:57:23 2012 From: highclass99 at gmail.com (highclass99) Date: Tue, 16 Oct 2012 00:57:23 +0900 Subject: About the least_conn (http://nginx.org/r/least_conn) load balancing option and php-fpm Message-ID: Hello, I'm testing the least_conn option for php-fpm. I'm curious whether what I should set "listen.backlog" in php-fpm.conf settings? Should I try listen.backlog=0? 
or some other setting for least_conn load balancing? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From edho at myconan.net Mon Oct 15 16:16:22 2012 From: edho at myconan.net (Edho Arief) Date: Mon, 15 Oct 2012 23:16:22 +0700 Subject: redirect urls In-Reply-To: References: Message-ID: On Mon, Oct 15, 2012 at 9:26 PM, S Ahmed wrote: > So when nginx responds with a 301, is that an additional rountrip for the > user? > > Could I somehow maintain the url for the client, but internally re-write it > to the updated url so my web application can handle it correctly? > rewrite ^ /posts/$arg_post_id; From sahmed1020 at gmail.com Mon Oct 15 19:55:53 2012 From: sahmed1020 at gmail.com (S Ahmed) Date: Mon, 15 Oct 2012 15:55:53 -0400 Subject: redirect urls In-Reply-To: References: Message-ID: so like this: location = /posts/get_post.asp { rewrite ^ /posts/$arg_post_id; } correct? On Mon, Oct 15, 2012 at 12:16 PM, Edho Arief wrote: > On Mon, Oct 15, 2012 at 9:26 PM, S Ahmed wrote: > > So when nginx responds with a 301, is that an additional rountrip for the > > user? > > > > Could I somehow maintain the url for the client, but internally re-write > it > > to the updated url so my web application can handle it correctly? > > > > rewrite ^ /posts/$arg_post_id; > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john at disqus.com Mon Oct 15 21:32:44 2012 From: john at disqus.com (John Watson) Date: Mon, 15 Oct 2012 14:32:44 -0700 Subject: gzip filter on streaming responses Message-ID: <20121015213244.GA1337@asylum.local> I did some investigation and the gzip filter will only activate if there is a Content-Length header with valid length. Is there any way of deflating streaming responses from the nginx push stream module? 
Where there isn't a known content length, but potential for thousands of messages to be transferred? Regards, John -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 834 bytes Desc: not available URL: From jdorfman at netdna.com Mon Oct 15 21:57:27 2012 From: jdorfman at netdna.com (Justin Dorfman) Date: Mon, 15 Oct 2012 14:57:27 -0700 Subject: gzip filter on streaming responses In-Reply-To: <20121015213244.GA1337@asylum.local> References: <20121015213244.GA1337@asylum.local> Message-ID: Have you tried? gzip on; gzip_min_length 0; Regards, Justin Dorfman NetDNA - The Science of Acceleration On Mon, Oct 15, 2012 at 2:32 PM, John Watson wrote: > I did some investigation and the gzip filter will only activate if there > is a Content-Length header with valid length. > > Is there any way of deflating streaming responses from the nginx push > stream module? Where there isn't a known content length, but > potential for thousands of messages to be transferred? > > Regards, > > John > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john at disqus.com Tue Oct 16 07:30:29 2012 From: john at disqus.com (John Watson) Date: Tue, 16 Oct 2012 00:30:29 -0700 Subject: Handling 500k concurrent connections on Linux In-Reply-To: <20121010201142.GA21197@asylum.local> References: <20121009191003.GA26608@asylum.local> <80E7F6B1-CBE0-4E72-943D-5B915ECF5DC0@nginx.com> <20121010201142.GA21197@asylum.local> Message-ID: <20121016073029.GA56487@asylum.local> After a bit more digging I discovered that Nginx sets the backlog on the listen socket to only 511 (at least on Linux), not the -1 in the docs. 
Also for reference, backlog on a listen socket is silently limited to net.core.somaxconn (which defaults to 128) so make sure to increase that and other necessary tunings as well. On Wed, Oct 10, 2012 at 01:11:42PM -0700, John Watson wrote: > 1) Error logs are clean (except for some 404s) > > 2) nginx.conf and sysctl.conf: https://gist.github.com/0b3b52050254e273ff11 > > Set TX/RX descriptors to 4096/4096 (maximum): > ethtool -G eth1 tx 4096 rx 4096 > > Disabled irqbalanced and pinned IRQs to CPU0-7 for NIC > > Don't know exact amount, but a good majority of the connections are > sitting idle for 90s before being closed. > > Some graphs on the network interface for past couple days: > https://www.dropbox.com/s/0bl304ulhqp6a4n/push_stream_network.png > > Thank you, > > John W > > On Wed, Oct 10, 2012 at 01:05:05PM +0400, Andrew Alexeev wrote: > > John, > > > > On Oct 9, 2012, at 11:10 PM, John Watson wrote: > > > > > I was wondering if anyone had some tips/guidelines for scaling Nginx on > > > Linux to >500k concurrent connections. Playing with the > > > nginx_http_push_stream module in streaming mode. Noticing periodic slow > > > accept and/or response headers. I've scoured the Internet > > > looking/learning ways to tune Nginx/Linux but I think I've exhausted my > > > abilities. > > > > > > Any help would be appreciated. > > > > > > Hardware > > > Dual Nehalem 5520 > > > 24G RAM > > > Intel 82576 (igb) > > > Ubuntu 12.04.1 (3.2.0-31-generic x86_64) > > > > > > Thank You, > > > > > > John W > > > > I'd assume you've already checked/fixed the following, right? > > > > 1) Error logs - anything wrong seen in there? > > > > 2) http://nginx.org/en/docs/ngx_core_module.html#multi_accept and http://nginx.org/en/docs/ngx_core_module.html#accept_mutex - did you try it on/off? 
> > > > 3) file descriptors limits (cat /proc/sys/fs/file-max, sudo - nginx && ulimit, worker_rlimit_nofile) > > > > 4) sysctl net.ipv4.ip_local_port_range (if you're aiming at proxying all those connections to upstreams) > > > > Additional information about what's happening in all those 500k connections might be helpful, as well as the relevant configuration section :) > > > > Hope this helps > > > > > > -- > > AA @ nginx > > http://nginx.com/support.html > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 834 bytes Desc: not available URL: From nginx-forum at nginx.us Tue Oct 16 08:43:34 2012 From: nginx-forum at nginx.us (justin) Date: Tue, 16 Oct 2012 04:43:34 -0400 Subject: connect() to unix:/var/run/php-fpm/php.sock failed Message-ID: <3e1c173a43c015a9d83fce1a881056f7.NginxMailingListEnglish@forum.nginx.org> Just finished running a benchmark with http://blitz.io on a development application written in PHP (using php-fpm with a unix socket, not TCP) connecting to MySQL. Started with 50 concurrent connections and scaled up to 250. Got lots of timeouts and errors. In my nginx log, the following is logged a bunch of times: 2012/10/16 01:28:29 [error] 15019#0: *26418 connect() to unix:/var/run/php-fpm/php.sock failed (11: Resource temporarily unavailable) while connecting to upstream, What could cause this? 
I am using nginx 1.2.4 with the following performance tweaks: worker_processes 2; worker_connections 1024; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 30; gzip on; gzip_proxied any; gzip_comp_level 4; gzip_disable "msie6"; gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript application/json PHP-FPM is configured with 8 static workers and listen.backlog = -1. Any idea how I can reduce the failures connecting to upstream (PHP-FPM)? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231839,231839#msg-231839 From jerome at loyet.net Tue Oct 16 08:47:53 2012 From: jerome at loyet.net (=?ISO-8859-1?B?Suly9G1lIExveWV0?=) Date: Tue, 16 Oct 2012 10:47:53 +0200 Subject: connect() to unix:/var/run/php-fpm/php.sock failed In-Reply-To: <3e1c173a43c015a9d83fce1a881056f7.NginxMailingListEnglish@forum.nginx.org> References: <3e1c173a43c015a9d83fce1a881056f7.NginxMailingListEnglish@forum.nginx.org> Message-ID: 2012/10/16 justin > > Just finished running a benchmark with http://blitz.io on a development > application written in PHP (using php-fpm with a unix socket, not TCP) > connecting to MySQL. Started with 50 concurrent connections and scaled up to > 250. Got lots of timeouts and errors. In my nginx log, the following is > logged a bunch of times: > > 2012/10/16 01:28:29 [error] 15019#0: *26418 connect() to > unix:/var/run/php-fpm/php.sock failed (11: Resource temporarily unavailable) > while connecting to upstream, > > What could cause this? 
I am using nginx 1.2.4 with the following performance > tweaks: > > worker_processes 2; > worker_connections 1024; > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 30; > gzip on; > gzip_proxied any; > gzip_comp_level 4; > gzip_disable "msie6"; > gzip_types text/plain text/html text/css application/x-javascript > text/xml application/xml application/xml+rss text/javascript > application/javascript application/json > > PHP-FPM is configured with 8 static workers and listen.backlog = -1. Any > idea how I can reduce the failures connecting to upstream (PHP-FPM)? > try setting listen.backlog to 128 in FPM configuration. It may be a bug with listen.backlog = -1 on some FPM version. ++ jerome From ne at vbart.ru Tue Oct 16 08:55:30 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Tue, 16 Oct 2012 12:55:30 +0400 Subject: Handling 500k concurrent connections on Linux In-Reply-To: <20121016073029.GA56487@asylum.local> References: <20121009191003.GA26608@asylum.local> <20121010201142.GA21197@asylum.local> <20121016073029.GA56487@asylum.local> Message-ID: <201210161255.30246.ne@vbart.ru> On Tuesday 16 October 2012 11:30:29 John Watson wrote: > After a bit more digging I discovered that Nginx sets the backlog on the > listen socket to only 511 (at least on Linux), not the -1 in the docs. > [...] Docs: "By default, backlog is set to -1 on FreeBSD, and to 511 on other platforms." @ http://nginx.org/r/listen wbr, Valentin V. 
Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Oct 16 08:58:27 2012 From: nginx-forum at nginx.us (justin) Date: Tue, 16 Oct 2012 04:58:27 -0400 Subject: connect() to unix:/var/run/php-fpm/php.sock failed In-Reply-To: <3e1c173a43c015a9d83fce1a881056f7.NginxMailingListEnglish@forum.nginx.org> References: <3e1c173a43c015a9d83fce1a881056f7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0214fccf5be0ed9689a182751586d348.NginxMailingListEnglish@forum.nginx.org> I set PHP-FPM to: listen.backlog = 256 and restart PHP-FPM. It was slightly (very little) better, but still: This rush generated 6,114 successful hits in 1.0 min and we transferred 34.39 MB of data in and out of your app. The average hit rate of 98/second translates to about 8,476,385 hits/day. The average response time was 229 ms. You've got bigger problems, though: 10.77% of the users during this rush experienced timeouts or errors! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231839,231846#msg-231846 From someukdeveloper at gmail.com Tue Oct 16 08:58:56 2012 From: someukdeveloper at gmail.com (Some Developer) Date: Tue, 16 Oct 2012 09:58:56 +0100 Subject: Encrypting traffic between Nginx and App Server backend (via FastCGI) Message-ID: <507D21D0.5040808@gmail.com> I was wondering if it was possible to encrypt traffic between an Nginx front end and an app server back end? I have a Django application running and can have the traffic between the Django app and the database encrypted using SSL and can have traffic from the internet at large and Nginx encrypted also using SSL but I need to have the traffic between the Nginx server and the Django app server encrypted as well. Is there a method to accomplish this? Thank you for any help. 
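For the stunnel route to encrypting nginx-to-backend traffic, a minimal sketch of the idea: run one stunnel in client mode next to nginx and one in server mode next to the backend, then point fastcgi_pass (or proxy_pass) at the local stunnel port. All hosts, ports, and the certificate path below are hypothetical:

```ini
; stunnel.conf on the nginx host: accept plaintext locally,
; forward it over TLS to the app server.
[django-tls]
client = yes
accept = 127.0.0.1:9001
connect = 10.0.0.2:9002

; stunnel.conf on the Django host (a separate file): terminate TLS,
; hand plaintext to the local FastCGI/uWSGI listener.
[django-tls]
cert = /etc/stunnel/backend.pem
accept = 9002
connect = 127.0.0.1:9000
```

nginx would then use fastcgi_pass 127.0.0.1:9001; and only TLS-encrypted traffic crosses the wire between the two machines.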
From edho at myconan.net Tue Oct 16 09:31:54 2012 From: edho at myconan.net (Edho Arief) Date: Tue, 16 Oct 2012 16:31:54 +0700 Subject: Encrypting traffic between Nginx and App Server backend (via FastCGI) In-Reply-To: <507D21D0.5040808@gmail.com> References: <507D21D0.5040808@gmail.com> Message-ID: On Tue, Oct 16, 2012 at 3:58 PM, Some Developer wrote: > I was wondering if it was possible to encrypt traffic between an Nginx front > end and an app server back end? I have a Django application running and can > have the traffic between the Django app and the database encrypted using SSL > and can have traffic from the internet at large and Nginx encrypted also > using SSL but I need to have the traffic between the Nginx server and the > Django app server encrypted as well. Is there a method to accomplish this? > I know two methods 1. nginx on django end with ssl setup 2. stunnel or similar From sb at waeme.net Tue Oct 16 09:34:29 2012 From: sb at waeme.net (Sergey Budnevitch) Date: Tue, 16 Oct 2012 13:34:29 +0400 Subject: connect() to unix:/var/run/php-fpm/php.sock failed In-Reply-To: <0214fccf5be0ed9689a182751586d348.NginxMailingListEnglish@forum.nginx.org> References: <3e1c173a43c015a9d83fce1a881056f7.NginxMailingListEnglish@forum.nginx.org> <0214fccf5be0ed9689a182751586d348.NginxMailingListEnglish@forum.nginx.org> Message-ID: <30D075ED-B5F0-4A8C-9E37-396B19508D42@waeme.net> On 16 Oct2012, at 12:58 , justin wrote: > I set PHP-FPM to: > > listen.backlog = 256 > > and restart PHP-FPM. It was slightly (very little) better, but still: > > This rush generated 6,114 successful hits in 1.0 min and we transferred > 34.39 MB of data in and out of your app. The average hit rate of 98/second > translates to about 8,476,385 hits/day. > > The average response time was 229 ms. It is simple math: you have 8 php workers, each request take ~ 0.2s, most of that time request spend in php, so you are able to serve about 40 requests per second. 
Either increase number of php worker processes and/or buy new hardware, or tune your php application, use caching and so on. From nginx-forum at nginx.us Tue Oct 16 10:21:00 2012 From: nginx-forum at nginx.us (revirii) Date: Tue, 16 Oct 2012 06:21:00 -0400 Subject: upstream: grant recovery time to backend server Message-ID: <676d5f24ff5a61aeaab3da3433687a59.NginxMailingListEnglish@forum.nginx.org> Hello, i have a load balancing setup running with nginx in front and 2 tomcats as backend: upstream backend { ip_hash; server 192.168.1.100; server 192.168.1.101; } Runs fine so far, but from time to time one of the tomcats crashes and needs to be restarted. As soon as nginx notices that the tomcat is coming up, nginx starts to forward requests to the starting-up tomcat. But it would be better that the tomcat starts up completly before nginx starts sending requests to it. Is there any possibility to grant a recovery time to a backend server before the backend receives requests? I.e. something like "nginx: hey, i noticed that the formerly crashed backend server is coming up, but i'll wait 300s before sending requests to it"? thx in advance revirii Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231854,231854#msg-231854 From mdounin at mdounin.ru Tue Oct 16 10:41:17 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Oct 2012 14:41:17 +0400 Subject: gzip filter on streaming responses In-Reply-To: <20121015213244.GA1337@asylum.local> References: <20121015213244.GA1337@asylum.local> Message-ID: <20121016104117.GQ40452@mdounin.ru> Hello! On Mon, Oct 15, 2012 at 02:32:44PM -0700, John Watson wrote: > I did some investigation and the gzip filter will only activate if there > is a Content-Length header with valid length. This is not true. With Content-Length present gzip filter is able to more effectively allocate buffers (or skip responses as per gzip_min_length), but it isn't limited to responses with Content-Length present. 
> Is there any way of deflating streaming responses from the nginx push > stream module? Where is there isn't a known content length, but > potential for thousands of messages to be transferred? As long as push stream module does things correctly it should work, but that's the question more about (3rd party) push stream module, not gzip filter. -- Maxim Dounin http://nginx.com/support.html From someukdeveloper at gmail.com Tue Oct 16 10:44:14 2012 From: someukdeveloper at gmail.com (Some Developer) Date: Tue, 16 Oct 2012 11:44:14 +0100 Subject: Encrypting traffic between Nginx and App Server backend (via FastCGI) In-Reply-To: References: <507D21D0.5040808@gmail.com> Message-ID: <507D3A7E.5030305@gmail.com> On 16/10/2012 10:31, Edho Arief wrote: > On Tue, Oct 16, 2012 at 3:58 PM, Some Developer > wrote: >> I was wondering if it was possible to encrypt traffic between an Nginx front >> end and an app server back end? I have a Django application running and can >> have the traffic between the Django app and the database encrypted using SSL >> and can have traffic from the internet at large and Nginx encrypted also >> using SSL but I need to have the traffic between the Nginx server and the >> Django app server encrypted as well. Is there a method to accomplish this? >> > > I know two methods > 1. nginx on django end with ssl setup > 2. stunnel or similar stunnel looks good. Thanks. From mdounin at mdounin.ru Tue Oct 16 10:50:45 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Oct 2012 14:50:45 +0400 Subject: upstream: grant recovery time to backend server In-Reply-To: <676d5f24ff5a61aeaab3da3433687a59.NginxMailingListEnglish@forum.nginx.org> References: <676d5f24ff5a61aeaab3da3433687a59.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121016105045.GR40452@mdounin.ru> Hello! 
On Tue, Oct 16, 2012 at 06:21:00AM -0400, revirii wrote: > Hello, > > i have a load balancing setup running with nginx in front and 2 tomcats as > backend: > > upstream backend { > ip_hash; > server 192.168.1.100; > server 192.168.1.101; > } > > Runs fine so far, but from time to time one of the tomcats crashes and needs > to be restarted. As soon as nginx notices that the tomcat is coming up, > nginx starts to forward requests to the starting-up tomcat. But it would be > better that the tomcat starts up completly before nginx starts sending > requests to it. > > Is there any possibility to grant a recovery time to a backend server before > the backend receives requests? I.e. something like "nginx: hey, i noticed > that the formerly crashed backend server is coming up, but i'll wait 300s > before sending requests to it"? Don't open listen sockets on a backend server unless you think it's started and ready to handle requests? -- Maxim Dounin http://nginx.com/support.html From andrew at nginx.com Tue Oct 16 11:41:32 2012 From: andrew at nginx.com (Andrew Alexeev) Date: Tue, 16 Oct 2012 15:41:32 +0400 Subject: nginx/websockets Message-ID: Hi folks, If there are companies here that would benefit from commercially sponsoring implementation of WebSocket Protocol in nginx, feel free to email me. Many thanks -- AA @ nginx http://nginx.com From nginx-forum at nginx.us Tue Oct 16 12:03:15 2012 From: nginx-forum at nginx.us (fhosting) Date: Tue, 16 Oct 2012 08:03:15 -0400 Subject: [ANNOUNCE] ngx_slowfs_cache-1.8 In-Reply-To: References: Message-ID: <0792e6d7229d53dae60074d0fe579976.NginxMailingListEnglish@forum.nginx.org> Any plans to integrate with nginx MP4 module or FLV? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,223126,231859#msg-231859 From trm.nagios at gmail.com Tue Oct 16 16:55:37 2012 From: trm.nagios at gmail.com (trm asn) Date: Tue, 16 Oct 2012 22:25:37 +0530 Subject: Condition in location for proxy_pass Message-ID: Dear List , I need to do a conditional proxy_pass , but it's not happening as expected . like set $target http://www.example.com/sso/url.ping?TargetUrl=http://www.example.com/home And that $target , I would like to check with $request_uri in below segment . location /sso/ { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; ---------- ---------- proxy_pass http://10.20.137.21; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Oct 16 16:56:56 2012 From: nginx-forum at nginx.us (justin) Date: Tue, 16 Oct 2012 12:56:56 -0400 Subject: connect() to unix:/var/run/php-fpm/php.sock failed In-Reply-To: <30D075ED-B5F0-4A8C-9E37-396B19508D42@waeme.net> References: <30D075ED-B5F0-4A8C-9E37-396B19508D42@waeme.net> Message-ID: Sergey, Just tried with 50 PHP-FPM workers, and performance was even worse: > This rush generated 5,004 successful hits in 1.0 min and we transferred 47.39 MB of data in and out of your app. The average hit rate of 80/second translates to about 6,982,940 > hits/day. > > The average response time was 276 ms. > > You've got bigger problems, though: 20.67% of the users during this rush experienced timeouts or errors! 
Same problem logged in /var/log/nginx/error.log: connect() to unix:/var/run/php-fpm/php.sock failed (11: Resource temporarily unavailable) while connecting to upstream Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231839,231872#msg-231872 From trm.nagios at gmail.com Tue Oct 16 17:00:21 2012 From: trm.nagios at gmail.com (trm asn) Date: Tue, 16 Oct 2012 22:30:21 +0530 Subject: Condition in location for proxy_pass In-Reply-To: References: Message-ID: Dear List , I need to do a conditional proxy_pass , but it's not happening as expected . like set $target http://www.example.com/sso/url.ping?TargetUrl=http://www.example.com/home And that $target , I would like to check with $request_uri in below segment . location /sso/ { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; ---------- if ($target ~ $request_uri) { proxy_pass http://10.20.137.21; } else { return 403; } ------------ } Is that possible achieve inside "location" for "proxy_pass" . --trm -------------- next part -------------- An HTML attachment was scrubbed... URL: From duanemulder at rattyshack.ca Tue Oct 16 17:35:33 2012 From: duanemulder at rattyshack.ca (rattyshack) Date: Tue, 16 Oct 2012 17:35:33 -0000 Subject: Http 1.1 chucking support Message-ID: <20121016173547.4DFF95EC072@homiemail-a75.g.dreamhost.com> An HTML attachment was scrubbed... URL: From trm.nagios at gmail.com Tue Oct 16 17:48:30 2012 From: trm.nagios at gmail.com (trm asn) Date: Tue, 16 Oct 2012 23:18:30 +0530 Subject: Http 1.1 chucking support In-Reply-To: <20121016173547.4DFF95EC072@homiemail-a75.g.dreamhost.com> References: <20121016173547.4DFF95EC072@homiemail-a75.g.dreamhost.com> Message-ID: On Tue, Oct 16, 2012 at 11:05 PM, rattyshack wrote: > Hello list. > So we are running nginx 1.2.4 and when connecting with an svn client using > the service library nginx is returning the following error. 
> XML parsing failed: (411 Length Required) > From what I can tell I need to compile in the httpchunkinmodule. However > the module was last tested on the website with version 1.1.5. > Does 1.2.4 have support for httpchunkinmodule? And how would I use it. > > Duane > > Sent from my BlackBerry® PlayBook™ > www.blackberry.com > > > use the below directive after proxy_pass. proxy_http_version 1.1; I think it'll solve the issue. --trm -------------- next part -------------- An HTML attachment was scrubbed... URL: From ianevans at digitalhit.com Tue Oct 16 18:44:08 2012 From: ianevans at digitalhit.com (Ian M. Evans) Date: Tue, 16 Oct 2012 14:44:08 -0400 Subject: try_files rewrite drops other get variables Message-ID: <85347df532bc998c8a50e8f6ad9ac39c.squirrel@www.digitalhit.com> A couple of weeks ago in "Updating some old 'if' statements" (http://forum.nginx.org/read.php?2,231164,231164#msg-231164) it was suggested I use try_files instead of the 'if' I had. The old location did two things: It ran an extensionless file as a php file and also checked for the existence of a statically cached file. Though the rewrite is successfully serving the static file and handling the extensionless php fine, I didn't initially notice that other GET variables like ?page are getting ignored.
Here's the old 'if' location that passed on all variables just fine: old: location ~ ^/galleries(/.*$|$) { if (-f /usr/local/nginx/htdocs/pixcache$request_uri/index.html) { expires 2h; rewrite ^(.*)$ /pixcache$1/index.html last; break; } rewrite ^/galleries(/.*$|$) /galleries.php?mypath=$1 last; } and the current location with try_files location ~ ^/galleries(?P<mypath>/.*$|$) { expires 2h; try_files /pixcache$request_uri/index.html /galleries.php?mypath=$mypath; } Those familiar with my posts know I suck at regex, but I'm assuming the rewrite in the old location successfully took URLs like: /galleries/129/1/3?page=2 and passed them to php as: /galleries.php?mypath=/129/1/3&page=2 while something in the new location is dropping any additional get variables? The old location has been working successfully in production for several years but I thought I should get rid of the evil if. The try_files version, as I said, works except for the dropping of additional get variables in the URL. From john at disqus.com Tue Oct 16 18:53:34 2012 From: john at disqus.com (John Watson) Date: Tue, 16 Oct 2012 11:53:34 -0700 Subject: gzip filter on streaming responses In-Reply-To: <20121016104117.GQ40452@mdounin.ru> References: <20121015213244.GA1337@asylum.local> <20121016104117.GQ40452@mdounin.ru> Message-ID: <20121016185334.GA44758@asylum.local> That makes much more sense. Thank you for clarifying how the gzip filter works. There is a patch forthcoming for the push stream module that adds the necessary information to get gzip working. Regards, John On Tue, Oct 16, 2012 at 02:41:17PM +0400, Maxim Dounin wrote: > Hello! > > On Mon, Oct 15, 2012 at 02:32:44PM -0700, John Watson wrote: > > > I did some investigation and the gzip filter will only activate if there > > is a Content-Length header with valid length. > > This is not true.
With Content-Length present gzip filter is able > to more effectively allocate buffers (or skip responses as per > gzip_min_length), but it isn't limited to responses with > Content-Length present. > > > Is there any way of deflating streaming responses from the nginx push > > stream module? Where is there isn't a known content length, but > > potential for thousands of messages to be transferred? > > As long as push stream module does things correctly it should > work, but that's the question more about (3rd party) push stream > module, not gzip filter. > > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 834 bytes Desc: not available URL: From andrew at nginx.com Tue Oct 16 19:26:01 2012 From: andrew at nginx.com (Andrew Alexeev) Date: Tue, 16 Oct 2012 23:26:01 +0400 Subject: Http 1.1 chucking support In-Reply-To: <20121016173547.4DFF95EC072@homiemail-a75.g.dreamhost.com> References: <20121016173547.4DFF95EC072@homiemail-a75.g.dreamhost.com> Message-ID: <961BC75E-2DC6-4D4E-9842-D58EA0B51F5F@nginx.com> On Oct 16, 2012, at 9:35 PM, rattyshack wrote: > Hello list. > So we are running nginx 1.2.4 and when connecting with an svn client using the service library nginx is returning the following error. > XML parsing failed: (411 Length Required) > From what I can tell I need to compile in the httpchunkinmodule. However the module was last tested on the website with version 1.1.5. > Does 1.2.4 have support for httpchunkinmodule? And how would I use it. The author should be able to answer in full :) There's work ongoing to make full chunked encoding on input work out of the stock nginx. Should be ready somewhere around Dec 2013. > Duane > > Sent from my BlackBerry? PlayBook? 
> www.blackberry.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From andrew at nginx.com Tue Oct 16 19:42:09 2012 From: andrew at nginx.com (Andrew Alexeev) Date: Tue, 16 Oct 2012 23:42:09 +0400 Subject: Http 1.1 chucking support In-Reply-To: <961BC75E-2DC6-4D4E-9842-D58EA0B51F5F@nginx.com> References: <20121016173547.4DFF95EC072@homiemail-a75.g.dreamhost.com> <961BC75E-2DC6-4D4E-9842-D58EA0B51F5F@nginx.com> Message-ID: On Oct 16, 2012, at 11:26 PM, Andrew Alexeev wrote: > On Oct 16, 2012, at 9:35 PM, rattyshack wrote: > >> Hello list. >> So we are running nginx 1.2.4 and when connecting with an svn client using the service library nginx is returning the following error. >> XML parsing failed: (411 Length Required) >> From what I can tell I need to compile in the httpchunkinmodule. However the module was last tested on the website with version 1.1.5. >> Does 1.2.4 have support for httpchunkinmodule? And how would I use it. > > The author should be able to answer in full :) There's work ongoing to make full chunked encoding on input work out of the stock nginx. Should be ready somewhere around Dec 2013. That one was really good. I meant Dec 2012. >> Duane >> >> Sent from my BlackBerry? PlayBook? >> www.blackberry.com >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From duanemulder at rattyshack.ca Wed Oct 17 03:31:36 2012 From: duanemulder at rattyshack.ca (Duane) Date: Tue, 16 Oct 2012 23:31:36 -0400 Subject: Http 1.1 chucking support In-Reply-To: References: <20121016173547.4DFF95EC072@homiemail-a75.g.dreamhost.com> Message-ID: <507E2698.2020009@rattyshack.ca> Thanks I will give that a try. 
I did not know that options was there :) when proxy_http_version 1.1 is enabled I am assuming that clients that are 1.0 only are still supported. Duane On 12-10-16 1:48 PM, trm asn wrote: > > > On Tue, Oct 16, 2012 at 11:05 PM, rattyshack > > wrote: > > Hello list. > So we are running nginx 1.2.4 and when connecting with an svn > client using the service library nginx is returning the following > error. > XML parsing failed: (411 Length Required) > From what I can tell I need to compile in the httpchunkinmodule. > However the module was last tested on the website with version 1.1.5. > Does 1.2.4 have support for httpchunkinmodule? And how would I > use it. > > Duane > > Sent from my BlackBerry? PlayBook^(TM) > www.blackberry.com > > > > use the below function after proxy_pass. > > proxy_http_version 1.1; > > i think it'll solve the issue. > > --trm > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From quintinpar at gmail.com Wed Oct 17 04:05:21 2012 From: quintinpar at gmail.com (Quintin Par) Date: Wed, 17 Oct 2012 09:35:21 +0530 Subject: Override a set_empty variable from http params In-Reply-To: <20121015080544.GA41590@nginx.com> References: <20121015080544.GA41590@nginx.com> Message-ID: Igor, Is this correct? set $country_code $arg_country; set_if_empty $country_code $geoip_country_code; proxy_set_header X-Country-Code $country_code; On Mon, Oct 15, 2012 at 1:35 PM, Igor Sysoev wrote: > On Sun, Oct 14, 2012 at 11:30:22AM -0700, Quintin Par wrote: > > Hi all, > > > > I set the some params to my backend server as shown below > > > > set_if_empty $country_code $http_x_country_code; > > > > proxy_set_header X-Country-Code $country_code; > > > > Now the http_x_country_code comes via the GeoIP module. > > > > Is there a way here I can override this based on an HTTP param I can set. 
> > > > Say site.com/country=US and country variable has greater precedence over > > the http_x_country_code. > > Using "map" you can define: > > http { > > map $arg_country $country_code { > "" $header_country; > default $arg_country; > } > > map $http_x_country_code $header_country { > "" $geoip_country_code; > default $http_x_country_code; > } > > ... > > location / { > proxy_set_header X-Country-Code $country_code; > ... > > > -- > Igor Sysoev > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ne at vbart.ru Wed Oct 17 07:57:05 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 17 Oct 2012 11:57:05 +0400 Subject: Http 1.1 chucking support In-Reply-To: <507E2698.2020009@rattyshack.ca> References: <20121016173547.4DFF95EC072@homiemail-a75.g.dreamhost.com> <507E2698.2020009@rattyshack.ca> Message-ID: <201210171157.05526.ne@vbart.ru> On Wednesday 17 October 2012 07:31:36 Duane wrote: > Thanks I will give that a try. I did not know that options was there :) > > when proxy_http_version 1.1 is enabled I am assuming that clients that > are 1.0 only are still supported. > This directive does not have anything to do with clients. Please, see the docs: http://nginx.org/r/proxy_http_version wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From trm.nagios at gmail.com Wed Oct 17 09:46:18 2012 From: trm.nagios at gmail.com (trm asn) Date: Wed, 17 Oct 2012 15:16:18 +0530 Subject: Condition in location for proxy_pass [ Re-post ] Message-ID: Dear List , I need to do a conditional proxy_pass , but it's not happening as expected . like set $target http://www.example.com/sso/url.ping?TargetUrl=http://www.example.com/home And that $target , I would like to check with $request_uri in below segment . 
location /sso/ { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; ---------- if ($target ~ $request_uri) { proxy_pass http://10.20.137.21; } else { return 403; } ------------ } Is it possible to achieve that inside a "location" with "proxy_pass"? -------------- next part -------------- An HTML attachment was scrubbed... URL: From sb at waeme.net Wed Oct 17 10:22:40 2012 From: sb at waeme.net (Sergey Budnevitch) Date: Wed, 17 Oct 2012 14:22:40 +0400 Subject: connect() to unix:/var/run/php-fpm/php.sock failed In-Reply-To: References: <30D075ED-B5F0-4A8C-9E37-396B19508D42@waeme.net> Message-ID: <3DF2381C-FCA0-47E4-9F4C-4B8E831BFE26@waeme.net> On 16 Oct 2012, at 20:56 , justin wrote: > Sergey, > > Just tried with 50 PHP-FPM workers, and performance was even worse: Well, if your application is not CPU, i/o etc bound, you have to find out what your php application waits for. In case of php+mysql it is advisable to enable the slow log in php-fpm and mysql. Sorry, but further php debugging you should do yourself - it is out of the scope of the mailing list. > > Same problem logged in /var/log/nginx/error.log: > > connect() to unix:/var/run/php-fpm/php.sock failed (11: Resource temporarily > unavailable) while connecting to upstream When 50 php workers serve an equal number of requests and another 128 (from listen.backlog) requests wait in the backlog, the 179th request will result in "(11: Resource temporarily unavailable) while connecting to upstream". From david.kostal at gmail.com Wed Oct 17 12:55:21 2012 From: david.kostal at gmail.com (David Kostal) Date: Wed, 17 Oct 2012 14:55:21 +0200 Subject: Patch: TPROXY support for listening sockets Message-ID: Hi, this patch is _not_ meant to transparently provide the IP of clients to the backends. It is meant for nginx to be able to listen on a non-local IP, as a replacement for (e.g.) NATing of incoming connections to a local port.
My setups use LVS with direct routing and real-servers without the public IPs, using NAT. However the problem I'm facing is IPv6 and one option is to use TPROXY sockets to listen to (possibly large number of) non-local IPs. Would you recommend some other solution or is this TPROXY patch fine? Thanks, David On Fri, Jun 15, 2012 at 7:56 AM, David Kostal wrote: > Hi all, > I just run into the need to have nginx support the Linux TPROXY > feature: there is not REDIRECT target for IPv6. As there is no support > for TPROXY in nginx I created a small patch for core & http modules > against 1.2.1. It's enabled by recompiling nginx with --with-tproxy > and activating it by adding "tproxy" as an additional argument to > listen. > > Unfortunately it is not possible to enable/disable tproxy behavior for > existing sockets during reload, only on startup or when reload adds > new listening sockets. This is due to fact that the setsockopt() call > must be done before bind(). > > Please have a look, so far it works for me but I did not do yet any > heavy testing and it's not production yet:) > > david.kostal at gmail.com > ----+ -- david.kostal at gmail.com ----+ -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-tproxy.patch Type: application/octet-stream Size: 5619 bytes Desc: not available URL: From andrew at andrewloe.com Wed Oct 17 17:33:15 2012 From: andrew at andrewloe.com (W. Andrew Loe III) Date: Wed, 17 Oct 2012 10:33:15 -0700 Subject: Dynamic Proxy Pass - is there a more elegant solution? Message-ID: I'm X-Accel-Redirecting URLs that look like: /AWSS3/bucket/key?auth=value Right now my location that handles these and requests from S3 looks like this: location ~ /AWSS3/(.*) { # Store our ETag value. set $rails_etag $upstream_http_etag; # Prevent Amazon from overwriting our Headers. 
proxy_hide_header Content-Type; proxy_hide_header ETag; # Hide Amazon Headers proxy_hide_header X-Amz-Id-2; proxy_hide_header X-Amz-Request-Id; # Set the HTTP Host header to S3. proxy_set_header Host 's3.amazonaws.com'; # Force Amazon to do the heavy lifting. proxy_max_temp_file_size 0; # Ensure tight timeouts, we'll retry the requests to a different backend in # the event of failures. proxy_connect_timeout 5; proxy_send_timeout 10; proxy_read_timeout 10; # Retry if Amazon freaks out. proxy_next_upstream error timeout http_500 http_502 http_503 http_504; # Ensure the requests are always gets. proxy_method GET; proxy_set_header Method 'GET'; proxy_set_header Content-Length ""; proxy_set_header Cookie ""; proxy_set_header Content-Type ""; # Clear any CloudFront headers. proxy_set_header X-Amz-Cf-Id ""; # We use the query string for Authorization, clear headers that the client # may have sent. proxy_set_header Authorization ""; # Resolver, for dynamically proxied requests. resolver 8.8.8.8; # Proxy to S3. set $s3 "s3-external-2.amazonaws.com"; proxy_pass https://$s3/$1$is_args$args; # Add back our own ETag. add_header ETag $rails_etag; internal; } Is there a way to avoid having to capture the path in the location block and reconstruct it with $is_args$args? I would like something more like: location /AWSS3/ { ... proxy_pass https://$s3/; } which does work when passing to an upstream block (https://s3/ with upstream s3 defined). I prefer this method as it re-resolves the DNS periodically. From agentzh at gmail.com Wed Oct 17 23:01:43 2012 From: agentzh at gmail.com (agentzh) Date: Wed, 17 Oct 2012 16:01:43 -0700 Subject: [ANN] ngx_openresty devel version 1.2.4.3 released In-Reply-To: References: Message-ID: Hello, folks! I am happy to announce the new development version of ngx_openresty, 1.2.4.3: http://openresty.org/#Download Special thanks go to all our contributors and users for helping make this happen! 
Below is the complete change log for this release, as compared to the last (devel) release, 1.2.4.1: * upgraded LuaJIT to 2.0.0 beta11. * made LuaRestyRedisLibrary 27% faster, LuaRestyMemcachedLibrary 22% faster, and LuaRestyMySQLLibrary 15% faster, all for simple test cases loaded by ab, tested on Linux x86_64. * all Lua APIs involved with I/O in LuaNginxModule are faster in general. * complete change log: * upgraded LuaRestyMemcachedLibrary to 0.09. * optimize: we now use Lua's own "table.concat()" to do string concatenation for all the memcached requests instead of relying on the cosocket API (on the C level) because calling the Lua C API is much slower especially when LuaJIT is in use. now for simple test cases loaded by "ab -k -c10", we get 11.3% overall performance boost. * upgraded LuaNginxModule to 0.7.2. * feature: now we can automatically detect the vendor-provided LuaJIT-2.0 package on Gentoo. thanks Il'ya V. Yesin for the patch. it is still recommended, however, to explicitly set the environments "LUAJIT_INC" and "LUAJIT_LIB". OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster and ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Have fun! -agentzh From ianevans at digitalhit.com Thu Oct 18 01:55:23 2012 From: ianevans at digitalhit.com (Ian Evans) Date: Wed, 17 Oct 2012 21:55:23 -0400 Subject: try_files rewrite drops other get variables In-Reply-To: <85347df532bc998c8a50e8f6ad9ac39c.squirrel@www.digitalhit.com> References: <85347df532bc998c8a50e8f6ad9ac39c.squirrel@www.digitalhit.com> Message-ID: <507F618B.7080409@digitalhit.com> On 16/10/2012 2:44 PM, Ian M. 
Evans wrote: > A couple of weeks ago in "Updating some old 'if' statements" > (http://forum.nginx.org/read.php?2,231164,231164#msg-231164) it was > suggested I use try_files instead of the 'if' I had. > > The old location did two things: It ran an extensionless file as a php > file and also checked for the existence of a statically cached file. > > Though the rewrite is successfully serving the static file and handling > the extensionless php fine, I didn't initially notice that other GET > variables like ?page are getting ignored. > > Here's the old 'if' location that passed on all variables just fine: > > old: > location ~ ^/galleries(/.*$|$) { > if (-f /usr/local/nginx/htdocs/pixcache$request_uri/index.html) { > expires 2h; > rewrite ^(.*)$ /pixcache$1/index.html last; > break; > } > rewrite ^/galleries(/.*$|$) /galleries.php?mypath=$1 last; > } > > and the current location with try_files > > location ~ ^/galleries(?P<mypath>/.*$|$) { > expires 2h; > try_files /pixcache$request_uri/index.html /galleries.php?mypath=$mypath; > } > > Those familiar with my posts know I suck at regex, but I'm assuming the > rewrite in the old location successfully took URLs like: > > /galleries/129/1/3?page=2 and passed them to php as: > /galleries.php?mypath=/129/1/3&page=2 while something in the new location > is dropping any additional get variables? > > The old location has been working successfully in production for several > years but I thought I should get rid of the evil if. > > The try_files version, as I said, works except for the dropping of > additional get variables in the URL.
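The behavior described here matches the documented semantics: a rewrite replacement gets the previous request arguments appended automatically (unless the replacement ends with "?"), while a try_files fallback URI is used exactly as written, so the query string has to be passed along explicitly. A hedged sketch of the location, reusing the paths from the message above:

```nginx
location ~ ^/galleries(?P<mypath>/.*$|$) {
    expires 2h;
    # The fallback URI is taken literally, so append $args by hand;
    # a request for /galleries/129/1/3?page=2 then reaches PHP with
    # mypath=/129/1/3 and page=2.
    try_files /pixcache$request_uri/index.html
              /galleries.php?mypath=$mypath&$args;
}
```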
From edho at myconan.net Thu Oct 18 01:59:24 2012 From: edho at myconan.net (Edho Arief) Date: Thu, 18 Oct 2012 08:59:24 +0700 Subject: try_files rewrite drops other get variables In-Reply-To: <507F618B.7080409@digitalhit.com> References: <85347df532bc998c8a50e8f6ad9ac39c.squirrel@www.digitalhit.com> <507F618B.7080409@digitalhit.com> Message-ID: On Thu, Oct 18, 2012 at 8:55 AM, Ian Evans wrote: > > I thought adding $args to the end of the try_files line would work but that > appeared to mess it up more. Is there any way to get the try_files version > to work like the old version? > > Been staring at this but I don't see how > rewrite ^/galleries(/.*$|$) /galleries.php?mypath=$1 last; passes the path > and additional GET variables, but the try_files version doesn't pass on the > additional variables. > >From nginx.org/r/rewrite: "If a replacement string includes the new request arguments, the previous request arguments are appended after them. If this is undesired, putting a question mark at the end of a replacement string avoids having them appended, for example:" From edho at myconan.net Thu Oct 18 02:03:00 2012 From: edho at myconan.net (Edho Arief) Date: Thu, 18 Oct 2012 09:03:00 +0700 Subject: try_files rewrite drops other get variables In-Reply-To: References: <85347df532bc998c8a50e8f6ad9ac39c.squirrel@www.digitalhit.com> <507F618B.7080409@digitalhit.com> Message-ID: On Thu, Oct 18, 2012 at 8:59 AM, Edho Arief wrote: > On Thu, Oct 18, 2012 at 8:55 AM, Ian Evans wrote: >> >> I thought adding $args to the end of the try_files line would work but that >> appeared to mess it up more. Is there any way to get the try_files version >> to work like the old version? >> >> Been staring at this but I don't see how >> rewrite ^/galleries(/.*$|$) /galleries.php?mypath=$1 last; passes the path >> and additional GET variables, but the try_files version doesn't pass on the >> additional variables. 
>> > > From nginx.org/r/rewrite: > > "If a replacement string includes the new request arguments, the > previous request arguments are appended after them. If this is > undesired, putting a question mark at the end of a replacement string > avoids having them appended, for example:" Additionally, I do indeed add $args for my try_files for wordpress: try_files $uri $uri/ /index.php?q=$uri&$args; From ianevans at digitalhit.com Thu Oct 18 02:21:08 2012 From: ianevans at digitalhit.com (Ian Evans) Date: Wed, 17 Oct 2012 22:21:08 -0400 Subject: try_files rewrite drops other get variables In-Reply-To: References: <85347df532bc998c8a50e8f6ad9ac39c.squirrel@www.digitalhit.com> <507F618B.7080409@digitalhit.com> Message-ID: <507F6794.6000903@digitalhit.com> On 17/10/2012 10:03 PM, Edho Arief wrote: > On Thu, Oct 18, 2012 at 8:59 AM, Edho Arief wrote: >> On Thu, Oct 18, 2012 at 8:55 AM, Ian Evans wrote: >>> >>> I thought adding $args to the end of the try_files line would work but that >>> appeared to mess it up more. Is there any way to get the try_files version >>> to work like the old version? >>> >>> Been staring at this but I don't see how >>> rewrite ^/galleries(/.*$|$) /galleries.php?mypath=$1 last; passes the path >>> and additional GET variables, but the try_files version doesn't pass on the >>> additional variables. >>> >> >> From nginx.org/r/rewrite: >> >> "If a replacement string includes the new request arguments, the >> previous request arguments are appended after them. If this is >> undesired, putting a question mark at the end of a replacement string >> avoids having them appended, for example:" > > Additionally, I do indeed add $args for my try_files for wordpress: > > try_files $uri $uri/ /index.php?q=$uri&$args; I changed the try_files to: try_files /pixcache$request_uri/index.html /galleries.php?mypath=$mypath&$args; and it worked. I had mistakenly added the $args without the '&' Will take it for a sail and make sure it keeps working. 
:-) From nginx-forum at nginx.us Thu Oct 18 09:17:27 2012 From: nginx-forum at nginx.us (karolis) Date: Thu, 18 Oct 2012 05:17:27 -0400 Subject: rewrite all locations to https except one Message-ID: Hi Everyone, i have this problem with rewrite. As subject says i want to rewrite all locations to https except one, that should remain http. But that one with http isn't redirecting properly. I'm using nginx 1.2.2 version. Here's my conf: server { client_max_body_size 500M; listen 80; server_name alis.am.lt; #rewrite ^(.*) https://$host$1 permanent; #rewrite ^ https://$server_name$request_uri? permanent; location / { rewrite ^(.*) https://$host$1 permanent; proxy_pass http://www_serveriai_80; proxy_set_header Host $http_host; } location /SomeService { rewrite ^(.*) http://$host$1 permanent; proxy_method POST; proxy_pass http://10.255.6.120:8080/SomeService; proxy_set_header Host $http_host; #proxy_redirect default; #proxy_set_header Host $host; #proxy_set_header X-Real-IP $remote_addr; #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } I really appreciate any help. Thanks! Regards, Karolis Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231948,231948#msg-231948 From mdounin at mdounin.ru Thu Oct 18 09:22:26 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Oct 2012 13:22:26 +0400 Subject: rewrite all locations to https except one In-Reply-To: References: Message-ID: <20121018092225.GU40452@mdounin.ru> Hello! On Thu, Oct 18, 2012 at 05:17:27AM -0400, karolis wrote: > Hi Everyone, > > i have this problem with rewrite. As subject says i want to rewrite all > locations to https except one, that should remain http. But that one with > http isn't redirecting properly. I'm using nginx 1.2.2 version. Here's my > conf: > > server { > client_max_body_size 500M; > listen 80; > server_name alis.am.lt; > #rewrite ^(.*) https://$host$1 permanent; > #rewrite ^ https://$server_name$request_uri? 
permanent; > > location / { > rewrite ^(.*) https://$host$1 permanent; > proxy_pass http://www_serveriai_80; > proxy_set_header Host $http_host; > } Just a side note: it doesn't make sense to write proxy_pass here. Using location / { rewrite ^(.*) https://$host$1 permanent; } would be enough. Or, better, location / { return 301 https://$host$request_uri; } > location /SomeService { > rewrite ^(.*) http://$host$1 permanent; This will create an infinite loop, as you try to redirect back to the same address. Just remove this rewrite. > proxy_method POST; > proxy_pass http://10.255.6.120:8080/SomeService; > proxy_set_header Host $http_host; > #proxy_redirect default; > #proxy_set_header Host $host; > #proxy_set_header X-Real-IP $remote_addr; > #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > } > > > I really appreciate any help. Thanks! -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Thu Oct 18 10:37:26 2012 From: nginx-forum at nginx.us (jspy124) Date: Thu, 18 Oct 2012 06:37:26 -0400 Subject: Can I use nginx as a normal proxy (not reverse proxy) to an internet website? Message-ID: <324714062561b8a846ea2c7ae07e21b4.NginxMailingListEnglish@forum.nginx.org> Hello All, I'm a rookie with nginx configuration and would be glad if someone could help me. Can I use nginx as a normal proxy (not a reverse proxy) for an internal application to the internet? For example: I have an application "XYZ" located in my internal network. This application needs secure access to an official website on the internet over SSL (https://www.example.com). I would connect this application to the official website over an nginx proxy located in the DMZ. The nginx proxy should listen on port 7443 and forward the request to the official website (https://www.example.com). My questions: Can I use the nginx proxy in the way described above? Can anyone post me an example configuration?
Is it necessary to configure an SSL certificate on the nginx proxy and set the ssl parameter to "on"? Thanks, everyone, for supporting me! Best regards Jürgen Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231964,231964#msg-231964 From r at roze.lv Thu Oct 18 10:54:45 2012 From: r at roze.lv (Reinis Rozitis) Date: Thu, 18 Oct 2012 13:54:45 +0300 Subject: connect() to unix:/var/run/php-fpm/php.sock failed In-Reply-To: <3DF2381C-FCA0-47E4-9F4C-4B8E831BFE26@waeme.net> References: <30D075ED-B5F0-4A8C-9E37-396B19508D42@waeme.net> <3DF2381C-FCA0-47E4-9F4C-4B8E831BFE26@waeme.net> Message-ID: > connect() to unix:/var/run/php-fpm/php.sock failed (11: Resource > temporarily unavailable) while connecting to upstream I would suggest you try switching to TCP - my own experience shows that communication between nginx and PHP via a unix socket (despite its small speed gain) is somewhat unreliable, especially under load. You can tweak the backlog and net.core.somaxconn, but for me (at least in the past) even with high values it still produced some percentage of faulty connections. rr From nginx-forum at nginx.us Thu Oct 18 11:24:14 2012 From: nginx-forum at nginx.us (kustodian) Date: Thu, 18 Oct 2012 07:24:14 -0400 Subject: Nginx Location Matching Documentation Update Message-ID: <1464f8b9a060b6a3d26695fb0ef4df75.NginxMailingListEnglish@forum.nginx.org> I would recommend that the documentation for location matching at http://wiki.nginx.org/HttpCoreModule#location is updated, specifically the example. It should contain conventional string locations besides the root '/' of the site: location = / { # matches the query / only. [ configuration A ] } location / { # matches any query, since all queries begin with /, but regular # expressions and any longer conventional blocks will be # matched first. [ configuration B ] } location ^~ /images/ { # matches any query beginning with /images/ and halts searching, # so regular expressions will not be checked.
[ configuration C ] } location /media/ { # matches any query beginning with /media/ and continues searching, # so regular expressions will be checked. This will be matched only if # regular expressions don't find a match [ configuration D ] } location ~* \.(gif|jpg|jpeg)$ { # matches any request ending in gif, jpg, or jpeg. However, all # requests to the /images/ directory will be handled by # Configuration C. [ configuration E ] } I don't know whom I could ask about this, but I think it's important that all 4 steps explained in the documentation are covered by the example. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231966,231966#msg-231966 From nginx-forum at nginx.us Thu Oct 18 11:32:03 2012 From: nginx-forum at nginx.us (karolis) Date: Thu, 18 Oct 2012 07:32:03 -0400 Subject: rewrite all locations to https except one In-Reply-To: <20121018092225.GU40452@mdounin.ru> References: <20121018092225.GU40452@mdounin.ru> Message-ID: <649258c461275d131591fd8e6154053f.NginxMailingListEnglish@forum.nginx.org> Thanks, Maxim, for your help. But then where should I write this proxy_pass: proxy_pass http://www_serveriai_80; And still, going further into my site, HTTPS drops off and I'm left with only HTTP. How do I stay permanently on HTTPS, while having this location: location /SomeService { proxy_method POST; proxy_pass http://10.255.6.120:8080/SomeService; proxy_set_header Host $http_host; stay always on HTTP? Thanks Regards, Karolis Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231948,231967#msg-231967 From mdounin at mdounin.ru Thu Oct 18 12:02:33 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Oct 2012 16:02:33 +0400 Subject: rewrite all locations to https except one In-Reply-To: <649258c461275d131591fd8e6154053f.NginxMailingListEnglish@forum.nginx.org> References: <20121018092225.GU40452@mdounin.ru> <649258c461275d131591fd8e6154053f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121018120233.GX40452@mdounin.ru> Hello!
On Thu, Oct 18, 2012 at 07:32:03AM -0400, karolis wrote: > Thanks Maxim for your help. But then where to write this proxy_pass: > > proxy_pass http://www_serveriai_80; > > And still, going further in my site, https drops off and i'm being left only > with http. How to permanently stay on https, but in this location: > > location /SomeService { > proxy_method POST; > proxy_pass http://10.255.6.120:8080/SomeService; > proxy_set_header Host $http_host; > > to stay always on http. > Thanks The server{} block you provided is http-only, not https. On the other hand, what you ask about needs correct configuration of two server blocks, one for http, and another one for https. Here is an example: server { listen 80; server_name www.example.com; location / { # we are in http server, but want https for normal # requests - redirect to https return 301 https://$host$request_uri; } location /http_only_service { ... do real work here ... } } server { listen 443 ssl; server_name www.example.com; ssl_certificate ... ssl_certificate_key ... location / { ... do real work here ... } location /http_only_service { # this is http only service, but we are in https # server - redirect to http return 301 http://$host$request_uri; } } Hope this helps. -- Maxim Dounin http://nginx.com/support.html From ne at vbart.ru Thu Oct 18 12:07:50 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Thu, 18 Oct 2012 16:07:50 +0400 Subject: Nginx Location Matching Documentation Update In-Reply-To: <1464f8b9a060b6a3d26695fb0ef4df75.NginxMailingListEnglish@forum.nginx.org> References: <1464f8b9a060b6a3d26695fb0ef4df75.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201210181607.50338.ne@vbart.ru> On Thursday 18 October 2012 15:24:14 kustodian wrote: > I would recommend that the documentation for location matching is updated > http://wiki.nginx.org/HttpCoreModule#location, explicitaly the example. > [...] You're referring to the community wiki, not documentation. Feel free to update it yourself. 
The documentation is here: http://nginx.org/en/docs/ wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From mdounin at mdounin.ru Thu Oct 18 12:33:54 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Oct 2012 16:33:54 +0400 Subject: Nginx Location Matching Documentation Update In-Reply-To: <201210181607.50338.ne@vbart.ru> References: <1464f8b9a060b6a3d26695fb0ef4df75.NginxMailingListEnglish@forum.nginx.org> <201210181607.50338.ne@vbart.ru> Message-ID: <20121018123354.GZ40452@mdounin.ru> Hello! On Thu, Oct 18, 2012 at 04:07:50PM +0400, Valentin V. Bartenev wrote: > On Thursday 18 October 2012 15:24:14 kustodian wrote: > > I would recommend that the documentation for location matching is updated > > http://wiki.nginx.org/HttpCoreModule#location, explicitaly the example. > > [...] > > You're referring to the community wiki, not documentation. Feel free to update > it yourself. > > The documentation is here: http://nginx.org/en/docs/ This wiki article originated as a translation of the official documentation in Russian (which is now available in English, too, see http://nginx.org/r/location), and I have always asked people not to deviate from the official docs but to keep it as a translation, keeping comments/suggestions/additional notes separate from the article. And I actually think the suggested change would be good for understanding, and it would be good to add it to the official documentation. Ruslan? -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Thu Oct 18 12:41:22 2012 From: nginx-forum at nginx.us (vinhomn) Date: Thu, 18 Oct 2012 08:41:22 -0400 Subject: Logging 'return 404' to error log? Message-ID: <8be50b2b627e306225bbbf1788a882a8.NginxMailingListEnglish@forum.nginx.org> I asked this in the How to... forum, but there doesn't seem to be much activity there, so I'm trying to post here... Does anyone know? Hi, I have a question as per the subject line.
I have an nginx/1.2.1 installation, with an nginx configuration like this: location ~ ^/abc/ { if (!-e $request_filename) { return 404; break; } proxy_pass http://127.0.0.1:8080; include /etc/nginx/proxy.conf; } The config runs fine, but all the 404 entries from this "return 404;" go to the access log. I want them to go to the error log, so I tried using log_not_found but got a syntax error. Do you have an idea how to do this? Sorry for my poor English! Thanks, vinhomn Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231972,231972#msg-231972 From nginx-forum at nginx.us Thu Oct 18 13:21:58 2012 From: nginx-forum at nginx.us (hsrmmr) Date: Thu, 18 Oct 2012 09:21:58 -0400 Subject: Nginx php-fpm setp fastcgi_pass_header not being set Message-ID: I am using an Nginx + php-fpm setup. I noticed that I can set fastcgi_param; however, fastcgi_pass_header is not being set. I tried to print $_SERVER in PHP, which contains only the fastcgi_param values and not the fastcgi_pass_header ones. My queries are: 1. Is there any additional configuration parameter needed for fastcgi_pass_header? I have set 'underscores_in_headers on;', although the headers I am trying to set don't have underscores. I tried to manipulate client request headers (through TamperData) and it works fine, as it is on by default. 2. Is there any other way (apart from $_SERVER) to check whether the headers are set properly when passing to php-fpm? Any help is appreciated. I have the following configuration: fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; ..... .....
fastcgi_pass_header Authorization; fastcgi_pass_header Host; fastcgi_pass_header X-Real-IP; fastcgi_pass_header X-Forwarded-For; location block looks like following: location ~ \.php(.*)$ { try_files $uri =404; root /home/myntradomain/public_html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231973,231973#msg-231973 From ne at vbart.ru Thu Oct 18 14:36:55 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Thu, 18 Oct 2012 18:36:55 +0400 Subject: Nginx php-fpm setp fastcgi_pass_header not being set In-Reply-To: References: Message-ID: <201210181836.55359.ne@vbart.ru> On Thursday 18 October 2012 17:21:58 hsrmmr wrote: > I am using Nginx + php-fpm setup > > I noticed that I could set fastcgi_param however fastcgi_pass_header is > not being set. I tried to print $_SERVER in php which has only > fastcgi_params and not fastcgi_pass_headers. > > My queries are: > 1. Is there any additional configuration parameter needed for > fastcgi_pass_headers? I have set 'underscores_in_headers on;', however > headers I am trying to set don't have underscores. I tried to manipulate > client request headers (through TamperData) it works fine as by default it > is on. > > 2. Is there any other way to check (apart from _SERVER) to check if headers > are set properly while passing to php-fpm? > > Any help is appreciated. It seems that you don't understand what fastcgi_pass_header does. Please, read the documentation: http://nginx.org/r/fastcgi_pass_header wbr, Valentin V. 
Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From mdounin at mdounin.ru Thu Oct 18 14:52:00 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Oct 2012 18:52:00 +0400 Subject: Getting waitpid error In-Reply-To: <8ff586f6dae14aa469afb4e5fe962a15.NginxMailingListEnglish@forum.nginx.org> References: <8ff586f6dae14aa469afb4e5fe962a15.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121018145200.GC40452@mdounin.ru> Hello! On Fri, Oct 12, 2012 at 06:38:55AM -0400, goelvivek wrote: > I am using Amazon Linux. > uname -a gives followng result > Linux 3.2.22-35.60.amzn1.x86_64 #1 SMP Thu Jul 5 14:07:24 UTC 2012 > x86_64 x86_64 x86_64 GNU/Linux Thanks for your report, I've committed the fix: http://trac.nginx.org/nginx/changeset/4890/nginx -- Maxim Dounin http://nginx.com/support.html From lists-nginx at swsystem.co.uk Thu Oct 18 18:49:36 2012 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Thu, 18 Oct 2012 19:49:36 +0100 Subject: gzip client & proxy with sub_filter sanity check. Message-ID: <50804F40.1050208@swsystem.co.uk> After some playing with debian's current version of nginx I've reached the conclusion it's not possible without the gunzip module. Below is my current test configuration, in short I'm trying to reduce the server bandwidth by having proxy upstream requests and client responses gzipped, however I need to rewrite some content in-line. Is this the best way to go about this or am I over complicating it? 
server { listen 80; listen [::]:80 default_server ipv6only=on; server_name localhost; gzip on; location / { proxy_pass http://127.0.0.1:8000; proxy_set_header Accept-Encoding ""; } } server { listen 127.0.0.1:8000; server_name site1; location / { proxy_pass http://127.0.0.1:8001; sub_filter foo bar; sub_filter_once off; proxy_set_header Accept-Encoding ""; } } server { listen 127.0.0.1:8001; server_name site2; gunzip on; location / { proxy_pass http://www.example.org; proxy_set_header Accept-Encoding gzip; } } nginx -V: nginx version: nginx/1.3.7 TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx \ --conf-path=/etc/nginx/nginx.conf \ --error-log-path=/var/log/nginx/error.log \ --http-client-body-temp-path=/var/lib/nginx/body \ --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \ --http-log-path=/var/log/nginx/access.log \ --http-proxy-temp-path=/var/lib/nginx/proxy \ --http-scgi-temp-path=/var/lib/nginx/scgi \ --http-uwsgi-temp-path=/var/lib/nginx/uwsgi \ --lock-path=/var/lock/nginx.lock \ --pid-path=/var/run/nginx.pid \ --with-pcre-jit \ --with-debug \ --with-file-aio \ --with-http_addition_module \ --with-http_dav_module \ --with-http_gunzip_module \ --with-http_flv_module \ --with-http_geoip_module \ --with-http_gzip_static_module \ --with-http_image_filter_module \ --with-http_mp4_module \ --with-http_perl_module \ --with-http_random_index_module \ --with-http_realip_module \ --with-http_secure_link_module \ --with-http_stub_status_module \ --with-http_ssl_module \ --with-http_sub_module \ --with-http_xslt_module \ --with-ipv6 \ --with-sha1=/usr/include/openssl \ --with-md5=/usr/include/openssl \ --with-mail \ --with-mail_ssl_module \ --add-module=/root/build/nginx-1.3.7/debian/modules/nginx-auth-pam \ --add-module=/root/build/nginx-1.3.7/debian/modules/chunkin-nginx-module \ --add-module=/root/build/nginx-1.3.7/debian/modules/headers-more-nginx-module \ --add-module=/root/build/nginx-1.3.7/debian/modules/nginx-development-kit \ 
--add-module=/root/build/nginx-1.3.7/debian/modules/nginx-echo \ --add-module=/root/build/nginx-1.3.7/debian/modules/nginx-push-stream-module \ --add-module=/root/build/nginx-1.3.7/debian/modules/nginx-lua \ --add-module=/root/build/nginx-1.3.7/debian/modules/nginx-upload-module\ --add-module=/root/build/nginx-1.3.7/debian/modules/nginx-upload-progress \ --add-module=/root/build/nginx-1.3.7/debian/modules/nginx-upstream-fair\ --add-module=/root/build/nginx-1.3.7/debian/modules/nginx-dav-ext-module --add-module=/root/build/nginx-1.3.7/debian/modules/nginx-syslog \ --add-module=/root/build/nginx-1.3.7/debian/modules/nginx-cache-purge From mdounin at mdounin.ru Thu Oct 18 19:00:35 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Oct 2012 23:00:35 +0400 Subject: gzip client & proxy with sub_filter sanity check. In-Reply-To: <50804F40.1050208@swsystem.co.uk> References: <50804F40.1050208@swsystem.co.uk> Message-ID: <20121018190035.GE40452@mdounin.ru> Hello! On Thu, Oct 18, 2012 at 07:49:36PM +0100, Steve Wilson wrote: > After some playing with debian's current version of nginx I've reached > the conclusion it's not possible without the gunzip module. > > Below is my current test configuration, in short I'm trying to reduce > the server bandwidth by having proxy upstream requests and client > responses gzipped, however I need to rewrite some content in-line. > > Is this the best way to go about this or am I over complicating it? > > server { > listen 80; > listen [::]:80 default_server ipv6only=on; > server_name localhost; > gzip on; > location / { > proxy_pass http://127.0.0.1:8000; > proxy_set_header Accept-Encoding ""; > } > } > > server { > listen 127.0.0.1:8000; > server_name site1; > location / { > proxy_pass http://127.0.0.1:8001; > sub_filter foo bar; > sub_filter_once off; > proxy_set_header Accept-Encoding ""; > } > } These two server{} blocks may be safely joined together. 
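Merged, a sketch might look like this (assembled from the two quoted blocks above; untested - the upstream address and sub_filter strings are the example's placeholders):

```nginx
server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;
    server_name localhost;
    gzip on;                  # compress responses to clients

    location / {
        # ask the upstream for plain (uncompressed) content so that
        # sub_filter can rewrite it before gzip recompresses it
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Accept-Encoding "";
        sub_filter foo bar;
        sub_filter_once off;
    }
}
```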
> server { > listen 127.0.0.1:8001; > server_name site2; > gunzip on; > location / { > proxy_pass http://www.example.org; > proxy_set_header Accept-Encoding gzip; > } > } And probably we want to implement something like gunzip always; to allow such processing in a single server block. [...] > --with-sha1=/usr/include/openssl \ > --with-md5=/usr/include/openssl \ Just a side note: this is incorrect. As per ./configure help: --with-md5=DIR set path to md5 library sources --with-sha1=DIR set path to sha1 library sources Obviously you don't have any sources to build in /usr/include/openssl. Just remove these configure arguments. -- Maxim Dounin http://nginx.com/support.html From lists-nginx at swsystem.co.uk Thu Oct 18 19:18:01 2012 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Thu, 18 Oct 2012 20:18:01 +0100 Subject: gzip client & proxy with sub_filter sanity check. In-Reply-To: <20121018190035.GE40452@mdounin.ru> References: <50804F40.1050208@swsystem.co.uk> <20121018190035.GE40452@mdounin.ru> Message-ID: <508055E9.4040006@swsystem.co.uk> On 18/10/2012 20:00, Maxim Dounin wrote: > Hello! > > On Thu, Oct 18, 2012 at 07:49:36PM +0100, Steve Wilson wrote: > >> After some playing with debian's current version of nginx I've reached >> the conclusion it's not possible without the gunzip module. >> >> Below is my current test configuration, in short I'm trying to reduce >> the server bandwidth by having proxy upstream requests and client >> responses gzipped, however I need to rewrite some content in-line. >> >> Is this the best way to go about this or am I over complicating it? 
>> >> server { >> listen 80; >> listen [::]:80 default_server ipv6only=on; >> server_name localhost; >> gzip on; >> location / { >> proxy_pass http://127.0.0.1:8000; >> proxy_set_header Accept-Encoding ""; >> } >> } >> >> server { >> listen 127.0.0.1:8000; >> server_name site1; >> location / { >> proxy_pass http://127.0.0.1:8001; >> sub_filter foo bar; >> sub_filter_once off; >> proxy_set_header Accept-Encoding ""; >> } >> } > > These two server{} blocks may be safely joined together. Ah yeah, looking at it now I can see this as the content for sub_filter is already gunzipped. >> server { >> listen 127.0.0.1:8001; >> server_name site2; >> gunzip on; >> location / { >> proxy_pass http://www.example.org; >> proxy_set_header Accept-Encoding gzip; >> } >> } > > And probably we want to implement something like > > gunzip always; > > to allow such processing in a single server block. ;) > [...] > >> --with-sha1=/usr/include/openssl \ >> --with-md5=/usr/include/openssl \ > > Just a side note: this is incorrect. As per ./configure help: > > --with-md5=DIR set path to md5 library sources > --with-sha1=DIR set path to sha1 library sources > > Obviously you don't have any sources to build in > /usr/include/openssl. Just remove these configure arguments. > > I should have perhaps mentioned that I use the nginx-extras package from debian in production to get the configure line and just added in the gunzip flag myself for testing. Having read through some of the enabled options I'll probably end up creating our own nginx build to remove some unwanted options. Steve. From nginx-forum at nginx.us Thu Oct 18 19:28:15 2012 From: nginx-forum at nginx.us (mrtn) Date: Thu, 18 Oct 2012 15:28:15 -0400 Subject: Questions about proxy_pass and internal directives Message-ID: Hello, My question is two-part: 1. 
given the following setup: root /home/www/mysite/static; location /foo/bar/ { proxy_pass http://127.0.0.1:8080; proxy_redirect off; } Why does a request for "mysite.com/foo/bar/sth.html" not get proxied to the server at "127.0.0.1:8080"? Instead, the file "/home/www/mysite/static/foo/bar/sth.html" is served. 2. Is there a way to use the 'internal' directive together with 'proxy_pass' for the same location? e.g. root /home/www/mysite/static; location /foo/bar/ { proxy_pass http://127.0.0.1:8080; proxy_redirect off; internal; } I've tried the above, but it seems the 'internal' directive takes precedence and just returns 404 immediately. What I want to achieve is to block direct access to files under "/home/www/mysite/static/foo/bar", and proxy all requests to an application server which will decide the access. Any suggestions? Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231998,231998#msg-231998 From nginx-forum at nginx.us Thu Oct 18 19:54:43 2012 From: nginx-forum at nginx.us (mrtn) Date: Thu, 18 Oct 2012 15:54:43 -0400 Subject: Questions about proxy_pass and internal directives In-Reply-To: References: Message-ID: A correction to my earlier post: the request in my first question above should be for "mysite.com/foo/bar/sth/sth.html", and there is a corresponding file on the filesystem: "/home/www/mysite/static/foo/bar/sth/sth.html". Just tried something different, with this config: root /home/www/mysite/static; location /foo/bar/ { internal; proxy_pass http://127.0.0.1:8080; proxy_redirect off; } A request to "mysite.com/foo/bar/sth" will return 404, and I can see that the request does not even get proxied to the application server. In addition, this access is not logged in the error log but in the access log of nginx. However, a request to "mysite.com/foo/bar/sth/sth.html" is served fine, despite using the "internal" directive.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231998,231999#msg-231999 From ne at vbart.ru Thu Oct 18 20:23:30 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Fri, 19 Oct 2012 00:23:30 +0400 Subject: Questions about proxy_pass and internal directives In-Reply-To: References: Message-ID: <201210190023.30306.ne@vbart.ru> On Thursday 18 October 2012 23:54:43 mrtn wrote: > A correction to my earlier post: the request in my first question above > should be for "mysite.com/foo/bar/sth/sth.html", and there is a > corresponding file on the filesystem: > "/home/www/mysite/static/foo/bar/sth/sth.html". > > Just tried something different, with this config: > > root /home/www/mysite/static; > > location /foo/bar/ { > internal; > proxy_pass http://127.0.0.1:8080; > proxy_redirect off; > } > > request to "mysite.com/foo/bar/sth" will return 404, and I can see that the > request does not even get proxied to the application server. In addition, > this access is not logged in error log but in the access log of nginx. > However, request to "mysite.com/foo/bar/sth/sth.html" is served fine, > despite of using "internal" directive. > Is this the only one "location" that you have in your "server" block? wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Oct 18 20:32:39 2012 From: nginx-forum at nginx.us (mrtn) Date: Thu, 18 Oct 2012 16:32:39 -0400 Subject: Questions about proxy_pass and internal directives In-Reply-To: <201210190023.30306.ne@vbart.ru> References: <201210190023.30306.ne@vbart.ru> Message-ID: <47e2fb9ad50379d0e506a8a588ec5b80.NginxMailingListEnglish@forum.nginx.org> Hello Valentin, No, it is not. I have a few other location blocks, but none of them should be matched to "/foo/bar". 
For example, I have: location /foo/a/ location /foo/b/ location /foo/c/ also, location ~* (\.jpg|\.png|\.css|\.js|\.html)$ { valid_referers none blocked www.mysite.com static.mysite.com mysite.com; if ($invalid_referer) { return 405; } } Is there anything particular that will interfere with the "/foo/bar" location block I should look for? Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231998,232001#msg-232001 From r at roze.lv Thu Oct 18 20:54:34 2012 From: r at roze.lv (Reinis Rozitis) Date: Thu, 18 Oct 2012 23:54:34 +0300 Subject: Questions about proxy_pass and internal directives In-Reply-To: <47e2fb9ad50379d0e506a8a588ec5b80.NginxMailingListEnglish@forum.nginx.org> References: <201210190023.30306.ne@vbart.ru> <47e2fb9ad50379d0e506a8a588ec5b80.NginxMailingListEnglish@forum.nginx.org> Message-ID: <52A01713F2E94DF591B1EA694D5155E3@NeiRoze> > also, > location ~* (\.jpg|\.png|\.css|\.js|\.html)$ { > valid_referers none blocked www.mysite.com > static.mysite.com Nginx uses only one location, and since your request "mysite.com/foo/bar/sth.html" is matched by this regexp, that is the location nginx uses - so it is not really proxy_passed. http://nginx.org/en/docs/http/ngx_http_core_module.html#location - see the order. rr From nginx-forum at nginx.us Thu Oct 18 22:18:18 2012 From: nginx-forum at nginx.us (mrtn) Date: Thu, 18 Oct 2012 18:18:18 -0400 Subject: Questions about proxy_pass and internal directives In-Reply-To: <52A01713F2E94DF591B1EA694D5155E3@NeiRoze> References: <52A01713F2E94DF591B1EA694D5155E3@NeiRoze> Message-ID: <6728cca45fb1c495da40c58c4f5a831d.NginxMailingListEnglish@forum.nginx.org> Hello Reinis, Thanks for pointing that out. Just to make sure I understand the doc correctly for my case. Given: location /foo/bar/ (a directive with conventional strings) and location ~* (\.jpg|\.png|\.css|\.js|\.html)$ (a regular expression).
The /foo/bar/ location is actually checked and matched first, however, since I don't have a '^~' prefix in front of /foo/bar/, the search continues with regex match and matches with location ~* (\.jpg|\.png|\.css|\.js|\.html)$. In the end, it is the regex location directive that is used by nginx. So to correct the problem, I simply need to add a prefix '^~' in front of /foo/bar/ to stop the search once matched. Please correct me if I get anything wrong. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231998,232005#msg-232005 From nginx-forum at nginx.us Thu Oct 18 22:48:14 2012 From: nginx-forum at nginx.us (mrtn) Date: Thu, 18 Oct 2012 18:48:14 -0400 Subject: Questions about proxy_pass and internal directives In-Reply-To: <52A01713F2E94DF591B1EA694D5155E3@NeiRoze> References: <52A01713F2E94DF591B1EA694D5155E3@NeiRoze> Message-ID: <0263536da2332f0187a74ddbfd180fb0.NginxMailingListEnglish@forum.nginx.org> hmm, so adding '^~' to the front of location /foo/bar/ makes "internal" directive work correctly. All direct access to "/foo/bar/sth/sth.html" are blocked with 404 now. However, the proxy_pass inside '/foo/bar/' location still doesn't work. I even put debugging echo inside all the location blocks, and only the /foo/bar one appears in the log file. But there is still no request proxied to the other server. In the debugging log, I see: 2012/10/18 18:27:57 [debug] 6983#0: *3 http script if 2012/10/18 18:27:57 [debug] 6983#0: *3 http script if: false 2012/10/18 18:27:57 [debug] 6983#0: *3 test location: "/game/play/" 2012/10/18 18:27:57 [debug] 6983#0: *3 http finalize request: 404, "/foo/bar/test-sth?" a:1, c:1 2012/10/18 18:27:57 [debug] 6983#0: *3 http special response: 404, "/foo/bar/test-sth?" 2012/10/18 18:27:57 [debug] 6983#0: *3 internal redirect: "/404.html?" 2012/10/18 18:27:57 [debug] 6983#0: *3 rewrite phase: 1 Then I get 404 in the requesting browser obviously. Any other guess? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231998,232006#msg-232006 From r at roze.lv Thu Oct 18 23:37:25 2012 From: r at roze.lv (Reinis Rozitis) Date: Fri, 19 Oct 2012 02:37:25 +0300 Subject: Questions about proxy_pass and internal directives In-Reply-To: <0263536da2332f0187a74ddbfd180fb0.NginxMailingListEnglish@forum.nginx.org> References: <52A01713F2E94DF591B1EA694D5155E3@NeiRoze> <0263536da2332f0187a74ddbfd180fb0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7B1B8871E2924FBC89FF4168083755D1@NeiRoze> > hmm, so adding '^~' to the front of location /foo/bar/ makes "internal" > directive work correctly. All direct access to "/foo/bar/sth/sth.html" are > blocked with 404 now. > However, the proxy_pass inside '/foo/bar/' location still doesn't work. You are using 'internal' in a wrong way (at least judging from your configuration excerpts). If you read the documentation http://nginx.org/en/docs/http/ngx_http_core_module.html#internal you should see that internal locations can't be accessed directly from client/browser but need some sort of _internal_ redirect. If there is a need for a backend application to check for permissions but serve the file from nginx (while the same time denying direct access) one way to do it is making the backend application to send 'X-Accel-Redirect' header ( some examples: http://wiki.nginx.org/XSendfile ). You can also try auth_request_module by Maxim Dounin ( http://mdounin.ru/hg/ngx_http_auth_request_module/file/a29d74804ff1/README / http://forum.nginx.org/read.php?2,58047,58047#msg-58047 ) .. 
or access_by_lua from nginx_lua module by agentzh ( https://github.com/chaoslawful/lua-nginx-module ) rr From nginx-forum at nginx.us Fri Oct 19 00:08:21 2012 From: nginx-forum at nginx.us (mrtn) Date: Thu, 18 Oct 2012 20:08:21 -0400 Subject: Questions about proxy_pass and internal directives In-Reply-To: <7B1B8871E2924FBC89FF4168083755D1@NeiRoze> References: <7B1B8871E2924FBC89FF4168083755D1@NeiRoze> Message-ID: <19979b56214c909853dd69cbeed10184.NginxMailingListEnglish@forum.nginx.org> Reinis Rozitis Wrote: ------------------------------------------------------- > You are using 'internal' in a wrong way (at least judging from your > configuration excerpts). > > If you read the documentation > http://nginx.org/en/docs/http/ngx_http_core_module.html#internal you > should > see that internal locations can't be accessed directly from > client/browser > but need some sort of _internal_ redirect. > > > > If there is a need for a backend application to check for permissions > but > serve the file from nginx (while the same time denying direct access) > one > way to do it is making the backend application to send > 'X-Accel-Redirect' > header ( some examples: http://wiki.nginx.org/XSendfile ). That's exactly what I am doing. location ^~ /foo/bar/ { internal; proxy_pass http://127.0.0.1:8080; proxy_redirect off; } I use "internal" directive to block direct access to anything "/foo/bar/,,,", which seems to be what nginx is doing. At the same time, I proxy_pass the request to the backend application server to check for permissions. If success, the backend server sends a 'X-Accel-Redirect' header back to nginx to serve the file. Now the problem is proxy_pass does not seem to work, as no request is being proxied to the backend server. I wondered if this is because I put it together with "internal" directive inside the location block, but from here: http://wiki.nginx.org/X-accel, it seems that this combination should be fine, as it is given as an example. 
I'm scratching my head now... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231998,232008#msg-232008 From nginx-forum at nginx.us Fri Oct 19 00:38:14 2012 From: nginx-forum at nginx.us (mrtn) Date: Thu, 18 Oct 2012 20:38:14 -0400 Subject: Questions about proxy_pass and internal directives In-Reply-To: <7B1B8871E2924FBC89FF4168083755D1@NeiRoze> References: <7B1B8871E2924FBC89FF4168083755D1@NeiRoze> Message-ID: On a second thought, I think I get what you mean by "internal" directive. It will blocks ALL external requests from browsers, thus my browser request to "/foo/bar/sth" is block immediately and 404 is returned, without doing any proxy_pass. I may need to rethink my design here. Ideally, I want users who request "/foo/bar/sth" in their browsers get served by nginx with the file "/foo/bar/sth/sth.html", while letting the backend application server control the access to the file. I may need to rethink the design here. Thanks for the pointers. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231998,232009#msg-232009 From ne at vbart.ru Fri Oct 19 00:38:46 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Fri, 19 Oct 2012 04:38:46 +0400 Subject: Questions about proxy_pass and internal directives In-Reply-To: <19979b56214c909853dd69cbeed10184.NginxMailingListEnglish@forum.nginx.org> References: <7B1B8871E2924FBC89FF4168083755D1@NeiRoze> <19979b56214c909853dd69cbeed10184.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201210190438.47257.ne@vbart.ru> On Friday 19 October 2012 04:08:21 mrtn wrote: > Reinis Rozitis Wrote: > ------------------------------------------------------- > > > You are using 'internal' in a wrong way (at least judging from your > > configuration excerpts). > > > > If you read the documentation > > http://nginx.org/en/docs/http/ngx_http_core_module.html#internal you > > should > > see that internal locations can't be accessed directly from > > client/browser > > but need some sort of _internal_ redirect. 
> > > > > > > > If there is a need for a backend application to check for permissions > > but > > serve the file from nginx (while the same time denying direct access) > > one > > way to do it is making the backend application to send > > 'X-Accel-Redirect' > > header ( some examples: http://wiki.nginx.org/XSendfile ). > > That's exactly what I am doing. > > location ^~ /foo/bar/ { > internal; > proxy_pass http://127.0.0.1:8080; > proxy_redirect off; > } > > I use "internal" directive to block direct access to anything > "/foo/bar/,,,", which seems to be what nginx is doing. At the same time, I > proxy_pass the request to the backend application server to check for > permissions. If success, the backend server sends a 'X-Accel-Redirect' > header back to nginx to serve the file. > [...] It seems you misunderstand the "location" directive. http://nginx.org/en/docs/http/ngx_http_core_module.html#location "Sets a configuration based on a request URI.", that's all. You protect this location (i.e. configuration, including proxy_pass and so on) from external requests. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From r at roze.lv Fri Oct 19 01:32:38 2012 From: r at roze.lv (Reinis Rozitis) Date: Fri, 19 Oct 2012 04:32:38 +0300 Subject: Questions about proxy_pass and internal directives In-Reply-To: References: <7B1B8871E2924FBC89FF4168083755D1@NeiRoze> Message-ID: <1B5522E6DF5145FBBB3CF522449ECC3F@NeiRoze> > I use "internal" directive to block direct access to anything > "/foo/bar/,,,", which seems to be what nginx is doing. At the same time, I > proxy_pass the request to the backend application server to check for > permissions. If success, the backend server sends a 'X-Accel-Redirect' > header back to nginx to serve the file. > I may need to rethink my design here. 
Ideally, I want users who request > "/foo/bar/sth" in their browsers get served by nginx with the file > "/foo/bar/sth/sth.html", while letting the backend application server > control the access to the file. Well then you are doing it generally right; the only tricky part to initially understand is using different location blocks - one for the proxy_pass and one for the protected files. The example is shown also in the XSendfile wiki page. - To really protect the files (while not strictly necessary) you should keep them out of the default webroot. - First you define the location you will be using as URLs on your website (there is no need for such directories or files to actually exist, as all the requests will be sent to the backend for it to decide what to do next). location /foo/bar { proxy_pass http://127.0.0.1:8080; proxy_redirect off; } - Second you define the location that will be used in the X-Accel-Redirect header sent from the backend server. location /protected/ { internal; root /data/files; # or alias /data/files/; - in case you want to leave the '/protected' out of your physical data path. } 1. Now if you request mysite.com/foo/bar/sth.html the request is sent to the backend ( http://127.0.0.1:8080/foo/bar/sth.html ) 2. If the download is allowed (whatever logic the application implements) the backend should respond with X-Accel-Redirect: /protected/foo/bar/sth.html (you can change the directory tree or even the resulting file names as you wish; the only requirement is to keep the defined internal path, in this case '/protected'). 3. Depending on what you used ('root' or 'alias') in the protected location block, a file from /data/files/protected/foo/bar/sth.html or /data/files/foo/bar/sth.html will be served by nginx. 4. Even if people discover the backend url or the X-Accel-Redirect header there is no way for them to access the files directly, since mysite.com/protected/foo/bar/sth.html won't work for them.
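To make step 2 concrete, here is a minimal backend-side sketch (plain stdlib WSGI; the permission check and paths are placeholders for whatever the application really does, not a known-good implementation):

```python
# Backend behind "proxy_pass http://127.0.0.1:8080": decide whether access is
# allowed and, if so, answer with an X-Accel-Redirect header instead of a body.
from wsgiref.simple_server import make_server

def allowed(environ):
    # placeholder: a real app would check a session/cookie/token here
    return environ.get("PATH_INFO", "").startswith("/foo/bar/")

def app(environ, start_response):
    path = environ.get("PATH_INFO", "")
    if allowed(environ):
        # nginx intercepts this header, discards the body, and internally
        # re-requests /protected/..., which the internal location serves
        start_response("200 OK", [("X-Accel-Redirect", "/protected" + path)])
        return [b""]
    start_response("403 Forbidden", [("Content-Type", "text/plain")])
    return [b"denied"]

# to run it behind the proxy_pass above:
# make_server("127.0.0.1", 8080, app).serve_forever()
```

Clients never see /protected/ working directly; only the X-Accel-Redirect response from the backend triggers that internal location.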
rr From quintinpar at gmail.com Fri Oct 19 02:20:25 2012 From: quintinpar at gmail.com (Quintin Par) Date: Fri, 19 Oct 2012 07:50:25 +0530 Subject: Cache always shows expired status for anonymous users Message-ID: I have my homepage cached as shown location = / { if (-f /var/www/statichtmls/during_build.html) { return 503; } set $country_code $http_x_country_code; set_if_empty $country_code $geoip_country_code; limit_req zone=pw burst=5 nodelay; proxy_pass http://localhost:82; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Country-Code $country_code; proxy_set_header Accept-Encoding ""; proxy_ignore_headers Cache-Control; proxy_ignore_headers Expires; proxy_ignore_headers X-Accel-Expires; add_header X-Cache-Status $upstream_cache_status; proxy_cache cache; proxy_cache_key $scheme$host$request_uri$cookie_sessionid$country_code; proxy_cache_valid 200 302 2m; proxy_cache_use_stale updating; } But every time I hit the page as an anonymous user it shows X-Cache-Status: EXPIRED user$ curl -I site.com HTTP/1.1 200 OK Server: ngx_openresty Date: Fri, 19 Oct 2012 02:15:32 GMT Content-Type: text/html; charset=utf-8 Connection: keep-alive Keep-Alive: timeout=60 Vary: Accept-Encoding Vary: Cookie Set-Cookie: csrftoken=Dl4mvy4Rky7sfZwqek27hFrCXzWCi9As; expires=Fri, 18-Oct-2013 02:15:32 GMT; Max-Age=31449600; Path=/ X-Cache-Status: EXPIRED I want to cache it both for logged in and anonymous and it looks like the absence of session cookie is creating issues. How do I cache the homepage correctly? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jeszenszky.zoltan at gmail.com Fri Oct 19 22:22:14 2012 From: jeszenszky.zoltan at gmail.com (=?ISO-8859-1?Q?Jeszenszky_Zolt=E1n?=) Date: Sat, 20 Oct 2012 00:22:14 +0200 Subject: replace string in content Message-ID: Hello, I just add these rows to the default config (/usr/local/nginx/conf/nginx.conf) at the end of the location section: location / { root html; index index.html index.htm; sub_filter nginx 'newstring'; sub_filter_once off; subs_filter_types text/html text/css text/xml; subs_filter nginx 'newstring' i; } But if I see the http://localhost in firefox there is no "newstring" instead of "nginx". Why? I expected to replace the nginx string... /usr/src/nginx/nginx-1.2.4# /usr/local/nginx/sbin/nginx -V nginx version: nginx/1.2.4 built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1) TLS SNI support enabled configure arguments: --with-pcre-jit --with-debug --with-file-aio --with-http_addition_module --with-http_dav_module --with-http_flv_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --add-module=/usr/src/nginx/ngx_http_substitutions_filter_module/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Oct 20 07:30:22 2012 From: nginx-forum at nginx.us (hussain) Date: Sat, 20 Oct 2012 03:30:22 -0400 Subject: SSL port other than 443 In-Reply-To: <4cc7b862da6f8c389ac2ca41d23e1f96.NginxMailingListEnglish@forum.nginx.org> References: <4cc7b862da6f8c389ac2ca41d23e1f96.NginxMailingListEnglish@forum.nginx.org> Message-ID: <05e79278ac459c75cfe31e87818938ad.NginxMailingListEnglish@forum.nginx.org> I am having the same issue. 
Here is the server block of my nginx.conf - ######### server{ listen 8090 ssl; server_name foo.bar.com; ssl_certificate conf.d/ssl/foo.bar.com.crt; ssl_certificate_key conf.d/ssl/foo.bar.com.key; keepalive_timeout 60; location / { proxy_pass https://127.0.0.1:8010; ### force timeouts if one of the backends dies ## proxy_next_upstream error timeout invalid_header http_500 http_502 http_503; ### Set headers #### proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; ### Most PHP, Python, Rails, Java apps can use this header ### proxy_set_header X-Forwarded-Proto https; ### By default we don't want to redirect it #### proxy_redirect off; } location ~ /\.ht { deny all; } } ######### As you can see, I am using a port other than 443. How do I make it work? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230326,232049#msg-232049 From nginx-forum at nginx.us Sat Oct 20 07:55:58 2012 From: nginx-forum at nginx.us (nnttoo) Date: Sat, 20 Oct 2012 03:55:58 -0400 Subject: nginx dead but pid file exists Message-ID: <86fe367c6416f6f9612b86f56e537ad3.NginxMailingListEnglish@forum.nginx.org> I had this problem a few times: "nginx dead but pid file exists". My site suddenly cannot be accessed; after I typed "service nginx status" the answer is "nginx dead but pid file exists". What happened to my vps? If possible, how do I make nginx reload/restart automatically if it happens again? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232050,232050#msg-232050 From quintinpar at gmail.com Sat Oct 20 09:42:13 2012 From: quintinpar at gmail.com (Quintin Par) Date: Sat, 20 Oct 2012 15:12:13 +0530 Subject: Cache always shows expired status for anonymous users In-Reply-To: References: Message-ID: I realized my folly. Set-Cookie for the csrf token is invalidating the cache!
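For what it's worth: by default nginx refuses to cache any upstream response that carries a Set-Cookie header, which is exactly what the CSRF cookie triggers here. A sketch of the two directives involved (to be added next to the proxy_cache settings above; hiding the header is optional but keeps one visitor's cookie out of everyone else's cached copy):

```nginx
proxy_ignore_headers Set-Cookie;   # allow caching despite the Set-Cookie header
proxy_hide_header    Set-Cookie;   # don't replay the cached cookie to other clients
```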
- Quintin On Fri, Oct 19, 2012 at 7:50 AM, Quintin Par wrote: > I have my homepage cached as shown > > location = / { > > if (-f /var/www/statichtmls/during_build.html) { > > return 503; > > } > > set $country_code $http_x_country_code; > > set_if_empty $country_code $geoip_country_code; > > > > limit_req zone=pw burst=5 nodelay; > > proxy_pass http://localhost:82; > > proxy_set_header Host $host; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_set_header X-Country-Code $country_code; > > proxy_set_header Accept-Encoding ""; > > proxy_ignore_headers Cache-Control; > > proxy_ignore_headers Expires; > > proxy_ignore_headers X-Accel-Expires; > > add_header X-Cache-Status $upstream_cache_status; > > proxy_cache cache; > > proxy_cache_key > $scheme$host$request_uri$cookie_sessionid$country_code; > > proxy_cache_valid 200 302 2m; > > proxy_cache_use_stale updating; > > } > > But every time I hit the page as an anonymous user it shows > > X-Cache-Status: EXPIRED > > > user$ curl -I site.com > > HTTP/1.1 200 OK > > Server: ngx_openresty > > Date: Fri, 19 Oct 2012 02:15:32 GMT > > Content-Type: text/html; charset=utf-8 > > Connection: keep-alive > > Keep-Alive: timeout=60 > > Vary: Accept-Encoding > > Vary: Cookie > > Set-Cookie: csrftoken=Dl4mvy4Rky7sfZwqek27hFrCXzWCi9As; expires=Fri, > 18-Oct-2013 02:15:32 > > GMT; Max-Age=31449600; Path=/ > > X-Cache-Status: EXPIRED > > I want to cache it both for logged in and anonymous and it looks like the > absence of session cookie is creating issues. > > How do I cache the homepage correctly? > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From christian.boenning at gmail.com Sat Oct 20 10:07:33 2012 From: christian.boenning at gmail.com (=?utf-8?Q?Christian_B=C3=B6nning?=) Date: Sat, 20 Oct 2012 12:07:33 +0200 Subject: nginx dead but pid file exists In-Reply-To: <86fe367c6416f6f9612b86f56e537ad3.NginxMailingListEnglish@forum.nginx.org> References: <86fe367c6416f6f9612b86f56e537ad3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4C3B7BCA-1977-4737-9D7F-2A3A944BECFD@gmail.com> Hi, I've no idea what happened, but maybe a look into the error log will help. What you can do to prevent this kind of issue is to use monitoring on that server. I for one use monit on my servers to make sure services are running and will be restarted in case a service crashes. Regards, Christian On 20.10.2012 at 09:55, "nnttoo" wrote: > I had this problem a few times, "nginx dead but pid file exists" > > My site suddenly can not access, after I typed "service nginx status" and > the answer is "nginx dead but pid file exists", what happened to my vps? > > if possible, how to make the nginx reload / restart automatically if it > happens again. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232050,232050#msg-232050 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sat Oct 20 10:33:23 2012 From: nginx-forum at nginx.us (ahmadtrco) Date: Sat, 20 Oct 2012 06:33:23 -0400 Subject: install nginx-1.2.4 problems Message-ID: <5adc5e33d4f69a4d05bd5e23267eac31.NginxMailingListEnglish@forum.nginx.org> When we try to install nginx-1.2.4 and check with ./configure, two problems come up, as follows: + OpenSSL library is not used + sha1 library is not found We then checked whether openssl is installed, with this result: rpm -qa openssl openssl-0.9.8e-22.el5_8.4 openssl-0.9.8e-22.el5_8.4 Please advise; what is the solution?
as we do not want to face any problems after installing nginx-1.2.4. The complete ./configure result is as follows: Configuration summary + using system PCRE library + OpenSSL library is not used + using builtin md5 code + sha1 library is not found + using system zlib library nginx path prefix: "/usr/local/nginx" nginx binary file: "/usr/local/nginx/sbin/nginx" nginx configuration prefix: "/usr/local/nginx/conf" nginx configuration file: "/usr/local/nginx/conf/nginx.conf" nginx pid file: "/usr/local/nginx/logs/nginx.pid" nginx error log file: "/usr/local/nginx/logs/error.log" nginx http access log file: "/usr/local/nginx/logs/access.log" nginx http client request body temporary files: "client_body_temp" nginx http proxy temporary files: "proxy_temp" nginx http fastcgi temporary files: "fastcgi_temp" nginx http uwsgi temporary files: "uwsgi_temp" nginx http scgi temporary files: "scgi_temp" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232053,232053#msg-232053 From nginx-forum at nginx.us Sat Oct 20 11:36:22 2012 From: nginx-forum at nginx.us (nnttoo) Date: Sat, 20 Oct 2012 07:36:22 -0400 Subject: nginx dead but pid file exists In-Reply-To: <4C3B7BCA-1977-4737-9D7F-2A3A944BECFD@gmail.com> References: <4C3B7BCA-1977-4737-9D7F-2A3A944BECFD@gmail.com> Message-ID: <3ca7d54768e388f881f335c2acaef889.NginxMailingListEnglish@forum.nginx.org> Can I create a sh file that checks the status of nginx and run it with a cron job? If the nginx status becomes "dead", the sh file would restart nginx. If so, what would a sample sh file look like, and what would the cron job entry be?
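Something along these lines is what I have in mind (a sketch only; the pid file path and restart command are guesses for my setup, not tested):

```python
# Watchdog sketch for "nginx dead but pid file exists": the pid file is
# present but the process it names is gone, so trigger a restart.
import os

PIDFILE = "/var/run/nginx.pid"  # assumption; check the "pid" line in nginx.conf

def dead_but_pidfile_exists(pidfile):
    """True when the pid file is present but no such process is alive."""
    if not os.path.exists(pidfile):
        return False  # no pid file: this particular failure mode is absent
    try:
        with open(pidfile) as f:
            pid = int(f.read().strip())
    except ValueError:
        return True  # a corrupt pid file counts as dead
    try:
        os.kill(pid, 0)  # signal 0: existence check only, sends nothing
        return False
    except ProcessLookupError:
        return True
    except PermissionError:
        return False  # process exists, just owned by another user

if __name__ == "__main__":
    if dead_but_pidfile_exists(PIDFILE):
        os.system("service nginx restart")  # assumed restart command
```

The cron entry could then run it every few minutes, e.g. `*/5 * * * * /usr/bin/python /usr/local/bin/nginx_watchdog.py` (path illustrative).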
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232050,232054#msg-232054 From contact at jpluscplusm.com Sat Oct 20 11:37:23 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sat, 20 Oct 2012 12:37:23 +0100 Subject: install nginx-1.2.4 problems In-Reply-To: <5adc5e33d4f69a4d05bd5e23267eac31.NginxMailingListEnglish@forum.nginx.org> References: <5adc5e33d4f69a4d05bd5e23267eac31.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 20 October 2012 11:33, ahmadtrco wrote: > When we try to install nginx-1.2.4 and check with. /configure then two > problems come as fallows: > + OpenSSL library is not used > + sha1 library is not found > > And then we check openssl installed or not then result come:- > > rpm -qa openssl > openssl-0.9.8e-22.el5_8.4 > openssl-0.9.8e-22.el5_8.4 I suggest these aren't the -dev packages with headers/source, and you're just installing runtime binaries which nginx can't use during compilation. Install the devel packages. Jonathan -- Jonathan Matthews // Oxford, London, UK From contact at jpluscplusm.com Sat Oct 20 12:00:54 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sat, 20 Oct 2012 13:00:54 +0100 Subject: nginx dead but pid file exists In-Reply-To: <3ca7d54768e388f881f335c2acaef889.NginxMailingListEnglish@forum.nginx.org> References: <4C3B7BCA-1977-4737-9D7F-2A3A944BECFD@gmail.com> <3ca7d54768e388f881f335c2acaef889.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 20 October 2012 12:36, nnttoo wrote: > Can I create a sh file that checks the status of nginx and run it with cron > job? if nginx status becomes "dead", then the sh file is going to restart > nginx. if it can, how to sample his sh file? and how examples of "cron job"? If you're asking for example of how cron jobs work, I don't think you're experienced enough to write scripts to monitor a daemon robustly. Please look into "monit" instead, if indeed your VPS environment seems to kill off processes like nginx randomly. 
Personally, I've never seen this happen, but I'm sure there are situations in which it can. Solutions like monit are a band-aid, and not a universal panacea. Jonathan -- Jonathan Matthews // Oxford, London, UK From edho at myconan.net Sun Oct 21 02:26:44 2012 From: edho at myconan.net (Edho Arief) Date: Sun, 21 Oct 2012 09:26:44 +0700 Subject: SSL port other than 443 In-Reply-To: <05e79278ac459c75cfe31e87818938ad.NginxMailingListEnglish@forum.nginx.org> References: <4cc7b862da6f8c389ac2ca41d23e1f96.NginxMailingListEnglish@forum.nginx.org> <05e79278ac459c75cfe31e87818938ad.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Sat, Oct 20, 2012 at 2:30 PM, hussain wrote: > ######### > As you can see I am using other port than 443. How do I make it work? > You forgot to mention what the problem you're having. I'm guessing your backend server isn't using ssl but you configured it as https (proxy_pass https://). Try using http:// instead. From quintinpar at gmail.com Sun Oct 21 03:17:47 2012 From: quintinpar at gmail.com (Quintin Par) Date: Sun, 21 Oct 2012 08:47:47 +0530 Subject: Different cache time for authentication & non auth users Message-ID: Hi all, How can I set separate caching expiration time for authenticated and non auth users? In my case the caching is set this way proxy_cache_key $scheme$host$request_uri$cookie_sessionid; proxy_cache_valid 200 302 2m; proxy_cache_use_stale updating; The presence of session ID ensure separate cache pages for auth & non auth users. But I want an experience where non auth users have a page cached for 1 hour and auth users have a page cached for 2 mins. How can I achieve this in Nginx? Can someone help? FYI, this is under the same location directive. - Quintin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From quintinpar at gmail.com Sun Oct 21 11:38:55 2012 From: quintinpar at gmail.com (Quintin Par) Date: Sun, 21 Oct 2012 17:08:55 +0530 Subject: Different cache time for authentication & non auth users In-Reply-To: References: Message-ID: Hi, Can someone help? I don?t want to end up using if case. - Quintin On Sun, Oct 21, 2012 at 8:47 AM, Quintin Par wrote: > Hi all, > > How can I set separate caching expiration time for authenticated and non > auth users? > > In my case the caching is set this way > > proxy_cache_key $scheme$host$request_uri$cookie_sessionid; > > proxy_cache_valid 200 302 2m; > > proxy_cache_use_stale updating; > > The presence of session ID ensure separate cache pages for auth & non auth > users. > > But I want an experience where non auth users have a page cached for 1 > hour and auth users have a page cached for 2 mins. > > How can I achieve this in Nginx? Can someone help? > > FYI, this is under the same location directive. > > - Quintin > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeszenszky.zoltan at gmail.com Sun Oct 21 11:41:26 2012 From: jeszenszky.zoltan at gmail.com (=?ISO-8859-1?Q?Jeszenszky_Zolt=E1n?=) Date: Sun, 21 Oct 2012 13:41:26 +0200 Subject: replace string in content In-Reply-To: References: Message-ID: Why I can not replace string in pages? Can I debug what happening? Help me, please. 2012/10/20 Jeszenszky Zolt?n > Hello, > > I just add these rows to the default config > (/usr/local/nginx/conf/nginx.conf) at the end of the location section: > > location / { > root html; > index index.html index.htm; > sub_filter nginx 'newstring'; > sub_filter_once off; > subs_filter_types text/html text/css text/xml; > subs_filter nginx 'newstring' i; > } > > But if I see the http://localhost in firefox there is no "newstring" > instead of "nginx". > Why? I expected to replace the nginx string... 
> > /usr/src/nginx/nginx-1.2.4# /usr/local/nginx/sbin/nginx -V > nginx version: nginx/1.2.4 > built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1) > TLS SNI support enabled > configure arguments: --with-pcre-jit --with-debug --with-file-aio > --with-http_addition_module --with-http_dav_module --with-http_flv_module > --with-http_gzip_static_module --with-http_mp4_module > --with-http_random_index_module --with-http_realip_module > --with-http_secure_link_module --with-http_stub_status_module > --with-http_ssl_module --with-http_sub_module > --add-module=/usr/src/nginx/ngx_http_substitutions_filter_module/ > -- ?dv?zlettel / Best Regards, Unix and TSM administrator -------------- next part -------------- An HTML attachment was scrubbed... URL: From farseas at gmail.com Sun Oct 21 12:57:13 2012 From: farseas at gmail.com (Bob S.) Date: Sun, 21 Oct 2012 08:57:13 -0400 Subject: Could people using auth-request please send me their actual nginx.conf files please? Message-ID: If you are successfully using auth request, -please send me your actual nginx.conf file. I am a relative newbie on nginx and cgi programming so actual examples will be very helpful. Thanks to all. My email is: farseas at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Sun Oct 21 16:29:48 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 21 Oct 2012 17:29:48 +0100 Subject: Could people using auth-request please send me their actual nginx.conf files please? In-Reply-To: References: Message-ID: On 21 October 2012 13:57, Bob S. wrote: > If you are successfully using auth request, -please send me your actual > nginx.conf file. I'm not using this module but, reading its README, it seems trivially replaceable with X-Accel-Redirect and an "internal" location. 
I'll be blogging our specific setup sometime soon, but in production I'm basically using a modified version of what's described here: http://ls4.sourceforge.net/doc/howto/ddt.html. For us, it works reliably, efficiently, without a 3rd party module, delivering gigs of data a day. I literally can't see what extra that module gives you. Regards, Jonathan -- Jonathan Matthews // Oxford, London, UK From nginx-forum at nginx.us Sun Oct 21 21:23:56 2012 From: nginx-forum at nginx.us (kochbidyut) Date: Sun, 21 Oct 2012 17:23:56 -0400 Subject: problem in config..... Message-ID: <1147047c3f1cf7f316c8a8a4efdb08ae.NginxMailingListEnglish@forum.nginx.org> Hi Everybody, Kindly provide me the details for the config.... and installation... my site is accessible at http://www.Loversearth.com Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232074,232074#msg-232074 From ne at vbart.ru Sun Oct 21 21:44:38 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Mon, 22 Oct 2012 01:44:38 +0400 Subject: replace string in content In-Reply-To: References: Message-ID: <201210220144.38452.ne@vbart.ru> On Sunday 21 October 2012 15:41:26 Jeszenszky Zolt?n wrote: > Why I can not replace string in pages? Probably you have a problem in your config, but since you have provided only small part of it, then it's impossible to determine. > Can I debug what happening? Help me, please. Yes, you can: http://nginx.org/en/docs/debugging_log.html wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From jeszenszky.zoltan at gmail.com Mon Oct 22 08:27:32 2012 From: jeszenszky.zoltan at gmail.com (=?ISO-8859-1?Q?Jeszenszky_Zolt=E1n?=) Date: Mon, 22 Oct 2012 10:27:32 +0200 Subject: replace string in content In-Reply-To: <201210220144.38452.ne@vbart.ru> References: <201210220144.38452.ne@vbart.ru> Message-ID: I will check it with the debug mode when I get home, thanks. I use the default config. I just add the filter rows all other lines are default. 
That's why I don't understand what can be wrong. 2012.10.21. 23:45, "Valentin V. Bartenev" ezt ?rta: > On Sunday 21 October 2012 15:41:26 Jeszenszky Zolt?n wrote: > > Why I can not replace string in pages? > > Probably you have a problem in your config, but since you have provided > only > small part of it, then it's impossible to determine. > > > Can I debug what happening? Help me, please. > > Yes, you can: http://nginx.org/en/docs/debugging_log.html > > wbr, Valentin V. Bartenev > > -- > http://nginx.com/support.html > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Oct 22 09:31:24 2012 From: nginx-forum at nginx.us (rahul286) Date: Mon, 22 Oct 2012 05:31:24 -0400 Subject: Controlling Message-ID: <3235007cb6a9bf3bcd1f928124e9b761.NginxMailingListEnglish@forum.nginx.org> With reference to: http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_cache_path file names in a cache looks like this: /data/nginx/cache/c/29/b7f54b2df7773722d382f4809d65029c Is there any way to use different scheme for this, for example: /data/nginx/cache/$http_host/$request_uri/ === Reason: With reference to http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_cache_key For fastcgi_cache_key "$scheme$request_method$host$request_uri"; If cache path become: /data/nginx/cache/$scheme/$request_method/$host/$request_uri Then in location / block I can have something like: location / { try_files /data/nginx/cache/$scheme/$request_method/$host/$request_uri $uri $uri/ /index.php?$args; } == I think if try_files can "hit" cached location like above, it will be faster. As of now for a cached page, a request reaches fastcgi location block (not fastcgI backend/upstream handler - just another location handler/another internal rewrite). 
In some tests, I found nginx's fastcgi_cache was taking slightly more time to return a cached page. Difference was too small and I was testing against only 1 URL. I think nginx's way will perform better when lookup will be done among 1000s of cached pages. == Another reason for some control over fastcgi_cache_path is to have $http_host prefix storage. As of now when clearing cache we need to wipe it out completely. May be with $http_host in fastcgi_cache_path we will be able to clear for a single-domain. Something like: fastcgi_cache_path /data/nginx/cache/$http_host levels=1:2 keys_zone=one:10m; Not sure, if above works and creates non-existent directory automatically. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232079,232079#msg-232079 From kasperg at benjamin.dk Mon Oct 22 12:01:17 2012 From: kasperg at benjamin.dk (Kasper Grubbe) Date: Mon, 22 Oct 2012 14:01:17 +0200 Subject: NGINX looks for /logs/error.log as default? Message-ID: Hello, I have a problem with NGINX version 1.2.4 it will constantly look for /logs/error.log, even though I have defined another error.log in my configuration. I start NGINX like this: nginx -p . -c config/nginx/development.conf My development.conf is fairly simple and looks like this: # main error log error_log log/nginx.error.log debug; worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; # main access log access_log log/nginx.access.log combined; sendfile on; keepalive_timeout 65; server { listen 9002; #omg! server_name local.woman.dk; rewrite ^/(.*)/$ /$1 last; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_max_temp_file_size 0; location / { ssi on; proxy_pass http://127.0.0.1:3000; } } } But NGINX don't want to start because ./logs/error.log doesn't exist. This is my error: [ben2 (master)]=> nginx -p . 
-c config/nginx/development.conf nginx: [alert] could not open error log file: open() "./logs/error.log" failed (2: No such file or directory) Am I doing it wrong? -------------- next part -------------- An HTML attachment was scrubbed... URL: From sb at waeme.net Mon Oct 22 13:23:26 2012 From: sb at waeme.net (Sergey Budnevitch) Date: Mon, 22 Oct 2012 17:23:26 +0400 Subject: NGINX looks for /logs/error.log as default? In-Reply-To: References: Message-ID: On 22 Oct 2012, at 16:01, Kasper Grubbe wrote: > > Am I doing it wrong? Nginx has to write its log even before it reads the config file, so nginx tries to open the file specified by --error-log-path, or a path relative to the --prefix configure option. From ne at vbart.ru Mon Oct 22 13:37:23 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Mon, 22 Oct 2012 17:37:23 +0400 Subject: NGINX looks for /logs/error.log as default? In-Reply-To: References: Message-ID: <201210221737.23202.ne@vbart.ru> On Monday 22 October 2012 16:01:17 Kasper Grubbe wrote: > Hello, I have a problem with NGINX version 1.2.4 it will constantly look > for /logs/error.log, even though I have defined another error.log in my > configuration. [...] > But NGINX don't want to start because ./logs/error.log doesn't exist. This > is my error: > > [ben2 (master)]=> nginx -p . -c config/nginx/development.conf > nginx: [alert] could not open error log file: open() "./logs/error.log" > failed (2: No such file or directory) > > Am I doing it wrong? Nginx must have access to the error log before it parses the configuration. See also the "--error-log-path=" configure parameter. http://www.nginx.org/en/docs/install.html wbr, Valentin V.
Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From jeszenszky.zoltan at gmail.com Mon Oct 22 19:52:37 2012 From: jeszenszky.zoltan at gmail.com (=?ISO-8859-1?Q?Jeszenszky_Zolt=E1n?=) Date: Mon, 22 Oct 2012 21:52:37 +0200 Subject: replace string in content In-Reply-To: <201210220144.38452.ne@vbart.ru> References: <201210220144.38452.ne@vbart.ru> Message-ID: Here is the config, replace content doesn't work: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; error_log /var/log/nginx/error.log debug; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; gzip off; server { error_log /var/log/nginx/error.log debug; listen 80; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm; sub_filter nginx korte; sub_filter_once off; subs_filter_types text/html text/css text/xml; subs_filter nginx 'newstring' i; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # 
another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } 2012/10/21 Valentin V. Bartenev > On Sunday 21 October 2012 15:41:26 Jeszenszky Zolt?n wrote: > > Why I can not replace string in pages? > > Probably you have a problem in your config, but since you have provided > only > small part of it, then it's impossible to determine. > > > Can I debug what happening? Help me, please. > > Yes, you can: http://nginx.org/en/docs/debugging_log.html > > wbr, Valentin V. Bartenev > > -- > http://nginx.com/support.html > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- ?dv?zlettel / Best Regards, Unix and TSM administrator -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeszenszky.zoltan at gmail.com Mon Oct 22 22:20:27 2012 From: jeszenszky.zoltan at gmail.com (=?ISO-8859-1?Q?Jeszenszky_Zolt=E1n?=) Date: Tue, 23 Oct 2012 00:20:27 +0200 Subject: replace string in content In-Reply-To: References: <201210220144.38452.ne@vbart.ru> Message-ID: I just change the default port 80 to 8869 and the reprace content started to work! Then I wrote the port to 80 again and the filter is still working. Incredible... 
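For anyone hitting the same thing: the config above mixes directives from two different modules, the built-in ngx_http_sub_module (sub_filter*) and the third-party substitutions filter (subs_filter*), which my build loads via --add-module. A sketch using only the built-in module (illustrative, not my exact file):

```nginx
location / {
    root  html;
    index index.html index.htm;
    sub_filter       nginx 'newstring';  # matching ignores case
    sub_filter_once  off;                # replace every occurrence, not just the first
    sub_filter_types text/html;          # text/html is the default filtered type
}
```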
2012/10/22 Jeszenszky Zoltán

> Here is the config, replace content doesn't work:
>
> [...]

> 2012/10/21 Valentin V. Bartenev
>
>> On Sunday 21 October 2012 15:41:26 Jeszenszky Zoltán wrote:
>> > Why can I not replace strings in pages?
>>
>> [...]

--
Üdvözlettel / Best Regards,
Unix and TSM administrator
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Alex.Samad at yieldbroker.com  Tue Oct 23 06:25:06 2012
From: Alex.Samad at yieldbroker.com (Alex Samad - Yieldbroker)
Date: Tue, 23 Oct 2012 06:25:06 +0000
Subject: Question about ssl CRL
Message-ID:

Hi

New to nginx, trying to set up an SSL reverse proxy. I have the SSL server
and client setup working, but when I add in the CRL PEM it fails.

I downloaded the CRL from VeriSign, converted it from DER to PEM format,
and saved it. When I uncomment

#ssl_crl /var/www/dev.xyz.com/certs/crl.pem;

my clients fail to connect and I get a 400 error! Not sure what the issue
is?
Thanks
Alex

{code}
server {
    listen 447 ssl;
    server_name dev.xyz.com;

    ssl on;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA:RC4-MD5;
    ssl_certificate /var/www/dev.xyz.com/certs/dev.xyz.com.crt;
    ssl_certificate_key /var/www/dev.xyz.com/certs/dev.xyz.com.key;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # 1.3.7
    #ssl_client_certificate /var/www/dev.xyz.com/certs/dev.xyz.com.AcceptableUserCertsCA;
    #ssl_trusted_certificate /var/www/dev.xyz.com/certs/dev.xyz.com.UserCertsCA;
    ssl_client_certificate /var/www/dev.xyz.com/certs/dev.xyz.com.UserCertsCA;
    #ssl_crl /var/www/dev.xyz.com/certs/crl.pem;
    ssl_verify_client on;
    ssl_verify_depth 3;

    access_log /var/log/nginx/dev.xyz.com.access.log main;
    error_log /var/log/nginx/dev.xyz.com.error.log warn;

    location / {
        root /var/www/dev.xyz.com/wwwroot/;
        index index.html index.htm;
        autoindex on;
    }
}

From lists at ruby-forum.com  Tue Oct 23 12:35:24 2012
From: lists at ruby-forum.com (Cristian R.)
Date: Tue, 23 Oct 2012 14:35:24 +0200
Subject: Limit the burst speed controlled by limit_rate_after
Message-ID:

Hello

I have been searching for days for a way to limit the actual burst speed
as well, not just the speed after the initial data is sent. I want that
first 4MB to be sent at 500KBps and then the rest of the file to be sent
at 90KBps.

Is there any way to do this, or can someone write a patch? I am ready to
donate for the cause... :)

Thank you.

--
Posted via http://www.ruby-forum.com/.

From nginx-forum at nginx.us  Tue Oct 23 14:57:17 2012
From: nginx-forum at nginx.us (marcochecco)
Date: Tue, 23 Oct 2012 10:57:17 -0400
Subject: Help me...
Message-ID: <74f94777a53a5a81a361580631e0e901.NginxMailingListEnglish@forum.nginx.org>

The rewriting does not work. Path: /var/www/localhost

server {
    listen 80;
    server_name localhost;
    root /var/www/localhost;
    index index.html index.htm index.php;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include /etc/nginx/fastcgi_params;
    }
}

.htaccess

# Translate and upload by DLEVIET Team (dleviet.com)
# Redirects
rewrite ^/page/(.*)$ /index.php?cstart=$1 last;
# Post
rewrite ^/([0-9]+)/([0-9]+)/([0-9]+)/page,([0-9]+),([0-9]+),(.*).html(/?)+$ /index.php?subaction=showfull&year=$1&month=$2&day=$3&news_page=$4&cstart=$5&news_name=$6 last;
rewrite ^/([0-9]+)/([0-9]+)/([0-9]+)/page,([0-9]+),(.*).html(/?)+$ /index.php?subaction=showfull&year=$1&month=$2&day=$3&news_page=$4&news_name=$5 last;
rewrite ^/([0-9]+)/([0-9]+)/([0-9]+)/print:page,([0-9]+),(.*).html(/?)+$ /engine/print.php?subaction=showfull&year=$1&month=$2&day=$3&news_page=$4&news_name=$5 last;
rewrite ^/([0-9]+)/([0-9]+)/([0-9]+)/(.*).html(/?)+$ /index.php?subaction=showfull&year=$1&month=$2&day=$3&news_name=$4 last;
rewrite ^/([^.]+)/page,([0-9]+),([0-9]+),([0-9]+)-(.*).html(/?)+$ /index.php?newsid=$4&news_page=$2&cstart=$3&seourl=$5&seocat=$1 last;
rewrite ^/([^.]+)/page,([0-9]+),([0-9]+)-(.*).html(/?)+$ /index.php?newsid=$3&news_page=$2&seourl=$4&seocat=$1 last;
rewrite ^/([^.]+)/print:page,([0-9]+),([0-9]+)-(.*).html(/?)+$ /engine/print.php?news_page=$2&newsid=$3&seourl=$4&seocat=$1 last;
rewrite ^/([^.]+)/([0-9]+)-(.*).html(/?)+$ /index.php?newsid=$2&seourl=$3&seocat=$1 last;
rewrite ^/page,([0-9]+),([0-9]+),([0-9]+)-(.*).html(/?)+$ /index.php?newsid=$3&news_page=$1&cstart=$2&seourl=$4 last;
rewrite ^/page,([0-9]+),([0-9]+)-(.*).html(/?)+$ /index.php?newsid=$2&news_page=$1&seourl=$3 last;
rewrite ^/print:page,([0-9]+),([0-9]+)-(.*).html(/?)+$ /engine/print.php?news_page=$1&newsid=$2&seourl=$3 last;
rewrite ^/([0-9]+)-(.*).html(/?)+$ /index.php?newsid=$1&seourl=$2 last;
# For day
rewrite ^/([0-9]+)/([0-9]+)/([0-9]+)(/?)+$ /index.php?year=$1&month=$2&day=$3 last;
rewrite ^/([0-9]+)/([0-9]+)/([0-9]+)/page/([0-9]+)(/?)+$ /index.php?year=$1&month=$2&day=$3&cstart=$4 last;
# For all months
rewrite ^/([0-9]+)/([0-9]+)(/?)+$ /index.php?year=$1&month=$2 last;
rewrite ^/([0-9]+)/([0-9]+)/page/([0-9]+)(/?)+$ /index.php?year=$1&month=$2&cstart=$3 last;
# Output for the entire year
rewrite ^/([0-9]+)(/?)+$ /index.php?year=$1 last;
rewrite ^/([0-9]+)/page/([0-9]+)(/?)+$ /index.php?year=$1&cstart=$2 last;
# Output for tags
rewrite ^/tags/([^/]*)(/?)+$ /index.php?do=tags&tag=$1 last;
rewrite ^/tags/([^/]*)/page/([0-9]+)(/?)+$ /index.php?do=tags&tag=$1&cstart=$2 last;
# Output for users
rewrite ^/user/([^/]*)/rss.xml$ /engine/rss.php?subaction=allnews&user=$1 last;
rewrite ^/user/([^/]*)(/?)+$ /index.php?subaction=userinfo&user=$1 last;
rewrite ^/user/([^/]*)/page/([0-9]+)(/?)+$ /index.php?subaction=userinfo&user=$1&cstart=$2 last;
rewrite ^/user/([^/]*)/news(/?)+$ /index.php?subaction=allnews&user=$1 last;
rewrite ^/user/([^/]*)/news/page/([0-9]+)(/?)+$ /index.php?subaction=allnews&user=$1&cstart=$2 last;
rewrite ^/user/([^/]*)/news/rss.xml(/?)+$ /engine/rss.php?subaction=allnews&user=$1 last;
# Output for last news
rewrite ^/lastnews/(/?)+$ /index.php?do=lastnews last;
rewrite ^/lastnews/page/([0-9]+)(/?)+$ /index.php?do=lastnews&cstart=$1 last;
# Output for catalog
rewrite ^/catalog/([^/]*)/rss.xml$ /engine/rss.php?catalog=$1 last;
rewrite ^/catalog/([^/]*)(/?)+$ /index.php?catalog=$1 last;
rewrite ^/catalog/([^/]*)/page/([0-9]+)(/?)+$ /index.php?catalog=$1&cstart=$2 last;
# Output for new posts
rewrite ^/newposts(/?)+$ /index.php?subaction=newposts last;
rewrite ^/newposts/page/([0-9]+)(/?)+$ /index.php?subaction=newposts&cstart=$1 last;
# Output for favorites news
rewrite ^/favorites(/?)+$ /index.php?do=favorites last;
rewrite ^/favorites/page/([0-9]+)(/?)+$ /index.php?do=favorites&cstart=$1 last;
rewrite ^/rules.html$ /index.php?do=rules last;
rewrite ^/statistics.html$ /index.php?do=stats last;
rewrite ^/addnews.html$ /index.php?do=addnews last;
rewrite ^/rss.xml$ /engine/rss.php last;
rewrite ^/sitemap.xml$ /uploads/sitemap.xml last;

if (!-d $request_filename) {
    rewrite ^/([^.]+)/page/([0-9]+)(/?)+$ /index.php?do=cat&category=$1&cstart=$2 last;
    rewrite ^/([^.]+)/?$ /index.php?do=cat&category=$1 last;
}
if (!-f $request_filename) {
    rewrite ^/([^<]+)/rss.xml$ /engine/rss.php?do=cat&category=$1 last;
    rewrite ^/page,([0-9]+),([^/]+).html$ /index.php?do=static&page=$2&news_page=$1 last;
    rewrite ^/print:([^/]+).html$ /engine/print.php?do=static&page=$1 last;
}
if (!-f $request_filename) {
    rewrite ^/([^/]+).html$ /index.php?do=static&page=$1 last;
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232117,232117#msg-232117

From lists at ruby-forum.com  Tue Oct 23 15:42:55 2012
From: lists at ruby-forum.com (Cristian R.)
Date: Tue, 23 Oct 2012 17:42:55 +0200
Subject: limit_rate_after
In-Reply-To: <20090528104608.GF11372@rambler-co.ru>
References: <20090527145044.GE77580@rambler-co.ru> <20090528104608.GF11372@rambler-co.ru>
Message-ID: <82bdbb12bce16375e4c54f7898932fed@ruby-forum.com>

Igor Sysoev wrote in post #820789:
> On Wed, May 27, 2009 at 06:50:44PM +0400, Igor Sysoev wrote:
>
>> Patch creates limit_rate_after directive:
>>
>> limit_rate_after 1m;
>> limit_rate 100k;
>>
>> The directive limits speed only after the first part was sent.
>
> The renewed patch.

Any chance for a new patch here to shape the burst as well? I want the
first 1m to go at 500k and the remainder at 100k...

Thank you

--
Posted via http://www.ruby-forum.com/.
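For context, the stock directive pair under discussion only exempts the initial bytes from throttling entirely; a sketch of that behaviour (the location name is illustrative, not from the thread):

```nginx
# Stock behaviour of the limit_rate_after / limit_rate pair:
# the first 1m of a response is sent at full link speed, and only the
# remainder is throttled to 100k. There is no built-in way to cap the
# initial part at an intermediate rate such as 500k -- that is exactly
# the feature being requested here.
location /downloads/ {
    limit_rate_after 1m;
    limit_rate       100k;
}
```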
From nginx-forum at nginx.us Wed Oct 24 05:02:15 2012 From: nginx-forum at nginx.us (runesoerensen) Date: Wed, 24 Oct 2012 01:02:15 -0400 Subject: Worker processes not shutting down In-Reply-To: <20120125134709.GQ67687@mdounin.ru> References: <20120125134709.GQ67687@mdounin.ru> Message-ID: <7afec80bebe754cdb4bdfafd9380a85b.NginxMailingListEnglish@forum.nginx.org> nginx -V returns: nginx version: nginx/1.2.0 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-debug --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/build/buildd/nginx-1.2.0/debian/modules/nginx-auth-pam --add-module=/build/buildd/nginx-1.2.0/debian/modules/nginx-echo --add-module=/build/buildd/nginx-1.2.0/debian/modules/nginx-upstream-fair --add-module=/build/buildd/nginx-1.2.0/debian/modules/nginx-dav-ext-module I've only seen the issue in a production environment where it's difficult to test with vanilla nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221591,232122#msg-232122 From nginx-forum at nginx.us Wed Oct 24 10:03:23 2012 From: nginx-forum at nginx.us (pedroandal) Date: Wed, 24 Oct 2012 06:03:23 -0400 Subject: nginx windows 64bit In-Reply-To: <5d75178e6e1b4d09076aae216956ba07.NginxMailingListEnglish@forum.nginx.org> 
References: <5d75178e6e1b4d09076aae216956ba07.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <51d57157e2a5b8b98f35d5ed1a3d62db.NginxMailingListEnglish@forum.nginx.org>

native nginx 1.2.0 W64 here
http://dl.dropbox.com/u/5229158/nginx-1.2.0-x64.zip

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,213426,232130#msg-232130

From christian.boenning at gmail.com  Wed Oct 24 11:25:30 2012
From: christian.boenning at gmail.com (=?ISO-8859-1?Q?Christian_B=F6nning?=)
Date: Wed, 24 Oct 2012 13:25:30 +0200
Subject: gzip_static and file-extension '.gzip'
Message-ID:

hi,

is there any chance to extend gzip_static to act on more file extensions
than the current '.gz' for pre-compressed files? In some places TYPO3
generates '.css.gzip', '.js.gzip', or even '.gzip.js' files, which are
plain gzip files too. I imagine that delivering them directly could give a
speed bump, without having to either let nginx compress them or patch the
TYPO3 Core and a lot of extensions to make them write '.css.gz' (and so
on).

Regards,
Christian

From farseas at gmail.com  Wed Oct 24 12:23:24 2012
From: farseas at gmail.com (Bob S.)
Date: Wed, 24 Oct 2012 08:23:24 -0400
Subject: Could people using auth-request please send me their actual nginx.conf files please?
In-Reply-To:
References:
Message-ID:

That link is dead.

Also, I am trying to use auth_request to provide access to a protected
directory, whereas it looks like X-Accel-Redirect is used to provide
download access to a particular file.

Bob S.

On Sun, Oct 21, 2012 at 12:29 PM, Jonathan Matthews wrote:

> On 21 October 2012 13:57, Bob S. wrote:
> > If you are successfully using auth request, please send me your actual
> > nginx.conf file.
>
> I'm not using this module but, reading its README, it seems trivially
> replaceable with X-Accel-Redirect and an "internal" location.
> > I'll be blogging our specific setup sometime soon, but in production > I'm basically using a modified version of what's described here: > http://ls4.sourceforge.net/doc/howto/ddt.html. > > For us, it works reliably, efficiently, without a 3rd party module, > delivering gigs of data a day. > I literally can't see what extra that module gives you. > > Regards, > Jonathan > -- > Jonathan Matthews // Oxford, London, UK > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Wed Oct 24 12:55:29 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 24 Oct 2012 13:55:29 +0100 Subject: Could people using auth-request please send me their actual nginx.conf files please? In-Reply-To: References: Message-ID: On 24 October 2012 13:23, Bob S. wrote: > That link is dead. Works for me. user at host:~$ curl -I http://ls4.sourceforge.net/doc/howto/ddt.html HTTP/1.1 200 OK Server: Apache/2.2.15 (CentOS) Vary: Host Last-Modified: Mon, 11 Apr 2011 09:36:27 GMT ETag: "2ab2-4a0a14f45dcc0" Accept-Ranges: bytes Content-Length: 10930 [snip] > Also, I am trying to use auth_request to provide access to a protected > directory, whereas it looks like X-Accel-Redirect is used to provide > download access to a particular file. Reading http://mdounin.ru/hg/ngx_http_auth_request_module/file/a29d74804ff1/README slightly more carefully, I understand the distinction you're seeing. I still think it's trivially replaceable with X-Accel-Redirect *if* you are in control of the URI structure. Do read that page I linked - it really can do this! 
J -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From lbonetto at kenzanmedia.com Wed Oct 24 14:27:33 2012 From: lbonetto at kenzanmedia.com (Laurent Bonetto) Date: Wed, 24 Oct 2012 10:27:33 -0400 Subject: Configuring nginx as mail proxy Message-ID: <4257CF30-F6E4-4A5E-A471-8752FC4BF439@kenzanmedia.com> I need to use nginx as a mail proxy. I am completely new to nginx and need some help with the configuration. Here is what I did: First I built a service that mocks the authentication services described here: http://wiki.nginx.org/NginxMailCoreModule. For example, curl -v -H "Host:auth.server.hostname" -H "Auth-Method:plain" -H "Auth-User:user" -H "Auth-pass:123" -H "Auth-Protocol:imap" -H "Auth-Login-Attempt:1" -H "Client-IP: 192.168.1.1" http://localhost:8080/authorize returns the following response header (no matter what Auth-User, Auth-Protocol, and Client-IP are for now): < HTTP/1.1 200 OK < Content-Type: text/html;charset=ISO-8859-1 < Auth-Status: OK < Auth-Server: < Auth-Port: 110 Second I installed nginx on my mac after installing macports: $ sudo port -d selfupdate $ sudo port install nginx Third I created an nginx.conf with the following: worker_processes 1; error_log /var/log/nginx/error.log info; mail { # not needed because returned in Auth-Server actually? 
server_name ; auth_http http://localhost:8080/authorize; pop3_auth plain apop cram-md5; pop3_capabilities "LAST" "TOP" "USER" "PIPELINING" "UIDL"; xclient off; server { listen 110; protocol pop3; proxy on; proxy_pass_error_message on; } } Here is what I got running nginx: $ nginx -V nginx version: nginx/1.2.4 configure arguments: --prefix=/opt/local --with-cc-opt='-I/opt/local/include -O2' --with-ld-opt=-L/opt/local/lib --conf-path=/opt/local/etc/nginx/nginx.conf --error-log-path=/opt/local/var/log/nginx/error.log --http-log-path=/opt/local/var/log/nginx/access.log --pid-path=/opt/local/var/run/nginx/nginx.pid --lock-path=/opt/local/var/run/nginx/nginx.lock --http-client-body-temp-path=/opt/local/var/run/nginx/client_body_temp --http-proxy-temp-path=/opt/local/var/run/nginx/proxy_temp --http-fastcgi-temp-path=/opt/local/var/run/nginx/fastcgi_temp --http-uwsgi-temp-path=/opt/local/var/run/nginx/uwsgi_temp --with-ipv6 $ nginx nginx: [emerg] unknown directive "mail" in /opt/local/etc/nginx/nginx.conf:6 The only mention of that error on the web brings up a discussion in Russian and the translation is "This question is no longer relevant" My questions: Why am I getting this unknow directive? Does my config look correct at first sight or am I missing some key component for the mail proxy to work using the authentication approach described here: http://wiki.nginx.org/NginxMailCoreModule? Thanks a lot. ?????! -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From citrin at citrin.ru  Wed Oct 24 14:41:50 2012
From: citrin at citrin.ru (Anton Yuzhaninov)
Date: Wed, 24 Oct 2012 18:41:50 +0400
Subject: Configuring nginx as mail proxy
In-Reply-To: <4257CF30-F6E4-4A5E-A471-8752FC4BF439@kenzanmedia.com>
References: <4257CF30-F6E4-4A5E-A471-8752FC4BF439@kenzanmedia.com>
Message-ID: <5087FE2E.7000502@citrin.ru>

On 10/24/12 18:27, Laurent Bonetto wrote:
> *Second* I installed nginx on my mac after installing macports:
>
> |$ sudo port -d selfupdate
> $ sudo port install nginx
> |
> *Here is what I got running nginx:*
>
> $ nginx -V nginx version: nginx/1.2.4 configure arguments: [...]
>
> $ nginx nginx: [emerg] unknown directive "mail" in /opt/local/etc/nginx/nginx.conf:6
>

nginx -V shows that the mail module is not built in; --with-mail should be
added to the configure parameters.

Try to see "port variants nginx"; maybe it is possible to build nginx with
the mail module via MacPorts.

--
Anton Yuzhaninov

From mdounin at mdounin.ru  Wed Oct 24 14:53:36 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 24 Oct 2012 18:53:36 +0400
Subject: Configuring nginx as mail proxy
In-Reply-To: <4257CF30-F6E4-4A5E-A471-8752FC4BF439@kenzanmedia.com>
References: <4257CF30-F6E4-4A5E-A471-8752FC4BF439@kenzanmedia.com>
Message-ID: <20121024145336.GD40452@mdounin.ru>

Hello!
On Wed, Oct 24, 2012 at 10:27:33AM -0400, Laurent Bonetto wrote: > I need to use nginx as a mail proxy. I am completely new to > nginx and need some help with the configuration. [...] > $ nginx -V nginx version: nginx/1.2.4 configure arguments: > --prefix=/opt/local --with-cc-opt='-I/opt/local/include -O2' > --with-ld-opt=-L/opt/local/lib > --conf-path=/opt/local/etc/nginx/nginx.conf > --error-log-path=/opt/local/var/log/nginx/error.log > --http-log-path=/opt/local/var/log/nginx/access.log > --pid-path=/opt/local/var/run/nginx/nginx.pid > --lock-path=/opt/local/var/run/nginx/nginx.lock > --http-client-body-temp-path=/opt/local/var/run/nginx/client_body_temp > --http-proxy-temp-path=/opt/local/var/run/nginx/proxy_temp > --http-fastcgi-temp-path=/opt/local/var/run/nginx/fastcgi_temp > --http-uwsgi-temp-path=/opt/local/var/run/nginx/uwsgi_temp > --with-ipv6 > > $ nginx nginx: [emerg] unknown directive "mail" in > /opt/local/etc/nginx/nginx.conf:6 > > The only mention of that error on the web brings up a discussion > in Russian and the translation is "This question is no longer > relevant" > > > My questions: > > Why am I getting this unknow directive? To use nginx mail proxy module, you have to enable it during compilation using the --with-mail configure argument. From the "nginx -V" output you've provided it's clear that nginx binary you are using doesn't have mail module compiled in. See here for more details: http://nginx.org/en/docs/mail/ngx_mail_core_module.html [...] -- Maxim Dounin http://nginx.com/support.html From lbonetto at kenzanmedia.com Wed Oct 24 15:49:43 2012 From: lbonetto at kenzanmedia.com (Laurent Bonetto) Date: Wed, 24 Oct 2012 11:49:43 -0400 Subject: Configuring nginx as mail proxy In-Reply-To: <20121024145336.GD40452@mdounin.ru> References: <4257CF30-F6E4-4A5E-A471-8752FC4BF439@kenzanmedia.com> <20121024145336.GD40452@mdounin.ru> Message-ID: <002AC47E-8A06-4502-8206-81719078DCBC@kenzanmedia.com> Thanks. That was indeed my first issue. 
I did sudo port edit nginx, added --with-mail to the config options,
reinstalled, and now I am past that error.

I then got an error that no events block was present, so I just added

events {
    worker_connections 1;
}

Now nginx is starting, but I never see any hit to my mock service despite
it being specified in auth_http:

auth_http http://localhost:8080/authorize;

No errors are reported in the error log.

When is nginx expected to hit the URL specified in auth_http? When it gets
launched? When an event occurs on ports 110 and 2525 with the protocols I
specified? I tried sending emails to/from my mail server, which uses POP3
on port 110 and SMTP on port 2525, and never saw any hit to the authorize
URL.

Am I missing another component in my config? Again, here is my current
config:

worker_processes 1;
error_log /var/log/nginx/error.log info;

events {
    worker_connections 1;
}

mail {
    server_name ;
    auth_http http://localhost:8080/authorize;

    pop3_auth plain apop cram-md5;
    pop3_capabilities "LAST" "TOP" "USER" "PIPELINING" "UIDL";

    smtp_auth login plain cram-md5;
    smtp_capabilities "SIZE 10485760" ENHANCEDSTATUSCODES 8BITMIME DSN;

    xclient off;

    server {
        # SMTP on port 2525 not 25
        listen 2525;
        protocol smtp;
    }
    server {
        listen 110;
        protocol pop3;
        # proxy on;
        proxy_pass_error_message on;
    }
}

On Oct 24, 2012, at 10:53 AM, Maxim Dounin wrote:

> Hello!
>
> On Wed, Oct 24, 2012 at 10:27:33AM -0400, Laurent Bonetto wrote:
>
>> I need to use nginx as a mail proxy. I am completely new to
>> nginx and need some help with the configuration.
>
> [...]
> >> $ nginx -V nginx version: nginx/1.2.4 configure arguments: >> --prefix=/opt/local --with-cc-opt='-I/opt/local/include -O2' >> --with-ld-opt=-L/opt/local/lib >> --conf-path=/opt/local/etc/nginx/nginx.conf >> --error-log-path=/opt/local/var/log/nginx/error.log >> --http-log-path=/opt/local/var/log/nginx/access.log >> --pid-path=/opt/local/var/run/nginx/nginx.pid >> --lock-path=/opt/local/var/run/nginx/nginx.lock >> --http-client-body-temp-path=/opt/local/var/run/nginx/client_body_temp >> --http-proxy-temp-path=/opt/local/var/run/nginx/proxy_temp >> --http-fastcgi-temp-path=/opt/local/var/run/nginx/fastcgi_temp >> --http-uwsgi-temp-path=/opt/local/var/run/nginx/uwsgi_temp >> --with-ipv6 >> >> $ nginx nginx: [emerg] unknown directive "mail" in >> /opt/local/etc/nginx/nginx.conf:6 >> >> The only mention of that error on the web brings up a discussion >> in Russian and the translation is "This question is no longer >> relevant" >> >> >> My questions: >> >> Why am I getting this unknow directive? > > To use nginx mail proxy module, you have to enable it during > compilation using the --with-mail configure argument. From the > "nginx -V" output you've provided it's clear that nginx binary you > are using doesn't have mail module compiled in. > > See here for more details: > http://nginx.org/en/docs/mail/ngx_mail_core_module.html > > [...] > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Wed Oct 24 16:26:52 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 24 Oct 2012 20:26:52 +0400 Subject: Configuring nginx as mail proxy In-Reply-To: <002AC47E-8A06-4502-8206-81719078DCBC@kenzanmedia.com> References: <4257CF30-F6E4-4A5E-A471-8752FC4BF439@kenzanmedia.com> <20121024145336.GD40452@mdounin.ru> <002AC47E-8A06-4502-8206-81719078DCBC@kenzanmedia.com> Message-ID: <20121024162652.GG40452@mdounin.ru> Hello! On Wed, Oct 24, 2012 at 11:49:43AM -0400, Laurent Bonetto wrote: > Thanks. That was indeed my first issue. I did sudo port edit > nginx, added --with-mail to the config options, reinstalled, and > now I am passed that error. > > I then got an error that no events was present so I just added > events { > worker_connections 1; > } This isn't going to work. With such a low number of worker connections nginx won't be able to start worker processes properly (unless you have no listening sockets configured). Try looking into error log, you should see something like: 2012/10/24 20:17:53 [alert] 58202#0: 1 worker_connections are not enough 2012/10/24 20:17:53 [notice] 58201#0: signal 20 (SIGCHLD) received 2012/10/24 20:17:53 [notice] 58201#0: worker process 58202 exited with code 2 2012/10/24 20:17:53 [alert] 58201#0: worker process 58202 exited with fatal code 2 and cannot be respawned You have to set worker_processes to something reasonable. Something like 512 as by default is usually a good choice for a small test server. > Now nginx is starting but I never see any hit to my mock service > despite it being specified in auth_http > auth_http http://localhost:8080/authorize; > No errors reported in the error log. > > When is nginx expected to hit the url specified in nginx? When > it gets launched? When an event occurs on the ports 110 and 2525 > with the protocols I specified? The auth service is requested when nginx needs to authenticate a client and to find out a backend server address to proxy the client to. 
--
Maxim Dounin
http://nginx.com/support.html

From mdounin at mdounin.ru  Wed Oct 24 16:36:12 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 24 Oct 2012 20:36:12 +0400
Subject: Configuring nginx as mail proxy
In-Reply-To: <20121024162652.GG40452@mdounin.ru>
References: <4257CF30-F6E4-4A5E-A471-8752FC4BF439@kenzanmedia.com> <20121024145336.GD40452@mdounin.ru> <002AC47E-8A06-4502-8206-81719078DCBC@kenzanmedia.com> <20121024162652.GG40452@mdounin.ru>
Message-ID: <20121024163612.GH40452@mdounin.ru>

Hello!

On Wed, Oct 24, 2012 at 08:26:52PM +0400, Maxim Dounin wrote:

[...]

> You have to set worker_processes to something reasonable.

Oops, I meant to write "worker_connections" here.

> Something like 512 as by default is usually a good choice for a
> small test server.

[...]

--
Maxim Dounin
http://nginx.com/support.html

From andrejaenisch at googlemail.com  Wed Oct 24 19:50:53 2012
From: andrejaenisch at googlemail.com (Andre Jaenisch)
Date: Wed, 24 Oct 2012 21:50:53 +0200
Subject: Could people using auth-request please send me their actual nginx.conf files please?
In-Reply-To:
References:
Message-ID:

2012/10/24 Jonathan Matthews :
> Do read that page I linked - it really can do this!

I can confirm that this link http://ls4.sourceforge.net/doc/howto/ddt.html
isn't dead. Maybe you can do a search for "Direct Data Transfer using
Nginx's X-Accel-Redirect" (header), "LS4 v1.0.0 documentation" (title) and
Sourceforge ;-)

Regards, Andre Jaenisch

From lbonetto at kenzanmedia.com  Wed Oct 24 21:38:29 2012
From: lbonetto at kenzanmedia.com (Laurent Bonetto)
Date: Wed, 24 Oct 2012 17:38:29 -0400
Subject: Configuring nginx as mail proxy
In-Reply-To: <20121024162652.GG40452@mdounin.ru>
References: <4257CF30-F6E4-4A5E-A471-8752FC4BF439@kenzanmedia.com> <20121024145336.GD40452@mdounin.ru> <002AC47E-8A06-4502-8206-81719078DCBC@kenzanmedia.com> <20121024162652.GG40452@mdounin.ru>
Message-ID:

Hi Maxim,

Thank you for sticking with me on this. I appreciate it very much.
I did understand you meant to change the number of worker_connections. The
only reason why I had lowered it was that I got a warning:

nginx: [warn] 1024 worker_connections exceed open file resource limit: 256

After pointing my mail client to localhost, I was finally able to see
nginx hit my mock for an authentication request, so there is definitely
some progress! Unfortunately, the proxying is still not working. More
precisely: nginx hits my authentication mock server with:

Host: localhost
Auth-User:
Auth-Pass:
Auth-Protocol: pop3
Auth-Login-Attempt: 1
Client-IP: 192.168.1.104

- If my mock responds with

< HTTP/1.1 200 OK
< Content-Type: text/html
< Auth-Status: Invalid login or password
< Auth-Wait: 3
< Content-Length: 0

then my mail client tells me that I have the incorrect username or
password, as expected.

- However, if my mock responds with:

< Auth-Status: OK
< Auth-Server:
< Auth-Port: 110

then the mail client responds with an internal server error. I added the
Auth-Pass (which should not be needed anyway) in the response and that
didn't help.

Since I didn't see any error in the error.log from nginx, I used Wireshark
to monitor traffic. I filtered on tcp.port eq 110 and compared side by
side the traffic coming from an account using a direct connection to my
mail server, and an account going through the nginx proxy. In the second
case (through the proxy), I do not see any traffic going out to my mail
server, suggesting it does not get the info it was expecting from my
authentication service.

- Can you think of something I am missing?
- How do I even go about debugging what's happening here apart from what I
am already doing (using Wireshark)?

Again, for info, here is my current config:

worker_processes 1;
error_log /var/log/nginx/error.log info;

events {
    worker_connections 1024;
}

mail {
    # I assume server_name comes from Auth-Server so I tried commenting out. Same behavior.
server_name ; auth_http localhost:8080/authorize; pop3_auth plain; pop3_capabilities "TOP" "USER" "UIDL"; smtp_auth login plain cram-md5; smtp_capabilities "SIZE 10485760" ENHANCEDSTATUSCODES 8BITMIME DSN; xclient off; server { listen 2525; protocol smtp; } server { listen 110; protocol pop3; proxy on; proxy_pass_error_message on; } } On Oct 24, 2012, at 12:26 PM, Maxim Dounin wrote: > Hello! > > On Wed, Oct 24, 2012 at 11:49:43AM -0400, Laurent Bonetto wrote: > >> Thanks. That was indeed my first issue. I did sudo port edit >> nginx, added --with-mail to the config options, reinstalled, and >> now I am passed that error. >> >> I then got an error that no events was present so I just added >> events { >> worker_connections 1; >> } > > This isn't going to work. With such a low number of worker > connections nginx won't be able to start worker processes properly > (unless you have no listening sockets configured). > > Try looking into error log, you should see something like: > > 2012/10/24 20:17:53 [alert] 58202#0: 1 worker_connections are not enough > 2012/10/24 20:17:53 [notice] 58201#0: signal 20 (SIGCHLD) received > 2012/10/24 20:17:53 [notice] 58201#0: worker process 58202 exited with code 2 > 2012/10/24 20:17:53 [alert] 58201#0: worker process 58202 exited with fatal code 2 and cannot be respawned > > You have to set worker_processes to something reasonable. > Something like 512 as by default is usually a good choice for a > small test server. > >> Now nginx is starting but I never see any hit to my mock service >> despite it being specified in auth_http >> auth_http http://localhost:8080/authorize; >> No errors reported in the error log. >> >> When is nginx expected to hit the url specified in nginx? When >> it gets launched? When an event occurs on the ports 110 and 2525 >> with the protocols I specified? > > The auth service is requested when nginx needs to authenticate a > client and to find out a backend server address to proxy the > client to. 
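For readers following along, the auth_http exchange Maxim describes can be mocked in a few lines. This is an illustrative sketch only: the header names (Auth-Status, Auth-Server, Auth-Port, Auth-Wait) come from this thread, but the function name and dict-based shape are assumptions, not nginx's code or any real auth service. Note that, as established later in the thread, Auth-Server must be an IP address, not a hostname.

```python
# Sketch of the headers an auth_http service returns to nginx's mail proxy.
# Header names follow this thread; the helper itself is hypothetical.

def build_auth_response(ok, server_ip=None, port=None, wait=3):
    """Return the response headers for one authentication attempt."""
    if ok:
        # Success: nginx needs the backend address. Per the thread,
        # Auth-Server must be an IP address literal, not a hostname.
        return {
            "Auth-Status": "OK",
            "Auth-Server": server_ip,
            "Auth-Port": str(port),
        }
    # Failure: Auth-Wait tells nginx how many seconds to delay.
    return {
        "Auth-Status": "Invalid login or password",
        "Auth-Wait": str(wait),
    }

if __name__ == "__main__":
    print(build_auth_response(True, "192.168.1.10", 110))
```

In a real mock these headers would be emitted on a 200 response (nginx ignores the body), which matches the successful invalid-login test Laurent describes above.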
> > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Wed Oct 24 22:13:07 2012 From: agentzh at gmail.com (agentzh) Date: Wed, 24 Oct 2012 15:13:07 -0700 Subject: Worker processes not shutting down In-Reply-To: <1845550347547ea591de17585b45e02a.NginxMailingListEnglish@forum.nginx.org> References: <1845550347547ea591de17585b45e02a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Tue, Jan 24, 2012 at 12:42 PM, runesoerensen wrote: > Following an upgrade from nginx 1.0.2 to 1.0.11 I've experienced a > problem with worker processes that are not shutting down. When looking > in the list of running processes the worker processes appears to be > shutting down but never actually exits. > Tools like pstack and strace are your good friends here :) You can try both of these tools to inspect your problematic worker processes to see what they're spinning onto. You can paste the results here if you still don't understand it. Best regards, -agentzh From mdounin at mdounin.ru Wed Oct 24 22:54:48 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Oct 2012 02:54:48 +0400 Subject: Configuring nginx as mail proxy In-Reply-To: References: <4257CF30-F6E4-4A5E-A471-8752FC4BF439@kenzanmedia.com> <20121024145336.GD40452@mdounin.ru> <002AC47E-8A06-4502-8206-81719078DCBC@kenzanmedia.com> <20121024162652.GG40452@mdounin.ru> Message-ID: <20121024225447.GL40452@mdounin.ru> Hello! On Wed, Oct 24, 2012 at 05:38:29PM -0400, Laurent Bonetto wrote: > I did understand you meant to change the number of > worker_connections. The only reason why I had lowered it was > that I got a warning: > nginx: [warn] 1024 worker_connections exceed open file resource limit: 256 This indicates that you have a very low open file resource limit set.
The easiest way to fix this is to use the worker_rlimit_nofile nginx configuration directive, see here: http://nginx.org/r/worker_rlimit_nofile Of course tuning your OS and/or using ulimit will do the trick as well. Using worker_connections set to something like 128 will help as well, but it's just too low for any real work and may only be used for testing. > After pointing my mail client to localhost, I was finally able > to see nginx hit my mock for an authentication request so there > is definitely some progress! Unfortunately, the proxying is > still not working. More precisely: > > nginx hits my authenticate mock server with: > Host: localhost > Auth-User: > Auth-Pass: > Auth-Protocol: pop3 > Auth-Login-Attempt: 1 > Client-IP: 192.168.1.104 > - If my mock responds with > < HTTP/1.1 200 OK > < Content-Type: text/html > < Auth-Status: Invalid login or password > < Auth-Wait: 3 > < Content-Length: 0 > Then my mail client tells me that I have the incorrect username or password, as expected. > > - However, if my mock responds with: > < Auth-Status: OK > < Auth-Server: > < Auth-Port: 110 > Then the mail client responds with an internal server error. > I added the Auth-Pass (which should not be needed anyway) in the > response and that didn't help. Do you return response without http response line, i.e. "HTTP/1.1 200 OK"? What's exactly in the Auth-Server header returned? Note that this must be an IP address, not a hostname. > Since I didn't see any error in the error.log from nginx I used > wireshark to monitor traffic. I filtered on tcp.port eq 110 and > compared side by side the traffic coming from an account using a > direct connection to my mail server, and an account going > through the nginx proxy. In the second case (through proxy), I > do not see any traffic going out to my mail server, suggesting > it does not get the info it was expecting from my authentication > service. > > - Can you think of something I am missing?
> - How do I even go about debugging what's happening here apart > from what I am already doing (using wireshark)? Appropriate errors should be logged by nginx into the error log. I would suggest there should be something like 2012/10/25 02:29:10 [error] 64793#0: *1 auth http server 127.0.0.1:8081 sent invalid server address:"foobar" while in http auth state ... this time. It's strange you don't see anything. Detailed debug information may be obtained using debug log, see http://nginx.org/en/docs/debugging_log.html. [...] > mail { > # I assume server_name comes from Auth-Server so I tried commenting out. Same behavior. > server_name ; Just a side note: server_name is needed mostly to present something to a client when it connects, see here: http://nginx.org/en/docs/mail/ngx_mail_core_module.html#server_name It can't be from Auth-Server as it's only available later, after the auth http service request. It should be safe to omit it though, machine hostname will be used by default. [...] -- Maxim Dounin http://nginx.com/support.html From lbonetto at kenzanmedia.com Thu Oct 25 01:33:29 2012 From: lbonetto at kenzanmedia.com (Laurent Bonetto) Date: Wed, 24 Oct 2012 21:33:29 -0400 Subject: Configuring nginx as mail proxy In-Reply-To: <20121024225447.GL40452@mdounin.ru> References: <4257CF30-F6E4-4A5E-A471-8752FC4BF439@kenzanmedia.com> <20121024145336.GD40452@mdounin.ru> <002AC47E-8A06-4502-8206-81719078DCBC@kenzanmedia.com> <20121024162652.GG40452@mdounin.ru> <20121024225447.GL40452@mdounin.ru> Message-ID: <31855C66-EE6A-4AF0-B20F-55630B5FFFFB@kenzanmedia.com> Maxim, Thanks so much. This was the key: > Note that this must be an IP address, not a hostname. My mail server was passing me a hostname, which nginx passed to the authenticate service. I had assumed it was fine to return a hostname. Returning the IP instead did the trick. I now have the proxy working inbound and outbound. Much appreciated also your clarifications regarding the low open file resource and server_name.
You were of a big help today. Laurent On Oct 24, 2012, at 6:54 PM, Maxim Dounin wrote: > Hello! > > On Wed, Oct 24, 2012 at 05:38:29PM -0400, Laurent Bonetto wrote: > >> I did understand you meant to change the number of >> worker_connections. The only reason why I had lowered it was >> that I got a warning: >> nginx: [warn] 1024 worker_connections exceed open file resource limit: 256 > > This indicates that you have a very low open file resource limit set. > The easiest way to fix this is to use the worker_rlimit_nofile nginx > configuration directive, see here: > > http://nginx.org/r/worker_rlimit_nofile > > Of course tuning your OS and/or using ulimit will do the trick as > well. Using worker_connections set to something like 128 will > help as well, but it's just too low for any real work and may only be > used for testing. > >> After pointing my mail client to localhost, I was finally able >> to see nginx hit my mock for an authentication request so there >> is definitely some progress! Unfortunately, the proxying is >> still not working. More precisely: >> >> nginx hits my authenticate mock server with: >> Host: localhost >> Auth-User: >> Auth-Pass: >> Auth-Protocol: pop3 >> Auth-Login-Attempt: 1 >> Client-IP: 192.168.1.104 >> - If my mock responds with >> < HTTP/1.1 200 OK >> < Content-Type: text/html >> < Auth-Status: Invalid login or password >> < Auth-Wait: 3 >> < Content-Length: 0 >> Then my mail client tells me that I have the incorrect username or password, as expected. >> >> - However, if my mock responds with: >> < Auth-Status: OK >> < Auth-Server: >> < Auth-Port: 110 >> Then the mail client responds with an internal server error. >> I added the Auth-Pass (which should not be needed anyway) in the >> response and that didn't help. > > Do you return response without http response line, i.e. > "HTTP/1.1 200 OK"? > > What's exactly in the Auth-Server header returned? Note that this > must be an IP address, not a hostname.
> >> Since I didn't see any error in the error.log from nginx I used >> wireshark to monitor traffic. I filtered on tcp.port eq 110 and >> compared side by side the traffic coming from an account using a >> direct connection to my mail server, and an account going >> through the nginx proxy. In the second case (through proxy), I >> do not see any traffic going out to my mail server, suggesting >> it does not get the info it was expecting from my authentication >> service. >> >> - Can you think of something I am missing? >> - How do I even go about debugging what's happening here apart >> from what I am already doing (using wireshark)? > > Appropriate errors should be logged by nginx into the error log. I > would suggest there should be something like > > 2012/10/25 02:29:10 [error] 64793#0: *1 auth http server 127.0.0.1:8081 sent invalid server address:"foobar" while in http auth state ... > > this time. It's strange you don't see anything. > > Detailed debug information may be obtained using debug log, see > http://nginx.org/en/docs/debugging_log.html. > > [...] > >> mail { >> # I assume server_name comes from Auth-Server so I tried commenting out. Same behavior. >> server_name ; > > Just a side note: server_name is needed mostly to present > something to a client when it connects, see here: > > http://nginx.org/en/docs/mail/ngx_mail_core_module.html#server_name > > It can't be from Auth-Server as it's only available later, after > the auth http service request. It should be safe to omit it though, > machine hostname will be used by default. > > [...]
> > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From Alex.Samad at yieldbroker.com Thu Oct 25 03:49:32 2012 From: Alex.Samad at yieldbroker.com (Alex Samad - Yieldbroker) Date: Thu, 25 Oct 2012 03:49:32 +0000 Subject: AJP Message-ID: Hi I was wondering what people use to connect to tomcat with AJP I found this site https://github.com/yaoweibin/nginx_ajp_module But it's not part of the 3rd party table on the wiki site. Alex From yaoweibin at gmail.com Thu Oct 25 05:39:26 2012 From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Thu, 25 Oct 2012 13:39:26 +0800 Subject: AJP In-Reply-To: References: Message-ID: Hi, It's a third party module. I haven't put it on the wiki. The wiki is edited by third-party module authors, not the nginx official team. If you have any problem with this module, you can report to me. 2012/10/25 Alex Samad - Yieldbroker > Hi > > I was wondering what people use to connect to tomcat with AJP > > I found this site > https://github.com/yaoweibin/nginx_ajp_module > > But it's not part of the 3rd party table on the wiki site. > > > Alex > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From Alex.Samad at yieldbroker.com Thu Oct 25 05:42:26 2012 From: Alex.Samad at yieldbroker.com (Alex Samad - Yieldbroker) Date: Thu, 25 Oct 2012 05:42:26 +0000 Subject: AJP In-Reply-To: References: Message-ID: Hi Thanks, just started to look at it. I was wondering if other users of nginx who use this module have had any experiences they would like to share.
I would like to get a feel for something before I look at putting it into production :) Is there any plan to do an official AJP module ? Alex From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Weibin Yao Sent: Thursday, 25 October 2012 4:39 PM To: nginx at nginx.org Subject: Re: AJP Hi, It's a third party module. I haven't put it on the wiki. The wiki is edited by third-party module authors, not the nginx official team. If you have any problem with this module, you can report to me. 2012/10/25 Alex Samad - Yieldbroker > Hi I was wondering what people use to connect to tomcat with AJP I found this site https://github.com/yaoweibin/nginx_ajp_module But it's not part of the 3rd party table on the wiki site. Alex _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Oct 25 06:08:31 2012 From: nginx-forum at nginx.us (rahul286) Date: Thu, 25 Oct 2012 02:08:31 -0400 Subject: .htaccess style support in existing nginx Message-ID: <5082b2f2b1bde63db07ac341b217aa4a.NginxMailingListEnglish@forum.nginx.org> First, we can use "watch/monitor" files in Linux for changes and execute some command based on it. Now, for a site let's put a ".nginxaccess" file to hold site-specific configuration (the file will be writable by PHP, etc., so the website can update it) Then we can put in the main site config "include $documentroot/.nginxaccess" And also start a daemon to watch "/var/www/path/to/site/.nginxaccess". Whenever any changes are detected in "/var/www/path/to/site/.nginxaccess" we can test the nginx config and reload it. I will be giving this a try to see what issues I may face... Please give your suggestions/opinion/alternative approach...
The goal is to allow WordPress-like web-apps to update a site-specific nginx config file AND have nginx auto-reload the new config. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232176,232176#msg-232176 From jerome at loyet.net Thu Oct 25 06:11:18 2012 From: jerome at loyet.net (=?ISO-8859-1?B?Suly9G1lIExveWV0?=) Date: Thu, 25 Oct 2012 08:11:18 +0200 Subject: AJP In-Reply-To: References: Message-ID: Hi, I don't know about nginx and AJP, but why don't you switch to HTTP between nginx and tomcat ? We were in the same situation and didn't want to take the risk to use a third party module, so we switched to HTTP and it's just perfect. my 2 cents ;) 2012/10/25 Alex Samad - Yieldbroker : > Hi > > > > Thanks, just started to look at it. I was wondering if other users of nginx > who use this module have had any experiences they would like to share. > > > > I would like to get a feel for something before I look at putting it into > production :) > > > > Is there any plan to do an official AJP module ? > > > > Alex > > > > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of > Weibin Yao > Sent: Thursday, 25 October 2012 4:39 PM > To: nginx at nginx.org > Subject: Re: AJP > > > > Hi, > > > > It's a third party module. I haven't put it on the wiki. The wiki are edited > by the third party module author, not the nginx official team. > > > > If you have any problem with this module, you can report to me. > > 2012/10/25 Alex Samad - Yieldbroker > > Hi > > I was wondering what people use to connect to tomcat with AJP > > I found this site > https://github.com/yaoweibin/nginx_ajp_module > > But it's not part of the 3rd party table on the wiki site.
> > > Alex > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > Weibin Yao > Developer @ Server Platform Team of Taobao > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From dem0n at ntn.tv Thu Oct 25 07:07:38 2012 From: dem0n at ntn.tv (Igor Grabin) Date: Thu, 25 Oct 2012 10:07:38 +0300 Subject: mail-proxy, ssl and line termination Message-ID: <20121025070738.GA3036@tomoe.ntn.tv> Good morning, maybe, I'm posting this to the wrong place. nginx-devel@ rejected this. any pointers appreciated :-) the setup... 1 nginx frontend, pop3 / pop3s / imap / imaps 2 backends, dovecot + ms-exchange. the problem: pop3s / imaps connections being forwarded to exchange (in other words, decapsulated from ssl) stall after login. otherwise, all types of connections work fine, i.e. nginx:pop3s -> dovecot:pop3, nginx:pop3 -> exchange:pop3 tested on 1.2.4 as bundled with ubuntu 10.10, and 1.3.7, compiled by hand. I did a bit of tracing and have an assumption. nginx doesn't put an extra '\r' in a first statement of ssl-decapsulated session. here's a sample (being captured between nginx and a backend). this may upset redmond-based products ;-). $ hexdump -c inflow.imap.good ( nginx:imap -> exchange:imap) 0000000 1 L O G I N { 9 } \r \n c a c 0000010 o d e m o n { 7 } \r \n X X X X 0000020 X X X \r \n 2 s e l e c t i n 0000030 b o x \r \n 3 l o g o u t \r \n $ hexdump -c inflow.imap.bad (nginx:imaps -> exchange:imap) 0000000 1 L O G I N { 9 } \r \n c a c 0000010 o d e m o n { 7 } \r \n X X X X 0000020 X X X \r \n 2 s e l e c t i n 0000030 b o x \n same goes for pop3 in the same direction - missing '\r' after 'list' command. unfortunately, my C skills suck, so I'm unable to propose a patch. 
full config-file below === user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; } mail { auth_http 127.0.0.1:80/mailauth.pl; auth_http_header X-NGX-Auth-Key "censored :-)"; proxy on; ssl_certificate_key /etc/nginx/ssl/cert.pem; ssl_certificate /etc/nginx/ssl/cert.pem; ssl_session_timeout 5m; server { protocol pop3; ssl on; listen 1.2.3.4:995; listen 192.168.1.1:995; } server { listen 1.2.3.4:993; listen 192.168.1.1:993; protocol imap; ssl on; } imap_auth plain login; pop3_auth plain; } tia for any pointers, -- Igor "CacoDem0n" Grabin, http://violent.death.kiev.ua/ From mdounin at mdounin.ru Thu Oct 25 08:04:54 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Oct 2012 12:04:54 +0400 Subject: mail-proxy, ssl and line termination In-Reply-To: <20121025070738.GA3036@tomoe.ntn.tv> References: <20121025070738.GA3036@tomoe.ntn.tv> Message-ID: <20121025080454.GM40452@mdounin.ru> Hello! On Thu, Oct 25, 2012 at 10:07:38AM +0300, Igor Grabin wrote: > Good morning, > > maybe, I'm posting this to the wrong place. nginx-devel@ rejected > this. > > any pointers appreciated :-) > > the setup... > 1 nginx frontend, pop3 / pop3s / imap / imaps > 2 backends, dovecot + ms-exchange. > > the problem: > pop3s / imaps connections being forwarded to exchange (in other > words, decapsulated from ssl) stall after login. > otherwise, all types of connections work fine, i.e. > nginx:pop3s -> dovecot:pop3, nginx:pop3 -> exchange:pop3 > > tested on 1.2.4 as bundled with ubuntu 10.10, and 1.3.7, compiled by > hand. > > I did a bit of tracing and have an assumption. 
nginx doesn't put an > extra '\r' in a first statement of ssl-decapsulated session. > here's a sample (being captured between nginx and a backend). this may > upset redmond-based products ;-). > > $ hexdump -c inflow.imap.good ( nginx:imap -> exchange:imap) > 0000000 1 L O G I N { 9 } \r \n c a c > 0000010 o d e m o n { 7 } \r \n X X X X > 0000020 X X X \r \n 2 s e l e c t i n > 0000030 b o x \r \n 3 l o g o u t \r \n > > $ hexdump -c inflow.imap.bad (nginx:imaps -> exchange:imap) > 0000000 1 L O G I N { 9 } \r \n c a c > 0000010 o d e m o n { 7 } \r \n X X X X > 0000020 X X X \r \n 2 s e l e c t i n > 0000030 b o x \n > > same goes for pop3 in the same direction - missing '\r' after 'list' > command. The "2 select ..." is not something nginx sent by itself, it's client data it forwarded. You may take a look at a client you use instead. -- Maxim Dounin http://nginx.com/support.html From benimaur at gmail.com Thu Oct 25 08:18:25 2012 From: benimaur at gmail.com (Benimaur Gao) Date: Thu, 25 Oct 2012 16:18:25 +0800 Subject: I want to ask something about limit_zone Message-ID: I've learned that if some IP exceeds the limit, a 503 http status code will be returned. I'm trying to find a way to change the default 503 value, e.g. use some other code, such as 403 to replace it, can I ? and how to? thanks! 
From dem0n at ntn.tv Thu Oct 25 08:24:58 2012 From: dem0n at ntn.tv (Igor Grabin) Date: Thu, 25 Oct 2012 11:24:58 +0300 Subject: mail-proxy, ssl and line termination In-Reply-To: <20121025080454.GM40452@mdounin.ru> References: <20121025070738.GA3036@tomoe.ntn.tv> <20121025080454.GM40452@mdounin.ru> Message-ID: <20121025082458.GT19736@tomoe.ntn.tv> On Thu, Oct 25, 2012 at 12:04:54PM +0400, Maxim Dounin wrote: > > $ hexdump -c inflow.imap.good ( nginx:imap -> exchange:imap) > > 0000000 1 L O G I N { 9 } \r \n c a c > > 0000010 o d e m o n { 7 } \r \n X X X X > > 0000020 X X X \r \n 2 s e l e c t i n > > 0000030 b o x \r \n 3 l o g o u t \r \n > > $ hexdump -c inflow.imap.bad (nginx:imaps -> exchange:imap) > > 0000000 1 L O G I N { 9 } \r \n c a c > > 0000010 o d e m o n { 7 } \r \n X X X X > > 0000020 X X X \r \n 2 s e l e c t i n > > 0000030 b o x \n > > same goes for pop3 in the same direction - missing '\r' after 'list' > > command. > The "2 select ..." is not something nginx sent by itself, it's > client data it forwarded. You may take a look at a client you use > instead. both testcases produced by me, using plain linux telnet and plain linux openssl s_client. I'd kinda expect no '\r' in that case, but it's there in the beginning in both cases. tia, -- Igor "CacoDem0n" Grabin, http://violent.death.kiev.ua/ From sb at waeme.net Thu Oct 25 08:38:42 2012 From: sb at waeme.net (Sergey Budnevitch) Date: Thu, 25 Oct 2012 12:38:42 +0400 Subject: I want to ask something about limit_zone In-Reply-To: References: Message-ID: On 25 Oct2012, at 12:18 , Benimaur Gao wrote: > I've learned that if some IP exceeds the limit, a 503 http status code > will be returned. > I'm trying to find a way to change the default 503 value, e.g. use > some other code, such as 403 to replace it, can I ? and how to? > thanks! 
error_page 503 =403 /error_page.html; http://nginx.org/r/error_page From kasperg at benjamin.dk Thu Oct 25 08:44:50 2012 From: kasperg at benjamin.dk (Kasper Grubbe) Date: Thu, 25 Oct 2012 10:44:50 +0200 Subject: NGINX looks for /logs/error.log as default? In-Reply-To: <201210221737.23202.ne@vbart.ru> References: <201210221737.23202.ne@vbart.ru> Message-ID: 2012/10/22 Valentin V. Bartenev > On Monday 22 October 2012 16:01:17 Kasper Grubbe wrote: > > Hello, I have a problem with NGINX version 1.2.4 it will constantly look > > for /logs/error.log, even though I have defined another error.log in my > > configuration. > [...] > > But NGINX don't want to start because ./logs/error.log doesn't exist. > This > > is my error: > > > > [ben2 (master)]=> nginx -p . -c config/nginx/development.conf > > nginx: [alert] could not open error log file: open() "./logs/error.log" > > failed (2: No such file or directory) > > > > Am I doing it wrong? > > Nginx must have access to error log before it will parse configuration. > See also the "--error-log-path=" configure parameter. > http://www.nginx.org/en/docs/install.html > > wbr, Valentin V. Bartenev > > -- > http://nginx.com/support.html > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Oh, that makes sense. Then I am not able to make a smart setup that I wanted. Thanks for your email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kasperg at benjamin.dk Thu Oct 25 08:48:28 2012 From: kasperg at benjamin.dk (Kasper Grubbe) Date: Thu, 25 Oct 2012 10:48:28 +0200 Subject: SSI stubs/blocks Message-ID: Hello, I am using nginx/1.2.3 on my server, and tried to use this in my SSI includes: However, this doesn't seem to work. I just get a blank page, and nothing is included. Is my NGINX too old, or do I need to do it differently?
I am following the example mentioned here: http://nginx.org/en/docs/http/ngx_http_ssi_module.html Greetings, Kasper -------------- next part -------------- An HTML attachment was scrubbed... URL: From Alex.Samad at yieldbroker.com Thu Oct 25 09:03:32 2012 From: Alex.Samad at yieldbroker.com (Alex Samad - Yieldbroker) Date: Thu, 25 Oct 2012 09:03:32 +0000 Subject: AJP In-Reply-To: References: Message-ID: Hi So we have a setup using AJP; it's meant to be faster ( I guess that depends on the implementation :) ) How did you deal with load balancing and stickiness? Are you using long-lived connections? Alex > -----Original Message----- > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On > Behalf Of Jérôme Loyet > Sent: Thursday, 25 October 2012 5:11 PM > To: nginx at nginx.org > Subject: Re: AJP > > Hi, > > and don't know about nginx and AJP, but why don't you switch to HTTP > between nginx and tomcat ? > > > We were in the same situation and didn't want to take the risk to use a third > party module, so we switched to HTTP and it's just perfect. > > my 2 cents ;) > > 2012/10/25 Alex Samad - Yieldbroker : > > Hi > > > > > > > > Thanks, just started to look at it. I was wondering if other users of > > nginx who use this module have had any experiences they would like to > share. > > > > > > > > I would like to get a feel for something before I look at putting it > > into production :) > > > > > > > > Is there any plan to do an official AJP module ? > > > > > > > > Alex > > > > > > > > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On > > Behalf Of Weibin Yao > > Sent: Thursday, 25 October 2012 4:39 PM > > To: nginx at nginx.org > > Subject: Re: AJP > > > > > > > > Hi, > > > > > > > > It's a third party module. I haven't put it on the wiki. The wiki are > > edited by the third party module author, not the nginx official team. > > > > > > > > If you have any problem with this module, you can report to me.
> > > > 2012/10/25 Alex Samad - Yieldbroker > > > > Hi > > > > I was wondering what people use to connect to tomcat with AJP > > > > I found this site > > https://github.com/yaoweibin/nginx_ajp_module > > > > But it's not part of the 3rd party table on the wiki site. > > > > > > Alex > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > > > -- > > Weibin Yao > > Developer @ Server Platform Team of Taobao > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ne at vbart.ru Thu Oct 25 09:05:23 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Thu, 25 Oct 2012 13:05:23 +0400 Subject: SSI stubs/blocks In-Reply-To: References: Message-ID: <201210251305.24048.ne@vbart.ru> On Thursday 25 October 2012 12:48:28 Kasper Grubbe wrote: > Hello, > > I am using nginx/1.2.3 on my server, and tried to use this in my SSI > includes: > > > > > However, this doesn't seem to work. I just get a blank page, and nothing is > included. > [...] #{url} and #{unique} don't look like valid values. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From kasperg at benjamin.dk Thu Oct 25 09:08:45 2012 From: kasperg at benjamin.dk (Kasper Grubbe) Date: Thu, 25 Oct 2012 11:08:45 +0200 Subject: SSI stubs/blocks In-Reply-To: <201210251305.24048.ne@vbart.ru> References: <201210251305.24048.ne@vbart.ru> Message-ID: 2012/10/25 Valentin V. Bartenev > On Thursday 25 October 2012 12:48:28 Kasper Grubbe wrote: > > Hello, > > > > I am using nginx/1.2.3 on my server, and tried to use this in my SSI > > includes: > > > > > > > > > > However, this doesn't seem to work. 
I just get a blank page, and nothing > is > > included. > > [...] > > #{url} and #{unique} don't look like valid values. > > wbr, Valentin V. Bartenev > > -- > http://nginx.com/support.html > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Sorry, that is from my Ruby code, the rendered output would be this: And after running it through NGINX, it would just not render anything. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ne at vbart.ru Thu Oct 25 09:16:10 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Thu, 25 Oct 2012 13:16:10 +0400 Subject: SSI stubs/blocks In-Reply-To: References: <201210251305.24048.ne@vbart.ru> Message-ID: <201210251316.11033.ne@vbart.ru> On Thursday 25 October 2012 13:08:45 Kasper Grubbe wrote: > [...] > > Sorry, that is from my Ruby code, the rendered output would be this: > > > > > And after running it through NGINX, it would just not render anything. Are you sure, that /poll/sidebar_widget_fragment?poll= returns something with 200 status code? wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From kasperg at benjamin.dk Thu Oct 25 09:48:56 2012 From: kasperg at benjamin.dk (Kasper Grubbe) Date: Thu, 25 Oct 2012 11:48:56 +0200 Subject: SSI stubs/blocks In-Reply-To: <201210251316.11033.ne@vbart.ru> References: <201210251305.24048.ne@vbart.ru> <201210251316.11033.ne@vbart.ru> Message-ID: 2012/10/25 Valentin V. Bartenev > On Thursday 25 October 2012 13:08:45 Kasper Grubbe wrote: > > [...] > > > > Sorry, that is from my Ruby code, the rendered output would be this: > > > > > > > > > > And after running it through NGINX, it would just not render anything. > > Are you sure, that /poll/sidebar_widget_fragment?poll= returns something > with > 200 status code? 
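For readers of this thread: Kasper's actual include markup was scrubbed by the list software, so for reference here is the stub/block pattern from the SSI module documentation he links. The include URL below is the documentation's illustrative example, not his actual one; the location serving the page must also have "ssi on;" set.

```html
<!--# block name="one" -->&nbsp;<!--# endblock -->
<!--# include virtual="/remote/body.php?argument=value" stub="one" -->
```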
Yep, http://woman.dk/poll/sidebar_widget_fragment?poll= -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrejaenisch at googlemail.com Thu Oct 25 09:49:56 2012 From: andrejaenisch at googlemail.com (Andre Jaenisch) Date: Thu, 25 Oct 2012 11:49:56 +0200 Subject: .htaccess style support in existing nginx In-Reply-To: <5082b2f2b1bde63db07ac341b217aa4a.NginxMailingListEnglish@forum.nginx.org> References: <5082b2f2b1bde63db07ac341b217aa4a.NginxMailingListEnglish@forum.nginx.org> Message-ID: 2012/10/25 rahul286 : > Now, for a site lets put a ".nginxaccess" file to hold site specific configuration (file will be writable by PHp, etc so web-site can update it) > [...] > Whenever any changes are detected in "/var/www/path/to/site/.nginxaccess" we can test nginx config and reload it. Be careful concerning safety. I'm worried about malicious code entered this way. PHP provides functions like htmlspecialchars [1] to avoid this. Just a word of warning ;-) Regards, Andre [1] http://www.php.net/manual/en/function.htmlspecialchars.php From nginx-forum at nginx.us Thu Oct 25 10:02:26 2012 From: nginx-forum at nginx.us (rahul286) Date: Thu, 25 Oct 2012 06:02:26 -0400 Subject: .htaccess style support in existing nginx In-Reply-To: References: Message-ID: Thanks for the warning. :-) I guess a serious threat will be present when malicious code injects Perl scripts via the nginx config...
Then a request to some path will trigger the perl script (which can be a backdoor or a destructive program) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232176,232197#msg-232197 From mdounin at mdounin.ru Thu Oct 25 10:25:49 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Oct 2012 14:25:49 +0400 Subject: mail-proxy, ssl and line termination In-Reply-To: <20121025082458.GT19736@tomoe.ntn.tv> References: <20121025070738.GA3036@tomoe.ntn.tv> <20121025080454.GM40452@mdounin.ru> <20121025082458.GT19736@tomoe.ntn.tv> Message-ID: <20121025102549.GN40452@mdounin.ru> Hello! On Thu, Oct 25, 2012 at 11:24:58AM +0300, Igor Grabin wrote: > On Thu, Oct 25, 2012 at 12:04:54PM +0400, Maxim Dounin wrote: > > > $ hexdump -c inflow.imap.good ( nginx:imap -> exchange:imap) > > > 0000000 1 L O G I N { 9 } \r \n c a c > > > 0000010 o d e m o n { 7 } \r \n X X X X > > > 0000020 X X X \r \n 2 s e l e c t i n > > > 0000030 b o x \r \n 3 l o g o u t \r \n > > > $ hexdump -c inflow.imap.bad (nginx:imaps -> exchange:imap) > > > 0000000 1 L O G I N { 9 } \r \n c a c > > > 0000010 o d e m o n { 7 } \r \n X X X X > > > 0000020 X X X \r \n 2 s e l e c t i n > > > 0000030 b o x \n > > > same goes for pop3 in the same direction - missing '\r' after 'list' > > > command. > > The "2 select ..." is not something nginx sent by itself, it's > > client data it forwarded. You may take a look at a client you use > > instead. > > both testcases produced by me, using plain linux telnet and plain > linux openssl s_client. So the difference observed more or less comes from telnet vs. openssl s_client. Try "openssl s_client -crlf" instead, quote from man s_client: -crlf this option translated a line feed from the terminal into CR+LF as required by some servers. > I'd kinda expect no '\r' in that case, but it's there in the > beginning in both cases. The CRLF is correctly sent in the "LOGIN" command as it's sent by nginx itself.
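A quick way to see the byte-level difference locally (a sketch; the IMAP host below is a placeholder, and the commands are an editorial illustration rather than part of the thread):

```shell
# A bare LF vs. an explicit CRLF -- the one-byte difference that
# distinguishes the good and bad hexdumps above:
printf '2 select inbox\n'   | wc -c   # 15 bytes, ends in bare \n
printf '2 select inbox\r\n' | wc -c   # 16 bytes, ends in \r\n as IMAP expects

# openssl s_client sends whatever the terminal gives it (a bare LF);
# the -crlf flag makes it translate LF into CRLF, matching telnet's default:
#   openssl s_client -crlf -connect mail.example.com:993
```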
In case of telnet you don't get bare LF as it does LF -> CRLF conversion by default. I would recommend nc (aka netcat) if you need raw tcp client without any conversions. -- Maxim Dounin http://nginx.com/support.html From contact at jpluscplusm.com Thu Oct 25 11:56:20 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 25 Oct 2012 12:56:20 +0100 Subject: .htaccess style support in existing nginx In-Reply-To: <5082b2f2b1bde63db07ac341b217aa4a.NginxMailingListEnglish@forum.nginx.org> References: <5082b2f2b1bde63db07ac341b217aa4a.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 25 October 2012 07:08, rahul286 wrote: > Please give your suggestions/opinion/alternative approach... > > Goal is to allow wordpress like web-apps to update a site-specific nginx > config file AND have nginx auto-reloaded new config. In a multi-tenant system, which is what you appear to be aiming for, this is a bad idea. A very bad idea. Here are a few ways, as a customer, I could fuck you up: In my /var/www/path/to/site/.nginxaccess: START ------------------------------------------------------------------------- } # close the "location /{" we assume we're included from within } # close the "server{" we must be included from within server { # get access to some files we shouldn't be allowed to see listen 80; server_name invalid.name1; root /etc/; } server { # destroy someone else's site listen 80; server_name invalid.name2; root /var/www/path/to/someone/elses/site; location / { dav_methods PUT DELETE MKCOL COPY MOVE; client_body_temp_path /var/www/path/to/someone/elses/site; create_full_put_path on; dav_access group:rwx all:rwx; } } server { # DoS someone else's site listen 80; server_name another.customer.on.this.server; rewrite ^ http://google.com; } server { # re-enter our normal "server{" block, so nginx reloads OK listen 80; server_name invalid.name3; location { END ------------------------------------------------------------------------- Don't do this. 
It's a bad idea. The quality of badly-written nginx howtos, blogs, etc out there on the web is poor enough without this flawed pattern gaining any traction or exposure. Cheers, Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Thu Oct 25 12:39:21 2012 From: nginx-forum at nginx.us (crirus) Date: Thu, 25 Oct 2012 08:39:21 -0400 Subject: Patch limit_rate_max: for your review Message-ID: Hello I have a new patch for Nginx. I use this server to stream videos. I needed a way to burst the rate with limit_rate_after but controlled from config, not at full throttle. Eg.: send 4Mb with 500KBps and the remaining with 90KBps. This is useful when sending mp4 files that have 2 to 6MB of index data at the beginning of the file, and playback only starts after this is received. This way I can start the video playback faster, not after 20 seconds or so. However, if you have 500 new users per second you can choke the bandwidth, hence the need to control the burst speed. I created a new config directive limit_rate_max that applies to the initial burst speed. Another change I made is to put limit_rate_after in server variables. I need this to control the burst quantity on the fly based on a url param. if ($arg_burst != "") { set $limit_rate_after $arg_burst; } The index for each file is different and with this I control how much burst we need, useful when users seek a lot and the index is recalculated on seek for each file... If anyone can review this and find it useful, let me know; it works for my needs. Thanks to Andrei (Latzease) for his contribution to this... he made it happen. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232178,232178#msg-232178 From contact at jpluscplusm.com Thu Oct 25 12:48:15 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 25 Oct 2012 13:48:15 +0100 Subject: Patch limit_rate_max: for your review In-Reply-To: References: Message-ID: I don't see a patch anywhere for review!
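For context on what the patch extends: with stock nginx (1.2.x at the time of this thread), the first limit_rate_after bytes are sent at full speed and only the remainder is throttled. A minimal sketch — the location and values are illustrative, not taken from the patch:

```nginx
location /videos/ {
    # send the first 4m (roughly the mp4 index) at full speed...
    limit_rate_after 4m;
    # ...then throttle the rest of the transfer, per connection
    limit_rate 90k;
}
```

The proposed limit_rate_max directive would additionally cap the speed of that initial 4m burst, which stock nginx does not do.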
From nginx-forum at nginx.us Thu Oct 25 12:51:52 2012 From: nginx-forum at nginx.us (rahul286) Date: Thu, 25 Oct 2012 08:51:52 -0400 Subject: .htaccess style support in existing nginx In-Reply-To: References: Message-ID: <7ac3f3d0c3228c02a0d7d74a6b03e30f.NginxMailingListEnglish@forum.nginx.org> Thanks for a really scary example! :D By the way, I was NOT planning this for a shared environment. In fact it is for a wordpress blog-network which uses our plugin http://wordpress.org/extend/plugins/nginx-helper/ Whenever users create new sites, the plugin adds the new site's id to the map.conf file (a simple key-value table of domain names and numeric ids for efficient file handling) I was thinking of running a linux inotify-based script to auto-reload nginx whenever changes are detected in the map.conf file. After your example, I can add some sed commands to my script so any chars like '{' and '}' will be stripped out! In the end, you cannot guarantee security for anyone, no matter how safe the code you develop is. :D Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232176,232203#msg-232203 From nginx-forum at nginx.us Thu Oct 25 12:57:57 2012 From: nginx-forum at nginx.us (rahul286) Date: Thu, 25 Oct 2012 08:57:57 -0400 Subject: .htaccess style support in existing nginx In-Reply-To: References: Message-ID: By the way, more details about the problem we are trying to solve are here - https://github.com/rtCamp/nginx-helper/issues/9 Another approach is to add the PHP user to the sudoers list and allow it to execute only one command "www-data ALL=NOPASSWD: nginx -t && service nginx reload" Again, another PHP script adding unwanted code cannot be ruled out!
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232176,232204#msg-232204 From nginx-forum at nginx.us Thu Oct 25 14:17:08 2012 From: nginx-forum at nginx.us (mgenov) Date: Thu, 25 Oct 2012 10:17:08 -0400 Subject: using nginx as proxy from http to https Message-ID: Hello, I'm trying to use nginx as a proxy server that should dispatch HTTP requests to a remote HTTPS server. Here are some configuration details which I'm using: location /test_server { proxy_pass https://testserver.testdomainserver.com:9090/; proxy_set_header X-Real-IP $remote_addr; } The problem is that when I send a request to the nginx server I get a 505 error from the remote server. If I execute the request directly against the remote server it works as expected. Some additional details that I can provide are that the remote server is using basic authorization and only accepts requests through HTTPS. I also tried to bind the server on port 443 with ssl on and certificates added, but when I try to use it, I get the following error: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed The configuration steps which I tried are explained in: http://wiki.nginx.org/HttpSslModule Any idea how I can make it work? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232212,232212#msg-232212 From nginx-forum at nginx.us Thu Oct 25 15:07:36 2012 From: nginx-forum at nginx.us (crirus) Date: Thu, 25 Oct 2012 11:07:36 -0400 Subject: Patch limit_rate_max: for your review In-Reply-To: References: Message-ID: I don't know how to attach it here; should I just paste the whole diff? Regards Cristian Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232178,232207#msg-232207 From farseas at gmail.com Thu Oct 25 15:07:19 2012 From: farseas at gmail.com (Bob S.) Date: Thu, 25 Oct 2012 11:07:19 -0400 Subject: using http basic authentication with custom form Message-ID: Hello, Is there a way to use http basic authentication with nginx with a custom form?
I heard that it may be possible to use auth_request to do this but am unsure how. My main goal is to avoid the standard form that pops up with http basic authentication and supply my own form. That way I still have the caching benefits of auth_basic with a custom form. Any suggestions? -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Thu Oct 25 15:13:31 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 25 Oct 2012 16:13:31 +0100 Subject: Patch limit_rate_max: for your review In-Reply-To: References: Message-ID: On 25 October 2012 16:07, crirus wrote: > I don't know how to attach it here, should I just paste the whole diff. Perhaps join the mailing list (which the forum is cross-posted to, and is where I'm posting from) and attach your patch to a mail? http://mailman.nginx.org/mailman/listinfo/nginx Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From francis at daoine.org Thu Oct 25 16:06:48 2012 From: francis at daoine.org (Francis Daly) Date: Thu, 25 Oct 2012 17:06:48 +0100 Subject: using nginx as proxy from http to https In-Reply-To: References: Message-ID: <20121025160648.GO17159@craic.sysops.org> On Thu, Oct 25, 2012 at 10:17:08AM -0400, mgenov wrote: Hi there, > location /test_server { > proxy_pass > https://testserver.testdomainserver.com:9090/; > proxy_set_header X-Real-IP $remote_addr; > } > > The problem is that when I send request to the nginx server I get 505 error > from remote server. If I execute the request directly to the remote server > it works as expected. HTTP 505 means HTTP Version Not Supported. What version do you use when you execute the request directly? What version does nginx use? Do you see a different response from curl -i -k https://testserver.testdomainserver.com:9090/ and curl -i -k -0 https://testserver.testdomainserver.com:9090/ ?
Good luck, f -- Francis Daly francis at daoine.org From aweber at comcast.net Thu Oct 25 16:17:38 2012 From: aweber at comcast.net (AJ Weber) Date: Thu, 25 Oct 2012 12:17:38 -0400 Subject: 404 redirect or disconnect... Message-ID: <50896622.8060902@comcast.net> I have a custom site where valid users will always login and have a valid session (and a cookie or headers to represent it). I would like to do the following, but not sure how to start...can someone just point me in the right direction? IF user has a valid session (I need to check if the header/cookie is in the http request), and a 404 is appropriate, re-direct to a specific location (or a "real" 404 page). ELSE (the user does not have a valid session for my website/application), return 444. Really, I need to understand how to do the if/then and detect the cookie or header to indicate that the user is "valid". Thanks in advance, AJ From nginx-forum at nginx.us Thu Oct 25 16:20:16 2012 From: nginx-forum at nginx.us (mgenov) Date: Thu, 25 Oct 2012 12:20:16 -0400 Subject: using nginx as proxy from http to https In-Reply-To: <20121025160648.GO17159@craic.sysops.org> References: <20121025160648.GO17159@craic.sysops.org> Message-ID: <0c14c3806f4d7d595212561ad2573319.NginxMailingListEnglish@forum.nginx.org> Actually it returns error 500. Here is the log information from access.log [25/Oct/2012:18:44:10 +0300] "POST /testurl/testservice HTTP/1.1" 500 75 "-" "-" Here is my version. nginx -v nginx version: nginx/0.7.67 In a few minutes I'll try with curl and will post the result. > HTTP 505 means HTTP Version Not Supported. > > What version do you use when you execute the request directly? > > What version does nginx use? > > Do you see a different response from > > curl -i -k https://testserver.testdomainserver.com:9090/ > > and > > curl -i -k -0 https://testserver.testdomainserver.com:9090/ > > ? 
> > Good luck, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232212,232224#msg-232224 From nginx-forum at nginx.us Thu Oct 25 16:31:11 2012 From: nginx-forum at nginx.us (rmalayter) Date: Thu, 25 Oct 2012 12:31:11 -0400 Subject: AJP In-Reply-To: References: Message-ID: <77e2ae8a19e1e6284b6cfb13922b522f.NginxMailingListEnglish@forum.nginx.org> J?r?me Loyet Wrote: ------------------------------------------------------- > We were in the same situation and didn't want to take the risk to use > a third party module, so we switched to HTTP and it's just perfect. > > my 2 cents ;) We did the same here. Now that nginx does keep-alives to the back-ends, there's really no advantage to using AJP between the web server and Tomcat. In fact, our internal load testing showed nginx->HTTP->Tomcat performing marginally better than Apache->AJP->Tomcat with Tomcat serving a simple test JSP. I suspect this is because the HTTP code in Tomcat has been optimized far more than the AJP code. However, in the real world your bottlenecks will always be DB, disk IO, and memory/garbage collection caused by your app at the Tomcat layer. Profile your app, and find out where the bottlenecks actually are. Address the worst ones first. I suspect communication overhead between your web server and Tomcat isn't even visible no matter what you use. Proper caching, efficient client-side layout and script, DB query tuning, etc. all matter far more. Spend whatever optimization time you have on that which will make a real difference for the end-user. 
-- RPM Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232172,232225#msg-232225 From jerome at loyet.net Thu Oct 25 16:34:30 2012 From: jerome at loyet.net (=?ISO-8859-1?B?Suly9G1lIExveWV0?=) Date: Thu, 25 Oct 2012 18:34:30 +0200 Subject: AJP In-Reply-To: <77e2ae8a19e1e6284b6cfb13922b522f.NginxMailingListEnglish@forum.nginx.org> References: <77e2ae8a19e1e6284b6cfb13922b522f.NginxMailingListEnglish@forum.nginx.org> Message-ID: 2012/10/25 rmalayter : > J?r?me Loyet Wrote: > ------------------------------------------------------- >> We were in the same situation and didn't want to take the risk to use >> a third party module, so we switched to HTTP and it's just perfect. >> >> my 2 cents ;) > > We did the same here. Now that nginx does keep-alives to the back-ends, > there's really no advantage to using AJP between the web server and Tomcat. > > > In fact, our internal load testing showed nginx->HTTP->Tomcat performing > marginally better than Apache->AJP->Tomcat with Tomcat serving a simple test > JSP. Same results here. In fact the way nginx works (async) compensates for the loss due to switching to HTTP (and even more than expected). >I suspect this is because the HTTP code in Tomcat has been optimized > far more than the AJP code. However, in the real world your bottlenecks will > always be DB, disk IO, and memory/garbage collection caused by your app at > the Tomcat layer. > > Profile your app, and find out where the bottlenecks actually are. Address > the worst ones first. I suspect communication overhead between your web > server and Tomcat isn't even visible no matter what you use. Proper caching, > efficient client-side layout and script, DB query tuning, etc. all matter > far more. Spend whatever optimization time you have on that which will make > a real difference for the end-user. in all cases, there's a 99% chance that you'll gain more by tweaking your app than trying to optimize the protocol between nginx and tomcat !
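The upstream keep-alive setup the posters above rely on needs two proxy settings alongside the keepalive directive; a minimal sketch (upstream name and addresses are illustrative, not from the thread):

```nginx
upstream tomcat {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    keepalive 32;   # idle upstream connections to cache, per worker process
}
server {
    listen 80;
    location / {
        proxy_http_version 1.1;         # keep-alive requires HTTP/1.1 upstream
        proxy_set_header Connection ""; # clear the default "Connection: close"
        proxy_pass http://tomcat;
    }
}
```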
> > -- > RPM > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232172,232225#msg-232225 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From jossandavis at gmail.com Thu Oct 25 18:52:25 2012 From: jossandavis at gmail.com (Jossan Davis) Date: Fri, 26 Oct 2012 02:52:25 +0800 Subject: is that possible to queue request one by one Message-ID: Hi the nginx module limit_req can only limit request frequency by a url param key. it delays a request via "ngx_add_timer(r->connection->write, delay)". my requirement is a little bit different: i need the next request to be processed immediately after the previous one has finished. i have tried to use "ngx_http_cleanup_add", but it seems that the ngx_http_request_t is not in shared mem, so i can't use "ngx_add_timer(r->connection->write, 0)" to notify the next request. thx -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrejaenisch at googlemail.com Thu Oct 25 19:03:53 2012 From: andrejaenisch at googlemail.com (Andre Jaenisch) Date: Thu, 25 Oct 2012 21:03:53 +0200 Subject: using nginx as proxy from http to https In-Reply-To: <0c14c3806f4d7d595212561ad2573319.NginxMailingListEnglish@forum.nginx.org> References: <20121025160648.GO17159@craic.sysops.org> <0c14c3806f4d7d595212561ad2573319.NginxMailingListEnglish@forum.nginx.org> Message-ID: 2012/10/25 mgenov : > Here is my version. > > nginx -v > nginx version: nginx/0.7.67 Your version looks outdated ... > 2012-09-25 nginx-1.2.4 stable version has been released. (See: http://nginx.org/) Not sure whether this is the problem ... Well, this is the oldest announcement I've found: > 2009-12-15 nginx-0.8.30 development version has been released. (See: http://nginx.org/2009.html) So your version seems to be REALLY old!
Regards, Andre From Alex.Samad at yieldbroker.com Thu Oct 25 19:13:00 2012 From: Alex.Samad at yieldbroker.com (Alex Samad - Yieldbroker) Date: Thu, 25 Oct 2012 19:13:00 +0000 Subject: AJP In-Reply-To: References: <77e2ae8a19e1e6284b6cfb13922b522f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi > -----Original Message----- > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On > Behalf Of J?r?me Loyet > Sent: Friday, 26 October 2012 3:35 AM > To: nginx at nginx.org > Subject: Re: AJP > > 2012/10/25 rmalayter : > > J?r?me Loyet Wrote: > > ------------------------------------------------------- > >> We were in the same situation and didn't want to take the risk to use > >> a third party module, so we switched to HTTP and it's just perfect. > >> > >> my 2 cents ;) > > > > We did the same here. Now that nginx does keep-alives to the > > back-ends, there's really no advantage to using AJP between the web > server and Tomcat. > > > > > > In fact, our internal load testing showed nginx->HTTP->Tomcat > > performing marginally better than Apache->AJP->Tomcat with Tomcat > > serving a simple test JSP. > > Same results here. In fact the way nginx works (async) compensate the loss > due to switching to HTTP (and even more than expected). Okay I have enough reason to try out http. Still my question though how did you deal with stickiness? 
[snip] Alex From jerome at loyet.net Thu Oct 25 19:23:00 2012 From: jerome at loyet.net (=?ISO-8859-1?B?Suly9G1lIExveWV0?=) Date: Thu, 25 Oct 2012 21:23:00 +0200 Subject: AJP In-Reply-To: References: <77e2ae8a19e1e6284b6cfb13922b522f.NginxMailingListEnglish@forum.nginx.org> Message-ID: 2012/10/25 Alex Samad - Yieldbroker : > Hi > >> -----Original Message----- >> From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On >> Behalf Of J?r?me Loyet >> Sent: Friday, 26 October 2012 3:35 AM >> To: nginx at nginx.org >> Subject: Re: AJP >> >> 2012/10/25 rmalayter : >> > J?r?me Loyet Wrote: >> > ------------------------------------------------------- >> >> We were in the same situation and didn't want to take the risk to use >> >> a third party module, so we switched to HTTP and it's just perfect. >> >> >> >> my 2 cents ;) >> > >> > We did the same here. Now that nginx does keep-alives to the >> > back-ends, there's really no advantage to using AJP between the web >> server and Tomcat. >> > >> > >> > In fact, our internal load testing showed nginx->HTTP->Tomcat >> > performing marginally better than Apache->AJP->Tomcat with Tomcat >> > serving a simple test JSP. >> >> Same results here. In fact the way nginx works (async) compensate the loss >> due to switching to HTTP (and even more than expected). > > Okay I have enough reason to try out http. > > Still my question though how did you deal with stickiness? http://code.google.com/p/nginx-sticky-module/ simple and efficient ! 
works like a charm without any configuration on the tomcat side and no need to set up a complex session sharing system on the tomcat side > > [snip] > > > Alex > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From Alex.Samad at yieldbroker.com Thu Oct 25 20:13:13 2012 From: Alex.Samad at yieldbroker.com (Alex Samad - Yieldbroker) Date: Thu, 25 Oct 2012 20:13:13 +0000 Subject: AJP In-Reply-To: References: <77e2ae8a19e1e6284b6cfb13922b522f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi [snip] > >> Behalf Of J?r?me Loyet > >> Sent: Friday, 26 October 2012 3:35 AM > >> To: nginx at nginx.org > >> Subject: Re: AJP [snip] > > > > Still my question though how did you deal with stickiness? > > http://code.google.com/p/nginx-sticky-module/ So, just so I understand: what is the difference between this 3rd party add-on and the ajp 3rd party add-on? I understand one is on the wiki and one is not. But does that mean the sticky module is supported as well ? Does sticky respect the jsession cookie? So if a server fails, will the connection go to a new server and then eventually bounce back to the restored old server ? > > simple and efficient ! works like a charms without any configuration on the > tomcat side and no need to setup complex session sharing system on the > tomcat side That's good like KISS [snip] From jerome at loyet.net Thu Oct 25 20:23:37 2012 From: jerome at loyet.net (=?ISO-8859-1?B?Suly9G1lIExveWV0?=) Date: Thu, 25 Oct 2012 22:23:37 +0200 Subject: AJP In-Reply-To: References: <77e2ae8a19e1e6284b6cfb13922b522f.NginxMailingListEnglish@forum.nginx.org> Message-ID: 2012/10/25 Alex Samad - Yieldbroker : > Hi > > [snip] > >> >> Behalf Of J?r?me Loyet >> >> Sent: Friday, 26 October 2012 3:35 AM >> >> To: nginx at nginx.org >> >> Subject: Re: AJP > [snip] > >> > >> > Still my question though how did you deal with stickiness?
>> >> http://code.google.com/p/nginx-sticky-module/ > > So just for my understand what is the difference between this 3rd party add on and the ajp 3rd party addon. > > I understand one is on the wiki and one is not. But does that mean the sticky module is supported as well ? > > Does stick respect jsession cookie. So if server fails will the connection go to new server and then eventually bounce back to the restored old server ? the sticky module does not deal with the jsession cookie. It creates its own cookie which indicates which backend has been used, and nginx will always use the backend from the cookie. If no cookie is sent (the first time), nginx will use standard round robin to choose one server and then send back a cookie indicating which backend server has been used. If the backend from the cookie is down, classic round robin takes place and a new cookie is sent with the new backend to use. see the main page of http://code.google.com/p/nginx-sticky-module/ with the explanation and a small schematic, which is far better than any explanation > > >> >> simple and efficient ! works like a charms without any configuration on the >> tomcat side and no need to setup complex session sharing system on the >> tomcat side > > That's good like KISS > > [snip] > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Thu Oct 25 20:34:14 2012 From: nginx-forum at nginx.us (florian.iragne) Date: Thu, 25 Oct 2012 16:34:14 -0400 Subject: Strange problem with cache Message-ID: <0d8e620754a3583c2cd54d5afeabdf99.NginxMailingListEnglish@forum.nginx.org> Hi, i've come across a strange problem of caching. My site is run by nginx 1.2.1, on two different servers. The configs are exactly the same (i use puppet to mirror configs). The contents are exactly the same; everything that i can think of is exactly identical on both servers.
In the config, i set "expires max" for the /static location. The two nginx instances are behind a haproxy instance for loadbalancing and failover. The setup is in active-active mode. When i load a page with no browser cache, each item in the static location is served by nginx with the correct response regarding the max-age and expire directives. Then i reload the page: some of these resources are served with a 304 response (that is the expected behaviour), some with a 200 code (hence, a fresh copy of the file). If i reload the page again and again, the resources served with a 304 answer vary. Now, i've put the server into active-backup mode, and only one instance of nginx is serving the files. In this setup, the resources are always served with a 304 answer. Did i miss something? I assume that my server setup is quite common, and i don't believe that the behaviour i observe is "normal". Any idea? thanks Florian Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232235,232235#msg-232235 From aweber at comcast.net Fri Oct 26 03:03:57 2012 From: aweber at comcast.net (AJ Weber) Date: Thu, 25 Oct 2012 23:03:57 -0400 Subject: why two different responses? Message-ID: <5089FD9D.2070308@comcast.net> One of these requests is sent by my program. One is sent by curl (it's easy to tell which). They are identical as far as I can tell. However, the curl request returns the expected result (204) from my servlet. The program keeps getting this 302 returned with some generic "302 Found" html from nginx. My eyes are blurry from staring at it. Can anyone tell me why my test with curl works, but the program sending the same request does not???
Thanks, AJ xx.yy.zz.64 - - [26/Oct/2012:02:49:38 +0000] "GET /myCoordinator/Coordinator?Action=servers HTTP/1.1" upstream_cache_status - 302 171 "-" "-" "-" tt.ss.kk.134 - - [26/Oct/2012:02:52:43 +0000] "GET /myCoordinator/Coordinator?Action=servers HTTP/1.1" upstream_cache_status MISS 204 0 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2" "-" -------------- next part -------------- An HTML attachment was scrubbed... URL: From Niall.Gallagher at yieldbroker.com Fri Oct 26 04:36:41 2012 From: Niall.Gallagher at yieldbroker.com (Niall Gallagher - Yieldbroker) Date: Fri, 26 Oct 2012 04:36:41 +0000 Subject: Nginx Performance as a Reverse Proxy Message-ID: <91F89A63AE64CF4692B740C52E446768EA2F2F@DC1INTADCW8201.yieldbroker.com> Hi, We have been doing some testing with Nginx as a reverse proxy. We have been comparing it to a number of solutions which it easily beats, like IIS and Apache with mod_proxy etc. However, as an experiment we have been comparing it to an adapted NIO server written in Java. This seems to be outperforming Nginx in the reverse proxy role by a factor of 3. We are convinced our configuration is wrong. Both run on the same box (at different times) with the same sysctl settings (see below). We also saw some spikes, up to 3 seconds per request at times, and some at 10 seconds, over a 1 million request test with 1000 concurrent clients. We are using a fairly straightforward configuration for Nginx. Since we have two processors on the box we tried worker_processes of 4 with worker_connections of 6000, then we tried worker_processes of 40 with worker_connections of 5000. No change. We need to be able to support responsive Ajax requests with strategies like HTTP streaming and long polling in our setup. Any ideas what we can do to boost our throughput and latency?
[root at dc1dmzngx02 apachebench]# uname -a Linux dc1dmzngx02 2.6.32-220.13.1.el6.x86_64 #1 SMP Tue Apr 17 23:56:34 BST 2012 x86_64 x86_64 x86_64 GNU/Linux [root at dc1dmzngx02 apachebench]# cat /etc/sysctl.conf # Kernel sysctl configuration file for Red Hat Linux # # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and # sysctl.conf(5) for more details. # High perf config net.core.somaxconn = 12048 net.core.rmem_default = 262144 net.core.wmem_default = 262144 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 4096 16777216 net.ipv4.tcp_wmem = 4096 4096 16777216 net.ipv4.tcp_mem = 786432 2097152 3145728 net.ipv4.tcp_max_syn_backlog = 16384 net.core.netdev_max_backlog = 20000 net.ipv4.tcp_fin_timeout = 15 net.ipv4.tcp_max_syn_backlog = 16384 net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_tw_recycle = 1 net.ipv4.tcp_max_orphans = 131072 # Controls IP packet forwarding net.ipv4.ip_forward = 0 # Controls source route verification net.ipv4.conf.default.rp_filter = 1 # Do not accept source routing net.ipv4.conf.default.accept_source_route = 0 # Controls the System Request debugging functionality of the kernel kernel.sysrq = 0 # Controls whether core dumps will append the PID to the core filename. # Useful for debugging multi-threaded applications. kernel.core_uses_pid = 1 # Controls the use of TCP syncookies net.ipv4.tcp_syncookies = 1 # Disable netfilter on bridges. net.bridge.bridge-nf-call-ip6tables = 0 net.bridge.bridge-nf-call-iptables = 0 net.bridge.bridge-nf-call-arptables = 0 # Controls the maximum size of a message, in bytes kernel.msgmnb = 65536 # Controls the default maxmimum size of a mesage queue kernel.msgmax = 65536 # Controls the maximum shared segment size, in bytes kernel.shmmax = 68719476736 # Controls the maximum number of shared memory segments, in pages kernel.shmall = 4294967296 Thanks, Niall -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dem0n at ntn.tv Fri Oct 26 04:55:33 2012 From: dem0n at ntn.tv (Igor Grabin) Date: Fri, 26 Oct 2012 07:55:33 +0300 Subject: mail-proxy, ssl and line termination In-Reply-To: <20121025102549.GN40452@mdounin.ru> References: <20121025070738.GA3036@tomoe.ntn.tv> <20121025080454.GM40452@mdounin.ru> <20121025082458.GT19736@tomoe.ntn.tv> <20121025102549.GN40452@mdounin.ru> Message-ID: <20121026045533.GX19736@tomoe.ntn.tv> On Thu, Oct 25, 2012 at 02:25:49PM +0400, Maxim Dounin wrote: > > > The "2 select ..." is not something nginx sent by itself, it's > > > client data it forwarded. You may take a look at a client you use > > > instead. > > both testcases produced by me, using plain linux telnet and plain > > linux openssl s_client. > So the difference observed more or less comes from telnet vs. > openssl s_client. Try "openssl s_client -crlf" instead, quote > from man s_client: [...skip...] > The CRLF is correctly sent in the "LOGIN" command as it's sent by > nginx itself. > In case of telnet you don't get bare LF as it does LF -> CRLF > conversion by default. I would recommend nc (aka netcat) if you > need raw tcp client without any conversions. my lame. thanks. :-) -- Igor "CacoDem0n" Grabin, http://violent.death.kiev.ua/ From crirus at gmail.com Fri Oct 26 07:13:28 2012 From: crirus at gmail.com (Cristian Rusu) Date: Fri, 26 Oct 2012 10:13:28 +0300 Subject: Patch limit_rate_max: for your review Message-ID: Hello I have a new patch for Nginx. I use this server to stream videos. I needed a way to burst the rate with limit_rate_after but controlled from config, not at full throttle. Eg.: send 4Mb with 500KBps and the remaining with 90KBps. This is useful when sending mp4 that have a 2 to 6MB of index data at the beginning of the file and playback only starts after this is received. This way I can start the video playback faster not after 20 seconds or so. 
However, if you have 500 new users per seconds you can choke the bandwidth so the need to control the burst speed. I created a new config directive limit_rate_max that applies to initial burst speed. Another change I made is to put the limit_rate_after in server variables. I need this to control the burst quantity on the fly based on url param. if ($arg_burst){ set $limit_rate_after $arg_burst; } The index for each file is different and with this I control how much burst we need, useful when users seek a lot and index is recalculated on seek for each file... If anyone can review this and find it useful, let me know, it works for my needs. I thank to Andrei (Latzease) for his contribution to this... he made it happen. --------------------------------------------------------------- Cristian Rusu Web Developement & Electronic Publishing ====== Crilance.blogspot.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: limit_rate_max.patch Type: application/octet-stream Size: 5544 bytes Desc: not available URL: From nginx-forum at nginx.us Fri Oct 26 07:55:27 2012 From: nginx-forum at nginx.us (abcomp01) Date: Fri, 26 Oct 2012 03:55:27 -0400 Subject: nginx rewite base file name help! Message-ID: http://pic.test.com/view.php?filename=947284035_234601603334998.jpg to http://pic.test.com/947284035_234601603334998 base file name thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232243,232243#msg-232243 From francis at daoine.org Fri Oct 26 08:12:36 2012 From: francis at daoine.org (Francis Daly) Date: Fri, 26 Oct 2012 09:12:36 +0100 Subject: why two different responses? In-Reply-To: <5089FD9D.2070308@comcast.net> References: <5089FD9D.2070308@comcast.net> Message-ID: <20121026081236.GQ17159@craic.sysops.org> On Thu, Oct 25, 2012 at 11:03:57PM -0400, AJ Weber wrote: Hi there, > One of these requests is sent by my program. 
One is sent by curl (it's > easy to tell which). They are identical as far as I can tell. What is the full request sent in each case? "curl -v" to show what it sends. Or "tcpdump" on the nginx server to see what it receives. > However, the curl request returns the expected result (204) from my > servlet. The program keeps getting this 302 returned with some generic > "302 Found" html from nginx. If you see different headers sent from your code and curl, perhaps try adding or removing one at a time to see if there is one that matters. > My eyes are blurry from staring at it. Can anyone tell me why my test > with curl works, but the program sending the same request does not??? Purely on the lines you have shown: does your code or config do anything based on $http_user_agent or the equivalent? f -- Francis Daly francis at daoine.org From andrejaenisch at googlemail.com Fri Oct 26 08:20:00 2012 From: andrejaenisch at googlemail.com (Andre Jaenisch) Date: Fri, 26 Oct 2012 10:20:00 +0200 Subject: .htaccess style support in existing nginx In-Reply-To: References: Message-ID: 2012/10/25 rahul286 : > Another approach is to add PHP user to sudoers list and allow them to execute only one command "www-data ALL=NOPASSWD: nginx -t && service nginx reload" Another suggesting to save your idea: Fetch pen & paper and list commands users would need to change the things you had in mind. Then think of (or ask someone) wether it would be possible to do anything harmful with just using these code. If no -> Allow users to execute those command. If yes -> Is your idea realisable in another way? However, whitelisting (allow just certain commands) is always better than blacklisting (forbid certain commands). Maybe you could just save some settings using JSON or so. But as shown above by Jonathan Matthews be careful of harmful code. Regards, Andr? 
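A concrete sketch of that whitelist approach (the path and script name are hypothetical): a root-owned wrapper stored outside any web-writable directory, with the web user granted sudo on that one command only.

```shell
#!/bin/sh
# /usr/local/sbin/nginx-safe-reload -- root-owned, mode 0755, stored
# outside every web-writable path so web scripts cannot rewrite it.
#
# Matching sudoers entry (added via visudo), granting www-data this
# one command and nothing else:
#   www-data ALL=(root) NOPASSWD: /usr/local/sbin/nginx-safe-reload

# Reload only if the new configuration parses cleanly.
nginx -t && exec service nginx reload
```

Note that Jonathan's caveat further down the thread still applies: anyone who can write a config snippet that fails `nginx -t` can block reloads for everyone.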
From igor at sysoev.ru Fri Oct 26 08:36:54 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 26 Oct 2012 12:36:54 +0400 Subject: why two different responses? In-Reply-To: <5089FD9D.2070308@comcast.net> References: <5089FD9D.2070308@comcast.net> Message-ID: On Oct 26, 2012, at 7:03 , AJ Weber wrote: > One of these requests is sent by my program. One is sent by curl (it's easy to tell which). They are identical as far as I can tell. However, the curl request returns the expected result (204) from my servlet. The program keeps getting this 302 returned with some generic "302 Found" html from nginx. > > My eyes are blurry from staring at it. Can anyone tell me why my test with curl works, but the program sending the same request does not??? > > Thanks, > AJ > > xx.yy.zz.64 - - [26/Oct/2012:02:49:38 +0000] "GET /myCoordinator/Coordinator?Action=servers HTTP/1.1" upstream_cache_status - 302 171 "-" "-" "-" > > tt.ss.kk.134 - - [26/Oct/2012:02:52:43 +0000] "GET /myCoordinator/Coordinator?Action=servers HTTP/1.1" upstream_cache_status MISS 204 0 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2" "-" Most probably your program does not pass "Host" header. -- Igor Sysoev http://nginx.com/support.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Oct 26 08:38:47 2012 From: nginx-forum at nginx.us (rahul286) Date: Fri, 26 Oct 2012 04:38:47 -0400 Subject: .htaccess style support in existing nginx In-Reply-To: References: Message-ID: <791b6c3a2d5e830c9f71ccd1f62ea38a.NginxMailingListEnglish@forum.nginx.org> Thanks for suggestion Andr?. :-) Yes, we will take whitelisting approach only. 
Rather than giving direct command like "nginx -t && service nginx reload" in sudoers list, we will create a small shell script, put it outside web-writable path (so php/web-scripts cannot alter it) www-data user will have sudo privilege on our script only Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232176,232247#msg-232247 From contact at jpluscplusm.com Fri Oct 26 10:00:47 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 26 Oct 2012 11:00:47 +0100 Subject: .htaccess style support in existing nginx In-Reply-To: <791b6c3a2d5e830c9f71ccd1f62ea38a.NginxMailingListEnglish@forum.nginx.org> References: <791b6c3a2d5e830c9f71ccd1f62ea38a.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 26 October 2012 09:38, rahul286 wrote: > Yes, we will take whitelisting approach only. > > Rather than giving direct command like "nginx -t && service nginx reload" > in sudoers list, we will create a small shell script, put it outside > web-writable path (so php/web-scripts cannot alter it) > > www-data user will have sudo privilege on our script only Don't forget the simplest DoS of all - just create a config file snippet that causes "nginx -t" to fail. Then no-one can reload. (It's still a bad idea, sorry!) Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From contact at jpluscplusm.com Fri Oct 26 10:08:13 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 26 Oct 2012 11:08:13 +0100 Subject: nginx rewite base file name help! 
In-Reply-To: References: Message-ID: On 26 October 2012 08:55, abcomp01 wrote: > http://pic.test.com/view.php?filename=947284035_234601603334998.jpg > > to > > http://pic.test.com/947284035_234601603334998 > > base file name This is very doable, and you should be able to work it out yourself after reading these: http://wiki.nginx.org/HttpRewriteModule#rewrite and http://wiki.nginx.org/HttpCoreModule#Variables Cheers, Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From mdounin at mdounin.ru Fri Oct 26 10:41:43 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 26 Oct 2012 14:41:43 +0400 Subject: Strange problem with cache In-Reply-To: <0d8e620754a3583c2cd54d5afeabdf99.NginxMailingListEnglish@forum.nginx.org> References: <0d8e620754a3583c2cd54d5afeabdf99.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121026104143.GZ40452@mdounin.ru> Hello! On Thu, Oct 25, 2012 at 04:34:14PM -0400, florian.iragne wrote: > Hi, > > i've come accross a strange problem of caching. My site is run by nginx > 1.2.1, on two different servers. The config are exactly the same (i use > puppet to mirror configs). The contents are exactly the same, everything > that i can think of is exactly identical on both servers. > > In the config, i set "expires max" for the /static location. > > The two nginx are behind a haproxy instance for loadbalancing and failover. > The setup is in active-active mode. > > When i load a page with no browser cache, each item in the static location > is served by nginx with the correct response regarding the max-age and > expire directives. Then i reload the page : some of these ressources are > served with a 304 response (that is the expected behaviour), some with a 200 > code (hence, a fresh copy of the file). If i reload the page again and > again, the ressources served with 304 answer are varying. > > Now, i've put the server into active-backup mode, and only one instance of > nginx is serving the files. 
In this setup, the ressources are always served > with a 304 answer. > > Did i miss something? I assume that my servers setup is quite common, and i > d'ont believe that the behaviour i observe is "normal". > > Any idea? Symptoms suggest last modification times are different for the same static file on different servers. You have to sync files between servers with modification time preserved for conditional requests to work correctly ("rsync -a" is usually a good starting point). -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Fri Oct 26 11:43:48 2012 From: nginx-forum at nginx.us (rahul286) Date: Fri, 26 Oct 2012 07:43:48 -0400 Subject: .htaccess style support in existing nginx In-Reply-To: References: Message-ID: <9b14c81fbbc752ae3affe8d151e9363e.NginxMailingListEnglish@forum.nginx.org> @Jonathan Thanks again for your inputs. This thread helped me learn few more things. :-) Apart from that, I think, an application like web-based control-panel - built for Nginx specially, will anyway need take care of all this. Technically, if a script can write to a file and have new config updated automatically, it will have to handle security issue as well. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232176,232253#msg-232253 From nginx-forum at nginx.us Fri Oct 26 11:57:08 2012 From: nginx-forum at nginx.us (florian.iragne) Date: Fri, 26 Oct 2012 07:57:08 -0400 Subject: Strange problem with cache In-Reply-To: <20121026104143.GZ40452@mdounin.ru> References: <20121026104143.GZ40452@mdounin.ru> Message-ID: <3ebd9a68bdcf802fca5ebcff437f4023.NginxMailingListEnglish@forum.nginx.org> Ok, now i understand. The browser sends a request withe the if-not-modified-since corresponding to one server to the other server, and this one reply with 200 since the mtime is in the future of the if-not-modified-since is there any simple solution to handle this? 
I don't want to rsync assets between the servers. I use Django in my project and its static/collectstatic process, so I force the mtime of each static file collected after each collectstatic operation. I give the command line in case someone falls into the same trap: ./manage.py collectstatic --noinput && find static -type f -exec touch -t $(date +%m%d%H)00 '{}' \; Now the cache is working as intended and my servers are in an active-active setup. Thanks for your answer, which led me to the solution. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232235,232254#msg-232254 From sb at waeme.net Fri Oct 26 12:04:27 2012 From: sb at waeme.net (Sergey Budnevitch) Date: Fri, 26 Oct 2012 16:04:27 +0400 Subject: Ngix Performace as a Reverse Proxy In-Reply-To: <91F89A63AE64CF4692B740C52E446768EA2F2F@DC1INTADCW8201.yieldbroker.com> References: <91F89A63AE64CF4692B740C52E446768EA2F2F@DC1INTADCW8201.yieldbroker.com> Message-ID: On 26 Oct 2012, at 08:36, Niall Gallagher - Yieldbroker wrote: > Hi, > > We have been doing some testing with Nginx as a reverse proxy. We have been comparing it to a number of solutions which it easily beats, like IIS and Apache with mod_proxy etc. However, as an experiment we have been comparing it to an adapted NIO server written in Java. This seems to be outperforming Nginx in the reverse proxy role by a factor of 3 times. We are convinced our configuration is wrong. Both run on the same box (at different times) with the same sysctl settings (see below). We also saw some spikes, up to 3 seconds per request at times, and some at 10 over a 1 million request test of 1000 concurrent clients. > > We are using a fairly straightforward configuration for Nginx. Since we have two processors on the box we tried worker_processes of 4 with worker_connections of 6000, then we tried worker_processes of 40 with worker_connections of 5000. No change.
We need to be able to support responsive Ajax requests with strategies like HTTP streaming and long polling in our setup. > > Any ideas what we can do to boost our throughput and latency? Buffers. nginx should not write/read anything to/from disk while proxing for maximum performance. Check your average request and response sizes and tune buffers sizes accordingly (look at proxy_buffers, proxy_buffer_size, proxy_max_temp_file_size documentation). From max at mxcrypt.com Fri Oct 26 12:49:26 2012 From: max at mxcrypt.com (Maxim Khitrov) Date: Fri, 26 Oct 2012 08:49:26 -0400 Subject: error_page not redirecting as expected Message-ID: Hello, I'm sure it's something simple, but I don't know where the problem is in the following configuration: server { listen 80; server_name example.com; root /path/to/webroot; error_page 404 /status/404; location / { try_files $uri @dynamic; } location @dynamic { include fastcgi_params; fastcgi_pass unix:/tmp/myapp.sock; fastcgi_intercept_errors on; } } I have some static files (css, js, etc.) under webroot that are served by nginx. Everything else should be sent to @dynamic. If the initial @dynamic request is answered with 404, I want nginx to make a second request to @dynamic with the path set to /status/404. What happens instead is that if the initial request receives a 404 response from the FastCGI responder, nginx serves the default 404 Not Found page (not even the content returned by @dynamic) instead of rewriting the request to /status/404. Opening /status/404 directly works just fine. Where did I make a mistake? There is nothing in the error log to give me any hints. 
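The buffer tuning Sergey suggests earlier in the thread might look like the following sketch (upstream name and sizes are illustrative and should be matched to the observed request/response sizes):

```nginx
location / {
    proxy_pass http://backend;

    # keep typical responses entirely in memory so nginx never
    # writes to disk while proxying
    proxy_buffer_size        16k;     # for the response headers
    proxy_buffers            32 32k;  # for the response body
    proxy_max_temp_file_size 0;       # never spill to a temp file
}
```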
- Max From nginx-forum at nginx.us Fri Oct 26 13:19:48 2012 From: nginx-forum at nginx.us (ghadamyari) Date: Fri, 26 Oct 2012 09:19:48 -0400 Subject: Replace content of body reverse proxy In-Reply-To: <600498.56498.qm@web63208.mail.re1.yahoo.com> References: <600498.56498.qm@web63208.mail.re1.yahoo.com> Message-ID: <0039334844a37dbfe4ff1e54d26e8a2c.NginxMailingListEnglish@forum.nginx.org> I need this feature too. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,161250,232257#msg-232257 From amoiz.shine at gmail.com Fri Oct 26 14:13:53 2012 From: amoiz.shine at gmail.com (Sharl Jimh Tsin) Date: Fri, 26 Oct 2012 22:13:53 +0800 Subject: Is nginx in Pre-built format ready for Ubuntu 12.10? Message-ID: <508A9AA1.2070501@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi,all: follow the steps in download page of nginx site.and apt-get gives those error messages below: sharljimhtsin at sharl-laptop:~$ sudo apt-get update | grep nginx ?? http://nginx.org quantal InRelease ?? http://nginx.org quantal Release.gpg ?? http://nginx.org quantal Release ?? http://nginx.org quantal/nginx Sources ?? http://nginx.org quantal/nginx i386 Packages ?? http://nginx.org quantal/nginx Translation-zh_CN ?? http://nginx.org quantal/nginx Translation-zh ?? http://nginx.org quantal/nginx Translation-en W: ???? http://nginx.org/packages/ubuntu/dists/quantal/nginx/source/Sources 404 Not Found W: ???? http://nginx.org/packages/ubuntu/dists/quantal/nginx/binary-i386/Packages 404 Not Found E: Some index files failed to download. They have been ignored, or old ones used instead. as that,i can not find and install it via nginx official source repository. - -- Best regards, Sharl.Jimh.Tsin (From China *Obviously Taiwan INCLUDED*) Using Gmail? Please read this important notice: http://www.fsf.org/campaigns/jstrap/gmail?10073. 
-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://www.enigmail.net/ iQEcBAEBAgAGBQJQipqVAAoJEEYmNy4jisTjmlcIAIKHEfFTzO/RgQfQellQGYYv F5IApQ1Ny3xaBe7l4lBr99NYWB8fwQbnvJqV4iG13sCs5QFUNEYWQHjPcodgZEdt R3SoHj8thS1ByG6JEze/G0X1ASnv6KWZYXterKiv4LN9FffCS0g37sGOP+NlbKPK wzrtqaxkJZ/Pd603OAIbKtwSv4F65UD88LmmxrKYEhM/pmvE2vhx+W++d10dD0VT eoWHL247zKZntGlkBe6dayadlMzeFDOm474ovH6oVEpUGRbZ/ZFOUSMv8JmeBczV GQcPKoQz0u3DXMJMqYaYsygGv1F9GQU9Lm+8ovZdPrJFR2UTSebVwI9HJKLgYV0= =XnEG -----END PGP SIGNATURE----- From sb at waeme.net Fri Oct 26 14:29:49 2012 From: sb at waeme.net (Sergey Budnevitch) Date: Fri, 26 Oct 2012 18:29:49 +0400 Subject: Is nginx in Pre-built format ready for Ubuntu 12.10? In-Reply-To: <508A9AA1.2070501@gmail.com> References: <508A9AA1.2070501@gmail.com> Message-ID: <3B174CA4-4146-41B4-BB08-1794B1AADF54@waeme.net> On 26 Oct2012, at 18:13 , Sharl Jimh Tsin wrote: > > as that,i can not find and install it via nginx official source > repository. There is no packages for ubuntu 12.10 right now. We will build packages for quantal on next stable nginx release. It is planned at 07.11. From aweber at comcast.net Fri Oct 26 20:58:46 2012 From: aweber at comcast.net (AJ Weber) Date: Fri, 26 Oct 2012 16:58:46 -0400 Subject: Chunked Transfer Message-ID: <508AF986.2040402@comcast.net> Asking, because the documentation looks like it's a little outdated on this... Is Chunked Transfer still not enabled OOTB? This would seem like almost a mandatory feature of HTTP 1.1 to implement, and the only reference I could find is to separate source code/module/patch that I would have to download and recompile all of nginx for? Has it been implemented or added to the default, pre-compiled packages and I just can't see it in the nginx -V output? I need the ability to upload large content, and this would appear to be the proper way to do that. 
I'm using CentOS 6.x if anyone knows of an up-to-date version of the nginx binaries that includes what is or was "ngx_chunkin". Thanks, AJ From duanemulder at rattyshack.ca Sat Oct 27 05:27:27 2012 From: duanemulder at rattyshack.ca (Duane) Date: Sat, 27 Oct 2012 01:27:27 -0400 Subject: Http 1.1 chucking support In-Reply-To: References: <20121016173547.4DFF95EC072@homiemail-a75.g.dreamhost.com> Message-ID: <508B70BF.9050709@rattyshack.ca> Hello trm: Tried enabling proxy_http_version 1.1 which returned the same error. I did try installing the chunking_plugin as well. In both cased the error I am getting still is 2012/10/17 16:09:55 [info] 15748#0: *37 client sent "Transfer-Encoding: chunked" header while reading client request headers, client: 10.10.10.205, server: hostname.zzz.com, request: "OPTIONS /svn/repos/path-65x/developer/trunk HTTP/1.1", host: "10.10.2.194" Duane On 12-10-16 1:48 PM, trm asn wrote: > > > On Tue, Oct 16, 2012 at 11:05 PM, rattyshack > > wrote: > > Hello list. > So we are running nginx 1.2.4 and when connecting with an svn > client using the service library nginx is returning the following > error. > XML parsing failed: (411 Length Required) > From what I can tell I need to compile in the httpchunkinmodule. > However the module was last tested on the website with version 1.1.5. > Does 1.2.4 have support for httpchunkinmodule? And how would I > use it. > > Duane > > Sent from my BlackBerry? PlayBook^(TM) > www.blackberry.com > > > > use the below function after proxy_pass. > > proxy_http_version 1.1; > > i think it'll solve the issue. > > --trm > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From wangsamp at gmail.com Sat Oct 27 12:35:51 2012 From: wangsamp at gmail.com (Oleksandr V. 
Typlyns'kyi) Date: Sat, 27 Oct 2012 15:35:51 +0300 (EEST) Subject: Strange problem with cache In-Reply-To: <3ebd9a68bdcf802fca5ebcff437f4023.NginxMailingListEnglish@forum.nginx.org> References: <20121026104143.GZ40452@mdounin.ru> <3ebd9a68bdcf802fca5ebcff437f4023.NginxMailingListEnglish@forum.nginx.org> Message-ID: Yesterday Oct 26, 2012 at 07:57 florian.iragne wrote: > Ok, now i understand. The browser sends a request withe the > if-not-modified-since corresponding to one server to the other server, and > this one reply with 200 since the mtime is in the future of the > if-not-modified-since > > is there any simple solution to handle this? I don't want to rsync assets > between the servers http://nginx.org/r/if_modified_since if_modified_since before; -- WNGS-RIPE From aweber at comcast.net Sat Oct 27 23:24:43 2012 From: aweber at comcast.net (AJ Weber) Date: Sat, 27 Oct 2012 19:24:43 -0400 Subject: build question Message-ID: <508C6D3B.7010808@comcast.net> I was attempting to build nginx 1.2.4 from source, and include the chunkin module. I also included the autolib module, because from what I could understand, it would help reduce the need for some of the other source/devel packages to build. I ended up with a clean build, but nginx is 6.3MB, and the version I installed from a binary package is only 813KB. So I'm wondering whether this is normal or what happened, and basically how to test my build now. Is there a test-script package or anything that can verify that binary I created? Thanks again, AJ From quintinpar at gmail.com Sun Oct 28 03:59:35 2012 From: quintinpar at gmail.com (Quintin Par) Date: Sun, 28 Oct 2012 09:29:35 +0530 Subject: Multiple URL parameters in the same location directive. Message-ID: Hi all, I have some URL patterns that follow the same directive specifications. E.g. sitemap.xml, robots.txt etc. follow the same location directive with the same caching policy, invalidation etc. 
location /robots.txt { proxy_pass http://localhost:82; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Accept-Encoding ""; proxy_ignore_headers Cache-Control; proxy_ignore_headers Expires; proxy_ignore_headers X-Accel-Expires; proxy_cache cache; proxy_cache_key $scheme$host$request_uri; proxy_cache_valid 200 302 2m; proxy_cache_use_stale updating; } Now I have written separate directives for all of these URL patterns. How can I combine them into one line? Can I do this? location /robots.txt, sitemap.xml, ~*.xml { etc. ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at automattic.com Sun Oct 28 04:35:06 2012 From: barry at automattic.com (Barry Abrahamson) Date: Sun, 28 Oct 2012 00:35:06 -0400 Subject: Multiple URL parameters in the same location directive. In-Reply-To: References: Message-ID: <508CB5FA.9060800@automattic.com> On 10/27/12 11:59 PM, Quintin Par wrote: > How can I combine them into one line? You *could* combine them into one line using a regex, but I would not recommend it - it will be more complicated to maintain over time and less readable. I would recommend putting all of your common configuration directives into a separate file and including it in each individual location block. Something like this: location = /robots.txt { include common-config.conf; } location = /sitemap.xml { include common-config.conf; } ... I would also be sure to read the documentation on location syntax [0] to understand the differences between exact, prefix, and regex matches so you are sure to use the correct type for each pattern you are trying to match. Hope this is useful. References: 0. http://nginx.org/en/docs/http/ngx_http_core_module.html#location -- Barry Abrahamson | Systems Wrangler | Automattic Blog: http://barry.wordpress.com -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 881 bytes Desc: OpenPGP digital signature URL: From rockybai1989 at gmail.com Sun Oct 28 09:46:25 2012 From: rockybai1989 at gmail.com (=?GB2312?B?sNfH7Mbm?=) Date: Sun, 28 Oct 2012 17:46:25 +0800 Subject: why nginx request time too big while upstream response time small Message-ID: Hi, I'm using nginx as a reverse proxy to my backends. I see sometimes the request time is too big while the upstream response time is quite small: 211.139.145.99 - - [26/Oct/2012:10:49:55 +0800] 5.287 "POST /service/3/app_components/?screen_type=iphone_2x&device_platform=iphone&channel=App%20Store&app_name=news_article&device_type=iPhone%205&os_version=6.0&version_code=1.2&uuid=27506e0c4c09b881f44e1a458f471f3de165cc34 HTTP/1.1" 200 6916 "-" "-" "i.snssdk.com" "uuid=27506e0c4c09b881f44e1a458f471f3de165cc34" upstream_response_time: 0.287 As I analyze the access log, I see sometimes there are many logs like above which the request time is 5 or more bigger than upstream response time. For find out the big difference between the request time and request respons time, i use tcpdump to get packages on nginx server 80 port like : As I analyze the packages? 
I find many Strange phenomenon below: 219.234.82.78 - - [26/Oct/2012:10:52:48 +0800] 5.260 "POST /service/2/app_alert/?uuid=4960614f474b0990f3dcb83c12bde42e55eb42b1&lang=zh-Hans&access=mobile&carrier=%E4%B8%AD%E5%9B%BD%E7%A7%BB%E5%8A%A8&mcc_mnc=46002&device_platform=iphone&channel=App%20Store&app_name=joke_essay&device_type=iPhone%204&os_version=6.0&version_code=1.3 HTTP/1.1" 200 49 "-" "\xE5\x86\x85\xE6\xB6\xB5\xE6\xAE\xB5\xE5\xAD\x90 1.3 (iPhone; iPhone OS 6.0; zh_CN)" "i.snssdk.com" "uuid=4960614f474b0990f3dcb83c12bde42e55eb42b1" upstream_response_time: 0.260 tcpdump: 10:52:43.099494 IP 219.234.82.78.57120 > 60.29.255.91.80: Flags [S], seq 599253512, win 5840, options [mss 1460], length 0 10:52:43.099526 IP 60.29.255.91.80 > 219.234.82.78.57120: Flags [S.], seq 1300382255, ack 599253513, win 14600, options [mss 1460], length 0 10:52:43.110052 IP 219.234.82.78.57120 > 60.29.255.91.80: Flags [.], ack 1, win 5840, length 0 10:52:43.614325 IP 219.234.82.78.57120 > 60.29.255.91.80: Flags [P.], seq 1:553, ack 1, win 5840, length 552 E..P.. at .1.....RN<..[. .P#.. M.B0P.......POST /service/2/app_alert/?uuid=4960614f474b0990f3dcb83c12bde42e55eb42b1&lang=zh-Hans&access=mobile&carrier=%E4%B8%AD%E5%9B%BD%E7%A7%BB%E5%8A%A8&mcc_mnc=46002&device_platform=iphone&channel=App%20Store&app_name=joke_essay&device_type=iPhone%204&os_version=6.0&version_code=1.3 HTTP/1.1^M Host: i.snssdk.com^M Accept-Encoding: gzip^M Content-Length: 0^M Cookie: uuid=4960614f474b0990f3dcb83c12bde42e55eb42b1^M User-Agent: ............ 
1.3 (iPhone; iPhone OS 6.0; zh_CN)^M Via: 1.1 dxt_2_168 (squid/3.1.16)^M Cache-Control: max-age=0^M Connection: keep-alive^M 10:52:46.613596 IP 219.234.82.78.57120 > 60.29.255.91.80: Flags [P.], seq 1:553, ack 1, win 5840, length 552 10:52:52.614245 IP 219.234.82.78.57120 > 60.29.255.91.80: Flags [P.], seq 1:553, ack 1, win 5840, length 552 10:53:03.048953 IP 219.234.82.78.57120 > 60.29.255.91.80: Flags [F.], seq 553, ack 1, win 5840, length 0 10:53:28.987395 IP 60.29.255.91.80 > 219.234.82.78.57120: Flags [P.], seq 1:229, ack 554, win 15456, length 228 10:53:28.997888 IP 219.234.82.78.57120 > 60.29.255.91.80: Flags [R], seq 599254066, win 0, length 0 I analyze many nginx access log with tcpdump packages. I find sometimes the nginx server does not response ack on the client in time, when the client post request to server, and in some cases the server does not send any package more after The three handshake with client. I fell quite puzzled why nginx log the request time 5.260 seconds while the nginx server send the ack package 40 seconds later. And I try debug the nginx, I find the request time logged after the ngx_http_finalize_request, which call the sendfile to write the reseult the tcp sk_buffer. I guess there is some thing wrong with my system or network, but i use netstat -s -t, i find no packages dropped outgoing: 2922 ICMP packets dropped because they were out-of-window 73 ICMP packets dropped because socket was locked 927337 SYNs to LISTEN sockets dropped Any idea what's happening and how to resolve this problem ? Thanks. -- thanks rockybai -------------- next part -------------- An HTML attachment was scrubbed... URL: From quintinpar at gmail.com Sun Oct 28 10:25:04 2012 From: quintinpar at gmail.com (Quintin Par) Date: Sun, 28 Oct 2012 15:55:04 +0530 Subject: Multiple URL parameters in the same location directive. 
In-Reply-To: <508CB5FA.9060800@automattic.com> References: <508CB5FA.9060800@automattic.com> Message-ID: Is there a way I can specify this in the same file itself? I don?t want to spread this out to multiple files. Include a section? - Quintin On Sun, Oct 28, 2012 at 10:05 AM, Barry Abrahamson wrote: > On 10/27/12 11:59 PM, Quintin Par wrote: > > > How can I combine them into one line? > > You *could* combine them into one line using a regex, but I would not > recommend it - it will be more complicated to maintain over time and > less readable. I would recommend putting all of your common > configuration directives into a separate file and including it in each > individual location block. Something like this: > > location = /robots.txt { > include common-config.conf; > } > > location = /sitemap.xml { > include common-config.conf; > } > > ... > > I would also be sure to read the documentation on location syntax [0] to > understand the differences between exact, prefix, and regex matches so > you are sure to use the correct type for each pattern you are trying to > match. > > Hope this is useful. > > References: > 0. http://nginx.org/en/docs/http/ngx_http_core_module.html#location > > -- > Barry Abrahamson | Systems Wrangler | Automattic > Blog: http://barry.wordpress.com > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Sun Oct 28 14:37:41 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 28 Oct 2012 14:37:41 +0000 Subject: Multiple URL parameters in the same location directive. In-Reply-To: References: Message-ID: On 28 October 2012 03:59, Quintin Par wrote: > Hi all, > > I have some URL patterns that follow the same directive specifications. > [snip] > Now I have written separate directives for all of these URL patterns. 
> How can I combine them into one line? > > Can I do this? > location /robots.txt, sitemap.xml, ~*.xml { Not quite. You'll need to use a regular expresion in your location, as detailed at http://nginx.org/r/location and http://wiki.nginx.org/HttpCoreModule#location I find the wiki documentation on this specific topic to be clearer and more useful. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From contact at jpluscplusm.com Sun Oct 28 14:40:37 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 28 Oct 2012 14:40:37 +0000 Subject: Multiple URL parameters in the same location directive. In-Reply-To: References: <508CB5FA.9060800@automattic.com> Message-ID: On 28 October 2012 10:25, Quintin Par wrote: > Is there a way I can specify this in the same file itself? I don?t want to > spread this out to multiple files. > > Include a section? Yes, you can. Here you go. http://bit.ly/RdVWFo (A mailing list is not the place to ask *trivially* answerable questions!) Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From sb at waeme.net Sun Oct 28 15:57:23 2012 From: sb at waeme.net (Sergey Budnevitch) Date: Sun, 28 Oct 2012 19:57:23 +0400 Subject: build question In-Reply-To: <508C6D3B.7010808@comcast.net> References: <508C6D3B.7010808@comcast.net> Message-ID: <5307F4BB-9A21-4B4E-B0FA-5C298F27C9C1@waeme.net> On 28 Oct2012, at 03:24 , AJ Weber wrote: > I was attempting to build nginx 1.2.4 from source, and include the chunkin module. I also included the autolib module, because from what I could understand, it would help reduce the need for some of the other source/devel packages to build. > > I ended up with a clean build, but nginx is 6.3MB, and the version I installed from a binary package is only 813KB. So I'm wondering whether this is normal or what happened, and basically how to test my build now. 
Is there a test-script package or anything that can verify that binary I created? It is normal. Your binary contains debug information and version from package is stripped (man strip). From aweber at comcast.net Sun Oct 28 17:00:57 2012 From: aweber at comcast.net (AJ Weber) Date: Sun, 28 Oct 2012 13:00:57 -0400 Subject: build question In-Reply-To: <5307F4BB-9A21-4B4E-B0FA-5C298F27C9C1@waeme.net> References: <508C6D3B.7010808@comcast.net> <5307F4BB-9A21-4B4E-B0FA-5C298F27C9C1@waeme.net> Message-ID: <508D64C9.2090904@comcast.net> Thank you for the reply! Is it better if I rebuild without the debug information in the binary (for performance or other reasons)? If so, are there some recommended options to pass to cc-opt and/or ld-opt? Thank you again! -AJ On 10/28/2012 11:57 AM, Sergey Budnevitch wrote: > On 28 Oct2012, at 03:24 , AJ Weber wrote: > >> I was attempting to build nginx 1.2.4 from source, and include the chunkin module. I also included the autolib module, because from what I could understand, it would help reduce the need for some of the other source/devel packages to build. >> >> I ended up with a clean build, but nginx is 6.3MB, and the version I installed from a binary package is only 813KB. So I'm wondering whether this is normal or what happened, and basically how to test my build now. Is there a test-script package or anything that can verify that binary I created? > It is normal. Your binary contains debug information and version from package is stripped (man strip). 
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sun Oct 28 21:48:30 2012 From: nginx-forum at nginx.us (kustodian) Date: Sun, 28 Oct 2012 17:48:30 -0400 Subject: Nginx Location Matching Documentation Update In-Reply-To: <20121018123354.GZ40452@mdounin.ru> References: <20121018123354.GZ40452@mdounin.ru> Message-ID: <8e97cfaced2dd42ada4dd49bc6d4c411.NginxMailingListEnglish@forum.nginx.org> Sorry for a late reply and thanks for your help. I did update the wiki documentation http://wiki.nginx.org/HttpCoreModule#location, I hope it is ok and that location matching will now be clearer to new Nginx users. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231966,232286#msg-232286 From Niall.Gallagher at yieldbroker.com Sun Oct 28 22:26:04 2012 From: Niall.Gallagher at yieldbroker.com (Niall Gallagher - Yieldbroker) Date: Sun, 28 Oct 2012 22:26:04 +0000 Subject: Ngix Performace as a Reverse Proxy In-Reply-To: References: <91F89A63AE64CF4692B740C52E446768EA2F2F@DC1INTADCW8201.yieldbroker.com> Message-ID: <91F89A63AE64CF4692B740C52E446768EA30FA@DC1INTADCW8201.yieldbroker.com> According to documentation for proxy_buffering "For Comet applications based on long-polling it is important to set proxy_buffering to off, otherwise the asynchronous response is buffered and the Comet does not work." I have tried the following and am not getting better results; however, it's still being outperformed by the Java HTTP proxy by about 10% - 15%. worker_processes 2; worker_cpu_affinity 01 10; worker_rlimit_nofile 20000; However spikes of up to 20 seconds are still frequent under high load.
-----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Sergey Budnevitch Sent: Friday, 26 October 2012 11:04 PM To: nginx at nginx.org Subject: Re: Ngix Performace as a Reverse Proxy On 26 Oct2012, at 08:36 , Niall Gallagher - Yieldbroker wrote: > Hi, > > We have been doing some testing with Nginx as a reverse proxy. We have been comparing it to a number of solutions which it easily beats, like IIS and Apache with mod_proxy etc. However, as an experiment we have been comparing it to an adapted NIO server written in Java. This seems to be out performing Nginx in the reverse proxy role by a factor of 3 times. We are convinced our configuration is wrong. Both run on the same box (at different times) with the same sysctl settings (see below). We also saw some spikes, up to 3 seconds per request at times, and some at 10 over a 1 million request test of 1000 concurrent clients. > > We are using a fairly straight forward configuration for Nginx. Since we have two processors on the box we tried worker_processes of 4 with worker_connections of 6000, then we tried worker_processes of 40 with worker_connections of 5000. No change. We need to be able to support responsive Ajax requests with strategies like HTTP streaming and long polling in our setup. > > Any ideas what we can do to boost our throughput and latency? Buffers. nginx should not write/read anything to/from disk while proxing for maximum performance. Check your average request and response sizes and tune buffers sizes accordingly (look at proxy_buffers, proxy_buffer_size, proxy_max_temp_file_size documentation). 
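Sergey's buffer advice above might translate into something like the following (a sketch; `backend` is a placeholder upstream name, and the sizes are illustrative starting points rather than recommendations — tune them to the measured header and body sizes of your application):

```nginx
location / {
    proxy_pass http://backend;
    proxy_buffer_size 8k;           # buffer for the response headers
    proxy_buffers 8 32k;            # in-memory buffers for the response body
    proxy_busy_buffers_size 64k;    # portion that may be busy sending to the client
    proxy_max_temp_file_size 0;     # 0 forbids buffering responses to disk
}
```

With `proxy_max_temp_file_size 0`, responses that do not fit in the in-memory buffers are passed to the client synchronously instead of being spooled to a temporary file, which is the behaviour you want when chasing latency spikes.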
_______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From greminn at gmail.com Mon Oct 29 02:11:00 2012 From: greminn at gmail.com (Simon) Date: Mon, 29 Oct 2012 15:11:00 +1300 Subject: Secure shared hosting environment basics Message-ID: Hi There, I know this question has probably been churned over and over in various different variations, so I apologise in advance if it's one of those 'here we go again?' posts :) We are a hosting provider that is currently using the LAMP stack to host our clients' websites. 80% of these are on our shared hosting instance that currently has PHP 5.3 using safemode to assist in keeping sites to their own area. We are looking to build a new shared service and have been playing with nginx/php-fpm (on debian squeeze) and love the easy configuration and speed. My question is: What is the best way to go about setting up a shared hosting environment for 1000's of customers, which is secure from the point of view that the customers can't access other sites? I'm not looking for a how-to, really just a "this is the correct way you should look to do this" hint that will point us in the right direction. Thank you for reading! PS: Would like to stick with debian if that's possible Simon From quintinpar at gmail.com Mon Oct 29 04:19:19 2012 From: quintinpar at gmail.com (Quintin Par) Date: Mon, 29 Oct 2012 09:49:19 +0530 Subject: Multiple URL parameters in the same location directive. In-Reply-To: References: <508CB5FA.9060800@automattic.com> Message-ID: Thanks Jon. I was desperately looking for a way to include this in the same file and in my experience there are a lot of things like this that are not documented in the wiki elsewhere but Maxim, Sergey, Antonio etc. seem to know. On Sun, Oct 28, 2012 at 8:10 PM, Jonathan Matthews wrote: > On 28 October 2012 10:25, Quintin Par wrote: > > Is there a way I can specify this in the same file itself?
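For Simon's shared-hosting question above, the usual direction with nginx/php-fpm is one pool per customer, each running as its own system user, so a PHP script can only touch that customer's files. A sketch of one per-customer pool (all names and paths are examples, and this is a starting point, not a complete hardening guide):

```ini
; /etc/php5/fpm/pool.d/customer1.conf -- one pool per customer (example names)
[customer1]
user = customer1
group = customer1
listen = /var/run/php5-fpm-customer1.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 5
pm.start_servers = 1
pm.min_spare_servers = 1
pm.max_spare_servers = 2
; belt-and-braces on top of filesystem permissions:
php_admin_value[open_basedir] = /www/customer1:/tmp
```

Each server block then points its `fastcgi_pass` at the matching socket. The real isolation comes from filesystem permissions (home directories owned by the customer, not world-readable), with `open_basedir` as a second layer rather than the primary defence.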
I don't want > to > > spread this out to multiple files. > > > > Include a section? > > Yes, you can. Here you go. http://bit.ly/RdVWFo > > (A mailing list is not the place to ask *trivially* answerable questions!) > > Jonathan > -- > Jonathan Matthews // Oxford, London, UK > http://www.jpluscplusm.com/contact.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Oct 29 07:59:56 2012 From: nginx-forum at nginx.us (florian.iragne) Date: Mon, 29 Oct 2012 03:59:56 -0400 Subject: Strange problem with cache In-Reply-To: References: Message-ID: <5c25d498a69de825d5f801dbc3d6ac6f.NginxMailingListEnglish@forum.nginx.org> Thanks for your answer. However, it's not secure enough for me and i prefer adding one more command line to the deployment process to ensure cache is working. thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232235,232297#msg-232297 From sb at waeme.net Mon Oct 29 08:19:01 2012 From: sb at waeme.net (Sergey Budnevitch) Date: Mon, 29 Oct 2012 12:19:01 +0400 Subject: build question In-Reply-To: <508D64C9.2090904@comcast.net> References: <508C6D3B.7010808@comcast.net> <5307F4BB-9A21-4B4E-B0FA-5C298F27C9C1@waeme.net> <508D64C9.2090904@comcast.net> Message-ID: <4E5C7EA7-B8EF-4F56-8B6E-A29983636406@waeme.net> On 28 Oct 2012, at 21:00, AJ Weber wrote: > Thank you for the reply! > > Is it better if I rebuild without the debug information in the binary (for performance or other reasons)? No, debugging symbols have a negligible impact on performance and RAM usage; you need to strip the binary only if you want to save disk storage. > If so, are there some recommended options to pass to cc-opt and/or ld-opt? > > Thank you again!
> -AJ > > > On 10/28/2012 11:57 AM, Sergey Budnevitch wrote: >> On 28 Oct2012, at 03:24 , AJ Weber wrote: >> >>> I was attempting to build nginx 1.2.4 from source, and include the chunkin module. I also included the autolib module, because from what I could understand, it would help reduce the need for some of the other source/devel packages to build. >>> >>> I ended up with a clean build, but nginx is 6.3MB, and the version I installed from a binary package is only 813KB. So I'm wondering whether this is normal or what happened, and basically how to test my build now. Is there a test-script package or anything that can verify that binary I created? >> It is normal. Your binary contains debug information and version from package is stripped (man strip). >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Mon Oct 29 08:26:36 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Oct 2012 12:26:36 +0400 Subject: Nginx Location Matching Documentation Update In-Reply-To: <8e97cfaced2dd42ada4dd49bc6d4c411.NginxMailingListEnglish@forum.nginx.org> References: <20121018123354.GZ40452@mdounin.ru> <8e97cfaced2dd42ada4dd49bc6d4c411.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121029082636.GD40452@mdounin.ru> Hello! On Sun, Oct 28, 2012 at 05:48:30PM -0400, kustodian wrote: > Sorry for a late reply and thanks for your help. I did update the wiki > documentaion http://wiki.nginx.org/HttpCoreModule#location, I hope it is ok > and that location matching will now be clearer to new Nginx users. 
Ruslan already updated official docs according to your suggestion, BTW: http://nginx.org/r/location -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Mon Oct 29 08:39:09 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Oct 2012 12:39:09 +0400 Subject: Strange problem with cache In-Reply-To: References: <20121026104143.GZ40452@mdounin.ru> <3ebd9a68bdcf802fca5ebcff437f4023.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121029083909.GE40452@mdounin.ru> Hello! On Sat, Oct 27, 2012 at 03:35:51PM +0300, Oleksandr V. Typlyns'kyi wrote: > Yesterday Oct 26, 2012 at 07:57 florian.iragne wrote: > > > Ok, now i understand. The browser sends a request with the > > if-not-modified-since corresponding to one server to the other server, and > > this one replies with 200 since the mtime is in the future of the > > if-not-modified-since > > > > is there any simple solution to handle this? I don't want to rsync assets > > between the servers > > http://nginx.org/r/if_modified_since > if_modified_since before; 1) This will work only if you'll switch off ETags as well. 2) This isn't a solution, but a bandaid. It won't fix the real problem (mtimes mismatch), but will make the response with the greatest mtime eventually win. Even with only 2 servers used this will still result in about 1.5x more 200 responses than needed.
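Put together, the bandaid Maxim describes would look like this in the server block (a sketch; the `etag` directive assumes a version where it exists, 1.3.3 or later):

```nginx
# Bandaid from the thread above: answer 304 when the file's mtime is
# *not newer* than the client's If-Modified-Since, instead of requiring
# an exact match, so the server with the greatest mtime eventually wins.
if_modified_since before;
etag off;   # ETags would otherwise override the mtime comparison
```

As Maxim notes, this papers over the mtime mismatch between the servers rather than fixing it; syncing the files' mtimes is the real solution.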
-- Maxim Dounin http://nginx.com/support.html From sb at waeme.net Mon Oct 29 09:12:10 2012 From: sb at waeme.net (Sergey Budnevitch) Date: Mon, 29 Oct 2012 13:12:10 +0400 Subject: Ngix Performace as a Reverse Proxy In-Reply-To: <91F89A63AE64CF4692B740C52E446768EA30FA@DC1INTADCW8201.yieldbroker.com> References: <91F89A63AE64CF4692B740C52E446768EA2F2F@DC1INTADCW8201.yieldbroker.com> <91F89A63AE64CF4692B740C52E446768EA30FA@DC1INTADCW8201.yieldbroker.com> Message-ID: <81191436-B63B-40CD-95A7-D6CD06D37DF3@waeme.net> On 29 Oct2012, at 02:26 , Niall Gallagher - Yieldbroker wrote: > According to documentation for proxy_buffering > > "For Comet applications based on long-polling it is important to set proxy_buffering to off, otherwise the asynchronous response is buffered and the Comet does not work." > I have tried the following and am not getting better results, however its still being outperformed by the Java HTTP proxy by about 10% - 15%. Try to set postpone_output 0; > worker_processes 2; > worker_cpu_affinity 01 10; > worker_rlimit_nofile 20000; > > However spikes of up to 20 seconds are still frequent under high load. > > -----Original Message----- > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Sergey Budnevitch > Sent: Friday, 26 October 2012 11:04 PM > To: nginx at nginx.org > Subject: Re: Ngix Performace as a Reverse Proxy > > > On 26 Oct2012, at 08:36 , Niall Gallagher - Yieldbroker wrote: > >> Hi, >> >> We have been doing some testing with Nginx as a reverse proxy. We have been comparing it to a number of solutions which it easily beats, like IIS and Apache with mod_proxy etc. However, as an experiment we have been comparing it to an adapted NIO server written in Java. This seems to be out performing Nginx in the reverse proxy role by a factor of 3 times. We are convinced our configuration is wrong. Both run on the same box (at different times) with the same sysctl settings (see below). 
We also saw some spikes, up to 3 seconds per request at times, and some at 10 over a 1 million request test of 1000 concurrent clients. >> >> We are using a fairly straight forward configuration for Nginx. Since we have two processors on the box we tried worker_processes of 4 with worker_connections of 6000, then we tried worker_processes of 40 with worker_connections of 5000. No change. We need to be able to support responsive Ajax requests with strategies like HTTP streaming and long polling in our setup. >> >> Any ideas what we can do to boost our throughput and latency? > > Buffers. nginx should not write/read anything to/from disk while proxing for maximum performance. Check your average request and response sizes and tune buffers sizes accordingly (look at proxy_buffers, proxy_buffer_size, proxy_max_temp_file_size documentation). > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From liangsuilong at gmail.com Mon Oct 29 09:24:12 2012 From: liangsuilong at gmail.com (Liang Suilong) Date: Mon, 29 Oct 2012 17:24:12 +0800 Subject: Windows Client downloads something from nginx for Windows is too slow In-Reply-To: References: Message-ID: Hi, all. I am a newbie. I am using nginx for Windows to host static big files. I find out a strange problem. When Windows client downloads files from nginx for Windows server, the download rate is very very slow. The files are bigger than 200MB. Network: 100Mbps LAN Server SIde: Windows XP 32bit, nginx for Windows, I test 1.0.15, 1.2.4 and 1.3.7. They are the same. Windows Client Side: WIndows XP 32bit, Firefox, IE, Chrome and curl are the same. The download rate is not more than 300KB/s Linux Client Side: Linux, Fedora 17 x86_64 and Ubuntu, Firefox and Chrome and curl are the same. 
The download rate is more than 8MB/s. Almost 10MB/s. The same network 100Mbps LAN Server Side: Fedora 17 x86_64, nginx 1.0.15 Windows Client Side: WIndows XP 32bit, Firefox, IE, Chrome and curl are the same. The download rate is more than 8MB/s. Almost 10MB/s Linux Client Side: Linux, Fedora 17 x86_64 and Ubuntu, Firefox and Chrome and curl are the same. The download rate is more than 8MB/s. Almost 10MB/s. It looks quite strange. I just use the default configure file for nginx and put the files into nginx root directory. I do not change anything. I checked HTTP Header. I try to change Firefox user-agent on Windows to Linux edition. It could not change download rate, still very very slow. I try to use Apache on Windows to host the files. The download rate is normal, about 8~10MB/s. How could I debug the problem? Which logs and configure files should I provide? If you have some good advice, please tell me soon. Thank you Liang Suilong Sent From My Heart My Page: http:// www.liangsuilong.info -------------- next part -------------- An HTML attachment was scrubbed... URL: From amoiz.shine at gmail.com Mon Oct 29 09:36:42 2012 From: amoiz.shine at gmail.com (Sharl Jimh Tsin) Date: Mon, 29 Oct 2012 17:36:42 +0800 Subject: Windows Client downloads something from nginx for Windows is too slow In-Reply-To: References: Message-ID: <508E4E2A.3010207@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 2012-10-29 17:24, Liang Suilong wrote: > Hi, all. > > I am a newbie. I am using nginx for Windows to host static big > files. I find out a strange problem. When Windows client downloads > files from nginx for Windows server, the download rate is very very > slow. The files are bigger than 200MB. > > Network: 100Mbps LAN > > Server SIde: Windows XP 32bit, nginx for Windows, I test 1.0.15, > 1.2.4 and 1.3.7. They are the same. > > Windows Client Side: WIndows XP 32bit, Firefox, IE, Chrome and curl > are the same.
The download rate is not more than 300KB/s > > Linux Client Side: Linux, Fedora 17 x86_64 and Ubuntu, Firefox and > Chrome and curl are the same. The download rate are more than > 8MB/s. Almost 10MB/s. > > The same network 100Mbps LAN > > Server Side: Fedora 17 x86_64, nginx 1.0.15 > > Windows Client Side: WIndows XP 32bit, Firefox, IE, Chrome and curl > are the same. The download rate is more than 8MB/s. Almost 10MB/s > > Linux Client Side: Linux, Fedora 17 x86_64 and Ubuntu, Firefox and > Chrome and curl are the same. The download rate are more than > 8MB/s. Almost 10MB/s. > > It looks quite strange. I just use the default configure file for > nginx and put the files into nginx root directory. I do not change > anything. I checked HTTP Header. I try to change Firefox user-agent > on Windows to Linux edition. It could not change download rate, > still very very slow. I try to use Apache on Windows to host the > files. The download rate is normal, about 8~10MB/s. > > How could I debug the problem? Which logs and configure files > should I provide? If you have some good advices, Please tell me > soon. > > Thank you > > Liang Suilong > > Sent From My Heart My Page: http:// > www.liangsuilong.info > > > > > _______________________________________________ nginx mailing list > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > sounds impossible,is it caused by client configuration? - -- Best regards, Sharl.Jimh.Tsin (From China *Obviously Taiwan INCLUDED*) Using Gmail? Please read this important notice: http://www.fsf.org/campaigns/jstrap/gmail?10073. 
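One concrete thing worth testing on the Windows server above — an assumption to verify, not a confirmed diagnosis: on Windows, nginx's `sendfile` path relies on the OS file-transmission machinery, which some XP setups handle poorly for large files. Disabling it forces ordinary reads and writes:

```nginx
# In the http or server block on the Windows box -- test each change separately:
sendfile off;           # fall back to plain read/write instead of OS sendfile
output_buffers 2 64k;   # larger userspace output buffers for big static files
```

If the rate jumps with `sendfile off`, the slowdown is in the OS transmit path rather than in nginx's configuration; if nothing changes, capturing a transfer with Wireshark on both sides would be the next step.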
From nginx-forum at nginx.us Mon Oct 29 09:40:39 2012 From: nginx-forum at nginx.us (hristoc) Date: Mon, 29 Oct 2012 05:40:39 -0400 Subject: Rewrite or proxy_pass to internal ip ? Message-ID: <6d5e1ceabd95003f71213b73661eecec.NginxMailingListEnglish@forum.nginx.org> Hello, I want to ask a question about whether to use proxy_pass or rewrite, and how to use them. Basically I want to redirect connections to an internal server: for example, if a user hits www.mydomain.com/admin/, nginx should redirect all requests to http://192.168.1.1/. I made this with: location /admin { proxy_pass http://192.168.1.1:80; proxy_set_header X-Real-IP $remote_addr; } But I got an error from http://192.168.1.1 that admin does not exist. What do I need to do to redirect all requests after /admin on the web server to the internal server? Regards, C. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232298,232298#msg-232298 From francis at daoine.org Mon Oct 29 09:51:42 2012 From: francis at daoine.org (Francis Daly) Date: Mon, 29 Oct 2012 09:51:42 +0000 Subject: Rewrite or proxy_pass to internal ip ?
In-Reply-To: <6d5e1ceabd95003f71213b73661eecec.NginxMailingListEnglish@forum.nginx.org> References: <6d5e1ceabd95003f71213b73661eecec.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121029095142.GR17159@craic.sysops.org> On Mon, Oct 29, 2012 at 05:40:39AM -0400, hristoc wrote: Hi there, > location /admin { > proxy_pass http://192.168.1.1:80; > proxy_set_header X-Real-IP $remote_addr; > } > > But I got Error from http://192.168.1.1 admin does not exists. What I to do > to redirect all requsts after /admin on web server to internal server with > admin ? http://nginx.org/r/proxy_pass See the part that starts "A request URI is passed to the server as follows:" The short version is: proxy_pass http://192.168.1.1:80/; f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Mon Oct 29 10:04:14 2012 From: nginx-forum at nginx.us (kustodian) Date: Mon, 29 Oct 2012 06:04:14 -0400 Subject: Nginx Location Matching Documentation Update In-Reply-To: <20121029082636.GD40452@mdounin.ru> References: <20121029082636.GD40452@mdounin.ru> Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Sun, Oct 28, 2012 at 05:48:30PM -0400, kustodian wrote: > > > Sorry for a late reply and thanks for your help. I did update the > wiki > > documentaion http://wiki.nginx.org/HttpCoreModule#location, I hope > it is ok > > and that location matching will now be clearer to new Nginx users. > > Ruslan already updated official docs according to your suggestion, > BTW: > > http://nginx.org/r/location > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Then I will update the documentation wiki to be exactly like the official docs. 
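Returning to Francis's trailing-slash answer earlier in this digest: the difference is easiest to see side by side. These are two alternative forms of the same location — use one or the other (`192.168.1.1` is the internal server from the question):

```nginx
# Without a URI part after the host, the client URI is passed unchanged,
# so a request for /admin/foo asks the backend for /admin/foo:
location /admin/ {
    proxy_pass http://192.168.1.1:80;
}

# With a URI part ("/"), the part of the request URI matching the location
# prefix is replaced, so a request for /admin/foo asks the backend for /foo:
location /admin/ {
    proxy_pass http://192.168.1.1:80/;
}
```

Keeping the location prefix and the proxy_pass URI consistently slash-terminated, as here, avoids accidental double slashes in the upstream URI.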
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231966,232311#msg-232311 From mdounin at mdounin.ru Mon Oct 29 10:08:22 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Oct 2012 14:08:22 +0400 Subject: Chunked Transfer In-Reply-To: <508AF986.2040402@comcast.net> References: <508AF986.2040402@comcast.net> Message-ID: <20121029100821.GI40452@mdounin.ru> Hello! On Fri, Oct 26, 2012 at 04:58:46PM -0400, AJ Weber wrote: > Asking, because the documentation looks like it's a little outdated > on this... > > Is Chunked Transfer still not enabled OOTB? This would seem like > almost a mandatory feature of HTTP 1.1 to implement, and the only > reference I could find is to separate source code/module/patch that > I would have to download and recompile all of nginx for? > > Has it been implemented or added to the default, pre-compiled > packages and I just can't see it in the nginx -V output? > > I need the ability to upload large content, and this would appear to > be the proper way to do that. I'm working on it, and it's expected to be available later this month. BTW, if you know examples of real-world use of chunked transfer encoding by clients - please let me know. AFAIK no browsers use it, and most widespread example I'm aware of is the webdav client in Mac OS X. -- Maxim Dounin http://nginx.com/support.html From contact at jpluscplusm.com Mon Oct 29 10:23:45 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 29 Oct 2012 10:23:45 +0000 Subject: Chunked Transfer In-Reply-To: <20121029100821.GI40452@mdounin.ru> References: <508AF986.2040402@comcast.net> <20121029100821.GI40452@mdounin.ru> Message-ID: On 29 October 2012 10:08, Maxim Dounin wrote: > Hello! > > On Fri, Oct 26, 2012 at 04:58:46PM -0400, AJ Weber wrote: > >> Asking, because the documentation looks like it's a little outdated >> on this... >> >> Is Chunked Transfer still not enabled OOTB? 
This would seem like >> almost a mandatory feature of HTTP 1.1 to implement, and the only >> reference I could find is to separate source code/module/patch that >> I would have to download and recompile all of nginx for? >> >> Has it been implemented or added to the default, pre-compiled >> packages and I just can't see it in the nginx -V output? >> >> I need the ability to upload large content, and this would appear to >> be the proper way to do that. > > I'm working on it, and it's expected to be available later this > month. > > BTW, if you know examples of real-world use of chunked transfer > encoding by clients - please let me know. AFAIK no browsers use > it, and most widespread example I'm aware of is the webdav client > in Mac OS X. I wanted to use it in this report-generating pipeline: bash$ mysql -e 'generate lots of data' | perl 'do some munging' | csvtool 'make a proper CSV' | gzip | curl --upload-file - http://my.webdav.endpoint.for.customers/foo/bar.gz The fact that the chunkin module wouldn't work properly with webdav due to curl choosing chunking when PUTting stdin, meant I had to break this into multiple parts and ensure that curl could upload a complete file from disk. I mentioned the issue here: http://mailman.nginx.org/pipermail/nginx/2012-April/033141.html This increased the complexity of what I was doing, as I now had local state on disk that had to be managed/cleaned-up/etc. Just my 2 cents, Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Mon Oct 29 11:30:00 2012 From: nginx-forum at nginx.us (hristoc) Date: Mon, 29 Oct 2012 07:30:00 -0400 Subject: Rewrite or proxy_pass to internal ip ? 
In-Reply-To: <20121029095142.GR17159@craic.sysops.org> References: <20121029095142.GR17159@craic.sysops.org> Message-ID: <632dbbc6daee40b10039902091d0e256.NginxMailingListEnglish@forum.nginx.org> Francis Daly Wrote: ------------------------------------------------------- > On Mon, Oct 29, 2012 at 05:40:39AM -0400, hristoc wrote: > > Hi there, > > > location /admin { > > proxy_pass http://192.168.1.1:80; > > proxy_set_header X-Real-IP $remote_addr; > > } > > > > But I got Error from http://192.168.1.1 admin does not exists. What > I to do > > to redirect all requsts after /admin on web server to internal > server with > > admin ? > > http://nginx.org/r/proxy_pass > > See the part that starts "A request URI is passed to the server as > follows:" > > The short version is: > > proxy_pass http://192.168.1.1:80/; > > f > -- > Francis Daly francis at daoine.org > Thank you, one step forward but now I receive error 502 bad gateway. Server mydomain.com have 2 Ethernet. First one with realip that I connect, second one with virtual 192.168.1.2 connected to internal network. I don't want to add in my fw redirect rule, so I expect to do it over the web. When I access realip or domain name in that directory, I actually to see the web of my internal server located somewhere in local network. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232298,232315#msg-232315 From nginx-forum at nginx.us Mon Oct 29 13:09:34 2012 From: nginx-forum at nginx.us (nilesh) Date: Mon, 29 Oct 2012 09:09:34 -0400 Subject: How can i block these requests Message-ID: <7663e21d34aeddcf75e30914b4c56d13.NginxMailingListEnglish@forum.nginx.org> I am getting following access.log: and are significant in number RequestMethod=- RequestURI=- HTTPStatus=400 BodyBytesSent=0 RequestTime=0.000 ResponseTime=- HTTPReferrer="-" UserAgent="-" X-Forwarded-For=- HTTPStatus=400 Does anyone know why i am getting these empty requests, and how to stop them. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232321,232321#msg-232321 From edho at myconan.net Mon Oct 29 13:12:11 2012 From: edho at myconan.net (Edho Arief) Date: Mon, 29 Oct 2012 20:12:11 +0700 Subject: How can i block these requests In-Reply-To: <7663e21d34aeddcf75e30914b4c56d13.NginxMailingListEnglish@forum.nginx.org> References: <7663e21d34aeddcf75e30914b4c56d13.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Oct 29, 2012 8:09 PM, "nilesh" wrote: > > I am getting following access.log: and are significant in number > RequestMethod=- RequestURI=- HTTPStatus=400 BodyBytesSent=0 > RequestTime=0.000 ResponseTime=- HTTPReferrer="-" UserAgent="-" > X-Forwarded-For=- HTTPStatus=400 > > Does anyone know why i am getting these empty requests, and how to stop > them. > That's modern browser's keepalive > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232321,232321#msg-232321 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Oct 29 13:57:54 2012 From: nginx-forum at nginx.us (gerard breiner) Date: Mon, 29 Oct 2012 09:57:54 -0400 Subject: Site URL not completed. Bad redirection ? Message-ID: Hello, I'm trying to replace apache2 by nginx. So far my platform debian squeeze + Apache2 + sogo 2.0.2 works very fine. However I decided to replace apache2 by nginx 1.2.4-1. It works too but it seems there is a redirection that doesn't work. My url : https://sogo.mydomain give a page error 403 forbidden. I have to complete the url with /SOGo so that I get the login. 
Here is the error.log of nginx : directory index of "/usr/lib/GNUstep/SOGo/WebServerResources/" is forbidden, client: 90.2.225.249, server: sogo.mydomain, request: "GET / HTTP/1.1", host: "sogo.mydomain" Here is my sogo.conf : --------------------------------- server { listen 443; server_name sogo.ias.u-psud.fr; root /usr/lib/GNUstep/SOGo/WebServerResources/; ssl on; ssl_certificate /etc/nginx/ssl.fac/wildcard.ias.u-psud.fr.crt; ssl_certificate_key /etc/nginx/ssl.fac/wildcard.ias.u-psud.fr.key; location ^~/SOGo { proxy_pass http://127.0.0.1:20000; proxy_redirect http://127.0.0.1:20000 default; # forward user's IP address proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_set_header x-webobjects-server-protocol HTTP/1.0; proxy_set_header x-webobjects-remote-host 127.0.0.1; proxy_set_header x-webobjects-server-name $server_name; #proxy_set_header x-webobjects-server-url https://sogo.ias.u-psud.fr; proxy_set_header x-webobjects-server-url $scheme://$host; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; client_max_body_size 50m; client_body_buffer_size 128k; break; } location /SOGo.woa/WebServerResources/ { alias /usr/lib/GNUstep/SOGo/WebServerResources/; allow all; } location /SOGo/WebServerResources/ { alias /usr/lib/GNUstep/SOGo/WebServerResources/; allow all; } location ^/SOGo/so/ControlPanel/Products/([^/]*)/Resources/(.*)$ { alias /usr/lib/GNUstep/SOGo/$1.SOGo/Resources/$2; } location ^/SOGo/so/ControlPanel/Products/[^/]*UI/Resources/.*\.(jpg|png|gif|css|js)$ { alias /usr/lib/GNUstep/SOGo/$1.SOGo/Resources/$2; } } ------------------------------ I would be very pleased to get some advices. Best regards. 
Gerard Breiner Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232325,232325#msg-232325 From nginx-forum at nginx.us Mon Oct 29 14:58:21 2012 From: nginx-forum at nginx.us (mgenov) Date: Mon, 29 Oct 2012 10:58:21 -0400 Subject: using nginx as proxy from http to https In-Reply-To: References: Message-ID: <4f5a66a9e4a0571fe7c8fd1072734117.NginxMailingListEnglish@forum.nginx.org> I've update my version to: nginx: nginx version: nginx/1.0.5 and it seems that the result is similar. Any other ideas ? Andre Jaenisch Wrote: ------------------------------------------------------- > 2012/10/25 mgenov : > > Here is my version. > > > > nginx -v > > nginx version: nginx/0.7.67 > > Your version looks outdated ... > > 2012-09-25 nginx-1.2.4 stable version has been released. (See: > http://nginx.org/) > > Don't sure, wether this is the problem ... > Well, this is the oldest announcement I've found: > > 2009-12-15 nginx-0.8.30 development version has been released. (See: > http://nginx.org/2009.html) > > So your versions seems to be REALLY old! > > Regards, Andre > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232212,232326#msg-232326 From nginx-forum at nginx.us Mon Oct 29 17:13:01 2012 From: nginx-forum at nginx.us (peschuster) Date: Mon, 29 Oct 2012 13:13:01 -0400 Subject: Memory consumption of ningx Message-ID: Hi, I cross-compiled nginx for microblaze processors (http://github.com/peschuster/nginx) and am currently doing some performance benchmarks with nginx running on a microblaze processor with a custom designed SoC on a FPGA. However, I am having problems with the memory consumption of nginx: When I perform 10,000 requests with 20 conn/s and 2 requests/conn (using httperf - 1), memory used by nginx grows to about 40 MB. When I repeat this benchmark, the used memory grows from 40 to 80 MB. 
The problem with this behavior is that my SoC only has 256 MB of RAM in total (the file system also runs completely from RAM using a ramdisk). Therefore nginx crashes the complete system by consuming all memory for longer/extended benchmark scenarios. Is this the intended behavior of nginx? Why isn't it "re-using" the already allocated memory? Any hints on how I can circumvent or track down this problem? Thanks. Peter 1: httperf --timeout=5 --client=0/1 --server=192.168.2.125 --port=80 --uri=/index.html --rate=20 --send-buffer=4096 --recv-buffer=16384 --num-conns=5000 --num-calls=2 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232328,232328#msg-232328 From francis at daoine.org Mon Oct 29 20:57:29 2012 From: francis at daoine.org (Francis Daly) Date: Mon, 29 Oct 2012 20:57:29 +0000 Subject: Site URL not completed. Bad redirection ? In-Reply-To: References: Message-ID: <20121029205729.GS17159@craic.sysops.org> On Mon, Oct 29, 2012 at 09:57:54AM -0400, gerard breiner wrote: Hi there, > It works too but it seems there is a redirection that doesn't work. My url : > https://sogo.mydomain give a page error 403 forbidden. I have to complete > the url with /SOGo so that I get the login. If you don't tell nginx about the redirection you want, you can't expect it to work. You have five locations defined: > location ^~/SOGo { > location /SOGo.woa/WebServerResources/ { > location /SOGo/WebServerResources/ { > location ^/SOGo/so/ControlPanel/Products/([^/]*)/Resources/(.*)$ { > location ^/SOGo/so/ControlPanel/Products/[^/]*UI/Resources/.*\.(jpg|png|gif|css|js)$ The rules are at http://nginx.org/r/location The last two above are unlikely to do what you expect. The initial request is for "/". That doesn't match any of your locations, and so will use the "default" server-level one. Probably what you want is something like location = / { return 301 /SOGo/; } but you may prefer to use a full url, starting "http". 
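On the last two locations specifically: a sketch of what was probably intended, assuming the filesystem layout shown in the posted config. A regex location needs the "~" modifier — a bare "^/SOGo/..." is treated as a literal prefix string and never matches a URI beginning with "/" — and the $1/$2 used in the alias must come from capture groups in that same pattern:

```nginx
# Sketch only: "~" makes this a regex location, and the parentheses
# define the $1/$2 captures that the alias below relies on.
location ~ ^/SOGo/so/ControlPanel/Products/([^/]*)/Resources/(.*)$ {
    alias /usr/lib/GNUstep/SOGo/$1.SOGo/Resources/$2;
}
```

The second of the two original locations has the same missing "~" problem, and additionally references $1/$2 without defining any captures in its pattern, so it would need the same capturing form.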
f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Oct 29 21:04:42 2012 From: francis at daoine.org (Francis Daly) Date: Mon, 29 Oct 2012 21:04:42 +0000 Subject: Rewrite or proxy_pass to internal ip ? In-Reply-To: <632dbbc6daee40b10039902091d0e256.NginxMailingListEnglish@forum.nginx.org> References: <20121029095142.GR17159@craic.sysops.org> <632dbbc6daee40b10039902091d0e256.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121029210442.GT17159@craic.sysops.org> On Mon, Oct 29, 2012 at 07:30:00AM -0400, hristoc wrote: > Francis Daly Wrote: Hi there, > > proxy_pass http://192.168.1.1:80/; > one step forward, but now I receive a 502 Bad Gateway error. Server > mydomain.com has 2 Ethernet interfaces: the first with the real IP that I connect to, the second > with the virtual 192.168.1.2 connected to the internal network. I don't want to > add a redirect rule to my firewall, so I expect to do it over the web. When I access > the real IP or domain name in that directory, I actually want to see the web of my > internal server located somewhere in the local network. I'm confused. Which machine is creating these error messages? If you start on the public nginx machine, and do curl -i -0 -H X-Real-IP:\ 10.1.2.3 http://192.168.1.1:80/ what response do you get? What response do you expect? f -- Francis Daly francis at daoine.org From reallfqq-nginx at yahoo.fr Mon Oct 29 23:43:27 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 29 Oct 2012 19:43:27 -0400 Subject: Nginx answers too large requests with bad status Message-ID: Hello, When submitting too large requests, Nginx replies a page containing the right code status but serves the errors page with a HTTP/0.9 (!) 200 answer. Screenshot as attachment. Have a nice evening, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 414_200.png Type: image/png Size: 16063 bytes Desc: not available URL: From ne at vbart.ru Tue Oct 30 00:07:32 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Tue, 30 Oct 2012 04:07:32 +0400 Subject: Nginx answers too large requests with bad status In-Reply-To: References: Message-ID: <201210300407.32815.ne@vbart.ru> On Tuesday 30 October 2012 03:43:27 B.R. wrote: > Hello, > > When submitting too large requests, Nginx replies a page containing the > right code status but serves the errors page with a HTTP/0.9 (!) 200 > answer. Screenshot as attachment. > You should tune "large_client_header_buffers". http://nginx.org/r/large_client_header_buffers wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From francis at daoine.org Tue Oct 30 00:12:38 2012 From: francis at daoine.org (Francis Daly) Date: Tue, 30 Oct 2012 00:12:38 +0000 Subject: Nginx answers too large requests with bad status In-Reply-To: References: Message-ID: <20121030001237.GU17159@craic.sysops.org> On Mon, Oct 29, 2012 at 07:43:27PM -0400, B.R. wrote: Hi there, > When submitting too large requests, Nginx replies a page containing the > right code status but serves the errors page with a HTTP/0.9 (!) 200 answer. No, it doesn't. Look at the "curl -i" output. Your browser is making up the 200 status. It is right about the HTTP/0.9 response, though, because nginx couldn't see that the request was HTTP/1.0 or HTTP/1.1. And therefore it "knows" that it was HTTP/0.9. See the thread including http://forum.nginx.org/read.php?2,228425,228431 for more details. I think that nginx is not wrong in this case. But I also think that it wouldn't be wrong if this HTTP/0.9 response had two paragraphs, and began with the characters "HTTP/1.1 414". I imagine that if someone provided a justification with a patch, it would be considered like any other suggested patch. 
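This is easy to reproduce outside a browser. The sketch below (an illustration using Python 3's http.client, not the poster's actual script) feeds a status-line-less, HTTP/0.9-style reply to Python's HTTP parser and hits the same BadStatusLine failure reported elsewhere in this thread:

```python
# Editorial sketch: an HTTP/0.9-style reply is body-only, with no
# "HTTP/1.x NNN" status line, so a client that insists on HTTP/1.x
# has nothing to parse.
import socket
from http.client import HTTPResponse, BadStatusLine

def parse_raw_response(raw: bytes) -> HTTPResponse:
    """Feed a canned server reply into http.client's response parser."""
    client_end, server_end = socket.socketpair()
    server_end.sendall(raw)
    server_end.close()            # EOF marks the end of the reply
    resp = HTTPResponse(client_end)
    resp.begin()                  # parses the status line and headers
    return resp

# A proper HTTP/1.1 reply parses fine:
ok = parse_raw_response(b"HTTP/1.1 414 Request-URI Too Large\r\n\r\n")
print(ok.status)                  # 414

# An HTTP/0.9-style reply (body only) has no status line at all:
try:
    parse_raw_response(b"<html><h1>414 Request-URI Too Large</h1></html>")
except BadStatusLine as exc:
    print("BadStatusLine:", exc.line)
```

Python 2's httplib could raise the same exception for an unparseable status line, which is consistent with the BadStatusLine crash described in this thread.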
f -- Francis Daly francis at daoine.org From reallfqq-nginx at yahoo.fr Tue Oct 30 00:57:40 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 29 Oct 2012 20:57:40 -0400 Subject: Nginx answers too large requests with bad status In-Reply-To: <20121030001237.GU17159@craic.sysops.org> References: <20121030001237.GU17159@craic.sysops.org> Message-ID: @Valentin I am not seeking a solution to the 414 problem; I know how to address the issue. Thanks anyway @Francis The browser output was just made to illustrate the wrong answer. The problem was triggered by a Python script which checked the 'status' field of an HTTP answer to decide on its actions. It crashed several times at random occurrences with a BadStatusLine exception. The exception message was something like '\n\n\n\n200'. I do not know the implementation details, although I suspect an answer code should be an answer code. I didn't get 414, I got something which made my script crash. That's why I guess that's a bug. --- *B. R.* On Mon, Oct 29, 2012 at 8:12 PM, Francis Daly wrote: > On Mon, Oct 29, 2012 at 07:43:27PM -0400, B.R. wrote: > > Hi there, > > > When submitting too large requests, Nginx replies a page containing the > > right code status but serves the errors page with a HTTP/0.9 (!) 200 > answer. > > No, it doesn't. > > Look at the "curl -i" output. > > Your browser is making up the 200 status. > > It is right about the HTTP/0.9 response, though, because nginx couldn't > see that the request was HTTP/1.0 or HTTP/1.1. And therefore it "knows" > that it was HTTP/0.9. > > See the thread including http://forum.nginx.org/read.php?2,228425,228431 > for more details. > > I think that nginx is not wrong in this case. But I also think that it > wouldn't be wrong if this HTTP/0.9 response had two paragraphs, and began > with the characters "HTTP/1.1 414". > > I imagine that if someone provided a justification with a patch, it > would be considered like any other suggested patch. 
> > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue Oct 30 01:09:16 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 29 Oct 2012 21:09:16 -0400 Subject: Nginx answers too large requests with bad status In-Reply-To: References: <20121030001237.GU17159@craic.sysops.org> Message-ID: To avoid saying odd things, the only thing I am sure of when I checked the exception output from Python is that it started with ''. Maybe the rest of the output came from my script. I am not at work any more, so I can't check. The only thing I am pretty sure about is that the status must be a status like '200 OK' or '414 WTHAreYouDoin'. I hate Python, but I'll trust it on that one ^^ I'll try to make a little PoC if needed, --- *B. R.* On Mon, Oct 29, 2012 at 8:57 PM, B.R. wrote: > @Valentin > I am not seeking a solution to the 414 problem; I know how to address the > issue. > Thanks anyway > > @Francis > The browser output was just made to illustrate the wrong answer. > > The problem was triggered by a Python script which checked the > 'status' field of an HTTP answer to decide on its actions. > It crashed several times at random occurrences with a BadStatusLine > exception. > The exception message was something like '\n\n\n\n200'. > > I do not know the implementation details, although I suspect an answer > code should be an answer code. > I didn't get 414, I got something which made my script crash. > > That's why I guess that's a bug. > --- > *B. R.* > > > > On Mon, Oct 29, 2012 at 8:12 PM, Francis Daly wrote: > >> On Mon, Oct 29, 2012 at 07:43:27PM -0400, B.R. 
wrote: >> >> Hi there, >> >> > When submitting too large requests, Nginx replies a page containing the >> > right code status but serves the errors page with a HTTP/0.9 (!) 200 >> answer. >> >> No, it doesn't. >> >> Look at the "curl -i" output. >> >> Your browser is making up the 200 status. >> >> It is right about the HTTP/0.9 response, though, because nginx couldn't >> see that the request was HTTP/1.0 or HTTP/1.1. And therefore it "knows" >> that it was HTTP/0.9. >> >> See the thread including http://forum.nginx.org/read.php?2,228425,228431 >> for more details. >> >> I think that nginx is not wrong in this case. But I also think that it >> wouldn't be wrong if this HTTP/0.9 response had two paragraphs, and began >> with the characters "HTTP/1.1 414". >> >> I imagine that if someone provided a justification with a patch, it >> would be considered like any other suggested patch. >> >> f >> -- >> Francis Daly francis at daoine.org >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From liangsuilong at gmail.com Tue Oct 30 01:58:35 2012 From: liangsuilong at gmail.com (Liang Suilong) Date: Tue, 30 Oct 2012 09:58:35 +0800 Subject: Windows Client downloads something from nginx for Windows is too slow In-Reply-To: <508E4E2A.3010207@gmail.com> References: <508E4E2A.3010207@gmail.com> Message-ID: On Mon, Oct 29, 2012 at 5:36 PM, Sharl Jimh Tsin wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > ? 2012?10?29? 17:24, Liang Suilong ??: > > Hi, all. > > > > I am a newbie. I am using nginx for Windows to host static big > > files. I find out a strange problem. When Windows client downloads > > files from nginx for Windows server, the download rate is very very > > slow. The files are bigger than 200MB. 
> > > > Network: 100Mbps LAN > > > > Server SIde: Windows XP 32bit, nginx for Windows, I test 1.0.15, > > 1.2.4 and 1.3.7. They are the same. > > > > Windows Client Side: WIndows XP 32bit, Firefox, IE, Chrome and curl > > are the same. The download rate is not more than 300KB/s > > > > Linux Client Side: Linux, Fedora 17 x86_64 and Ubuntu, Firefox and > > Chrome and curl are the same. The download rate are more than > > 8MB/s. Almost 10MB/s. > > > > The same network 100Mbps LAN > > > > Server Side: Fedora 17 x86_64, nginx 1.0.15 > > > > Windows Client Side: WIndows XP 32bit, Firefox, IE, Chrome and curl > > are the same. The download rate is more than 8MB/s. Almost 10MB/s > > > > Linux Client Side: Linux, Fedora 17 x86_64 and Ubuntu, Firefox and > > Chrome and curl are the same. The download rate are more than > > 8MB/s. Almost 10MB/s. > > > > It looks quite strange. I just use the default configure file for > > nginx and put the files into nginx root directory. I do not change > > anything. I checked HTTP Header. I try to change Firefox user-agent > > on Windows to Linux edition. It could not change download rate, > > still very very slow. I try to use Apache on Windows to host the > > files. The download rate is normal, about 8~10MB/s. > > > > How could I debug the problem? Which logs and configure files > > should I provide? If you have some good advices, Please tell me > > soon. > > > > Thank you > > > > Liang Suilong > > > > Sent From My Heart My Page: http:// > > www.liangsuilong.info > > > > > > > > > > _______________________________________________ nginx mailing list > > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > > > > sounds impossible,is it caused by client configuration? > > Thank you, Sharl I think the problem should be impossible, however it is exactly existed. I test nginx for Windows on Windows Server 2008, and the client side is Windows 7. 
In the same 100Mbps LAN network environment, the download rate seems to have a little improvement, about 400KB/s. It is still very slow. I tried to reinstall new Windows. There is no change on nginx for Windows. - -- > Best regards, > Sharl.Jimh.Tsin (From China *Obviously Taiwan INCLUDED*) > > Using Gmail? Please read this important notice: > http://www.fsf.org/campaigns/jstrap/gmail?10073. > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.11 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://www.enigmail.net/ > > iQEcBAEBAgAGBQJQjk4pAAoJEEYmNy4jisTj8+sH/2bcodbpexhjoywZW2T8M8DD > c+N5LHjRuKU5zVoOV8tcsU0guhbt4VG+1nGJtOtLV0fn0qgHUKSpVnLriZnpMXpy > 1n0I76Uu3O4fsL0T2iYswmMl10ycEJLZwtf5WIPLlNVVGzz0J2gp3FjzkvTVy4Da > FKt/qMsqKRMjb6tsCCdL/AXoTyDjFXmNef8jrjMLfdxot87qvipxVRaXgiOKYAtM > uClfMP50/1d89qt+DrdJrB7CBDWOUUYaUSKFhj3622/s5J1CouF/EdkXI6sZlshl > uH93RQKoYGuSAJN8z/kPSiXVc9XCSAEvDXtKno3iTljO8LjqN+T8T1GKQDxoXks= > =Un5o > -----END PGP SIGNATURE----- > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue Oct 30 02:14:26 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 29 Oct 2012 22:14:26 -0400 Subject: Windows Client downloads something from nginx for Windows is too slow In-Reply-To: References: <508E4E2A.3010207@gmail.com> Message-ID: I guess the problem come from buffers. For big files, using asynchronous serving is recommended. Take a look at: http://nginx.org/en/docs/http/ngx_http_core_module.html#aio http://wiki.nginx.org/HttpCoreModule#aio --- *B. R.* On Mon, Oct 29, 2012 at 9:58 PM, Liang Suilong wrote: > On Mon, Oct 29, 2012 at 5:36 PM, Sharl Jimh Tsin wrote: > >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA1 >> >> ? 2012?10?29? 17:24, Liang Suilong ??: >> > Hi, all. >> > >> > I am a newbie. 
I am using nginx for Windows to host static big >> > files. I find out a strange problem. When Windows client downloads >> > files from nginx for Windows server, the download rate is very very >> > slow. The files are bigger than 200MB. >> > >> > Network: 100Mbps LAN >> > >> > Server SIde: Windows XP 32bit, nginx for Windows, I test 1.0.15, >> > 1.2.4 and 1.3.7. They are the same. >> > >> > Windows Client Side: WIndows XP 32bit, Firefox, IE, Chrome and curl >> > are the same. The download rate is not more than 300KB/s >> > >> > Linux Client Side: Linux, Fedora 17 x86_64 and Ubuntu, Firefox and >> > Chrome and curl are the same. The download rate are more than >> > 8MB/s. Almost 10MB/s. >> > >> > The same network 100Mbps LAN >> > >> > Server Side: Fedora 17 x86_64, nginx 1.0.15 >> > >> > Windows Client Side: WIndows XP 32bit, Firefox, IE, Chrome and curl >> > are the same. The download rate is more than 8MB/s. Almost 10MB/s >> > >> > Linux Client Side: Linux, Fedora 17 x86_64 and Ubuntu, Firefox and >> > Chrome and curl are the same. The download rate are more than >> > 8MB/s. Almost 10MB/s. >> > >> > It looks quite strange. I just use the default configure file for >> > nginx and put the files into nginx root directory. I do not change >> > anything. I checked HTTP Header. I try to change Firefox user-agent >> > on Windows to Linux edition. It could not change download rate, >> > still very very slow. I try to use Apache on Windows to host the >> > files. The download rate is normal, about 8~10MB/s. >> > >> > How could I debug the problem? Which logs and configure files >> > should I provide? If you have some good advices, Please tell me >> > soon. 
>> > >> > Thank you >> > >> > Liang Suilong >> > >> > Sent From My Heart My Page: http:// >> > www.liangsuilong.info >> > >> > >> > >> > >> > _______________________________________________ nginx mailing list >> > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx >> > >> >> sounds impossible,is it caused by client configuration? >> >> > Thank you, Sharl > > I think the problem should be impossible, however it is exactly existed. I > test nginx for Windows on Windows Server 2008, and the client side is > Windows 7. In the same 100Mbps LAN network environment, the download rate > seems to have a little improvement, about 400KB/s. It is still very slow. > > I tried to reinstall new Windows. There is no change on nginx for Windows. > > - -- >> Best regards, >> Sharl.Jimh.Tsin (From China *Obviously Taiwan INCLUDED*) >> >> Using Gmail? Please read this important notice: >> http://www.fsf.org/campaigns/jstrap/gmail?10073. >> -----BEGIN PGP SIGNATURE----- >> Version: GnuPG v1.4.11 (GNU/Linux) >> Comment: Using GnuPG with Mozilla - http://www.enigmail.net/ >> >> iQEcBAEBAgAGBQJQjk4pAAoJEEYmNy4jisTj8+sH/2bcodbpexhjoywZW2T8M8DD >> c+N5LHjRuKU5zVoOV8tcsU0guhbt4VG+1nGJtOtLV0fn0qgHUKSpVnLriZnpMXpy >> 1n0I76Uu3O4fsL0T2iYswmMl10ycEJLZwtf5WIPLlNVVGzz0J2gp3FjzkvTVy4Da >> FKt/qMsqKRMjb6tsCCdL/AXoTyDjFXmNef8jrjMLfdxot87qvipxVRaXgiOKYAtM >> uClfMP50/1d89qt+DrdJrB7CBDWOUUYaUSKFhj3622/s5J1CouF/EdkXI6sZlshl >> uH93RQKoYGuSAJN8z/kPSiXVc9XCSAEvDXtKno3iTljO8LjqN+T8T1GKQDxoXks= >> =Un5o >> -----END PGP SIGNATURE----- >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pchychi at gmail.com Tue Oct 30 02:16:59 2012 From: pchychi at gmail.com (Payam Chychi) Date: Mon, 29 Oct 2012 19:16:59 -0700 Subject: Windows Client downloads something from nginx for Windows is too slow In-Reply-To: References: <508E4E2A.3010207@gmail.com> Message-ID: <58D81646-5C0F-4FE4-B8D7-B711E929B627@gmail.com> have you tried looking at the network layer using tcpdump/wireshark to see the connection properties? such as mtu, fragment packets, window size -Payam Sent from my iPhone On 2012-10-29, at 6:58 PM, Liang Suilong wrote: > On Mon, Oct 29, 2012 at 5:36 PM, Sharl Jimh Tsin wrote: >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA1 >> >> ? 2012?10?29? 17:24, Liang Suilong ??: >> > Hi, all. >> > >> > I am a newbie. I am using nginx for Windows to host static big >> > files. I find out a strange problem. When Windows client downloads >> > files from nginx for Windows server, the download rate is very very >> > slow. The files are bigger than 200MB. >> > >> > Network: 100Mbps LAN >> > >> > Server SIde: Windows XP 32bit, nginx for Windows, I test 1.0.15, >> > 1.2.4 and 1.3.7. They are the same. >> > >> > Windows Client Side: WIndows XP 32bit, Firefox, IE, Chrome and curl >> > are the same. The download rate is not more than 300KB/s >> > >> > Linux Client Side: Linux, Fedora 17 x86_64 and Ubuntu, Firefox and >> > Chrome and curl are the same. The download rate are more than >> > 8MB/s. Almost 10MB/s. >> > >> > The same network 100Mbps LAN >> > >> > Server Side: Fedora 17 x86_64, nginx 1.0.15 >> > >> > Windows Client Side: WIndows XP 32bit, Firefox, IE, Chrome and curl >> > are the same. The download rate is more than 8MB/s. Almost 10MB/s >> > >> > Linux Client Side: Linux, Fedora 17 x86_64 and Ubuntu, Firefox and >> > Chrome and curl are the same. The download rate are more than >> > 8MB/s. Almost 10MB/s. >> > >> > It looks quite strange. I just use the default configure file for >> > nginx and put the files into nginx root directory. 
I do not change >> > anything. I checked HTTP Header. I try to change Firefox user-agent >> > on Windows to Linux edition. It could not change download rate, >> > still very very slow. I try to use Apache on Windows to host the >> > files. The download rate is normal, about 8~10MB/s. >> > >> > How could I debug the problem? Which logs and configure files >> > should I provide? If you have some good advices, Please tell me >> > soon. >> > >> > Thank you >> > >> > Liang Suilong >> > >> > Sent From My Heart My Page: http:// >> > www.liangsuilong.info >> > >> > >> > >> > >> > _______________________________________________ nginx mailing list >> > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx >> > >> >> sounds impossible,is it caused by client configuration? > > Thank you, Sharl > > I think the problem should be impossible, however it is exactly existed. I test nginx for Windows on Windows Server 2008, and the client side is Windows 7. In the same 100Mbps LAN network environment, the download rate seems to have a little improvement, about 400KB/s. It is still very slow. > > I tried to reinstall new Windows. There is no change on nginx for Windows. > >> - -- >> Best regards, >> Sharl.Jimh.Tsin (From China *Obviously Taiwan INCLUDED*) >> >> Using Gmail? Please read this important notice: >> http://www.fsf.org/campaigns/jstrap/gmail?10073. 
>> -----BEGIN PGP SIGNATURE----- >> Version: GnuPG v1.4.11 (GNU/Linux) >> Comment: Using GnuPG with Mozilla - http://www.enigmail.net/ >> >> iQEcBAEBAgAGBQJQjk4pAAoJEEYmNy4jisTj8+sH/2bcodbpexhjoywZW2T8M8DD >> c+N5LHjRuKU5zVoOV8tcsU0guhbt4VG+1nGJtOtLV0fn0qgHUKSpVnLriZnpMXpy >> 1n0I76Uu3O4fsL0T2iYswmMl10ycEJLZwtf5WIPLlNVVGzz0J2gp3FjzkvTVy4Da >> FKt/qMsqKRMjb6tsCCdL/AXoTyDjFXmNef8jrjMLfdxot87qvipxVRaXgiOKYAtM >> uClfMP50/1d89qt+DrdJrB7CBDWOUUYaUSKFhj3622/s5J1CouF/EdkXI6sZlshl >> uH93RQKoYGuSAJN8z/kPSiXVc9XCSAEvDXtKno3iTljO8LjqN+T8T1GKQDxoXks= >> =Un5o >> -----END PGP SIGNATURE----- >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From benimaur at gmail.com Tue Oct 30 02:20:47 2012 From: benimaur at gmail.com (Benimaur Gao) Date: Tue, 30 Oct 2012 10:20:47 +0800 Subject: I want to ask something about limit_zone In-Reply-To: References: Message-ID: thanks, but is this method a global setting? can I only change the return code for limit_zone? On Thu, Oct 25, 2012 at 4:38 PM, Sergey Budnevitch wrote: > > On 25 Oct2012, at 12:18 , Benimaur Gao wrote: > >> I've learned that if some IP exceeds the limit, a 503 http status code >> will be returned. >> I'm trying to find a way to change the default 503 value, e.g. use >> some other code, such as 403 to replace it, can I ? and how to? >> thanks! 
> > error_page 503 =403 /error_page.html; > > http://nginx.org/r/error_page > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From dey.ranjib at gmail.com Tue Oct 30 05:13:55 2012 From: dey.ranjib at gmail.com (Ranjib Dey) Date: Tue, 30 Oct 2012 10:43:55 +0530 Subject: How can i block these requests In-Reply-To: References: <7663e21d34aeddcf75e30914b4c56d13.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks for the pointer Edho, right now we are getting these log entries at a rate of 5000 per hour per server, and we have multiple nginx installations, and this is almost flooding our centralized logging system. Is there a way to disable logging such requests, or lower the number of occurrences (like tweaking the keep alive timeout value or number of requests)? thanks in advance ranjib On Mon, Oct 29, 2012 at 6:42 PM, Edho Arief wrote: > > On Oct 29, 2012 8:09 PM, "nilesh" wrote: > > > > I am getting following access.log: and are significant in number > > RequestMethod=- RequestURI=- HTTPStatus=400 BodyBytesSent=0 > > RequestTime=0.000 ResponseTime=- HTTPReferrer="-" UserAgent="-" > > X-Forwarded-For=- HTTPStatus=400 > > > > Does anyone know why i am getting these empty requests, and how to stop > > them. > > > > That's modern browser's keepalive > > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,232321,232321#msg-232321 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From edho at myconan.net Tue Oct 30 05:19:39 2012 From: edho at myconan.net (Edho Arief) Date: Tue, 30 Oct 2012 12:19:39 +0700 Subject: How can i block these requests In-Reply-To: References: <7663e21d34aeddcf75e30914b4c56d13.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Tue, Oct 30, 2012 at 12:13 PM, Ranjib Dey wrote: > > Thanks for the pointer Edho, > right now we are getting these log entries at a rate of 5000 per hour per > server, and we have multiple nginx installations, and this is almost > flooding our centralized logging system. Is there a way to disable logging > such requests, or lower the number of occurrences (like tweaking the keep > alive timeout value or number of requests)? > > thanks in advance > ranjib > Maybe something like this will work (untested) error_page 400 /400.html; location = /400.html { access_log off; } From liangsuilong at gmail.com Tue Oct 30 08:26:15 2012 From: liangsuilong at gmail.com (Liang Suilong) Date: Tue, 30 Oct 2012 16:26:15 +0800 Subject: Windows Client downloads something from nginx for Windows is too slow In-Reply-To: References: <508E4E2A.3010207@gmail.com> Message-ID: Thank you, B.R. I added the output_buffers 1 128k option to nginx.conf. It made a great improvement: the download rate rose from 400KB/s to 4~5MB/s. I think the problem may affect all versions of nginx on Windows. Should the output_buffers option be set in nginx.conf by default? Sent From My Heart My Page: http://www.liangsuilong.info On Tue, Oct 30, 2012 at 10:14 AM, B.R. wrote: > I guess the problem come from buffers. > For big files, using asynchronous serving is recommended. > > Take a look at: > http://nginx.org/en/docs/http/ngx_http_core_module.html#aio > http://wiki.nginx.org/HttpCoreModule#aio > --- > *B. R.* > > > > On Mon, Oct 29, 2012 at 9:58 PM, Liang Suilong wrote: > >> On Mon, Oct 29, 2012 at 5:36 PM, Sharl Jimh Tsin wrote: >> >>> -----BEGIN PGP SIGNED MESSAGE----- >>> Hash: SHA1 >>> >>> ? 2012?10?29? 
17:24, Liang Suilong ??: >>> > Hi, all. >>> > >>> > I am a newbie. I am using nginx for Windows to host static big >>> > files. I find out a strange problem. When Windows client downloads >>> > files from nginx for Windows server, the download rate is very very >>> > slow. The files are bigger than 200MB. >>> > >>> > Network: 100Mbps LAN >>> > >>> > Server SIde: Windows XP 32bit, nginx for Windows, I test 1.0.15, >>> > 1.2.4 and 1.3.7. They are the same. >>> > >>> > Windows Client Side: WIndows XP 32bit, Firefox, IE, Chrome and curl >>> > are the same. The download rate is not more than 300KB/s >>> > >>> > Linux Client Side: Linux, Fedora 17 x86_64 and Ubuntu, Firefox and >>> > Chrome and curl are the same. The download rate are more than >>> > 8MB/s. Almost 10MB/s. >>> > >>> > The same network 100Mbps LAN >>> > >>> > Server Side: Fedora 17 x86_64, nginx 1.0.15 >>> > >>> > Windows Client Side: WIndows XP 32bit, Firefox, IE, Chrome and curl >>> > are the same. The download rate is more than 8MB/s. Almost 10MB/s >>> > >>> > Linux Client Side: Linux, Fedora 17 x86_64 and Ubuntu, Firefox and >>> > Chrome and curl are the same. The download rate are more than >>> > 8MB/s. Almost 10MB/s. >>> > >>> > It looks quite strange. I just use the default configure file for >>> > nginx and put the files into nginx root directory. I do not change >>> > anything. I checked HTTP Header. I try to change Firefox user-agent >>> > on Windows to Linux edition. It could not change download rate, >>> > still very very slow. I try to use Apache on Windows to host the >>> > files. The download rate is normal, about 8~10MB/s. >>> > >>> > How could I debug the problem? Which logs and configure files >>> > should I provide? If you have some good advices, Please tell me >>> > soon. 
>>> > >>> > Thank you >>> > >>> > Liang Suilong >>> > >>> > Sent From My Heart My Page: http:// >>> > www.liangsuilong.info >>> > >>> > >>> > >>> > >>> > _______________________________________________ nginx mailing list >>> > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx >>> > >>> >>> sounds impossible,is it caused by client configuration? >>> >>> >> Thank you, Sharl >> >> I think the problem should be impossible, however it is exactly existed. >> I test nginx for Windows on Windows Server 2008, and the client side is >> Windows 7. In the same 100Mbps LAN network environment, the download rate >> seems to have a little improvement, about 400KB/s. It is still very slow. >> >> I tried to reinstall new Windows. There is no change on nginx for Windows. >> >> - -- >>> Best regards, >>> Sharl.Jimh.Tsin (From China *Obviously Taiwan INCLUDED*) >>> >>> Using Gmail? Please read this important notice: >>> http://www.fsf.org/campaigns/jstrap/gmail?10073. >>> -----BEGIN PGP SIGNATURE----- >>> Version: GnuPG v1.4.11 (GNU/Linux) >>> Comment: Using GnuPG with Mozilla - http://www.enigmail.net/ >>> >>> iQEcBAEBAgAGBQJQjk4pAAoJEEYmNy4jisTj8+sH/2bcodbpexhjoywZW2T8M8DD >>> c+N5LHjRuKU5zVoOV8tcsU0guhbt4VG+1nGJtOtLV0fn0qgHUKSpVnLriZnpMXpy >>> 1n0I76Uu3O4fsL0T2iYswmMl10ycEJLZwtf5WIPLlNVVGzz0J2gp3FjzkvTVy4Da >>> FKt/qMsqKRMjb6tsCCdL/AXoTyDjFXmNef8jrjMLfdxot87qvipxVRaXgiOKYAtM >>> uClfMP50/1d89qt+DrdJrB7CBDWOUUYaUSKFhj3622/s5J1CouF/EdkXI6sZlshl >>> uH93RQKoYGuSAJN8z/kPSiXVc9XCSAEvDXtKno3iTljO8LjqN+T8T1GKQDxoXks= >>> =Un5o >>> -----END PGP SIGNATURE----- >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > 
http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ne at vbart.ru Tue Oct 30 09:17:43 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Tue, 30 Oct 2012 13:17:43 +0400 Subject: Nginx answers too large requests with bad status In-Reply-To: References: <20121030001237.GU17159@craic.sysops.org> Message-ID: <201210301317.43878.ne@vbart.ru> On Tuesday 30 October 2012 04:57:40 B.R. wrote: > @Valentin > I am not seeking a solution to the 414 problem; I know how to address the > issue. > Thanks anyway > > @Francis > The browser output was just made to illustrate the wrong answer. > > The problem was triggered by a Python script which checked the > 'status' field of an HTTP answer to decide on its actions. > [...] There's no Status-Line in HTTP/0.9. So, the problem is that your script does not support HTTP/0.9. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From ne at vbart.ru Tue Oct 30 09:21:24 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Tue, 30 Oct 2012 13:21:24 +0400 Subject: I want to ask something about limit_zone In-Reply-To: References: Message-ID: <201210301321.24918.ne@vbart.ru> On Tuesday 30 October 2012 06:20:47 Benimaur Gao wrote: > thanks, but is this method a global setting? > can I only change the return code for limit_zone? > Nginx does not return 503 itself, except via limit_req/limit_conn. wbr, Valentin V. 
Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From citrin at citrin.ru Tue Oct 30 10:15:15 2012 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Tue, 30 Oct 2012 14:15:15 +0400 Subject: Memory consumption of nginx In-Reply-To: References: Message-ID: <508FA8B3.20406@citrin.ru> On 10/29/12 21:13, peschuster wrote: > However, I am having problems with the memory consumption of nginx: > > When I perform 10,000 requests with 20 conn/s and 2 requests/conn (using > httperf - 1), memory used by nginx grows to about 40 MB. > When I repeat this benchmark, the used memory grows from 40 to 80 MB. 1. Do you use 3rd party modules? 2. Is this request served by nginx (e.g. static files) or proxied to some backend? 3. Memory usage depends on the features used: SSL, SSI, gzip, limit rate, geo module, etc. If gzip is used for static files, it is better to pre-compress them and use ngx_http_gzip_static_module. Also, to save memory, use 1 worker and set a reasonably small limit on connections: worker_processes 1; events { worker_connections 512; } -- Anton Yuzhaninov From marcin.deranek at booking.com Tue Oct 30 10:17:41 2012 From: marcin.deranek at booking.com (Marcin Deranek) Date: Tue, 30 Oct 2012 11:17:41 +0100 Subject: SSL client verification with chained CA Message-ID: <20121030111741.0d0b3aed@booking.com> Hi, So far we were able to run nginx (1.0.x & 1.2.x) with SSL client verification enabled where certs were signed by a single root CA: ssl on; ssl_certificate server_cert_signed_by_CA.pem; ssl_certificate_key server_key.pem; ssl_client_certificate ca_cert.pem; ssl_verify_client optional; Now we would like to introduce chained CAs: root CA -> intermediate CA -> client cert so nginx should be able to verify client certificates which are signed by the intermediate CA. Unfortunately I was not able to make it work (I see that development version 1.3.x has some additional options which would suggest that this setup can work with it).
Is this setup possible with nginx 1.2.x ? Some other people had an identical problem: http://stackoverflow.com/questions/8431528/nginx-ssl-certificate-authentication-signed-by-intermediate-ca-chain SSL module documentation (http://wiki.nginx.org/HttpSslModule) mentions that the SSL module "supports checking client certificates with two limitations", whereas the 2nd limitation seems to be related to the server certificate rather than the client certificate. Is this bad wording, or am I missing something there? Regards, Marcin From nginx-forum at nginx.us Tue Oct 30 11:13:34 2012 From: nginx-forum at nginx.us (peschuster) Date: Tue, 30 Oct 2012 07:13:34 -0400 Subject: Memory consumption of nginx In-Reply-To: <508FA8B3.20406@citrin.ru> References: <508FA8B3.20406@citrin.ru> Message-ID: I don't use any third-party modules, neither SSL nor gzip. nginx was compiled using the following parameters: > --with-debug \ > --without-http_rewrite_module \ > --without-http_gzip_module nginx only serves one static file. Changing the size of the file (10 KB to 200 B) has no effect on the memory consumption of nginx. Here is my nginx.conf: > user nobody; > worker_processes 1; > > error_log logs/error.log; > > pid logs/nginx.pid; > > events { > worker_connections 768; > } > > http { > include mime.types; > default_type application/octet-stream; > > log_format main '$remote_addr - $remote_user [$time_local] "$request" ' > '$status $body_bytes_sent "$http_referer" ' > '"$http_user_agent" "$http_x_forwarded_for"'; > > access_log logs/access.log main; > > sendfile on; > > keepalive_timeout 65; > > > server { > listen 80; > server_name localhost; > > location / { > root html; > index index.html index.htm; > } > } > } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232328,232368#msg-232368 From nginx-forum at nginx.us Tue Oct 30 11:33:14 2012 From: nginx-forum at nginx.us (hristoc) Date: Tue, 30 Oct 2012 07:33:14 -0400 Subject: Rewrite or proxy_pass to internal ip ?
In-Reply-To: <20121029210442.GT17159@craic.sysops.org> References: <20121029210442.GT17159@craic.sysops.org> Message-ID: <21fdff2033876f5d6b94e450e8c485b2.NginxMailingListEnglish@forum.nginx.org> I get real html from the internal server with curl, and it is exactly what I expect. Through nginx I fixed the bad gateway problem, but the problem I now see is that nginx does not access remote directories after /. For example: https://www.mydomain.com/server1 I see the html page, but if I organize my page with jquery, for example, and it is included from a different directory (js/jquery.js), I see it is not loaded in the html, i.e. nginx does not serve sub directories, only the top-level dir. For example, if I try to access a js file directly over the web, like http://www.mydomain1.com/server1/js/jsquery.js, I get error 404 file not found. I think nginx does not forward this request to the internal server and tries to find the js file in a local directory. How can I resolve this issue? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232298,232358#msg-232358 From marcin.deranek at booking.com Tue Oct 30 11:38:55 2012 From: marcin.deranek at booking.com (Marcin Deranek) Date: Tue, 30 Oct 2012 12:38:55 +0100 Subject: SSL client verification with chained CA In-Reply-To: <20121030111741.0d0b3aed@booking.com> References: <20121030111741.0d0b3aed@booking.com> Message-ID: <20121030123855.12679b38@booking.com> On Tue, 30 Oct 2012 11:17:41 +0100 Marcin Deranek wrote: > would suggest that this setup can work with it). Is this setup > possible with nginx 1.2.x ? I have enabled additional debugging and got this in the logs: client SSL certificate verify error: (26:unsupported certificate purpose) while SSL handshaking Looks like our security team needs to re-generate certificates. I'm sorry for the noise. Marcin From ne at vbart.ru Tue Oct 30 11:41:41 2012 From: ne at vbart.ru (Valentin V.
Bartenev) Date: Tue, 30 Oct 2012 15:41:41 +0400 Subject: Memory consumption of nginx In-Reply-To: References: Message-ID: <201210301541.42076.ne@vbart.ru> On Monday 29 October 2012 21:13:01 peschuster wrote: > Hi, > > I cross-compiled nginx for microblaze processors > (http://github.com/peschuster/nginx) and am currently doing some > performance benchmarks with nginx running on a microblaze processor with a > custom designed SoC on a FPGA. > > However, I am having problems with the memory consumption of nginx: > > When I perform 10,000 requests with 20 conn/s and 2 requests/conn (using > httperf - 1), memory used by nginx grows to about 40 MB. > When I repeat this benchmark, the used memory grows from 40 to 80 MB. > > The problem with this behavior is that my SoC only has 256 MB of RAM in > total (the file system also runs completely from RAM using a ramdisk). > Therefore nginx crashes the complete system by consuming all memory for > longer/extended benchmark scenarios. > > Is this the intended behavior of nginx? Why isn't it "re-using" the already > allocated memory? Nginx releases allocated memory after it completes each request. > Any hints on how I can circumvent or track down this problem? > It is most likely that your system memory allocator does not return freed memory to the OS. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From reallfqq-nginx at yahoo.fr Tue Oct 30 14:03:22 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 30 Oct 2012 10:03:22 -0400 Subject: Windows Client downloads something from nginx for Windows is too slow In-Reply-To: References: <508E4E2A.3010207@gmail.com> Message-ID: You needed specific configuration for your specific usage with specific requirements. Why should that become the default? Keep it light and clean for the most common usage: serving lightweight files, as every standard web page should be. --- *B. R.* On Tue, Oct 30, 2012 at 4:26 AM, Liang Suilong wrote: > Thank you, B.R.
> > I add output_buffers 1 128k option into nginx.conf. It has a great > improvement. The download rate raise from 400KB/s to 4~5MB/s. > > I think the problem may affect all version nginx on Windows. Should it add > output_buffer option into nginx.conf by default? > > Sent From My Heart > My Page: http://www.liangsuilong.info > > > > > > On Tue, Oct 30, 2012 at 10:14 AM, B.R. wrote: > >> I guess the problem come from buffers. >> For big files, using asynchronous serving is recommended. >> >> Take a look at: >> http://nginx.org/en/docs/http/ngx_http_core_module.html#aio >> http://wiki.nginx.org/HttpCoreModule#aio >> --- >> *B. R.* >> >> >> >> On Mon, Oct 29, 2012 at 9:58 PM, Liang Suilong wrote: >> >>> On Mon, Oct 29, 2012 at 5:36 PM, Sharl Jimh Tsin wrote: >>> >>>> -----BEGIN PGP SIGNED MESSAGE----- >>>> Hash: SHA1 >>>> >>>> ? 2012?10?29? 17:24, Liang Suilong ??: >>>> > Hi, all. >>>> > >>>> > I am a newbie. I am using nginx for Windows to host static big >>>> > files. I find out a strange problem. When Windows client downloads >>>> > files from nginx for Windows server, the download rate is very very >>>> > slow. The files are bigger than 200MB. >>>> > >>>> > Network: 100Mbps LAN >>>> > >>>> > Server SIde: Windows XP 32bit, nginx for Windows, I test 1.0.15, >>>> > 1.2.4 and 1.3.7. They are the same. >>>> > >>>> > Windows Client Side: WIndows XP 32bit, Firefox, IE, Chrome and curl >>>> > are the same. The download rate is not more than 300KB/s >>>> > >>>> > Linux Client Side: Linux, Fedora 17 x86_64 and Ubuntu, Firefox and >>>> > Chrome and curl are the same. The download rate are more than >>>> > 8MB/s. Almost 10MB/s. >>>> > >>>> > The same network 100Mbps LAN >>>> > >>>> > Server Side: Fedora 17 x86_64, nginx 1.0.15 >>>> > >>>> > Windows Client Side: WIndows XP 32bit, Firefox, IE, Chrome and curl >>>> > are the same. The download rate is more than 8MB/s. 
Almost 10MB/s >>>> > >>>> > Linux Client Side: Linux, Fedora 17 x86_64 and Ubuntu, Firefox and >>>> > Chrome and curl are the same. The download rate are more than >>>> > 8MB/s. Almost 10MB/s. >>>> > >>>> > It looks quite strange. I just use the default configure file for >>>> > nginx and put the files into nginx root directory. I do not change >>>> > anything. I checked HTTP Header. I try to change Firefox user-agent >>>> > on Windows to Linux edition. It could not change download rate, >>>> > still very very slow. I try to use Apache on Windows to host the >>>> > files. The download rate is normal, about 8~10MB/s. >>>> > >>>> > How could I debug the problem? Which logs and configure files >>>> > should I provide? If you have some good advices, Please tell me >>>> > soon. >>>> > >>>> > Thank you >>>> > >>>> > Liang Suilong >>>> > >>>> > Sent From My Heart My Page: http:// >>>> > www.liangsuilong.info >>>> > >>>> > >>>> > >>>> > >>>> > _______________________________________________ nginx mailing list >>>> > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx >>>> > >>>> >>>> sounds impossible,is it caused by client configuration? >>>> >>>> >>> Thank you, Sharl >>> >>> I think the problem should be impossible, however it is exactly existed. >>> I test nginx for Windows on Windows Server 2008, and the client side is >>> Windows 7. In the same 100Mbps LAN network environment, the download rate >>> seems to have a little improvement, about 400KB/s. It is still very slow. >>> >>> I tried to reinstall new Windows. There is no change on nginx for >>> Windows. >>> >>> - -- >>>> Best regards, >>>> Sharl.Jimh.Tsin (From China *Obviously Taiwan INCLUDED*) >>>> >>>> Using Gmail? Please read this important notice: >>>> http://www.fsf.org/campaigns/jstrap/gmail?10073. 
>>>> -----BEGIN PGP SIGNATURE----- >>>> Version: GnuPG v1.4.11 (GNU/Linux) >>>> Comment: Using GnuPG with Mozilla - http://www.enigmail.net/ >>>> >>>> iQEcBAEBAgAGBQJQjk4pAAoJEEYmNy4jisTj8+sH/2bcodbpexhjoywZW2T8M8DD >>>> c+N5LHjRuKU5zVoOV8tcsU0guhbt4VG+1nGJtOtLV0fn0qgHUKSpVnLriZnpMXpy >>>> 1n0I76Uu3O4fsL0T2iYswmMl10ycEJLZwtf5WIPLlNVVGzz0J2gp3FjzkvTVy4Da >>>> FKt/qMsqKRMjb6tsCCdL/AXoTyDjFXmNef8jrjMLfdxot87qvipxVRaXgiOKYAtM >>>> uClfMP50/1d89qt+DrdJrB7CBDWOUUYaUSKFhj3622/s5J1CouF/EdkXI6sZlshl >>>> uH93RQKoYGuSAJN8z/kPSiXVc9XCSAEvDXtKno3iTljO8LjqN+T8T1GKQDxoXks= >>>> =Un5o >>>> -----END PGP SIGNATURE----- >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Oct 30 14:08:40 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 30 Oct 2012 18:08:40 +0400 Subject: nginx-1.3.8 Message-ID: <20121030140839.GJ40452@mdounin.ru> Changes with nginx 1.3.8 30 Oct 2012 *) Feature: the "optional_no_ca" parameter of the "ssl_verify_client" directive. Thanks to Mike Kazantsev and Eric O'Connor. *) Feature: the $bytes_sent, $connection, and $connection_requests variables can now be used not only in the "log_format" directive. Thanks to Benjamin Grössing. *) Feature: the "auto" parameter of the "worker_processes" directive. *) Bugfix: "cache file ... has md5 collision" alert.
*) Bugfix: in the ngx_http_gunzip_filter_module. *) Bugfix: in the "ssl_stapling" directive. -- Maxim Dounin http://nginx.com/support.html From reallfqq-nginx at yahoo.fr Tue Oct 30 14:08:38 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 30 Oct 2012 10:08:38 -0400 Subject: Nginx answers too large requests with bad status In-Reply-To: <201210301317.43878.ne@vbart.ru> References: <20121030001237.GU17159@craic.sysops.org> <201210301317.43878.ne@vbart.ru> Message-ID: Alright, I got it now. That's because the protocol defines its version after the request address... which is already too long to be fully read. Thanks for your help! No bug in Nginx indeed :o) I'll go set up better buffers, --- *B. R.* On Tue, Oct 30, 2012 at 5:17 AM, Valentin V. Bartenev wrote: > On Tuesday 30 October 2012 04:57:40 B.R. wrote: > > @Valentin > > I am not seeking a solution to the 414 problem, I know how to address the > > issue > > Thanks anyway > > > > @Francis > > The browser output has just be made to illustrate the wrong answer. > > > > The problem triggered using a Python script which checked against the > > 'status' field of a HTTP answer to decide on its actions. > > [...] > > There's no Status-Line in HTTP/0.9. So, the problem is that your script do > not > support HTTP/0.9. > > wbr, Valentin V. Bartenev > > -- > http://nginx.com/support.html > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From magic.drums at gmail.com Tue Oct 30 14:16:21 2012 From: magic.drums at gmail.com (magic.drums at gmail.com) Date: Tue, 30 Oct 2012 11:16:21 -0300 Subject: nginx-1.3.8 In-Reply-To: <20121030140839.GJ40452@mdounin.ru> References: <20121030140839.GJ40452@mdounin.ru> Message-ID: A question about the "auto" worker_processes directive: I have 4 processors. Will nginx utilize all 4 processors, or do I need...
Excuse me for my English. Regards. On Tue, Oct 30, 2012 at 11:08 AM, Maxim Dounin wrote: > Changes with nginx 1.3.8 30 Oct > 2012 > > *) Feature: the "optional_no_ca" parameter of the "ssl_verify_client" > directive. > Thanks to Mike Kazantsev and Eric O'Connor. > > *) Feature: the $bytes_sent, $connection, and $connection_requests > variables can now be used not only in the "log_format" directive. > Thanks to Benjamin Grössing. > > *) Feature: the "auto" parameter of the "worker_processes" directive. > > *) Bugfix: "cache file ... has md5 collision" alert. > > *) Bugfix: in the ngx_http_gunzip_filter_module. > > *) Bugfix: in the "ssl_stapling" directive. > > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Victor Pereira Fono : +56981882989 Linkedin : http://www.linkedin.com/in/magicdrums -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Oct 30 15:11:31 2012 From: nginx-forum at nginx.us (gerard breiner) Date: Tue, 30 Oct 2012 11:11:31 -0400 Subject: Site URL not completed. Bad redirection ? In-Reply-To: <20121029205729.GS17159@craic.sysops.org> References: <20121029205729.GS17159@craic.sysops.org> Message-ID: <0852115bf5770b844a4174395001a1f9.NginxMailingListEnglish@forum.nginx.org> Hello Francis, I replaced: location ^~/SOGo { proxy_pass http://127.0.0.1:20000; proxy_redirect http://127.0.0.1:20000 default; By location /SOGo { proxy_pass http://127.0.0.1:20000/; } And I get the URL I expected, https://sogo.mydomain/SOGo, but when I want to authenticate I get the message: "You can not authenticate because the witnesses (cookies) on your browser is disabled. Enable cookies in your web browser and try again." How to configure this in nginx? I tried to add "add_header Set-Cookie lcid=1033;" without success.
Best regards Gerard Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232325,232391#msg-232391 From nginx-forum at nginx.us Tue Oct 30 17:01:58 2012 From: nginx-forum at nginx.us (zsero) Date: Tue, 30 Oct 2012 13:01:58 -0400 Subject: nginx 0day exploit for nginx + fastcgi PHP In-Reply-To: References: Message-ID: <4c72548c65ff84791d491ce2f0119208.NginxMailingListEnglish@forum.nginx.org> I know it's an old thread, but my question really belongs here. 1. Can you confirm that with recent PHP implementations (5.3.9+) this fix isn't needed anymore? 2. Does it mean that some PHP implementations, like the up-to-date ones in the DotDeb repository, don't need it (PHP 5.4.8 and PHP 5.3.18), but Debian stable still needs it (5.3.3-7+squeeze14)? http://packages.debian.org/stable/php/ http://www.dotdeb.org/ Thanks! Reinis Rozitis Wrote: ------------------------------------------------------- > > Seriously if it doesn't works for lighttppd that use php fcgi and > works > > for nginx it is nginx issue isn't it ? > > With certain configuration similar issues are also in apache but it > doesn't necessary mean the webserver is at fault. > > Since php 5.3.9 the fpm sapi has 'security.limit_extensions' > (defaults to '.php') which limits the extensions of the main script > FPM will allow to parse. > It should prevent poor configuration mistakes. > > > rr > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,88845,232398#msg-232398 From contact at jpluscplusm.com Tue Oct 30 17:15:30 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 30 Oct 2012 17:15:30 +0000 Subject: Site URL not completed. Bad redirection ?
In-Reply-To: <0852115bf5770b844a4174395001a1f9.NginxMailingListEnglish@forum.nginx.org> References: <20121029205729.GS17159@craic.sysops.org> <0852115bf5770b844a4174395001a1f9.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 30 October 2012 15:11, gerard breiner wrote: > I get the message : > > "You can not authenticate because the witnesses (cookies) on your browser is > disabled. Enable cookies in your web browser and try again." > > How to configure this in nginx ? I tried to add "add_header Set-Cookie > lcid=1033;" without success. Nginx doesn't touch cookies unless you explicitly tell it to. Perhaps you have them disabled in your browser. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From francis at daoine.org Tue Oct 30 19:37:46 2012 From: francis at daoine.org (Francis Daly) Date: Tue, 30 Oct 2012 19:37:46 +0000 Subject: Site URL not completed. Bad redirection ? In-Reply-To: <0852115bf5770b844a4174395001a1f9.NginxMailingListEnglish@forum.nginx.org> References: <20121029205729.GS17159@craic.sysops.org> <0852115bf5770b844a4174395001a1f9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121030193746.GX17159@craic.sysops.org> On Tue, Oct 30, 2012 at 11:11:31AM -0400, gerard breiner wrote: Hi there, > I replaced : > > location ^~/SOGo { > proxy_pass http://127.0.0.1:20000; > proxy_redirect http://127.0.0.1:20000 default; > > By > > location /SOGo { > proxy_pass http://127.0.0.1:20000/; > } I don't think that that is what I suggested. How does that help "GET /" become "GET /SOGo"? > And I get the URL I expected https://sogo.mydomain/SOGo but when I want > authenticate to I get the message : > > "You can not authenticate because the witnesses (cookies) on your browser is > disabled. Enable cookies in your web browser and try again." 
If I was trying to fix this, I would try to learn what the expected http request/response sequence was; and then see what the actual http request/response sequence is; and then compare the two. So: starting from the nginx machine, what "curl -i" query or queries should you use to do a login? (Possibly the logs of the sogo server will tell you.) What happens when you make those queries of (a) the sogo server; and (b) the nginx server? What is different? Specifically, are there Set-Cookie lines that refer to the sogo internal hostname that should really refer to the nginx external hostname? Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Oct 30 19:48:49 2012 From: francis at daoine.org (Francis Daly) Date: Tue, 30 Oct 2012 19:48:49 +0000 Subject: Rewrite or proxy_pass to internal ip ? In-Reply-To: <21fdff2033876f5d6b94e450e8c485b2.NginxMailingListEnglish@forum.nginx.org> References: <20121029210442.GT17159@craic.sysops.org> <21fdff2033876f5d6b94e450e8c485b2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121030194849.GY17159@craic.sysops.org> On Tue, Oct 30, 2012 at 07:33:14AM -0400, hristoc wrote: Hi there, > I get real html from internal server with curl and is exact what I expect. So: when you do "curl -0 -i http://192.168.1.1/" from the nginx server, you get the same content as when you do "curl -i http://public-ip/admin" from somewhere on the internet? That suggests that the proxy_pass is working. > For example if I try to access direct js file over the > web like: http://www.mydomain1.com/server1/js/jsquery.js I get error 404 > file not found. I think nginx does not forward this request to internal > serve and trying to find the js file on local directory. > > How to resolve this issue ? What is your nginx configuration? In particular: what "location" blocks do you have? http://nginx.org/r/location for details. 
My guess is that you have something like "location ~ js" as well as "location /admin", and nginx chooses that location block for this request. If that is true, then possibly using "^~" will work for you. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Oct 30 20:10:08 2012 From: francis at daoine.org (Francis Daly) Date: Tue, 30 Oct 2012 20:10:08 +0000 Subject: Nginx answers too large requests with bad status In-Reply-To: References: <20121030001237.GU17159@craic.sysops.org> <201210301317.43878.ne@vbart.ru> Message-ID: <20121030201008.GZ17159@craic.sysops.org> On Tue, Oct 30, 2012 at 10:08:38AM -0400, B.R. wrote: Hi there, > Alright, I got it now. > That's because the protocol defines its version after the request > address... which is already too long to be fully read. Correct. Your script thought it made a HTTP/1.1 request, and expected a HTTP/1.1 response. nginx thought your script made a HTTP/0.9 request, and provided a HTTP/0.9 response. When something expects a HTTP/1.1 response and gets a HTTP/0.9 response, bad things can happen unless it was written defensively. > Thanks for your help! > No bug in Nginx indeed :o) I'd say not a bug, but an inconvenience. I imagine that the cost to the HTTP/0.9 clients of seeing an extra """ HTTP/1.0 414 No """ at the start of this response is less than the cost to the HTTP/1.x clients of not seeing those two lines. But until someone who cares does the work, the cost to nginx of changing the current behaviour is unknown, but certainly non-zero. > I'll go set up better buffers, Yes. The buffers should be set to include all requests that you wish to accept. If your system uses particularly big requests, then you need big buffers. 
Cheers, f -- Francis Daly francis at daoine.org From agentzh at gmail.com Wed Oct 31 01:10:19 2012 From: agentzh at gmail.com (agentzh) Date: Tue, 30 Oct 2012 18:10:19 -0700 Subject: [ANN] ngx_openresty devel version 1.2.4.5 released In-Reply-To: References: Message-ID: Hello, folks! I am happy to announce the new devel version of ngx_openresty, 1.2.4.5: http://openresty.org/#Download Below is the complete change log for this release, as compared to the last (devel) release, 1.2.4.3: * applied the official hotfix #1 patch to LuaJIT 2.0.0 beta11. * see details here: * upgraded LuaNginxModule to 0.7.3. * feature: added the get_keys method for the shared memory dictionaries for fetching all the (or the specified number of) keys (default to 1024 keys). thanks Brian Akins for the patch. The HTML version of the change log with some helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1002004 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster and ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Enjoy! -agentzh From greminn at gmail.com Wed Oct 31 02:11:51 2012 From: greminn at gmail.com (Simon) Date: Wed, 31 Oct 2012 15:11:51 +1300 Subject: Secure shared hosting environment basics In-Reply-To: References: Message-ID: On 29/10/2012, at 3:11 PM, Simon wrote: > Hi There, > > I know this question has probably been churned over and over in various different variations, so i applogise in advance if its one of those 'here we go again?' posts :) > > We are a hosting provider that is currently using the LAMP stack to host our clients websites. 
80% of these are on our shared hosting instance that currently has PHP 5.3 using safemode to assist in keeping sites to their own area. We are looking to build a new shared service and have been playing with nginx/php-fpm (on debian squeeze) and love the easy configuration and speed. > > My question is: What is the best way to go about setting up a shared hosting environment for 1000's of customers, which is secure from the point of view that the customers can't access other sites. > > Im not looking for a how-to, really just a "this is the correct way you should look todo this" hint that will point us in the right direction?. > > Thank you for reading! From what I have read so far, nginx is just not really set up for truly secure shared hosting (multiple owners) - is this the case? Simon From reallfqq-nginx at yahoo.fr Wed Oct 31 02:28:40 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 30 Oct 2012 22:28:40 -0400 Subject: Secure shared hosting environment basics In-Reply-To: References: Message-ID: ??? I would be interested in knowing which elements you considered to reach such a conclusion. AFAIK, whichever Webserver you use, the problem lies in the environment & server configuration. --- *B. R.* On Tue, Oct 30, 2012 at 10:11 PM, Simon wrote: > > On 29/10/2012, at 3:11 PM, Simon wrote: > > > Hi There, > > > > I know this question has probably been churned over and over in various > different variations, so i applogise in advance if its one of those 'here > we go again?' posts :) > > > > We are a hosting provider that is currently using the LAMP stack to host > our clients websites. 80% of these are on our shared hosting instance that > currently has PHP 5.3 using safemode to assist in keeping sites to their > own area. We are looking to build a new shared service and have been > playing with nginx/php-fpm (on debian squeeze) and love the easy > configuration and speed.
> > My question is: What is the best way to go about setting up a shared > hosting environment for 1000's of customers, which is secure from the point > of view that the customers can't access other sites. > > Im not looking for a how-to, really just a "this is the correct way you > should look todo this" hint that will point us in the right direction?. > > Thank you for reading! > > From what I have read so far, nginx is just not really set up for truly > secure shared hosting (multiple owners) - is this the case? > > Simon > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Oct 31 05:17:13 2012 From: nginx-forum at nginx.us (jonefee) Date: Wed, 31 Oct 2012 01:17:13 -0400 Subject: strange 499 response code ?? Message-ID: <1e23b34184b41bfd83844fa2b7084390.NginxMailingListEnglish@forum.nginx.org> My server uses nginx+php-fpm, with nginx as a reverse proxy server, but our nginx access log has a lot of 499 response codes. The documentation says that 499 means "client aborted the connection". I think the reasons that cause a "client abort connection" come down to two: the first is that the client is in a weak network environment, the client request can't be delivered successfully, and the client has canceled the request. The second is that php-fpm is too busy, such that for a long time nginx can't get the response from the upstream server (php-fpm), and the client canceled the request. My question is that the first reason can be accepted, but the second can't: when I checked the nginx access log, whenever there is a 499 response code in the access log file there is always no "upstream_response_time" field value in the entry, so it seems php-fpm is not busy and could have served the request successfully.
Here are some sample 499 nginx access log entries and some explanation: 171.213.141.240 - 31/Oct/2012:13:09:34 +0800 POST /php/xyz/iface/?key=be7fc0cdf0afbfedff1e09ec6443823a&device_id=351870052329449&network=1&ua=LT18i&os=2.3.4&version=3.7&category_id=2&f_ps=10&s=5&ps=30&pn=1&pcat=2&up_tm=1351655451272 HTTP/1.1 499 0($body_bytes_sent) - Dalvik/1.4.0 (Linux; U; Android 2.3.4; LT18i Build/4.0.2.A.0.62) 21($content_length) 2.448($request_time) -($upstream_response_time) - - - 124.236.132.41 - 31/Oct/2012:13:09:34 +0800 POST /php/xyz/iface/?key=3179f25bc69e815ad828327ccf10c539&device_id=356330048225461&network=1&ua=GT-S5670&os=2.2.1&version=3.7.1&category_id=0&f_ps=10&s=5&ps=30&pn=1&pcat=2&f=1 HTTP/1.1 499 0 - Dalvik/1.2.0 (Linux; U; Android 2.2.1; GT-S5670 Build/FROYO) 44 10.034 - Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232418,232418#msg-232418 From nginx-forum at nginx.us Wed Oct 31 11:02:55 2012 From: nginx-forum at nginx.us (gerard breiner) Date: Wed, 31 Oct 2012 07:02:55 -0400 Subject: Site URL not completed. Bad redirection ? In-Reply-To: <20121030193746.GX17159@craic.sysops.org> References: <20121030193746.GX17159@craic.sysops.org> Message-ID: Hello, curl -k -i https://127.0.0.1 as well as curl -k -i https://sogo.mydomain.fr give: ------------------------------ HTTP/1.1 302 Found Server: nginx/0.7.67 Date: Wed, 31 Oct 2012 10:37:27 GMT Content-Type: text/plain; charset=utf-8 Connection: keep-alive content-length: 0 location: /SOGo/ -------------------------------- From sogo.log Oct 31 11:44:05 sogod [29392]: SOGoRootPage successful login for user 'gbreiner' - expire = -1 grace = -1 [31/Oct/2012:11:44:05 GMT] "POST /SOGoSOGoSOGo/connect HTTP/1.0" 200 27/62 0.016 - - 4K I think the "POST /SOGoSOGoSOGo/" is wrong ... (it is not the browser, because under apache2 it works fine). Thanks.
Best regards Gerard Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232325,232424#msg-232424 From nginx-forum at nginx.us Wed Oct 31 11:28:58 2012 From: nginx-forum at nginx.us (peschuster) Date: Wed, 31 Oct 2012 07:28:58 -0400 Subject: Memory consumption of nginx In-Reply-To: <201210301541.42076.ne@vbart.ru> References: <201210301541.42076.ne@vbart.ru> Message-ID: VBart Wrote: > It most likely that your system memory allocator do not return freed > memory to the OS. How can I check this? I suspect this should be part of the OS (Linux)? Could you give me any keyword to read more about this? I looked at /proc/meminfo after the first and second batch of requests. The newly allocated memory is categorized as "Active(anon)": > Active: 36976 kB > Inactive: 59280 kB > Active(anon): 34376 kB > .. > AnonPages: 34400 kB vs. > Active: 71092 kB > Inactive: 60088 kB > Active(anon): 68428 kB > .. > AnonPages: 68452 kB Thanks. Peter Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232328,232426#msg-232426 From nginx-forum at nginx.us Wed Oct 31 13:42:26 2012 From: nginx-forum at nginx.us (hristoc) Date: Wed, 31 Oct 2012 09:42:26 -0400 Subject: Rewrite or proxy_pass to internal ip ? In-Reply-To: <20121030194849.GY17159@craic.sysops.org> References: <20121030194849.GY17159@craic.sysops.org> Message-ID: <82ca00318ee898de8de2312a523e32fb.NginxMailingListEnglish@forum.nginx.org> Thank you, I resolved the problem. My knowledge of nginx is limited and I was unable to redirect all the js and html files to their locations, and I did not spend much time on that, but your answer gave me the idea that something in my configuration was probably making nginx look for these files in different places, i.e. some logic in my config was wrong. I added a new virtual host name in nginx and proxy_pass everything to the internal ip, and everything works perfectly. Thank you for your time. Here is my configuration if someone is looking for something similar.
With this I redirect the web interface of an internal HP iLO port to the external web. # HTTPS server server { listen 443 ssl; listen [::]:443 ssl; server_name virt.domain1.com; ssl on; ssl_certificate /etc/nginx/server.crt; ssl_certificate_key /etc/nginx/server.key; keepalive_timeout 70; ssl_session_cache shared:SSL:5m; ssl_session_timeout 5m; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; location ~ /\.ht { deny all; } location / { proxy_pass https://192.168.1.200/; proxy_set_header Accept-Encoding ""; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; proxy_redirect off; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232298,232421#msg-232421 From nginx-forum at nginx.us Wed Oct 31 14:31:12 2012 From: nginx-forum at nginx.us (rmalayter) Date: Wed, 31 Oct 2012 10:31:12 -0400 Subject: consistent hashing using split_clients Message-ID: <2f09dd2d8ebfabcc6f3590926d922115.NginxMailingListEnglish@forum.nginx.org> I'm looking for a way to do consistent hashing without any 3rd-party modules or perl/lua. I came up with the idea of generating a split_clients block and list of upstreams via script, so we can add/remove backends without blowing out the cache on each upstream when a backend server is added, removed or otherwise offline. What I have looks like the config below. The example only includes 16 upstreams for clarity, and is generated by sorting by the SHA1 hash of server names for each upstream bucket along with the bucket name. Unfortunately, to get an even distribution of requests to upstream buckets with consistent hashing, I am actually going to need at least 4096 upstreams, and the corresponding number of entries in split_clients. Will 4096 entries in a single split_clients block pose a performance issue?
How many "buckets" does the hash table for split_clients have (it doesn't seem to be configurable)? Thanks for any insights. I haven't actually built a test environment for this yet as the setup is quite a bit of work, so I want to find out if I am doing something stupid before committing a lot of time and resources. upstream hash0 {server 192.168.47.104; server 192.168.47.102 backup;} upstream hash1 {server 192.168.47.104; server 192.168.47.102 backup;} upstream hash2 {server 192.168.47.105; server 192.168.47.104 backup;} upstream hash3 {server 192.168.47.101; server 192.168.47.103 backup;} upstream hash4 {server 192.168.47.102; server 192.168.47.101 backup;} upstream hash5 {server 192.168.47.101; server 192.168.47.104 backup;} upstream hash6 {server 192.168.47.103; server 192.168.47.102 backup;} upstream hash7 {server 192.168.47.101; server 192.168.47.105 backup;} upstream hash8 {server 192.168.47.102; server 192.168.47.105 backup;} upstream hash9 {server 192.168.47.105; server 192.168.47.102 backup;} upstream hashA {server 192.168.47.105; server 192.168.47.103 backup;} upstream hashB {server 192.168.47.103; server 192.168.47.105 backup;} upstream hashC {server 192.168.47.103; server 192.168.47.105 backup;} upstream hashD {server 192.168.47.103; server 192.168.47.104 backup;} upstream hashE {server 192.168.47.101; server 192.168.47.105 backup;} upstream hashF {server 192.168.47.104; server 192.168.47.101 backup;} split_clients "${scheme}://${host}${request_uri}" $uhash { 6.25% hash0; 6.25% hash1; 6.25% hash2; 6.25% hash3; 6.25% hash4; 6.25% hash5; 6.25% hash6; 6.25% hash7; 6.25% hash8; 6.25% hash9; 6.25% hashA; 6.25% hashB; 6.25% hashC; 6.25% hashD; 6.25% hashE; * hashF;} location /foo { proxy_pass http://$uhash;} Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232428,232428#msg-232428 From ne at vbart.ru Wed Oct 31 14:42:28 2012 From: ne at vbart.ru (Valentin V. 
Bartenev) Date: Wed, 31 Oct 2012 18:42:28 +0400 Subject: Memory consumption of nginx In-Reply-To: References: <201210301541.42076.ne@vbart.ru> Message-ID: <201210311842.28583.ne@vbart.ru> On Wednesday 31 October 2012 15:28:58 peschuster wrote: > VBart Wrote: > > It most likely that your system memory allocator do not return freed > > memory to the OS. > > How can I check this? I suspect this should be part of the OS (Linux)? Usually it's a part of glibc in Linux. > Could you give me any keyword to read more about this? Try man mallopt. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From mdounin at mdounin.ru Wed Oct 31 14:50:47 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 31 Oct 2012 18:50:47 +0400 Subject: consistent hashing using split_clients In-Reply-To: <2f09dd2d8ebfabcc6f3590926d922115.NginxMailingListEnglish@forum.nginx.org> References: <2f09dd2d8ebfabcc6f3590926d922115.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121031145047.GC40452@mdounin.ru> Hello! On Wed, Oct 31, 2012 at 10:31:12AM -0400, rmalayter wrote: > I'm looking for a way to do consistent hashing without any 3rd-party modules > or perl/lua. I came up with the idea of generating a split_clients and list > of upstreams via script, so we can add/remove backends without blowing out > the cache on each upstream when a backend server is added, removed or > otherwise offline. > > What I have looks like the config below. The example only includes 16 > upstreams for clarity, and is generated by sorting by the SHA1 hash of > server names for each upstream bucket along with the bucket name. > > Unfortunately, to get an even distribution of requests to upstream buckets > with consistent hashing, I am actually going to need at least 4096 > upstreams, and the corresponding number of entries in split_clients. > > Will 4096 entries in single split_clients block pose a performance issue?
> Will split_clients have a distribution problem with a small percentage like > "0.0244140625%"? Percentage values are stored in fixed point with 2 digits after the point. Configuration parsing will complain if you'll try to specify more digits after the point. > How many "buckets" does the hash table for split_clients > have (it doesn't seem to be configurable)? The split_clients algorithm doesn't use buckets, as it's not a hash table. Instead, it calculates hash function of the original value, and selects resulting value based on a hash function result. See http://nginx.org/r/split_clients for details. [...] -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Wed Oct 31 15:40:58 2012 From: nginx-forum at nginx.us (cmpan) Date: Wed, 31 Oct 2012 11:40:58 -0400 Subject: FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream In-Reply-To: References: Message-ID: I think it could be a bug. It's OK if I set the document root to the top-level folder; otherwise I get the same problem as you. location ~ \.php$ { root /web; # ok #root /web/htdocs; # your problem fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } I don't know how to deal with it. The version is nginx-1.2.4. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230967,232432#msg-232432 From nginx-forum at nginx.us Wed Oct 31 16:46:09 2012 From: nginx-forum at nginx.us (rmalayter) Date: Wed, 31 Oct 2012 12:46:09 -0400 Subject: consistent hashing using split_clients In-Reply-To: <20121031145047.GC40452@mdounin.ru> References: <20121031145047.GC40452@mdounin.ru> Message-ID: <92a9002a80000f041bbe90480856003a.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: > > Percentage values are stored in fixed point with 2 digits after > the point. Configuration parsing will complain if you'll try to > specify more digits after the point.
> > > How many "buckets" does the hash table for split_clients > > have (it doesn't seem to be configurable)? > > The split_clients algorithm doesn't use buckets, as it's not a > hash table. Instead, it calculates hash function of the > original value, and selects resulting value based on a hash > function result. See http://nginx.org/r/split_clients for > details. > So clearly I am down the wrong path here, and split_clients just cannot do what I need. I will have to rethink things. The 3rd-party ngx_http_consistent_hash module appears to be unmaintained and uncommented. It also uses binary search to find an upstream instead of a hash table, making it O(log(n)) for each request. My C skills haven't been used in anger since about 1997, so updating or maintaining it myself would probably be a fruitless exercise. Perhaps I will have to fall back to using perl to get a hash bucket for the time being. I assume 4096 upstreams is not a problem for nginx, given that it is used widely by CDNs. A long time ago Igor mentioned he was working on a variable-based upstream hashing module using MurmurHash3: http://forum.nginx.org/read.php?29,212712,212739#msg-212739 I suppose other work took priority. Maybe Igor has some code stashed somewhere that just needs testing and polishing. If not, it seems that the current "ip_hash" scheme used in nginx could be easily adapted to fast consistent hashing by simply: using MurmurHash3 or similar instead of the current simple multiply+modulo scheme; allowing arbitrary nginx variables as hash input instead of just the IP address during upstream selection; at initialization, building a hash table of 4096 (or whatever configurable number of) buckets; and filling the hash table by sorting the server array on murmurhash3(bucket_number + server_name + server_weight_counter) and taking the first server. Is there a mechanism for sponsoring development along these lines and getting it into the official nginx distribution?
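The bucket-filling scheme described above can be sketched in a few lines of Python. This is only an illustration: it uses SHA-1 from the standard library in place of MurmurHash3, and the server addresses and bucket count are made up for the example.

```python
import hashlib

def fill_buckets(servers, num_buckets=4096):
    """For each bucket, rank the servers by the hash of
    (bucket_number, server_name) and take the first one --
    the 'sort and take the first server' step."""
    table = []
    for b in range(num_buckets):
        table.append(min(servers,
                         key=lambda s: hashlib.sha1(f"{b}:{s}".encode()).digest()))
    return table

def pick(table, key):
    """Map a request key (e.g. scheme://host/uri) onto a bucket,
    then onto that bucket's server."""
    h = int.from_bytes(hashlib.sha1(key.encode()).digest()[:4], "big")
    return table[h % len(table)]
```

Because each bucket independently picks the server that "wins" for it, removing one server only reassigns the buckets that pointed at that server, which is exactly the cache-preserving property the generated split_clients config is after.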
Consistent hashing is the one commonly-used proxy server function that nginx seems to be missing. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232428,232434#msg-232434 From chiterri at operamail.com Wed Oct 31 23:47:26 2012 From: chiterri at operamail.com (chiterri at operamail.com) Date: Wed, 31 Oct 2012 16:47:26 -0700 Subject: Incorrect SSL cert chain build order used/required by nginx 1.3.8 ? Message-ID: <1351727246.12496.140661147970765.741EC5D0@webmail.messagingengine.com> I'm running nginx/1.3.8 on linux/64. I'm installing a commercial cert in nginx (Comodo Essential SSL). When I build the SSL chain in order per instructions from Comodo (Root -> Intermediate(s)): https://comodosslstore.com/blog/how-do-i-make-my-own-bundle-file-from-crt-files.html I do cat AddTrustExternalCARoot.crt > my.domain.com.CHAIN.crt cat UTNAddTrustSGCCA.crt >> my.domain.com.CHAIN.crt cat ComodoUTNSGCCA.crt >> my.domain.com.CHAIN.crt cat EssentialSSLCA_2.crt >> my.domain.com.CHAIN.crt cat STAR_domain.com.crt >> my.domain.com.CHAIN.crt If I use this CHAIN'd cert in my nginx conf, ssl on; ssl_verify_client off; ssl_certificate "/path/to/my.domain.com.CHAIN.crt"; ssl_certificate_key "/path/to/my.domain.com.key"; and start nginx, it fails, ==> error.log <== 2012/10/31 16:36:44 [emerg] 8666#0: SSL_CTX_use_PrivateKey_file("/path/to/my.domain.com.key") failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch) If I simply switch the cert CHAIN build order, so the personal site crt is *first*: + cat STAR_domain.com.crt > my.domain.com.CHAIN.crt - cat AddTrustExternalCARoot.crt > my.domain.com.CHAIN.crt + cat AddTrustExternalCARoot.crt >> my.domain.com.CHAIN.crt cat UTNAddTrustSGCCA.crt >> my.domain.com.CHAIN.crt cat ComodoUTNSGCCA.crt >> my.domain.com.CHAIN.crt cat EssentialSSLCA_2.crt >> my.domain.com.CHAIN.crt - cat STAR_domain.com.crt >> my.domain.com.CHAIN.crt then start nginx, it starts correctly, with no error.
The site's accessible from most locations. But a check with https://www.ssllabs.com/ssltest/index.html returns/reports "Chain issues Incorrect order" I'd like to get nginx to accept/use the correct/instructed CHAIN order so that it starts up correctly AND is reported as 'correct order' by testing sites. Is this a config issue on my end -- either nginx or the cert build? Or a bug? From francis at daoine.org Wed Oct 31 23:48:56 2012 From: francis at daoine.org (Francis Daly) Date: Wed, 31 Oct 2012 23:48:56 +0000 Subject: Site URL not completed. Bad redirection ? In-Reply-To: References: <20121030193746.GX17159@craic.sysops.org> Message-ID: <20121031234856.GD17159@craic.sysops.org> On Wed, Oct 31, 2012 at 07:02:55AM -0400, gerard breiner wrote: Hi there, > curl -k -i https://127.0.0.1 as > curl -k -i https://sogo.mydomain.fr give: > ------------------------------ > HTTP/1.1 302 Found > Server: nginx/0.7.67 > Date: Wed, 31 Oct 2012 10:37:27 GMT > Content-Type: text/plain; charset=utf-8 > Connection: keep-alive > content-length: 0 > location: /SOGo/ > -------------------------------- So it redirects to /SOGo/. What happens when you do that manually? curl -k -i https://127.0.0.1/SOGo/ Probably it will redirect again, or else return some html. What you probably want to do is to manually step through the full login sequence until you see the specific problem. Then you can concentrate on that one request. (Also: that doesn't look like nginx 1.2.4. Are you sure that your test system is exactly what you expect it to be?) > From sogo.log > Oct 31 11:44:05 sogod [29392]: SOGoRootPage successful login for user > 'gbreiner' - expire = -1 grace = -1 This is from a later time. So some other requests were involved here. > [31/Oct/2012:11:44:05 GMT] "POST /SOGoSOGoSOGo/connect HTTP/1.0" 200 27/62 > 0.016 - - 4K > > I think the "POST /SOGoSOGoSOGo/" is wrong ... Can you see where that request came from?
Probably it was the "action" of an HTML form within the response to a previous request. Maybe that will help show why SOGo is repeated here. (That said, the HTTP 200 response suggests that the web server was happy with the request.) > (it is not the navigator because under apache2 it works very fine). Searching the web for "sogo and nginx" returns articles from people who claim to have it working. I suggest you step back and do exactly one thing at a time. With your original "location ^~ /SOGo" block, did it all work apart from the initial redirect? If not, fix that first. The SOGo installation guide mentions an apache config file, and says "The default configuration will use mod_proxy and mod_headers to relay requests to the sogod parent process. This is suitable for small to medium deployments.". That suggests that your proxy_pass, proxy_redirect, and proxy_set_header directives may be enough. Good luck with it, f -- Francis Daly francis at daoine.org
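For reference, the mod_proxy setup the SOGo installation guide describes maps onto a small nginx block along these lines. This is only a sketch: the upstream address 127.0.0.1:20000 is an assumption (a commonly documented sogod port) and must match the local installation, and further proxy_set_header directives may be needed depending on the SOGo version.

```nginx
location ^~ /SOGo {
    # Address of the sogod parent process -- assumed default, adjust as needed
    proxy_pass http://127.0.0.1:20000;
    # Preserve the public host name and client address for the backend
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # Rewrite Location headers from the backend to the public name
    proxy_redirect default;
}
```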