From agentzh at gmail.com Sun Mar 1 08:24:35 2015 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sun, 1 Mar 2015 00:24:35 -0800 Subject: [ANN] OpenResty 1.7.10.1 released Message-ID: Hi folks! I am pleased to announce the new formal release, 1.7.10.1, of the OpenResty bundle: http://openresty.org/#Download Special thanks go to all our contributors and users for making this happen! Below is the complete change log for this release, as compared to the last formal release (1.7.7.2): * upgraded the Nginx core to 1.7.10. * see the changes here: * bugfix: applied the upstream_filter_finalize patch to the nginx core to fix corrupted $upstream_response_time variable values when "filter_finalize" and error_page are both used. thanks Daniel Bento for the report and Maxim Dounin for the patch. * bugfix: ./configure: added "--without-http_upstream_least_conn_module" and "--without-http_upstream_keepalive_module" to the usage text (for "--help") to reflect recent changes in the nginx core. thanks Seyhun Cavus for the report. * bugfix: ./configure: renamed the "--without-http_limit_zone_module" option to "--without-http_limit_conn_module" to reflect the change in recent nginx cores. thanks Seyhun Cavus for the report. * upgraded LuaJIT to v2.1-20150223: https://github.com/openresty/luajit2/tags * imported Mike Pall's latest changes: * x86/x64: fix code generation for fused test/arith ops. thanks to Alexander Nasonov and AFL. * fix string to number conversion. thanks to Lesley De Cruz. * fix lexer error for chunks without tokens. * LJ_FR2: fix bytecode generation for method lookups. * FFI: Prevent DSE across "ffi.string()". * upgraded the ngx_lua module to 0.9.15. * bugfix: the value of the Location response header set by ngx.redirect() or the ngx.header.HEADER API might get overwritten by nginx's header filter to the fully qualified form (with the scheme and host parts). 
* bugfix: lua_shared_dict: use of Lua numbers as the value in shared dict might lead to unaligned accesses which could lead to crashes on architectures requiring data alignment (like ARMv6). thanks Shuxin Yang for the fix and thanks Stefan Parvu and Brandon B for the report. * bugfix: using error codes ("ngx.ERROR" or >=300) in ngx.exit() in header_filter_by_lua* might lead to Lua stack overflow. * feature: improved the debugging event logging for timers created by ngx.timer.at(). * optimize: fixed padding holes in our struct memory layouts for 64-bit systems to save a little memory. * optimize: header_filter_by_lua*: removed a piece of useless code. thanks Zi Lin for the report. * doc: emphasized the capability of using nginx variables in the Lua file path in content_by_lua_file/rewrite_by_lua_file/access_by_lua_file. * upgraded the ngx_srcache module to 0.29. * bugfix: upon cache hits, we might let the nginx core's header filter module overwrite the "Location" response header's values like "/foo/bar" to the fully-qualified form (like "http://test.com/foo/bar"). thanks AlexClineBB for the report. * upgraded resty-cli to 0.02. * bugfix: we did not explicitly specify the pid file path, which may conflict with the default pid path if the user compiles nginx with the "--pid-path=PATH" ./configure option. thanks fancyrabbit for the report. The HTML version of the change log with lots of helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1007010 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, Lua/LuaJIT, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. 
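One changelog item above notes the capability of using nginx variables in the Lua file path for content_by_lua_file and friends. A minimal sketch of that capability (the location, capture name, and paths here are hypothetical, not from the release notes):

```nginx
# Dispatch /app/<name> to /etc/nginx/lua/<name>.lua (hypothetical layout).
# The regex capture restricts $handler to safe characters so the variable
# cannot be abused for path traversal.
location ~ ^/app/([a-zA-Z0-9_]+)$ {
    set $handler $1;
    content_by_lua_file /etc/nginx/lua/$handler.lua;
}
```

The ngx_lua documentation advises care when variables appear in such paths, since malicious request input could otherwise select unintended files.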
The latest test report can always be found here: http://qa.openresty.org And we have always been running the latest OpenResty in CloudFlare's global CDN network. Have fun! Best regards, -agentzh From nginx-forum at nginx.us Sun Mar 1 12:05:43 2015 From: nginx-forum at nginx.us (shumisha) Date: Sun, 01 Mar 2015 07:05:43 -0500 Subject: Does ssl_trusted_certificate actually send certs to client? In-Reply-To: <20150212131148.GS19012@mdounin.ru> References: <20150212131148.GS19012@mdounin.ru> Message-ID: <7ac870455f585d001cab7cb425937fac.NginxMailingListEnglish@forum.nginx.org> Hi I'm facing this problem as well, though in a different context: OCSP stapling. Everything looks good without OCSP stapling: my ssl_certificate file contains my domain (wildcard) cert from AlphaSSL, which doesn't require any intermediate cert, so the domain cert is the only one in that file. However, to enable OCSP stapling, I have to specify the full cert chain in ssl_trusted_certificate. I do this by including first the GlobalSign root, then the AlphaSSL intermediate. This works fine, and OCSP stapling is operating normally. But as a side effect, clients now also receive the full chain of certificates. I think, from your response above, that openssl auto chain building is responsible for that (you also made the same reply in http://forum.nginx.org/read.php?2,248153,248168#msg-248168) 1 - You say: "It shouldn't happen as long as there is at least one intermediate cert in ssl_certificate file". That's precisely what I want to avoid: including the whole chain in the ssl_certificate file. Only adding the AlphaSSL intermediate cert to ssl_certificate (i.e. NOT adding the GlobalSign root cert) results in an error (#20). 2 - Googling a bit more, and totally shooting in the dark here, I also found that OpenSSL has an SSL_MODE_NO_AUTO_CHAIN flag that "...Allow an application to disable the automatic SSL chain building....". Isn't it something you could use to disable the auto chain building?
(originated from http://t93518.encryption-openssl-development.encryptiontalk.info/ssl-server-root-certs-and-client-auth-t93518.html I think) Thanks for any input anyway! Cheers Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256613,256970#msg-256970 From nginx-forum at nginx.us Sun Mar 1 17:05:32 2015 From: nginx-forum at nginx.us (blason) Date: Sun, 01 Mar 2015 12:05:32 -0500 Subject: nginx reverse proxy sizing Message-ID: <75b52a2e6f74fd52b9f5f48dd29d2d85.NginxMailingListEnglish@forum.nginx.org> Hi Guys, This is my first forum post and I am pretty new to nginx. I am planning to set up nginx as a reverse proxy for certain of my websites on SUSE Enterprise Linux, and I have some questions about sizing, so I would appreciate it if the community can help me here: 1. How is sizing done for a reverse proxy? Is it based on hits, I/Os, etc.? 2. Can we build redundancy into the reverse proxy, since my proxy will be serving very critical resources? 3. Is there any GUI for log analysis, like weblyzer or something, which can show nice log analysis? 4. Is it advisable to introduce WAF modules with nginx, like those from Trustwave or Comodo? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256972,256972#msg-256972 From nginx-forum at nginx.us Mon Mar 2 10:32:34 2015 From: nginx-forum at nginx.us (ertyi) Date: Mon, 02 Mar 2015 05:32:34 -0500 Subject: How write at access log only part of $request? Message-ID: <8104f3ae9f003f06ade5737c56b45292.NginxMailingListEnglish@forum.nginx.org> Hi All, I have the following question: is it possible to reduce the stored $request in the nginx access log file to a smaller size? We have very heavy traffic, and only some starting substring of the $request value is significant for us. Is it possible to write to the log, for example, something like SubString( $request, 0, someLengthFromStart ) instead of the full $request?
Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256976,256976#msg-256976 From nginx-forum at nginx.us Mon Mar 2 11:11:24 2015 From: nginx-forum at nginx.us (mastercan) Date: Mon, 02 Mar 2015 06:11:24 -0500 Subject: Fastcgi_cache sometimes returns statuscode 500 Message-ID: <65eaa0429226d868715d025379b36090.NginxMailingListEnglish@forum.nginx.org> Hello, Nginx (all versions since September 2014, but at least 1.7.9, 1.7.10) sometimes returns HTTP status code 500 when serving pages from fastcgi_cache. Each time this happens, the following conditions hold true: *) $upstream_cache_status = HIT (so we don't even hit php-fpm) *) $body_bytes_sent = 0 (which is strange, because I've got an error page defined for HTTP code 500, and that is obviously not sent) I've configured an error page for status code 500 which is correctly shown on errors from PHP - this was tested for status code 503. Nginx seems to have a bug here, I guess? Why would it otherwise return a zero-byte body? This happens for about 30-40 requests a day. All other requests (several thousand) correctly return status code 200. Server load/CPU load is very low... I've configured the fastcgi cache as follows: fastcgi_cache_path /dev/shm/ngxcache levels=1:2 use_temp_path=off keys_zone=MYSITE:5M max_size=50M inactive=2h; Maybe somebody has some advice? br, mastercan Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256977,256977#msg-256977 From mdounin at mdounin.ru Mon Mar 2 12:45:58 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Mar 2015 15:45:58 +0300 Subject: Fastcgi_cache sometimes returns statuscode 500 In-Reply-To: <65eaa0429226d868715d025379b36090.NginxMailingListEnglish@forum.nginx.org> References: <65eaa0429226d868715d025379b36090.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150302124558.GO19012@mdounin.ru> Hello!
On Mon, Mar 02, 2015 at 06:11:24AM -0500, mastercan wrote: > Hello, > > Nginx (all versions since September 2014, but at least 1.7.9, 1.7.10) > sometimes returns HTTP status code 500, when serving pages from > fastcgi_cache. > > Each time this happens, following conditions hold true: > *) $upstream_cache_status = HIT (so we don't even hit php-fpm) > *) $body_bytes_sent = 0 (which is strange, because I've got an error page > defined for http code 500, that is obviously not sent) > > I've configured an error page for status code 500 which is correctly shown > on errors from PHP - this was tested for status code 503. > Nginx seems to have a bug here, I guess? Why would it otherwise return a > zero bytes size body? > > This happens for about 30-40 requests a day. All other requests (several > thousand) are correctly returning status code 200. Server load/CPU load is > very low... > > I've configured fastcgi cache as follows: > fastcgi_cache_path /dev/shm/ngxcache levels=1:2 use_temp_path=off > keys_zone=MYSITE:5M max_size=50M inactive=2h; > > Maybe somebody knows advice? Try looking into the error log. When nginx returns 500, it used to complain to the error log explaining the reason. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Mar 2 12:50:46 2015 From: nginx-forum at nginx.us (mastercan) Date: Mon, 02 Mar 2015 07:50:46 -0500 Subject: Fastcgi_cache sometimes returns statuscode 500 In-Reply-To: <20150302124558.GO19012@mdounin.ru> References: <20150302124558.GO19012@mdounin.ru> Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > Try looking into the error log. When nginx returns 500, it used to > complain to the error log explaining the reason. > Unfortunately the error log for that vhost does not reveal anything at the specific times in question... 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256977,256980#msg-256980 From mdounin at mdounin.ru Mon Mar 2 12:52:16 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Mar 2015 15:52:16 +0300 Subject: How write at access log only part of $request? In-Reply-To: <8104f3ae9f003f06ade5737c56b45292.NginxMailingListEnglish@forum.nginx.org> References: <8104f3ae9f003f06ade5737c56b45292.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150302125216.GP19012@mdounin.ru> Hello! On Mon, Mar 02, 2015 at 05:32:34AM -0500, ertyi wrote: > Hi All, > > I have next question: > is it possible at nginx access log file decrease stored $request to smaller > size? > > We have very big traffic, and $request part is significant for us only at > some start substring of $request value. > > Is it possible write at log for example some kind of SubString( $request, 0, > someLengthFromStart ) instead of full $request length ? For example, you can do something like this using map:

map $request $request_truncated {
    "~(?<tmp>.{0,100})" $tmp;
}

See the log_format directive description for details on how to configure custom access logging formats. http://nginx.org/r/map http://nginx.org/r/log_format -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Mar 2 12:56:35 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Mar 2015 15:56:35 +0300 Subject: Fastcgi_cache sometimes returns statuscode 500 In-Reply-To: References: <20150302124558.GO19012@mdounin.ru> Message-ID: <20150302125635.GQ19012@mdounin.ru> Hello! On Mon, Mar 02, 2015 at 07:50:46AM -0500, mastercan wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! > > > > Try looking into the error log. When nginx returns 500, it used to > > complain to the error log explaining the reason. > > > > Unfortunately the error log for that vhost does not reveal anything at the > specific times in question...
This makes me think that it is just a cached 500 response from your backend then. If in doubt, you can obtain details using the debug log, see http://wiki.nginx.org/Debugging. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Mar 2 13:04:53 2015 From: nginx-forum at nginx.us (mastercan) Date: Mon, 02 Mar 2015 08:04:53 -0500 Subject: Fastcgi_cache sometimes returns statuscode 500 In-Reply-To: <20150302125635.GQ19012@mdounin.ru> References: <20150302125635.GQ19012@mdounin.ru> Message-ID: <8a81a0dac1051339c640934a62377453.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: > This makes me think that it is just a cached 500 response from > your backend then. If in doubt, you can obtain details using > debug log, see http://wiki.nginx.org/Debugging. > I also considered that, but then I'd expect to see at least hundreds of 500 status codes, since other users are hitting the same page a few seconds earlier or later. They get status code 200 though, with a valid response. I even looked at the content of the cache file and it seemed ok. Thanks for the link - I'll turn on debugging for the error log and see if I can find something. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256977,256983#msg-256983 From nginx-forum at nginx.us Mon Mar 2 13:09:58 2015 From: nginx-forum at nginx.us (ertyi) Date: Mon, 02 Mar 2015 08:09:58 -0500 Subject: How write at access log only part of $request? In-Reply-To: <20150302125216.GP19012@mdounin.ru> References: <20150302125216.GP19012@mdounin.ru> Message-ID: thanks very much - checking ) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256976,256984#msg-256984 From nginx-forum at nginx.us Mon Mar 2 13:49:04 2015 From: nginx-forum at nginx.us (lolallalol) Date: Mon, 02 Mar 2015 08:49:04 -0500 Subject: FastCGI caching mistakes... Message-ID: <18e51c1fe7919ae4949620d1284f887d.NginxMailingListEnglish@forum.nginx.org> Hi, I encountered several failures with fastcgi caching.
Requests with special characters are not cached. Same for requests with a query string. When I try to purge those URLs, each one sends me a 404. This is my caching key, used in a location directive: fastcgi_cache_key "$scheme$request_method$host$request_uri$is_args$args#$http_range$isAjax"; $isAjax : string (empty or "ajax") Can anyone tell me why it may not work? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256987,256987#msg-256987 From nginx-forum at nginx.us Mon Mar 2 14:11:11 2015 From: nginx-forum at nginx.us (mastercan) Date: Mon, 02 Mar 2015 09:11:11 -0500 Subject: Fastcgi_cache sometimes returns statuscode 500 In-Reply-To: <8a81a0dac1051339c640934a62377453.NginxMailingListEnglish@forum.nginx.org> References: <20150302125635.GQ19012@mdounin.ru> <8a81a0dac1051339c640934a62377453.NginxMailingListEnglish@forum.nginx.org> Message-ID: <75ed919949587e2341a071db5dbae597.NginxMailingListEnglish@forum.nginx.org> I've had 2 cases with status code 500 now since setting the error log to debug level: The error msg: "epoll_wait() reported that client prematurely closed connection while sending request to upstream" It's interesting to note that: If a "normal" file (no caching involved) is requested and the client closes the connection prematurely, the status code is 200 and the response body is 0 bytes. If first a PHP script is called, which responds with an X-Accel-Redirect to the cached file, and the client closes the connection prematurely, the status code is 500 and the response body is 0 bytes. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256977,256988#msg-256988 From mdounin at mdounin.ru Mon Mar 2 14:51:02 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Mar 2015 17:51:02 +0300 Subject: Does ssl_trusted_certificate actually send certs to client?
In-Reply-To: <7ac870455f585d001cab7cb425937fac.NginxMailingListEnglish@forum.nginx.org> References: <20150212131148.GS19012@mdounin.ru> <7ac870455f585d001cab7cb425937fac.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150302145102.GU19012@mdounin.ru> Hello! On Sun, Mar 01, 2015 at 07:05:43AM -0500, shumisha wrote: > Hi > I'm facing this problem as well, though in a different context: OCSP > stapling. Everything looks good without OCSP stapling: my ssl_certificate > file contain my domain (wildcard) cert from AlphaSSL, that doesn't require > any intermediate cert, so the domain cert is the only one in that file. > > However to enable OCSP stapling, I have to specify the full cert chain in > ssl_trusted_certificate. I do this by including first GlobalSign root, then > alpha SSL intermediate. This works fine, and OCSP stapling is operating > normally. > > But as a side effect, now clients also receives the full chain of > certificates. I think, from your response above, that openssl auto chain > building is responsible for that (you also made the same reply in > http://forum.nginx.org/read.php?2,248153,248168#msg-248168) > > 1 - You say: "It shouldn't happen as long as there is at least one > intermediate cert in ssl_certificate file". That's precisely what I want to > avoid, include the while chain in the ssl_certificate file. Only adding > alphassl intermediate cert in ssl_certificate (ie NO adding GlobalSign root > cert) results in an error #20) > > 2 - Googling a bit more, and totally shooting in the dark here, I also found > that Openssl has an SSL_MODE_NO_AUTO_CHAIN flag that "...Allow an > application to disable the automatic SSL chain building....". Isn't it > something you could use to disable the auto chain building? (originated from > http://t93518.encryption-openssl-development.encryptiontalk.info/ssl-server-root-certs-and-client-auth-t93518.html > I think) > > Thanks for any input anyway! Thanks, this looks like correct flag to use. 
Try the following patch:

--- a/src/event/ngx_event_openssl.c
+++ b/src/event/ngx_event_openssl.c
@@ -277,6 +277,10 @@ ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_
     SSL_CTX_set_mode(ssl->ctx, SSL_MODE_RELEASE_BUFFERS);
 #endif

+#ifdef SSL_MODE_NO_AUTO_CHAIN
+    SSL_CTX_set_mode(ssl->ctx, SSL_MODE_NO_AUTO_CHAIN);
+#endif
+
     SSL_CTX_set_read_ahead(ssl->ctx, 1);

     SSL_CTX_set_info_callback(ssl->ctx, ngx_ssl_info_callback);

-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Mar 2 15:03:34 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Mar 2015 18:03:34 +0300 Subject: Fastcgi_cache sometimes returns statuscode 500 In-Reply-To: <75ed919949587e2341a071db5dbae597.NginxMailingListEnglish@forum.nginx.org> References: <20150302125635.GQ19012@mdounin.ru> <8a81a0dac1051339c640934a62377453.NginxMailingListEnglish@forum.nginx.org> <75ed919949587e2341a071db5dbae597.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150302150334.GW19012@mdounin.ru> Hello! On Mon, Mar 02, 2015 at 09:11:11AM -0500, mastercan wrote: > I've had 2 cases with status code 500 now since setting error log to debug > level: > > The error msg: "epoll_wait() reported that client prematurely closed > connection while sending request to upstream" It's expected to be 499, not 500. If it's 500, it probably means that there is some invalid error_page handling configured. > It's interesting to note that: > If a "normal" file (no caching involved) is requested and the client closes > the connection prematurely, the status code is 200 and the response body is > 0 bytes. > If first a php script is called, which responds with a X-Accel-Redirect to > the cached file, and the client closes the connection prematurely, the > status code is 500 and the response body is 0 bytes. When talking to upstream servers, nginx tries to detect if a client closed the connection. If it does so, nginx terminates request processing with the 499 status code.
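Maxim's explanation of the 499 mechanism can be illustrated with a small config sketch (the location and backend address are hypothetical, not from the thread):

```nginx
# Keep processing the FastCGI request even if the client disconnects
# early; without this, nginx aborts the upstream request and logs 499.
# The backend address is a placeholder.
location /app/ {
    fastcgi_ignore_client_abort on;
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;
}
```

With the directive off (the default), a premature client close terminates the upstream request; with it on, nginx lets the FastCGI response complete, which also allows a cacheable response to be stored.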
The fastcgi_ignore_client_abort directive can be used to control the behaviour in the case of the fastcgi module, see http://nginx.org/r/fastcgi_ignore_client_abort for details. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Mar 2 15:53:41 2015 From: nginx-forum at nginx.us (shumisha) Date: Mon, 02 Mar 2015 10:53:41 -0500 Subject: Does ssl_trusted_certificate actually send certs to client? In-Reply-To: <20150302145102.GU19012@mdounin.ru> References: <20150302145102.GU19012@mdounin.ru> Message-ID: <17b39f607bc22b65dfe4ba08af8531b8.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, Just did that and it works fine for me! The warning "chain contains anchor" is gone from the Qualys SSL test page, while OCSP stapling is on, as well as ssl_stapling_verify. Side note: after applying this patch, I realized my config was actually wrong: the ssl_certificate file was indeed lacking my SSL cert provider's intermediate cert, and the trust chain verification started to fail. Previously, this error was masked by openssl auto-building the trust chain using the AlphaSSL intermediate found in ssl_trusted_certificate. Also, I applied the patch to nginx 1.6.2, which I'm using. Assuming this needs more testing, I hope it can make it into an upcoming release. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256613,256996#msg-256996 From nginx-forum at nginx.us Mon Mar 2 16:16:23 2015 From: nginx-forum at nginx.us (lolallalol) Date: Mon, 02 Mar 2015 11:16:23 -0500 Subject: FastCGI caching mistakes... In-Reply-To: <18e51c1fe7919ae4949620d1284f887d.NginxMailingListEnglish@forum.nginx.org> References: <18e51c1fe7919ae4949620d1284f887d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8ee4586f1b2f8884436e4e096260d4fc.NginxMailingListEnglish@forum.nginx.org> Ok, it works for the query string now. I saw that $request_uri already contains the query string.
location ~ ^/purge(/.*) {
    fastcgi_cache_purge FASTCGICACHE "$scheme$request_method$host$1$is_args$args#$http_range$isAjax";
}

I use $1$is_args$args like $request_uri. But special characters are probably encoded during the location regex? So now, how to deal with special characters (?, ?...)? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256987,256997#msg-256997 From tronje85 at gmail.com Mon Mar 2 16:48:31 2015 From: tronje85 at gmail.com (Pascal Christen) Date: Mon, 2 Mar 2015 17:48:31 +0100 Subject: nginx symfony2 configuration Message-ID: I'm trying to set up an nginx webserver for a website that uses AngularJS for the frontend and Symfony2 for backend stuff. I'm having trouble setting up nginx for Symfony (the Angular configuration works fine). If I navigate in the browser to http://localhost:9090/backend/app_dev.php/, I get a 404 error from nginx. Here is my configuration file for nginx:

----
worker_processes 1;
error_log /usr/local/etc/nginx/logs/error.log;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 200;
    gzip on;

    server {
        listen 9090;
        server_name localhost;

        #angular app
        location / {
            root /Users/test/myproject/client/;
            index index.html index.htm;
        }

        #symfony backend
        location /backend {
            alias /Users/test/myproject/backend/web;

            location ~ ^/(app|app_dev)\.php(/|$) {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_split_path_info ^(.+\.php)(/.*)$;
                fastcgi_param PATH_INFO $fastcgi_path_info;
                fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_read_timeout 50000;
            }
        }
    }
}
----

What am I doing wrong? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Mon Mar 2 19:50:53 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 2 Mar 2015 19:50:53 +0000 Subject: nginx symfony2 configuration In-Reply-To: References: Message-ID: <20150302195053.GB3010@daoine.org> On Mon, Mar 02, 2015 at 05:48:31PM +0100, Pascal Christen wrote: Hi there, > If I navigate in > browser to : http://localhost:9090/backend/app_dev.php/, I get a 404 Error > from nginx.. The request is /backend/app_dev.php/. > location / { It could match this location, but there is a better match. > location /backend { It could match this location, and it does because there is no better match. > alias /Users/test/myproject/backend/web; > > location ~ ^/(app|app_dev)\.php(/|$) { It does not match this nested location. So the config says "serve the directory /Users/test/myproject/backend/web/app_dev.php/". > What I'm doing wrong? What does the error_log say? Anything about "Not a directory"? f -- Francis Daly francis at daoine.org From tronje85 at gmail.com Mon Mar 2 20:49:37 2015 From: tronje85 at gmail.com (Pascal Christen) Date: Mon, 2 Mar 2015 21:49:37 +0100 Subject: nginx symfony2 configuration In-Reply-To: <20150302195053.GB3010@daoine.org> References: <20150302195053.GB3010@daoine.org> Message-ID: hi you're right: there is an "Not a directory" error. the error log is : 2015/03/02 21:40:52 [error] 20047#0: *8 "/Users/test/myproject/backend/web/app.php/index.html" is not found (20: Not a directory), client: 127.0.0.1, server: localhost, request: "GET /backend/app.php/ HTTP/1.1", host: "localhost:9090" how can I fix this problem? 2015-03-02 20:50 GMT+01:00 Francis Daly : > On Mon, Mar 02, 2015 at 05:48:31PM +0100, Pascal Christen wrote: > > Hi there, > > > If I navigate in > > browser to : http://localhost:9090/backend/app_dev.php/, I get a 404 > Error > > from nginx.. > > The request is /backend/app_dev.php/. > > > location / { > > It could match this location, but there is a better match. 
> > > location /backend { > > It could match this location, and it does because there is no better match. > > > alias /Users/test/myproject/backend/web; > > > > location ~ ^/(app|app_dev)\.php(/|$) { > > It does not match this nested location. > > So the config says "serve the directory > /Users/test/myproject/backend/web/app_dev.php/". > > > What I'm doing wrong? > > What does the error_log say? Anything about "Not a directory"? > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Mar 2 22:14:11 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 2 Mar 2015 22:14:11 +0000 Subject: nginx symfony2 configuration In-Reply-To: References: <20150302195053.GB3010@daoine.org> Message-ID: <20150302221411.GC3010@daoine.org> On Mon, Mar 02, 2015 at 09:49:37PM +0100, Pascal Christen wrote: > 2015-03-02 20:50 GMT+01:00 Francis Daly : > > On Mon, Mar 02, 2015 at 05:48:31PM +0100, Pascal Christen wrote: Hi there, > how can I fix this problem? > > > location /backend { > > > > It could match this location, and it does because there is no better match. > > > > > alias /Users/test/myproject/backend/web; > > > > > > location ~ ^/(app|app_dev)\.php(/|$) { > > > > It does not match this nested location. I guess that you want this request to match this nested location? If so, changing it to be location ~ ^/backend/(app|app_dev)\.php(/|$) { should make it match. That may be enough to get you looking in the right place for the full fix, if that is not it. 
f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Mar 3 11:02:41 2015 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 03 Mar 2015 06:02:41 -0500 Subject: [ANN] EBLB with IWCP for nginx, any OS (beta 0.2) Message-ID: Coming up in our next nginx for Windows release, but available now for everyone: (EBLB) Elastic Backend Load Balancer, and (IWCP) Inter Worker Communication Protocol [Lua required; for Windows, don't try it until our next release]. EBLB allows you to manage and scale your upstreams, including their IP:port addresses! EBLB only works for one worker (the one receiving the EBLB request); in order to use EBLB with all workers you need IWCP. If you only run with one worker you don't need IWCP. If you want to participate in testing, you can find the source, GUI, command-line examples and a working test nginx conf file here: http://nginx-win.ecsds.eu/devtest/EBLB_upstream_dev3.zip See also https://groups.google.com/forum/#!topic/openresty-en/wt_9m7GvROg IWCP has been designed as a generic solution to allow workers to talk to each other and send over messages and/or commands. IWCP uses shared memory and is blazing fast even under high load (a shared pool of 1 MB allows 10,000 messages to be processed in about 40 seconds with 8 workers). Possible other usages for IWCP are: - moving long-running requests to workers only doing this type of processing - swapping/scaling servers between overloaded upstreams in real time - dynamically changing the runtime environment of all or only some workers - if the shared memory pool is elsewhere, you could talk to other nginx instances and their workers ... needless to say, its potential is huge. Obviously IWCP is in its infant state and could do with more development; comments, use cases and new code are welcome. Source and other files will appear on github in due time (they are already there in fork merge requests). Enjoy!
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257019,257019#msg-257019 From johnzeng2013 at yahoo.com Tue Mar 3 15:12:29 2015 From: johnzeng2013 at yahoo.com (johnzeng) Date: Tue, 03 Mar 2015 23:12:29 +0800 Subject: Whether nginx can cache video file and large file via the way ( monitoring Port mirroring + send 302 http, packet to redirect ) ? Message-ID: <54F5CF5D.8010201@yahoo.com> Hi, I have a switch, and I hope to redirect video traffic to a cache by using its port mirroring feature, monitoring network traffic by forwarding a copy of each packet from the network switch. Can nginx listen for and identify mirrored data packets? Maybe we can use gor ( https://github.com/buger/gor/blob/master/README.md ). If nginx can identify them, I hope to match the video part, send a 302 HTTP packet to the end user via url_rewrite_access, and redirect the user's request to the cache. Is my thought the correct way? Please give me some advice; I am reading the details at http://xathrya.web.id/blog/2013/05/14/caching-youtube-video-with-squid-and-nginx/ http://blog.multiplay.co.uk/2014/04/lancache-dynamically-caching-game-installs-at-lans-using-nginx/ http://blog.multiplay.co.uk/2013/04/caching-steam-downloads-lans/ From nginx-forum at nginx.us Wed Mar 4 04:07:16 2015 From: nginx-forum at nginx.us (blason) Date: Tue, 03 Mar 2015 23:07:16 -0500 Subject: How do I show 403 error Message-ID: <5495a686340e5292494740ded29f9f7f.NginxMailingListEnglish@forum.nginx.org> Hi Guys, I just set up an nginx reverse proxy for my webservers, with ports 80/443 opened from the internet and very restricted access on the firewall to the destination servers, again only to particular servers on ports 80 and 443.
What I see in the logs is "GET http://www.baidu.com/ HTTP/1.1" 200 626 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022)" "-" I know people are trying to use my server as an open proxy, which is failing; even I am not able to browse those sites, and I am not getting any error page in my browser, just a blank page, which means the server is accepting the request but is unable to forward it. Hence I would like to know how to throw an error message in nginx so that those requests are not even accepted by my proxy. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257041,257041#msg-257041 From mdounin at mdounin.ru Wed Mar 4 13:46:52 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 4 Mar 2015 16:46:52 +0300 Subject: How do I show 403 error In-Reply-To: <5495a686340e5292494740ded29f9f7f.NginxMailingListEnglish@forum.nginx.org> References: <5495a686340e5292494740ded29f9f7f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150304134652.GE97191@mdounin.ru> Hello! On Tue, Mar 03, 2015 at 11:07:16PM -0500, blason wrote: > Hi Guys, > > I just setup nginx reverse proxy for my webservers which has port 80/443 > opened from internet and have very restircted access on firewall to the > destination servers again those to particular servers on port 80 and 443. > > What I see in the logs is > "GET http://www.baidu.com/ HTTP/1.1" 200 626 "-" "Mozilla/4.0 (compatible; > MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; > .NET CLR 3.5.21022)" "-" > > I know people are trying to use my server as open proxy which is failing and > even I am not able to browse the sites but I am not getting any error page > on my browser and just see blank page that means server is accepting the > request but unable to forward. > > Hence would like to know how do I throw error message in nginx so that those > requests would not even accepted by my proxy.
Try something like this in your config: server { listen 80 default_server; return 403; } See here for details: http://nginx.org/en/docs/http/request_processing.html -- Maxim Dounin http://nginx.org/ From johnzeng2013 at yahoo.com Wed Mar 4 13:52:41 2015 From: johnzeng2013 at yahoo.com (johnzeng) Date: Wed, 04 Mar 2015 21:52:41 +0800 Subject: Whether nginx can cache video file and large file via the way ( monitoring Port mirroring + send 302 http, packet to redirect ) ? In-Reply-To: <54F5CF5D.8010201@yahoo.com> References: <54F5CF5D.8010201@yahoo.com> Message-ID: <54F70E29.8020207@yahoo.com> [repost; body identical to the original message above] From cesarshlbrn at hotmail.com Wed Mar 4 15:56:54 2015 From: cesarshlbrn at hotmail.com (Julio Cesar Dos Santos) Date: Wed, 4 Mar 2015 12:56:54 -0300 Subject: nginx Digest, Vol 65, Issue 5 Message-ID: Please stop sending this email to me. Regards: Julio Cesar Dos Santos -original message- Subject: nginx Digest, Vol 65, Issue 5 From: nginx-request at nginx.org Date: 04/03/2015 09:00 Send nginx mailing list submissions to nginx at nginx.org To subscribe or unsubscribe via the World Wide Web, visit http://mailman.nginx.org/mailman/listinfo/nginx or, via email, send a message with subject or body 'help' to nginx-request at nginx.org You can reach the person managing the list at nginx-owner at nginx.org When replying, please edit your Subject line so it is more specific than "Re: Contents of nginx digest..." Today's Topics: 1. Whether nginx can cache video file and large file via the way ( monitoring Port mirroring + send 302 http, packet to redirect ) ? (johnzeng) 2. How do I show 403 error (blason) ---------------------------------------------------------------------- Message: 1 Date: Tue, 03 Mar 2015 23:12:29 +0800 From: johnzeng To: nginx at nginx.org Subject: Whether nginx can cache video file and large file via the way ( monitoring Port mirroring + send 302 http, packet to redirect ) ? Message-ID: <54F5CF5D.8010201 at yahoo.com> Content-Type: text/plain; charset=GB2312 [body identical to the original message above] ------------------------------ Message: 2 Date: Tue, 03 Mar 2015 23:07:16 -0500 From: "blason" To: nginx at nginx.org Subject: How do I show 403 error Message-ID: <5495a686340e5292494740ded29f9f7f.NginxMailingListEnglish at forum.nginx.org> Content-Type: text/plain; charset=UTF-8 [body identical to the original message above] ------------------------------ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx End of nginx Digest, Vol 65, Issue 5 ************************************ From nginx-forum at nginx.us Wed Mar 4 16:33:50 2015 From: nginx-forum at nginx.us (neophyte) Date: Wed, 04 Mar 2015 11:33:50 -0500 Subject: add_before_body Message-ID: <944da1cdecf011f4944e00ba38f8b681.NginxMailingListEnglish@forum.nginx.org> Hey!
I'm trying to use the add_before_body directive in nginx.conf: add_before_body \subsites\dropletlocal\library\global\func-global.class.php; After I add it, all the sites display "No input file specified.". If I use: add_before_body \subsites\dropletlocal\library\global\test.html; the file gets included fine and displays the HTML. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257054,257054#msg-257054 From fabian.sales at donweb.com Wed Mar 4 18:18:51 2015 From: fabian.sales at donweb.com (=?ISO-8859-1?Q?Fabi=E1n_M_Sales?=) Date: Wed, 04 Mar 2015 15:18:51 -0300 Subject: NGINX and mod_log_sql Message-ID: <54F74C8B.80007@donweb.com> Hello List. I use mod_log_sql-1.10 compiled into Apache/2.4.7, and it writes to MySQL correctly. Behind the nginx web server, however, the IP written to MySQL is the IP of the webserver, not the IP of the client accessing the website. Is it still possible to log the client IP rather than the webserver IP when using nginx? -- Firma Institucional -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Mar 4 19:58:00 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 4 Mar 2015 19:58:00 +0000 Subject: add_before_body In-Reply-To: <944da1cdecf011f4944e00ba38f8b681.NginxMailingListEnglish@forum.nginx.org> References: <944da1cdecf011f4944e00ba38f8b681.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150304195800.GE3010@daoine.org> On Wed, Mar 04, 2015 at 11:33:50AM -0500, neophyte wrote: Hi there, > I'm trying to use the add_before_body in nginx.conf, The documentation is at http://nginx.org/r/add_before_body > add_before_body > \subsites\dropletlocal\library\global\func-global.class.php; > > After I add it all the sites display a "No input file specified.". What does your configuration say about how to handle that request? (You may find it works better if you use / not \.)
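Note that the add_before_body argument is a URI that nginx processes as a subrequest, not a filesystem path, so it always uses forward slashes, and some location block then has to know how to serve that URI. A minimal sketch, reusing the path from the question purely as an illustration:

```nginx
# The argument is a URI handled as a subrequest -- always "/", never "\":
add_before_body /subsites/dropletlocal/library/global/func-global.class.php;
```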
f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Mar 4 21:51:43 2015 From: nginx-forum at nginx.us (neophyte) Date: Wed, 04 Mar 2015 16:51:43 -0500 Subject: add_before_body In-Reply-To: <20150304195800.GE3010@daoine.org> References: <20150304195800.GE3010@daoine.org> Message-ID: <0e869fa40702723c4d3330b0ec452293.NginxMailingListEnglish@forum.nginx.org> I've swapped to using /. https://gist.github.com/anonymous/423cbe11f91e4d342dd6 - My conf. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257054,257057#msg-257057 From francis at daoine.org Wed Mar 4 22:24:01 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 4 Mar 2015 22:24:01 +0000 Subject: add_before_body In-Reply-To: <0e869fa40702723c4d3330b0ec452293.NginxMailingListEnglish@forum.nginx.org> References: <20150304195800.GE3010@daoine.org> <0e869fa40702723c4d3330b0ec452293.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150304222401.GF3010@daoine.org> On Wed, Mar 04, 2015 at 04:51:43PM -0500, neophyte wrote: Hi there, > I've swapped to using /. > https://gist.github.com/anonymous/423cbe11f91e4d342dd6 - My conf. That is: """ http { add_before_body /subsites/dropletlocal/library/global/func-global.class.php; server { server_name localhost; location / { root htdocs/subsites/dropletlocal; index proxy.php; } location ~ \.php$ { root htdocs/subsites/dropletlocal; fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } server { server_name ***.es.vc; location / { root htdocs/subsites/***; index index.php; } location ~ \.php$ { root htdocs/subsites/***; fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } } """ more or less. So, what do you want nginx to do with your add_before_body subrequest? As in: which file on the filesystem do you want the fastcgi processor to read?
Possibly you will want to add something like location = /subsites/dropletlocal/library/global/func-global.class.php { fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME _the_file_that_you_care_about_; include fastcgi_params; } to each server block; possibly something else will work. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Mar 4 22:30:22 2015 From: nginx-forum at nginx.us (neophyte) Date: Wed, 04 Mar 2015 17:30:22 -0500 Subject: add_before_body In-Reply-To: <20150304222401.GF3010@daoine.org> References: <20150304222401.GF3010@daoine.org> Message-ID: <2dcc95cfa24bf9418555e357d769e74a.NginxMailingListEnglish@forum.nginx.org> I can't manually add it to each file because I use an include *.conf; I just need nginx to perform add_before_body without it being in each server block. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257054,257059#msg-257059 From nginx-forum at nginx.us Wed Mar 4 22:33:34 2015 From: nginx-forum at nginx.us (neophyte) Date: Wed, 04 Mar 2015 17:33:34 -0500 Subject: add_before_body In-Reply-To: <2dcc95cfa24bf9418555e357d769e74a.NginxMailingListEnglish@forum.nginx.org> References: <20150304222401.GF3010@daoine.org> <2dcc95cfa24bf9418555e357d769e74a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Let me clear that up: I need add_before_body /subsites/dropletlocal/library/global/func-global.class.php; to work on every site that is hosted. I can't manually edit every server block, or I'd have to do it for over 500 websites, in turn taking too long and causing too much downtime; I also can't because the directory would be different for each server, since websites with specific settings have their files laid out differently.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257054,257060#msg-257060 From francis at daoine.org Wed Mar 4 23:07:50 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 4 Mar 2015 23:07:50 +0000 Subject: add_before_body In-Reply-To: <2dcc95cfa24bf9418555e357d769e74a.NginxMailingListEnglish@forum.nginx.org> References: <20150304222401.GF3010@daoine.org> <2dcc95cfa24bf9418555e357d769e74a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150304230750.GG3010@daoine.org> On Wed, Mar 04, 2015 at 05:30:22PM -0500, neophyte wrote: Hi there, > I can't manually add it to each file because I use an include *.conf, I just > need Nginx to perform add_before_body without it being in each server block. I'm afraid that I have failed to understand what you want to configure nginx to do. So I will describe what you have configured nginx to do, and perhaps you or someone will be able to see where "what you want" and "what you have" differ. When you request http://localhost/index.html, nginx will send the content of the file /usr/local/nginx/htdocs/subsites/dropletlocal/index.html, prefixed by whatever your fastcgi server returns when it is asked to process the file /usr/local/nginx/htdocs/subsites/dropletlocal/subsites/dropletlocal/library/global/func-global.class.php. When you request http://one.es.vc/index.html, nginx will send the content of the file /usr/local/nginx/htdocs/subsites/one/index.html prefixed by whatever your fastcgi server returns when it is asked to process the file /usr/local/nginx/htdocs/subsites/one/subsites/dropletlocal/library/global/func-global.class.php. Your 500 included .conf files already have many lines the same in them. It is not clear to me why you can't add one more "include the_add_before_body_bit;" line to each. But I'm sure you have your reasons.
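For instance, the shared directive could live in one small snippet file (the filename here is hypothetical) that every generated per-site file pulls in with a single extra line:

```nginx
# conf/before_body.conf -- the shared snippet, written once:
add_before_body /subsites/dropletlocal/library/global/func-global.class.php;

# each of the ~500 per-site files then only needs, inside its server block:
#     include before_body.conf;
```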
Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Mar 4 23:39:37 2015 From: nginx-forum at nginx.us (neophyte) Date: Wed, 04 Mar 2015 18:39:37 -0500 Subject: add_before_body In-Reply-To: <20150304230750.GG3010@daoine.org> References: <20150304230750.GG3010@daoine.org> Message-ID: Even when I add add_before_body to every individual server block, it still gives me "Input file not specified" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257054,257062#msg-257062 From nginx-forum at nginx.us Thu Mar 5 00:02:19 2015 From: nginx-forum at nginx.us (neophyte) Date: Wed, 04 Mar 2015 19:02:19 -0500 Subject: add_before_body In-Reply-To: References: <20150304230750.GG3010@daoine.org> Message-ID: worker_processes 8; events { worker_connections 1024; } http { gzip off; root htdocs; add_before_body subsites/dropletlocal/library/global/func-global.class.php; #DOESN'T WORK add_before_body subsites/dropletlocal/library/global/test.html; #WORKS include mime.types; include web-hosting/*.conf; } I don't understand why the HTML file works fine but the PHP one won't. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257054,257063#msg-257063 From nginx-forum at nginx.us Thu Mar 5 01:10:28 2015 From: nginx-forum at nginx.us (Fry-kun) Date: Wed, 04 Mar 2015 20:10:28 -0500 Subject: curl "Connection refused" caused by SSL config Message-ID: Hi all, I have a strange problem with nginx: I tried to harden the TLS stack by setting the defaults to the recommended values from https://wiki.mozilla.org/Security/Server_Side_TLS but one server has to keep backward compatibility -- so I set it up as http { ssl_protocols TLSv1.1 TLSv1.2; ssl_ciphers ... ssl_prefer_server_ciphers on; server { listen 443 spdy; server_name .foo.com bar.foo.com; } server { ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers ...
ssl_prefer_server_ciphers on; listen 443 spdy; server_name foobar.foo.com; } } The problem is that foobar.foo.com starts freezing up randomly after a few seconds -- though it sometimes comes back for a short while. curl from outside reports the error as "connection refused"; curl localhost:443 responds properly with "* SSL: no alternative certificate subject name matches target host name 'localhost'" CPU usage is not much different from the older config, and there are no obvious errors in error_log. The problem goes away if the http-level ssl config is commented out (ssl_protocols, especially). I think that indicates this config is not properly parsed at the "server" level (does it not override the http level?) It seems that I can use the http-level config inside the .foo.com server config without interfering, but I'd like it to be the config default instead. Other notes: the 2 nginx hosts in question are behind a hardware load balancer Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257064,257064#msg-257064 From nginx-forum at nginx.us Thu Mar 5 01:17:55 2015 From: nginx-forum at nginx.us (Fry-kun) Date: Wed, 04 Mar 2015 20:17:55 -0500 Subject: curl "Connection refused" caused by SSL config In-Reply-To: References: Message-ID: <4ba93adad5e70fc011427c36a7c8868a.NginxMailingListEnglish@forum.nginx.org> Also noticed that the http-level config causes www.foo.com to show errors on https://www.200please.com/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257064,257065#msg-257065 From nginx-forum at nginx.us Thu Mar 5 03:16:02 2015 From: nginx-forum at nginx.us (blason) Date: Wed, 04 Mar 2015 22:16:02 -0500 Subject: How do I show 403 error In-Reply-To: <20150304134652.GE97191@mdounin.ru> References: <20150304134652.GE97191@mdounin.ru> Message-ID: <1e44700212180c80620b781cf0b16643.NginxMailingListEnglish@forum.nginx.org> Thanks Max. I am just trying that on my test server.
Also, I am going to use this proxy for MS Exchange OWA, and thus would not want to publish the /ActiveSync and /offlineaddressbook URLs through my reverse proxy. How can I block certain URLs or paths in nginx so that they are not accessible or proxied from the internet? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257041,257066#msg-257066 From nginx-forum at nginx.us Thu Mar 5 08:45:44 2015 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 05 Mar 2015 03:45:44 -0500 Subject: add_before_body In-Reply-To: References: <20150304230750.GG3010@daoine.org> Message-ID: <11b020e54e4c397c7169e52de36db1a4.NginxMailingListEnglish@forum.nginx.org> neophyte Wrote: ------------------------------------------------------- > Even when I add_before_body on every individual server it still gives > me "Input file not specified" This message is a PHP message: you have a php location block somewhere trying to run a PHP script and failing (script not found). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257054,257070#msg-257070 From mdounin at mdounin.ru Thu Mar 5 12:23:52 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 5 Mar 2015 15:23:52 +0300 Subject: How do I show 403 error In-Reply-To: <1e44700212180c80620b781cf0b16643.NginxMailingListEnglish@forum.nginx.org> References: <20150304134652.GE97191@mdounin.ru> <1e44700212180c80620b781cf0b16643.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150305122352.GK97191@mdounin.ru> Hello! On Wed, Mar 04, 2015 at 10:16:02PM -0500, blason wrote: > Thanks Max. > > I am just trying that on my test server. Also this proxy I am gonna use for > MS Exchange OWA and thus would not want to publish /ActiveSync and > /offlineaddressbook urls through my reverse proxy. > > How can I block certain urls or path in nginx so that those URLs would not > be accessible or proxied from internet?
http://nginx.org/r/location See here for even more docs: http://nginx.org/en/docs/ -- Maxim Dounin http://nginx.org/ From bertrand.caplet at chunkz.net Thu Mar 5 18:26:57 2015 From: bertrand.caplet at chunkz.net (Bertrand Caplet) Date: Thu, 05 Mar 2015 19:26:57 +0100 Subject: Weird issues with fastcgi_cache and images Message-ID: <54F89FF1.6010105@chunkz.net> Hey guys, I have a strange issue: when I activate fastcgi_cache, some images don't load on the first load of the page, but when I hit refresh they load. Here's my configuration: location ~ \.php$ { # FastCGI optimizing fastcgi_buffers 4 256k; fastcgi_buffer_size 128k; fastcgi_connect_timeout 3s; fastcgi_send_timeout 20s; fastcgi_read_timeout 20s; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; reset_timedout_connection on; # FASTCGI CACHE fastcgi_cache WORDPRESS; fastcgi_cache_key "$scheme$request_method$host$request_uri"; fastcgi_cache_use_stale error timeout invalid_header http_500; fastcgi_ignore_headers Cache-Control Expires Set-Cookie; fastcgi_cache_valid any 1h; fastcgi_cache_bypass $no_cache; fastcgi_no_cache $no_cache; fastcgi_cache_min_uses 2; [...] } and I do have exclusion conf for some cookies, the wordpress administration, etc., like this: if ($request_method = POST) { set $no_cache 1; } For testing purposes you can see it here: https://www.gamespirits.com/bande-dessinee/serie-bastien-vives-chez-shampooing/ Thanks in advance, -- CHUNKZ.NET - script kiddie and computer technician Bertrand Caplet, Flers (FR) Feel free to send encrypted/signed messages Key ID: FF395BD9 GPG FP: DE10 73FD 17EB 5544 A491 B385 1EDA 35DC FF39 5BD9 -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From nginx-forum at nginx.us Thu Mar 5 19:49:49 2015 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 05 Mar 2015 14:49:49 -0500 Subject: Weird issues with fastcgi_cache and images In-Reply-To: <54F89FF1.6010105@chunkz.net> References: <54F89FF1.6010105@chunkz.net> Message-ID: <32f233fe1e0c1bfb04cf86adc390ee3b.NginxMailingListEnglish@forum.nginx.org> Have you tried this with curl -i to see if it's not a browser cache issue? It sounds like a cached file with an expiry date that is still valid against the expiry date from your cache. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257078,257079#msg-257079 From nginx-forum at nginx.us Fri Mar 6 02:58:37 2015 From: nginx-forum at nginx.us (Fry-kun) Date: Thu, 05 Mar 2015 21:58:37 -0500 Subject: curl "Connection refused" caused by SSL config In-Reply-To: <4ba93adad5e70fc011427c36a7c8868a.NginxMailingListEnglish@forum.nginx.org> References: <4ba93adad5e70fc011427c36a7c8868a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <885e2cf903979307ab906806d6080aa2.NginxMailingListEnglish@forum.nginx.org> So it looks like the ssl config is valid per-port only. If I set up a server on a different port with a different ssl config, it works. Is this a bug or is it by design? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257064,257086#msg-257086 From nginx-forum at nginx.us Fri Mar 6 09:51:05 2015 From: nginx-forum at nginx.us (clementsm) Date: Fri, 06 Mar 2015 04:51:05 -0500 Subject: curl "Connection refused" caused by SSL config In-Reply-To: References: Message-ID: "Connection refused" generally means that your server (sometimes this can be a firewall too, although firewalls tend more often to just silently drop packets) is sending a RST packet in response to an attempt to connect.
Check with netstat and see if nginx is bound to your public IP on :443. As for your question on SSL config -- it is valid in both the http and server context, ref. http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols Your configuration seems to be that you have one certificate and two virtual hosts that respond to different Host headers. For the first one, you probably mean: server_name foo.com bar.foo.com; or do you mean to have it as a catch-all for "*.foo.com"? Are you only trying to enable SSLv3 for "foobar.foo.com" and have only "TLSv1.1 TLSv1.2" for the other one? Lastly, maybe give us some more to work with here, like your actual URLs and your full configuration. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257064,257088#msg-257088 From mdounin at mdounin.ru Fri Mar 6 12:47:15 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 6 Mar 2015 15:47:15 +0300 Subject: curl "Connection refused" caused by SSL config In-Reply-To: <885e2cf903979307ab906806d6080aa2.NginxMailingListEnglish@forum.nginx.org> References: <4ba93adad5e70fc011427c36a7c8868a.NginxMailingListEnglish@forum.nginx.org> <885e2cf903979307ab906806d6080aa2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150306124714.GR97191@mdounin.ru> Hello! On Thu, Mar 05, 2015 at 09:58:37PM -0500, Fry-kun wrote: > So it looks like the ssl config is valid per-port only. If I set up a server > on a different port with different ssl config, it works. > Is this a bug or is it by design? This is by design. Before some protocol-specific handshake happens, it is not possible to tell which virtual server the client is going to request. Therefore, the default server context (and corresponding options) is used before the handshake. In this particular case, you are trying to enable SSLv3 for a virtual server. This is not possible even in theory: there is no SNI extension in SSLv3, so the requested virtual server will be known only after reading an HTTP request.
But it won't be possible to send an HTTP request, as SSLv3 is disabled in the default server, and therefore the SSL handshake will fail. See here for some additional details about configuring SSL in nginx: http://nginx.org/en/docs/http/configuring_https_servers.html -- Maxim Dounin http://nginx.org/ From jacklinkers at gmail.com Fri Mar 6 17:58:16 2015 From: jacklinkers at gmail.com (JACK LINKERS) Date: Fri, 6 Mar 2015 18:58:16 +0100 Subject: server bloc conf ? Message-ID: Hello, I used to use sites-available & sites-enabled in a previous version of nginx. I compiled this one from source with the ngx_pagespeed module on another VPS, and I no longer see the available / enabled folders. On the other hand, I noticed this line in /etc/nginx/nginx.conf: include conf.d/*.conf; Am I supposed to store my domain configs in that folder, like /etc/nginx/conf.d/mydomain.tld.conf? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.theodoro at gmail.com Fri Mar 6 18:09:40 2015 From: daniel.theodoro at gmail.com (Theodoro) Date: Fri, 6 Mar 2015 15:09:40 -0300 Subject: server bloc conf ? In-Reply-To: References: Message-ID: Jack, There's no problem, or you can just add this line in your nginx.conf: include sites-enabled/*.conf; Daniel Theodoro Cel: 11 99399-3364 http://www.linkedin.com/in/danieltheodoro ? RHCA - Red Hat Certified Architect ? RHCDS - Red Hat Certified Datacenter Specialist ? RHCE - Red Hat Certified Engineer ? RHCVA - Red Hat Certified Virtualization Administrator ? LPIC-3 - Senior Level Linux Certification ? Novell Certified Linux Administrator - Suse 11 ? OCA - Oracle Enterprise Linux Administrator Certified Associate On Fri, Mar 6, 2015 at 2:58 PM, JACK LINKERS wrote: > Hello, > > I used to use sites-available & sites-enabled in a previous version of > NginX. > I compiled it from source with ngx_pagespeed module on another VPS. > I don't see anymore the folders available / enabled.
> On the other hand I noticed in /etc/nginx/nginx.conf this line : include > conf.d/*.conf; > > I'm I supposed to store my domain configs in that folder ? > Like /etc/nginx/conf.d/mydomain.tld.conf; ? > > Thanks > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jacklinkers at gmail.com Fri Mar 6 18:14:00 2015 From: jacklinkers at gmail.com (JACK LINKERS) Date: Fri, 6 Mar 2015 19:14:00 +0100 Subject: server bloc conf ? In-Reply-To: References: Message-ID: Hello Daniel, Sorry, I'm not sure I understand: how do I set up a domain? What would be the syntax for the file name? domain.tld.conf? Regards 2015-03-06 19:09 GMT+01:00 Theodoro : > Jack, > > There's no problem, or you can just add this line in your nginx.conf. > > include sites-enabled/*.conf; [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From bertrand.caplet at chunkz.net Fri Mar 6 18:20:50 2015 From: bertrand.caplet at chunkz.net (Bertrand Caplet) Date: Fri, 06 Mar 2015 19:20:50 +0100 Subject: Weird issues with fastcgi_cache and images In-Reply-To: <32f233fe1e0c1bfb04cf86adc390ee3b.NginxMailingListEnglish@forum.nginx.org> References: <54F89FF1.6010105@chunkz.net> <32f233fe1e0c1bfb04cf86adc390ee3b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <54F9F002.6050106@chunkz.net> > Have you tried this with curl -i to see if it's not a browser cache issue? > > Sounds like a cached file with an expire date which is still valid against > your expire date from cache. I tried yesterday, but I don't really see the point... What should I look for? I set a header to see whether the cache hit or missed: when I don't see the images it misses, and when I reload it misses too. Anyway, I set the image cache time to match the fastcgi cache time today, but I have the same problem. Sorry, I'm really a newbie at caching. -- CHUNKZ.NET - script kiddie and computer technician Bertrand Caplet, Flers (FR) Feel free to send encrypted/signed messages Key ID: FF395BD9 GPG FP: DE10 73FD 17EB 5544 A491 B385 1EDA 35DC FF39 5BD9 -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From nginx-forum at nginx.us Fri Mar 6 18:34:28 2015 From: nginx-forum at nginx.us (Fry-kun) Date: Fri, 06 Mar 2015 13:34:28 -0500 Subject: curl "Connection refused" caused by SSL config In-Reply-To: <20150306124714.GR97191@mdounin.ru> References: <20150306124714.GR97191@mdounin.ru> Message-ID: <00a88b89b09a03121828431e4c3a6b97.NginxMailingListEnglish@forum.nginx.org> Thanks Maxim, that explains it. It would be nice if the documentation mentioned this fact, or if nginx would spit out some errors about the conflicting config ;) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257064,257107#msg-257107 From daniel.theodoro at gmail.com Fri Mar 6 18:46:35 2015 From: daniel.theodoro at gmail.com (Theodoro) Date: Fri, 6 Mar 2015 15:46:35 -0300 Subject: server bloc conf ? In-Reply-To: References: Message-ID: Jack, Any file that you put in "conf.d" or "sites-enabled" that ends with .conf will be loaded when nginx starts. Daniel Theodoro Cel: 11 99399-3364 http://www.linkedin.com/in/danieltheodoro - RHCA - Red Hat Certified Architect - RHCDS - Red Hat Certified Datacenter Specialist - RHCE - Red Hat Certified Engineer - RHCVA - Red Hat Certified Virtualization Administrator - LPIC-3 - Senior Level Linux Certification - Novell Certified Linux Administrator - Suse 11 - OCA - Oracle Enterprise Linux Administrator Certified Associate On Fri, Mar 6, 2015 at 3:14 PM, JACK LINKERS wrote: > Hello Daniel, > > I'm not sure I understand, sorry. How do I set up a domain ? What would be > the syntax for the file name ? domain.tld.conf ? > > Regards > > 2015-03-06 19:09 GMT+01:00 Theodoro : > >> Jack, >> >> There's no problem, or you can just add this line in your nginx.conf: >> >> include sites-enabled/*.conf; >> >> Daniel Theodoro >> Cel: 11 99399-3364 >> http://www.linkedin.com/in/danieltheodoro >> >> - RHCA - Red Hat Certified Architect >> - RHCDS - Red Hat Certified Datacenter Specialist >> - RHCE - Red Hat Certified Engineer >> - RHCVA - Red Hat Certified Virtualization Administrator >> - LPIC-3 - Senior Level Linux Certification >> - Novell Certified Linux Administrator - Suse 11 >> - OCA - Oracle Enterprise Linux Administrator Certified Associate >> >> On Fri, Mar 6, 2015 at 2:58 PM, JACK LINKERS >> wrote: >> >>> Hello, >>> >>> I used to use sites-available & sites-enabled in a previous version of >>> NginX. >>> I compiled it from source with the ngx_pagespeed module on another VPS. >>> I don't see the available / enabled folders anymore. >>> On the other hand, I noticed this line in /etc/nginx/nginx.conf: include >>> conf.d/*.conf; >>> >>> Am I supposed to store my domain configs in that folder ? >>> Like /etc/nginx/conf.d/mydomain.tld.conf; ? >>> >>> Thanks >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Mar 6 19:24:54 2015 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 06 Mar 2015 14:24:54 -0500 Subject: Weird issues with fastcgi_cache and images In-Reply-To: <54F9F002.6050106@chunkz.net> References: <54F9F002.6050106@chunkz.net> Message-ID: Some curl examples: https://wordpress.org/support/topic/if-modified-since-request-header-can-cause-a-cache-control-negative-max-age It all depends on what you get against what you expected. 
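Following up on the curl suggestion in this thread: the usual check is to request only the response headers, which takes the browser cache out of the picture, and then look at the cache-status header. Note the host name and the "X-Cache" header name below are assumptions for illustration; they must match whatever your own config adds (for example via "add_header X-Cache $upstream_cache_status;").

```shell
# Fetch only the response headers (-I) with curl, bypassing any browser cache:
#   curl -sI http://your-site.example/wp-content/uploads/image.jpg
# then look for the cache-status header your nginx config adds.
# Filtering a canned response here the same way you would filter real
# curl output:
printf 'HTTP/1.1 200 OK\r\nContent-Type: image/jpeg\r\nX-Cache: MISS\r\n' |
  grep -i '^x-cache'
```

If every request for an image reports MISS while PHP pages report HIT, it is worth ruling out that the images location block bypasses the cache entirely before comparing expiry times.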
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257078,257109#msg-257109 From David.Kazlman at saabsensis.com Fri Mar 6 21:59:47 2015 From: David.Kazlman at saabsensis.com (Kazlman, David) Date: Fri, 6 Mar 2015 21:59:47 +0000 Subject: NGINX Worker process stuck, CPU usage at 100% Message-ID: <999B812001B1DA42AE61E96EBA717AB91B4EAB95@corpmail01.corp.sensis.com> I've migrated my server over from lighttpd to NGINX(memory leaks were causing cache issues which invoked OOM Killer in Linux). It seems that after a while of running(about 30 minutes) with NGINX and processing requests just fine the NGINX worker process gets stuck in a loop maxing out the CPU and not responding to any other requests. I am running NGINX version 1.6.2, OpenSSL 1.2.0, PHP-FPM. PHP 5.5.19 (fpm-fcgi) (built: Mar 5 2015 10:12:12) Copyright (c) 1997-2014 The PHP Group Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies This is a very minimal use of the webserver (1 user, at most 2 requests from the browser every 2 seconds). I turned on debugging in nginx and captured a log. The worker threads seems to be hung near the bottom of the log, but I can't pick anything out of the log that indicates a problem. I don't see any errors. I attached the debug log, can someone please help look it over and see if anything stands out? I've noticed sometimes when this occurs the log is reporting: 2015/03/05 20:13:34 [info] 698#0: *1348 peer closed connection in SSL handshake while SSL handshaking, client: 192.168.0.126, server: 0.0.0.0:443 But this isn't always the case. I can repeat this hang after about 30 minutes pretty consistently. I've tried 3 different versions of openSSL (1.0.0, 1.0.1, 1.0.2) and 2 versions of NGINX (1.6.2, 1.7.10) and nothing seems to resolve the issue so it may be pointing to a configuration problem. 
I have also tried unix sockets vs tcp sockets as the fastcgi transfer mechanism (listen = /var/run/php-fpm.sock and listen = 127.0.0.1:9000) and both seem to act similarly If I kill the webserver and restart it, the webserver starts acting on requests just fine again, and after some amount of time gets back into this state. My configuration is as follows: nginx.conf: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; error_log /var/log/nginx/error.log debug; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; client_max_body_size 30M; # Redirect HTTP Requests to HTTPS # server { listen 80; server_name localhost; return 301 https://$host; } # HTTPS server # server { listen 443 ssl; server_name $host; ssl on; ssl_certificate /mnt/emmc/ssl/nginx.crt; ssl_certificate_key /mnt/emmc/ssl/nginx.key; root /var/www/htdocs/; location / { try_files $uri $uri/ /index.php; index index.php; } location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php-fpm/sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi.conf; } } } PHP-FPM Conf ;;;;;;;;;;;;;;;;;;;;; ; FPM Configuration ; ;;;;;;;;;;;;;;;;;;;;; ; All relative paths in this configuration file are relative to PHP's install ; prefix (/usr). This prefix can be dynamically changed by using the ; '-p' argument from the command line. ; Include one or more files. If glob(3) exists, it is used to include a bunch of ; files from a glob(3) pattern. This directive can be used everywhere in the ; file. ; Relative path can also be used. 
They will be prefixed by: ; - the global prefix if it's been set (-p argument) ; - /usr otherwise ;include=etc/fpm.d/*.conf ;;;;;;;;;;;;;;;;;; ; Global Options ; ;;;;;;;;;;;;;;;;;; [global] ; Pid file ; Note: the default prefix is /var ; Default Value: none pid = /var/run/php-fpm.pid ; Error log file ; If it's set to "syslog", log is sent to syslogd instead of being written ; in a local file. ; Note: the default prefix is /var ; Default Value: log/php-fpm.log error_log = /var/log/php-fpm.log ; syslog_facility is used to specify what type of program is logging the ; message. This lets syslogd specify that messages from different facilities ; will be handled differently. ; See syslog(3) for possible values (ex daemon equiv LOG_DAEMON) ; Default Value: daemon syslog.facility = daemon ; syslog_ident is prepended to every message. If you have multiple FPM ; instances running on the same server, you can change the default value ; which must suit common needs. ; Default Value: php-fpm syslog.ident = php-fpm ; Log level ; Possible Values: alert, error, warning, notice, debug ; Default Value: notice log_level = notice ; If this number of child processes exit with SIGSEGV or SIGBUS within the time ; interval set by emergency_restart_interval then FPM will restart. A value ; of '0' means 'Off'. ; Default Value: 0 ;emergency_restart_threshold = 0 ; Interval of time used by emergency_restart_interval to determine when ; a graceful restart will be initiated. This can be useful to work around ; accidental corruptions in an accelerator's shared memory. ; Available Units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;emergency_restart_interval = 0 ; Time limit for child processes to wait for a reaction on signals from master. ; Available units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;process_control_timeout = 0 ; The maximum number of processes FPM will fork. 
This has been design to control ; the global number of processes when using dynamic PM within a lot of pools. ; Use it with caution. ; Note: A value of 0 indicates no limit ; Default Value: 0 ; process.max = 128 ; Specify the nice(2) priority to apply to the master process (only if set) ; The value can vary from -19 (highest priority) to 20 (lower priority) ; Note: - It will only work if the FPM master process is launched as root ; - The pool process will inherit the master process priority ; unless it specified otherwise ; Default Value: no set ; process.priority = -19 ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging. ; Default Value: yes daemonize = yes ; Set open file descriptor rlimit for the master process. ; Default Value: system defined value ;rlimit_files = 1024 ; Set max core size rlimit for the master process. ; Possible Values: 'unlimited' or an integer greater or equal to 0 ; Default Value: system defined value ;rlimit_core = 0 ; Specify the event mechanism FPM will use. The following is available: ; - select (any POSIX os) ; - poll (any POSIX os) ; - epoll (linux >= 2.5.44) ; - kqueue (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0) ; - /dev/poll (Solaris >= 7) ; - port (Solaris >= 10) ; Default Value: not set (auto detection) events.mechanism = epoll ; When FPM is build with systemd integration, specify the interval, ; in second, between health report notification to systemd. ; Set to 0 to disable. ; Available Units: s(econds), m(inutes), h(ours) ; Default Unit: seconds ; Default value: 10 ;systemd_interval = 10 ;;;;;;;;;;;;;;;;;;;; ; Pool Definitions ; ;;;;;;;;;;;;;;;;;;;; ; Multiple pools of child processes may be started with different listening ; ports and different management options. The name of the pool will be ; used in logs and stats. There is no limitation on the number of pools which ; FPM can handle. Your system will tell you anyway :) ; Start a new pool named 'www'. 
; the variable $pool can we used in any directive and will be replaced by the ; pool name ('www' here) [www] catch_workers_output = yes ; Per pool prefix ; It only applies on the following directives: ; - 'slowlog' ; - 'listen' (unixsocket) ; - 'chroot' ; - 'chdir' ; - 'php_values' ; - 'php_admin_values' ; When not set, the global prefix (or /usr) applies instead. ; Note: This directive can also be relative to the global prefix. ; Default Value: none ;prefix = /path/to/pools/$pool ; Unix user/group of processes ; Note: The user is mandatory. If the group is not set, the default user's group ; will be used. user = www-data group = www-data ; The address on which to accept FastCGI requests. ; Valid syntaxes are: ; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific address on ; a specific port; ; '[ip:6:addr:ess]:port' - to listen on a TCP socket to a specific IPv6 address on ; a specific port; ; 'port' - to listen on a TCP socket to all addresses on a ; specific port; ; '/path/to/unix/socket' - to listen on a unix socket. ; Note: This value is mandatory. ;listen = 127.0.0.1:9000 listen = /var/run/php-fpm.sock ; Set listen(2) backlog. ; Default Value: 65535 (-1 on FreeBSD and OpenBSD) ;listen.backlog = 65535 ; Set permissions for unix socket, if one is used. In Linux, read/write ; permissions must be set in order to allow connections from a web server. Many ; BSD-derived systems allow connections regardless of permissions. ; Default Values: user and group are set as the running user ; mode is set to 0660 listen.owner = www-data listen.group = www-data listen.mode = 0660 ; List of ipv4 addresses of FastCGI clients which are allowed to connect. ; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original ; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address ; must be separated by a comma. If this value is left blank, connections will be ; accepted from any ip address. 
; Default Value: any ;listen.allowed_clients = 127.0.0.1 ; Specify the nice(2) priority to apply to the pool processes (only if set) ; The value can vary from -19 (highest priority) to 20 (lower priority) ; Note: - It will only work if the FPM master process is launched as root ; - The pool processes will inherit the master process priority ; unless it specified otherwise ; Default Value: no set process.priority = 5 ; Choose how the process manager will control the number of child processes. ; Possible Values: ; static - a fixed number (pm.max_children) of child processes; ; dynamic - the number of child processes are set dynamically based on the ; following directives. With this process management, there will be ; always at least 1 children. ; pm.max_children - the maximum number of children that can ; be alive at the same time. ; pm.start_servers - the number of children created on startup. ; pm.min_spare_servers - the minimum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is less than this ; number then some children will be created. ; pm.max_spare_servers - the maximum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is greater than this ; number then some children will be killed. ; ondemand - no children are created at startup. Children will be forked when ; new requests will connect. The following parameter are used: ; pm.max_children - the maximum number of children that ; can be alive at the same time. ; pm.process_idle_timeout - The number of seconds after which ; an idle process will be killed. ; Note: This value is mandatory. pm = static ; The number of child processes to be created when pm is set to 'static' and the ; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'. ; This value sets the limit on the number of simultaneous requests that will be ; served. Equivalent to the ApacheMaxClients directive with mpm_prefork. 
; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP ; CGI. The below defaults are based on a server without much resources. Don't ; forget to tweak pm.* to fit your needs. ; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand' ; Note: This value is mandatory. pm.max_children = 2 ; The number of child processes created on startup. ; Note: Used only when pm is set to 'dynamic' ; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2 ;pm.start_servers = 2 ; The desired minimum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' ;pm.min_spare_servers = 1 ; The desired maximum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' ;pm.max_spare_servers = 3 ; The number of seconds after which an idle process will be killed. ; Note: Used only when pm is set to 'ondemand' ; Default Value: 10s ;pm.process_idle_timeout = 10s; ; The number of requests each child process should execute before respawning. ; This can be useful to work around memory leaks in 3rd party libraries. For ; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS. ; Default Value: 0 ;pm.max_requests = 500 ; The URI to view the FPM status page. If this value is not set, no URI will be ; recognized as a status page. 
It shows the following informations: ; pool - the name of the pool; ; process manager - static, dynamic or ondemand; ; start time - the date and time FPM has started; ; start since - number of seconds since FPM has started; ; accepted conn - the number of request accepted by the pool; ; listen queue - the number of request in the queue of pending ; connections (see backlog in listen(2)); ; max listen queue - the maximum number of requests in the queue ; of pending connections since FPM has started; ; listen queue len - the size of the socket queue of pending connections; ; idle processes - the number of idle processes; ; active processes - the number of active processes; ; total processes - the number of idle + active processes; ; max active processes - the maximum number of active processes since FPM ; has started; ; max children reached - number of times, the process limit has been reached, ; when pm tries to start more children (works only for ; pm 'dynamic' and 'ondemand'); ; Value are updated in real time. ; Example output: ; pool: www ; process manager: static ; start time: 01/Jul/2011:17:53:49 +0200 ; start since: 62636 ; accepted conn: 190460 ; listen queue: 0 ; max listen queue: 1 ; listen queue len: 42 ; idle processes: 4 ; active processes: 11 ; total processes: 15 ; max active processes: 12 ; max children reached: 0 ; ; By default the status page output is formatted as text/plain. Passing either ; 'html', 'xml' or 'json' in the query string will return the corresponding ; output syntax. Example: ; http://www.foo.bar/status ; http://www.foo.bar/status?json ; http://www.foo.bar/status?html ; http://www.foo.bar/status?xml ; ; By default the status page only outputs short status. Passing 'full' in the ; query string will also return status for each pool process. 
; Example: ; http://www.foo.bar/status?full ; http://www.foo.bar/status?json&full ; http://www.foo.bar/status?html&full ; http://www.foo.bar/status?xml&full ; The Full status returns for each process: ; pid - the PID of the process; ; state - the state of the process (Idle, Running, ...); ; start time - the date and time the process has started; ; start since - the number of seconds since the process has started; ; requests - the number of requests the process has served; ; request duration - the duration in µs of the requests; ; request method - the request method (GET, POST, ...); ; request URI - the request URI with the query string; ; content length - the content length of the request (only with POST); ; user - the user (PHP_AUTH_USER) (or '-' if not set); ; script - the main script called (or '-' if not set); ; last request cpu - the %cpu the last request consumed ; it's always 0 if the process is not in Idle state ; because CPU calculation is done when the request ; processing has terminated; ; last request memory - the max amount of memory the last request consumed ; it's always 0 if the process is not in Idle state ; because memory calculation is done when the request ; processing has terminated; ; If the process is in Idle state, then informations are related to the ; last request the process has served. Otherwise informations are related to ; the current request being served. ; Example output: ; ************************ ; pid: 31330 ; state: Running ; start time: 01/Jul/2011:17:53:49 +0200 ; start since: 63087 ; requests: 12808 ; request duration: 1250261 ; request method: GET ; request URI: /test_mem.php?N=10000 ; content length: 0 ; user: - ; script: /home/fat/web/docs/php/test_mem.php ; last request cpu: 0.00 ; last request memory: 0 ; ; Note: There is a real-time FPM status monitoring sample web page available ; It's available in: ${prefix}/share/fpm/status.html ; ; Note: The value must start with a leading slash (/). 
The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set ;pm.status_path = /status ; The ping URI to call the monitoring page of FPM. If this value is not set, no ; URI will be recognized as a ping page. This could be used to test from outside ; that FPM is alive and responding, or to ; - create a graph of FPM availability (rrd or such); ; - remove a server from a group if it is not responding (load balancing); ; - trigger alerts for the operating team (24/7). ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set ;ping.path = /ping ; This directive may be used to customize the response of a ping request. The ; response is formatted as text/plain with a 200 response code. ; Default Value: pong ;ping.response = pong ; The access log file ; Default: not set ;access.log = log/$pool.access.log ; The access log format. ; The following syntax is allowed ; %%: the '%' character ; %C: %CPU used by the request ; it can accept the following format: ; - %{user}C for user CPU only ; - %{system}C for system CPU only ; - %{total}C for user + system CPU (default) ; %d: time taken to serve the request ; it can accept the following format: ; - %{seconds}d (default) ; - %{miliseconds}d ; - %{mili}d ; - %{microseconds}d ; - %{micro}d ; %e: an environment variable (same as $_ENV or $_SERVER) ; it must be associated with embraces to specify the name of the env ; variable. 
Some exemples: ; - server specifics like: %{REQUEST_METHOD}e or %{SERVER_PROTOCOL}e ; - HTTP headers like: %{HTTP_HOST}e or %{HTTP_USER_AGENT}e ; %f: script filename ; %l: content-length of the request (for POST request only) ; %m: request method ; %M: peak of memory allocated by PHP ; it can accept the following format: ; - %{bytes}M (default) ; - %{kilobytes}M ; - %{kilo}M ; - %{megabytes}M ; - %{mega}M ; %n: pool name ; %o: output header ; it must be associated with embraces to specify the name of the header: ; - %{Content-Type}o ; - %{X-Powered-By}o ; - %{Transfert-Encoding}o ; - .... ; %p: PID of the child that serviced the request ; %P: PID of the parent of the child that serviced the request ; %q: the query string ; %Q: the '?' character if query string exists ; %r: the request URI (without the query string, see %q and %Q) ; %R: remote IP address ; %s: status (response code) ; %t: server time the request was received ; it can accept a strftime(3) format: ; %d/%b/%Y:%H:%M:%S %z (default) ; %T: time the log has been written (the request has finished) ; it can accept a strftime(3) format: ; %d/%b/%Y:%H:%M:%S %z (default) ; %u: remote user ; ; Default: "%R - %u %t \"%m %r\" %s" ;access.format = "%R - %u %t \"%m %r%Q%q\" %s %f %{mili}d %{kilo}M %C%%" ; The log file for slow requests ; Default Value: not set ; Note: slowlog is mandatory if request_slowlog_timeout is set ;slowlog = log/$pool.log.slow ; The timeout for serving a single request after which a PHP backtrace will be ; dumped to the 'slowlog' file. A value of '0s' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_slowlog_timeout = 0 ; The timeout for serving a single request after which the worker process will ; be killed. This option should be used when the 'max_execution_time' ini option ; does not stop script execution for some reason. A value of '0' means 'off'. 
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_terminate_timeout = 0 ; Set open file descriptor rlimit. ; Default Value: system defined value ;rlimit_files = 1024 ; Set max core size rlimit. ; Possible Values: 'unlimited' or an integer greater or equal to 0 ; Default Value: system defined value ;rlimit_core = 0 ; Chroot to this directory at the start. This value must be defined as an ; absolute path. When this value is not set, chroot is not used. ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one ; of its subdirectories. If the pool prefix is not set, the global prefix ; will be used instead. ; Note: chrooting is a great security feature and should be used whenever ; possible. However, all PHP paths will be relative to the chroot ; (error_log, sessions.save_path, ...). ; Default Value: not set ;chroot = ; Chdir to this directory at the start. ; Note: relative path can be used. ; Default Value: current directory or / when chroot ;chdir = /var/www ; Redirect worker stdout and stderr into main error log. If not set, stdout and ; stderr will be redirected to /dev/null according to FastCGI specs. ; Note: on highloaded environement, this can cause some delay in the page ; process time (several ms). ; Default Value: no ;catch_workers_output = yes ; Clear environment in FPM workers ; Prevents arbitrary environment variables from reaching FPM worker processes ; by clearing the environment in workers before env vars specified in this ; pool configuration are added. ; Setting to "no" will make all environment variables available to PHP code ; via getenv(), $_ENV and $_SERVER. ; Default Value: yes ;clear_env = no ; Limits the extensions of the main script FPM will allow to parse. This can ; prevent configuration mistakes on the web server side. You should only limit ; FPM to .php extensions to prevent malicious users to use other extensions to ; exectute php code. 
; Note: set an empty value to allow all extensions. ; Default Value: .php ;security.limit_extensions = .php .php3 .php4 .php5 ; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from ; the current environment. ; Default Value: clean env ;env[HOSTNAME] = $HOSTNAME ;env[PATH] = /usr/local/bin:/usr/bin:/bin ;env[TMP] = /tmp ;env[TMPDIR] = /tmp ;env[TEMP] = /tmp ; Additional php.ini defines, specific to this pool of workers. These settings ; overwrite the values previously defined in the php.ini. The directives are the ; same as the PHP SAPI: ; php_value/php_flag - you can set classic ini defines which can ; be overwritten from PHP call 'ini_set'. ; php_admin_value/php_admin_flag - these directives won't be overwritten by ; PHP call 'ini_set' ; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no. ; Defining 'extension' will load the corresponding shared extension from ; extension_dir. Defining 'disable_functions' or 'disable_classes' will not ; overwrite previously defined php.ini values, but will append the new value ; instead. ; Note: path INI options can be relative and will be expanded with the prefix ; (pool, global or /usr) ; Default Value: nothing is defined by default except the values in php.ini and ; specified at startup with the -d argument ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f www at my.domain.com ;php_flag[display_errors] = off ;php_admin_value[error_log] = /var/log/fpm-php.www.log ;php_admin_flag[log_errors] = on ;php_admin_value[memory_limit] = 32M David Kazlman Saab Sensis Corporation www.saabsensis.com This message is intended only for the addressee and may contain information that is company confidential or privileged. Any technical data in this message may be exported only in accordance with the U.S. International Traffic in Arms Regulations (22 CFR Parts 120-130) or the Export Administration Regulations (15 CFR Parts 730-774). 
Unauthorized use is strictly prohibited and may be unlawful. If you are not the intended recipient, or the person responsible for delivering to the intended recipient, you should not read, copy, disclose or otherwise use this message. If you have received this email in error, please delete it, and advise the sender immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Sat Mar 7 08:25:09 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Sat, 7 Mar 2015 11:25:09 +0300 Subject: NGINX Worker process stuck, CPU usage at 100% In-Reply-To: <999B812001B1DA42AE61E96EBA717AB91B4EAB95@corpmail01.corp.sensis.com> References: <999B812001B1DA42AE61E96EBA717AB91B4EAB95@corpmail01.corp.sensis.com> Message-ID: <20150307082509.GA82734@lo0.su> On Fri, Mar 06, 2015 at 09:59:47PM +0000, Kazlman, David wrote: > I've migrated my server over from lighttpd to NGINX(memory leaks were causing cache issues which invoked OOM Killer in Linux). It seems that after a while of running(about 30 minutes) with NGINX and processing requests just fine the NGINX worker process gets stuck in a loop maxing out the CPU and not responding to any other requests. I am running NGINX version 1.6.2, OpenSSL 1.2.0, > PHP-FPM. > PHP 5.5.19 (fpm-fcgi) (built: Mar 5 2015 10:12:12) > Copyright (c) 1997-2014 The PHP Group > Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies > > This is a very minimal use of the webserver (1 user, at most 2 requests from the browser every 2 seconds). > > I turned on debugging in nginx and captured a log. The worker threads seems to be hung near the bottom of the log, but I can't pick anything out of the log that indicates a problem. I don't see any errors. I attached the debug log, can someone please help look it over and see if anything stands out? 
> > I've noticed sometimes when this occurs the log is reporting: > > 2015/03/05 20:13:34 [info] 698#0: *1348 peer closed connection in SSL handshake while SSL handshaking, client: 192.168.0.126, server: 0.0.0.0:443 > > But this isn't always the case. I can repeat this hang after about 30 minutes pretty consistently. I've tried 3 different versions of openSSL (1.0.0, 1.0.1, 1.0.2) and 2 versions of NGINX (1.6.2, 1.7.10) and nothing seems to resolve the issue so it may be pointing to a configuration problem. I have also tried unix sockets vs tcp sockets as the fastcgi transfer mechanism (listen = /var/run/php-fpm.sock and listen = 127.0.0.1:9000) and both seem to act similarly > > > If I kill the webserver and restart it, the webserver starts acting on requests just fine again, and after some amount of time gets back into this state. My configuration is as follows: > > nginx.conf: > > #user nobody; > worker_processes 1; > > #error_log logs/error.log; > #error_log logs/error.log notice; > error_log /var/log/nginx/error.log debug; > > #pid logs/nginx.pid; > > > events { > worker_connections 1024; > } > > > http { > include mime.types; > default_type application/octet-stream; > > sendfile on; > keepalive_timeout 65; > > client_max_body_size 30M; > > # Redirect HTTP Requests to HTTPS > # > server { > listen 80; > server_name localhost; > return 301 https://$host; > } > > # HTTPS server > # > server { > listen 443 ssl; > server_name $host; > > ssl on; > ssl_certificate /mnt/emmc/ssl/nginx.crt; > ssl_certificate_key /mnt/emmc/ssl/nginx.key; > > root /var/www/htdocs/; > > location / { > try_files $uri $uri/ /index.php; > index index.php; > } > > location ~ \.php$ { > fastcgi_split_path_info ^(.+\.php)(/.+)$; > fastcgi_pass unix:/var/run/php-fpm/sock; > fastcgi_index index.php; > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > include /etc/nginx/fastcgi.conf; > } > } > > } [...] 
http://wiki.nginx.org/Debugging#Asking_for_help From jacklinkers at gmail.com Sat Mar 7 12:01:11 2015 From: jacklinkers at gmail.com (JACK LINKERS) Date: Sat, 7 Mar 2015 13:01:11 +0100 Subject: 403 Forbidden Message-ID: Hi, I can't access my domain after installing nginx and configuring the default server conf file: # You may add here your # server { # ... # } # statements for each of your virtual hosts to this file ## # You should look at the following URL's in order to grasp a solid understanding # of Nginx configuration files in order to fully unleash the power of Nginx. # http://wiki.nginx.org/Pitfalls # http://wiki.nginx.org/QuickStart # http://wiki.nginx.org/Configuration # # Generally, you will want to move this file somewhere, and start with a clean # file but keep this around for reference. Or just disable in sites-enabled. # # Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples. ## server { listen 80; server_name www.testmydomain.ovh *.testmydomain.ovh; root /var/www/testmydomain.ovh/web; access_log /var/www/logs/testmydomain.ovh.access.log; error_log /var/www/logs/testmydomain.ovh.error.log; index index.php index.html index.htm; location ~* \.(jpg|jpeg|gif|css|png|js|ico|html)$ { access_log off; expires max; } location ~ \.php$ { include fastcgi_params; fastcgi_intercept_errors on; # By all means use a different server for the fcgi processes if you need to fastcgi_pass unix:/var/run/php5-fpm.sock; } } server { listen 80 default_server; listen [::]:80 default_server ipv6only=on; root /usr/share/nginx/html; index index.html index.htm; # Make site accessible from http://localhost/ server_name localhost; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. 
try_files $uri $uri/ =404; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/html; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(/.+)$; # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: # fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_index index.php; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # root html; # index index.html index.htm; # # location / { # try_files $uri $uri/ =404; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # # root html; # index index.html index.htm; # # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # # ssl_session_timeout 5m; # # ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; # ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES"; # ssl_prefer_server_ciphers on; # # location / { # try_files $uri $uri/ =404; # } #} What did I miss here ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Mar 7 21:26:00 2015 From: nginx-forum at nginx.us (George) Date: Sat, 07 Mar 2015 16:26:00 -0500 Subject: native variable for one level above $document_root ? Message-ID: <4939f1242882cc15ba703c8bd3ea71c1.NginxMailingListEnglish@forum.nginx.org> At the nginx vhost level, is there a native nginx value similar to $document_root for one directory level above $document_root ? 
for example if $document_root = /home/username/public or /home/username2/public is there a variable I can reference at nginx vhost level that references /home/username or /home/username2 ?

thanks

George

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257127,257127#msg-257127

From francis at daoine.org Sun Mar 8 01:59:05 2015
From: francis at daoine.org (Francis Daly)
Date: Sun, 8 Mar 2015 01:59:05 +0000
Subject: native variable for one level above $document_root ?
In-Reply-To: <4939f1242882cc15ba703c8bd3ea71c1.NginxMailingListEnglish@forum.nginx.org>
References: <4939f1242882cc15ba703c8bd3ea71c1.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20150308015905.GN3010@daoine.org>

On Sat, Mar 07, 2015 at 04:26:00PM -0500, George wrote:

Hi there,

> At the nginx vhost level, is there a native nginx value similar to
> $document_root for one directory level above $document_root ?

Probably not; but you can use "map" to make your own. For example:

map $document_root $the_thing_that_you_want {
    default "";
    ~(?P<it>.*)/. $it;
}

http://nginx.org/r/map

Then use $the_thing_that_you_want. Be aware of the time at which the variable gets its value.

f
--
Francis Daly        francis at daoine.org

From francis at daoine.org Sun Mar 8 01:55:14 2015
From: francis at daoine.org (Francis Daly)
Date: Sun, 8 Mar 2015 01:55:14 +0000
Subject: 403 Forbidden
In-Reply-To: 
References: 
Message-ID: <20150308015514.GM3010@daoine.org>

On Sat, Mar 07, 2015 at 01:01:11PM +0100, JACK LINKERS wrote:

Hi there,

> Hi, I can't access my domain after installing nginx and configuring the
> default server conf file:

What request do you make; what response do you get; what response do you want; and what do the logs say?

f
--
Francis Daly        francis at daoine.org

From artemrts at ukr.net Sun Mar 8 10:10:04 2015
From: artemrts at ukr.net (wishmaster)
Date: Sun, 08 Mar 2015 12:10:04 +0200
Subject: fastcgi_ignore_headers inside if{} - block
Message-ID: <1425809127.674508647.wkpxr6o5@frv34.fwdcdn.com>

Hi.
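[Editor's note: written out in full, the map sketch from Francis's reply above could look like this; the capture name "it" is inferred from the "$it" he uses, and the variable and header names here are illustrative only, not part of the original answer.]

```nginx
# http{} context: derive the parent directory of $document_root,
# e.g. /home/username/public -> /home/username.
map $document_root $docroot_parent {
    default        "";
    ~^(?P<it>.*)/. $it;
}
```

The value is computed the first time $docroot_parent is read in a request, which is the "time at which the variable gets its value" caveat from the reply.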
I need to set some fastcgi_* directives inside an "if" block. E.g.:

if ($foo = "bar") {
    fastcgi_ignore_headers "Set-Cookie";
}

But an error occurs at the configtest stage:

nginx: [emerg] "fastcgi_ignore_headers" directive is not allowed here

Is there any workaround?

--
Cheers,
Vitaliy

From gmm at csdoc.com Sun Mar 8 14:58:05 2015
From: gmm at csdoc.com (Gena Makhomed)
Date: Sun, 08 Mar 2015 16:58:05 +0200
Subject: [security advisory] http://wiki.nginx.org/Redmine
In-Reply-To: <54FC6293.7000602@csdoc.com>
References: <54FC6293.7000602@csdoc.com>
Message-ID: <54FC637D.3010904@csdoc.com>

Hello,

The webpage http://wiki.nginx.org/Redmine has some security problems:

1. All redmine config files are available to anybody on the internet; for example, https://redmine.example.com/config/database.yml contains the login and password for the database connection in plain text.

2. wiki.nginx.org uses nginx/1.5.12, which has known security vulnerabilities.

3. The unsafe variable $http_host is used instead of the safe $host.

===================================================================

Content of page http://wiki.nginx.org/Redmine for now:

[...]

This is very nearly a drop in configuration. The only thing you should need to change will be the root location, upstream servers, and the server name.

upstream redmine {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}

server {
    server_name redmine.DOMAIN.TLD;
    root /var/www/redmine;

    location / {
        try_files $uri @ruby;
    }

    location @ruby {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_read_timeout 300;
        proxy_pass http://redmine;
    }
}

[...]

===================================================================

--
Best regards,
 Gena

From nginx-forum at nginx.us Sun Mar 8 17:51:11 2015
From: nginx-forum at nginx.us (ex-para)
Date: Sun, 08 Mar 2015 13:51:11 -0400
Subject: Can not change email address on website.
Message-ID: <4c8462aba51c0984b868620e992e0d1a.NginxMailingListEnglish@forum.nginx.org>

I have for a long time had my own website and server using nginx software. I have only one page and use (gksudo gedit /var/www/nginx-default/index.html) to make changes to the site, with no problems until I wanted to change the email address. I delete the old address, then type in the new one, then save, but it remains the same. I have also deleted the old one, saved, then typed in the new one and saved again, but nothing. I have deleted it and left it deleted, then saved, but it still shows the old address. I am using Ubuntu 12.04 LTS.

Thanks for any replies

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257135,257135#msg-257135

From lists at mrqueue.com Sun Mar 8 18:44:54 2015
From: lists at mrqueue.com (Mr Queue)
Date: Sun, 08 Mar 2015 13:44:54 -0500
Subject: Can not change email address on website.
In-Reply-To: <4c8462aba51c0984b868620e992e0d1a.NginxMailingListEnglish@forum.nginx.org>
References: <4c8462aba51c0984b868620e992e0d1a.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <54FC98A6.9050304@mrqueue.com>

Your site is cached in your browser. Edit your email address and then use Google to figure out how to clear your browser's cache.

On 3/8/2015 12:51 PM, ex-para wrote:
> I have for a long time had my own website and server using nginx software. I
> have only one page and use (gksudo gedit /var/www/nginx-default/index.html)
> to make changes to the site with no problems until I wanted to change the
> email address. I delete the old address then type in the new one then save
> but it remains the same, I also have deleted the old one then saved then
> typed in the new one and saved again but nothing. I have deleted and left it
> deleted then save but it still shows the old address. I am using Ubuntu 12.04
> LTS.
> Thanks for any replies
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257135,257135#msg-257135
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From francis at daoine.org Sun Mar 8 20:50:47 2015
From: francis at daoine.org (Francis Daly)
Date: Sun, 8 Mar 2015 20:50:47 +0000
Subject: [security advisory] http://wiki.nginx.org/Redmine
In-Reply-To: <54FC637D.3010904@csdoc.com>
References: <54FC6293.7000602@csdoc.com> <54FC637D.3010904@csdoc.com>
Message-ID: <20150308205047.GO3010@daoine.org>

On Sun, Mar 08, 2015 at 04:58:05PM +0200, Gena Makhomed wrote:

Hi there,

> webpage http://wiki.nginx.org/Redmine has some security problems:
>
> 1. All redmine config files are available for anybody in internet,
> for example: https://redmine.example.com/config/database.yml
> contains in plain text login and password for database connection.

I don't think that one is an nginx problem.

From reading the redmine docs, it looks like the contents of the "root" directive directory should be whatever is in the distributed redmine public/ directory; not the entire installation including configuration.

And if /var/www/redmine does just have the public/ contents and the upstream servers reveal secret information, that would be their problem and not nginx's, I think.

> 2. wiki.nginx.org use nginx/1.5.12 with known security vulnerabilities
>
> 3. Unsafe variable $http_host was used instead of safe one $host

I'm not sure how $http_host is less safe than $host. It is proxy_pass'ed to the "real" redmine server as the Host header. That server must be able to handle it safely anyway, no?

f
--
Francis Daly        francis at daoine.org

From nginx-forum at nginx.us Mon Mar 9 05:26:51 2015
From: nginx-forum at nginx.us (nginxuser100)
Date: Mon, 09 Mar 2015 01:26:51 -0400
Subject: Limit the number of buffered connections?
Message-ID: <184fd0a2cc3920dc3722bb6693c9c5ab.NginxMailingListEnglish@forum.nginx.org>

Hi, is there a way in nginx to set a limit on the number of "buffered" connections? (I am referring to the client's request being buffered on disk.) I was not able to find a directive for this but wanted to confirm, thank you.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257138,257138#msg-257138

From nginx-forum at nginx.us Mon Mar 9 07:41:06 2015
From: nginx-forum at nginx.us (Ravitezu)
Date: Mon, 09 Mar 2015 03:41:06 -0400
Subject: nginx trying to connect to upstream host which is down
Message-ID: 

Hi,

Note: I had to change the hostnames and domain names.

nginx version: nginx/1.4.6 (Ubuntu)

I have the following configuration:

upstream ssl-app-cluster {
    ip_hash;
    server app01.example.com:8443 max_fails=1 fail_timeout=60s;
    server app02.example.com:8443 max_fails=1 fail_timeout=60s;
}

During the application deployment (rolling deployment) to those backend servers (app0{1,2}.example.com), port 8443 will not be available for 60 sec. So, during the deployment I tried hitting the server (api.example.com) continuously to see how nginx routes the traffic when one of the servers is down. As I am using ip_hash, my IP is bound to app01.example.com initially.

Deployment process:
----------------------------
1. I am running the curl command to hit the server api.example.com in a for loop and I am being served by app01 (I see this in the error log with debug enabled).
2. The deployment process has taken down the app02 host to upgrade the application on it. This doesn't affect anything, as my IP is bound to app01 and I am being served by app01.
3. Now, the deployment has taken down app01. So, I see a "111 connection refused" error and nginx tries to connect to the "http next upstream", which is app02, and this succeeds.
4.
But, nginx has to wait for 60sec(fail_timeout) to establish connection to app01, But I see nginx is trying to connect to app01 immediately and I see there's an error: "111 connection refused error" again and then it connecting to app02. Can someone please, tell me why this is happening and how can I change this? Thanks, Ravi Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257139,257139#msg-257139 From sjums07 at gmail.com Mon Mar 9 09:24:09 2015 From: sjums07 at gmail.com (Nikolaj Schomacker) Date: Mon, 09 Mar 2015 09:24:09 +0000 Subject: 403 Forbidden In-Reply-To: References: Message-ID: It's important that the right access is set for the files and folders in /var/www/testdomain.ovh/web. Nginx is running as user www-data (per default) and folders needs to be set with (at least) execute and read permissions for www-data. For files read permission is the least required. ~sjums On Sat, Mar 7, 2015 at 1:01 PM JACK LINKERS wrote: > Hi I can't access my domain after installing nginx and coinfigurong the > default server conf file : > > > # You may add here your > # server { > # ... > # } > # statements for each of your virtual hosts to this file > > ## > # You should look at the following URL's in order to grasp a solid > understanding > # of Nginx configuration files in order to fully unleash the power of > Nginx. > # http://wiki.nginx.org/Pitfalls > # http://wiki.nginx.org/QuickStart > # http://wiki.nginx.org/Configuration > # > # Generally, you will want to move this file somewhere, and start with a > clean > # file but keep this around for reference. Or just disable in > sites-enabled. > # > # Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples. 
> ## > > server { > listen 80; > server_name www.testmydomain.ovh *.testmydomain.ovh; > root /var/www/testmydomain.ovh/web; > access_log /var/www/logs/testmydomain.ovh.access.log; > error_log /var/www/logs/testmydomain.ovh.error.log; > > index index.php index.html index.htm; > > location ~* \.(jpg|jpeg|gif|css|png|js|ico|html)$ { > access_log off; > expires max; > } > > location ~ \.php$ { > include fastcgi_params; > fastcgi_intercept_errors on; > # By all means use a different server for the fcgi processes if you > need to > fastcgi_pass unix:/var/run/php5-fpm.sock; > } > } > > server { > listen 80 default_server; > listen [::]:80 default_server ipv6only=on; > > root /usr/share/nginx/html; > index index.html index.htm; > > # Make site accessible from http://localhost/ > server_name localhost; > > location / { > # First attempt to serve request as file, then > # as directory, then fall back to displaying a 404. > try_files $uri $uri/ =404; > # Uncomment to enable naxsi on this location > # include /etc/nginx/naxsi.rules; > } > > #error_page 404 /404.html; > > # redirect server error pages to the static page /50x.html > # > #error_page 500 502 503 504 /50x.html; > #location = /50x.html { > # root /usr/share/nginx/html; > #} > > # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 > # > #location ~ \.php$ { > # fastcgi_split_path_info ^(.+\.php)(/.+)$; > # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini > # > # # With php5-cgi alone: > # fastcgi_pass 127.0.0.1:9000; > # # With php5-fpm: > # fastcgi_pass unix:/var/run/php5-fpm.sock; > # fastcgi_index index.php; > # include fastcgi_params; > #} > > # deny access to .htaccess files, if Apache's document root > # concurs with nginx's one > # > #location ~ /\.ht { > # deny all; > #} > } > > # another virtual host using mix of IP-, name-, and port-based > configuration > # > #server { > # listen 8000; > # listen somename:8080; > # server_name somename alias another.alias; > # root html; > # index 
index.html index.htm; > # > # location / { > # try_files $uri $uri/ =404; > # } > #} > > > # HTTPS server > # > #server { > # listen 443; > # server_name localhost; > # > # root html; > # index index.html index.htm; > # > # ssl on; > # ssl_certificate cert.pem; > # ssl_certificate_key cert.key; > # > # ssl_session_timeout 5m; > # > # ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; > # ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES"; > # ssl_prefer_server_ciphers on; > # > # location / { > # try_files $uri $uri/ =404; > # } > #} > > What did I miss here ? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmm at csdoc.com Mon Mar 9 14:44:05 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Mon, 09 Mar 2015 16:44:05 +0200 Subject: [security advisory] http://wiki.nginx.org/Redmine In-Reply-To: <20150308205047.GO3010@daoine.org> References: <54FC6293.7000602@csdoc.com> <54FC637D.3010904@csdoc.com> <20150308205047.GO3010@daoine.org> Message-ID: <54FDB1B5.4060609@csdoc.com> On 08.03.2015 22:50, Francis Daly wrote: >> webpage http://wiki.nginx.org/Redmine has some security problems: >> >> 1. All redmine config files are available for anybody in internet, >> for example: https://redmine.example.com/config/database.yml >> contains in plain text login and password for database connection. > > I don't think that one is an nginx problem. > Yes, this is not nginx problem. This is nginx configuration problem, which provided at wiki.nginx.org as "drop in configuration" for redmine. > From reading the redmine docs, it looks like the contents of the "root" > directive directory should be whatever is in the distributed redmine > public/ directory; not the entire installation including configuration. 
I am talking about the configuration recommended at the webpage http://wiki.nginx.org/Redmine, not about "reading the redmine docs".

> And if /var/www/redmine does just have the public/ contents and the
> upstream servers reveal secret information, that would be their problem
> and not nginx's, I think.

root /var/www/redmine;
try_files $uri @ruby;

The request https://redmine.example.com/config/database.yml will be served by nginx, because the file /var/www/redmine/config/database.yml exists. For details, see the manual entry for the try_files directive in nginx.

>> 3. Unsafe variable $http_host was used instead of safe one $host
>
> I'm not sure how $http_host is less safe than $host. It is proxy_pass'ed
> to the "real" redmine server as the Host header. That server must be
> able to handle it safely anyway, no?

Such a configuration allows spoofing of nginx's built-in server selection rules, because nginx will use the server name from the request line but will pass a completely different server name, taken from the Host request header, to the upstream. So $host, not $http_host, should always be used with proxy_pass.

--
Best regards,
 Gena

From me at myconan.net Mon Mar 9 14:48:58 2015
From: me at myconan.net (Edho Arief)
Date: Mon, 9 Mar 2015 23:48:58 +0900
Subject: [security advisory] http://wiki.nginx.org/Redmine
In-Reply-To: <54FDB1B5.4060609@csdoc.com>
References: <54FC6293.7000602@csdoc.com> <54FC637D.3010904@csdoc.com> <20150308205047.GO3010@daoine.org> <54FDB1B5.4060609@csdoc.com>
Message-ID: 

On Mon, Mar 9, 2015 at 11:44 PM, Gena Makhomed wrote:
> On 08.03.2015 22:50, Francis Daly wrote:
>
>>> webpage http://wiki.nginx.org/Redmine has some security problems:
>>>
>>> 1. All redmine config files are available for anybody in internet,
>>> for example: https://redmine.example.com/config/database.yml
>>> contains in plain text login and password for database connection.
>>
>>
>> I don't think that one is an nginx problem.
>>
>
> Yes, this is not nginx problem.
This is nginx configuration problem,
> which provided at wiki.nginx.org as "drop in configuration" for redmine.
>
>> From reading the redmine docs, it looks like the contents of the "root"
>> directive directory should be whatever is in the distributed redmine
>> public/ directory; not the entire installation including configuration.

It's a public wiki, not some official documentation. If there's an error you can just go ahead and change it.

From gmm at csdoc.com Mon Mar 9 15:21:57 2015
From: gmm at csdoc.com (Gena Makhomed)
Date: Mon, 09 Mar 2015 17:21:57 +0200
Subject: [security advisory] http://wiki.nginx.org/Redmine
In-Reply-To: 
References: <54FC6293.7000602@csdoc.com> <54FC637D.3010904@csdoc.com> <20150308205047.GO3010@daoine.org> <54FDB1B5.4060609@csdoc.com>
Message-ID: <54FDBA95.4040200@csdoc.com>

On 09.03.2015 16:48, Edho Arief wrote:

>>> From reading the redmine docs, it looks like the contents of the "root"
>>> directive directory should be whatever is in the distributed redmine
>>> public/ directory; not the entire installation including configuration.

> It's a public wiki, not some official documentation. If there's an error
> you can just go ahead and change it.

That would be a silent fix of a security vulnerability in the nginx configuration recommended for redmine, while all redmine instances previously configured from this manual would remain vulnerable.

I prefer to report this vulnerability on the nginx mailing list, so that all people who configured redmine from this recommended manual can fix this security vulnerability in their own redmine installs.

===============================================================

Also, I can't fix the security vulnerabilities in the nginx/1.5.12 used at http://wiki.nginx.org/, and I have not been able to reach Cliff Wells at cliff at nginx.org or any other e-mail address.
-- Best regards, Gena From wiebe at halfgaar.net Mon Mar 9 15:28:22 2015 From: wiebe at halfgaar.net (Wiebe Cazemier) Date: Mon, 9 Mar 2015 16:28:22 +0100 (CET) Subject: Nginx upstream delays In-Reply-To: <2003872937.5059.1425912004407.JavaMail.zimbra@halfgaar.net> Message-ID: <1101963084.5404.1425914902687.JavaMail.zimbra@halfgaar.net> Hello, I have a question about sporadic long upstream response times I'm seeing on (two of) our Nginx servers. It's kind of hard to show and quantify, but I'll do my best. One is a Django Gunicorn server. We included the upstream response time in the Nginx access log and wrote a script to analyze them. What we see, is that on the login page of a website (a page that does almost nothing) 95%-99% of 'GET /customer/login/' requests are processed within about 50 ms. The other few percent can take several seconds. Sometimes even 5s. Our Munin graphs show no correlation in disk latency, cpu time, memory use, etc. I also added an access log to Gunicorn, so that I can see how long Gunicorn takes to process requests that Nginx thinks take long. Gunicorn has 8 workers. It can be seen that there is actually no delay in Gunicorn. 
For instance, Nginx sees this (the long upstream response time is marked red, 3.042s):

11.22.33.44 - - [06/Mar/2015:10:27:46 +0100] "GET /customer/login/ HTTP/1.1" 200 8310 0.061 0.121 "-" "Echoping/6.0.2"
11.22.33.44 - - [06/Mar/2015:10:27:48 +0100] "GET /customer/login/ HTTP/1.1" 200 8310 0.035 0.092 "-" "Echoping/6.0.2"
11.22.33.44 - - [06/Mar/2015:10:27:52 +0100] "GET /customer/login/ HTTP/1.1" 200 8310 3.042 3.098 "-" "Echoping/6.0.2"
11.22.33.44 - - [06/Mar/2015:10:27:53 +0100] "GET /customer/login/ HTTP/1.1" 200 8310 0.051 0.108 "-" "Echoping/6.0.2"
11.22.33.44 - - [06/Mar/2015:10:27:54 +0100] "GET /customer/login/ HTTP/1.1" 200 8310 0.038 0.096 "-" "Echoping/6.0.2"
x.x.x.x - - [06/Mar/2015:10:27:58 +0100] "POST /customer/login/?next=/customer/home/ HTTP/1.1" 302 5 0.123 0.123

But then the corresponding Gunicorn log shows normal response times (the figure after 'None', in µs) (corresponding line marked blue):
11.22.33.44 - - [06/Mar/2015:10:27:41] "GET /customer/login/ HTTP/1.0" 200 None 41686 "-" "Echoping/6.0.2"
11.22.33.44 - - [06/Mar/2015:10:27:42] "GET /customer/login/ HTTP/1.0" 200 None 27629 "-" "Echoping/6.0.2"
11.22.33.44 - - [06/Mar/2015:10:27:43] "GET /customer/login/ HTTP/1.0" 200 None 28143 "-" "Echoping/6.0.2"
11.22.33.44 - - [06/Mar/2015:10:27:44] "GET /customer/login/ HTTP/1.0" 200 None 41846 "-" "Echoping/6.0.2"
11.22.33.44 - - [06/Mar/2015:10:27:45] "GET /customer/login/ HTTP/1.0" 200 None 30192 "-" "Echoping/6.0.2"
11.22.33.44 - - [06/Mar/2015:10:27:46] "GET /customer/login/ HTTP/1.0" 200 None 59382 "-" "Echoping/6.0.2"
11.22.33.44 - - [06/Mar/2015:10:27:48] "GET /customer/login/ HTTP/1.0" 200 None 33308 "-" "Echoping/6.0.2"
11.22.33.44 - - [06/Mar/2015:10:27:52] "GET /customer/login/ HTTP/1.0" 200 None 39849 "-" "Echoping/6.0.2"
11.22.33.44 - - [06/Mar/2015:10:27:53] "GET /customer/login/ HTTP/1.0" 200 None 48321 "-" "Echoping/6.0.2"
11.22.33.44 - - [06/Mar/2015:10:27:54] "GET /customer/login/ HTTP/1.0" 200 None 36484 "-" "Echoping/6.0.2"
x.x.x.x - - [06/Mar/2015:10:27:58] "POST /customer/login/?next=/customer/home/ HTTP/1.0" 302 None 122295
y.y.y.y - - [06/Mar/2015:10:28:02] "GET /customer/login/?next=/customer/home/ HTTP/1.0" 200 None 97824
y.y.y.y - - [06/Mar/2015:10:28:03] "GET /customer/login/?next=/customer/home/ HTTP/1.0" 200 None 78162
11.22.33.44 - - [06/Mar/2015:10:28:26] "GET /customer/login/ HTTP/1.0" 200 None 38350 "-" "Echoping/6.0.2"
11.22.33.44 - - [06/Mar/2015:10:28:27] "GET /customer/login/ HTTP/1.0" 200 None 31076 "-" "Echoping/6.0.2"
11.22.33.44 - - [06/Mar/2015:10:28:28] "GET /customer/login/ HTTP/1.0" 200 None 28536 "-" "Echoping/6.0.2"
11.22.33.44 - - [06/Mar/2015:10:28:30] "GET /customer/login/ HTTP/1.0" 200 None 30981 "-" "Echoping/6.0.2"
11.22.33.44 - - [06/Mar/2015:10:28:31] "GET /customer/login/ HTTP/1.0" 200 None 29920 "-" "Echoping/6.0.2"
As I said, there are currently 8 workers. I already increased them from 4. The log above shows that there are enough seconds between each request that 8 workers should be able to handle it. I also created a MySQL slow log, which doesn't show the delays. MySQL is always fast.

Another server we have is Nginx with PHP-FPM (with 150 PHP children in the pool), no database access. In one particular recent log of a few hundred thousand entries, 99% of requests are done in 129ms. But one response even took 3170ms. Its PHP proxy settings are:
location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini

    # With php5-fpm:
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
}
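[Editor's note: to narrow down where the occasional slow request spends its time, nginx can log its own total time next to the upstream's time; if the two are close, the delay is in or beyond the backend. A minimal sketch, with a made-up log format name and path:]

```nginx
# http{} context: $request_time is the full time nginx spent on the
# request; $upstream_response_time is only the time spent waiting for
# the FastCGI/proxy backend. Comparing the two per request shows
# whether time is lost in nginx, the socket, or the backend itself.
log_format timing '$remote_addr [$time_local] "$request" $status '
                  '$request_time $upstream_response_time';

server {
    listen 80;
    access_log /var/log/nginx/timing.log timing;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}
```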
It seems something in the communication between Nginx and the service behind it slows down sometimes, but I can't figure out what it might be. Any idea what it might be or how to diagnose it better?

Regards,

Wiebe Cazemier

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ml-nginx at zu-con.org Mon Mar 9 15:36:00 2015
From: ml-nginx at zu-con.org (Matthias Rieber)
Date: Mon, 09 Mar 2015 16:36:00 +0100
Subject: map with two variables
Message-ID: <8135cefd620b6c1d8d710f2caee8ae60@ssl.scheff32.de>

Hi,

I'd like to set a variable to the value of $host where the dots are replaced by underscore. My first idea:

map $host $graphite_host {
    "~(?P<a>[^.]*)\.(?P<b>[^.]*)\.(?P<c>[^.]*)" $a_$b_$c;
}

But I can't use more than one variable in the result. $a or $b would work, but not $a_$b or $a$b. I always get an error like:

nginx: [emerg] unknown "a$b" variable.

Is that intentional? Is there any other way to replace the . by _?

# nginx -V
nginx version: nginx/1.7.10
built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
TLS SNI support enabled
configure arguments: --with-http_stub_status_module --with-http_ssl_module --prefix=/usr/local --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --add-module=/usr/local/src/ngx_devel_kit --add-module=/usr/local/src/lua-nginx-module --add-module=/usr/local/src/headers-more-nginx-module/ --add-module=/usr/local/src/nginx-statsd --with-http_spdy_module --with-http_sub_module

Regards,
Matthias

From sarah at nginx.com Mon Mar 9 15:51:17 2015
From: sarah at nginx.com (Sarah Novotny)
Date: Mon, 9 Mar 2015 08:51:17 -0700
Subject: [security advisory] http://wiki.nginx.org/Redmine
In-Reply-To: <54FDBA95.4040200@csdoc.com>
References: <54FC6293.7000602@csdoc.com> <54FC637D.3010904@csdoc.com> <20150308205047.GO3010@daoine.org> <54FDB1B5.4060609@csdoc.com> <54FDBA95.4040200@csdoc.com>
Message-ID: 

Hi Gena,

I'm happy to have you update the
wiki now that you've reported your concerns. Do you have an account on the wiki? If not, please request one and let me know via email at sarah at nginx.com and we'll get you set up with privileges to edit the page.

Sarah

> On Mar 9, 2015, at 8:21 AM, Gena Makhomed wrote:
>
> On 09.03.2015 16:48, Edho Arief wrote:
>
>>>> From reading the redmine docs, it looks like the contents of the "root"
>>>> directive directory should be whatever is in the distributed redmine
>>>> public/ directory; not the entire installation including configuration.
>
>> It's a public wiki, not some official documentation. If there's error
>> you can just go ahead and change it.
>
> And it will be silent fixing of security vulnerability in nginx
> configuration recommended for redmine, so all previous redmine instances, configured by this manual will be vulnerable.
>
> I prefer to report about this vulnerability in nginx mail list,
> so all people who configure redmine by this recommended manual
> can fix this security vulnerability in their own redmine installs.
>
> ===============================================================
>
> Also, I can't fix security vulnerabilities in nginx/1.5.12
> used at site http://wiki.nginx.org/ and can't contact with
> Cliff Wells by e-mail cliff at nginx.org and other e-mails.
>
> --
> Best regards,
> Gena
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

From emailgrant at gmail.com Mon Mar 9 15:51:29 2015
From: emailgrant at gmail.com (Grant)
Date: Mon, 9 Mar 2015 08:51:29 -0700
Subject: gzip_types not working as expected
Message-ID: 

gzip is not working on my piwik.js file according to Google at developers.google.com/speed/pagespeed/insights. It's working fine on my CSS file. How can I troubleshoot this?
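[Editor's note: one likely cause worth checking: gzip_types matches against the MIME type nginx actually sends, and depending on the mime.types file in use, .js may be mapped to application/javascript rather than the text/javascript or application/x-javascript listed in the config that follows. Listing all three is harmless; a sketch:]

```nginx
gzip on;
gzip_disable msie6;
# text/html is always compressed and must not be listed here.
gzip_types text/javascript application/javascript application/x-javascript
           text/css text/plain;
```

The Content-Type the server actually sends for the .js URL can be confirmed with "curl -I".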
gzip on; gzip_disable msie6; gzip_types text/javascript application/x-javascript text/css text/plain; - Grant From reallfqq-nginx at yahoo.fr Mon Mar 9 16:05:04 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 9 Mar 2015 17:05:04 +0100 Subject: Nginx upstream delays In-Reply-To: <1101963084.5404.1425914902687.JavaMail.zimbra@halfgaar.net> References: <2003872937.5059.1425912004407.JavaMail.zimbra@halfgaar.net> <1101963084.5404.1425914902687.JavaMail.zimbra@halfgaar.net> Message-ID: You are on a nginx mailing list, thus I will reply on the nginx side of the problem. You can diagnose further to tell if the problem comes from nginx or from the backend by using 2 different variables in your log message: $request_time $upstream_response_time If those values are close enough (most of the time equal), you might then conclude that the trouble does not come from nginx, but rather from the backend (or communication between those). If you want to investigate the communication level, you can set up some tcpdump listening on the communication between nginx and the backend. You will need to use TCP ports to do that. Since you are using UNIX sockets, you might want to monitor file descriptors, but I would (totally out of thin air) suppose it might not be the source of your trouble, since you would have seen much more impact if it was. I guess you will have to trace/dump stuff on your backend. PHP has some slowlog capability firing up tracing in a code which takes too long to finish. I do not know anything about Python servers, but you are not at the right location for questions related to those anyway. Happy hunting, --- *B. R.* On Mon, Mar 9, 2015 at 4:28 PM, Wiebe Cazemier wrote: > Hello, > > I have a question about sporadic long upstream response times I'm seeing > on (two of) our Nginx servers. It's kind of hard to show and quantify, but > I'll do my best. > > One is a Django Gunicorn server. 
We included the upstream response time in > the Nginx access log and wrote a script to analyze them. What we see, is > that on the login page of a website (a page that does almost nothing) > 95%-99% of 'GET /customer/login/' requests are processed within about 50 > ms. The other few percent can take several seconds. Sometimes even 5s. Our > Munin graphs show no correlation in disk latency, cpu time, memory use, etc. > > I also added an access log to Gunicorn, so that I can see how long > Gunicorn takes to process requests that Nginx thinks take long. Gunicorn > has 8 workers. It can be seen that there is actually no delay in Gunicorn. > For instance, Nginx sees this (the long upstream response time is marked > red, 3.042s): > > 11.22.33.44 - - [06/Mar/2015:10:27:46 +0100] "GET /customer/login/ > HTTP/1.1" 200 8310 0.061 0.121 "-" "Echoping/6.0.2" > 11.22.33.44 - - [06/Mar/2015:10:27:48 +0100] "GET /customer/login/ > HTTP/1.1" 200 8310 0.035 0.092 "-" "Echoping/6.0.2" > 11.22.33.44 - - [06/Mar/2015:10:27:52 +0100] "GET /customer/login/ > HTTP/1.1" 200 8310 3.042 3.098 "-" "Echoping/6.0.2" > 11.22.33.44 - - [06/Mar/2015:10:27:53 +0100] "GET /customer/login/ > HTTP/1.1" 200 8310 0.051 0.108 "-" "Echoping/6.0.2" > 11.22.33.44 - - [06/Mar/2015:10:27:54 +0100] "GET /customer/login/ > HTTP/1.1" 200 8310 0.038 0.096 "-" "Echoping/6.0.2" > x.x.x.x - - [06/Mar/2015:10:27:58 +0100] "POST > /customer/login/?next=/customer/home/ HTTP/1.1" 302 5 0.123 0.123 > > > But then the corresponding Gunicorn logs shows normal response times (the > figure after 'None', in ?s) (Corresponding line marked blue): > > > 11.22.33.44 - - [06/Mar/2015:10:27:41] "GET /customer/login/ HTTP/1.0" 200 > None 41686 "-" "Echoping/6.0.2" > 11.22.33.44 - - [06/Mar/2015:10:27:42] "GET /customer/login/ HTTP/1.0" 200 > None 27629 "-" "Echoping/6.0.2" > 11.22.33.44 - - [06/Mar/2015:10:27:43] "GET /customer/login/ HTTP/1.0" 200 > None 28143 "-" "Echoping/6.0.2" > 11.22.33.44 - - [06/Mar/2015:10:27:44] "GET 
/customer/login/ HTTP/1.0" 200 > None 41846 "-" "Echoping/6.0.2" > 11.22.33.44 - - [06/Mar/2015:10:27:45] "GET /customer/login/ HTTP/1.0" 200 > None 30192 "-" "Echoping/6.0.2" > 11.22.33.44 - - [06/Mar/2015:10:27:46] "GET /customer/login/ HTTP/1.0" 200 > None 59382 "-" "Echoping/6.0.2" > 11.22.33.44 - - [06/Mar/2015:10:27:48] "GET /customer/login/ HTTP/1.0" 200 > None 33308 "-" "Echoping/6.0.2" > 11.22.33.44 - - [06/Mar/2015:10:27:52] "GET /customer/login/ HTTP/1.0" > 200 None 39849 "-" "Echoping/6.0.2" > 11.22.33.44 - - [06/Mar/2015:10:27:53] "GET /customer/login/ HTTP/1.0" 200 > None 48321 "-" "Echoping/6.0.2" > 11.22.33.44 - - [06/Mar/2015:10:27:54] "GET /customer/login/ HTTP/1.0" 200 > None 36484 "-" "Echoping/6.0.2" > x.x.x.x - - [06/Mar/2015:10:27:58] "POST > /customer/login/?next=/customer/home/ HTTP/1.0" 302 None 122295 > y.y.y.y - - [06/Mar/2015:10:28:02] "GET > /customer/login/?next=/customer/home/ HTTP/1.0" 200 None 97824 > y.y.y.y - - [06/Mar/2015:10:28:03] "GET > /customer/login/?next=/customer/home/ HTTP/1.0" 200 None 78162 > 11.22.33.44 - - [06/Mar/2015:10:28:26] "GET /customer/login/ HTTP/1.0" 200 > None 38350 "-" "Echoping/6.0.2" > 11.22.33.44 - - [06/Mar/2015:10:28:27] "GET /customer/login/ HTTP/1.0" 200 > None 31076 "-" "Echoping/6.0.2" > 11.22.33.44 - - [06/Mar/2015:10:28:28] "GET /customer/login/ HTTP/1.0" 200 > None 28536 "-" "Echoping/6.0.2" > 11.22.33.44 - - [06/Mar/2015:10:28:30] "GET /customer/login/ HTTP/1.0" 200 > None 30981 "-" "Echoping/6.0.2" > 11.22.33.44 - - [06/Mar/2015:10:28:31] "GET /customer/login/ HTTP/1.0" 200 > None 29920 "-" "Echoping/6.0.2" > > > As I said, there are currently 8 workers. I already increased them from 4. > The log above shows that there are enough seconds between each request that > 8 workers should be able to handle it. I also created a MySQL slow log, > which doesn't show the delays. MySQL is always fast. 
> > Another server we have is Nginx with PHP-FPM (with 150 PHP children in the > pool), no database access. On one particular recent log of a few hundred > thousand entries, 99% of requests is done in 129ms. But one response even > took 3170ms. Its PHP proxy settings are: > > location ~ \.php$ { > fastcgi_split_path_info ^(.+\.php)(/.+)$; > # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini > > # With php5-fpm: > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_index index.php; > include fastcgi_params; > } > > > It seems something in the communication between Nginx and the service > behind it slows down sometimes, but I can't figure out what it might be. > Any idea what it might be or how to diagnose it better? > > Regards, > > Wiebe Cazemier > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Mar 9 17:25:08 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 9 Mar 2015 17:25:08 +0000 Subject: [security advisory] http://wiki.nginx.org/Redmine In-Reply-To: <54FDB1B5.4060609@csdoc.com> References: <54FC6293.7000602@csdoc.com> <54FC637D.3010904@csdoc.com> <20150308205047.GO3010@daoine.org> <54FDB1B5.4060609@csdoc.com> Message-ID: <20150309172508.GQ3010@daoine.org> On Mon, Mar 09, 2015 at 04:44:05PM +0200, Gena Makhomed wrote: > On 08.03.2015 22:50, Francis Daly wrote: Hi there, > >>webpage http://wiki.nginx.org/Redmine has some security problems: > >> > >>1. All redmine config files are available for anybody in internet, > >>for example: https://redmine.example.com/config/database.yml > >>contains in plain text login and password for database connection. > > > >I don't think that one is an nginx problem. > Yes, this is not nginx problem. 
This is an nginx configuration problem, > which is provided at wiki.nginx.org as a "drop-in configuration" for redmine. I think that you are incorrect in your understanding of it as an nginx configuration problem, and as a drop-in configuration. > > From reading the redmine docs, it looks like the contents of the "root" > >directive directory should be whatever is in the distributed redmine > >public/ directory; not the entire installation including configuration. > > I am talking about the configuration recommended > at the webpage http://wiki.nginx.org/Redmine > not about "reading the redmine docs". But the user must have followed some documentation to install redmine in the first place; and if they unthinkingly install it into /var/www/redmine they are probably doing something wrong before nginx gets involved. I see instructions to install to /opt/redmine, and to /var/lib/redmine, and to /usr/share/redmine, and in each case they say to do something like ln -s /usr/share/redmine/public /var/www/redmine to have only the web-accessible content below /var/www/redmine. If the user really wants to install to /var/www/redmine, then they must modify the "root" directive (to be /var/www/redmine/public), as the words on the wiki page already say. I do not see this as an nginx-related security problem. > >And if /var/www/redmine does just have the public/ contents and the > >upstream servers reveal secret information, that would be their problem > >and not nginx's, I think. > > root /var/www/redmine; > try_files $uri @ruby; > > Request https://redmine.example.com/config/database.yml will be > processed by nginx, because the file /var/www/redmine/config/database.yml > exists. For details, see the manual about the try_files directive in nginx. The file /var/www/redmine/config/database.yml should not exist. 
If the file /var/www/redmine/config/database.yml does exist and the above nginx configuration is used, then the user will find that no part of their redmine web-related installation will work, because all of the images and stylesheets and javascripts are inaccessible. Correspondingly, if the user has installed only web content below /var/www, then using a different "root" directive will cause that installation not to work. > >>3. Unsafe variable $http_host was used instead of safe one $host > > > >I'm not sure how $http_host is less safe than $host. It is proxy_pass'ed > >to the "real" redmine server as the Host header. That server must be > >able to handle it safely anyway, no? > > Such configuration allow to spoof nginx built-in server selection rules. > because nginx will use server name from request line, but will provide > to upstream completely different server name, from Host request header. It is true that $http_host is completely controlled by the client, and $host is mostly controlled by the client. It is true that they can have different values. I do not see that the difference is a security issue in this case. > So, $host must be used always with proxy_pass instead of $http_host. If the upstream server would do anything security-relevant with the Host: header that it gets from nginx, it would do exactly the same with the Host: header that it would get from the client directly, no? Also: I suspect that $http_host was there because if you run nginx on not-port-80, using $host will probably lose that information. The server{} has no "listen", so it will use port 80 or 8080 depending on the invoking user. The config that was there (probably) works under some circumstances, and fails under some others. It's fine to change it to another configuration which works under some different circumstances, but you should probably be aware that you are failing under different circumstances too. 
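If the port-loss concern is the only reason to keep $http_host, one possible compromise (a sketch, not something proposed in the thread; $server_port holds the port the connection was accepted on) is to rebuild the header from the validated $host:

```nginx
location / {
    # $host comes from nginx's server selection rather than straight
    # from the client, so it cannot name a different virtual server;
    # appending $server_port preserves a non-standard listen port.
    proxy_set_header Host $host:$server_port;
    proxy_pass http://backend;
}
```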
As an "example" config, it should be understood that it does not cover all circumstances. And the newer example config may be more suitable for more installations -- I'll let someone else count them, if they care. But I don't see how security is involved in the change. Cheers, f -- Francis Daly francis at daoine.org From gmm at csdoc.com Mon Mar 9 18:24:43 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Mon, 09 Mar 2015 20:24:43 +0200 Subject: [security advisory] http://wiki.nginx.org/Redmine In-Reply-To: <20150309172508.GQ3010@daoine.org> References: <54FC6293.7000602@csdoc.com> <54FC637D.3010904@csdoc.com> <20150308205047.GO3010@daoine.org> <54FDB1B5.4060609@csdoc.com> <20150309172508.GQ3010@daoine.org> Message-ID: <54FDE56B.80200@csdoc.com> On 09.03.2015 19:25, Francis Daly wrote: >>> From reading the redmine docs, it looks like the contents of the "root" >>> directive directory should be whatever is in the distributed redmine >>> public/ directory; not the entire installation including configuration. >> >> I am talking about the configuration recommended >> at the webpage http://wiki.nginx.org/Redmine >> not about "reading the redmine docs". > > But the user must have followed some documentation to install redmine in > the first place; and if they unthinkingly install it into /var/www/redmine > they are probably doing something wrong before nginx gets involved. The redmine documentation: http://www.redmine.org/projects/redmine/wiki/RedmineInstall doesn't forbid users to install redmine into /var/www/redmine Even more, the redmine documentation: http://www.redmine.org/projects/redmine/wiki/HowTo_install_Redmine_on_CentOS_5 RECOMMENDS installing redmine into /var/www/redmine see: "Configure /var/www/redmine/config/database.yml" http://www.redmine.org/projects/redmine/wiki/How_to_Install_Redmine_on_CentOS_(Detailed) see: "Configure /var/www/redmine/config/database.yml" Also, the FHS http://www.pathname.com/fhs/pub/fhs-2.3.html doesn't say that /var/www/.... 
must contain only "static" files. > I see instructions to install to /opt/redmine, and to /var/lib/redmine, > and to /usr/share/redmine, and in each case they say to do something like > > ln -s /usr/share/redmine/public /var/www/redmine > > to have only the web-accessible content below /var/www/redmine. I don't see such instructions at http://wiki.nginx.org/Redmine > If the user really wants to install to /var/www/redmine, then they must > modify the "root" directive (to be /var/www/redmine/public), as the > words on the wiki page already say. I modified the root directive, but changed /var/www/redmine to /home/www/redmine, and everything works fine, but with the vulnerability. Must the user guess that they need to change "root /var/www/redmine;" to "root /var/www/redmine/public;" to fix this unobvious vulnerability? > I do not see this as an nginx-related security problem. As I already said, this is not an nginx-related security problem; it was a vulnerable-by-default configuration given as a wiki recommendation. >>> And if /var/www/redmine does just have the public/ contents and the >>> upstream servers reveal secret information, that would be their problem >>> and not nginx's, I think. >> >> root /var/www/redmine; >> try_files $uri @ruby; >> >> Request https://redmine.example.com/config/database.yml will be >> processed by nginx, because the file /var/www/redmine/config/database.yml >> exists. For details - see the manual about the try_files directive in nginx. > > The file /var/www/redmine/config/database.yml should not exist. 
No, if the nginx frontend can't process such non-existent files, it just silently proxies the request to the backend, and the backend processes such requests without any problems. So, this security vulnerability will be *invisible* to users. Try it yourself, if you don't believe me. > Correspondingly, if the user has installed only web content below > /var/www, then using a different "root" directive will cause that > installation not to work. The redmine documentation at the redmine site recommends installing the entire redmine into the /var/www/redmine directory, not only the public content. -- Best regards, Gena From gmm at csdoc.com Mon Mar 9 18:56:28 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Mon, 09 Mar 2015 20:56:28 +0200 Subject: [security advisory] $http_host vs $host In-Reply-To: <20150309172508.GQ3010@daoine.org> References: <54FC6293.7000602@csdoc.com> <54FC637D.3010904@csdoc.com> <20150308205047.GO3010@daoine.org> <54FDB1B5.4060609@csdoc.com> <20150309172508.GQ3010@daoine.org> Message-ID: <54FDECDC.8070600@csdoc.com> On 09.03.2015 19:25, Francis Daly wrote: >>>> Unsafe variable $http_host was used instead of safe one $host >>> >>> I'm not sure how $http_host is less safe than $host. It is proxy_pass'ed >>> to the "real" redmine server as the Host header. That server must be >>> able to handle it safely anyway, no? >> >> Such a configuration allows spoofing of nginx's built-in server selection rules, >> because nginx will use the server name from the request line, but will provide >> to the upstream a completely different server name, from the Host request header. > > It is true that $http_host is completely controlled by the client, and > $host is mostly controlled by the client. It is true that they can have > different values. I do not see that the difference is a security issue > in this case. 
server {
    listen 443 ssl;
    server_name private.example.com;
    location / {
        auth_basic "closed site";
        auth_basic_user_file conf/htpasswd;
        proxy_set_header Host $http_host;
        proxy_pass http://backend;
    }
}

server {
    listen 443 ssl;
    server_name public.example.com;
    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://backend;
    }
}

With such a configuration, anybody can bypass the nginx auth_basic restriction and access content from private.example.com without any login/password:

GET https://public.example.com/top-secret.pdf HTTP/1.1
Host: private.example.com

nginx will use the host name public.example.com for server selection and process the request in the second server, but will send "Host: private.example.com" and a relative URI in the request line to the backend. The backend will process such a request as a request to private.example.com, because the backend server sees only the relative URI in the request line and will use the host name from the Host: request header in this case.

=======================================================================

For proxy_pass, such a bug can be fixed just by always using $host instead of $http_host in the proxy_set_header Host directive.

For fastcgi_pass, such a bug can be fixed only by using two nginx servers - the first as frontend and the second as backend - because nginx sends the value of $http_host to FastCGI.

Bug cause: the FastCGI spec was created when only HTTP/1.0 existed and knows nothing about an absoluteURI in the request line - that feature was added in HTTP/1.1, after the FastCGI spec.

>> So, $host must be used always with proxy_pass instead of $http_host.
>
> If the upstream server would do anything security-relevant with the Host:
> header that it gets from nginx, it would do exactly the same with the
> Host: header that it would get from the client directly, no?

No.

1) http://tools.ietf.org/html/rfc7230#section-5.5
2) http://nginx.org/en/docs/http/ngx_http_core_module.html

$host in this order of precedence: host name from the request line, or host name from the "Host" 
request header field, or the server name matching a request -- Best regards, Gena From francis at daoine.org Mon Mar 9 22:50:11 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 9 Mar 2015 22:50:11 +0000 Subject: [security advisory] http://wiki.nginx.org/Redmine In-Reply-To: <54FDE56B.80200@csdoc.com> References: <54FC6293.7000602@csdoc.com> <54FC637D.3010904@csdoc.com> <20150308205047.GO3010@daoine.org> <54FDB1B5.4060609@csdoc.com> <20150309172508.GQ3010@daoine.org> <54FDE56B.80200@csdoc.com> Message-ID: <20150309225011.GR3010@daoine.org> On Mon, Mar 09, 2015 at 08:24:43PM +0200, Gena Makhomed wrote: > On 09.03.2015 19:25, Francis Daly wrote: Hi there, > >But the user must have followed some documentation to install redmine in > >the first place; and if they unthinkingly install it into /var/www/redmine > >they are probably doing something wrong before nginx gets involved. > > redmine documentation: > http://www.redmine.org/projects/redmine/wiki/RedmineInstall > don't forbid users to install redmine into /var/www/redmine Yes, redmine can be installed anywhere on the filesystem. > even more, redmine documentation: > http://www.redmine.org/projects/redmine/wiki/HowTo_install_Redmine_on_CentOS_5 > RECOMMENDS to install redmine into /var/www/redmine > see: "Configure /var/www/redmine/config/database.yml" Yes, that url shows redmine installed to /var/www/redmine. In that case, the nginx "root" should be /var/www/redmine/public. The url http://www.redmine.org/projects/redmine/wiki/HowTo_Install_Redmine_using_Debian_package includes the line "ln -s /usr/share/redmine/public /var/www/redmine". The url http://www.redmine.org/projects/redmine/wiki/HowTo_Install_Redmine_on_Debian_Squeeze_with_Postgresql_Ruby-on-Rails_and_Apache2-Passenger includes the line "ln -s /var/lib/redmine/public /var/www/redmine". 
The url http://www.redmine.org/projects/redmine/wiki/HowTo_Install_Redmine_2_integrated_with_Gitolite_2_on_Debian_Wheezy_with_Apache_and_Phusion_Passenger includes the lines "cd /var/www; ln -s /opt/redmine/public redmine". In those cases the nginx "root" should be /var/www/redmine. The nginx configuration must match the redmine installation that was done. There does not appear to be consistent documentation on where redmine is expected to be installed. From an nginx perspective, change "root" to match where the web documents are. That's what the wiki page said, and that's what the wiki page says, so it's all good. > >I see instructions to install to /opt/redmine, and to /var/lib/redmine, > >and to /usr/share/redmine, and in each case they say to do something like > > > > ln -s /usr/share/redmine/public /var/www/redmine > > > >to have only the web-accessible content below /var/www/redmine. > > I don't see such instructions at http://wiki.nginx.org/Redmine Correct. It looks like the nginx wiki page assumed one type of redmine install, without documenting exactly what type of install it assumed. That appears to still be the case, so that's all good too. > Must the user guess that they need to change "root /var/www/redmine;" > to "root /var/www/redmine/public;" to fix this unobvious vulnerability? Perhaps it will be useful for someone to note that the "root" directive value in nginx must be the root directory of the redmine web content, which is the "public" directory of the redmine distribution. That appears not to have been clear on the nginx wiki page. > >>Request https://redmine.example.com/config/database.yml will be > >>processed by nginx, because file /var/www/redmine/config/database.yml > >>exists. For details - see manual about try_files directive in nginx. > > > >The file /var/www/redmine/config/database.yml should not exist. 
> > it MUST exist, because the redmine install > instructions assume that such a file exists: > "Configure /var/www/redmine/config/database.yml" There is more than one set of redmine install instructions. The one you followed wanted a different "root" directive than what was on the nginx wiki. Now it wants the "root" directive that is on the nginx wiki. > >If the file /var/www/redmine/config/database.yml does exist and the > >above nginx configuration is used, then the user will find that no part > >of their redmine web-related installation will work, because all of the > >images and stylesheets and javascripts are inaccessible. > > No, if the nginx frontend can't process such non-existent files, > it just silently proxies the request to the backend, and the backend > processes such requests without any problems. That, I was not aware of. Thank you for correcting me. I guess it must happen because the back-end web server has been configured with the "right" DocumentRoot or whatever equivalent it uses. (Or maybe it is hardcoded to always look in "public" so does not need extra configuration.) > >Correspondingly, if the user has installed only web content below > >/var/www, then using a different "root" directive will cause that > >installation not to work. > > The redmine documentation at the redmine site recommends installing > the entire redmine into the /var/www/redmine directory, not only the public content. The redmine installation instructions are something that the redmine people might be interested in making more consistent. In nginx, everything on the filesystem below your "root" value is potentially accessible to the world. So it is prudent to set it to a directory that only contains public information. The nginx wiki page shows that now. 
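A minimal server block reflecting this point (the "root" names only the public/ directory, and everything else is handed to the application) might look like the following sketch; the server name and backend address are assumptions for illustration, not taken from the wiki page:

```nginx
server {
    server_name redmine.example.com;

    # Only the web-accessible content lives under root, so a request
    # for /config/database.yml can never match a real file here.
    root /var/www/redmine/public;

    location / {
        try_files $uri @ruby;
    }

    location @ruby {
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:3000;  # assumed Rails/Redmine backend
    }
}
```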
All the best, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Mon Mar 9 23:38:47 2015 From: nginx-forum at nginx.us (justink101) Date: Mon, 09 Mar 2015 19:38:47 -0400 Subject: How to "undo" a global server deny all in a location block Message-ID: Is it possible to undo a server level deny all; inside a more specific location block? See the following: server { allow 1.2.3.4; allow 2.3.4.5; deny all; location / { location ~ ^/api/(?.*) { # bunch of directives } location = /actions/foo.php { # bunch of directives } location = /actions/bar.php { #bunch of directives } location = /actions/allow-all.php { # this should allow all # bunch of directives } location ~\.php { # bunch of directives } } } All the location blocks except for /actions/allow-all.php should follow the global server allow and deny rules. /actions/allow-all.php should undo those rules and just allow all. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257155,257155#msg-257155 From francis at daoine.org Tue Mar 10 00:06:21 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 10 Mar 2015 00:06:21 +0000 Subject: How to "undo" a global server deny all in a location block In-Reply-To: References: Message-ID: <20150310000621.GS3010@daoine.org> On Mon, Mar 09, 2015 at 07:38:47PM -0400, justink101 wrote: Hi there, > Is it possible to undo a server level deny all; inside a more specific > location block? Yes. Normal directive inheritance rules apply, but with the note that "allow" and "deny" are one of a small set of directives which are inherited as a pair. So "allow 1.2.3.4;" inside a location means that the entire allow/deny configuration within that location is just the one entry. > location = /actions/allow-all.php { > # this should allow all > # bunch of directives > } Add "allow all;" in there. 
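Applied to the original server block, the answer amounts to something like this sketch (the other locations keep inheriting the server-level rules; directive bodies elided as in the question):

```nginx
server {
    allow 1.2.3.4;
    allow 2.3.4.5;
    deny all;

    location = /actions/allow-all.php {
        # allow/deny are inherited as a complete set, so this single
        # directive replaces the server-level rules for this location.
        allow all;
        # bunch of directives
    }
}
```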
f -- Francis Daly francis at daoine.org From gmm at csdoc.com Tue Mar 10 00:36:13 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Tue, 10 Mar 2015 02:36:13 +0200 Subject: [security advisory] http://wiki.nginx.org/Redmine In-Reply-To: <20150309225011.GR3010@daoine.org> References: <54FC6293.7000602@csdoc.com> <54FC637D.3010904@csdoc.com> <20150308205047.GO3010@daoine.org> <54FDB1B5.4060609@csdoc.com> <20150309172508.GQ3010@daoine.org> <54FDE56B.80200@csdoc.com> <20150309225011.GR3010@daoine.org> Message-ID: <54FE3C7D.3090106@csdoc.com> On 10.03.2015 0:50, Francis Daly wrote: >> even more, redmine documentation: >> http://www.redmine.org/projects/redmine/wiki/HowTo_install_Redmine_on_CentOS_5 >> RECOMMENDS to install redmine into /var/www/redmine >> see: "Configure /var/www/redmine/config/database.yml" > > Yes, that url shows redmine installed to /var/www/redmine. > > In that case, the nginx "root" should be /var/www/redmine/public. http://wiki.nginx.org/Redmine is now fixed and provides correct info. > Perhaps it will be useful for someone to note that the "root" directive > value in nginx must be the root directory of the redmine web content, > which is the "public" directory of the redmine distribution. That appears > not to have been clear on the nginx wiki page. The current nginx redmine config example on the wiki is safer, because even in the case of "ln -s /var/lib/redmine/public /var/www/redmine" and "root /var/www/redmine/public;" in the nginx config, everything should work fine, without any security vulnerabilities. And Debian users can probably easily guess that they should replace "root /var/www/redmine/public;" with "root /var/www/redmine;", because /var/www/redmine is a symlink to /var/lib/redmine/public in their install. >> The redmine documentation at the redmine site recommends installing >> the entire redmine into the /var/www/redmine directory, not only the public content. 
> > The redmine installation instructions are something that the redmine > people might be interested in making more consistent. May be this is Debian-way, make symlinks to only "static" files inside /var/www ? And all other services in Debian configured in the same way? So redmine can't break Debian packaging rules? And all other UNIX-like OS and distros do not have such requirements about creating such useless and potentially dangerous symlinks? Dangerous, - if nginx configured with "disable_symlinks on;" -- Best regards, Gena From wandenberg at gmail.com Tue Mar 10 00:37:09 2015 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Mon, 9 Mar 2015 21:37:09 -0300 Subject: Nginx upstream delays In-Reply-To: References: <2003872937.5059.1425912004407.JavaMail.zimbra@halfgaar.net> <1101963084.5404.1425914902687.JavaMail.zimbra@halfgaar.net> Message-ID: You also have to consider the rate your client get data from the server. The request time is the entire time spent from the beginning of the request until the end of response. So you may not have a problem on your server, just a lazy client :) On Mon, Mar 9, 2015 at 1:05 PM, B.R. wrote: > You are on a nginx mailing list, thus I will reply on the nginx side of > the problem. > > You can diagnose further to tell if the problem comes from nginx or from > the backend by using 2 different variables in your log message: > $request_time > > $upstream_response_time > > > If those values are close enough (most of the time equal), you might then > conclude that the trouble does not come from nginx, but rather from the > backend (or communication between those). > > If you want to investigate the communication level, you can set up some > tcpdump listening on the communication between nginx and the backend. You > will need to use TCP ports to do that. 
> Since you are using UNIX sockets, you might want to monitor file > descriptors, but I would (totally out of thin air) suppose it might not be > the source of your trouble, since you would have seen much more impact if > it was. > > I guess you will have to trace/dump stuff on your backend. PHP has some > slowlog capability firing up tracing in a code which takes too long to > finish. I do not know anything about Python servers, but you are not at the > right location for questions related to those anyway. > > Happy hunting, > --- > *B. R.* > > > On Mon, Mar 9, 2015 at 4:28 PM, Wiebe Cazemier wrote: > >> Hello, >> >> I have a question about sporadic long upstream response times I'm seeing >> on (two of) our Nginx servers. It's kind of hard to show and quantify, but >> I'll do my best. >> >> One is a Django Gunicorn server. We included the upstream response time >> in the Nginx access log and wrote a script to analyze them. What we see, is >> that on the login page of a website (a page that does almost nothing) >> 95%-99% of 'GET /customer/login/' requests are processed within about 50 >> ms. The other few percent can take several seconds. Sometimes even 5s. Our >> Munin graphs show no correlation in disk latency, cpu time, memory use, etc. >> >> I also added an access log to Gunicorn, so that I can see how long >> Gunicorn takes to process requests that Nginx thinks take long. Gunicorn >> has 8 workers. It can be seen that there is actually no delay in Gunicorn. 
>> For instance, Nginx sees this (the long upstream response time is marked >> red, 3.042s): >> >> 11.22.33.44 - - [06/Mar/2015:10:27:46 +0100] "GET /customer/login/ >> HTTP/1.1" 200 8310 0.061 0.121 "-" "Echoping/6.0.2" >> 11.22.33.44 - - [06/Mar/2015:10:27:48 +0100] "GET /customer/login/ >> HTTP/1.1" 200 8310 0.035 0.092 "-" "Echoping/6.0.2" >> 11.22.33.44 - - [06/Mar/2015:10:27:52 +0100] "GET /customer/login/ >> HTTP/1.1" 200 8310 3.042 3.098 "-" "Echoping/6.0.2" >> 11.22.33.44 - - [06/Mar/2015:10:27:53 +0100] "GET /customer/login/ >> HTTP/1.1" 200 8310 0.051 0.108 "-" "Echoping/6.0.2" >> 11.22.33.44 - - [06/Mar/2015:10:27:54 +0100] "GET /customer/login/ >> HTTP/1.1" 200 8310 0.038 0.096 "-" "Echoping/6.0.2" >> x.x.x.x - - [06/Mar/2015:10:27:58 +0100] "POST >> /customer/login/?next=/customer/home/ HTTP/1.1" 302 5 0.123 0.123 >> >> >> But then the corresponding Gunicorn logs shows normal response times (the >> figure after 'None', in ?s) (Corresponding line marked blue): >> >> >> 11.22.33.44 - - [06/Mar/2015:10:27:41] "GET /customer/login/ HTTP/1.0" >> 200 None 41686 "-" "Echoping/6.0.2" >> 11.22.33.44 - - [06/Mar/2015:10:27:42] "GET /customer/login/ HTTP/1.0" >> 200 None 27629 "-" "Echoping/6.0.2" >> 11.22.33.44 - - [06/Mar/2015:10:27:43] "GET /customer/login/ HTTP/1.0" >> 200 None 28143 "-" "Echoping/6.0.2" >> 11.22.33.44 - - [06/Mar/2015:10:27:44] "GET /customer/login/ HTTP/1.0" >> 200 None 41846 "-" "Echoping/6.0.2" >> 11.22.33.44 - - [06/Mar/2015:10:27:45] "GET /customer/login/ HTTP/1.0" >> 200 None 30192 "-" "Echoping/6.0.2" >> 11.22.33.44 - - [06/Mar/2015:10:27:46] "GET /customer/login/ HTTP/1.0" >> 200 None 59382 "-" "Echoping/6.0.2" >> 11.22.33.44 - - [06/Mar/2015:10:27:48] "GET /customer/login/ HTTP/1.0" >> 200 None 33308 "-" "Echoping/6.0.2" >> 11.22.33.44 - - [06/Mar/2015:10:27:52] "GET /customer/login/ HTTP/1.0" >> 200 None 39849 "-" "Echoping/6.0.2" >> 11.22.33.44 - - [06/Mar/2015:10:27:53] "GET /customer/login/ HTTP/1.0" >> 200 None 48321 "-" 
"Echoping/6.0.2" >> 11.22.33.44 - - [06/Mar/2015:10:27:54] "GET /customer/login/ HTTP/1.0" >> 200 None 36484 "-" "Echoping/6.0.2" >> x.x.x.x - - [06/Mar/2015:10:27:58] "POST >> /customer/login/?next=/customer/home/ HTTP/1.0" 302 None 122295 >> y.y.y.y - - [06/Mar/2015:10:28:02] "GET >> /customer/login/?next=/customer/home/ HTTP/1.0" 200 None 97824 >> y.y.y.y - - [06/Mar/2015:10:28:03] "GET >> /customer/login/?next=/customer/home/ HTTP/1.0" 200 None 78162 >> 11.22.33.44 - - [06/Mar/2015:10:28:26] "GET /customer/login/ HTTP/1.0" >> 200 None 38350 "-" "Echoping/6.0.2" >> 11.22.33.44 - - [06/Mar/2015:10:28:27] "GET /customer/login/ HTTP/1.0" >> 200 None 31076 "-" "Echoping/6.0.2" >> 11.22.33.44 - - [06/Mar/2015:10:28:28] "GET /customer/login/ HTTP/1.0" >> 200 None 28536 "-" "Echoping/6.0.2" >> 11.22.33.44 - - [06/Mar/2015:10:28:30] "GET /customer/login/ HTTP/1.0" >> 200 None 30981 "-" "Echoping/6.0.2" >> 11.22.33.44 - - [06/Mar/2015:10:28:31] "GET /customer/login/ HTTP/1.0" >> 200 None 29920 "-" "Echoping/6.0.2" >> >> >> As I said, there are currently 8 workers. I already increased them from >> 4. The log above shows that there are enough seconds between each request >> that 8 workers should be able to handle it. I also created a MySQL slow >> log, which doesn't show the delays. MySQL is always fast. >> >> Another server we have is Nginx with PHP-FPM (with 150 PHP children in >> the pool), no database access. On one particular recent log of a few >> hundred thousand entries, 99% of requests is done in 129ms. But one >> response even took 3170ms. 
Its PHP proxy settings are: >> >> location ~ \.php$ { >> fastcgi_split_path_info ^(.+\.php)(/.+)$; >> # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini >> >> # With php5-fpm: >> fastcgi_pass unix:/var/run/php5-fpm.sock; >> fastcgi_index index.php; >> include fastcgi_params; >> } >> >> >> It seems something in the communication between Nginx and the service >> behind it slows down sometimes, but I can't figure out what it might be. >> Any idea what it might be or how to diagnose it better? >> >> Regards, >> >> Wiebe Cazemier >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Tue Mar 10 07:51:19 2015 From: lists at ruby-forum.com (HD DH) Date: Tue, 10 Mar 2015 08:51:19 +0100 Subject: AES-NI support with nginx In-Reply-To: References: Message-ID: Kurt Cancemi wrote in post #1168394: > AES-NI is already on, there is no configuration option and it will work > as > long as your cpu supports it. > > --- > Kurt Cancemi > https://www.x64architecture.com Can you show me where does the source of information from ? -- Posted via http://www.ruby-forum.com/. From wiebe at halfgaar.net Tue Mar 10 09:23:52 2015 From: wiebe at halfgaar.net (Wiebe Cazemier) Date: Tue, 10 Mar 2015 10:23:52 +0100 (CET) Subject: Nginx upstream delays In-Reply-To: References: <2003872937.5059.1425912004407.JavaMail.zimbra@halfgaar.net> <1101963084.5404.1425914902687.JavaMail.zimbra@halfgaar.net> Message-ID: <1475835516.6200.1425979431989.JavaMail.zimbra@halfgaar.net> Hello, The $request_time and $upstream_request_time are already included. In the log below it says ' 3.042 3.098 '. 
The latter is the request time, the former the upstream request time. It doesn't seem to be an issue of slow clients (also not for other log entries, they're similar). It's going to be hard to diagnose if Gunicorn says the request takes 40 ms, but Nginx says it takes 3042 ms. Hunting on... - Wiebe ----- Original Message ----- > From: "Wandenberg Peixoto" > To: nginx at nginx.org > Sent: Tuesday, 10 March, 2015 1:37:09 AM > Subject: Re: Nginx upstream delays > You also have to consider the rate your client get data from the server. > The request time is the entire time spent from the beginning of the request > until the end of response. > So you may not have a problem on your server, just a lazy client :) > On Mon, Mar 9, 2015 at 1:05 PM, B.R. < reallfqq-nginx at yahoo.fr > wrote: > > You are on a nginx mailing list, thus I will reply on the nginx side of the > > problem. > > > You can diagnose further to tell if the problem comes from nginx or from > > the > > backend by using 2 different variables in your log message: > > > $request_time > > > $upstream_response_time > > > If those values are close enough (most of the time equal), you might then > > conclude that the trouble does not come from nginx, but rather from the > > backend (or communication between those). > > > If you want to investigate the communication level, you can set up some > > tcpdump listening on the communication between nginx and the backend. You > > will need to use TCP ports to do that. > > > Since you are using UNIX sockets, you might want to monitor file > > descriptors, > > but I would (totally out of thin air) suppose it might not be the source of > > your trouble, since you would have seen much more impact if it was. > > > I guess you will have to trace/dump stuff on your backend. PHP has some > > slowlog capability firing up tracing in a code which takes too long to > > finish. 
I do not know anything about Python servers, but you are not at the > > right location for questions related to those anyway. > > > Happy hunting, > > > --- > > > B. R. > > > On Mon, Mar 9, 2015 at 4:28 PM, Wiebe Cazemier < wiebe at halfgaar.net > > > wrote: > > > > Hello, > > > > > > I have a question about sporadic long upstream response times I'm seeing > > > on > > > (two of) our Nginx servers. It's kind of hard to show and quantify, but > > > I'll > > > do my best. > > > > > > One is a Django Gunicorn server. We included the upstream response time > > > in > > > the Nginx access log and wrote a script to analyze them. What we see, is > > > that on the login page of a website (a page that does almost nothing) > > > 95%-99% of 'GET /customer/login/' requests are processed within about 50 > > > ms. > > > The other few percent can take several seconds. Sometimes even 5s. Our > > > Munin > > > graphs show no correlation in disk latency, cpu time, memory use, etc. > > > > > > I also added an access log to Gunicorn, so that I can see how long > > > Gunicorn > > > takes to process requests that Nginx thinks take long. Gunicorn has 8 > > > workers. It can be seen that there is actually no delay in Gunicorn. 
For > > > instance, Nginx sees this (the long upstream response time is marked red, > > > 3.042s): > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:27:46 +0100] "GET /customer/login/ > > > > HTTP/1.1" > > > > 200 8310 0.061 0.121 "-" "Echoping/6.0.2" > > > > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:27:48 +0100] "GET /customer/login/ > > > > HTTP/1.1" > > > > 200 8310 0.035 0.092 "-" "Echoping/6.0.2" > > > > > > > > > > 11.22.33.44 - - [ 06/Mar/2015:10:27:52 +0100] "GET /customer/login/ > > > > HTTP/1.1" > > > > 200 8310 3.042 3.098 "-" "Echoping/6.0.2" > > > > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:27:53 +0100] "GET /customer/login/ > > > > HTTP/1.1" > > > > 200 8310 0.051 0.108 "-" "Echoping/6.0.2" > > > > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:27:54 +0100] "GET /customer/login/ > > > > HTTP/1.1" > > > > 200 8310 0.038 0.096 "-" "Echoping/6.0.2" > > > > > > > > > > x.x.x.x - - [06/Mar/2015:10:27:58 +0100] "POST > > > > /customer/login/?next=/customer/home/ HTTP/1.1" 302 5 0.123 0.123 > > > > > > > > > But then the corresponding Gunicorn logs shows normal response times (the > > > figure after 'None', in ?s) (Corresponding line marked blue): > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:27:41] "GET /customer/login/ HTTP/1.0" > > > > 200 > > > > None 41686 "-" "Echoping/6.0.2" > > > > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:27:42] "GET /customer/login/ HTTP/1.0" > > > > 200 > > > > None 27629 "-" "Echoping/6.0.2" > > > > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:27:43] "GET /customer/login/ HTTP/1.0" > > > > 200 > > > > None 28143 "-" "Echoping/6.0.2" > > > > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:27:44] "GET /customer/login/ HTTP/1.0" > > > > 200 > > > > None 41846 "-" "Echoping/6.0.2" > > > > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:27:45] "GET /customer/login/ HTTP/1.0" > > > > 200 > > > > None 30192 "-" "Echoping/6.0.2" > > > > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:27:46] "GET /customer/login/ HTTP/1.0" > > > > 200 > > > > None 
59382 "-" "Echoping/6.0.2" > > > > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:27:48] "GET /customer/login/ HTTP/1.0" > > > > 200 > > > > None 33308 "-" "Echoping/6.0.2" > > > > > > > > > > 11.22.33.44 - - [ 06/Mar/2015:10:27:52 ] "GET /customer/login/ > > > > HTTP/1.0" > > > > 200 > > > > None 39849 "-" "Echoping/6.0.2" > > > > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:27:53] "GET /customer/login/ HTTP/1.0" > > > > 200 > > > > None 48321 "-" "Echoping/6.0.2" > > > > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:27:54] "GET /customer/login/ HTTP/1.0" > > > > 200 > > > > None 36484 "-" "Echoping/6.0.2" > > > > > > > > > > x.x.x.x - - [06/Mar/2015:10:27:58] "POST > > > > /customer/login/?next=/customer/home/ HTTP/1.0" 302 None 122295 > > > > > > > > > > y.y.y.y - - [06/Mar/2015:10:28:02] "GET > > > > /customer/login/?next=/customer/home/ > > > > HTTP/1.0" 200 None 97824 > > > > > > > > > > y.y.y.y - - [06/Mar/2015:10:28:03] "GET > > > > /customer/login/?next=/customer/home/ > > > > HTTP/1.0" 200 None 78162 > > > > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:28:26] "GET /customer/login/ HTTP/1.0" > > > > 200 > > > > None 38350 "-" "Echoping/6.0.2" > > > > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:28:27] "GET /customer/login/ HTTP/1.0" > > > > 200 > > > > None 31076 "-" "Echoping/6.0.2" > > > > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:28:28] "GET /customer/login/ HTTP/1.0" > > > > 200 > > > > None 28536 "-" "Echoping/6.0.2" > > > > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:28:30] "GET /customer/login/ HTTP/1.0" > > > > 200 > > > > None 30981 "-" "Echoping/6.0.2" > > > > > > > > > > 11.22.33.44 - - [06/Mar/2015:10:28:31] "GET /customer/login/ HTTP/1.0" > > > > 200 > > > > None 29920 "-" "Echoping/6.0.2" > > > > > > > > > As I said, there are currently 8 workers. I already increased them from > > > 4. > > > The log above shows that there are enough seconds between each request > > > that > > > 8 workers should be able to handle it. 
I also created a MySQL slow log, > > > which doesn't show the delays. MySQL is always fast. > > > > > > Another server we have is Nginx with PHP-FPM (with 150 PHP children in > > > the > > > pool), no database access. On one particular recent log of a few hundred > > > thousand entries, 99% of requests is done in 129ms. But one response even > > > took 3170ms. Its PHP proxy settings are: > > > > > > > location ~ \.php$ { > > > > > > > > > > fastcgi_split_path_info ^(.+\.php)(/.+)$; > > > > > > > > > > # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini > > > > > > > > > > # With php5-fpm: > > > > > > > > > > fastcgi_pass unix:/var/run/php5-fpm.sock; > > > > > > > > > > fastcgi_index index.php; > > > > > > > > > > include fastcgi_params; > > > > > > > > > > } > > > > > > > > > It seems something in the communication between Nginx and the service > > > behind > > > it slows down sometimes, but I can't figure out what it might be. Any > > > idea > > > what it might be or how to diagnose it better? > > > > > > Regards, > > > > > > Wiebe Cazemier > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue Mar 10 10:01:48 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 10 Mar 2015 11:01:48 +0100 Subject: [security advisory] $http_host vs $host In-Reply-To: <54FDECDC.8070600@csdoc.com> References: <54FC6293.7000602@csdoc.com> <54FC637D.3010904@csdoc.com> <20150308205047.GO3010@daoine.org> <54FDB1B5.4060609@csdoc.com> <20150309172508.GQ3010@daoine.org> <54FDECDC.8070600@csdoc.com> Message-ID: You specifically configured nginx to pass the Host header ($http_host) to the backend, thus the backend has only this piece of information available... If you specified $host to be passed over, you would not have this flaw in your configuration. nginx does exactly what you configured. By default there is no such header set for the backend. Moreover using a Host header as a 'security feature' is... 
strange at the very least. The difference between the host (machine) and the Host header is common, and should not be relied on for security. Have you ever played with curl? If you use auth_basic for security, you should follow the same process when dealing with the backend. The $remote_user variable allows you to check which user has been authenticated. I would pass that to the backend. No user = no authentication. The only 'security advisory' I see here is to teach your sysadmin some basic security. --- *B. R.* On Mon, Mar 9, 2015 at 7:56 PM, Gena Makhomed wrote: > On 09.03.2015 19:25, Francis Daly wrote: > > Unsafe variable $http_host was used instead of safe one $host >>>>> >>>> >>>> I'm not sure how $http_host is less safe than $host. It is proxy_pass'ed >>>> to the "real" redmine server as the Host header. That server must be >>>> able to handle it safely anyway, no? >>>> >>> >>> Such a configuration allows spoofing nginx's built-in server selection rules, >>> because nginx will use the server name from the request line, but will provide >>> the upstream with a completely different server name, taken from the Host request header. >>> >> >> It is true that $http_host is completely controlled by the client, and >> $host is mostly controlled by the client. It is true that they can have >> different values. I do not see that the difference is a security issue in this case. 
>> >> > server { > listen 443 ssl; > server_name private.example.com; > location / { > auth_basic "closed site"; > auth_basic_user_file conf/htpasswd; > proxy_set_header Host $http_host; > proxy_pass http://backend; > } > } > > server { > listen 443 ssl; > server_name public.example.com; > location / { > proxy_set_header Host $http_host; > proxy_pass http://backend; > } > } > > in such configuration anybody can bypass nginx auth_basic restriction > and access content from private.example.com without any login/password: > > GET https://public.example.com/top-secret.pdf HTTP/1.1 > Host: private.example.com > > nginx will use host name public.example.com for server selection, > and process request in second server, but send to backend > "Host: private.example.com" and relative URI in request line. > > and backend will process such request as request to private.example.com > > because backend server see only relative uri in request line, > and will use host name from Host: request header in this case. > > ======================================================================= > > for proxy_pass such bug can be fixed just by using always > $host instead of $http_host in proxy_set_header Host directive. > > for fastcgi_pass such bug can be fixed only by using two nginx > servers - first for frontend, and second for backend, > because nginx send to fastcgi value of $http_host > > bug cause: > > fastcgi spec was created when only HTTP/1.0 exists > and don't know about absoluteURI in request line - > such feature was added in HTTP/1.1, after FastCGI spec. > > So, $host must be used always with proxy_pass instead of $http_host. >>> >> >> If the upstream server would do anything security-relevant with the Host: >> header that it gets from nginx, it would do exactly the same with the >> Host: header that it would get from the client directly, no? >> > > No. 
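Gena's fix (always forward $host) applied to the public-facing server block from the example above would look like this; an illustrative fragment only, with the server names and upstream taken from the example:

```nginx
server {
    listen 443 ssl;
    server_name public.example.com;
    location / {
        # $host is resolved in this order: host from the request line,
        # then the Host header, then the matched server_name. Forwarding
        # it instead of $http_host means the backend can no longer be
        # handed "Host: private.example.com" through this unprotected
        # server block.
        proxy_set_header Host $host;
        proxy_pass http://backend;
    }
}
```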
> > 1) http://tools.ietf.org/html/rfc7230#section-5.5 > > 2) http://nginx.org/en/docs/http/ngx_http_core_module.html > > $host > in this order of precedence: host name from the request line, or host > name from the "Host" request header field, or the server name matching a > request > > -- > Best regards, > Gena > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From TheGrandChamp at gmx.de Tue Mar 10 10:25:09 2015 From: TheGrandChamp at gmx.de (TheGrandChamp at gmx.de) Date: Tue, 10 Mar 2015 11:25:09 +0100 Subject: nginx + LibreSSL + ECDSA cert = Error Message-ID: An HTML attachment was scrubbed... URL: From TheGrandChamp at gmx.de Tue Mar 10 10:27:55 2015 From: TheGrandChamp at gmx.de (TheGrandChamp at gmx.de) Date: Tue, 10 Mar 2015 11:27:55 +0100 Subject: Aw: nginx + LibreSSL + ECDSA cert = Error In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From kyprizel at gmail.com Tue Mar 10 11:02:17 2015 From: kyprizel at gmail.com (kyprizel) Date: Tue, 10 Mar 2015 14:02:17 +0300 Subject: nginx + LibreSSL + ECDSA cert = Error In-Reply-To: References: Message-ID: wrong curve? On Tue, Mar 10, 2015 at 1:27 PM, wrote: > Hi, > > this time not stupidly formatted ;): > I compiled nginx 1.7.10 + LibreSSL 2.1.4, but am not able to use ECC > certificates. 
> > nginx -V: > nginx version: nginx/1.7.10 > built by gcc 4.7.2 (Debian 4.7.2-5) > TLS SNI support enabled > configure arguments: > --with-openssl=/root/git/build_nginx/build/libressl-2.1.4 > --with-pcre=/root/git/build_nginx/build/pcre-8.36 > --add-module=/root/git/build_nginx/build/echo-nginx-module-0.57 > --with-ld-opt=-lrt --prefix=/usr/local/nginx > --conf-path=/etc/nginx-libressl/nginx.conf > --http-log-path=/var/log/nginx/access.log > --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock > --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body > --http-fastcgi-temp-path=/var/lib/nginx/fastcgi > --http-proxy-temp-path=/var/lib/nginx/proxy > --http-scgi-temp-path=/var/lib/nginx/scgi > --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit > --with-ipv6 --with-http_ssl_module --with-http_stub_status_module > --with-http_realip_module --with-http_auth_request_module --with-file-aio > --with-http_spdy_module --with-http_addition_module --with-http_dav_module > --with-http_geoip_module --with-http_gzip_static_module > --with-http_image_filter_module --with-http_secure_link_module > --with-http_sub_module --with-http_xslt_module > > Using this script: > https://gist.github.com/leonklingele/a669803060fa92817f64 > > nginx error log gives me these messages: > 2015/03/09 17:00:11 [notice] 6484#0: signal process started > 2015/03/09 17:00:15 [alert] 6486#0: *732628 ignoring stale global SSL > error (SSL: error:14085042:SSL routines:SSL3_CTX_CTRL:called a function you > should not call) while SSL handshaking, client: xxx.xxx.xxx.xxx, server: > 0.0.0.0:443 > 2015/03/09 17:01:23 [notice] 6785#0: signal process started > 2015/03/09 17:01:25 [alert] 6787#0: *733012 ignoring stale global SSL > error (SSL: error:14085042:SSL routines:SSL3_CTX_CTRL:called a function you > should not call) while SSL handshaking, client: xxx.xxx.xxx.xxx, server: > 0.0.0.0:443 > 2015/03/09 17:05:27 [notice] 7479#0: signal process started 
> 2015/03/09 17:05:35 [alert] 7481#0: *734270 ignoring stale global SSL > error (SSL: error:14085042:SSL routines:SSL3_CTX_CTRL:called a function you > should not call) while SSL handshaking, client: xxx.xxx.xxx.xxx, server: > 0.0.0.0:443 > > RSA certificates work perfectly fine. > > I generated the ECDSA CSR (for Comodo) using: > $ openssl ecparam -out private.key -name secp384r1 -genkey > $ openssl req -new -key private.key -nodes -out request.csr > > Is this issue related to nginx or LibreSSL? > > Also see: http://forum.nginx.org/read.php?2,256381,256381#msg-256381 > > Thanks for helping, > Jonathan M?ller > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue Mar 10 12:28:03 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 10 Mar 2015 13:28:03 +0100 Subject: Nginx upstream delays In-Reply-To: <1475835516.6200.1425979431989.JavaMail.zimbra@halfgaar.net> References: <2003872937.5059.1425912004407.JavaMail.zimbra@halfgaar.net> <1101963084.5404.1425914902687.JavaMail.zimbra@halfgaar.net> <1475835516.6200.1425979431989.JavaMail.zimbra@halfgaar.net> Message-ID: Then it means nginx waited 3.042 seconds after having finished sending the request to the backend (ie time waiting for an answer). http://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_response_time Try to get the definition of the time you mention from Gunicorn. Times could be completely different from what you think they mean. Have you tried to use tcpdump on the frontend/backend communication? Does it show anything interesting? If the latency comes from neither ends specifically but from the communication channel itself, you will need to dig further. ?If it comes fro mthe backend, it might also be some waiting time (database? 
buffering?), which might not taken into account in the time you mentioned. The only certainty is: nginx is responsible for an overhead of 56ms on this request. Definitely not a problem on its side, I would conclude. --- *B. R.* On Tue, Mar 10, 2015 at 10:23 AM, Wiebe Cazemier wrote: > Hello, > > The $request_time and $upstream_request_time are already included. In the > log below it says '3.042 3.098'. The latter is the request time, the > former the upstream request time. It doesn't seem to be an issue of slow > clients (also not for other log entries, they're similar). > > It's going to be hard to diagnose if Gunicorn says the request takes 40 > ms, but Nginx says it takes 3042 ms. > > Hunting on... > > - Wiebe > > > ------------------------------ > > *From: *"Wandenberg Peixoto" > *To: *nginx at nginx.org > *Sent: *Tuesday, 10 March, 2015 1:37:09 AM > *Subject: *Re: Nginx upstream delays > > > You also have to consider the rate your client get data from the server. > The request time is the entire time spent from the beginning of the > request until the end of response. > So you may not have a problem on your server, just a lazy client :) > > On Mon, Mar 9, 2015 at 1:05 PM, B.R. wrote: > >> You are on a nginx mailing list, thus I will reply on the nginx side of >> the problem. >> >> You can diagnose further to tell if the problem comes from nginx or from >> the backend by using 2 different variables in your log message: >> $request_time >> >> $upstream_response_time >> >> >> If those values are close enough (most of the time equal), you might then >> conclude that the trouble does not come from nginx, but rather from the >> backend (or communication between those). >> >> If you want to investigate the communication level, you can set up some >> tcpdump listening on the communication between nginx and the backend. You >> will need to use TCP ports to do that. 
>> Since you are using UNIX sockets, you might want to monitor file >> descriptors, but I would (totally out of thin air) suppose it might not be >> the source of your trouble, since you would have seen much more impact if >> it was. >> >> I guess you will have to trace/dump stuff on your backend. PHP has some >> slowlog capability firing up tracing in a code which takes too long to >> finish. I do not know anything about Python servers, but you are not at the >> right location for questions related to those anyway. >> >> Happy hunting, >> --- >> *B. R.* >> >> >> On Mon, Mar 9, 2015 at 4:28 PM, Wiebe Cazemier >> wrote: >> >>> Hello, >>> >>> I have a question about sporadic long upstream response times I'm seeing >>> on (two of) our Nginx servers. It's kind of hard to show and quantify, but >>> I'll do my best. >>> >>> One is a Django Gunicorn server. We included the upstream response time >>> in the Nginx access log and wrote a script to analyze them. What we see, is >>> that on the login page of a website (a page that does almost nothing) >>> 95%-99% of 'GET /customer/login/' requests are processed within about 50 >>> ms. The other few percent can take several seconds. Sometimes even 5s. Our >>> Munin graphs show no correlation in disk latency, cpu time, memory use, etc. >>> >>> I also added an access log to Gunicorn, so that I can see how long >>> Gunicorn takes to process requests that Nginx thinks take long. Gunicorn >>> has 8 workers. It can be seen that there is actually no delay in Gunicorn. 
>>> For instance, Nginx sees this (the long upstream response time is marked >>> red, 3.042s): >>> >>> 11.22.33.44 - - [06/Mar/2015:10:27:46 +0100] "GET /customer/login/ >>> HTTP/1.1" 200 8310 0.061 0.121 "-" "Echoping/6.0.2" >>> 11.22.33.44 - - [06/Mar/2015:10:27:48 +0100] "GET /customer/login/ >>> HTTP/1.1" 200 8310 0.035 0.092 "-" "Echoping/6.0.2" >>> 11.22.33.44 - - [06/Mar/2015:10:27:52 +0100] "GET /customer/login/ >>> HTTP/1.1" 200 8310 3.042 3.098 "-" "Echoping/6.0.2" >>> 11.22.33.44 - - [06/Mar/2015:10:27:53 +0100] "GET /customer/login/ >>> HTTP/1.1" 200 8310 0.051 0.108 "-" "Echoping/6.0.2" >>> 11.22.33.44 - - [06/Mar/2015:10:27:54 +0100] "GET /customer/login/ >>> HTTP/1.1" 200 8310 0.038 0.096 "-" "Echoping/6.0.2" >>> x.x.x.x - - [06/Mar/2015:10:27:58 +0100] "POST >>> /customer/login/?next=/customer/home/ HTTP/1.1" 302 5 0.123 0.123 >>> >>> >>> But then the corresponding Gunicorn logs shows normal response times >>> (the figure after 'None', in ?s) (Corresponding line marked blue): >>> >>> >>> 11.22.33.44 - - [06/Mar/2015:10:27:41] "GET /customer/login/ HTTP/1.0" >>> 200 None 41686 "-" "Echoping/6.0.2" >>> 11.22.33.44 - - [06/Mar/2015:10:27:42] "GET /customer/login/ HTTP/1.0" >>> 200 None 27629 "-" "Echoping/6.0.2" >>> 11.22.33.44 - - [06/Mar/2015:10:27:43] "GET /customer/login/ HTTP/1.0" >>> 200 None 28143 "-" "Echoping/6.0.2" >>> 11.22.33.44 - - [06/Mar/2015:10:27:44] "GET /customer/login/ HTTP/1.0" >>> 200 None 41846 "-" "Echoping/6.0.2" >>> 11.22.33.44 - - [06/Mar/2015:10:27:45] "GET /customer/login/ HTTP/1.0" >>> 200 None 30192 "-" "Echoping/6.0.2" >>> 11.22.33.44 - - [06/Mar/2015:10:27:46] "GET /customer/login/ HTTP/1.0" >>> 200 None 59382 "-" "Echoping/6.0.2" >>> 11.22.33.44 - - [06/Mar/2015:10:27:48] "GET /customer/login/ HTTP/1.0" >>> 200 None 33308 "-" "Echoping/6.0.2" >>> 11.22.33.44 - - [06/Mar/2015:10:27:52] "GET /customer/login/ HTTP/1.0" >>> 200 None 39849 "-" "Echoping/6.0.2" >>> 11.22.33.44 - - [06/Mar/2015:10:27:53] "GET /customer/login/ 
HTTP/1.0" >>> 200 None 48321 "-" "Echoping/6.0.2" >>> 11.22.33.44 - - [06/Mar/2015:10:27:54] "GET /customer/login/ HTTP/1.0" >>> 200 None 36484 "-" "Echoping/6.0.2" >>> x.x.x.x - - [06/Mar/2015:10:27:58] "POST >>> /customer/login/?next=/customer/home/ HTTP/1.0" 302 None 122295 >>> y.y.y.y - - [06/Mar/2015:10:28:02] "GET >>> /customer/login/?next=/customer/home/ HTTP/1.0" 200 None 97824 >>> y.y.y.y - - [06/Mar/2015:10:28:03] "GET >>> /customer/login/?next=/customer/home/ HTTP/1.0" 200 None 78162 >>> 11.22.33.44 - - [06/Mar/2015:10:28:26] "GET /customer/login/ HTTP/1.0" >>> 200 None 38350 "-" "Echoping/6.0.2" >>> 11.22.33.44 - - [06/Mar/2015:10:28:27] "GET /customer/login/ HTTP/1.0" >>> 200 None 31076 "-" "Echoping/6.0.2" >>> 11.22.33.44 - - [06/Mar/2015:10:28:28] "GET /customer/login/ HTTP/1.0" >>> 200 None 28536 "-" "Echoping/6.0.2" >>> 11.22.33.44 - - [06/Mar/2015:10:28:30] "GET /customer/login/ HTTP/1.0" >>> 200 None 30981 "-" "Echoping/6.0.2" >>> 11.22.33.44 - - [06/Mar/2015:10:28:31] "GET /customer/login/ HTTP/1.0" >>> 200 None 29920 "-" "Echoping/6.0.2" >>> >>> >>> As I said, there are currently 8 workers. I already increased them from >>> 4. The log above shows that there are enough seconds between each request >>> that 8 workers should be able to handle it. I also created a MySQL slow >>> log, which doesn't show the delays. MySQL is always fast. >>> >>> Another server we have is Nginx with PHP-FPM (with 150 PHP children in >>> the pool), no database access. On one particular recent log of a few >>> hundred thousand entries, 99% of requests is done in 129ms. But one >>> response even took 3170ms. 
Its PHP proxy settings are: >>> >>> location ~ \.php$ { >>> fastcgi_split_path_info ^(.+\.php)(/.+)$; >>> # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini >>> >>> # With php5-fpm: >>> fastcgi_pass unix:/var/run/php5-fpm.sock; >>> fastcgi_index index.php; >>> include fastcgi_params; >>> } >>> >>> >>> It seems something in the communication between Nginx and the service >>> behind it slows down sometimes, but I can't figure out what it might be. >>> Any idea what it might be or how to diagnose it better? >>> >>> Regards, >>> >>> Wiebe Cazemier >>> >>> >>> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Mar 10 14:22:08 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Mar 2015 17:22:08 +0300 Subject: fastcgi_ignore_headers inside if{} - block In-Reply-To: <1425809127.674508647.wkpxr6o5@frv34.fwdcdn.com> References: <1425809127.674508647.wkpxr6o5@frv34.fwdcdn.com> Message-ID: <20150310142207.GC88631@mdounin.ru> Hello! On Sun, Mar 08, 2015 at 12:10:04PM +0200, wishmaster wrote: > Hi. > > I need to set some fastcgi_* directives inside an "if" block. E.g.: > > if ($foo = "bar") { > fastcgi_ignore_headers "Set-Cookie"; > } > > But this error occurs at the configtest stage: > > nginx: [emerg] "fastcgi_ignore_headers" directive is not allowed here > > Is there any workaround? See these articles for general hints: http://wiki.nginx.org/IfIsEvil http://wiki.nginx.org/IfIsEvil#What_to_do_instead In this particular case, it should also be possible to use fastcgi_ignore_headers unconditionally, with appropriate fastcgi_no_cache additionally configured (see http://nginx.org/r/fastcgi_no_cache for details). 
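Maxim's suggestion, sketched as a config fragment ($foo and the "bar" value are from the original question; which case should be cached is an assumption for illustration):

```nginx
# Move the condition out of "if": always ignore Set-Cookie for
# caching purposes, and gate caching itself on a mapped variable.
map $foo $skip_cache {
    default 1;   # skip the cache by default
    "bar"   0;   # allow caching when $foo is "bar"
}

server {
    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_ignore_headers "Set-Cookie";
        fastcgi_no_cache     $skip_cache;
        fastcgi_cache_bypass $skip_cache;
        include fastcgi_params;
    }
}
```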
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Mar 10 14:39:01 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Mar 2015 17:39:01 +0300 Subject: Nginx upstream delays In-Reply-To: <1101963084.5404.1425914902687.JavaMail.zimbra@halfgaar.net> References: <2003872937.5059.1425912004407.JavaMail.zimbra@halfgaar.net> <1101963084.5404.1425914902687.JavaMail.zimbra@halfgaar.net> Message-ID: <20150310143900.GD88631@mdounin.ru> Hello! On Mon, Mar 09, 2015 at 04:28:22PM +0100, Wiebe Cazemier wrote: > I have a question about sporadic long upstream response times > I'm seeing on (two of) our Nginx servers. It's kind of hard to > show and quantify, but I'll do my best. > > One is a Django Gunicorn server. We included the upstream > response time in the Nginx access log and wrote a script to > analyze them. What we see, is that on the login page of a > website (a page that does almost nothing) 95%-99% of 'GET > /customer/login/' requests are processed within about 50 ms. The > other few percent can take several seconds. Sometimes even 5s. > Our Munin graphs show no correlation in disk latency, cpu time, > memory use, etc. > > I also added an access log to Gunicorn, so that I can see how > long Gunicorn takes to process requests that Nginx thinks take > long. Gunicorn has 8 workers. It can be seen that there is > actually no delay in Gunicorn. For instance, Nginx sees this > (the long upstream response time is marked red, 3.042s): The 3s time suggests there is a packetloss somewhere. 
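Maxim's 3-second figure matches the classic initial TCP retransmission timeout (historically 3 s): if one packet is lost and retransmitted after the initial RTO, the request completes in roughly the normal service time plus 3 s. A quick arithmetic check against the log excerpts quoted above (the interpretation, not the numbers, is an assumption):

```python
# The outlier took 3.042 s upstream; healthy requests take ~0.03-0.06 s.
# Subtracting one classic initial RTO should leave a normal-looking time.
INITIAL_RTO = 3.0                       # historical initial retransmission timeout
slow = 3.042                            # $upstream_response_time of the outlier
normal = (0.061, 0.035, 0.051, 0.038)   # healthy samples from the same log

remainder = slow - INITIAL_RTO
print(round(remainder, 3))                      # 0.042
print(min(normal) <= remainder <= max(normal))  # True
```

Note that this thread's upstream is a unix socket, where no TCP retransmission is involved; if the pattern holds there, the loss (or a 3 s timeout of some other kind) must be elsewhere.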
-- Maxim Dounin http://nginx.org/ From wiebe at halfgaar.net Tue Mar 10 14:55:04 2015 From: wiebe at halfgaar.net (Wiebe Cazemier) Date: Tue, 10 Mar 2015 15:55:04 +0100 (CET) Subject: Nginx upstream delays In-Reply-To: References: <2003872937.5059.1425912004407.JavaMail.zimbra@halfgaar.net> <1101963084.5404.1425914902687.JavaMail.zimbra@halfgaar.net> <1475835516.6200.1425979431989.JavaMail.zimbra@halfgaar.net> Message-ID: <1713047647.6497.1425999304318.JavaMail.zimbra@halfgaar.net> Hello, The definition of the gunicorn time I mentioned is 'request time in microseconds'. Because Gunicorn only talks to Nginx, this would be the time Gunicorn needs (after having received the request) to generate the response and send it back to nginx, I would say. In most cases, this time matches the nginx upstream response time closely. There is a few milliseconds overhead, usually. I haven't tried a tcpdump. This is going to require some time to setup, because the long delay time is not reproducible. I need to log a day's worth of data and then devise some useful filter. I'll try to set something up. As for Maxim's response about packet loss; it would have to be packet loss on a file socket. Is this even possible? - Wiebe ----- Original Message ----- > From: "B.R." > To: "nginx ML" > Sent: Tuesday, 10 March, 2015 1:28:03 PM > Subject: Re: Nginx upstream delays > Then it means nginx waited 3.042 seconds after having finished sending the > request to the backend (ie time waiting for an answer). > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_response_time > Try to get the definition of the time you mention from Gunicorn. Times could > be completely different from what you think they mean. > Have you tried to use tcpdump on the frontend/backend communication? Does it > show anything interesting? > If the latency comes from neither ends specifically but from the > communication channel itself, you will need to dig further. 
> ?If it comes fro mthe backend, it might also be some waiting time (database? > buffering?), which might not taken into account in the time you mentioned. > The only certainty is: nginx is responsible for an overhead of 56ms on this > request. Definitely not a problem on its side, I would conclude. > --- > B. R. > On Tue, Mar 10, 2015 at 10:23 AM, Wiebe Cazemier < wiebe at halfgaar.net > > wrote: > > Hello, > > > The $request_time and $upstream_request_time are already included. In the > > log > > below it says ' 3.042 3.098 '. The latter is the request time, the former > > the upstream request time. It doesn't seem to be an issue of slow clients > > (also not for other log entries, they're similar). > > > It's going to be hard to diagnose if Gunicorn says the request takes 40 ms, > > but Nginx says it takes 3042 ms. > > > Hunting on... > > > - Wiebe > > > > From: "Wandenberg Peixoto" < wandenberg at gmail.com > > > > > > > To: nginx at nginx.org > > > > > > Sent: Tuesday, 10 March, 2015 1:37:09 AM > > > > > > Subject: Re: Nginx upstream delays > > > > > > You also have to consider the rate your client get data from the server. > > > > > > The request time is the entire time spent from the beginning of the > > > request > > > until the end of response. > > > > > > So you may not have a problem on your server, just a lazy client :) > > > > > > On Mon, Mar 9, 2015 at 1:05 PM, B.R. < reallfqq-nginx at yahoo.fr > wrote: > > > > > > > You are on a nginx mailing list, thus I will reply on the nginx side of > > > > the > > > > problem. 
> > > > > > > > > > You can diagnose further to tell if the problem comes from nginx or > > > > from > > > > the > > > > backend by using 2 different variables in your log message: > > > > > > > > > > $request_time > > > > > > > > > > $upstream_response_time > > > > > > > > > > If those values are close enough (most of the time equal), you might > > > > then > > > > conclude that the trouble does not come from nginx, but rather from the > > > > backend (or communication between those). > > > > > > > > > > If you want to investigate the communication level, you can set up some > > > > tcpdump listening on the communication between nginx and the backend. > > > > You > > > > will need to use TCP ports to do that. > > > > > > > > > > Since you are using UNIX sockets, you might want to monitor file > > > > descriptors, > > > > but I would (totally out of thin air) suppose it might not be the > > > > source > > > > of > > > > your trouble, since you would have seen much more impact if it was. > > > > > > > > > > I guess you will have to trace/dump stuff on your backend. PHP has some > > > > slowlog capability firing up tracing in a code which takes too long to > > > > finish. I do not know anything about Python servers, but you are not at > > > > the > > > > right location for questions related to those anyway. > > > > > > > > > > Happy hunting, > > > > > > > > > > --- > > > > > > > > > > B. R. > > > > > > > > > > On Mon, Mar 9, 2015 at 4:28 PM, Wiebe Cazemier < wiebe at halfgaar.net > > > > > wrote: > > > > > > > > > > > Hello, > > > > > > > > > > > > > > > I have a question about sporadic long upstream response times I'm > > > > > seeing > > > > > on > > > > > (two of) our Nginx servers. It's kind of hard to show and quantify, > > > > > but > > > > > I'll > > > > > do my best. > > > > > > > > > > > > > > > One is a Django Gunicorn server. We included the upstream response > > > > > time > > > > > in > > > > > the Nginx access log and wrote a script to analyze them. 
>>>>> What we see is that on the login page of a website (a page that does almost nothing) 95%-99% of 'GET /customer/login/' requests are processed within about 50 ms. The other few percent can take several seconds. Sometimes even 5 s. Our Munin graphs show no correlation in disk latency, CPU time, memory use, etc.
>>>>>
>>>>> I also added an access log to Gunicorn, so that I can see how long Gunicorn takes to process requests that Nginx thinks take long. Gunicorn has 8 workers. It can be seen that there is actually no delay in Gunicorn. For instance, Nginx sees this (note the long upstream response time in the third entry, 3.042 s):
>>>>>
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:27:46 +0100] "GET /customer/login/ HTTP/1.1" 200 8310 0.061 0.121 "-" "Echoping/6.0.2"
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:27:48 +0100] "GET /customer/login/ HTTP/1.1" 200 8310 0.035 0.092 "-" "Echoping/6.0.2"
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:27:52 +0100] "GET /customer/login/ HTTP/1.1" 200 8310 3.042 3.098 "-" "Echoping/6.0.2"
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:27:53 +0100] "GET /customer/login/ HTTP/1.1" 200 8310 0.051 0.108 "-" "Echoping/6.0.2"
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:27:54 +0100] "GET /customer/login/ HTTP/1.1" 200 8310 0.038 0.096 "-" "Echoping/6.0.2"
>>>>>> x.x.x.x - - [06/Mar/2015:10:27:58 +0100] "POST /customer/login/?next=/customer/home/ HTTP/1.1" 302 5 0.123 0.123
>>>>>
>>>>> But then the corresponding Gunicorn log shows normal response times (the figure after 'None', in µs); the corresponding entry is the 10:27:52 one:
>>>>>
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:27:41] "GET /customer/login/ HTTP/1.0" 200 None 41686 "-" "Echoping/6.0.2"
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:27:42] "GET /customer/login/ HTTP/1.0" 200 None 27629 "-" "Echoping/6.0.2"
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:27:43] "GET /customer/login/ HTTP/1.0" 200 None 28143 "-" "Echoping/6.0.2"
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:27:44] "GET /customer/login/ HTTP/1.0" 200 None 41846 "-" "Echoping/6.0.2"
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:27:45] "GET /customer/login/ HTTP/1.0" 200 None 30192 "-" "Echoping/6.0.2"
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:27:46] "GET /customer/login/ HTTP/1.0" 200 None 59382 "-" "Echoping/6.0.2"
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:27:48] "GET /customer/login/ HTTP/1.0" 200 None 33308 "-" "Echoping/6.0.2"
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:27:52] "GET /customer/login/ HTTP/1.0" 200 None 39849 "-" "Echoping/6.0.2"
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:27:53] "GET /customer/login/ HTTP/1.0" 200 None 48321 "-" "Echoping/6.0.2"
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:27:54] "GET /customer/login/ HTTP/1.0" 200 None 36484 "-" "Echoping/6.0.2"
>>>>>> x.x.x.x - - [06/Mar/2015:10:27:58] "POST /customer/login/?next=/customer/home/ HTTP/1.0" 302 None 122295
>>>>>> y.y.y.y - - [06/Mar/2015:10:28:02] "GET /customer/login/?next=/customer/home/ HTTP/1.0" 200 None 97824
>>>>>> y.y.y.y - - [06/Mar/2015:10:28:03] "GET /customer/login/?next=/customer/home/ HTTP/1.0" 200 None 78162
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:28:26] "GET /customer/login/ HTTP/1.0" 200 None 38350 "-" "Echoping/6.0.2"
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:28:27] "GET /customer/login/ HTTP/1.0" 200 None 31076 "-" "Echoping/6.0.2"
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:28:28] "GET /customer/login/ HTTP/1.0" 200 None 28536 "-" "Echoping/6.0.2"
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:28:30] "GET /customer/login/ HTTP/1.0" 200 None 30981 "-" "Echoping/6.0.2"
>>>>>> 11.22.33.44 - - [06/Mar/2015:10:28:31] "GET /customer/login/ HTTP/1.0" 200 None 29920 "-" "Echoping/6.0.2"
>>>>>
>>>>> As I said, there are currently 8 workers. I already increased them from 4. The log above shows that there are enough seconds between each request that 8 workers should be able to handle it. I also created a MySQL slow log, which doesn't show the delays. MySQL is always fast.
>>>>>
>>>>> Another server we have is Nginx with PHP-FPM (with 150 PHP children in the pool), no database access.
>>>>> On one particular recent log of a few hundred thousand entries, 99% of requests are done in 129 ms. But one response even took 3170 ms. Its PHP proxy settings are:
>>>>>
>>>>>> location ~ \.php$ {
>>>>>>     fastcgi_split_path_info ^(.+\.php)(/.+)$;
>>>>>>     # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
>>>>>>
>>>>>>     # With php5-fpm:
>>>>>>     fastcgi_pass unix:/var/run/php5-fpm.sock;
>>>>>>     fastcgi_index index.php;
>>>>>>     include fastcgi_params;
>>>>>> }
>>>>>
>>>>> It seems something in the communication between Nginx and the service behind it slows down sometimes, but I can't figure out what it might be. Any idea what it might be or how to diagnose it better?
>>>>>
>>>>> Regards,
>>>>>
>>>>> Wiebe Cazemier

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fabian.sales at donweb.com Tue Mar 10 16:48:07 2015
From: fabian.sales at donweb.com (Fabián M Sales)
Date: Tue, 10 Mar 2015 13:48:07 -0300
Subject: NGINX and mod_log_sql
In-Reply-To: <54F74C8B.80007@donweb.com>
References: <54F74C8B.80007@donweb.com>
Message-ID: <54FF2047.9000107@donweb.com>

Any idea?

Thanks.

On 04/03/15 15:18, Fabián M Sales wrote:
> Hello List.
>
> I use mod_log_sql-1.10 compiled into Apache 2.4.7, and it writes to MySQL correctly.
>
> With the nginx web server, though, the IP written to MySQL is the IP of the web server, not the IP of the client that accessed the website.
>
> Is it still possible to log the client IP instead of the nginx web server's IP?
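[Editor's note] The usual cause of the behaviour described above is that nginx, as the reverse proxy, is the TCP client Apache sees. A minimal, hedged sketch of forwarding the real client address so the Apache side can log it (the backend address and header names are illustrative, not from the thread):

```nginx
# nginx side: pass the original client address to the backend in request headers
location / {
    proxy_pass http://127.0.0.1:8080;                             # backend Apache (example address)
    proxy_set_header X-Real-IP       $remote_addr;                # original client address
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # appends to any existing chain
    proxy_set_header Host            $host;
}
```

On Apache 2.4, mod_remoteip (`RemoteIPHeader X-Real-IP`) can then substitute the forwarded address for the connection address, so logging modules such as mod_log_sql record the real client rather than the proxy.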
> --
> Firma Institucional
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From daniel at mostertman.org Tue Mar 10 16:50:39 2015
From: daniel at mostertman.org (Daniël Mostertman)
Date: Tue, 10 Mar 2015 17:50:39 +0100
Subject: NGINX and mod_log_sql
In-Reply-To: <54FF2047.9000107@donweb.com>
References: <54F74C8B.80007@donweb.com> <54FF2047.9000107@donweb.com>
Message-ID: <54FF20DF.8020005@mostertman.org>

Hi Fabián,

You most likely put nginx in front of Apache. If that's the case, then chances are that the address you see in your logs is the one nginx contacts Apache from, rather than that of the user connecting to nginx. You might want to look into passing the IP of the visitor to your backend (Apache). A good example can be found easily with a search engine, like this one:

http://www.daveperrett.com/articles/2009/08/10/passing-ips-to-apache-with-nginx-proxy/

Hope this helps,

Daniël

Fabián M Sales wrote on 10-03-2015 at 17:48:
> Any idea?
>
> Thanks.
> On 04/03/15 15:18, Fabián M Sales wrote:
>> Hello List.
>>
>> I use mod_log_sql-1.10 compiled into Apache 2.4.7, and it writes to MySQL correctly.
>>
>> With the nginx web server, though, the IP written to MySQL is the IP of the web server, not the IP of the client that accessed the website.
>>
>> Is it still possible to log the client IP instead of the nginx web server's IP?
>>
>> --
>> Firma Institucional
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From reallfqq-nginx at yahoo.fr Tue Mar 10 16:59:29 2015
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Tue, 10 Mar 2015 17:59:29 +0100
Subject: Nginx upstream delays
In-Reply-To: <1713047647.6497.1425999304318.JavaMail.zimbra@halfgaar.net>
References: <2003872937.5059.1425912004407.JavaMail.zimbra@halfgaar.net> <1101963084.5404.1425914902687.JavaMail.zimbra@halfgaar.net> <1475835516.6200.1425979431989.JavaMail.zimbra@halfgaar.net> <1713047647.6497.1425999304318.JavaMail.zimbra@halfgaar.net>
Message-ID:

Does the time reported by Gunicorn match the upstream time reported by nginx for the faulty request?
- If yes, then the slowdown comes from Gunicorn (or, most probably, from the application within)
- If no, then the shallow waters between front and backend need inspection by whatever means available, system-wide, since the UNIX sockets use file descriptors and inodes from the filesystem

I suggest you monitor the whole machine, which includes the whole OS and what other processes do, and the whole impact on resources. You will need to add what happens elsewhere on the machine if you are using a shared virtualized environment. Sometimes stuff that appears to be loosely coupled interferes with each other. :o)

Once again, to me, the trouble definitely does not come from nginx.
---
*B. R.*

On Tue, Mar 10, 2015 at 3:55 PM, Wiebe Cazemier wrote:
> Hello,
>
> The definition of the gunicorn time I mentioned is 'request time in microseconds'. Because Gunicorn only talks to Nginx, this would be the time Gunicorn needs (after having received the request) to generate the response and send it back to nginx, I would say. In most cases, this time matches the nginx upstream response time closely. There is usually a few milliseconds of overhead.
>
> I haven't tried a tcpdump. This is going to require some time to set up, because the long delay is not reproducible. I need to log a day's worth of data and then devise some useful filter.
> I'll try to set something up.
>
> As for Maxim's response about packet loss: it would have to be packet loss on a file socket. Is this even possible?
>
> - Wiebe
>
> ------------------------------
>
> *From: *"B.R."
> *To: *"nginx ML"
> *Sent: *Tuesday, 10 March, 2015 1:28:03 PM
> *Subject: *Re: Nginx upstream delays
>
> Then it means nginx waited 3.042 seconds after having finished sending the request to the backend (i.e. time waiting for an answer).
>
> http://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_response_time
>
> Try to get the definition of the time you mention from Gunicorn. Times could be completely different from what you think they mean.
>
> Have you tried to use tcpdump on the frontend/backend communication? Does it show anything interesting?
> If the latency comes from neither end specifically but from the communication channel itself, you will need to dig further.
> If it comes from the backend, it might also be some waiting time (database? buffering?), which might not be taken into account in the time you mentioned.
>
> The only certainty is: nginx is responsible for an overhead of 56 ms on this request. Definitely not a problem on its side, I would conclude.
> ---
> *B. R.*
>
> On Tue, Mar 10, 2015 at 10:23 AM, Wiebe Cazemier wrote:
>
>> Hello,
>>
>> The $request_time and $upstream_response_time are already included. In the log below it says '3.042 3.098'. The latter is the request time, the former the upstream response time. It doesn't seem to be an issue of slow clients (also not for other log entries, they're similar).
>>
>> It's going to be hard to diagnose if Gunicorn says the request takes 40 ms, but Nginx says it takes 3042 ms.
>>
>> Hunting on...
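[Editor's note] To make the nginx-vs-Gunicorn comparison in this thread systematic, the two timing fields nginx logs (upstream response time and total request time, as in the quoted samples) can be extracted and outliers flagged. A rough sketch, not from the thread, assuming that log format:

```python
import re

# Matches the quoted access-log format: "REQUEST" status bytes upstream_time request_time
LINE_RE = re.compile(
    r'"(?P<request>[^"]*)"\s+(?P<status>\d{3})\s+(?P<bytes>\d+)\s+'
    r'(?P<upstream>\d+\.\d+)\s+(?P<total>\d+\.\d+)'
)

def slow_requests(lines, threshold=1.0):
    """Yield (request, upstream_secs, total_secs) for entries whose upstream time exceeds threshold."""
    for line in lines:
        m = LINE_RE.search(line)
        if m and float(m.group("upstream")) >= threshold:
            yield m.group("request"), float(m.group("upstream")), float(m.group("total"))

sample = [
    '11.22.33.44 - - [06/Mar/2015:10:27:48 +0100] "GET /customer/login/ HTTP/1.1" 200 8310 0.035 0.092 "-" "Echoping/6.0.2"',
    '11.22.33.44 - - [06/Mar/2015:10:27:52 +0100] "GET /customer/login/ HTTP/1.1" 200 8310 3.042 3.098 "-" "Echoping/6.0.2"',
]
print(list(slow_requests(sample)))  # only the 3.042 s entry is reported
```

Running this over a day's worth of logs, and the Gunicorn log over the same window, makes the discrepancy easy to line up by timestamp.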
>> - Wiebe
>>
>> ------------------------------
>>
>> *From: *"Wandenberg Peixoto"
>> *To: *nginx at nginx.org
>> *Sent: *Tuesday, 10 March, 2015 1:37:09 AM
>> *Subject: *Re: Nginx upstream delays
>>
>> You also have to consider the rate at which your client gets data from the server. The request time is the entire time spent from the beginning of the request until the end of the response. So you may not have a problem on your server, just a lazy client :)
>>
>> On Mon, Mar 9, 2015 at 1:05 PM, B.R. wrote:
>>
>>> You are on an nginx mailing list, thus I will reply on the nginx side of the problem.
>>>
>>> You can diagnose further to tell if the problem comes from nginx or from the backend by using 2 different variables in your log message:
>>> $request_time
>>> $upstream_response_time
>>>
>>> If those values are close enough (most of the time equal), you might then conclude that the trouble does not come from nginx, but rather from the backend (or the communication between those).
>>>
>>> If you want to investigate the communication level, you can set up some tcpdump listening on the communication between nginx and the backend. You will need to use TCP ports to do that.
>>> Since you are using UNIX sockets, you might want to monitor file descriptors, but I would (totally out of thin air) suppose it might not be the source of your trouble, since you would have seen much more impact if it was.
>>>
>>> I guess you will have to trace/dump stuff on your backend. PHP has some slowlog capability firing up tracing in code which takes too long to finish. I do not know anything about Python servers, but you are not at the right location for questions related to those anyway.
>>>
>>> Happy hunting,
>>> ---
>>> *B. R.*
>>>
>>> On Mon, Mar 9, 2015 at 4:28 PM, Wiebe Cazemier wrote:
>>>
>>>> Hello,
>>>>
>>>> I have a question about sporadic long upstream response times I'm seeing on (two of) our Nginx servers.
>>>> It's kind of hard to show and quantify, but I'll do my best.
>>>>
>>>> One is a Django Gunicorn server. We included the upstream response time in the Nginx access log and wrote a script to analyze them. What we see is that on the login page of a website (a page that does almost nothing) 95%-99% of 'GET /customer/login/' requests are processed within about 50 ms. The other few percent can take several seconds. Sometimes even 5 s. Our Munin graphs show no correlation in disk latency, CPU time, memory use, etc.
>>>>
>>>> I also added an access log to Gunicorn, so that I can see how long Gunicorn takes to process requests that Nginx thinks take long. Gunicorn has 8 workers. It can be seen that there is actually no delay in Gunicorn. For instance, Nginx sees this (note the long upstream response time in the third entry, 3.042 s):
>>>>
>>>> 11.22.33.44 - - [06/Mar/2015:10:27:46 +0100] "GET /customer/login/ HTTP/1.1" 200 8310 0.061 0.121 "-" "Echoping/6.0.2"
>>>> 11.22.33.44 - - [06/Mar/2015:10:27:48 +0100] "GET /customer/login/ HTTP/1.1" 200 8310 0.035 0.092 "-" "Echoping/6.0.2"
>>>> 11.22.33.44 - - [06/Mar/2015:10:27:52 +0100] "GET /customer/login/ HTTP/1.1" 200 8310 3.042 3.098 "-" "Echoping/6.0.2"
>>>> 11.22.33.44 - - [06/Mar/2015:10:27:53 +0100] "GET /customer/login/ HTTP/1.1" 200 8310 0.051 0.108 "-" "Echoping/6.0.2"
>>>> 11.22.33.44 - - [06/Mar/2015:10:27:54 +0100] "GET /customer/login/ HTTP/1.1" 200 8310 0.038 0.096 "-" "Echoping/6.0.2"
>>>> x.x.x.x - - [06/Mar/2015:10:27:58 +0100] "POST /customer/login/?next=/customer/home/ HTTP/1.1" 302 5 0.123 0.123
>>>>
>>>> But then the corresponding Gunicorn log shows normal response times (the figure after 'None', in µs); the corresponding entry is the 10:27:52 one:
>>>>
>>>> 11.22.33.44 - - [06/Mar/2015:10:27:41] "GET /customer/login/ HTTP/1.0" 200 None 41686 "-" "Echoping/6.0.2"
>>>> 11.22.33.44 - - [06/Mar/2015:10:27:42] "GET /customer/login/ HTTP/1.0" 200 None 27629 "-" "Echoping/6.0.2"
>>>> 11.22.33.44 - - [06/Mar/2015:10:27:43] "GET /customer/login/ HTTP/1.0" 200 None 28143 "-" "Echoping/6.0.2"
>>>> 11.22.33.44 - - [06/Mar/2015:10:27:44] "GET /customer/login/ HTTP/1.0" 200 None 41846 "-" "Echoping/6.0.2"
>>>> 11.22.33.44 - - [06/Mar/2015:10:27:45] "GET /customer/login/ HTTP/1.0" 200 None 30192 "-" "Echoping/6.0.2"
>>>> 11.22.33.44 - - [06/Mar/2015:10:27:46] "GET /customer/login/ HTTP/1.0" 200 None 59382 "-" "Echoping/6.0.2"
>>>> 11.22.33.44 - - [06/Mar/2015:10:27:48] "GET /customer/login/ HTTP/1.0" 200 None 33308 "-" "Echoping/6.0.2"
>>>> 11.22.33.44 - - [06/Mar/2015:10:27:52] "GET /customer/login/ HTTP/1.0" 200 None 39849 "-" "Echoping/6.0.2"
>>>> 11.22.33.44 - - [06/Mar/2015:10:27:53] "GET /customer/login/ HTTP/1.0" 200 None 48321 "-" "Echoping/6.0.2"
>>>> 11.22.33.44 - - [06/Mar/2015:10:27:54] "GET /customer/login/ HTTP/1.0" 200 None 36484 "-" "Echoping/6.0.2"
>>>> x.x.x.x - - [06/Mar/2015:10:27:58] "POST /customer/login/?next=/customer/home/ HTTP/1.0" 302 None 122295
>>>> y.y.y.y - - [06/Mar/2015:10:28:02] "GET /customer/login/?next=/customer/home/ HTTP/1.0" 200 None 97824
>>>> y.y.y.y - - [06/Mar/2015:10:28:03] "GET /customer/login/?next=/customer/home/ HTTP/1.0" 200 None 78162
>>>> 11.22.33.44 - - [06/Mar/2015:10:28:26] "GET /customer/login/ HTTP/1.0" 200 None 38350 "-" "Echoping/6.0.2"
>>>> 11.22.33.44 - - [06/Mar/2015:10:28:27] "GET /customer/login/ HTTP/1.0" 200 None 31076 "-" "Echoping/6.0.2"
>>>> 11.22.33.44 - - [06/Mar/2015:10:28:28] "GET /customer/login/ HTTP/1.0" 200 None 28536 "-" "Echoping/6.0.2"
>>>> 11.22.33.44 - - [06/Mar/2015:10:28:30] "GET /customer/login/ HTTP/1.0" 200 None 30981 "-" "Echoping/6.0.2"
>>>> 11.22.33.44 - - [06/Mar/2015:10:28:31] "GET /customer/login/ HTTP/1.0" 200 None 29920 "-" "Echoping/6.0.2"
>>>>
>>>> As I said, there are currently 8 workers.
>>>> I already increased them from 4. The log above shows that there are enough seconds between each request that 8 workers should be able to handle it. I also created a MySQL slow log, which doesn't show the delays. MySQL is always fast.
>>>>
>>>> Another server we have is Nginx with PHP-FPM (with 150 PHP children in the pool), no database access. On one particular recent log of a few hundred thousand entries, 99% of requests are done in 129 ms. But one response even took 3170 ms. Its PHP proxy settings are:
>>>>
>>>> location ~ \.php$ {
>>>>     fastcgi_split_path_info ^(.+\.php)(/.+)$;
>>>>     # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
>>>>
>>>>     # With php5-fpm:
>>>>     fastcgi_pass unix:/var/run/php5-fpm.sock;
>>>>     fastcgi_index index.php;
>>>>     include fastcgi_params;
>>>> }
>>>>
>>>> It seems something in the communication between Nginx and the service behind it slows down sometimes, but I can't figure out what it might be. Any idea what it might be or how to diagnose it better?
>>>>
>>>> Regards,
>>>>
>>>> Wiebe Cazemier
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kurt at x64architecture.com Tue Mar 10 18:29:05 2015
From: kurt at x64architecture.com (Kurt Cancemi)
Date: Tue, 10 Mar 2015 14:29:05 -0400
Subject: AES-NI support with nginx
In-Reply-To:
References:
Message-ID: <8C0673E4-29B5-4DAC-B372-41A4CA8125AA@x64architecture.com>

http://openssl.6102.n7.nabble.com/having-a-lot-of-troubles-trying-to-get-AES-NI-working-tp44285p44301.html

> On Mar 10, 2015, at 3:51 AM, HD DH wrote:
>
> Kurt Cancemi wrote in post #1168394:
>> AES-NI is already on, there is no configuration option and it will work as long as your CPU supports it.
>> ---
>> Kurt Cancemi
>> https://www.x64architecture.com
>
> Can you show me where that information comes from?
>
> --
> Posted via http://www.ruby-forum.com/.
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lists at ruby-forum.com Tue Mar 10 18:38:05 2015
From: lists at ruby-forum.com (HD DH)
Date: Tue, 10 Mar 2015 19:38:05 +0100
Subject: AES-NI support with nginx
In-Reply-To:
References:
Message-ID:

I'm very interested in this issue. I have a problem using the OpenSSL version and the AES-NI engine. The details of my question:
http://stackoverflow.com/questions/28939825/how-to-config-openssl-engine-aes-ni-in-nginx
Please suggest a solution for me. Thank you so much.

--
Posted via http://www.ruby-forum.com/.

From stl at wiredrive.com Tue Mar 10 19:14:02 2015
From: stl at wiredrive.com (Scott Larson)
Date: Tue, 10 Mar 2015 12:14:02 -0700
Subject: nginx + LibreSSL + ECDSA cert = Error
In-Reply-To:
References:
Message-ID:

I've been using ECDSA without issue on 1.7.10 with LibreSSL 2.1.4.
Method to generate the key was:

openssl ecparam -out ec_key.pem -name secp384r1 -genkey
openssl req -newkey ec:ec_key.pem -nodes -sha256 -keyout www.domain.tld.key -new -out www.domain.tld.csr

Then I'm declaring the ECDSA options in ssl_ciphers and defining ssl_ecdh_curve:

ssl_ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA;
ssl_ecdh_curve secp384r1;

Scott Larson
Lead Systems Administrator, Wiredrive
T 310 823 8238 x1106 | M 310 904 8818

On Tue, Mar 10, 2015 at 3:25 AM, wrote:

> Hi,
>
> I compiled nginx 1.7.10 + LibreSSL 2.1.4, but am not able to use ECC certificates.
>
> nginx -V:
>
> nginx version: nginx/1.7.10
> built by gcc 4.7.2 (Debian 4.7.2-5)
> TLS SNI support enabled
> configure arguments: --with-openssl=/root/git/build_nginx/build/libressl-2.1.4 --with-pcre=/root/git/build_nginx/build/pcre-8.36 --add-module=/root/git/build_nginx/build/echo-nginx-module-0.57 --with-ld-opt=-lrt --prefix=/usr/local/nginx --conf-path=/etc/nginx-libressl/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-file-aio --with-http_spdy_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module
--with-http_image_filter_module --with-http_secure_link_module --with-http_sub_module --with-http_xslt_module
>
> Using this script: https://gist.github.com/leonklingele/a669803060fa92817f64
>
> nginx error log gives me these messages:
>
> 2015/03/09 17:00:11 [notice] 6484#0: signal process started
> 2015/03/09 17:00:15 [alert] 6486#0: *732628 ignoring stale global SSL error (SSL: error:14085042:SSL routines:SSL3_CTX_CTRL:called a function you should not call) while SSL handshaking, client: xxx.xxx.xxx.xxx, server: 0.0.0.0:443
> 2015/03/09 17:01:23 [notice] 6785#0: signal process started
> 2015/03/09 17:01:25 [alert] 6787#0: *733012 ignoring stale global SSL error (SSL: error:14085042:SSL routines:SSL3_CTX_CTRL:called a function you should not call) while SSL handshaking, client: xxx.xxx.xxx.xxx, server: 0.0.0.0:443
> 2015/03/09 17:05:27 [notice] 7479#0: signal process started
> 2015/03/09 17:05:35 [alert] 7481#0: *734270 ignoring stale global SSL error (SSL: error:14085042:SSL routines:SSL3_CTX_CTRL:called a function you should not call) while SSL handshaking, client: xxx.xxx.xxx.xxx, server: 0.0.0.0:443
>
> RSA certificates work perfectly fine.
>
> I generated the ECDSA CSR (for Comodo) using:
>
> $ openssl ecparam -out private.key -name secp384r1 -genkey
> $ openssl req -new -key private.key -nodes -out request.csr
>
> Is this issue related to nginx or LibreSSL?
>
> Also see: http://forum.nginx.org/read.php?2,256381,256381#msg-256381
>
> Thanks for helping,
> Jonathan Müller
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From luky-37 at hotmail.com Tue Mar 10 19:17:16 2015
From: luky-37 at hotmail.com (Lukas Tribus)
Date: Tue, 10 Mar 2015 20:17:16 +0100
Subject: AES-NI support with nginx
In-Reply-To:
References: ,
Message-ID:

> I'm very interested in this issue. I have a problem using the OpenSSL
> version and the AES-NI engine. The details of my question:
> http://stackoverflow.com/questions/28939825/how-to-config-openssl-engine-aes-ni-in-nginx
> Please suggest a solution for me.

Use official openssl distributions, not some github fork, and don't configure any "engines" in nginx.

Best thing for you to do is to use precompiled binaries from nginx.org or your linux distro, instead of compiling them on your own.

Lukas

From kpariani at zimbra.com Tue Mar 10 19:42:41 2015
From: kpariani at zimbra.com (Kunal Pariani)
Date: Tue, 10 Mar 2015 14:42:41 -0500 (CDT)
Subject: nginx page caching not working for responses with valid (rfc 1123 compliant) Expires header
Message-ID: <1031883112.3420720.1426016561093.JavaMail.zimbra@zimbra.com>

Hello,
I am on nginx-1.7.1 & trying to use nginx's page caching feature, but I run into an issue for responses with a valid 'Expires' header which seems to be in the correct RFC 1123 compliant format. Nginx somehow doesn't like it & hence doesn't cache such responses. Is this a bug?
----------------------------------------------------------
Config in http block to enable page caching
----------------------------------------------------------

proxy_cache_path /opt/zimbra/data/tmp/nginx/cache keys_zone=zimbra:10m;
proxy_cache zimbra;
proxy_cache_key "$scheme$request_method$host$request_uri";
proxy_cache_valid 200 302 10m;
add_header X-Proxy-Cache $upstream_cache_status;

------------------------------------------------------------------------------------------------------
Logs showing that a .css with a valid Expires header doesn't get cached because of header parsing failure
------------------------------------------------------------------------------------------------------

2015/03/10 13:58:29 [debug] 17311#0: *7 http upstream request: "/css/images,common,dwt,msgview,login,zm,spellcheck,skin.css?v=150302053859&debug=false&skin=harmony&locale=en_US"
2015/03/10 13:58:29 [debug] 17311#0: *7 http upstream send request handler
2015/03/10 13:58:29 [debug] 17311#0: *7 http upstream send request
2015/03/10 13:58:29 [debug] 17311#0: *7 chain writer in: 0000000000000000
2015/03/10 13:58:29 [debug] 17311#0: *7 event timer: 21, old: 1426013969478, new: 1426013969533
2015/03/10 13:58:29 [debug] 17311#0: *7 http upstream process header
2015/03/10 13:58:29 [debug] 17311#0: *7 SSL_read: 3904
2015/03/10 13:58:29 [debug] 17311#0: *7 http proxy status 200 "200 OK"
2015/03/10 13:58:29 [debug] 17311#0: *7 http proxy header: "Date: Tue, 10 Mar 2015 18:58:29 GMT"
2015/03/10 13:58:29 [debug] 17311#0: *7 http proxy header: "X-Frame-Options: SAMEORIGIN"
2015/03/10 13:58:29 [debug] 17311#0: *7 ngx_http_parse_time failed: h->value.data = Thu, 9 Apr 2015 19:58:29 GMT, h->value.len = 28
2015/03/10 13:58:29 [debug] 17311#0: *7 http proxy header: "Expires: Thu, 9 Apr 2015 19:58:29 GMT"
2015/03/10 13:58:29 [debug] 17311#0: *7 http proxy header: "Cache-Control: public, max-age=2595600"
2015/03/10 13:58:29 [debug] 17311#0: *7 http proxy header: "Vary: User-Agent"
2015/03/10
13:58:29 [debug] 17311#0: *7 http proxy header: "Content-Type: text/css"
2015/03/10 13:58:29 [debug] 17311#0: *7 http proxy header: "Vary: Accept-Encoding, User-Agent"
2015/03/10 13:58:29 [debug] 17311#0: *7 http proxy header: "Content-Length: 295693"
2015/03/10 13:58:29 [debug] 17311#0: *7 http proxy header done
2015/03/10 13:58:29 [debug] 17311#0: *7 http script var: "MISS"
2015/03/10 13:58:29 [debug] 17311#0: *7 spdy header filter
2015/03/10 13:58:29 [debug] 17311#0: *7 malloc: 00000000025EB460:385
2015/03/10 13:58:29 [debug] 17311#0: *7 spdy deflate out: ni:00000000025EB5C8 no:00000000025D780E ai:0 ao:56 rc:0
2015/03/10 13:58:29 [debug] 17311#0: *7 spdy:3 create SYN_REPLY frame 00000000025D7858: len:374
2015/03/10 13:58:29 [debug] 17311#0: *7 http cleanup add: 00000000025D7898
2015/03/10 13:58:29 [debug] 17311#0: *7 http cacheable: 0

Code added in src/http/ngx_http_upstream.c to emit the 'ngx_http_parse_time failed' debug message:

expires = ngx_http_parse_time(h->value.data, h->value.len);   /* returns NGX_ERROR here */

if (expires == NGX_ERROR || expires < ngx_time()) {
    ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                   "ngx_http_parse_time failed: h->value.data = %s, h->value.len = %d",
                   h->value.data, h->value.len);
    u->cacheable = 0;   /* sets cacheable to 0, hence the response does not get cached */
    return NGX_OK;
}

------------------------------------------------------------------------------------------------------
Logs showing that a gif image without Expires header gets cached correctly
------------------------------------------------------------------------------------------------------

2015/03/10 13:58:35 [debug] 17311#0: *7 http upstream request: "/service/zimlet/com_zimbra_email/img/EmailZimlet_busy.gif?"
2015/03/10 13:58:35 [debug] 17311#0: *7 http upstream process header 2015/03/10 13:58:35 [debug] 17311#0: *7 SSL_read: 899 2015/03/10 13:58:35 [debug] 17311#0: *7 SSL_read: -1 2015/03/10 13:58:35 [debug] 17311#0: *7 SSL_get_error: 2 2015/03/10 13:58:35 [debug] 17311#0: *7 http proxy status 200 "200 OK" 2015/03/10 13:58:35 [debug] 17311#0: *7 http proxy header: "Date: Tue, 10 Mar 2015 18:58:35 GMT" 2015/03/10 13:58:35 [debug] 17311#0: *7 http proxy header: "Content-Type: image/gif" 2015/03/10 13:58:35 [debug] 17311#0: *7 http proxy header: "Last-Modified: Tue, 03 Mar 2015 01:41:21 GMT" 2015/03/10 13:58:35 [debug] 17311#0: *7 http proxy header: "Accept-Ranges: bytes" 2015/03/10 13:58:35 [debug] 17311#0: *7 http proxy header: "Content-Length: 729" 2015/03/10 13:58:35 [debug] 17311#0: *7 http proxy header done 2015/03/10 13:58:35 [debug] 17311#0: *7 http ims:1425346881 lm:1425346881 2015/03/10 13:58:35 [debug] 17311#0: *7 http script var: "MISS" 2015/03/10 13:58:35 [debug] 17311#0: *7 spdy header filter 2015/03/10 13:58:35 [debug] 17311#0: *7 malloc: 00000000025C92B0:181 2015/03/10 13:58:35 [debug] 17311#0: *7 spdy deflate out: ni:00000000025C9365 no:000000000260ECBB ai:0 ao:31 rc:0 2015/03/10 13:58:35 [debug] 17311#0: *7 spdy:39 create SYN_REPLY frame 000000000260ECF0: len:195 2015/03/10 13:58:35 [debug] 17311#0: *7 http cleanup add: 000000000260ED30 2015/03/10 13:58:35 [debug] 17311#0: *7 spdy frame out: 000000000260ECF0 sid:39 prio:5 bl:1 len:195 2015/03/10 13:58:35 [debug] 17311#0: *7 SSL buf copy: 203 2015/03/10 13:58:35 [debug] 17311#0: *7 SSL to write: 203 2015/03/10 13:58:35 [debug] 17311#0: *7 SSL_write: 203 2015/03/10 13:58:35 [debug] 17311#0: *7 spdy:39 SYN_REPLY frame 000000000260ECF0 was sent 2015/03/10 13:58:35 [debug] 17311#0: *7 spdy frame sent: 000000000260ECF0 sid:39 bl:1 len:195 2015/03/10 13:58:35 [debug] 17311#0: *7 http file cache set header 2015/03/10 13:58:35 [debug] 17311#0: *7 http cacheable: 1 Thanks -Kunal -------------- next part 
-------------- An HTML attachment was scrubbed...
URL:

From vbart at nginx.com Tue Mar 10 20:10:09 2015
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Tue, 10 Mar 2015 23:10:09 +0300
Subject: nginx page caching not working for responses with valid (rfc 1123 compliant) Expires header
In-Reply-To: <1031883112.3420720.1426016561093.JavaMail.zimbra@zimbra.com>
References: <1031883112.3420720.1426016561093.JavaMail.zimbra@zimbra.com>
Message-ID: <6017351.utc2cBREWr@vbart-laptop>

On Tuesday 10 March 2015 14:42:41 Kunal Pariani wrote:
> Hello,
> I am on nginx-1.7.1 & trying to use nginx's page caching feature, but I run into an issue for responses with a valid 'Expires' header which seems to be in the correct RFC 1123 compliant format. Nginx somehow doesn't like it & hence doesn't cache such responses. Is this a bug?

How is RFC 1123 related to the Expires header?

[..]
> 2015/03/10 13:58:29 [debug] 17311#0: *7 http proxy header: "Expires: Thu, 9 Apr 2015 19:58:29 GMT"

It's wrong, it has only a one-digit day. See RFC 7231 for details:
https://tools.ietf.org/html/rfc7231#section-7.1.1.1

wbr, Valentin V. Bartenev

From kpariani at zimbra.com Tue Mar 10 20:34:45 2015
From: kpariani at zimbra.com (Kunal Pariani)
Date: Tue, 10 Mar 2015 15:34:45 -0500 (CDT)
Subject: nginx page caching not working for responses with valid (rfc 1123 compliant) Expires header
In-Reply-To: <6017351.utc2cBREWr@vbart-laptop>
References: <1031883112.3420720.1426016561093.JavaMail.zimbra@zimbra.com> <6017351.utc2cBREWr@vbart-laptop>
Message-ID: <1302403594.3421922.1426019685968.JavaMail.zimbra@zimbra.com>

----- Original Message -----
From: "Valentin V.
Bartenev" To: nginx at nginx.org Sent: Tuesday, March 10, 2015 1:10:09 PM Subject: Re: nginx page caching not working for responses with valid (rfc 1123 compliant) Expires header On Tuesday 10 March 2015 14:42:41 Kunal Pariani wrote: > Hello, > I am on nginx-1.7.1 & trying to use nginx's page caching feature but run into an issue for responses with a valid 'Expires' header which seem to be in the correct rfc 1123 compliant format. Nginx somehow doesn't like it & hence doesn't cache such responses. Is this a bug ? >> How is RFC 1123 related to the Expires header? My bad. I meant RFC 822 https://tools.ietf.org/html/rfc822#section-5.1 [..] > 2015/03/10 13:58:29 [debug] 17311#0: *7 http proxy header: "Expires: Thu, 9 Apr 2015 19:58:29 GMT" >> It's wrong, it has only one digit day. See RFC 7231 for details: >> https://tools.ietf.org/html/rfc7231#section-7.1.1.1 Aah didn't catch that. So looks like an issue on my upstream. Although throwing some kind of debug here (that its not rfc compliant) would have been helpful. Thanks -Kunal wbr, Valentin V. Bartenev _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Tue Mar 10 21:09:00 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 10 Mar 2015 21:09:00 +0000 Subject: [security advisory] $http_host vs $host In-Reply-To: <54FDECDC.8070600@csdoc.com> References: <54FC6293.7000602@csdoc.com> <54FC637D.3010904@csdoc.com> <20150308205047.GO3010@daoine.org> <54FDB1B5.4060609@csdoc.com> <20150309172508.GQ3010@daoine.org> <54FDECDC.8070600@csdoc.com> Message-ID: <20150310210900.GB29618@daoine.org> On Mon, Mar 09, 2015 at 08:56:28PM +0200, Gena Makhomed wrote: > On 09.03.2015 19:25, Francis Daly wrote: Hi there, thank you for the explanation. > >It is true that $http_host is completely controlled by the client, and > >$host is mostly controlled by the client. It is true that they can have > >different values. 
> >I do not see that the difference is a security issue
> >in this case.

> server {
>     listen 443 ssl;
>     server_name private.example.com;
>     location / {
>         auth_basic "closed site";
>         auth_basic_user_file conf/htpasswd;
>         proxy_set_header Host $http_host;
>         proxy_pass http://backend;
>     }
> }
>
> server {
>     listen 443 ssl;
>     server_name public.example.com;
>     location / {
>         proxy_set_header Host $http_host;
>         proxy_pass http://backend;
>     }
> }
>
> in such a configuration anybody can bypass the nginx auth_basic restriction
> and access content from private.example.com without any login/password:

You are correct.

It is possible to construct a scenario where the difference between $http_host and $host matters from a security perspective.

It is also possible to construct a scenario where the difference between $http_host and $host matters from a correctness perspective.

The "common" redmine configuration appears to be one of the latter, if nginx is not run on port 80 (such as, by using the original example config and running nginx as non-root).

If nginx runs on port 80, then everything probably Just Works.

I suspect that if the "proxy_set_header Host" is omitted entirely, then the default "proxy_redirect" probably means that everything Just Works, whatever port nginx listens on. And the current example config does just that, so it's all good.

> for fastcgi_pass such a bug can be fixed only by using two nginx
> servers - first for frontend, and second for backend,
> because nginx sends to fastcgi the value of $http_host

I think it may be possible for the fastcgi application to avoid that confusion -- nginx sends the http headers plus whatever is configured to be sent. So, for creation of "redirect" urls back to the client, HTTP_HOST may be sensible to use; but for anything where the nginx server{} chosen matters, sending $server_name or $host from nginx in a well-known fastcgi_param is probably a better option.

> >>So, $host must be used always with proxy_pass instead of $http_host.
> > > >If the upstream server would do anything security-relevant with the Host: > >header that it gets from nginx, it would do exactly the same with the > >Host: header that it would get from the client directly, no? > > No. You are correct, thanks. I had forgotten about the absolute form of request to nginx, which becomes origin form when nginx speaks to upstream. nginx will see the full request and choose the server{} block to use, and set $host and $server_name and $http_host appropriately. If upstream cares about the difference between them, then nginx must be configured to send the correct one (or none at all). nginx will send no more than one in the Host: header, while the client can send different names in Host: and in the request line. Cheers, f -- Francis Daly francis at daoine.org From gmm at csdoc.com Tue Mar 10 22:29:29 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Wed, 11 Mar 2015 00:29:29 +0200 Subject: [security advisory] $http_host vs $host In-Reply-To: <20150310210900.GB29618@daoine.org> References: <54FC6293.7000602@csdoc.com> <54FC637D.3010904@csdoc.com> <20150308205047.GO3010@daoine.org> <54FDB1B5.4060609@csdoc.com> <20150309172508.GQ3010@daoine.org> <54FDECDC.8070600@csdoc.com> <20150310210900.GB29618@daoine.org> Message-ID: <54FF7049.5060707@csdoc.com> On 10.03.2015 23:09, Francis Daly wrote: >> server { >> listen 443 ssl; >> server_name private.example.com; >> location / { >> auth_basic "closed site"; >> auth_basic_user_file conf/htpasswd; >> proxy_set_header Host $http_host; >> proxy_pass http://backend; >> } >> } >> >> server { >> listen 443 ssl; >> server_name public.example.com; >> location / { >> proxy_set_header Host $http_host; >> proxy_pass http://backend; >> } >> } >> >> in such configuration anybody can bypass nginx auth_basic restriction >> and access content from private.example.com without any login/password: > > You are correct. 
> > It is possible to construct a scenario where the difference between
> > $http_host and $host matters from a security perspective.
> >
> > It is also possible to construct a scenario where the difference between
> > $http_host and $host matters from a correctness perspective.

I don't know of situations where a system administrator needs to use "proxy_set_header Host $http_host;" instead of "proxy_set_header Host $host;", because only the $host variable is guaranteed to contain the correct host name; $http_host may be spoofed and contain anything else. So for myself I created a simple rule of nginx configuration: always use $host in the "proxy_set_header Host" directive and never use $http_host there.

> > The "common" redmine configuration appears to be one of the latter,
> > if nginx is not run on port 80 (such as, by using the original example
> > config and running nginx as non-root).
> >
> > If nginx runs on port 80, then everything probably Just Works.
> >
> > I suspect that if the "proxy_set_header Host" is omitted entirely, then
> > the default "proxy_redirect" probably means that everything Just Works,
> > whatever port nginx listens on. And the current example config does just
> > that, so it's all good.

In the redmine settings (https://redmine.example.com/settings) you can select the "Host name and path" to use, and the protocol, HTTP or HTTPS.

>> for fastcgi_pass such a bug can be fixed only by using two nginx
>> servers - first for frontend, and second for backend,
>> because nginx sends to fastcgi the value of $http_host

> I think it may be possible for the fastcgi application to avoid that
> confusion -- nginx sends the http headers plus whatever is configured to
> be sent. So, for creation of "redirect" urls back to the client,
> HTTP_HOST may be sensible to use; but for anything where the nginx
> server{} chosen matters, sending $server_name or $host from nginx in a
> well-known fastcgi_param is probably a better option.
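The $host/$http_host distinction discussed above can be sketched roughly as follows. This is a simplified model for illustration only, not nginx's actual implementation; the function name is invented and IPv6 address literals are ignored:

```python
def nginx_host(request_line_host, host_header, matched_server_name):
    """Rough model of how nginx derives $host (illustrative sketch only):
    the host from an absolute-form request line wins, then the Host
    header, then the matched server_name; the result is lowercased and
    any :port suffix is stripped. $http_host, by contrast, is the Host
    header exactly as the client sent it."""
    raw = request_line_host or host_header or matched_server_name
    return raw.rsplit(":", 1)[0].lower()

# A spoofed Host header changes $http_host, but with an absolute-form
# request line it does not change $host:
print(nginx_host("private.example.com", "public.example.com:443", "x"))
# → private.example.com
```

This is why a client can send different names in the request line and in the Host: header, while nginx only ever forwards one of them.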
Yes, I am talking about protecting a FastCGI application from the system administrator's point of view: some application, open source or closed source, whose code I cannot vouch for, and I want to keep it safe and protected from a spoofed $http_host ending up in the HTTP_HOST variable, per the FastCGI spec.

--
Best regards,
Gena

From emailgrant at gmail.com Wed Mar 11 12:09:29 2015 From: emailgrant at gmail.com (Grant) Date: Wed, 11 Mar 2015 05:09:29 -0700 Subject: gzip_types not working as expected In-Reply-To: References: Message-ID:

> gzip is not working on my piwik.js file according to Google at
> developers.google.com/speed/pagespeed/insights. It's working fine on
> my CSS file. How can I troubleshoot this?
>
> gzip on;
> gzip_disable msie6;
> gzip_types text/javascript application/x-javascript text/css text/plain;
>
> - Grant

Any help here guys?

- Grant

From nginx-forum at nginx.us Wed Mar 11 13:10:20 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 11 Mar 2015 09:10:20 -0400 Subject: gzip_types not working as expected In-Reply-To: References: Message-ID: <3c46794c9d9aea0a33904a844110c9bd.NginxMailingListEnglish@forum.nginx.org>

Use "curl -i" to see what you actually get back on a file request.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257147,257198#msg-257198

From frederik.nosi at postecom.it Wed Mar 11 13:11:09 2015 From: frederik.nosi at postecom.it (Frederik Nosi) Date: Wed, 11 Mar 2015 14:11:09 +0100 Subject: gzip_types not working as expected In-Reply-To: References: Message-ID: <55003EED.3080201@postecom.it>

Hi,

On 03/11/2015 01:09 PM, Grant wrote:
>> gzip is not working on my piwik.js file according to Google at
>> developers.google.com/speed/pagespeed/insights. It's working fine on
>> my CSS file. How can I troubleshoot this?
>> gzip on;
>> gzip_disable msie6;
>> gzip_types text/javascript application/x-javascript text/css text/plain;

You are probably missing application/javascript in your list, or the size of the javascript file in question is less than the (default?) minimum size for gzip to be applied; check the official documentation for this. Anyway, an easy way to check if you are missing a mime type in your gzip list is to open your page with firebug (or similar) enabled and check the type and size of the particular resource.

>>
>> - Grant
>
> Any help here guys?
>
> - Grant
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us Wed Mar 11 19:16:51 2015 From: nginx-forum at nginx.us (dominus.ceo) Date: Wed, 11 Mar 2015 15:16:51 -0400 Subject: Internal Server Error messages nginx proxy POP/IMAP/SMTP Message-ID:

Hi, my name is Ricardo. I'm here to ask for help about an implementation of POP3/IMAP and SMTP proxy functionality with nginx; I want to implement a "cluster" with those functionalities.

Considerations
- All nodes/machines are virtualized (VM).
- All nodes/machines are configured with 600MB of RAM.
- All nodes/machines are based on the 64-bit CentOS 7 distro.
- Nginx version included in CentOS 7: nginx-1.6.2-4.el7.x86_64

Scenario
My scenario is as follows:
- 1 server as proxy with IMAP/POP/IMAPS/POP3S/SMTP and SMTPS enabled. This will be proxy-n1.ine.mx with IP address 192.168.122.170.
- 1 server as DNS with name master.ife.org.mx. This is the DNS server for the solution; the IP address for this host is 192.168.122.85.
- 1 server as LDAP with name ldap.ife.org.mx. This is the "directory server" for my users. The IP address assigned to this host is 192.168.122.30.
- 2 mail servers with postfix configured. The name for the first node is correo-n1.ine.mx with IP address 192.168.122.98, and the name for the second node is correo-n2.ine.mx with IP address 192.168.122.78.
Both of them run postfix 2.10 and dovecot 2.2.10 with SMTP/SMTPS, POP3/POP3S and IMAP/IMAPS enabled.
- 1 client with Windows 7 Starter with Outlook. The objective of this VM is to connect to the proxy solution and verify normal functionality. (I would like to mention that this is the first phase/stage.)

Goal
- The goal of this first phase is to establish email flow with an authenticated mechanism, using one proxy server and one email server.

Done Activities
- The proxy nodes have been configured to support IMAP/POP/IMAPS/POP3S/SMTP and SMTPS. I paste the configuration for better understanding:

--------------------------------
/etc/nginx/nginx.conf
--------------------------------
user nginx;
worker_processes 1;
worker_rlimit_nofile 65535;

error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log debug;
error_log /var/log/nginx/error.log notice;
error_log /var/log/nginx/error.log info;
error_log /var/log/nginx/error.log error;

pid /run/nginx.pid;

events {
    worker_connections 10240;
    debug_connection 192.168.122.0/24;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    proxy_buffering on;
    proxy_buffer_size 8k;
    proxy_buffers 2048 8k;

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;
    #gzip on;
    index index.html index.htm;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        server_name localhost;
        root /usr/share/nginx/html;

        include /etc/nginx/default.d/*.conf;

        location / {
            index index.html index.htm index.php;
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
        }
    }
}

mail {
    server_name proxy-n1.ine.mx;

    # apache external backend
    auth_http 192.168.122.170:80/correo-proxy-auth/index.php;

    xclient on;
    proxy on;
    proxy_pass_error_message on;

    imap_auth plain login cram-md5;
    pop3_auth plain apop cram-md5;
    smtp_auth plain login cram-md5;

    imap_capabilities "IMAP4" "IMAP4rev1" "UIDPLUS" "IDLE" "LITERAL +" "QUOTA";
    pop3_capabilities "LAST" "TOP" "USER" "PIPELINING" "UIDL";
    smtp_capabilities "PIPELINING" "SIZE 10240000" "VRFY" "ETRN" "ENHANCEDSTATUSCODES" "8BITMIME" "DSN";

    ssl_session_cache shared:MAIL:10m;
    ssl_certificate /etc/nginx/ssl_keys/cert_primario.cer;
    ssl_certificate_key /etc/nginx/ssl_keys/www-key.pem;
    ssl_session_timeout 5m;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;

    server {
        listen 143;
        protocol imap;
        starttls on;
        auth_http_header X-Auth-Port 143;
        auth_http_header User-Agent "Nginx POP3/IMAP4 proxy";
    }

    server {
        protocol pop3;
        listen 110;
        starttls on;
        pop3_auth plain;
        auth_http_header X-Auth-Port 110;
        auth_http_header User-Agent "Nginx POP3/IMAP4 proxy";
    }

    server {
        listen 993;
        ssl on;
        protocol imap;
        auth_http_header X-Auth-Port 993;
        auth_http_header User-Agent "Nginx POP3/IMAP4 proxy";
    }

    server {
        protocol pop3;
        listen 995;
        ssl on;
        pop3_auth plain;
        auth_http_header X-Auth-Port 995;
        auth_http_header User-Agent "Nginx POP3/IMAP4 proxy";
    }

    server {
        listen 25;
        protocol smtp;
        auth_http_header X-Auth-Port 25;
        auth_http_header User-Agent "Nginx SMTP/SMTPS proxy";
        timeout 12000;
    }

    server {
        listen 465;
        protocol smtp;
        auth_http_header X-Auth-Port 465;
        auth_http_header User-Agent "Nginx SMTP/SMTPS proxy";
        ssl on;
    }

    server {
        listen 587;
        protocol smtp;
        auth_http_header X-Auth-Port 587;
        auth_http_header User-Agent "Nginx SMTP/SMTPS proxy";
        starttls on;
    }
}
--------------------------------
end file /etc/nginx/nginx.conf
--------------------------------

- Auth logic has been written: I wrote all the logic for the auth process; this is specified in the mail
module of nginx via:

auth_http 192.168.122.170:80/correo-proxy-auth/index.php;

--------------------------------
/usr/share/nginx/html/correo-proxy-auth/index.php
--------------------------------
The content of the index.php script is as follows:

authUser($user,$password)){
    // set message just in case the provided password or user are wrong.
    $a->setFail();
}else{
    // set the server configuration and redirect to it.
    $getMailHost = $e->getMailHost($user);
    $getProtocol = $e->getProtocol($protocol);
    $getMailServ = $e->getMailServer($user);
    #print "$getMailHost $getProtocol $getMailServ $user $password\
    $e->setStatusPass($getMailServ,$getProtocol,$user,$password);
}
}else{
    // set message just in case the provided password or login are wrong.
    $a->setFail();
}
?>
--------------------------------
end file /usr/share/nginx/html/correo-proxy-auth/index.php
--------------------------------

This script just returns the data to be passed to nginx as headers.
a) I get the mailhost from the LDAP user (mailhost: correo-n1.ine.mx): $getMailHost = $e->getMailHost($user);
b) I get the email protocol to be proxied: $getProtocol = $e->getProtocol($protocol);
c) I get the mail server assigned to my LDAP user (I get this from ldap.ife.org.mx): $getMailServ = $e->getMailServer($user); #print "$getMailHost $getProtocol $getMailServ $user $password
d) I pass the data obtained above to generate the nginx headers: $e->setStatusPass($getMailServ,$getProtocol,$user,$password);
- I have activated debugging mode in nginx but it does not work as expected, I could not

The problem
When signing in with Outlook from the Windows machine to the proxy-n1.ine.mx node, I always get a message in the logs as follows:

2015/03/11 10:59:21 [debug] 1983#0: *8 http fastcgi header: "Status: 500 Internal Server Error"

and I do not see any connections to my correo-n1.ine.mx node, only connections to the proxy-n1.ine.mx node.
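For reference, the response format nginx expects from an auth_http script is documented in the ngx_mail_auth_http_module protocol description: on success it wants Auth-Status: OK plus the backend's address and port, and on failure an error message (optionally with an Auth-Wait delay). A minimal sketch of the expected headers, written in Python rather than the poster's PHP, with invented function and parameter names:

```python
def auth_http_headers(ok, backend_ip=None, backend_port=None,
                      fail_msg="Invalid login or password", wait_seconds=3):
    """Build the headers the nginx mail proxy reads from an auth_http
    response (sketch of the documented protocol, not the poster's code)."""
    if ok:
        # Auth-Server must be an IP address, not a host name, so a
        # backend like correo-n1.ine.mx has to be resolved first.
        return {"Auth-Status": "OK",
                "Auth-Server": backend_ip,
                "Auth-Port": str(backend_port)}
    # On failure, Auth-Status carries the error text returned to the
    # client, and Auth-Wait delays the next login attempt.
    return {"Auth-Status": fail_msg, "Auth-Wait": str(wait_seconds)}

print(auth_http_headers(True, "192.168.122.98", 143))
```

The "did not send server or port" error quoted below is what nginx logs when these headers are missing, which is exactly what happens when the script dies with a 500 and returns no Auth-* headers at all.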
I have searched on the web and not many solutions are provided; the few solutions found are related to the "auth process problem" and that's it. Today I found that the "Status: 500 Internal Server Error" can be generated for the following causes:
1. Hard disk space is full
2. Nginx configuration file errors (tuning: open files, limits.conf etc., concurrency settings, etc.)
3. Auth process (own auth module)

Other entries that I see in my logs are as follows:

a) Resource temporarily unavailable

2015/03/11 10:59:21 [debug] 1983#0: *8 recv() not ready (11: Resource temporarily unavailable)
2015/03/11 10:59:21 [debug] 1983#0: *8 recv() not ready (11: Resource temporarily unavailable)
2015/03/11 10:59:21 [debug] 1983#0: *8 recv() not ready (11: Resource temporarily unavailable)
2015/03/11 10:59:21 [debug] 1983#0: *8 recv() not ready (11: Resource temporarily unavailable)

I guess those debug messages refer to whether I have a load balancing configuration or something like that.

b) auth http server :80 did not send server or port while in http auth state, client: , server: :25, login: ""

2015/03/11 09:38:49 [error] 3399#0: *30 auth http server 192.168.122.170:80 did not send server or port while in http auth state, client: 192.168.122.1, server: 0.0.0.0:25, login: "ricardo.carrillo"
2015/03/11 09:38:49 [error] 3399#0: *30 auth http server 192.168.122.170:80 did not send server or port while in http auth state, client: 192.168.122.1, server: 0.0.0.0:25, login: "ricardo.carrillo"

According to the "Mastering Nginx" book by Dimitri Aivaliotis, this error is caused when "the authentication query is not successfully answered for any reason" (page 62). I quote a paragraph from the book:

"If the authentication query is not successfully answered for any reason, the connection is terminated. NGINX doesn't know to which upstream the client should be proxied, and thereby closes the connection with an Internal server error with the protocol-specific response code."
But it does not offer any solution or clue to solve that. For all the above, I ask for your help; I have already searched and spent a lot of time on the problem, but I could not get my email solution to work. Could you help me to solve this problem?

Regards,
Ricardo Carrillo.

P.D.: Sorry for the format, but the forum system does not support HTML or any post formatting.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257206,257206#msg-257206

From francis at daoine.org Wed Mar 11 22:43:17 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 11 Mar 2015 22:43:17 +0000 Subject: Internal Server Error messages nginx proxy POP/IMAP/SMTP In-Reply-To: References: Message-ID: <20150311224317.GC29618@daoine.org>

On Wed, Mar 11, 2015 at 03:16:51PM -0400, dominus.ceo wrote:

Hi there,

It sounds like one problem you have is the auth_http request not getting the expected response. See the example at http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html#protocol, and try making the request manually and seeing what exact response comes back.

> auth_http 192.168.122.170:80/correo-proxy-auth/index.php;

What response do you get from this?

curl -H "Auth-Method: plain" \
  -H "Auth-User: ricardo.carrillo" \
  -H "Auth-Pass: r3dh4t" \
  -H "Auth-Protocol: imap" \
  -H "Auth-Login-Attempt: 1" \
  -i http://192.168.122.170:80/correo-proxy-auth/index.php

Add whatever other header name/value pairs you need for one successful login. Until that replies with the expected response, none of your mail side of things will work.
f
--
Francis Daly        francis at daoine.org

From nginx-forum at nginx.us Wed Mar 11 23:24:17 2015 From: nginx-forum at nginx.us (dominus.ceo) Date: Wed, 11 Mar 2015 19:24:17 -0400 Subject: Internal Server Error messages nginx proxy POP/IMAP/SMTP In-Reply-To: <20150311224317.GC29618@daoine.org> References: <20150311224317.GC29618@daoine.org> Message-ID: <8be09d76563ea794d5543cafcdb61a8d.NginxMailingListEnglish@forum.nginx.org>

Actually I already did that, and I got the following answer:

[root at proxy-n1 ~]# curl -H "Auth-Method: plain" \
> -H "Auth-User: ricardo.carrillo" \
> -H "Auth-Pass: r3dh4t" \
> -H "Auth-Protocol: imap" \
> -H "Auth-Login-Attempt: 1" \
> -i http://192.168.122.170:80/correo-proxy-auth/index.php
HTTP/1.1 500 Internal Server Error
Server: nginx/1.6.2
Date: Wed, 11 Mar 2015 23:18:54 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive
X-Powered-By: PHP/5.4.16

but the error logs are not very descriptive.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257206,257208#msg-257208

From francis at daoine.org Thu Mar 12 08:08:18 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 12 Mar 2015 08:08:18 +0000 Subject: Internal Server Error messages nginx proxy POP/IMAP/SMTP In-Reply-To: <8be09d76563ea794d5543cafcdb61a8d.NginxMailingListEnglish@forum.nginx.org> References: <20150311224317.GC29618@daoine.org> <8be09d76563ea794d5543cafcdb61a8d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150312080818.GD29618@daoine.org>

On Wed, Mar 11, 2015 at 07:24:17PM -0400, dominus.ceo wrote:

Hi there,

> Actually I already did that, and I got the following answer:
>
> [root at proxy-n1 ~]# curl -H "Auth-Method: plain" \
> > -H "Auth-User: ricardo.carrillo" \
> > -H "Auth-Pass: r3dh4t" \
> > -H "Auth-Protocol: imap" \
> > -H "Auth-Login-Attempt: 1" \
> > -i http://192.168.122.170:80/correo-proxy-auth/index.php
> HTTP/1.1 500 Internal Server Error
> Server: nginx/1.6.2
> Date: Wed, 11 Mar 2015 23:18:54 GMT
> Content-Type: text/html
> Transfer-Encoding: chunked
> Connection: keep-alive
> X-Powered-By: PHP/5.4.16
>
> but the error logs are not very descriptive.

That says that nginx is sending the request to php, which is a good start.

Temporarily replace the index.php file with one which just does print_r($_SERVER) and see what it shows when you make the same manual request. Your objective is to see that the nginx/php integration is working.

If that request does give a sensible response, then you will want to look more closely at your original index.php. If it does not give a sensible response, then look more closely at what nginx sends to the php (fastcgi) server.

f
--
Francis Daly        francis at daoine.org

From nginx-forum at nginx.us Thu Mar 12 10:44:59 2015 From: nginx-forum at nginx.us (AmitChauhan) Date: Thu, 12 Mar 2015 06:44:59 -0400 Subject: Echo sub request not working Message-ID: <19c0d734ef2ca325a66ba12acdd6b433.NginxMailingListEnglish@forum.nginx.org>

Hi,

I am new to Nginx, so I know very little about it. My requirement is that I need to make a service call, but along with proxy passing that request, I need to make a call to another service/servlet. For this I decided to use the Echo module with its echo_subrequest_async option. However the subrequest is not working, and even a simple echo print is not working. Where is echo "hello" supposed to go? To the client side (my guess)?

Below is my configuration (some notes):
1. Both servers are Java servers which are running on my local box.
2. My OS is Mac OS X version 10.9.5.
3. I compiled nginx 1.6.2 from source with the pcre and http echo modules:

sudo /usr/local/nginx/sbin/nginx -V
nginx version: nginx/1.6.2
built by clang 5.1 (clang-503.0.40) (based on LLVM 3.4svn)
configure arguments: --with-pcre=/Users/amchauhan/project/pcre-8.35 --add-module=/Users/amchauhan/project/echo-nginx-module-0.57 --with-cc-opt=-Wno-deprecated-declarations

nginx config
----------------
events {
    worker_connections 1024;
}
http {
    server {
        listen 80;
        server_name localhost;
        location /mywebapp {
            default_type "text/plain";
            echo "in webapp call, should not happen";
            echo_flush;
            proxy_pass http://localhost:8080/mywebapp/;
        }
        location /messaging-service {
            default_type "text/plain";
            echo "in the messaging service call";
            echo_flush;
            echo_subrequest_async GET '/mywebapp/second';
            proxy_pass http://localhost:8090/messaging-service/;
        }
        location /mywebapp/second {
            echo "Going to keep sesson alive !!";
            echo_flush;
            proxy_pass http://localhost:8080/mywebapp/second/;
        }
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257211,257211#msg-257211

From nginx-forum at nginx.us Thu Mar 12 10:52:54 2015 From: nginx-forum at nginx.us (AmitChauhan) Date: Thu, 12 Mar 2015 06:52:54 -0400 Subject: Echo sub request not working In-Reply-To: <19c0d734ef2ca325a66ba12acdd6b433.NginxMailingListEnglish@forum.nginx.org> References: <19c0d734ef2ca325a66ba12acdd6b433.NginxMailingListEnglish@forum.nginx.org> Message-ID: <72c9f0c1ee8de14ce99d28fc3da319bf.NginxMailingListEnglish@forum.nginx.org>

When I say both servers are running on my local machine, I mean servers running on localhost:8080 and localhost:8090. Both are Java Tomcat servers.
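One thing worth checking here, offered as a reading of the echo module's documentation rather than a confirmed diagnosis: plain echo is a content handler, just as proxy_pass is, so a location can only use one of them to produce its response; the filter-phase directives echo_before_body and echo_after_body are the ones designed to combine with another content handler such as proxy_pass. A hedged sketch of that variant for the first location:

```nginx
location /mywebapp {
    default_type "text/plain";
    # filter-phase echo directives can wrap a proxied response,
    # where a plain "echo" would compete with proxy_pass for the
    # content phase
    echo_before_body "in webapp call";
    proxy_pass http://localhost:8080/mywebapp/;
}
```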
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257211,257213#msg-257213

From dp at nginx.com Thu Mar 12 11:08:16 2015 From: dp at nginx.com (Dmitry Pryadko) Date: Thu, 12 Mar 2015 14:08:16 +0300 Subject: map with two variables In-Reply-To: <8135cefd620b6c1d8d710f2caee8ae60@ssl.scheff32.de> References: <8135cefd620b6c1d8d710f2caee8ae60@ssl.scheff32.de> Message-ID: <550173A0.6070704@nginx.com>

It would be easier to define server_name with a regexp and use its match groups:

server {
    server_name ~ ^(?<a>\w+)\.(?<b>\w+)\.(?<c>\w+)$;
    set $graphite_host "${a}_${b}_${c}";
}

09.03.15 18:36, Matthias Rieber wrote:
> Hi,
>
> I'd like to set a variable to the value of $host where the dots are
> replaced by underscores. My first idea:
>
> map $host $graphite_host {
>     "~(?P<a>[^.]*)\.(?P<b>[^.]*)\.(?P<c>[^.]*)" $a_$b_$c;
> }
>
> But I can't use more than one variable in the result. $a or $b would
> work, but not $a_$b or $a$b. I always get an error like:
> nginx: [emerg] unknown "a$b" variable. Is that intentional? Is there any
> other way to replace the . by _?
> > # nginx -V > nginx version: nginx/1.7.10 > built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) > TLS SNI support enabled > configure arguments: --with-http_stub_status_module > --with-http_ssl_module --prefix=/usr/local > --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid > --http-log-path=/var/log/nginx/access.log > --error-log-path=/var/log/nginx/error.log > --add-module=/usr/local/src/ngx_devel_kit > --add-module=/usr/local/src/lua-nginx-module > --add-module=/usr/local/src/headers-more-nginx-module/ > --add-module=/usr/local/src/nginx-statsd --with-http_spdy_module > --with-http_sub_module > > Regards, > Matthias > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- br, Dmitry Pryadko From max at edx.org Thu Mar 12 15:01:46 2015 From: max at edx.org (Max Rothman) Date: Thu, 12 Mar 2015 11:01:46 -0400 Subject: Verify Content-Length matches request body Message-ID: Hi, Is there a way for nginx to verify that the Content-Length header isn't exceeded by the actual size of the request body? Context: I'm working on an upload endpoint with a maximum upload size, and it seems that client_max_body_size only checks the Content-Length header, not the actual body. Additionally, from my testing it appears that nginx accepts the entire request body regardless of what the Content-Length is set to. I want to be able to defend against a potential slowloris-style attack where all of my workers could get tied up with overly-large uploads. Thanks, Max Rothman -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Mar 12 15:15:24 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 12 Mar 2015 18:15:24 +0300 Subject: Verify Content-Length matches request body In-Reply-To: References: Message-ID: <20150312151524.GO88631@mdounin.ru> Hello! 
On Thu, Mar 12, 2015 at 11:01:46AM -0400, Max Rothman wrote: > Is there a way for nginx to verify that the Content-Length header isn't > exceeded by the actual size of the request body? This can't happen. Anything after the Content-Length is a next request. > Context: I'm working on an upload endpoint with a maximum upload size, and > it seems that client_max_body_size only checks the Content-Length header, > not the actual body. Additionally, from my testing it appears that > nginx accepts > the entire request body regardless of what the Content-Length is set to. I > want to be able to defend against a potential slowloris-style attack where > all of my workers could get tied up with overly-large uploads. After the body is read, nginx will either read the next request (if allowed as per keepalive_timeout/keepalive_requests, as well as internal state), or will close the connection. When closing the connection it will use lingering_timeout / lingering_time settings to read and discard additional data (if any), if allowed by the lingering_close directive, see http://nginx.org/r/lingering_close for details. -- Maxim Dounin http://nginx.org/ From max at edx.org Thu Mar 12 15:17:32 2015 From: max at edx.org (Max Rothman) Date: Thu, 12 Mar 2015 11:17:32 -0400 Subject: Verify Content-Length matches request body In-Reply-To: <20150312151524.GO88631@mdounin.ru> References: <20150312151524.GO88631@mdounin.ru> Message-ID: Thank you! That makes a lot of sense. On Thu, Mar 12, 2015 at 11:15 AM, Maxim Dounin wrote: > Hello! > > On Thu, Mar 12, 2015 at 11:01:46AM -0400, Max Rothman wrote: > > > Is there a way for nginx to verify that the Content-Length header isn't > > exceeded by the actual size of the request body? > > This can't happen. Anything after the Content-Length is a next > request. > > > Context: I'm working on an upload endpoint with a maximum upload size, > and > > it seems that client_max_body_size only checks the Content-Length header, > > not the actual body. 
> > Additionally, from my testing it appears that nginx accepts
> > the entire request body regardless of what the Content-Length is set to. I
> > want to be able to defend against a potential slowloris-style attack where
> > all of my workers could get tied up with overly-large uploads.
>
> After the body is read, nginx will either read the next request
> (if allowed as per keepalive_timeout/keepalive_requests, as well
> as internal state), or will close the connection. When closing
> the connection it will use lingering_timeout / lingering_time
> settings to read and discard additional data (if any), if allowed
> by the lingering_close directive, see
> http://nginx.org/r/lingering_close for details.
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fabian.sales at donweb.com Thu Mar 12 16:24:02 2015 From: fabian.sales at donweb.com (=?ISO-8859-1?Q?Fabi=E1n_M_Sales?=) Date: Thu, 12 Mar 2015 13:24:02 -0300 Subject: NGINX and mod_log_sql In-Reply-To: <54FF20DF.8020005@mostertman.org> References: <54F74C8B.80007@donweb.com> <54FF2047.9000107@donweb.com> <54FF20DF.8020005@mostertman.org> Message-ID: <5501BDA2.5000006@donweb.com>

Thanks for your reply. I solved this issue with this URL: https://github.com/tommybotten/mod_log_sql/commit/23e52d480edcd8406f0338c6a64167a3298985d1

Regards.
Fabián M. Sales.

On 10/03/15 13:50, Daniël Mostertman wrote:
> Hi Fabián,
>
> You most likely put nginx in front of Apache.
> If that's the case, then chances are that you see the address in your
> logs that nginx contacts Apache from, instead of the user connecting
> to nginx.
>
> You might want to look into passing the IP of the visitor to your
> backend (Apache).
> A good example can be found easily with a search engine, like this: > > http://www.daveperrett.com/articles/2009/08/10/passing-ips-to-apache-with-nginx-proxy/ > > Hope this helps, > > Daniël > > Fabián M Sales wrote on 10-3-2015 at 17:48: >> Any idea? >> >> Thanks. >> On 04/03/15 15:18, Fabián M Sales wrote: >>> Hello List. >>> >>> I use mod_log_sql-1.10 compiled into Apache/2.4.7 and it writes >>> correctly to MySQL. >>> >>> With the nginx web server, the IP written to MySQL is the IP of the >>> webserver and not the IP of the client accessing the website. >>> >>> Is it still possible to write the client IP rather than the nginx >>> webserver's IP? -- *Fabián* *M. Sales* Soporte Técnico & I.T.I Linux *DonWeb* La Actitud Es Todo www.DonWeb.com ------------------------------------------------------------------------ Confidentiality Note: This message and any attachments (the message) are confidential and intended solely for the addressees. Any unauthorised use or dissemination is prohibited by DonWeb.com.
DonWeb.com shall not be liable for the message if altered or falsified. If you are not the intended addressee of this message, please cancel it immediately and inform the sender. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: gráficos1 Type: image/png Size: 6502 bytes Desc: not available URL: From janet at wifispark.com Thu Mar 12 16:25:08 2015 From: janet at wifispark.com (Janet Valbuena) Date: Thu, 12 Mar 2015 16:25:08 +0000 Subject: 3:unable to get certificate CRL Message-ID: Hi Nginx Team, I'm having problems configuring NGINX to use a CRL. I've created the CRL using OpenSSL 0.9.8e and my Nginx version is 1.4.1. I'm using a self-signed certificate and an intermediate certificate. The SSL lines in my config are: server { > listen 10446 ssl; > > ssl_session_cache shared:SSL:10m; > ssl_session_timeout 10m; > ssl_prefer_server_ciphers on; > > ssl_certificate /etc/nginx/ssl/star_net.crt; > ssl_certificate_key /etc/nginx/ssl/star_net.key; > > ssl_client_certificate /etc/certs/ca-chain.cert.pem; > > ssl_crl /etc/certs/crl.cert.pem; > > ssl_verify_client on; > ssl_verify_depth 2; > > If I comment out the ssl_crl line, I don't get any errors. However, as soon as I uncomment it I get this error: ..... client SSL certificate verify error: (3:unable to get certificate > CRL) while reading client request headers, client: .... > I can't see what is wrong in my config.
Help please. Thanks very much Janet -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Mar 12 17:03:26 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 12 Mar 2015 20:03:26 +0300 Subject: 3:unable to get certificate CRL In-Reply-To: References: Message-ID: <20150312170326.GR88631@mdounin.ru> Hello! On Thu, Mar 12, 2015 at 04:25:08PM +0000, Janet Valbuena wrote: > Hi Nginx Team > > I'm having problems configuring NGINX to use a CRL. > > I've created the CRL using OpenSSL 0.9.8e and my Nginx version is 1.4.1. > > I'm using a self-signed certificate and an intermediate certificate. > > The lines for the SSL in my config are: > > server { > > listen 10446 ssl; > > > > ssl_session_cache shared:SSL:10m; > > ssl_session_timeout 10m; > > ssl_prefer_server_ciphers on; > > > > ssl_certificate /etc/nginx/ssl/star_net.crt; > > ssl_certificate_key /etc/nginx/ssl/star_net.key; > > > > ssl_client_certificate /etc/certs/ca-chain.cert.pem; > > > > ssl_crl /etc/certs/crl.cert.pem; > > > > ssl_verify_client on; > > ssl_verify_depth 2; > > > > > If I comment the ssl_crl line, I don't get any errors. > > However as soon as I uncomment it I get this error: > > ..... client SSL certificate verify error: (3:unable to get certificate > > CRL) while reading client request headers, client: .... > > > > I can't see what is wrong in my config. Help please. The error suggests that you don't have a CRL for at least one of the certificates in the chain. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Mar 12 17:28:35 2015 From: nginx-forum at nginx.us (nathanmesser) Date: Thu, 12 Mar 2015 13:28:35 -0400 Subject: Validating client certificate against CRL In-Reply-To: <20141211193318.GY45960@mdounin.ru> References: <20141211193318.GY45960@mdounin.ru> Message-ID: We're in a similar situation, but with many intermediate CAs and root CAs for all the possible client certificates we accept.
We have all of these concatenated into a single file for the ssl_client_certificate directive. We have CRLs for some of these and not for others. Is there any way we can configure nginx so it will honour the ones we have, without requiring us to have a CRL for all of them? We've tried combining the ones we have into a single file, and using that in the ssl_crl directive, but it still gives us a 400 Bad Request error. With Apache we were able to specify the directory they are all in, and have it process the ones we have. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255448,257227#msg-257227 From nginx-forum at nginx.us Thu Mar 12 19:03:45 2015 From: nginx-forum at nginx.us (dominus.ceo) Date: Thu, 12 Mar 2015 15:03:45 -0400 Subject: Internal Server Error messages nginx proxy POP/IMAP/SMTP In-Reply-To: <20150312080818.GD29618@daoine.org> References: <20150312080818.GD29618@daoine.org> Message-ID: <5120a12860a01d9a49fe2cb8d71aac9a.NginxMailingListEnglish@forum.nginx.org> Thanks in advance. The PHP/nginx integration already worked fine, but your idea is good for seeing what else is obtained with the manual request. I put the result of the execution below: 168.122.170:80/correo-proxy-auth/auth.php * About to connect() to 192.168.122.170 port 80 (#0) * Trying 192.168.122.170...
* Connected to 192.168.122.170 (192.168.122.170) port 80 (#0) > GET /correo-proxy-auth/auth.php HTTP/1.1 > User-Agent: curl/7.29.0 > Accept: */* > Host:192.168.122.170 > Auth-Method:plain > Auth-User:ricardo.carrillo > Auth-pass:r3dh4t > Auth-Protocol:imap > Auth-Login-Attempt:1 > Client-IP: 192.168.122.1 > < HTTP/1.1 200 OK < Server: nginx/1.6.2 < Date: Thu, 12 Mar 2015 15:46:11 GMT < Content-Type: text/html < Transfer-Encoding: chunked < Connection: keep-alive < X-Powered-By: PHP/5.4.16 < Array ( [USER] => apache [HOME] => /usr/share/httpd [FCGI_ROLE] => RESPONDER [PATH_TRANSLATED] => /usr/share/nginx/html/correo-proxy-auth/auth.php [QUERY_STRING] => [REQUEST_METHOD] => GET [CONTENT_TYPE] => [CONTENT_LENGTH] => [SCRIPT_NAME] => /correo-proxy-auth/auth.php [REQUEST_URI] => /correo-proxy-auth/auth.php [DOCUMENT_URI] => /correo-proxy-auth/auth.php [DOCUMENT_ROOT] => /usr/share/nginx/html [SERVER_PROTOCOL] => HTTP/1.1 [GATEWAY_INTERFACE] => CGI/1.1 [SERVER_SOFTWARE] => nginx/1.6.2 [REMOTE_ADDR] => 192.168.122.170 [REMOTE_PORT] => 39783 [SERVER_ADDR] => 192.168.122.170 [SERVER_PORT] => 80 [SERVER_NAME] => localhost [REDIRECT_STATUS] => 200 [HTTP_USER_AGENT] => curl/7.29.0 [HTTP_ACCEPT] => */* [HTTP_HOST] => 192.168.122.170 [HTTP_AUTH_METHOD] => plain [HTTP_AUTH_USER] => ricardo.carrillo [HTTP_AUTH_PASS] => r3dh4t [HTTP_AUTH_PROTOCOL] => imap [HTTP_AUTH_LOGIN_ATTEMPT] => 1 [HTTP_CLIENT_IP] => 192.168.122.1 [PHP_SELF] => /correo-proxy-auth/auth.php [REQUEST_TIME_FLOAT] => 1426175171.8848 [REQUEST_TIME] => 1426175171 ) * Connection #0 to host 192.168.122.170 left intact The response is OK, so the problem is certainly in the auth logic.
(the index.php script) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257206,257228#msg-257228 From nginx-forum at nginx.us Thu Mar 12 23:25:09 2015 From: nginx-forum at nginx.us (dominus.ceo) Date: Thu, 12 Mar 2015 19:25:09 -0400 Subject: Internal Server Error messages nginx proxy POP/IMAP/SMTP In-Reply-To: References: Message-ID: Hi there, I have decided to remove the authentication part from the email auth process and first make the simplest configuration, so I have configured nginx based on the examples provided on the nginx wiki page and just modified it with an IP address and hostname to redirect my users to my backend mail servers. Unfortunately, I got another 2 error messages: upstream timed out (110: Connection timed out) while connecting to upstream, client: 192.168.122.1, server: 0.0.0.0:143, login: "ricardo.carrillo", upstream: 192.168.192.78:143 and client was rejected: "MAIL FROM: __" while in auth state, client: 192.168.122.1, server: 0.0.0.0:25 Does anybody know what those errors mean? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257206,257230#msg-257230 From francis at daoine.org Fri Mar 13 00:38:35 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 13 Mar 2015 00:38:35 +0000 Subject: Internal Server Error messages nginx proxy POP/IMAP/SMTP In-Reply-To: References: Message-ID: <20150313003835.GE29618@daoine.org> On Thu, Mar 12, 2015 at 07:25:09PM -0400, dominus.ceo wrote: > upstream timed out (110: Connection timed out) while connecting to > upstream, client: 192.168.122.1, server: 0.0.0.0:143, login: > "ricardo.carrillo", upstream: 192.168.192.78:143 Your client connected to nginx port 143 (for imap). nginx tried to connect to 192.168.192.78 port 143. nginx got no response. Your original mail suggested that there was an imap server on 192.168.122.78, not 192.168.192.78. > client was rejected: "MAIL FROM: __" while in auth > state, client: 192.168.122.1, server: 0.0.0.0:25 I do not know about this part.
The code seems to suggest that that means that one part of the system expects authentication but another part does not. What is the response to the manual auth_http request in this case? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Mar 13 01:54:04 2015 From: nginx-forum at nginx.us (dominus.ceo) Date: Thu, 12 Mar 2015 21:54:04 -0400 Subject: Internal Server Error messages nginx proxy POP/IMAP/SMTP In-Reply-To: <20150313003835.GE29618@daoine.org> References: <20150313003835.GE29618@daoine.org> Message-ID: <861b66b359335353e1c1bf136a5f6be0.NginxMailingListEnglish@forum.nginx.org> Sorry, my mistake; all the IPs are in the 192.168.122.0/255.255.255.0 network. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257206,257233#msg-257233 From nginx-forum at nginx.us Fri Mar 13 02:07:14 2015 From: nginx-forum at nginx.us (dominus.ceo) Date: Thu, 12 Mar 2015 22:07:14 -0400 Subject: Internal Server Error messages nginx proxy POP/IMAP/SMTP In-Reply-To: <20150313003835.GE29618@daoine.org> References: <20150313003835.GE29618@daoine.org> Message-ID: <03e27c611876bcd2cf784e1873ae716e.NginxMailingListEnglish@forum.nginx.org> I changed the IP address in my PHP script and I think we are making progress; now I'm seeing the following error in my logs: *6 upstream sent invalid response: "550 5.7.0 Error: insufficient authorization" while reading response from upstream, client: 192.168.122.1 using starttls, server: 0.0.0.0:587, login: "ricardo.carrillo", upstream: 192.168.122.78:25 2015/03/12 21:02:18 [info] 2375#0: *6 upstream sent invalid response: "550 5.7.0 Error: insufficient authorization" while reading response from upstream, client: 192.168.122.1 using starttls, server: 0.0.0.0:587, login: "ricardo.carrillo", upstream: 192.168.122.78:25 The response for the manual auth_http request is: [root at proxy-n1 ~]# curl -v -H "Host:192.168.122.170" -H "Auth-Method:plain" -H "Auth-User:ricardo.carrillo" -H "Auth-pass:r3dh4t" -H
"Auth-Protocol:imap" -H "Auth-Login-Attempt:1" -H "Client-IP: 192.168.122.1" http://192.168.122.170:80/correo-proxy-auth/auth2.php * About to connect() to 192.168.122.170 port 80 (#0) * Trying 192.168.122.170... * Connected to 192.168.122.170 (192.168.122.170) port 80 (#0) > GET /correo-proxy-auth/auth2.php HTTP/1.1 > User-Agent: curl/7.29.0 > Accept: */* > Host:192.168.122.170 > Auth-Method:plain > Auth-User:ricardo.carrillo > Auth-pass:r3dh4t > Auth-Protocol:imap > Auth-Login-Attempt:1 > Client-IP: 192.168.122.1 > < HTTP/1.1 200 OK < Server: nginx/1.6.2 < Date: Fri, 13 Mar 2015 02:06:37 GMT < Content-Type: text/html < Transfer-Encoding: chunked < Connection: keep-alive < X-Powered-By: PHP/5.4.16 < Auth-Status: OK < Auth-Server: 192.168.122.78 < Auth-Port: 143 < * Connection #0 to host 192.168.122.170 left intact Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257206,257234#msg-257234 From nginx-forum at nginx.us Fri Mar 13 08:36:37 2015 From: nginx-forum at nginx.us (arty777) Date: Fri, 13 Mar 2015 04:36:37 -0400 Subject: Hide static file from sniffer Message-ID: <8d79c4f53b5cc58fac3c3aca37b62def.NginxMailingListEnglish@forum.nginx.org> Hi, how is it possible with nginx to hide a static file from a simple sniffer (like Chrome)? The directive add_header X-Content-Type-Options nosniff; doesn't help me. Please help with a solution. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257236,257236#msg-257236 From nginx-forum at nginx.us Fri Mar 13 08:56:28 2015 From: nginx-forum at nginx.us (AmitChauhan) Date: Fri, 13 Mar 2015 04:56:28 -0400 Subject: Echo sub request not working In-Reply-To: <19c0d734ef2ca325a66ba12acdd6b433.NginxMailingListEnglish@forum.nginx.org> References: <19c0d734ef2ca325a66ba12acdd6b433.NginxMailingListEnglish@forum.nginx.org> Message-ID: Can anyone help me with this?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257211,257237#msg-257237 From nginx-forum at nginx.us Fri Mar 13 10:16:59 2015 From: nginx-forum at nginx.us (AmitChauhan) Date: Fri, 13 Mar 2015 06:16:59 -0400 Subject: Multiple proxy_pass destinations from a single location block In-Reply-To: References: Message-ID: When you say "but ALSO set up another proxy_pass from this location block to say bar.mydomain.com", do you mean the request should go to both, i.e. foo.mydomain and bar.mydomain? If that's the case then upstream will not work, because it will actually load balance the requests between the 2 servers. In fact, I am also looking for a solution where I need proxy_pass to send the request to the destined server but ALSO send it to another server (in an async manner). I tried using the echo module with echo_subrequest_async but it did not work. So I am not sure what's the best way to do that in nginx. Check this: http://forum.nginx.org/read.php?2,257211,257211 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256437,257238#msg-257238 From nginx-forum at nginx.us Fri Mar 13 11:01:40 2015 From: nginx-forum at nginx.us (dominus.ceo) Date: Fri, 13 Mar 2015 07:01:40 -0400 Subject: Does anybody know what does "Undefined index: HTTP_X_AUTH_PORT " mean? Message-ID: <834d55f16a465a8149e3f14a4eb1bcf0.NginxMailingListEnglish@forum.nginx.org> Does anybody know what "Undefined index: HTTP_X_AUTH_PORT" means? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257239,257239#msg-257239 From me at myconan.net Fri Mar 13 11:05:07 2015 From: me at myconan.net (Edho Arief) Date: Fri, 13 Mar 2015 20:05:07 +0900 Subject: Does anybody know what does "Undefined index: HTTP_X_AUTH_PORT " mean?
In-Reply-To: <834d55f16a465a8149e3f14a4eb1bcf0.NginxMailingListEnglish@forum.nginx.org> References: <834d55f16a465a8149e3f14a4eb1bcf0.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Fri, Mar 13, 2015 at 8:01 PM, dominus.ceo wrote: > Does anybody know what does "Undefined index: HTTP_X_AUTH_PORT " mean? > It means your PHP application is incorrectly written, or you missed some nginx configuration for that specific application. (It's a PHP error message.) From sebastian at der-wastl.de Fri Mar 13 12:23:12 2015 From: sebastian at der-wastl.de (Sebastian Schwaiger) Date: Fri, 13 Mar 2015 13:23:12 +0100 Subject: nginx + SabreDAV: Error 405 when accessing location via WebDAV, SabreDAV web interface works without problems Message-ID: <004101d05d88$7a8fbc40$6faf34c0$@der-wastl.de> Dear nginx team, I'm trying to set up a WebDAV server using SabreDAV. I successfully set up the server with Apache and now I want to achieve the same result with nginx. Currently, the SabreDAV web GUI works without problems, but access via a WebDAV client is not possible (I'm using CarotDAV). Error 405 is always returned. Below I post my configuration and the debug trace of a request. As no fastcgi messages appear, it looks to me as if the problem is caused by nginx, not a misconfigured php-fpm interpreter.
server { listen 80; location / { try_files $uri $uri/ /index.php; } location /TENANT_ID/webdav/ { try_files $uri $uri/ /TENANT_ID/webdav/index.php?$1; } location ~* (index|fileViewer)\.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param HTTP_AUTHORIZATION $http_authorization if_not_empty; } } 2015/03/13 12:06:44 [debug] 111#0: *513 http request line: "PROPFIND /TENANT_ID/webdav/ HTTP/1.0" 2015/03/13 12:06:44 [debug] 111#0: *513 http uri: "/TENANT_ID/webdav/" 2015/03/13 12:06:44 [debug] 111#0: *513 http args: "" 2015/03/13 12:06:44 [debug] 111#0: *513 http exten: "" 2015/03/13 12:06:44 [debug] 111#0: *513 posix_memalign: 00000000022D4600:4096 @16 2015/03/13 12:06:44 [debug] 111#0: *513 http process request header line 2015/03/13 12:06:44 [debug] 111#0: *513 http header: "X-Forwarded-Proto: https" 2015/03/13 12:06:44 [debug] 111#0: *513 http header: "Connection: close" 2015/03/13 12:06:44 [debug] 111#0: *513 http header: "Content-Length: 0" 2015/03/13 12:06:44 [debug] 111#0: *513 http header: "User-Agent: Rei.Fs.WebDAV/1.11.9" 2015/03/13 12:06:44 [debug] 111#0: *513 http header: "Accept-Encoding: deflate, gzip" 2015/03/13 12:06:44 [debug] 111#0: *513 http header: "Depth: 1" 2015/03/13 12:06:44 [debug] 111#0: *513 http header done 2015/03/13 12:06:44 [debug] 111#0: *513 event timer del: 3: 1426248464611 2015/03/13 12:06:44 [debug] 111#0: *513 generic phase: 0 2015/03/13 12:06:44 [debug] 111#0: *513 rewrite phase: 1 2015/03/13 12:06:44 [debug] 111#0: *513 test location: "/" 2015/03/13 12:06:44 [debug] 111#0: *513 test location: "TENANT_ID/webdav/" 2015/03/13 12:06:44 [debug] 111#0: *513 test location: ~ "(index|fileViewer)\.php$" 2015/03/13 12:06:44 [debug] 111#0: *513 using configuration "/TENANT_ID/webdav/" 2015/03/13 12:06:44 [debug] 111#0: *513 http cl:0 max:210763776 2015/03/13 12:06:44 [debug] 111#0: *513 rewrite phase: 3 2015/03/13 12:06:44 [debug] 111#0: *513 
post rewrite phase: 4 2015/03/13 12:06:44 [debug] 111#0: *513 generic phase: 5 2015/03/13 12:06:44 [debug] 111#0: *513 generic phase: 6 2015/03/13 12:06:44 [debug] 111#0: *513 generic phase: 7 2015/03/13 12:06:44 [debug] 111#0: *513 access phase: 8 2015/03/13 12:06:44 [debug] 111#0: *513 access phase: 9 2015/03/13 12:06:44 [debug] 111#0: *513 post access phase: 10 2015/03/13 12:06:44 [debug] 111#0: *513 try files phase: 11 2015/03/13 12:06:44 [debug] 111#0: *513 http script var: "/TENANT_ID/webdav/" 2015/03/13 12:06:44 [debug] 111#0: *513 trying to use file: "/TENANT_ID/webdav/" "/srv/data/TENANT_ID/webdav/" 2015/03/13 12:06:44 [debug] 111#0: *513 http script var: "/TENANT_ID/webdav/" 2015/03/13 12:06:44 [debug] 111#0: *513 trying to use dir: "/TENANT_ID/webdav/" "/srv/data/TENANT_ID/webdav/" 2015/03/13 12:06:44 [debug] 111#0: *513 try file uri: "/TENANT_ID/webdav/" 2015/03/13 12:06:44 [debug] 111#0: *513 content phase: 12 2015/03/13 12:06:44 [debug] 111#0: *513 content phase: 13 2015/03/13 12:06:44 [debug] 111#0: *513 content phase: 14 2015/03/13 12:06:44 [debug] 111#0: *513 content phase: 15 2015/03/13 12:06:44 [debug] 111#0: *513 content phase: 16 2015/03/13 12:06:44 [debug] 111#0: *513 http finalize request: 405, "/TENANT_ID/webdav/?" a:1, c:1 2015/03/13 12:06:44 [debug] 111#0: *513 http special response: 405, "/TENANT_ID/webdav/?" 2015/03/13 12:06:44 [debug] 111#0: *513 http set discard body 2015/03/13 12:06:44 [debug] 111#0: *513 xslt filter header 2015/03/13 12:06:44 [debug] 111#0: *513 HTTP/1.1 405 Not Allowed Server: nginx Date: Fri, 13 Mar 2015 12:06:44 GMT Content-Type: text/html Content-Length: 166 Connection: close 2015/03/13 12:06:44 [debug] 111#0: *513 write new buf t:1 f:0 00000000023121A0, pos 00000000023121A0, size: 145 file: 0, size: 0 2015/03/13 12:06:44 [debug] 111#0: *513 http write filter: l:0 f:0 s:145 2015/03/13 12:06:44 [debug] 111#0: *513 http output filter "/TENANT_ID/webdav/?" 
2015/03/13 12:06:44 [debug] 111#0: *513 http copy filter: "/TENANT_ID/webdav/?" 2015/03/13 12:06:44 [debug] 111#0: *513 image filter 2015/03/13 12:06:44 [debug] 111#0: *513 xslt filter body 2015/03/13 12:06:44 [debug] 111#0: *513 http postpone filter "/TENANT_ID/webdav/?" 0000000002312360 2015/03/13 12:06:44 [debug] 111#0: *513 write old buf t:1 f:0 00000000023121A0, pos 00000000023121A0, size: 145 file: 0, size: 0 2015/03/13 12:06:44 [debug] 111#0: *513 write new buf t:0 f:0 0000000000000000, pos 00000000006C5F80, size: 120 file: 0, size: 0 2015/03/13 12:06:44 [debug] 111#0: *513 write new buf t:0 f:0 0000000000000000, pos 00000000006C6700, size: 46 file: 0, size: 0 2015/03/13 12:06:44 [debug] 111#0: *513 http write filter: l:1 f:0 s:311 2015/03/13 12:06:44 [debug] 111#0: *513 http write filter limit 0 2015/03/13 12:06:44 [debug] 111#0: *513 writev: 311 2015/03/13 12:06:44 [debug] 111#0: *513 http write filter 0000000000000000 2015/03/13 12:06:44 [debug] 111#0: *513 http copy filter: 0 "/TENANT_ID/webdav/?" 2015/03/13 12:06:44 [debug] 111#0: *513 http finalize request: 0, "/TENANT_ID/webdav/?" 
a:1, c:1 2015/03/13 12:06:44 [debug] 111#0: *513 event timer add: 3: 5000:1426248409611 2015/03/13 12:06:44 [debug] 111#0: *513 http lingering close handler 2015/03/13 12:06:44 [debug] 111#0: *513 recv: fd:3 -1 of 4096 2015/03/13 12:06:44 [debug] 111#0: *513 recv() not ready (11: Resource temporarily unavailable) 2015/03/13 12:06:44 [debug] 111#0: *513 lingering read: -2 2015/03/13 12:06:44 [debug] 111#0: *513 event timer: 3, old: 1426248409611, new: 1426248409611 2015/03/13 12:06:44 [debug] 111#0: *513 post event 00007F08EC5301B0 2015/03/13 12:06:44 [debug] 111#0: *513 delete posted event 00007F08EC5301B0 2015/03/13 12:06:44 [debug] 111#0: *513 http lingering close handler 2015/03/13 12:06:44 [debug] 111#0: *513 recv: fd:3 0 of 4096 2015/03/13 12:06:44 [debug] 111#0: *513 lingering read: 0 2015/03/13 12:06:44 [debug] 111#0: *513 http request count:1 blk:0 2015/03/13 12:06:44 [debug] 111#0: *513 http close request 2015/03/13 12:06:44 [debug] 111#0: *513 http log handler 2015/03/13 12:06:44 [debug] 111#0: *513 free: 0000000002311400, unused: 96 2015/03/13 12:06:44 [debug] 111#0: *513 free: 00000000022D4600, unused: 2951 2015/03/13 12:06:44 [debug] 111#0: *513 close http connection: 3 2015/03/13 12:06:44 [debug] 111#0: *513 event timer del: 3: 1426248409611 2015/03/13 12:06:44 [debug] 111#0: *513 reusable connection: 0 2015/03/13 12:06:44 [debug] 111#0: *513 free: 00000000022DB080 2015/03/13 12:06:44 [debug] 111#0: *513 free: 00000000022D2910, unused: 0 2015/03/13 12:06:44 [debug] 111#0: *513 free: 00000000022DAF70, unused: 128 ----------------------------------------------------- Sebastian Schwaiger -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Mar 13 14:02:58 2015 From: nginx-forum at nginx.us (dominus.ceo) Date: Fri, 13 Mar 2015 10:02:58 -0400 Subject: Does anybody know what does "Undefined index: HTTP_X_AUTH_PORT " mean? 
In-Reply-To: References: Message-ID: Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257239,257244#msg-257244 From nginx-forum at nginx.us Fri Mar 13 14:15:44 2015 From: nginx-forum at nginx.us (dominus.ceo) Date: Fri, 13 Mar 2015 10:15:44 -0400 Subject: Internal Server Error messages nginx proxy POP/IMAP/SMTP In-Reply-To: References: Message-ID: Thanks for the tips and the corrections. After a long night I have finished the proxy configuration with LDAP/PHP authentication. Does anybody know if it is possible to share configurations on the nginx page? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257206,257245#msg-257245 From francis at daoine.org Sat Mar 14 10:08:20 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 14 Mar 2015 10:08:20 +0000 Subject: nginx + SabreDAV: Error 405 when accessing location via WebDAV, SabreDAV web interface works without problems In-Reply-To: <004101d05d88$7a8fbc40$6faf34c0$@der-wastl.de> References: <004101d05d88$7a8fbc40$6faf34c0$@der-wastl.de> Message-ID: <20150314100820.GF29618@daoine.org> On Fri, Mar 13, 2015 at 01:23:12PM +0100, Sebastian Schwaiger wrote: Hi there, > I'm trying to set up a WebDAV server using SabreDAV. I don't see an obvious sample configuration for this for nginx, but the sabredav docs do give a sample configuration for apache at http://sabre.io/dav/webservers/, which seems to indicate that *all* requests should be handled by sabredav; so whatever web server is used is really just there to let sabredav do its work. That is not what the configuration you show here does; so I suspect there may need to be some more discussion about what exactly you want nginx to do.
I would expect that for nginx, if every request that starts with "/s/" should be passed to the fastcgi server to be processed by the file /opt/sabredav/server.php, then all the configuration you need is server { location ^~ /s/ { include fastcgi_params; fastcgi_param SCRIPT_FILENAME /opt/sabredav/server.php; fastcgi_pass unix:/var/run/php5-fpm.sock; } } That will let requests that do not start with /s/ be served from the filesystem at /usr/local/nginx/html, or whatever your configured root is. > location /TENANT_ID/webdav/ { > > try_files $uri $uri/ > /TENANT_ID/webdav/index.php?$1; $1 there probably does not have what you want. And the try_files there will probably be what leads to the 405 that you see... > 2015/03/13 12:06:44 [debug] 111#0: *513 http request line: "PROPFIND > /TENANT_ID/webdav/ HTTP/1.0" ...because the request is for /TENANT_ID/webdav/, which refers to a directory that exists, and the nginx serve-from-the-filesystem handler does not "do" PROPFIND. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sat Mar 14 13:00:59 2015 From: nginx-forum at nginx.us (dominus.ceo) Date: Sat, 14 Mar 2015 09:00:59 -0400 Subject: Security and tuning concerns Message-ID: <0206257e8d37c10ce82b27fba51e3b9a.NginxMailingListEnglish@forum.nginx.org> Hi everybody, Does anyone know if there is any kind of baseline or checklist to improve the security and performance of nginx? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257248,257248#msg-257248 From nginx-forum at nginx.us Sat Mar 14 13:14:28 2015 From: nginx-forum at nginx.us (dominus.ceo) Date: Sat, 14 Mar 2015 09:14:28 -0400 Subject: Security and tuning concerns In-Reply-To: <0206257e8d37c10ce82b27fba51e3b9a.NginxMailingListEnglish@forum.nginx.org> References: <0206257e8d37c10ce82b27fba51e3b9a.NginxMailingListEnglish@forum.nginx.org> Message-ID: This question arose because I would like to improve it before putting it into a production environment. Thanks in advance.
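For a starting point on that question, a rough hardening/tuning sketch along commonly cited lines for nginx of this era; the directive values below are illustrative assumptions, not recommendations for any specific site:

```nginx
# Illustrative hardening/tuning sketch -- adapt values to your workload.
http {
    server_tokens off;                  # hide the nginx version in headers and error pages

    # TLS: drop legacy protocols (SSLv3 was the POODLE vector)
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    # Basic request limits to blunt abusive clients
    client_max_body_size 10m;
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    # Performance basics
    sendfile on;
    keepalive_timeout 65;
    gzip on;
    gzip_types text/css application/javascript text/plain;

    server {
        limit_req zone=perip burst=20;
        # ...
    }
}
```

A real baseline would also cover OS-level limits (e.g. worker_rlimit_nofile), log review, and keeping nginx itself patched.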
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257248,257249#msg-257249 From emailgrant at gmail.com Sat Mar 14 13:30:56 2015 From: emailgrant at gmail.com (Grant) Date: Sat, 14 Mar 2015 06:30:56 -0700 Subject: gzip_types not working as expected In-Reply-To: <55003EED.3080201@postecom.it> References: <55003EED.3080201@postecom.it> Message-ID: >>> gzip is not working on my piwik.js file according to Google at >>> developers.google.com/speed/pagespeed/insights. It's working fine on >>> my CSS file. How can I troubleshoot this? >>> >>> gzip on; >>> gzip_disable msie6; >>> gzip_types text/javascript application/x-javascript text/css text/plain; > > > You are probably missing application/javascript in your list, or the size of the > javascript in question is less than the (default?) minimum size for gzip to > be applied; check the official documentation for this. > > Anyway, an easy way to check if you are missing a MIME type in your gzip > list is to open your page with Firebug (or similar) enabled and check the > type and size of the particular resource. Just needed to add application/javascript. Thanks guys. - Grant From sapientcloud at gmail.com Sat Mar 14 21:12:11 2015 From: sapientcloud at gmail.com (Sapient Nitro) Date: Sun, 15 Mar 2015 02:42:11 +0530 Subject: Host header in upstream requests Message-ID: Hi, We are using nginx for load balancing dynamically discovered upstream endpoints. We need to attach the upstream hostname as the Host header in upstream requests. For example, below is my existing conf extract. upstream products { server hostname1:port1; server hostname2:port2; } server { location /products { proxy_pass http://products; } } All requests are load balanced and working fine. Now we need to add the HTTP header "Host: hostnameX" (e.g. hostname1 or hostname2, not its IP) in all requests to upstream. How can we get this done in nginx? Regards -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Sat Mar 14 21:28:53 2015 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 14 Mar 2015 17:28:53 -0400 Subject: [ANN] Windows nginx 1.7.11.2 Gryphon Message-ID: 22:02 14-3-2015 nginx 1.7.11.2 Gryphon Based on nginx 1.7.11 (14-3-2015, last changeset 6005:d84f0abd4a53) with; + nginx-module-vts (Virtual host traffic status) adding monitoring for your NOC (network operations center) see /conf/vhts see our updated 'nginx for Windows - documentation 1.1.pdf' chapter 13 + set-misc-nginx-module v0.28 (upgraded 10-3-2015) + echo-nginx-module v0.57 (upgraded 8-3-2015) + lua-nginx-module v0.9.16 (upgraded 10-3-2015) * nginx for Windows is safe against SSL FREAK attack + new best practice ssl_ciphers example (nginx-win.conf) + 'include' in upstream http://trac.nginx.org/nginx/ticket/635 + nginx-auth-ldap (upgraded 2-3-2015) + Inter Worker Communication Protocol to support multiple workers with EBLB IWCP updated to v0.3 (if you like to keep up to date with IWCP/EBLB for other OS's then follow nginx for Windows releases, all Lua code should be cross OS compatible) see our updated 'nginx for Windows - documentation 1.1.pdf' chapter 10 with EBLB and IWCP in action, what it can do for you, including examples + EBLB (Elastic Backend Load Balancer), see /conf/EBLB + Source changes back ported + Source changes add-on's back ported + Changes for nginx_basic: Source changes back ported * Scheduled release: yes * Additional specifications: see 'Feature list' * This is the last of the Gryphon series, watch out for the new release name Builds can be found here: http://nginx-win.ecsds.eu/ Follow releases https://twitter.com/nginx4Windows Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257253,257253#msg-257253 From reallfqq-nginx at yahoo.fr Sun Mar 15 00:00:16 2015 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Sun, 15 Mar 2015 01:00:16 +0100 Subject: Host header in upstream requests In-Reply-To: References: Message-ID: Use variables list to determine what is available to you: http://nginx.org/en/docs/varindex.html --- *B. R.* On Sat, Mar 14, 2015 at 10:12 PM, Sapient Nitro wrote: > Hi, > > We are using nginx for load balancing dynamically discovered upstream > endpoints. We are in need to attach upstream hostname as header Host in > upstream requests. For example below is my existing conf extract. > > upstream products { > server hostname1:port1; > server hostname2:port2; > } > server { > location /products { > proxy_pass http://products > } > } > > All requests are load balanced and working fine. Now we are in need to add > http header "Host: hostnameX" (ex: hostname1 or hostname2, not its IP) in > all requests to upstream. How can we get this done in nginx? > > Regards > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Mar 15 14:19:37 2015 From: nginx-forum at nginx.us (sapientcloud) Date: Sun, 15 Mar 2015 10:19:37 -0400 Subject: Host header in upstream requests In-Reply-To: References: Message-ID: <143fd6c3f36a51caa635a61a8bf528b9.NginxMailingListEnglish@forum.nginx.org> Already tried $upstream_addr which unfortunately gives IP:port instead of hostname:port. 
My upstream needs hostname listed in conf instead of IP Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257252,257258#msg-257258 From nginx-forum at nginx.us Mon Mar 16 05:11:33 2015 From: nginx-forum at nginx.us (jacograaff) Date: Mon, 16 Mar 2015 01:11:33 -0400 Subject: restrict sub-directory access divert if not authenticated Message-ID: <4d5b1fde17c7a497e9022f84eb29c950.NginxMailingListEnglish@forum.nginx.org> I am moving from a development server to a live server and would like to still test before REALLY going live. I need to redirect access to a sub-folder until I am satisfied with the stability. This is what I have:
-------------------------------------------------------------
server {
    listen 80;
    server_name hostname.com

    #other configurations

    location ~ \.php$ {
        #fastcgi configurations
    }

    # now for the new sub-site

    location /shop/ {
        # I would like to redirect to ShopSoonToOpenAdvertisement.html if deny
        satisfy any;
        allow 203.0.0.133; # my testing server and some client's ip's for testing
        deny all;
        auth_basic "sorry but this page is off limits - please go back to hostname.com";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
--------------------------------------------------------------
With this config the server does not allow access to hostname.com/shop/, but hostname.com/shop/index.php will render the page sans the .js, css, etc... I would like to show the shop if the password or allow is satisfied, else redirect to a static html file. Any suggestions??
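[Editorial note: a minimal, untested sketch of one way to express the intent above — `satisfy any` for the "IP or password" rule, `error_page` to divert rejected visitors to the static page, and a nested PHP location inside /shop/ so .php requests cannot escape the restriction via the site-wide PHP block. The PHP-FPM address and paths are assumptions, not from the post.]

```nginx
server {
    listen 80;
    server_name hostname.com;
    root /var/www/html;

    # site-wide PHP handling (assumed PHP-FPM on 127.0.0.1:9000)
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location /shop/ {
        satisfy any;           # pass if EITHER the allow list OR auth_basic succeeds
        allow 203.0.0.133;     # testing IPs
        deny all;
        auth_basic "restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;

        # serve the teaser page as the body of the 401/403 response
        error_page 401 403 /ShopSoonToOpenAdvertisement.html;

        # nested location: /shop/*.php matches here rather than the
        # site-wide PHP block, and inherits the access rules above
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
}
```

Without the nested location, a request for /shop/index.php matches the server-level `location ~ \.php$` (regex locations take priority over prefix locations), which carries no access restrictions — which is exactly the symptom described above.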
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257266,257266#msg-257266 From dp at nginx.com Mon Mar 16 08:34:36 2015 From: dp at nginx.com (Dmitry Pryadko) Date: Mon, 16 Mar 2015 11:34:36 +0300 Subject: Host header in upstream requests In-Reply-To: <143fd6c3f36a51caa635a61a8bf528b9.NginxMailingListEnglish@forum.nginx.org> References: <143fd6c3f36a51caa635a61a8bf528b9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5506959C.20209@nginx.com> I don't think that anything other than manual maintenance of address-hostname pairs can help you:

map $upstream_addr $upstream_hostname {
    10.0.0.1:80 hostname1;
    10.0.0.2:80 hostname2;
}
...
proxy_set_header Host $upstream_hostname;

15.03.15 17:19, sapientcloud wrote: > Already tried $upstream_addr which unfortunately gives IP:port instead of > hostname:port. My upstream needs hostname listed in conf instead of IP > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257252,257258#msg-257258 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- br, Dmitry Pryadko From dp at nginx.com Mon Mar 16 10:37:04 2015 From: dp at nginx.com (Dmitry Pryadko) Date: Mon, 16 Mar 2015 13:37:04 +0300 Subject: restrict sub-directory access divert if not authenticated In-Reply-To: <4d5b1fde17c7a497e9022f84eb29c950.NginxMailingListEnglish@forum.nginx.org> References: <4d5b1fde17c7a497e9022f84eb29c950.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5506B250.8020103@nginx.com> 16.03.15 8:11, jacograaff wrote: > I am moving from a development server to a live server and would like to > still test before REALLY going live.
> > I need to redirect access to a sub-folder until I am satisfied with the > stability > > this is what i have: > ------------------------------------------------------------- > server { > listen 80; > server_name hostname.com > > #other configurations > > location ~ \.php$ { > #fastcgi configurations > } > > # now for the new sub-site > > > location /shop/ { > # I would like to redirect to ShopSoonToOpenAdvertisement.html if > deny > satisfy any; > allow 203.0.0.133; # my testing server and some client's ip's for > testing > deny all; > auth_basic "sorry but this page is off limits - please go back to href='hostname.com'>hostname.com"; > auth_basic_user_file /etc/nginx/.htpasswd; > } > } > > -------------------------------------------------------------- > > for this config the server does not allow access to hostname.com/shop/ > > but > > hostname.com/shop/index.php > > will render the page sans the .js, css, etc... So you say that /shop/index.php doesn't land to /shop/ location? Then ensure that you don't have location /shop/index.php , otherwise see debug log > > I would like to show the shop if passw or allow is satisfied else redirect > to a static html file error_page 401 403 /ShopSoonToOpenAdvertisement.html; > > > Any suggestions?? > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257266,257266#msg-257266 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- br, Dmitry Pryadko From shahzaib.cb at gmail.com Mon Mar 16 11:09:30 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 16 Mar 2015 16:09:30 +0500 Subject: Fake video sharing Android App !! Message-ID: Guys, someone cloned our videosharing website and created a FAKE android application using same name as our website and people considering it as our app, which is not. 
The main problem we're facing is, the videos being served from this android application are hotlinked to our server due to which we're the one affected by its bandwidth cost. Webserver is nginx and hotlinking is already enabled but the issue with no Referer_Header for the requests being generated by this android application. What precautions should we take to prevent this application by using our server's bandwidth ? Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmm at csdoc.com Mon Mar 16 11:19:43 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Mon, 16 Mar 2015 13:19:43 +0200 Subject: Fake video sharing Android App !! In-Reply-To: References: Message-ID: <5506BC4F.6040505@csdoc.com> On 16.03.2015 13:09, shahzaib shahzaib wrote: > Guys, someone cloned our videosharing website and created a FAKE android > application using same name as our website and people considering it as > our app, which is not. The main problem we're facing is, the videos > being served from this android application are hotlinked to our server > due to which we're the one affected by its bandwidth cost. > > Webserver is nginx and hotlinking is already enabled but the issue with > no Referer_Header for the requests being generated by this android > application. > > What precautions should we take to prevent this application by using our > server's bandwidth ? Probably you can use http://nginx.org/en/docs/http/ngx_http_secure_link_module.html to completely prevent hotlinking from any other applications and not authorized users. but secret must not be included inside your android application, and secure links must be generated only on server and only for allowed (authorized) android applications and users. -- Best regards, Gena From francis at daoine.org Mon Mar 16 13:28:29 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 16 Mar 2015 13:28:29 +0000 Subject: Fake video sharing Android App !! 
In-Reply-To: References: Message-ID: <20150316132829.GG29618@daoine.org> On Mon, Mar 16, 2015 at 04:09:30PM +0500, shahzaib shahzaib wrote: Hi there, > Webserver is nginx and hotlinking is already enabled but the issue with no > Referer_Header for the requests being generated by this android > application. > > What precautions should we take to prevent this application by using our > server's bandwidth ? You have "the requests that you wish to allow as normal". You have "the requests that you wish not to allow, since they come from this client". What part of the request that nginx sees puts it into the "yes" or "no" bucket? Put that in your configuration, so that "yes" does what happens now, and "no" returns a http error, or returns a different video inviting the client to get your official app. Perhaps their app uses a unique User-Agent header; or all "wanted" clients do include a Referer header? If you can't tell a "good" request from a "bad" one, you probably cannot configure nginx to. f -- Francis Daly francis at daoine.org From shahzaib.cb at gmail.com Mon Mar 16 13:45:30 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 16 Mar 2015 18:45:30 +0500 Subject: Fake video sharing Android App !! In-Reply-To: <20150316132829.GG29618@daoine.org> References: <20150316132829.GG29618@daoine.org> Message-ID: Hi, I have installed that android app and requested log against my ip is following : 39.49.52.224 - - [15/Mar/2015:10:40:26 +0500] "GET /files/thumbs/2015/03/14/1426310448973c5-1.jpg HTTP/1.1" 200 13096 "-" "Dalvik/1.6.0 (Linux; U; Android 4.2.2; GT-S7582 Build/JDQ39)" where 39.49.52.224 is ip of my modem. I have also tried blocking specific user agent such as Android but neither it worked (sure i am doing something wrong) nor this is the correct solution : if ($http_user_agent ~* "Linux;Android 4.2.2") { return 403; } Thanks. 
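[Editorial note: the logged User-Agent above is "Dalvik/1.6.0 (Linux; U; Android 4.2.2; GT-S7582 Build/JDQ39)", which contains "Linux; U; Android" with spaces, so the 19-character pattern "Linux;Android 4.2.2" can never match it. A hedged sketch of a map-based variant — the variable name and exact patterns are illustrative only, and map patterns containing spaces must be quoted:]

```nginx
# http{} level: $block_ua becomes 1 for user agents resembling the stock
# Android HTTP client seen in the log, 0 for everything else
map $http_user_agent $block_ua {
    default               0;
    "~*dalvik"            1;  # matches "Dalvik/1.6.0 (...)" from the log
    "~*linux; u; android" 1;  # quoted: the regex contains spaces
}

server {
    location /files/ {
        # reject matching clients; everyone else is served as before
        if ($block_ua) {
            return 403;
        }
    }
}
```

Note this blocks every Dalvik-based client, including any legitimate Android app, so it is a stopgap rather than real hotlink protection.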
Shahzaib On Mon, Mar 16, 2015 at 6:28 PM, Francis Daly wrote: > On Mon, Mar 16, 2015 at 04:09:30PM +0500, shahzaib shahzaib wrote: > > Hi there, > > > Webserver is nginx and hotlinking is already enabled but the issue with > no > > Referer_Header for the requests being generated by this android > > application. > > > > What precautions should we take to prevent this application by using our > > server's bandwidth ? > > You have "the requests that you wish to allow as normal". You have "the > requests that you wish not to allow, since they come from this client". > > What part of the request that nginx sees puts it into the "yes" or > "no" bucket? > > Put that in your configuration, so that "yes" does what happens now, > and "no" returns a http error, or returns a different video inviting > the client to get your official app. > > Perhaps their app uses a unique User-Agent header; or all "wanted" > clients do include a Referer header? > > If you can't tell a "good" request from a "bad" one, you probably cannot > configure nginx to. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From info at pkern.at Mon Mar 16 13:55:40 2015 From: info at pkern.at (Patrik Kernstock) Date: Mon, 16 Mar 2015 14:55:40 +0100 Subject: Fake video sharing Android App !! In-Reply-To: References: <20150316132829.GG29618@daoine.org> Message-ID: <387a8d6a88cc806d3b61c20be17c19b6@pkern.at> > if ($http_user_agent ~* "Linux;Android 4.2.2") { > return 403; > } Looks correct, but maybe nginx does not like the ";" in the provided string? To be true, I never used such an rule. But anyhow this isn't the perfect solution: You're just blocking Android with version 4.2.2 with that. When an user has a phone with just Android 4 the if won't work. 
Just try that, I hope it will work (I'm just guessing): >> if ($http_user_agent ~* '(Android|android)') { Regards, Patrik On 2015-03-16 14:45, shahzaib shahzaib wrote: > Hi, > > I have installed that android app and requested log against my ip > is following : > > 39.49.52.224 - - [15/Mar/2015:10:40:26 +0500] "GET > /files/thumbs/2015/03/14/1426310448973c5-1.jpg HTTP/1.1" 200 13096 "-" > "Dalvik/1.6.0 (Linux; U; Android 4.2.2; GT-S7582 Build/JDQ39)" > > where 39.49.52.224 is ip of my modem. > > I have also tried blocking specific user agent such as Android but > neither it worked (sure i am doing something wrong) nor this is the > correct solution : > > if ($http_user_agent ~* "Linux;Android 4.2.2") { > return 403; > } > > Thanks. > > Shahzaib > > On Mon, Mar 16, 2015 at 6:28 PM, Francis Daly > wrote: > >> On Mon, Mar 16, 2015 at 04:09:30PM +0500, shahzaib shahzaib wrote: >> >> Hi there, >> >>> Webserver is nginx and hotlinking is already enabled but the >> issue with no >>> Referer_Header for the requests being generated by this android >>> application. >>> >>> What precautions should we take to prevent this application by >> using our >>> server's bandwidth ? >> >> You have "the requests that you wish to allow as normal". You have >> "the >> requests that you wish not to allow, since they come from this >> client". >> >> What part of the request that nginx sees puts it into the "yes" or >> "no" bucket? >> >> Put that in your configuration, so that "yes" does what happens >> now, >> and "no" returns a http error, or returns a different video >> inviting >> the client to get your official app. >> >> Perhaps their app uses a unique User-Agent header; or all "wanted" >> clients do include a Referer header? >> >> If you can't tell a "good" request from a "bad" one, you probably >> cannot >> configure nginx to. 
>> >> f >> -- >> Francis Daly francis at daoine.org >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx [1] > > > > Links: > ------ > [1] http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Mar 16 14:47:42 2015 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 16 Mar 2015 10:47:42 -0400 Subject: Fake video sharing Android App !! In-Reply-To: <387a8d6a88cc806d3b61c20be17c19b6@pkern.at> References: <387a8d6a88cc806d3b61c20be17c19b6@pkern.at> Message-ID: A map will be better here; map $http_user_agent $block { default 0; ~*Linux.Android 4\.2\.2 1; ....etc..... } location { if ($block) { return 403; } ......... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257269,257286#msg-257286 From francis at daoine.org Mon Mar 16 14:50:25 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 16 Mar 2015 14:50:25 +0000 Subject: Fake video sharing Android App !! In-Reply-To: References: <20150316132829.GG29618@daoine.org> Message-ID: <20150316145025.GH29618@daoine.org> On Mon, Mar 16, 2015 at 06:45:30PM +0500, shahzaib shahzaib wrote: Hi there, > I have installed that android app and requested log against my ip is > following : > > 39.49.52.224 - - [15/Mar/2015:10:40:26 +0500] "GET > /files/thumbs/2015/03/14/1426310448973c5-1.jpg HTTP/1.1" 200 13096 "-" > "Dalvik/1.6.0 (Linux; U; Android 4.2.2; GT-S7582 Build/JDQ39)" > > where 39.49.52.224 is ip of my modem. So - you have the log line for one request that you would like to block. Do you have the log line for the matching request that you would like to allow? And that log line shows just two request headers plus an ip address. If that is enough to accurately distinguish between "yes" and "no" requests, you're good. 
If not, examine the entire request (either by extra logging in nginx, or by watching the network traffic involved in each). > I have also tried blocking specific user agent such as Android but neither > it worked (sure i am doing something wrong) nor this is the correct > solution : > > if ($http_user_agent ~* "Linux;Android 4.2.2") { Does that 19-character string appear in the user agent header? If not, the "if" will not match. (I don't see it in there.) If the most important thing is that "they" don't "steal" your bandwidth, you can just turn off your web server. Bandwidth saved. But presumably it is also important that some requests are handled as they currently are. Only you can say what distinguishes a "no" request from a "yes" request. And only you can say which "yes" requests you are happy to mis-characterise as "no" requests and reject. After you determine those, then you can decide how to configure nginx to implement the same test. (For example: check your logs from before this app started. Do all valid requests include Referer? Are you happy to block any actually-valid requests that omit Referer, in order to block all requests from this app? How long do you think it will take the app author to change their app to include a Referer, if you do that?) f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Mon Mar 16 22:01:05 2015 From: nginx-forum at nginx.us (antodas) Date: Mon, 16 Mar 2015 18:01:05 -0400 Subject: nginx big bug In-Reply-To: <20120307190924.GD67687@mdounin.ru> References: <20120307190924.GD67687@mdounin.ru> Message-ID: Hello -, I have the similar problem.. !! I installed testlink and running using nginx. Sometimes.. Testlink hangs .. and I need to restart NGINX to get going. 
6380#6120: *524 WSARecv() failed (10054: An existing connection was forcibly closed by the remote host) while reading response header from upstream, client: 192.168.27.128, server: localhost, request: "GET /testlink/lib/ajax/checkTCaseDuplicateName.php?_dc=1426533872356&name=%20Rotation%20view%3A%20Review%20offer%20within%20Accept%20rotation%20scre&testcase_id=0&testsuite_id=6382 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000 Let me know how you guys resolved this error. Thank you, Anthonydas Posted at Nginx Forum: http://forum.nginx.org/read.php?2,223594,257292#msg-257292 From nginx-forum at nginx.us Mon Mar 16 23:59:12 2015 From: nginx-forum at nginx.us (jacograaff) Date: Mon, 16 Mar 2015 19:59:12 -0400 Subject: restrict sub-directory access divert if not authenticated In-Reply-To: <5506B250.8020103@nginx.com> References: <5506B250.8020103@nginx.com> Message-ID: Hi thanks for that your suggestion: ----------------- So you say that /shop/index.php doesn't land to /shop/ location? Then ensure that you don't have location /shop/index.php , otherwise see debug log ------------------- I do want access to /shop/ as well as /shop/index.php - but only to a select ip-range or password my directive works for /shop/ but if someone types in /shop/index.php they will see the index.php page since the directive for php files overwrites the shop subfolder directive. They see the php file but the html, js, css, img resources does not load because those do still obey my "location /shop/" directive I guess I have to either 1. declare as: "location /shop/ ~ \.php$" or 2. inside the location "/shop/" directive have a clause to force php not to obey the above "location ~ \.php$" directive Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257266,257293#msg-257293 From aflexzor at gmail.com Tue Mar 17 02:11:03 2015 From: aflexzor at gmail.com (Alex Flex) Date: Mon, 16 Mar 2015 20:11:03 -0600 Subject: Help with nginx http auth based on forwarded IP. 
In-Reply-To: <5507822B.2080303@gmail.com> References: <55077243.2060707@gmail.com> <5507822B.2080303@gmail.com> Message-ID: <55078D37.506@gmail.com> > Hello Nginx, > > I have these lines: > > location / { > proxy_pass http://172.4.1.2:8080; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $our_x_forwarded_for; > satisfy any; > allow 116.2.200.1; > auth_basic "protected"; > auth_basic_user_file /var/www/html/.htpasswd; > } > > It works fine when the remote user is not going through a proxy > ($remote_ip is the real ip). > > The problem is I need to allow the user based on the x_forwarded_ip (in > this case 116.2.200.1) . > > How can I achieve this? I know this isnt very secure because anybody > can emuliate a x_forwarded_ip but this is just an additional layer of > protection in place. > > > Thanks > > Alex. > > > From vinstheking at gmail.com Tue Mar 17 07:54:52 2015 From: vinstheking at gmail.com (vinay bhargav) Date: Tue, 17 Mar 2015 13:24:52 +0530 Subject: Nginx configuration recovery Message-ID: Hi, Sorry for spamming but I'm in deep trouble. I've accidentally overwritten /etc/nginx/site-availabe/default with some xyz file. I'm using Ubuntu 14.04. The server is still running. Is there any way I could recover the config file. Note: Recovering the default file is very important for me. Thanks in advance, Vinay. -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Tue Mar 17 08:22:19 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 17 Mar 2015 13:22:19 +0500 Subject: Fake video sharing Android App !! 
In-Reply-To: <20150316145025.GH29618@daoine.org> References: <20150316132829.GG29618@daoine.org> <20150316145025.GH29618@daoine.org> Message-ID: @itpp thanks for suggestion but the problem is , this is the invalid way of blocking requests belong to android and the reason is , our official android app will be releasing soon and filtering based on this user-agent will block valid users as well. So we need something different such as, adding some custom header in official android app and filtering requests based on that (Maybe). @Francis, thanks for explanation and suggestion. As you suggested, i should enable extra logging and currently following is the log format enabled on nginx. Does nginx support extra logging format ? i want to log each parameter to distinguish between valid and invalid requests. Following is current log format : log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; Thanks. Shahzaib On Mon, Mar 16, 2015 at 7:50 PM, Francis Daly wrote: > On Mon, Mar 16, 2015 at 06:45:30PM +0500, shahzaib shahzaib wrote: > > Hi there, > > > I have installed that android app and requested log against my ip is > > following : > > > > 39.49.52.224 - - [15/Mar/2015:10:40:26 +0500] "GET > > /files/thumbs/2015/03/14/1426310448973c5-1.jpg HTTP/1.1" 200 13096 "-" > > "Dalvik/1.6.0 (Linux; U; Android 4.2.2; GT-S7582 Build/JDQ39)" > > > > where 39.49.52.224 is ip of my modem. > > So - you have the log line for one request that you would like to block. > > Do you have the log line for the matching request that you would like > to allow? > > And that log line shows just two request headers plus an ip address. If > that is enough to accurately distinguish between "yes" and "no" requests, > you're good. If not, examine the entire request (either by extra logging > in nginx, or by watching the network traffic involved in each). 
> > > I have also tried blocking specific user agent such as Android but > neither > > it worked (sure i am doing something wrong) nor this is the correct > > solution : > > > > if ($http_user_agent ~* "Linux;Android 4.2.2") { > > Does that 19-character string appear in the user agent header? If not, > the "if" will not match. > > (I don't see it in there.) > > If the most important thing is that "they" don't "steal" your bandwidth, > you can just turn off your web server. Bandwidth saved. > > But presumably it is also important that some requests are handled as > they currently are. > > Only you can say what distinguishes a "no" request from a "yes" > request. > > And only you can say which "yes" requests you are happy to > mis-characterise as "no" requests and reject. > > After you determine those, then you can decide how to configure nginx > to implement the same test. > > (For example: check your logs from before this app started. Do all valid > requests include Referer? Are you happy to block any actually-valid > requests that omit Referer, in order to block all requests from this > app? How long do you think it will take the app author to change their > app to include a Referer, if you do that?) > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Mar 17 09:06:05 2015 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 17 Mar 2015 05:06:05 -0400 Subject: Heads up: Forthcoming OpenSSL releases Message-ID: <181016846658f1e63b0ef9b53224b9b2.NginxMailingListEnglish@forum.nginx.org> "The OpenSSL project team would like to announce the forthcoming release of OpenSSL versions 1.0.2a, 1.0.1m, 1.0.0r and 0.9.8zf. These releases will be made available on 19th March. They will fix a number of security defects. 
The highest severity defect fixed by these releases is classified as "high" severity. " https://groups.google.com/forum/#!topic/mailing.openssl.users/BGQ908PGoy8 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257302,257302#msg-257302 From nginx-forum at nginx.us Tue Mar 17 09:10:13 2015 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 17 Mar 2015 05:10:13 -0400 Subject: Fake video sharing Android App !! In-Reply-To: References: Message-ID: I'd use some kind of authentication based on a user logging in before allowing use of a service, an encrypted cookie or something along that line. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257269,257303#msg-257303 From shahzaib.cb at gmail.com Tue Mar 17 10:12:36 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 17 Mar 2015 15:12:36 +0500 Subject: Fake video sharing Android App !! In-Reply-To: References: Message-ID: @itpp, as i sent the logs above that referer_header for android requests are empty, maybe blocking requests based on empty referer_header will partially resolve our issue ? Following is the config i used to block empty referer_header but in vain. 
valid_referers server_names ~.;
if ($invalid_referer) {
    return 403;
}

Android request log : 39.49.52.224 - - [15/Mar/2015:10:40:26 +0500] "GET /files/thumbs/2015/03/14/1426310448973c5-1.jpg HTTP/1.1" 200 13096 "-" "Dalvik/1.6.0 (Linux; U; Android 4.2.2; GT-S7582 Build/JDQ39)" I might be putting this config under wrong location, following is the content of android.conf and virtual.conf :

virtual.conf :

server {
    listen 80;
    server_name conversion.domain.com;
    client_max_body_size 8000m;
    # limit_rate 180k;
    # access_log /websites/theos.in/logs/access.log main;

    location / {
        root /var/www/html/conversion;
        index index.html index.htm index.php;
        # autoindex on;
        include android.conf;
    }

    location ~ \.(flv|jpg|jpeg)$ {
        flv;
        root /var/www/html/conversion;
        expires 2d;
        include android.conf;
        valid_referers none blocked domain.net www.domain.net domain.com www.domain.com;
        if ($invalid_referer) {
            return 403;
        }
    }

    location ~ \.(mp4)$ {
        mp4;
        root /var/www/html/conversion;
        expires 1d;
        include android.conf;
        valid_referers none blocked domain.net www.domain.net domain.com www.domain.com;
        if ($invalid_referer) {
            return 403;
        }
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    location ~ \.php$ {
        root /var/www/html/conversion;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}

android.conf :

#if ($http_user_agent ~* "Android") {
#    return 403;
#}
valid_referers server_names ~.;
if ($invalid_referer) {
    return 403;
}

Regards. Shahzaib On Tue, Mar 17, 2015 at 2:10 PM, itpp2012 wrote: > I'd use some kind of authentication based on a user logging in before > allowing use of a service, an encrypted cookie or something along that > line.
> > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,257269,257303#msg-257303 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Mar 17 10:25:51 2015 From: nginx-forum at nginx.us (rbqdg9) Date: Tue, 17 Mar 2015 06:25:51 -0400 Subject: SSL3_CTX_CTRL:called a function you should not call In-Reply-To: <20150203172044.GB99511@mdounin.ru> References: <20150203172044.GB99511@mdounin.ru> Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > If you see problems with nginx 1.7.9, consider following hints > at http://wiki.nginx.org/Debugging. I think it will not help (at least if not done by someone who really knows both openssl and nginx internals). The problem is quickly traceable to

long
ssl3_ctx_callback_ctrl(SSL_CTX *ctx, int cmd, void (*fp)(void))
{
    CERT *cert;

    cert = ctx->cert;

    switch (cmd) {
    case SSL_CTRL_SET_TMP_RSA_CB:
        SSLerr(SSL_F_SSL3_CTX_CTRL,
            ERR_R_SHOULD_NOT_HAVE_BEEN_CALLED);

(yes, this occurrence, exactly) inside libressl-2.1.3/ssl/s3_lib.c, and this function seems never called by nginx code directly and is not supposed to be externally called at all. The pure openssl has some pointer-magic in this place, dropped by libressl developers (with the data structure itself, so no easy way to bring it back). I think the only thing developers may do (if not willing to really investigate and fix this issue) is to just stop declaring nginx compatibility with libressl. It is not only non-working, but worse - it cleanly executes some garbage instead of code. (I have a full system log of the stack-protection mechanics trying to prevent this) and yes, 1.7.10 still does the same. The problem itself does not appear on every connection, just in some special cases, but is easily reproducible.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256381,257313#msg-257313 From nginx-forum at nginx.us Tue Mar 17 10:38:28 2015 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 17 Mar 2015 06:38:28 -0400 Subject: Fake video sharing Android App !! In-Reply-To: References: Message-ID: Which can all be faked (eventually), build some kind of validation/authentication system before launching your app. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257269,257314#msg-257314 From nginx-forum at nginx.us Tue Mar 17 10:59:40 2015 From: nginx-forum at nginx.us (rbqdg9) Date: Tue, 17 Mar 2015 06:59:40 -0400 Subject: SSL3_CTX_CTRL:called a function you should not call In-Reply-To: References: <20150203172044.GB99511@mdounin.ru> Message-ID: and yes, upgrade to libressl 2.1.5 didn't solve this. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256381,257315#msg-257315 From shahzaib.cb at gmail.com Tue Mar 17 11:20:31 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 17 Mar 2015 16:20:31 +0500 Subject: Fake video sharing Android App !! In-Reply-To: References: Message-ID: @itpp, you're right but even if we can partially solve this problem, it'll be sufficient for us. Well, using below method worked in our case : location ~ \.(mp4)$ { mp4; root /var/www/html/conversion; expires 1d; valid_referers servers domain.net content.domain.com ; if ($invalid_referer) { return 403; } } This config is only permitting domain.net and domain.com while preventing any other referer header such as "empty" one. On Tue, Mar 17, 2015 at 3:38 PM, itpp2012 wrote: > Which can all be faked (eventually), build some kind of > validation/authentication system before launching your app. 
> > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,257269,257314#msg-257314 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Mar 17 13:25:58 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Mar 2015 16:25:58 +0300 Subject: SSL3_CTX_CTRL:called a function you should not call In-Reply-To: References: <20150203172044.GB99511@mdounin.ru> Message-ID: <20150317132558.GI88631@mdounin.ru> Hello! On Tue, Mar 17, 2015 at 06:25:51AM -0400, rbqdg9 wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > If you see problems with nginx 1.7.9, consider following hints > > at http://wiki.nginx.org/Debugging. > I think it will not help (at least if not did by anyone who really knows > both openssl and nginx internals). > the problem is quickly traceable to > > long > ssl3_ctx_callback_ctrl(SSL_CTX *ctx, int cmd, void (*fp)(void)) > { > CERT *cert; > > cert = ctx->cert; > > switch (cmd) { > case SSL_CTRL_SET_TMP_RSA_CB: > SSLerr(SSL_F_SSL3_CTX_CTRL, > ERR_R_SHOULD_NOT_HAVE_BEEN_CALLED); > (yes, this occurence, exactly) > > inside libressl-2.1.3/ssl/s3_lib.c, and this function seems newer called by > nginx code directly and not supposed to be externally-called at all. > The pure openssl have some pointer-magic in this place, dropped by libressl > developers (with the data structure itself, so no easy way to bring it > back) I see no magic in OpenSSL here. It looks like the alert is due to LibreSSL having dropped support for export ciphers, while nginx calls SSL_CTX_set_tmp_rsa_callback() to be able to support them if configured to do so. So, the alert is harmless and can be safely ignored. It's just a result of LibreSSL dropping support for parts of the OpenSSL API nginx uses.
> I think the only thing developers may do (if not willing to really > investigate and fix this issue) - just stop declaring nginx compatibility > with libressl. It not only nonworking, but worse - it cleanly execute some > garbage instead of code. The only thing we declare is that nginx can be built with LibreSSL. And it is going to work as long as LibreSSL does the right thing and doesn't diverge from the OpenSSL API too much. We consider both LibreSSL and BoringSSL to be interesting experimental libraries, and plan to preserve at least minimal support as long as it doesn't require too much effort. > (I have full system log of stack-protection mechanics trying to prevent > this) > > and yes, 1.7.10 still does the same. The problem itself does not appear on > any connection, just in some special cases, but easely reproduceable. So again: http://wiki.nginx.org/Debugging -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Mar 17 13:43:14 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Mar 2015 16:43:14 +0300 Subject: nginx big bug In-Reply-To: References: <20120307190924.GD67687@mdounin.ru> Message-ID: <20150317134314.GL88631@mdounin.ru> Hello! On Mon, Mar 16, 2015 at 06:01:05PM -0400, antodas wrote: > Hello -, > > I have the similar problem.. !! > > I installed testlink and running using nginx. > > Sometimes.. Testlink hangs .. and I need to restart NGINX to get going. > > 6380#6120: *524 WSARecv() failed (10054: An existing connection was forcibly > closed by the remote host) while reading response header from upstream, > client: 192.168.27.128, server: localhost, request: "GET > /testlink/lib/ajax/checkTCaseDuplicateName.php?_dc=1426533872356&name=%20Rotation%20view%3A%20Review%20offer%20within%20Accept%20rotation%20scre&testcase_id=0&testsuite_id=6382 > HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000 > > Let me know how you guys resolved this error. The message suggests there is something wrong with your backend, not nginx.
Have you tried restarting the backend only, not nginx? (In either case please note that nginx on Windows is considered to be an experimental version, see http://nginx.org/en/docs/windows.html. If it doesn't work for you, consider switching to another OS.) -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Mar 17 13:49:04 2015 From: nginx-forum at nginx.us (alexandru.eftimie) Date: Tue, 17 Mar 2015 09:49:04 -0400 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? In-Reply-To: References: Message-ID: Will there be support for http/2 for upstream connections? I can't seem to find anything about this online (either SPDY or HTTP/2 for upstream connections). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256561,257321#msg-257321 From mdounin at mdounin.ru Tue Mar 17 13:49:59 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Mar 2015 16:49:59 +0300 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? In-Reply-To: References: Message-ID: <20150317134958.GO88631@mdounin.ru> Hello! On Tue, Mar 17, 2015 at 09:49:04AM -0400, alexandru.eftimie wrote: > Will there be support for http/2 for upstream connections? I can't seem to > find anything about this online ( either SPDY or HTTP/2 for upstream > connections ) No, and there are no plans. -- Maxim Dounin http://nginx.org/ From daniel at mostertman.org Tue Mar 17 14:01:09 2015 From: daniel at mostertman.org (Daniël Mostertman) Date: Tue, 17 Mar 2015 15:01:09 +0100 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? In-Reply-To: <20150317134958.GO88631@mdounin.ru> References: <20150317134958.GO88631@mdounin.ru> Message-ID: <550833A5.8010000@mostertman.org> Maxim Dounin wrote on 17-3-2015 at 14:49: > Hello! > > On Tue, Mar 17, 2015 at 09:49:04AM -0400, alexandru.eftimie wrote: > >> Will there be support for http/2 for upstream connections?
>> I can't seem to find anything about this online ( either SPDY or HTTP/2 for upstream >> connections ) > No, and there are no plans. I don't see why it wouldn't be. HTTP/2 is *not* the same kind of thing as SPDY. HTTP/2 is an actual HTTP, and does not even require protocol renegotiation support. HTTP/2 support plans were announced on Feb 25, 2015: http://nginx.com/blog/how-nginx-plans-to-support-http2/ HTTP/1.0 and HTTP/1.1 are supported for upstreams, so it only makes sense that HTTP/2 follows suit. From nginx-forum at nginx.us Tue Mar 17 14:11:48 2015 From: nginx-forum at nginx.us (rbqdg9) Date: Tue, 17 Mar 2015 10:11:48 -0400 Subject: SSL3_CTX_CTRL:called a function you should not call In-Reply-To: <20150317132558.GI88631@mdounin.ru> References: <20150317132558.GI88631@mdounin.ru> Message-ID: > So, the alert is harmless and can be safely ignored. The real problem is that it doesn't appear harmless; it is always accompanied by something like: nginx[32624] trap invalid opcode ip:47e04d sp:7fff6971ae50 error:0 in nginx[400000+a0000] (exactly one "invalid opcode" for each "function you should not call" in the nginx log) and a session reset. Or, in a different setup, just a silent crash. I can't believe it is "harmless".
(And I don't think it is just an nginx problem; more likely the libressl "cleanup" was somewhat unclean.) I've commented out the SSL_CTX_set_tmp_rsa_callback() call in the http_ssl_module, and it seems to be the right fix for my problem (at least it stops producing invalid opcode errors). (I should have tried it first, but wrongly assumed that having !EXPORT in Ciphers would prevent nginx from calling the legacy code anyway.) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256381,257325#msg-257325 From nginx-forum at nginx.us Tue Mar 17 14:28:33 2015 From: nginx-forum at nginx.us (aa2funworld) Date: Tue, 17 Mar 2015 10:28:33 -0400 Subject: Log Format with $request_body Not working Message-ID: <2ed59ea3f1125ac132fe991c1faec54e.NginxMailingListEnglish@forum.nginx.org> I am trying to log the incoming HTTP request with the log_format given below. It is logging only '-'. Could you please tell me what I am doing wrong? Thanks in advance. nginx.conf ----------------

http {
    include mime.types;
    default_type application/octet-stream;

    log_format postdata $request_body;

    types_hash_bucket_size 64;
    server_names_hash_bucket_size 128;

    server {
        listen 9080;
        server_name X1n2d23;
        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://www.google.com;
            access_log /nginx/install/log/nginx/log_server.log postdata;
        }
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257326,257326#msg-257326 From mdounin at mdounin.ru Tue Mar 17 14:38:21 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Mar 2015 17:38:21 +0300 Subject: SSL3_CTX_CTRL:called a function you should not call In-Reply-To: References: <20150317132558.GI88631@mdounin.ru> Message-ID: <20150317143820.GR88631@mdounin.ru> Hello! On Tue, Mar 17, 2015 at 10:11:48AM -0400, rbqdg9 wrote: > > So, the alert is harmless and can be safely ignored.
> The real problem - it doesnt, it always accompanied by something like: > nginx[32624] trap invalid opcode ip:47e04d sp:7fff6971ae50 error:0 in > nginx[400000+a0000] > (exactly one "invalid opcode" for each "function you should not call" in > nginx log) and session reset. What you say sounds wrong - the SSL_CTX_set_tmp_rsa_callback() is only called while reading the configuration, and it shouldn't happen at all at runtime. Either way, as already suggested: http://wiki.nginx.org/Debugging Just discussing your problems won't help. Make sure to at least provide enough information for others to reproduce them. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Mar 17 14:43:35 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Mar 2015 17:43:35 +0300 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? In-Reply-To: <550833A5.8010000@mostertman.org> References: <20150317134958.GO88631@mdounin.ru> <550833A5.8010000@mostertman.org> Message-ID: <20150317144335.GS88631@mdounin.ru> Hello! On Tue, Mar 17, 2015 at 03:01:09PM +0100, Daniël Mostertman wrote: > Maxim Dounin wrote on 17-3-2015 at 14:49: > >Hello! > > > >On Tue, Mar 17, 2015 at 09:49:04AM -0400, alexandru.eftimie wrote: > > > >>Will there be support for http/2 for upstream connections? I can't seem to > >>find anything about this online ( either SPDY or HTTP/2 for upstream > >>connections ) > >No, and there are no plans. > I don't see why it wouldn't be. HTTP/2 is *not* the same kind as SPDY. > HTTP/2 is an actual HTTP, and does not even require protocol renegotiation > support. > HTTP/2 support plans were announced on Feb 25, 2015: > http://nginx.com/blog/how-nginx-plans-to-support-http2/ > > HTTP/1.0 and HTTP/1.1 are supported for upstreams, so it only makes sense > HTTP/2 follows suit. In no particular order: - No, HTTP/2 isn't "an actual HTTP". It's a completely different protocol, mostly based on SPDY.
- As previously said, there are no plans to support either HTTP/2 or SPDY for upstream connections. Feel free to provide patches if you think it's needed. Most likely you'll understand the reasons "why it wouldn't be" while working on the patches. -- Maxim Dounin http://nginx.org/ From patrick at nginx.com Tue Mar 17 14:56:34 2015 From: patrick at nginx.com (Patrick Nommensen) Date: Tue, 17 Mar 2015 07:56:34 -0700 Subject: 2015 NGINX User Survey: Make Your Voice Heard Message-ID: <8306335F-FF94-4413-A526-8CA41E290042@nginx.com> Hello! Yesterday we launched the 2015 NGINX User Survey. [1] This survey gives us the opportunity to better understand your perspective on NGINX today and what might make us even more valuable for you in the future. The insights you share will be used to help plan the NGINX roadmap and influence how we communicate who we are and what we do. Please take a moment to complete the survey. It will remain open until March 30, 2015. [1] http://survey.newkind.com/r/pXmteacd Patrick (On behalf of the NGINX team globally.) -- Patrick Nommensen http://nginx.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4106 bytes Desc: not available URL: From kyprizel at gmail.com Tue Mar 17 15:06:57 2015 From: kyprizel at gmail.com (kyprizel) Date: Tue, 17 Mar 2015 18:06:57 +0300 Subject: [calling all patch XPerts !] [PATCH] RSA+DSA+ECC bundles In-Reply-To: <5433F296.9000506@riseup.net> References: <5272D269.20203@comodo.com> <541C3B2B.1050002@comodo.com> <541C3F92.1060409@riseup.net> <541C42D9.2000207@comodo.com> <541DA019.6090000@riseup.net> <5432DE9B.4040400@riseup.net> <20141007114147.GK69200@mdounin.ru> <5433F296.9000506@riseup.net> Message-ID: Hi, I refactored Rob's code so it can be merged with the latest nginx. Multi-certificate support works only with OpenSSL >= 1.0.2.
Only certificates with different crypto algorithms (ECC/RSA/DSA) can be used because of OpenSSL limitations; otherwise (RSA+SHA-256 / RSA+SHA-1, for example) only the last one specified in the config will be used. Could you please review it? Thank you. On Tue, Oct 7, 2014 at 5:03 PM, shmick at riseup.net wrote: > > > Maxim Dounin wrote: > > Hello! > > > > On Tue, Oct 07, 2014 at 11:31:56AM +0400, kyprizel wrote: > > > >> Updating patch for the last nginx isn't a problem - we need to hear from > >> Maxim what was the problem with old patch (it wasn't applied that time - > >> why should by applied a new one?) to fix it. > > > > http://mailman.nginx.org/pipermail/nginx-devel/2013-November/004475.html > > ok, so what is the plan for progression & inclusion ? > do you believe there is enough interest and is the idea supported ? > you think Rob's patch isn't feasible ? > is there anybody who can take over and have they ? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx_multicert0.patch Type: application/octet-stream Size: 48228 bytes Desc: not available URL: From nginx-forum at nginx.us Tue Mar 17 15:37:08 2015 From: nginx-forum at nginx.us (173279834462) Date: Tue, 17 Mar 2015 11:37:08 -0400 Subject: SSL3_CTX_CTRL:called a function you should not call In-Reply-To: <20150203152444.GA99511@mdounin.ru> References: <20150203152444.GA99511@mdounin.ru> Message-ID: <29a204c61056059580a148a0c4ad53f5.NginxMailingListEnglish@forum.nginx.org> I am on nginx 1.7.10 with LibreSSL 2.1.5.
This is what I see in the error log:

2015/02/03 20:23:30 [alert] 69020#0: *16 ignoring stale global SSL error (SSL: error:14085042:SSL routines:SSL3_CTX_CTRL:called a function you should not call) while SSL handshaking, client: [...IP...], server: 0.0.0.0:443

I *feel* that the above is related to the following, because the two have occurred together: SNI: ssl_error_bad_cert_domain on https:// http://forum.nginx.org/read.php?2,256957,256957#msg-256957 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256381,257334#msg-257334 From nginx-forum at nginx.us Tue Mar 17 15:39:33 2015 From: nginx-forum at nginx.us (rbqdg9) Date: Tue, 17 Mar 2015 11:39:33 -0400 Subject: SSL3_CTX_CTRL:called a function you should not call In-Reply-To: <20150317143820.GR88631@mdounin.ru> References: <20150317143820.GR88631@mdounin.ru> Message-ID: <6070553e82c4e5aa59083077fb49291f.NginxMailingListEnglish@forum.nginx.org> Yes, it's at least strange. Producing a reproducible configuration is a rather complex task; this never happens in a usual browsing session (and it is not just in config parsing, of course). I'm still trying to reduce it to something I can publish. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256381,257335#msg-257335 From nginx-forum at nginx.us Tue Mar 17 15:44:11 2015 From: nginx-forum at nginx.us (rbqdg9) Date: Tue, 17 Mar 2015 11:44:11 -0400 Subject: SSL3_CTX_CTRL:called a function you should not call In-Reply-To: <29a204c61056059580a148a0c4ad53f5.NginxMailingListEnglish@forum.nginx.org> References: <20150203152444.GA99511@mdounin.ru> <29a204c61056059580a148a0c4ad53f5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <081fa395a53eda0d5486ca0ae146f941.NginxMailingListEnglish@forum.nginx.org> Could you just try my "fix"? At least, it will save me time searching in a completely wrong place.
--- nginx-1.7.10/src/http/modules/ngx_http_ssl_module.c.orig	2015-02-10 15:33:34.000000000 +0100
+++ nginx-1.7.10/src/http/modules/ngx_http_ssl_module.c	2015-03-17 14:55:58.282130993 +0100
@@ -716,7 +716,7 @@
     }

     /* a temporary 512-bit RSA key is required for export versions of MSIE */
-    SSL_CTX_set_tmp_rsa_callback(conf->ssl.ctx, ngx_ssl_rsa512_key_callback);
+    // SSL_CTX_set_tmp_rsa_callback(conf->ssl.ctx, ngx_ssl_rsa512_key_callback);

     if (ngx_ssl_dhparam(cf, &conf->ssl, &conf->dhparam) != NGX_OK) {
         return NGX_CONF_ERROR;

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256381,257336#msg-257336 From igal at lucee.org Tue Mar 17 16:16:48 2015 From: igal at lucee.org (Igal @ Lucee.org) Date: Tue, 17 Mar 2015 09:16:48 -0700 Subject: Reverse Proxy for SNMP Message-ID: <55085370.5090101@lucee.org> hi, can it be used as reverse proxy for any protocol or is it limited to http(s) and smtp? I'm trying to setup a reverse proxy for SNMP for the purpose opening remote SNMP access and using the proxy for whitelisting IPs etc. is that possible? TIA Igal From nginx-forum at nginx.us Tue Mar 17 16:20:34 2015 From: nginx-forum at nginx.us (173279834462) Date: Tue, 17 Mar 2015 12:20:34 -0400 Subject: SSL3_CTX_CTRL:called a function you should not call In-Reply-To: <081fa395a53eda0d5486ca0ae146f941.NginxMailingListEnglish@forum.nginx.org> References: <20150203152444.GA99511@mdounin.ru> <29a204c61056059580a148a0c4ad53f5.NginxMailingListEnglish@forum.nginx.org> <081fa395a53eda0d5486ca0ae146f941.NginxMailingListEnglish@forum.nginx.org> Message-ID: Will try it.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256381,257339#msg-257339 From nginx-forum at nginx.us Tue Mar 17 16:29:45 2015 From: nginx-forum at nginx.us (173279834462) Date: Tue, 17 Mar 2015 12:29:45 -0400 Subject: SSL3_CTX_CTRL:called a function you should not call In-Reply-To: <29a204c61056059580a148a0c4ad53f5.NginxMailingListEnglish@forum.nginx.org> References: <20150203152444.GA99511@mdounin.ru> <29a204c61056059580a148a0c4ad53f5.NginxMailingListEnglish@forum.nginx.org> Message-ID: The *feeling* that the problem is related to SNI is getting stronger. This is the error log when running ssllabs.com on the server:

==> stderr.log <==
2015/03/17 17:12:45 [crit] 40733#0: *925 SSL_do_handshake() failed (SSL: error:14094085:SSL routines:SSL3_READ_BYTES:ccs received early) while SSL handshaking, client: 64.41.200.104, server: 0.0.0.0:443
2015/03/17 17:12:46 [crit] 40733#0: *926 SSL_do_handshake() failed (SSL: error:14094085:SSL routines:SSL3_READ_BYTES:ccs received early) while SSL handshaking, client: 64.41.200.104, server: 0.0.0.0:443

It corresponds to the handshake simulation, and in particular to the failed handshakes with all non-SNI browsers, emphasis on "all". The SNI clients that fail are java7u25 and openssl 0.9.8y. All other clients succeed. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256381,257340#msg-257340 From dan at pingsweep.co.uk Tue Mar 17 16:45:43 2015 From: dan at pingsweep.co.uk (Daniel Hadfield) Date: Tue, 17 Mar 2015 16:45:43 +0000 Subject: Reverse Proxy for SNMP In-Reply-To: <55085370.5090101@lucee.org> References: <55085370.5090101@lucee.org> Message-ID: <55085A37.2000104@pingsweep.co.uk> nginx has no support for SNMP. You should be able to whitelist IPs using whatever SNMP daemon you are using. On 17/03/15 16:16, Igal @ Lucee.org wrote: > hi, > > can it be used as reverse proxy for any protocol or is it limited to > http(s) and smtp?
> > I'm trying to setup a reverse proxy for SNMP for the purpose opening > remote SNMP access and using the proxy for whitelisting IPs etc. > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: OpenPGP digital signature URL: From nginx-forum at nginx.us Tue Mar 17 17:14:10 2015 From: nginx-forum at nginx.us (173279834462) Date: Tue, 17 Mar 2015 13:14:10 -0400 Subject: SSL3_CTX_CTRL:called a function you should not call In-Reply-To: <081fa395a53eda0d5486ca0ae146f941.NginxMailingListEnglish@forum.nginx.org> References: <20150203152444.GA99511@mdounin.ru> <29a204c61056059580a148a0c4ad53f5.NginxMailingListEnglish@forum.nginx.org> <081fa395a53eda0d5486ca0ae146f941.NginxMailingListEnglish@forum.nginx.org> Message-ID: <21ca203e10c90b8d07c2ca93a2871409.NginxMailingListEnglish@forum.nginx.org> "fix" applied.
This is what I see when running ssllabs again:

2015/03/17 18:08:33 [crit] 14508#0: *478 SSL_do_handshake() failed (SSL: error:14094085:SSL routines:SSL3_READ_BYTES:ccs received early) while SSL handshaking, client: 64.41.200.104, server: 0.0.0.0:443
2015/03/17 18:08:34 [crit] 14506#0: *479 SSL_do_handshake() failed (SSL: error:14094085:SSL routines:SSL3_READ_BYTES:ccs received early) while SSL handshaking, client: 64.41.200.104, server: 0.0.0.0:443

The "called a function you should not call" error has not shown up so far. I will run with the "fix" for a few days; let's see what happens. Thank you for your time. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256381,257342#msg-257342 From Gurumurthy.Sundar at telventdtn.com Tue Mar 17 17:42:04 2015 From: Gurumurthy.Sundar at telventdtn.com (Gurumurthy Sundar) Date: Tue, 17 Mar 2015 17:42:04 +0000 Subject: NGINX and websocket endpoint Message-ID: <64FCB5CD9DDEFE4AB99F1D28E132677A0BE938F9@CORPEXPROD02.dtn.com> I am trying to configure nginx as a reverse proxy that does authentication and websockets. It proxy-passes the request to Apache (/auth/wsgi) for authentication; once that succeeds, it then proxy-passes to the websocket backend, which is a Java-based websocket endpoint on Tomcat 8.

location /basic/alerting/websocket {
    auth_request /auth/wsgi;
    proxy_pass http://backend.com:8080/websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade "Websocket";
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
}

The authentication on Apache succeeds. However, on the backend Tomcat, I get this error:

12572237 [http-nio-8080-exec-10] ERROR org.springframework.web.socket.server.support.DefaultHandshakeHandler - handleWebSocketVersionNotSupported() Handshake failed due to unsupported WebSocket version: null. Supported versions: [13]

It seems the failure is because the backend is expecting a header ("Sec-WebSocket-Version") which is not getting passed through.
I even see in the nginx logs: 2015/03/17 17:28:12 [debug] 20261#0: *718 http proxy header: "Sec-WebSocket-Version: 13" Is there anything I need to do in the nginx config? Very much appreciate your help. NOTICE: This email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. -------------- next part -------------- An HTML attachment was scrubbed... URL: From karljohnson.it at gmail.com Tue Mar 17 17:49:33 2015 From: karljohnson.it at gmail.com (Karl Johnson) Date: Tue, 17 Mar 2015 13:49:33 -0400 Subject: Adding expires on all images break nginx rewrite Message-ID: Hello, I host a website based on Laravel with Nginx 1.6.2 + PHP-FPM 5.6. Most images on the website are in the /static folder and are served to visitors with a PHP file (see the /static location). I want to add a 30-day expire on all images of this vhost. However, when I add the "location ~* \.(?:image)$ {" rule to add the expire, the rewrite for images in /static doesn't work anymore. Nginx reports file not found for all images in /static. Any idea how to make it work?
Vhost configuration below:

server {
    listen 80;
    server_name www.website.com;
    root /home/www/website.com/public_html/public;
    access_log /var/log/nginx/website.com-access_log;
    error_log /var/log/nginx/website.com-error_log warn;

    location /static {
        rewrite ^/static/images/([0-9])\-([0-9]+)x([0-9]+)/(.*)$ /image-f7ec13d.php?zc=$1&w=$2&h=$3&src=../uploads/images/$4;
    }

    ## not working, breaks the image rewrite above
    #
    # location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc|woff)$ {
    #     expires 30d;
    #     add_header Cache-Control "public";
    # }

    include conf.d/custom/restrictions.conf;
    include conf.d/custom/pagespeed.conf;
    include conf.d/custom/fpm-laravel.conf;

    pagespeed DisableFilters combine_css;
}

The rewrite stops working after adding the location for the image expire:

2015/03/17 13:46:12 [error] 11792#0: *12217 openat() "/home/www/website.com/public/static/images/0-0x0/2015/03/2015-03-10-media-fr.jpg" failed (2: No such file or directory)

Regards, Karl -------------- next part -------------- An HTML attachment was scrubbed... URL: From nurahmadie at gmail.com Tue Mar 17 18:02:41 2015 From: nurahmadie at gmail.com (Nurahmadie Nurahmadie) Date: Wed, 18 Mar 2015 03:02:41 +0900 Subject: Adding expires on all images break nginx rewrite In-Reply-To: References: Message-ID: That's because the regex location takes precedence over the /static prefix location. You should first decide whether that regex location should also cover the /static prefix or not. If yes, then you should add the same rewrite rule inside your regex location; if not, you can turn your /static location into a regex location as well, like this:

location ~ ^/static/ {
    # rewrite here...
}

That way nginx will look at that location first when you request /static/* URLs. Another way is to use `try_files`, but then you should probably create another (internal) location block to do the rewrite.
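To make that concrete, a minimal sketch of the combined configuration (untested; the rewrite target and extension list are taken from the thread, and the ordering relies on nginx evaluating regex locations in the order they appear):

```nginx
# Regex locations are checked in order of appearance, so this block
# must come before the generic image location to win for /static/* URLs.
location ~ ^/static/ {
    rewrite ^/static/images/([0-9])\-([0-9]+)x([0-9]+)/(.*)$
            /image-f7ec13d.php?zc=$1&w=$2&h=$3&src=../uploads/images/$4;
}

# Generic expiry for images served directly from disk (outside /static).
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc|woff)$ {
    expires 30d;
    add_header Cache-Control "public";
}
```

Note that the rewritten /static requests end up being answered by the PHP handler, so their caching headers have to be set there (or in the location that actually serves the FastCGI response), not in the rewriting location.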
On Wed, Mar 18, 2015 at 2:49 AM, Karl Johnson wrote: > Hello, > > I host a website based on Laravel with Nginx 1.6.2 + PHP-FPM 5.6. Most > images on the website are in /static folder and are served to visitors with > a PHP file (see /static location). > > I want to add a 30 days expire on all images of this vhost. However, when > I add the "location ~* \.(?:image)$ {" rule to add expire, the rewrite for > images in /static doesn't work anymore. Nginx reports file not found for > all images in /static. > > Any idea how to make it works? > > Vhost configuration below: > > server { > listen 80; > server_name www.website.com; > root /home/www/website.com/public_html/public; > access_log /var/log/nginx/website.com-access_log; > error_log /var/log/nginx/website.com-error_log warn; > > location /static { > rewrite ^/static/images/([0-9])\-([0-9]+)x([0-9]+)/(.*)$ > /image-f7ec13d.php?zc=$1&w=$2&h=$3&src=../uploads/images/$4; > } > > ## not working, break the rewrite images above > # > # location ~* > \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc|woff)$ { > # expires 30d; > # add_header Cache-Control "public"; > # } > > include conf.d/custom/restrictions.conf; > include conf.d/custom/pagespeed.conf; > include conf.d/custom/fpm-laravel.conf; > > pagespeed DisableFilters combine_css; > } > > > Rewrite not working after adding the location for all images expire: > > 2015/03/17 13:46:12 [error] 11792#0: *12217 openat() "/home/www/ > website.com/public/static/images/0-0x0/2015/03/2015-03-10-media-fr.jpg" > failed (2: No such file or directory) > > > > Regards, > > Karl > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- regards, Nurahmadie -- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From karljohnson.it at gmail.com Tue Mar 17 18:46:44 2015 From: karljohnson.it at gmail.com (Karl Johnson) Date: Tue, 17 Mar 2015 14:46:44 -0400 Subject: Adding expires on all images break nginx rewrite Message-ID: Thanks for the reply Nurahmadie. I changed the location to ~ ^/static/ and the rewrite works again. I've added a "expires 1w;" in this location to add an expire on all images in /static but it doesn't seem to apply, images give 200 OK and never cache. Is it the right way to do it? location ~ ^/static/ { rewrite ^/static/images/([0-9])\-([0-9]+)x([0-9]+)/(.*)$ /image-fa013d.php?zc=$1&w=$2&h=$3&src=../uploads/images/$4; expires 1w; } Kind regards, Karl On Tue, Mar 17, 2015 at 2:02 PM, Nurahmadie Nurahmadie wrote: > That's because the latter regex location takes precedence over the /static > one. > You should first decide if that regex location should also includes > /static prefix or not. > If it's a yes, then you should also add the same rewrite-rule inside your > regex location, if it's a not, you can turn your /static location to also > use regex like this > > location ~ ^/static/ { > # rewrite here... > } > > So nginx will take a look at that location first when you request for > /static/* urls. > > Another way is to use `try_files`, but then you should probably create > another (internal) location block to do the rewrite. > > > On Wed, Mar 18, 2015 at 2:49 AM, Karl Johnson > wrote: > >> Hello, >> >> I host a website based on Laravel with Nginx 1.6.2 + PHP-FPM 5.6. Most >> images on the website are in /static folder and are served to visitors with >> a PHP file (see /static location). >> >> I want to add a 30 days expire on all images of this vhost. However, when >> I add the "location ~* \.(?:image)$ {" rule to add expire, the rewrite for >> images in /static doesn't work anymore. Nginx reports file not found for >> all images in /static. >> >> Any idea how to make it works? 
>> >> Vhost configuration below: >> >> server { >> listen 80; >> server_name www.website.com; >> root /home/www/website.com/public_html/public; >> access_log /var/log/nginx/website.com-access_log; >> error_log /var/log/nginx/website.com-error_log warn; >> >> location /static { >> rewrite ^/static/images/([0-9])\-([0-9]+)x([0-9]+)/(.*)$ >> /image-f7ec13d.php?zc=$1&w=$2&h=$3&src=../uploads/images/$4; >> } >> >> ## not working, break the rewrite images above >> # >> # location ~* >> \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc|woff)$ { >> # expires 30d; >> # add_header Cache-Control "public"; >> # } >> >> include conf.d/custom/restrictions.conf; >> include conf.d/custom/pagespeed.conf; >> include conf.d/custom/fpm-laravel.conf; >> >> pagespeed DisableFilters combine_css; >> } >> >> >> Rewrite not working after adding the location for all images expire: >> >> 2015/03/17 13:46:12 [error] 11792#0: *12217 openat() "/home/www/ >> website.com/public/static/images/0-0x0/2015/03/2015-03-10-media-fr.jpg" >> failed (2: No such file or directory) >> >> >> >> Regards, >> >> Karl >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > regards, > Nurahmadie > -- > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nurahmadie at gmail.com Tue Mar 17 19:07:26 2015 From: nurahmadie at gmail.com (Nurahmadie Nurahmadie) Date: Wed, 18 Mar 2015 04:07:26 +0900 Subject: Adding expires on all images break nginx rewrite In-Reply-To: References: Message-ID: On Wed, Mar 18, 2015 at 3:46 AM, Karl Johnson wrote: > Thanks for the reply Nurahmadie. > > I changed the location to ~ ^/static/ and the rewrite works again. 
I've > added a "expires 1w;" in this location to add an expire on all images in > /static but it doesn't seem to apply, images give 200 OK and never cache. > Is it the right way to do it? > > location ~ ^/static/ { > rewrite ^/static/images/([0-9])\-([0-9]+)x([0-9]+)/(.*)$ > /image-fa013d.php?zc=$1&w=$2&h=$3&src=../uploads/images/$4; > expires 1w; > } > > Kind regards, > > Karl > > > Nope, since the request actually processed by your PHP handler not at that location. Since it's an fcgi, pretty sure caching directives only work with fastcgi_cache: http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_cache_valid > > > On Tue, Mar 17, 2015 at 2:02 PM, Nurahmadie Nurahmadie < > nurahmadie at gmail.com> wrote: > >> That's because the latter regex location takes precedence over the >> /static one. >> You should first decide if that regex location should also includes >> /static prefix or not. >> If it's a yes, then you should also add the same rewrite-rule inside your >> regex location, if it's a not, you can turn your /static location to also >> use regex like this >> >> location ~ ^/static/ { >> # rewrite here... >> } >> >> So nginx will take a look at that location first when you request for >> /static/* urls. >> >> Another way is to use `try_files`, but then you should probably create >> another (internal) location block to do the rewrite. >> >> >> On Wed, Mar 18, 2015 at 2:49 AM, Karl Johnson >> wrote: >> >>> Hello, >>> >>> I host a website based on Laravel with Nginx 1.6.2 + PHP-FPM 5.6. Most >>> images on the website are in /static folder and are served to visitors with >>> a PHP file (see /static location). >>> >>> I want to add a 30 days expire on all images of this vhost. However, >>> when I add the "location ~* \.(?:image)$ {" rule to add expire, the rewrite >>> for images in /static doesn't work anymore. Nginx reports file not found >>> for all images in /static. >>> >>> Any idea how to make it works? 
>>> >>> Vhost configuration below: >>> >>> server { >>> listen 80; >>> server_name www.website.com; >>> root /home/www/website.com/public_html/public; >>> access_log /var/log/nginx/website.com-access_log; >>> error_log /var/log/nginx/website.com-error_log warn; >>> >>> location /static { >>> rewrite ^/static/images/([0-9])\-([0-9]+)x([0-9]+)/(.*)$ >>> /image-f7ec13d.php?zc=$1&w=$2&h=$3&src=../uploads/images/$4; >>> } >>> >>> ## not working, break the rewrite images above >>> # >>> # location ~* >>> \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc|woff)$ { >>> # expires 30d; >>> # add_header Cache-Control "public"; >>> # } >>> >>> include conf.d/custom/restrictions.conf; >>> include conf.d/custom/pagespeed.conf; >>> include conf.d/custom/fpm-laravel.conf; >>> >>> pagespeed DisableFilters combine_css; >>> } >>> >>> >>> Rewrite not working after adding the location for all images expire: >>> >>> 2015/03/17 13:46:12 [error] 11792#0: *12217 openat() "/home/www/ >>> website.com/public/static/images/0-0x0/2015/03/2015-03-10-media-fr.jpg" >>> failed (2: No such file or directory) >>> >>> >>> >>> Regards, >>> >>> Karl >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> >> -- >> regards, >> Nurahmadie >> -- >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- regards, Nurahmadie -- -------------- next part -------------- An HTML attachment was scrubbed... 
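[Editorial note: Nurahmadie's diagnosis above can be sketched in config form. This is a hypothetical sketch, not the poster's actual vhost: since the rewritten request is ultimately handled by the PHP/FastCGI location, header-setting directives such as "expires" would need to live there (or the header be emitted by the PHP script itself) rather than in the /static location that only performs the rewrite. Socket path and the second location block are assumptions.]

```nginx
# Sketch only -- location layout and socket path are illustrative assumptions.
location ~ ^/static/ {
    # The rewrite hands the request off to the PHP location below,
    # so an "expires" placed here never sees the final response.
    rewrite ^/static/images/([0-9])\-([0-9]+)x([0-9]+)/(.*)$
            /image-f7ec13d.php?zc=$1&w=$2&h=$3&src=../uploads/images/$4;
}

location ~ \.php$ {
    include       fastcgi_params;
    fastcgi_pass  unix:/var/run/php-fpm.sock;  # assumed socket path
    # Headers for the generated images would be set here (or by PHP itself).
    expires       30d;
    add_header    Cache-Control "public";
}
```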
URL: From semenukha at gmail.com Tue Mar 17 19:35:58 2015 From: semenukha at gmail.com (Styopa Semenukha) Date: Tue, 17 Mar 2015 15:35:58 -0400 Subject: Nginx configuration recovery In-Reply-To: References: Message-ID: <2072124.UXiTYmDLHg@tornado> On Tuesday, March 17, 2015 01:24:52 PM vinay bhargav wrote: > Hi, > > Sorry for spamming but I'm in deep trouble. > > I've accidentally overwritten /etc/nginx/site-availabe/default with some > xyz file. I'm using Ubuntu 14.04. The server is still running. Is there any > way I could recover the config file. > > Note: Recovering the default file is very important for me. > > Thanks in advance, > Vinay. Hi, If you need the _stock_ file from the repository, this should do: cd /tmp && apt-get download nginx && dpkg -x nginx*.deb If you mean the file you had previously _modified_, I don't think it's possible. -- Best regards, Styopa Semenukha. From karljohnson.it at gmail.com Tue Mar 17 19:36:47 2015 From: karljohnson.it at gmail.com (Karl Johnson) Date: Tue, 17 Mar 2015 15:36:47 -0400 Subject: Adding expires on all images break nginx rewrite In-Reply-To: References: Message-ID: Yes that's what I understood after few tests. I will add the expire by the PHP script. Thanks for all the help! Karl On Tue, Mar 17, 2015 at 3:07 PM, Nurahmadie Nurahmadie wrote: > > On Wed, Mar 18, 2015 at 3:46 AM, Karl Johnson > wrote: > >> Thanks for the reply Nurahmadie. >> >> I changed the location to ~ ^/static/ and the rewrite works again. I've >> added a "expires 1w;" in this location to add an expire on all images in >> /static but it doesn't seem to apply, images give 200 OK and never cache. >> Is it the right way to do it? >> >> location ~ ^/static/ { >> rewrite ^/static/images/([0-9])\-([0-9]+)x([0-9]+)/(.*)$ >> /image-fa013d.php?zc=$1&w=$2&h=$3&src=../uploads/images/$4; >> expires 1w; >> } >> >> Kind regards, >> >> Karl >> >> >> > Nope, since the request actually processed by your PHP handler not at that > location. 
> Since it's an fcgi, pretty sure caching directives only work with > fastcgi_cache: > > > http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_cache_valid > > > >> >> >> On Tue, Mar 17, 2015 at 2:02 PM, Nurahmadie Nurahmadie < >> nurahmadie at gmail.com> wrote: >> >>> That's because the latter regex location takes precedence over the >>> /static one. >>> You should first decide if that regex location should also includes >>> /static prefix or not. >>> If it's a yes, then you should also add the same rewrite-rule inside >>> your regex location, if it's a not, you can turn your /static location to >>> also use regex like this >>> >>> location ~ ^/static/ { >>> # rewrite here... >>> } >>> >>> So nginx will take a look at that location first when you request for >>> /static/* urls. >>> >>> Another way is to use `try_files`, but then you should probably create >>> another (internal) location block to do the rewrite. >>> >>> >>> On Wed, Mar 18, 2015 at 2:49 AM, Karl Johnson >>> wrote: >>> >>>> Hello, >>>> >>>> I host a website based on Laravel with Nginx 1.6.2 + PHP-FPM 5.6. Most >>>> images on the website are in /static folder and are served to visitors with >>>> a PHP file (see /static location). >>>> >>>> I want to add a 30 days expire on all images of this vhost. However, >>>> when I add the "location ~* \.(?:image)$ {" rule to add expire, the rewrite >>>> for images in /static doesn't work anymore. Nginx reports file not found >>>> for all images in /static. >>>> >>>> Any idea how to make it works? 
>>>> >>>> Vhost configuration below: >>>> >>>> server { >>>> listen 80; >>>> server_name www.website.com; >>>> root /home/www/website.com/public_html/public; >>>> access_log /var/log/nginx/website.com-access_log; >>>> error_log /var/log/nginx/website.com-error_log warn; >>>> >>>> location /static { >>>> rewrite >>>> ^/static/images/([0-9])\-([0-9]+)x([0-9]+)/(.*)$ >>>> /image-f7ec13d.php?zc=$1&w=$2&h=$3&src=../uploads/images/$4; >>>> } >>>> >>>> ## not working, break the rewrite images above >>>> # >>>> # location ~* >>>> \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc|woff)$ { >>>> # expires 30d; >>>> # add_header Cache-Control "public"; >>>> # } >>>> >>>> include conf.d/custom/restrictions.conf; >>>> include conf.d/custom/pagespeed.conf; >>>> include conf.d/custom/fpm-laravel.conf; >>>> >>>> pagespeed DisableFilters combine_css; >>>> } >>>> >>>> >>>> Rewrite not working after adding the location for all images expire: >>>> >>>> 2015/03/17 13:46:12 [error] 11792#0: *12217 openat() "/home/www/ >>>> website.com/public/static/images/0-0x0/2015/03/2015-03-10-media-fr.jpg" >>>> failed (2: No such file or directory) >>>> >>>> >>>> >>>> Regards, >>>> >>>> Karl >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >>> >>> -- >>> regards, >>> Nurahmadie >>> -- >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > regards, > Nurahmadie > -- > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
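[Editorial note: for completeness, the fastcgi_cache route Nurahmadie links to could look roughly as below. This is a minimal sketch with assumed zone names, paths, and TTLs, not a drop-in config for this vhost; it caches the PHP output on the nginx side, while browser-side expiry would still require suitable Expires/Cache-Control headers from PHP.]

```nginx
# http{} context -- cache zone (path and zone name are assumptions)
fastcgi_cache_path /var/cache/nginx/img levels=1:2
                   keys_zone=IMGCACHE:10m max_size=512m inactive=7d;

# server{} context
location ~ \.php$ {
    include             fastcgi_params;
    fastcgi_pass        unix:/var/run/php-fpm.sock;  # assumed socket path
    fastcgi_cache       IMGCACHE;
    fastcgi_cache_key   "$scheme$request_method$host$request_uri";
    fastcgi_cache_valid 200 7d;  # cache successful responses for a week
}
```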
URL: From daniel at mostertman.org Tue Mar 17 19:48:04 2015 From: daniel at mostertman.org (=?windows-1252?Q?Dani=EBl_Mostertman?=) Date: Tue, 17 Mar 2015 20:48:04 +0100 Subject: Nginx configuration recovery In-Reply-To: <2072124.UXiTYmDLHg@tornado> References: <2072124.UXiTYmDLHg@tornado> Message-ID: <550884F4.1030804@mostertman.org> Styopa Semenukha schreef op 17-3-2015 om 20:35: > On Tuesday, March 17, 2015 01:24:52 PM vinay bhargav wrote: >> Hi, >> >> Sorry for spamming but I'm in deep trouble. >> >> I've accidentally overwritten /etc/nginx/site-availabe/default with some >> xyz file. I'm using Ubuntu 14.04. The server is still running. Is there any >> way I could recover the config file. >> >> Note: Recovering the default file is very important for me. >> >> Thanks in advance, >> Vinay. > Hi, > > If you need the _stock_ file from the repository, this should do: > cd /tmp && apt-get download nginx && dpkg -x nginx*.deb > > If you mean the file you had previously _modified_, I don't think it's possible. If you are using ext2/3/4, you can use a tool called extundelete to recover the original file, given that you have not overwritten the file again since: http://ubuntuforums.org/showthread.php?t=2113182 http://extundelete.sourceforge.net/ From vbart at nginx.com Tue Mar 17 22:32:26 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 18 Mar 2015 01:32:26 +0300 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? In-Reply-To: References: Message-ID: <1494685.hZqS0Tr7Eg@vbart-laptop> On Tuesday 17 March 2015 09:49:04 alexandru.eftimie wrote: > Will there be support for http/2 for upstream connections? I can't seem to > find anything about this online ( either SPDY or HTTP/2 for upstream > connections ) > The problems that SPDY (and HTTP/2) is trying to solve usually do not exist in upstream connections, or can be solved more effectively using other methods already presented in nginx (e.g. keepalive cache). 
Could you provide any real use case for HTTP/2 in this scenario? wbr, Valentin V. Bartenev From rainer at ultra-secure.de Tue Mar 17 22:37:14 2015 From: rainer at ultra-secure.de (Rainer Duffner) Date: Tue, 17 Mar 2015 23:37:14 +0100 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? In-Reply-To: <1494685.hZqS0Tr7Eg@vbart-laptop> References: <1494685.hZqS0Tr7Eg@vbart-laptop> Message-ID: > Am 17.03.2015 um 23:32 schrieb Valentin V. Bartenev : > > On Tuesday 17 March 2015 09:49:04 alexandru.eftimie wrote: >> Will there be support for http/2 for upstream connections? I can't seem to >> find anything about this online ( either SPDY or HTTP/2 for upstream >> connections ) >> > > The problems that SPDY (and HTTP/2) is trying to solve usually do not > exist in upstream connections, or can be solved more effectively using > other methods already presented in nginx (e.g. keepalive cache). > > Could you provide any real use case for HTTP/2 in this scenario? > My guess would be if your upstream is actually a ?real? internet-server (that happens to do http/2). Somebody trying to build the next ?CloudFlare/Akamai/WhateverCDN?? ;-) Is a world possible/imaginable that only does http/2? Rainer From lists at ruby-forum.com Tue Mar 17 23:09:50 2015 From: lists at ruby-forum.com (Gabriel Arrais) Date: Wed, 18 Mar 2015 00:09:50 +0100 Subject: Cache TTL set by the client Message-ID: Hi, Is it possible somehow to let the cache ttl (in proxy_pass caching) be defined by the client? Unfortunately it looks like proxy_cache_valid doesn't accept variables as input. Thank you in advance. -- Posted via http://www.ruby-forum.com/. 
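[Editorial note: one avenue for the cache-TTL question above, sketched under the caveat that it lets the *upstream*, not the end client, choose the TTL. `proxy_cache_valid` indeed does not accept variables, but by default nginx honours caching headers sent by the proxied server, so a backend can vary the TTL per response. Zone name below is an assumption.]

```nginx
location / {
    proxy_cache  my_zone;            # assumed keys_zone name
    proxy_pass   http://backend;
    # No proxy_cache_valid needed: if the backend replies with
    #   X-Accel-Expires: 120
    # (or Cache-Control / Expires), nginx caches that response for 120s.
    # These headers can be suppressed with proxy_ignore_headers if unwanted.
    # Trusting a TTL taken directly from the client request would be unsafe
    # and is not supported out of the box.
}
```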
From nginx-forum at nginx.us Wed Mar 18 01:31:13 2015 From: nginx-forum at nginx.us (halozen) Date: Tue, 17 Mar 2015 21:31:13 -0400 Subject: please suggest performance tweak and the right siege options for load test Message-ID: <7677751438cf284e8ad8916343808ed0.NginxMailingListEnglish@forum.nginx.org> 2 nginx 1.4.6 web servers - ocfs cluster, web root inside mounted LUN from SAN storage 2 MariaDB 5.5 servers - galera cluster, different network segment than nginx web servers nginx servers each two sockets quad core xeon, 128 gb ram Load balanced via F5 load balancer (round-robin, http performance) Based on my setup above, what options that I should use with siege to perform load term to at least 5000 concurrent users? There is a time when thousands of student storms university's web application. Below is result for 300 concurrent users. # siege -c 300 -q -t 1m domain.com siege aborted due to excessive socket failure; you can change the failure threshold in $HOME/.siegerc Transactions: 370 hits Availability: 25.38 % Elapsed time: 47.06 secs Data transferred: 4.84 MB Response time: 20.09 secs Transaction rate: 7.86 trans/sec Throughput: 0.10 MB/sec Concurrency: 157.98 Successful transactions: 370 Failed transactions: 1088 Longest transaction: 30.06 Shortest transaction: 0.00 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257373,257373#msg-257373 From lucas at slcoding.com Wed Mar 18 06:06:09 2015 From: lucas at slcoding.com (Lucas Rolff) Date: Wed, 18 Mar 2015 07:06:09 +0100 Subject: please suggest performance tweak and the right siege options for load test In-Reply-To: <7677751438cf284e8ad8916343808ed0.NginxMailingListEnglish@forum.nginx.org> References: <7677751438cf284e8ad8916343808ed0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <550915D1.9040102@slcoding.com> Have you checked the socket level, and checking kernel log on all 3 servers (nginx and load balancer) meanwhile doing the test? 
It could be that for some reason you reach a limit really fast (We had an issue that we reached the nf_conntrack limit at 600 concurrent users because we had like 170 requests per page load) halozen wrote: > 2 nginx 1.4.6 web servers - ocfs cluster, web root inside mounted LUN > from SAN storage > 2 MariaDB 5.5 servers - galera cluster, different network segment than > nginx web servers > > nginx servers each two sockets quad core xeon, 128 gb ram > Load balanced via F5 load balancer (round-robin, http performance) > > Based on my setup above, what options that I should use with siege to > perform load term to at least 5000 concurrent users? > > There is a time when thousands of student storms university's web > application. > > Below is result for 300 concurrent users. > > # siege -c 300 -q -t 1m domain.com > > siege aborted due to excessive socket failure; you > can change the failure threshold in $HOME/.siegerc > > Transactions: 370 hits > Availability: 25.38 % > Elapsed time: 47.06 secs > Data transferred: 4.84 MB > Response time: 20.09 secs > Transaction rate: 7.86 trans/sec > Throughput: 0.10 MB/sec > Concurrency: 157.98 > Successful transactions: 370 > Failed transactions: 1088 > Longest transaction: 30.06 > Shortest transaction: 0.00 > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257373,257373#msg-257373 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Mar 18 08:29:35 2015 From: nginx-forum at nginx.us (alexandru.eftimie) Date: Wed, 18 Mar 2015 04:29:35 -0400 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? In-Reply-To: References: Message-ID: A CDN was the exact reason i asked the question in the first place, at the moment not even cloudflare offers spdy or http/2 for upstream servers ( they use nginx and have spdy enabled ). 
Seems to me like http/2 for upstream would make building a CDN / Accelerator easier(or at least better) for alot of people. I'm not trying to push for this though, it was just a simple question. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256561,257378#msg-257378 From mark.mielke at gmail.com Wed Mar 18 08:32:55 2015 From: mark.mielke at gmail.com (Mark Mielke) Date: Wed, 18 Mar 2015 04:32:55 -0400 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? In-Reply-To: References: <1494685.hZqS0Tr7Eg@vbart-laptop> Message-ID: I think the ability to "push" content, and prioritize requests are examples of capabilities that might require intelligence upstream, and therefore a requirement to proxy HTTP/2 upstream. However, I expect much of this is still theoretical at this point, and until there are actually upstream servers that are providing effective capabilities here, HTTP/1.1 will perform just as good as HTTP/2? I also expect that some of these benefits could only be achieved if the upstream server knows it is talking to a specific client, in which case it would make more sense to use an HAProxy approach, where one client connection is mapped to one upstream connection... On Tue, Mar 17, 2015 at 6:37 PM, Rainer Duffner wrote: > > > Am 17.03.2015 um 23:32 schrieb Valentin V. Bartenev : > > > > On Tuesday 17 March 2015 09:49:04 alexandru.eftimie wrote: > >> Will there be support for http/2 for upstream connections? I can't seem > to > >> find anything about this online ( either SPDY or HTTP/2 for upstream > >> connections ) > >> > > > > The problems that SPDY (and HTTP/2) is trying to solve usually do not > > exist in upstream connections, or can be solved more effectively using > > other methods already presented in nginx (e.g. keepalive cache). > > > > Could you provide any real use case for HTTP/2 in this scenario? > > > > > > My guess would be if your upstream is actually a ?real? internet-server > (that happens to do http/2). 
> > Somebody trying to build the next ?CloudFlare/Akamai/WhateverCDN?? > ;-) > > Is a world possible/imaginable that only does http/2? > > > Rainer > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Mark Mielke -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel at mostertman.org Wed Mar 18 08:33:35 2015 From: daniel at mostertman.org (Daniel Mostertman) Date: Wed, 18 Mar 2015 09:33:35 +0100 Subject: please suggest performance tweak and the right siege options for load test In-Reply-To: <7677751438cf284e8ad8916343808ed0.NginxMailingListEnglish@forum.nginx.org> References: <7677751438cf284e8ad8916343808ed0.NginxMailingListEnglish@forum.nginx.org> Message-ID: I tried siege a lot, but could never get it to really use all cores on the server, I found the tool wrk much more useful for load testing. On Mar 18, 2015 2:31 AM, "halozen" wrote: > 2 nginx 1.4.6 web servers - ocfs cluster, web root inside mounted LUN > from SAN storage > 2 MariaDB 5.5 servers - galera cluster, different network segment than > nginx web servers > > nginx servers each two sockets quad core xeon, 128 gb ram > Load balanced via F5 load balancer (round-robin, http performance) > > Based on my setup above, what options that I should use with siege to > perform load term to at least 5000 concurrent users? > > There is a time when thousands of student storms university's web > application. > > Below is result for 300 concurrent users. 
> > # siege -c 300 -q -t 1m domain.com > > siege aborted due to excessive socket failure; you > can change the failure threshold in $HOME/.siegerc > > Transactions: 370 hits > Availability: 25.38 % > Elapsed time: 47.06 secs > Data transferred: 4.84 MB > Response time: 20.09 secs > Transaction rate: 7.86 trans/sec > Throughput: 0.10 MB/sec > Concurrency: 157.98 > Successful transactions: 370 > Failed transactions: 1088 > Longest transaction: 30.06 > Shortest transaction: 0.00 > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,257373,257373#msg-257373 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Mar 18 14:32:31 2015 From: nginx-forum at nginx.us (vihaan) Date: Wed, 18 Mar 2015 10:32:31 -0400 Subject: logging nginx to syslog on port Message-ID: <2a81aaee38b85ad9aa5b21a71158845e.NginxMailingListEnglish@forum.nginx.org> Hi on my local system i have configured rsyslog and nginx. 
i want all error logs should to go to syslog over the port no : 10514 my configuration for rsyslog.conf is.: it is running fine showing no error:, but when i run any page on local host which is not there, the error are not appearing appaering in out.log ------------------------------------------------------ module(load="imptcp" ) template(name="t" type="string" string="=====NEW=======\n%msg%\n\n\n%$.foo%\n\n=======END=====\n") module(load="mmnormalize") module(load="mmjsonparse") ruleset(name="r") { action(type="omfile" name="jj_out_1" file="out.log" template="t") } and nginx,cong is configured this way: nginx-1.7.10 ------------------------------------------------------- http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log main; error_log syslog:server=localhost:10514,facility=local7,tag=nginx,severity=info; ---------------------------------- please help in debugging. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257388,257388#msg-257388 From vbart at nginx.com Wed Mar 18 14:45:13 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 18 Mar 2015 17:45:13 +0300 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? In-Reply-To: References: Message-ID: <5286844.PeBmyWdX7y@vbart-workstation> On Wednesday 18 March 2015 04:32:55 Mark Mielke wrote: > I think the ability to "push" content,and prioritize requests are examples > of capabilities that might require intelligence upstream, and therefore a > requirement to proxy HTTP/2 upstream. "Server push" doesn't require HTTP/2 for upstream connection. Upstreams don't request content, instead they return it, so there's nothing to prioritize from the upstream point of view. 
> However, I expect much of this is > still theoretical at this point, and until there are actually upstream > servers that are providing effective capabilities here, HTTP/1.1 will > perform just as good as HTTP/2? HTTP/1.1 actually can perform better than HTTP/2. HTTP/1.1 has less overhead by default (since it doesn't introduce another framing layer and another flow control over TCP), and it also uses more connections, which means more TCP window, more socket buffers and less impact from packet loss. There's almost no reason for HTTP/2 to perform better unless you're doing many handshakes over high latency network or sending hundreds of kilobytes of headers. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Wed Mar 18 17:12:21 2015 From: nginx-forum at nginx.us (vihaan) Date: Wed, 18 Mar 2015 13:12:21 -0400 Subject: logging nginx to syslog on port In-Reply-To: <2a81aaee38b85ad9aa5b21a71158845e.NginxMailingListEnglish@forum.nginx.org> References: <2a81aaee38b85ad9aa5b21a71158845e.NginxMailingListEnglish@forum.nginx.org> Message-ID: --------------- Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257388,257404#msg-257404 From nginx-forum at nginx.us Wed Mar 18 21:53:42 2015 From: nginx-forum at nginx.us (dansch8888) Date: Wed, 18 Mar 2015 17:53:42 -0400 Subject: rewrite rules cms phpwcms not working In-Reply-To: <20150224203758.GV13461@daoine.org> References: <20150224203758.GV13461@daoine.org> Message-ID: <4b4a0198ed167bc7fabaea3defbb6e2a.NginxMailingListEnglish@forum.nginx.org> Hi Francis, I got a way more familar now with this nginx config files. It works fine for me. The only thing that I still not get figured out, is this FastCGI "Y" error. It doesn't happen at the old webserver which is an apache/fastcgi environment, but I cannot play around with this, because it is just a webspace. 
At the line 2287 there is this code eval($s.";"); and "FastCGI sent in stderr: "PHP message: PHP Notice: Use of undefined constant Y - assumed 'Y' in /xxx/front.func.inc.php(2287) : eval()'d code on line 1" while reading response header from upstream,..." at line 1 in this file I can not see any Y? I still hope the developer can give me some informations. Daniel Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256693,257415#msg-257415 From nginx-forum at nginx.us Wed Mar 18 21:55:42 2015 From: nginx-forum at nginx.us (ManuelRighi) Date: Wed, 18 Mar 2015 17:55:42 -0400 Subject: nginx and ssl ciphers Message-ID: <85288b710e1044b9ca83f8f0f2bb6477.NginxMailingListEnglish@forum.nginx.org> Hello, I need to configure my nginx web server with only specific ssl ciphers. I need to use only this ciphers: TLS_RSA_WITH_AES_256_CBC_SHA (0x0035) TLS_RSA_WITH_AES_128_CBC_SHA (0x002f) TLS_RSA_WITH_3DES_EDE_CBC_SHA (0x000a) TLS_RSA_WITH_RC4_128_MD5 (0x0004) TLS_RSA_WITH_RC4_128_SHA (0x0005) TLS_RSA_WITH_AES_128_CBC_SHA256 (0x003c) TLS_RSA_WITH_AES_256_CBC_SHA256 (0x003d) Someone can help me on how I do ? Tnx Manuel Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257416,257416#msg-257416 From stl at wiredrive.com Wed Mar 18 22:45:00 2015 From: stl at wiredrive.com (Scott Larson) Date: Wed, 18 Mar 2015 15:45:00 -0700 Subject: nginx and ssl ciphers In-Reply-To: <85288b710e1044b9ca83f8f0f2bb6477.NginxMailingListEnglish@forum.nginx.org> References: <85288b710e1044b9ca83f8f0f2bb6477.NginxMailingListEnglish@forum.nginx.org> Message-ID: Running SSL correctly goes deeper than just declaring ciphers, and at the least I'd recommend using the more modern versions with ECDHE unless there is a technical reason you cannot. 
That said: ssl_prefer_server_ciphers on; ssl_ciphers AES256-SHA256:AES256-SHA:AES128-SHA256:AES128-SHA:RC4-SHA:RC4-MD5:DES-CBC3-SHA; *[image: userimage]Scott Larson[image: los angeles] Lead Systems Administrator[image: wdlogo] [image: linkedin] [image: facebook] [image: twitter] [image: instagram] T 310 823 8238 x1106 <310%20823%208238%20x1106> | M 310 904 8818 <310%20904%208818>* On Wed, Mar 18, 2015 at 2:55 PM, ManuelRighi wrote: > Hello, > I need to configure my nginx web server with only specific ssl ciphers. > I need to use only this ciphers: > > TLS_RSA_WITH_AES_256_CBC_SHA (0x0035) > TLS_RSA_WITH_AES_128_CBC_SHA (0x002f) > TLS_RSA_WITH_3DES_EDE_CBC_SHA (0x000a) > TLS_RSA_WITH_RC4_128_MD5 (0x0004) > TLS_RSA_WITH_RC4_128_SHA (0x0005) > TLS_RSA_WITH_AES_128_CBC_SHA256 (0x003c) > TLS_RSA_WITH_AES_256_CBC_SHA256 (0x003d) > > > Someone can help me on how I do ? > > Tnx > Manuel > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,257416,257416#msg-257416 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.mielke at gmail.com Wed Mar 18 23:11:18 2015 From: mark.mielke at gmail.com (Mark Mielke) Date: Wed, 18 Mar 2015 19:11:18 -0400 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? In-Reply-To: <5286844.PeBmyWdX7y@vbart-workstation> References: <5286844.PeBmyWdX7y@vbart-workstation> Message-ID: Hi Valentin: Are you talking about the same "push" as I am? HTTP/2, or at least SPDY, had the ability to *push* content like CSS in advance of the request, pushing content into the browsers cache *before* it needs it. I'm not talking about long polling or other technology. I've only read about this technology, though. I've never seen it implemented in practice. And for prioritization, it's about choosing to send more important content before less important content. 
I don't think you are correct in terms of future potential here. But, it's very likely that you are correct in terms of *current* potential. That is, I think this technology is too new for people to understand it and really think about how to leverage it. It sounds like you don't even know about it... On Wed, Mar 18, 2015 at 10:45 AM, Valentin V. Bartenev wrote: > On Wednesday 18 March 2015 04:32:55 Mark Mielke wrote: > > I think the ability to "push" content,and prioritize requests are > examples > > of capabilities that might require intelligence upstream, and therefore a > > requirement to proxy HTTP/2 upstream. > > "Server push" doesn't require HTTP/2 for upstream connection. > > Upstreams don't request content, instead they return it, so there's nothing > to prioritize from the upstream point of view. > > > > However, I expect much of this is > > still theoretical at this point, and until there are actually upstream > > servers that are providing effective capabilities here, HTTP/1.1 will > > perform just as good as HTTP/2? > > HTTP/1.1 actually can perform better than HTTP/2. > > HTTP/1.1 has less overhead by default (since it doesn't introduce another > framing layer and another flow control over TCP), and it also uses more > connections, which means more TCP window, more socket buffers and less > impact from packet loss. > > There's almost no reason for HTTP/2 to perform better unless you're doing > many handshakes over high latency network or sending hundreds of kilobytes > of headers. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Mark Mielke -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lists at ruby-forum.com Thu Mar 19 01:44:13 2015 From: lists at ruby-forum.com (Mingcai SHEN) Date: Thu, 19 Mar 2015 02:44:13 +0100 Subject: nginx usptream 302 redirect In-Reply-To: <20131004151340.GQ62063@mdounin.ru> References: <20131004151340.GQ62063@mdounin.ru> Message-ID: <23e333c9c4961ee55b7cabf6fe2a7bd6@ruby-forum.com> Hello, Very sorry to reply on this thread again. I have the same requirement, but I still can not get nginx to follow the 302 redirect as I want. My configuration: upstream backend { server 10.255.199.60:1220; } server { listen 1220; server_name localhost; location / { proxy_pass http://backend; error_page 301 302 307 @redir; #set $foo $upstream_http_location; } location @redir { set $foo $upstream_http_location; proxy_pass $foo; } } And when I use curl to test , I got: # curl -v http://101.36.99.131:1220/vds48/export/data/videos_vod/v16/videos/0/00092/791/2/2.m3u8 * About to connect() to 101.36.99.131 port 1220 (#0) * Trying 101.36.99.131... connected * Connected to 101.36.99.131 (101.36.99.131) port 1220 (#0) > GET /vds48/export/data/videos_vod/v16/videos/0/00092/791/2/2.m3u8 HTTP/1.1 > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.6.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2 > Host: 101.36.99.131:1220 > Accept: */* > < HTTP/1.1 302 Found < Server: nginx/1.4.4 < Date: Thu, 19 Mar 2015 01:34:15 GMT < Content-Length: 0 < Connection: keep-alive < Cache-Control: no-cache < Location: http://10.255.199.43:1220/vds48/export/data/videos_vod/v16/videos/0/00092/791/2/2.m3u8 < * Connection #0 to host 101.36.99.131 left intact * Closing connection #0 What's the problem? Thanks. - MC Maxim Dounin wrote in post #1123508: > Hello! > > On Fri, Oct 04, 2013 at 05:33:57PM +0300, Anatoli Marinov wrote: > > [...] > >> From dev mail list Maxim advised me to backup $upstream_http_location in >> other variable and I did it but the result was the same - 500 internal >> server error. The config after the fix is: > > [...] 
> >> proxy_pass $foo; >> proxy_temp_path tmp ; >> } > > You need "set ... $upstream_http_location" to be executed after a > request to an upstream was done, so you need it in location @redir, > not in location /. > > -- > Maxim Dounin > http://nginx.org/en/donation.html -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Thu Mar 19 03:15:55 2015 From: nginx-forum at nginx.us (mac-989) Date: Wed, 18 Mar 2015 23:15:55 -0400 Subject: testing nginx.conf fails Message-ID: This is a new install of 1.6.2 on a linux dedi (CentOS). I have the following in my nginx.conf file: fastcgi_cache_path /dev/nginx-cache levels=1:2 keys_zone=WORDPRESS:10m max_size=1024m inactive=60m; When testing the configuration (nginx -t), I get the following error message: nginx: [emerg] "fastcgi_cache_path" directive is not allowed here in /etc/nginx/nginx.conf:82 I have reviewed many suggested configurations and all place the fastcgi_cache_path statement, in some similar form, in nginx.conf. Why is this consistently failing? I thank everyone in advance for their assistance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257421,257421#msg-257421 From nginx-forum at nginx.us Thu Mar 19 03:37:25 2015 From: nginx-forum at nginx.us (mac-989) Date: Wed, 18 Mar 2015 23:37:25 -0400 Subject: testing nginx.conf fails In-Reply-To: References: Message-ID: <72afff0bedc1e07f3f0d71fc3863759e.NginxMailingListEnglish@forum.nginx.org> Please disregard this post, the error has been identified. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257421,257422#msg-257422 From nginx-forum at nginx.us Thu Mar 19 04:31:17 2015 From: nginx-forum at nginx.us (Mayhem30) Date: Thu, 19 Mar 2015 00:31:17 -0400 Subject: Allow directive with variables In-Reply-To: <20130101213109.GZ40452@mdounin.ru> References: <20130101213109.GZ40452@mdounin.ru> Message-ID: An easy way to solve this issue is to create an "allowmyip.conf" file and include it anywhere you wish.
allowmyip.conf file:

allow 11.22.33.44;
deny all;

Then in your server block:

location ^~ /apc/ {
    # Allow home
    include /etc/nginx/allowmyip.conf;
}

This way it will be really easy to manage if your IP address changes. Just update your IP address in the allowmyip.conf file and restart the server. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234600,257423#msg-257423 From dilyan.palauzov at aegee.org Thu Mar 19 12:29:04 2015 From: dilyan.palauzov at aegee.org (=?UTF-8?B?0JTQuNC70Y/QvSDQn9Cw0LvQsNGD0LfQvtCy?=) Date: Thu, 19 Mar 2015 13:29:04 +0100 Subject: SSL Ciphers Message-ID: <550AC110.1040904@aegee.org> Hello, I have nginx linked against openssl 1.0.2 and configured with ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH CAMELLIA SHA256 SHA384 !aNULL !eNULL !LOW -3DES !MD5 !EXP !PSK -SRP !DSS !RC4 !EDH"; Nginx supports these ciphers: ECDHE-RSA-AES256-GCM-SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 ECDHE-RSA-AES256-SHA384 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 ECDHE-RSA-AES256-SHA TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA ECDHE-RSA-AES128-GCM-SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 ECDHE-RSA-AES128-SHA256 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 ECDHE-RSA-AES128-SHA TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA but openssl ciphers -V 'the above list' prints in addition AES128-SHA256 AES256-SHA256 CAMELLIA128-SHA CAMELLIA256-SHA DH-DSS-AES128-SHA256 DH-DSS-AES256-SHA256 DH-DSS-CAMELLIA128-SHA DH-DSS-CAMELLIA256-SHA DH-RSA-AES128-SHA256 DH-RSA-AES256-SHA256 DH-RSA-CAMELLIA128-SHA DH-RSA-CAMELLIA256-SHA ECDH-ECDSA-AES128-SHA256 ECDH-ECDSA-AES256-SHA384 ECDHE-ECDSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-SHA ECDHE-ECDSA-AES128-SHA256 ECDHE-ECDSA-AES256-GCM-SHA384 ECDHE-ECDSA-AES256-SHA ECDHE-ECDSA-AES256-SHA384 ECDH-RSA-AES128-SHA256 ECDH-RSA-AES256-SHA384 Can you tell me, why doesn't nginx support all ciphers printed by openssl ciphers using the same
cipher-string? I use nginx 1.6.2. Thanks in advance for your answer. Dilyan From citrin at citrin.ru Thu Mar 19 12:51:08 2015 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Thu, 19 Mar 2015 15:51:08 +0300 Subject: SSL Ciphers In-Reply-To: <550AC110.1040904@aegee.org> References: <550AC110.1040904@aegee.org> Message-ID: <550AC63C.5040408@citrin.ru> On 03/19/15 15:29, Dilyan Palauzov wrote: > Can you tell me, why doesn't nginx support all ciphers printed by openssl ciphers > using the same cipher-string? Some cipher suites depend on the certificate type. E.g. for ECDHE-ECDSA-* you need an ECC-based certificate. From vbart at nginx.com Thu Mar 19 14:18:02 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 19 Mar 2015 17:18:02 +0300 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? In-Reply-To: References: <5286844.PeBmyWdX7y@vbart-workstation> Message-ID: <5448509.lUBmX8Sn9Z@vbart-workstation> On Wednesday 18 March 2015 19:11:18 Mark Mielke wrote: > Hi Valentin: > > Are you talking about the same "push" as I am? HTTP/2, or at least SPDY, > had the ability to *push* content like CSS in advance of the request, > pushing content into the browser's cache *before* it needs it. Yes, about that one. > I'm not talking about long polling or other technology. I've only read about this > technology, though. I've never seen it implemented in practice. Apache's mod_spdy module has had a "server push" implementation for a long time already. And it can be used with mod_proxy or mod_fastcgi without a problem. See: https://code.google.com/p/mod-spdy/wiki/OptimizingForSpdy#Using_SPDY_server_push > And for prioritization, it's about choosing to send more important content before > less important content. The prioritization mechanism in SPDY/HTTP2 mostly solves the problem introduced by multiplexing, i.e. by the new protocol itself. When you have only one pipe then you should carefully choose what to send first.
It already works pretty well in nginx with SPDY on the client side and HTTP or FastCGI on backends. There's almost no room for improvement. > I don't think you are correct in terms of future potential here. But, it's very > likely that you are correct in terms of *current* potential. That is, I think > this technology is too new for people to understand it and really think about > how to leverage it. It sounds like you don't even know about it... Well, do you know that the FastCGI protocol has had multiplexing ability since its introduction in 1996? So, nothing new, and I cannot remember any widely used implementation. The reason is that multiplexing is a very complicated thing with questionable benefits, especially when we are talking about fast and persistent connections between a web server and an application. At least, this should be well studied first, since a proper implementation on the backend side will take lots of programming hours. As for the future, personally I believe that SCTP has much better potential. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Thu Mar 19 14:35:27 2015 From: nginx-forum at nginx.us (doachs) Date: Thu, 19 Mar 2015 10:35:27 -0400 Subject: SSL backend support for imap proxy Message-ID: From what I can see in the forum archives, the mail/imap proxy does not support encrypted connections to a backend imap server. This still appears to be the case with the latest mainline version. Are there any plans to add that? It would be really great if our nginx imap proxy server could talk to our imap servers directly over a secure connection and not require stunnel or some other workaround. Additionally, it would really help us out with a current issue we are facing. We would really like to be able to proxy imap connections for some of our users to imap.gmail.com and to our local imap server for other users.
However, imap.gmail.com only supports a secure imap connection, and testing with stunnel still fails (we get an "upstream sent invalid response" error). Has anyone successfully gotten nginx to proxy imap connections to imap.gmail.com? If so, I would really like to hear how you did it. The flexibility of nginx as an imap proxy has been wonderful, and having it be able to talk to imap.gmail.com and do ssl would make it that much more amazing. Thanks, Dan Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257439,257439#msg-257439 From nginx-forum at nginx.us Thu Mar 19 15:52:41 2015 From: nginx-forum at nginx.us (youngde811) Date: Thu, 19 Mar 2015 11:52:41 -0400 Subject: Rewrite undecoded URL Message-ID: <74b8e1cea216e821a88c46b00ef9c780.NginxMailingListEnglish@forum.nginx.org> Hello. We are trying to use the nginx rewrite rule without the application of URL decoding. The relevant portion of our test configuration is: location ~ ^/p/stratus.* { include conf.d/hosts.conf; proxy_pass http://localhost:8080; proxy_set_header Host $host; # strip leading /p (note that 'break' stops processing ngx_http_rewrite_module directives, including 'set') rewrite ^/p(.*)$ $scheme://$server_addr/?redirect_to=$scheme://$host/$1 break; } location / { return 200 'Twas Brillig...'; } The rewrite behaves properly in stripping off the /p but writes the decoded URL rather than leaving it untouched. We have encoded slashes (/) -- %2F -- within the URI. Not as parameters, but in the body of the URL. Nginx is decoding these and we need to pass the URL upstream intact but with the /p stripped. Any help would be appreciated, as we're out of ideas.
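[Editorial sketch, not from this thread: one commonly suggested direction for this class of problem is to avoid "rewrite" entirely. Nginx decodes $uri before rewrite runs, but the $request_uri variable still holds the original, percent-encoded request line, so the /p prefix can be stripped from $request_uri with a map and passed verbatim to the backend via proxy_pass with a variable. The map variable name and the backend address below are illustrative assumptions.]

```nginx
# Assumption: strip the leading /p without URL decoding.
# $request_uri keeps the raw, still-encoded URI as the client sent it,
# so %2F sequences survive intact.
map $request_uri $raw_uri_no_p {
    ~^/p(?<rest>/.*)$  $rest;          # illustrative variable names
    default            $request_uri;
}

server {
    listen 80;

    location ~ ^/p/stratus {
        proxy_set_header Host $host;
        # When proxy_pass contains variables, nginx sends the given URI
        # verbatim instead of re-mapping (and decoding) the request URI.
        proxy_pass http://127.0.0.1:8080$raw_uri_no_p;
    }
}
```

Note that proxy_pass with a variable bypasses the location's normal URI mapping entirely, so the full upstream URI has to be constructed by hand, and an IP address (or an upstream block) avoids the runtime resolver that a hostname in a variable-based proxy_pass would require.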
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257440,257440#msg-257440 From nginx-forum at nginx.us Thu Mar 19 18:44:58 2015 From: nginx-forum at nginx.us (blason) Date: Thu, 19 Mar 2015 14:44:58 -0400 Subject: Reverse Proxy for Microsoft RDP Message-ID: Hi Guys, I do have couple of microsoft servers which are being accessed over the internet using RDP. Would like to know if nginx can be used as a reverse proxy for RDP servers without exposing my original servers? Can someone guide me plss.... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257444,257444#msg-257444 From feldan1 at gmail.com Thu Mar 19 18:50:23 2015 From: feldan1 at gmail.com (Lorne Wanamaker) Date: Thu, 19 Mar 2015 14:50:23 -0400 Subject: Reverse Proxy for Microsoft RDP In-Reply-To: References: Message-ID: Can't say I have ever heard of Nginx being used like that but would be interesting if it could. If you are worried about RDP exposure you can always lock down RDP access to a certain IP. All of our servers (RDP) are locked down to just 2 staff VPN IPs. Lorne On Thu, Mar 19, 2015 at 2:44 PM, blason wrote: > Hi Guys, > > I do have couple of microsoft servers which are being accessed over the > internet using RDP. Would like to know if nginx can be used as a reverse > proxy for RDP servers without exposing my original servers? > > Can someone guide me plss.... > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,257444,257444#msg-257444 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Thu Mar 19 20:02:10 2015 From: nginx-forum at nginx.us (173279834462) Date: Thu, 19 Mar 2015 16:02:10 -0400 Subject: SSL3_CTX_CTRL:called a function you should not call In-Reply-To: <8a8d26bd95d82c0bcaf887ef0fde0020.NginxMailingListEnglish@forum.nginx.org> References: <8a8d26bd95d82c0bcaf887ef0fde0020.NginxMailingListEnglish@forum.nginx.org> Message-ID: Update: The original error "SSL3_CTX_CTRL:called a function you should not call" is no longer in the logs. The last occurrence dates back to early February: 2015/02/03 20:23:30 [alert] 69020#0: *16 ignoring stale global SSL error (SSL: error:14085042:SSL routines:SSL3_CTX_CTRL:called a function you should not call) while SSL handshaking, client: [my-IP], server: 0.0.0.0:443 From my seat that error is gone. However, I do see the following in the error log when running ssllabs' server test: 2015/03/19 20:45:24 [crit] 24179#0: *226 SSL_do_handshake() failed (SSL: error:14094085:SSL routines:SSL3_READ_BYTES:ccs received early) while SSL handshaking, client: 64.41.200.101, server: 0.0.0.0:443 2015/03/19 20:45:25 [crit] 24179#0: *227 SSL_do_handshake() failed (SSL: error:14094085:SSL routines:SSL3_READ_BYTES:ccs received early) while SSL handshaking, client: 64.41.200.101, server: 0.0.0.0:443 The timing occurs right before POODLE tests: > BEAST attack Not mitigated server-side (more info) TLS 1.0: 0xc014 I am on nginx 1.7.10 with libressl 2.1.6. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256381,257447#msg-257447 From hobson42 at gmail.com Thu Mar 19 20:05:22 2015 From: hobson42 at gmail.com (Ian) Date: Thu, 19 Mar 2015 20:05:22 +0000 Subject: Help needed with configuration Message-ID: <550B2C02.6060104@gmail.com> Hi, I have the following configuration. Why do a missing .html file, and the requests for favicon.ico (also missing), all result in a "bad gateway" error? The error log shows that the request is passed to uwsgi, which is aborting with a 502.
I would have expected the last two configurations to be used, resulting in a 404.

server {
    listen 80;
    listen [::]:80 ipv6only=on;
    server_name www.mydomain.com mydomain.com, wwp.mydomain.com localhost;
    # localhost so it can be read locally for testing

    access_log /var/log/nginx/gmt.access.log;
    error_log /var/log/nginx/gmt.error.log;

    index index.htm;

    # three way split -> .htm .php and static

    # serve old *.htm, send missing to new
    location ~ \.htm$ {
        root /var/www/mydomain.com/oldhtdocs;
        try_files $uri $uri/ @newflask;
    }

    # for new site, pass .htm to flask
    location @newflask {
        # receives *.htm where old file is missing
        root /var/www/mydomain.com/htdocs;
        include uwsgi_params;
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.htm)(/.+)$;
        uwsgi_pass uwsgicluster;
        uwsgi_param UWSGI_CHDIR /var/www/mydomain.com/flask;
        uwsgi_param UWSGI_PYHOME /var/www/mydomain.com/flask/venv;
        uwsgi_param UWSGI_MODULE gmt;
        uwsgi_param UWSGI_CALLABLE app;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    # Pass php files to php via fcgi
    location ~ \.php$ {
        root /var/www/mydomain.com/oldhtdocs;
        try_files $uri @newphp;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # NOTE: Must have "cgi.fix_pathinfo = 0;" in php.ini
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    # pass .php to new root or 404
    location @newphp {
        root /var/www/mydomain.com/htdocs;
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    # serve old static files with nginx
    location / {
        root /var/www/mydomain.com/oldhtdocs;
        index index.htm;
        try_files $uri $uri/ @newstatic;
    }

    # serve static new files with nginx
    location @newstatic {
        root /var/www/mydomain.com/htdocs;
        # serve if present, else 404
        try_files $uri $uri/ =404;
    }
}

By way of
explanation, we are migrating from oldhtdocs to a new site, but having to replace pages in groups. This way I hope we can leave all the junk behind. Ubuntu 14.04, with standard versions of all software. Nginx is 1.4.6. Thanks for your ideas, Ian From nginx-forum at nginx.us Thu Mar 19 20:24:02 2015 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 19 Mar 2015 16:24:02 -0400 Subject: [ANN] Windows nginx 1.7.11.3 Gryphon Message-ID: 20:43 19-3-2015 nginx 1.7.11.3 Gryphon Based on nginx 1.7.11 (19-3-2015, last changeset 6024:199c0dd313ea) with; + Openssl-1.0.1m (CVE-2015-0204, CVE-2015-0286, CVE-2015-0287, CVE-2015-0289, CVE-2015-0292, CVE-2015-0293, CVE-2015-0209, CVE-2015-0288) * In some cases the nginx processes won't stop normally when its service is stopped (workers are still busy); it is advised to add this line: TASKKILL /F /IM "nginx*" at the end of your 'ngx_stop.cmd' file to make sure no workers are left behind before a new master and workers are started + Source changes back ported + Source changes add-on's: no changes + Changes for nginx_basic: Source changes back ported * Scheduled release: no (openssl fixes) * Additional specifications: see 'Feature list' * This is a non-scheduled release of the last in the Gryphon series Builds can be found here: http://nginx-win.ecsds.eu/ Follow releases https://twitter.com/nginx4Windows Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257453,257453#msg-257453 From nginx-forum at nginx.us Thu Mar 19 21:03:17 2015 From: nginx-forum at nginx.us (dansch8888) Date: Thu, 19 Mar 2015 17:03:17 -0400 Subject: rewrite rules cms phpwcms not working In-Reply-To: <4b4a0198ed167bc7fabaea3defbb6e2a.NginxMailingListEnglish@forum.nginx.org> References: <20150224203758.GV13461@daoine.org> <4b4a0198ed167bc7fabaea3defbb6e2a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <81da6f7e6fe103efba970738a278a167.NginxMailingListEnglish@forum.nginx.org> I found the problem with the Y.
It was this "[PHP] echo date(Y);[/PHP]" code in my CMS content, where the Y was not quoted as 'Y'. I changed it to "[PHP] echo date('Y');[/PHP]" and all went fine. The [PHP] is just a replacement tag of the CMS, where you can render PHP code in the system. All works fine now. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256693,257455#msg-257455 From nginx-forum at nginx.us Thu Mar 19 21:04:38 2015 From: nginx-forum at nginx.us (gonguinguen) Date: Thu, 19 Mar 2015 17:04:38 -0400 Subject: What's the proper way to apply different connection limits to multiple virtual hosts? Message-ID: <4f5fb67b04e3f76fcece8f438e888e97.NginxMailingListEnglish@forum.nginx.org> Let's say that we have two virtual hosts serving two domains: blog.com and store.com. Let's suppose that we need to apply the following limits: - blog.com: no more than 5 connections per ip, and no more than 50 total connections to the virtual host - store.com: no more than 10 connections per ip, and no more than 100 connections to the virtual host. Considering that, which would be the correct approach? ## ---------------- Approach 1 ---------------- ## http { limit_conn_zone $binary_remote_addr zone=conn_per_ip:5m; limit_conn_zone $server_name zone=conn_per_server:5m; server { server_name blog.com; limit_conn conn_per_ip 5; limit_conn conn_per_server 50; ... } server { server_name store.com; limit_conn conn_per_ip 10; limit_conn conn_per_server 100; ... } ... } ## ---------------- Approach 2 ---------------- ## http { limit_conn_zone $binary_remote_addr zone=blog_conn_per_ip:5m; limit_conn_zone $server_name zone=blog_conn_per_server:5m; server { server_name blog.com; limit_conn blog_conn_per_ip 5; limit_conn blog_conn_per_server 50; ... } limit_conn_zone $binary_remote_addr zone=store_conn_per_ip:5m; limit_conn_zone $server_name zone=store_conn_per_server:5m; server { server_name store.com; limit_conn store_conn_per_ip 10; limit_conn store_conn_per_server 100; ... } ... 
} Notice that in the first approach only two shared memory zones are declared, while in the second approach, four of them are declared. I appreciate any clarification on this! Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257456,257456#msg-257456 From nginx-forum at nginx.us Thu Mar 19 22:08:52 2015 From: nginx-forum at nginx.us (tunist) Date: Thu, 19 Mar 2015 18:08:52 -0400 Subject: nginx fails to start - nginx.service not found Message-ID: i just built the latest mainline nginx from source on fedora 21 (64bit) and when i run: service nginx start i see: Redirecting to /bin/systemctl start nginx.service Failed to start nginx.service: Unit nginx.service failed to load: No such file or directory. i've searched around online and so far found no-one else reporting the same issue. i think i have fixed this before on another installation in the past.. but presently i'm not sure what the resolution is - can anyone help? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257457,257457#msg-257457 From nginx-forum at nginx.us Fri Mar 20 02:29:57 2015 From: nginx-forum at nginx.us (jinwon42) Date: Thu, 19 Mar 2015 22:29:57 -0400 Subject: https to http error "too many redirects" Message-ID: <3c2b2904a7fd3c6808b8287593931715.NginxMailingListEnglish@forum.nginx.org> Hi. I have a setting problem. I want all requests "http" --> "https". But some locations should be "https" --> "http". ALL Locations : https /companyBrand.do : http only I saw the error "too many redirects". What's the problem? 
---------------------------------------------------------------------------------- map $uri $example_org_preferred_proto { default "https"; ~^/companyBrand.do "http"; } server { listen 80; server_name www.aaa.com; if ($example_org_preferred_proto = "https") { return 301 https://$server_name$request_uri; } location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_redirect off; proxy_buffering off; proxy_connect_timeout 60; proxy_read_timeout 60; proxy_pass http://wwwaaacom; } } # HTTPS server # server { listen 443; server_name www.aaa.com; charset utf-8; ssl on; ssl_certificate D:/nginx-1.7.10/ssl/cert.pem; ssl_certificate_key D:/nginx-1.7.10/ssl/key.pem; if ($example_org_preferred_proto = "http") { return 301 http://$server_name$request_uri; } location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_redirect off; proxy_buffering off; proxy_connect_timeout 60; proxy_read_timeout 60; proxy_pass http://wwwaaacom; proxy_ssl_session_reuse off; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257458,257458#msg-257458 From dp at nginx.com Fri Mar 20 07:30:32 2015 From: dp at nginx.com (Dmitry Pryadko) Date: Fri, 20 Mar 2015 10:30:32 +0300 Subject: https to http error "too many redirects" In-Reply-To: <3c2b2904a7fd3c6808b8287593931715.NginxMailingListEnglish@forum.nginx.org> References: <3c2b2904a7fd3c6808b8287593931715.NginxMailingListEnglish@forum.nginx.org> Message-ID: <550BCC98.7020607@nginx.com> You can merge both servers into one and try something like 
this: map $request_uri $example_org_preferred_proto { default "https"; /companyBrand.do "http"; } server { listen 80; listen 443 ssl; .... if ($scheme != $example_org_preferred_proto) { return 301 $example_org_preferred_proto://$server_name$request_uri; } .... } 20.03.15 5:29, jinwon42 ?????: > Hi. > > i have a setting problem. > > I want all request "http" --> "https" > But, some location is "https" --> "http". > > ALL Location : https > /companyBrand.do : http only > > i saw error that "too many redirects" > > What's problem? > > ---------------------------------------------------------------------------------- > > map $uri $example_org_preferred_proto { > default "https"; > ~^/companyBrand.do "http"; > } > > server { > listen 80; > server_name www.aaa.com; > > if ($example_org_preferred_proto = "https") { > return 301 https://$server_name$request_uri; > } > > location / { > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-Host $host; > proxy_set_header X-Forwarded-Server $host; > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto $scheme; > proxy_set_header Host $http_host; > proxy_redirect off; > proxy_buffering off; > proxy_connect_timeout 60; > proxy_read_timeout 60; > proxy_pass http://wwwaaacom; > } > > } > > > # HTTPS server > # > server { > listen 443; > server_name www.aaa.com; > > charset utf-8; > > ssl on; > ssl_certificate D:/nginx-1.7.10/ssl/cert.pem; > ssl_certificate_key D:/nginx-1.7.10/ssl/key.pem; > > if ($example_org_preferred_proto = "http") { > return 301 http://$server_name$request_uri; > } > > location / { > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-Host $host; > proxy_set_header X-Forwarded-Server $host; > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto $scheme; > proxy_set_header Host $http_host; > proxy_redirect off; > 
proxy_buffering off; > proxy_connect_timeout 60; > proxy_read_timeout 60; > proxy_pass http://wwwaaacom; > proxy_ssl_session_reuse off; > } > > } > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257458,257458#msg-257458 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- br, Dmitry Pryadko From nginx-forum at nginx.us Fri Mar 20 08:08:19 2015 From: nginx-forum at nginx.us (jinwon42) Date: Fri, 20 Mar 2015 04:08:19 -0400 Subject: https to http error "too many redirects" In-Reply-To: <550BCC98.7020607@nginx.com> References: <550BCC98.7020607@nginx.com> Message-ID: <3c7ad67b10403a2ac37249ca8f2c4186.NginxMailingListEnglish@forum.nginx.org> Thanks for the reply! But I still see the error: "400 Bad Request - The plain HTTP request was sent to HTTPS port". Is this setting wrong? map $request_uri $example_org_preferred_proto { default "https"; ~^/mobile/PayOnlyResult.do "http"; ~^/kor/companyBrand.do "http"; } server { listen 443 ssl; listen 80; server_name www.aaa.com; charset utf-8; ssl on; ssl_certificate D:/nginx-1.5.2/ssl/cert.pem; ssl_certificate_key D:/nginx-1.5.2/ssl/key.pem; ssl_verify_client off; if ($scheme != $example_org_preferred_proto) { return 301 $example_org_preferred_proto://$server_name:88$request_uri; } location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_buffering off; proxy_connect_timeout 60; proxy_read_timeout 60; proxy_pass http://wwwaaaacom; proxy_ssl_session_reuse off; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257458,257467#msg-257467 From dp at nginx.com Fri Mar 20 08:33:44 2015 From: dp at nginx.com (Dmitry Pryadko) Date: Fri, 20 Mar 2015 11:33:44 +0300 Subject: https to http 
error "too many redirects" In-Reply-To: <3c7ad67b10403a2ac37249ca8f2c4186.NginxMailingListEnglish@forum.nginx.org> References: <550BCC98.7020607@nginx.com> <3c7ad67b10403a2ac37249ca8f2c4186.NginxMailingListEnglish@forum.nginx.org> Message-ID: <550BDB68.2000400@nginx.com> Why 88? 20.03.15 11:08, jinwon42 ?????: > return 301 $example_org_preferred_proto://$server_name:88$request_uri; -- br, Dmitry Pryadko From nginx-forum at nginx.us Fri Mar 20 08:48:15 2015 From: nginx-forum at nginx.us (jinwon42) Date: Fri, 20 Mar 2015 04:48:15 -0400 Subject: https to http error "too many redirects" In-Reply-To: <550BDB68.2000400@nginx.com> References: <550BDB68.2000400@nginx.com> Message-ID: <6f72a7ac1ea2d3ba9f5878556f13f8e2.NginxMailingListEnglish@forum.nginx.org> Sorry. 80 port is right. if ($scheme != $example_org_preferred_proto) { return 301 $example_org_preferred_proto://$server_name$request_uri; } Still saw error. "ERR_TOO_MANY_REDIRECTS" ------------------------------------------------------- map $request_uri $example_org_preferred_proto { default "https"; ~^/mobile/PayOnlyResult.do "http"; ~^/kor/tel.do "http"; } server { listen 443 ssl; listen 80; server_name www.aaaa.com; charset utf-8; #ssl on; ssl_certificate D:/nginx-1.7.10/ssl/cert.pem; ssl_certificate_key D:/nginx-1.7.10/ssl/key.pem; ssl_verify_client off; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1; ssl_ciphers AES256-SHA:HIGH:!EXPORT:!eNULL:!ADH:RC4+RSA; ssl_prefer_server_ciphers on; # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months) add_header Strict-Transport-Security max-age=15768000; error_page 400 /error/error.html; error_page 403 /error/error.html; error_page 404 /error/error.html; if ($scheme != $example_org_preferred_proto) { return 301 $example_org_preferred_proto://$server_name$request_uri; } location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header 
X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_buffering off; proxy_connect_timeout 60; proxy_read_timeout 60; proxy_pass http://wwwaaaacom; proxy_ssl_session_reuse off; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257458,257469#msg-257469 From daniel at mostertman.org Fri Mar 20 08:56:02 2015 From: daniel at mostertman.org (Daniel Mostertman) Date: Fri, 20 Mar 2015 09:56:02 +0100 Subject: https to http error "too many redirects" In-Reply-To: <6f72a7ac1ea2d3ba9f5878556f13f8e2.NginxMailingListEnglish@forum.nginx.org> References: <550BDB68.2000400@nginx.com> <6f72a7ac1ea2d3ba9f5878556f13f8e2.NginxMailingListEnglish@forum.nginx.org> Message-ID: Correct, you give the HSTS header on the SSL/TLS port. So if *any* connection in the past has gone to the SSL/TLS port, the browser is forced to use https:// for any future connection. You should set it to 1 for a while and then disable it. On Mar 20, 2015 9:48 AM, "jinwon42" wrote: > Sorry. > > 80 port is right. > > > if ($scheme != $example_org_preferred_proto) { > return 301 > $example_org_preferred_proto://$server_name$request_uri; > } > > > Still saw error. 
"ERR_TOO_MANY_REDIRECTS" > > > > > ------------------------------------------------------- > > map $request_uri $example_org_preferred_proto { > default "https"; > ~^/mobile/PayOnlyResult.do "http"; > ~^/kor/tel.do "http"; > } > > server { > listen 443 ssl; > listen 80; > server_name www.aaaa.com; > > charset utf-8; > > #ssl on; > ssl_certificate D:/nginx-1.7.10/ssl/cert.pem; > ssl_certificate_key D:/nginx-1.7.10/ssl/key.pem; > ssl_verify_client off; > > ssl_session_timeout 5m; > > ssl_protocols SSLv3 TLSv1; > ssl_ciphers AES256-SHA:HIGH:!EXPORT:!eNULL:!ADH:RC4+RSA; > ssl_prefer_server_ciphers on; > > # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 > months) > add_header Strict-Transport-Security max-age=15768000; > > error_page 400 /error/error.html; > error_page 403 /error/error.html; > error_page 404 /error/error.html; > > if ($scheme != $example_org_preferred_proto) { > return 301 > $example_org_preferred_proto://$server_name$request_uri; > } > > location / { > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-Host $host; > proxy_set_header X-Forwarded-Server $host; > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto $scheme; > proxy_set_header Host $http_host; > proxy_buffering off; > proxy_connect_timeout 60; > proxy_read_timeout 60; > proxy_pass http://wwwaaaacom; > proxy_ssl_session_reuse off; > } > } > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,257458,257469#msg-257469 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thresh at nginx.com Fri Mar 20 09:16:51 2015 From: thresh at nginx.com (Konstantin Pavlov) Date: Fri, 20 Mar 2015 12:16:51 +0300 Subject: nginx fails to start - nginx.service not found In-Reply-To: References: Message-ID: <550BE583.4010200@nginx.com> Hello, On 20/03/2015 01:08, tunist wrote: > i just built the latest mainline nginx from source on fedora 21 (64bit) and > when i run: service nginx start > > i see: > > Redirecting to /bin/systemctl start nginx.service > Failed to start nginx.service: Unit nginx.service failed to load: No such > file or directory. > > i've searched around online and so far found no-one else reporting the same > issue. > i think i have fixed this before on another installation in the past.. but > presently i'm not sure what the resolutioin is - can anyone help? Nginx source code does not contain a service file for systemd. You're free to re-use the one shipped in the official packages available on http://nginx.org/en/linux_packages.html#mainline. Also, it might be worth rebuilding an official source rpm package - the fedora systemd support is in there, it's just we don't build those packages specifically for fedora linux. -- Konstantin Pavlov From nginx-forum at nginx.us Fri Mar 20 09:20:21 2015 From: nginx-forum at nginx.us (jinwon42) Date: Fri, 20 Mar 2015 05:20:21 -0400 Subject: https to http error "too many redirects" In-Reply-To: References: Message-ID: You should set it to 1 for a while and then disable it. What's mean? How can i do? Please teach me. 
Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257458,257472#msg-257472 From daniel at mostertman.org Fri Mar 20 09:35:49 2015 From: daniel at mostertman.org (=?windows-1252?Q?Dani=EBl_Mostertman?=) Date: Fri, 20 Mar 2015 10:35:49 +0100 Subject: https to http error "too many redirects" In-Reply-To: References: Message-ID: <550BE9F5.6050903@mostertman.org> You said that in your configuration, you have the following line: # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months) add_header Strict-Transport-Security max-age=15768000; This makes nginx send a HSTS header to browsers that visit the website. With this, you tell the browser to always use https:// and never use http://, for the whole website. If you do not disable this, any and all requests done to the site will make sure that any requests for the next 6 months of that visit (you set it to 6 months), will always, no matter what the user or redirect types/does, use https://. If you want to avoid this behaviour, you should first reduce the duration of the header (max-age=) to 1 second, so that browsers will reduce the remaining time to 1 second. Then disable it after a few days/a week, depending on how long you think users take to return to your website. jinwon42 schreef op 20-3-2015 om 10:20: > You should set it to 1 for a while and then disable it. > > What's mean? > > How can i do? Please teach me. 
> > Thanks > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257458,257472#msg-257472 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From gmm at csdoc.com Fri Mar 20 10:14:19 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Fri, 20 Mar 2015 12:14:19 +0200 Subject: https to http error "too many redirects" In-Reply-To: <550BE9F5.6050903@mostertman.org> References: <550BE9F5.6050903@mostertman.org> Message-ID: <550BF2FB.8060708@csdoc.com> On 20.03.2015 11:35, Dani?l Mostertman wrote: > You said that in your configuration, you have the following line: > > # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months) > add_header Strict-Transport-Security max-age=15768000; > > This makes nginx send a HSTS header to browsers that visit the website. > With this, you tell the browser to always use https:// and never use > http://, for the whole website. > If you do not disable this, any and all requests done to the site will > make sure that any requests for the next 6 months of that visit (you set > it to 6 months), will always, no matter what the user or redirect > types/does, use https://. > > If you want to avoid this behaviour, you should first reduce the > duration of the header (max-age=) to 1 second, so that browsers will > reduce the remaining time to 1 second. > Then disable it after a few days/a week, depending on how long you think > users take to return to your website. HSTS is good thing and should not be disabled. if you need http only for some uri - better create separate server, on different server_name, which works only on http, and leave https server for all rest https uri. for example: server { listen 443 ssl; server_name www.example.com; # HSTS (15768000 seconds = 6 months) add_header Strict-Transport-Security max-age=15768000; ... 
# HTTPS-only } server { listen 80; server_name www.example.com; location / { return 301 https://www.example.com$request_uri; } } server { listen 80; server_name example.com; location / { return 301 https://www.example.com$request_uri; } location = /mobile/PayOnlyResult.do { ... # HTTP-only } location = /kor/tel.do { ... # HTTP-only } } www.example.com - HTTPS-only, example.com - HTTP-only. -- Best regards, Gena From dewanggaba at xtremenitro.org Fri Mar 20 10:36:54 2015 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Fri, 20 Mar 2015 17:36:54 +0700 Subject: https to http error "too many redirects" In-Reply-To: <550BF2FB.8060708@csdoc.com> References: <550BE9F5.6050903@mostertman.org> <550BF2FB.8060708@csdoc.com> Message-ID: <550BF846.2070306@xtremenitro.org> Hi! You'll _never_ reach http request since you set HSTS configuration :) If you still want some http request on your web server, disable your HSTS directive. (see Daniel statement on previous email). On 03/20/2015 05:14 PM, Gena Makhomed wrote: > On 20.03.2015 11:35, Dani?l Mostertman wrote: > >> You said that in your configuration, you have the following line: >> >> # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 >> months) >> add_header Strict-Transport-Security max-age=15768000; >> >> This makes nginx send a HSTS header to browsers that visit the website. >> With this, you tell the browser to always use https:// and never use >> http://, for the whole website. >> If you do not disable this, any and all requests done to the site will >> make sure that any requests for the next 6 months of that visit (you set >> it to 6 months), will always, no matter what the user or redirect >> types/does, use https://. >> >> If you want to avoid this behaviour, you should first reduce the >> duration of the header (max-age=) to 1 second, so that browsers will >> reduce the remaining time to 1 second. 
>> Then disable it after a few days/a week, depending on how long you think >> users take to return to your website. > > HSTS is good thing and should not be disabled. > > if you need http only for some uri - better create separate server, > on different server_name, which works only on http, and leave https > server for all rest https uri. for example: > > server { > listen 443 ssl; > server_name www.example.com; > > # HSTS (15768000 seconds = 6 months) > add_header Strict-Transport-Security max-age=15768000; > > ... # HTTPS-only > } > > server { > listen 80; > server_name www.example.com; > location / { return 301 https://www.example.com$request_uri; } > } > > server { > listen 80; > server_name example.com; > location / { return 301 https://www.example.com$request_uri; } > > location = /mobile/PayOnlyResult.do { > ... # HTTP-only > } > location = /kor/tel.do { > ... # HTTP-only > } > } > > www.example.com - HTTPS-only, example.com - HTTP-only. > From gmm at csdoc.com Fri Mar 20 11:05:44 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Fri, 20 Mar 2015 13:05:44 +0200 Subject: https to http error "too many redirects" In-Reply-To: <550BF846.2070306@xtremenitro.org> References: <550BE9F5.6050903@mostertman.org> <550BF2FB.8060708@csdoc.com> <550BF846.2070306@xtremenitro.org> Message-ID: <550BFF08.4000703@csdoc.com> On 20.03.2015 12:36, Dewangga Bachrul Alam wrote: > You'll _never_ reach http request since you set HSTS configuration :) > If you still want some http request on your web server, disable your > HSTS directive. (see Daniel statement on previous email). 1. HSTS enabled only on domain name www.example.com on domain name example.com - no HSTS, no https and no redirects. 2. disabling HSTS is bad idea. HSTS should be enabled on https servers. 3. please do not top post. thank you. >> HSTS is good thing and should not be disabled. 
>> >> if you need http only for some uri - better create separate server, >> on different server_name, which works only on http, and leave https >> server for all rest https uri. for example: >> >> server { >> listen 443 ssl; >> server_name www.example.com; >> >> # HSTS (15768000 seconds = 6 months) >> add_header Strict-Transport-Security max-age=15768000; >> >> ... # HTTPS-only >> } >> >> server { >> listen 80; >> server_name www.example.com; >> location / { return 301 https://www.example.com$request_uri; } >> } >> >> server { >> listen 80; >> server_name example.com; >> location / { return 301 https://www.example.com$request_uri; } >> >> location = /mobile/PayOnlyResult.do { >> ... # HTTP-only >> } >> location = /kor/tel.do { >> ... # HTTP-only >> } >> } >> >> www.example.com - HTTPS-only, example.com - HTTP-only. >> -- Best regards, Gena From daniel at mostertman.org Fri Mar 20 11:13:21 2015 From: daniel at mostertman.org (=?windows-1252?Q?Dani=EBl_Mostertman?=) Date: Fri, 20 Mar 2015 12:13:21 +0100 Subject: https to http error "too many redirects" In-Reply-To: <550BFF08.4000703@csdoc.com> References: <550BE9F5.6050903@mostertman.org> <550BF2FB.8060708@csdoc.com> <550BF846.2070306@xtremenitro.org> <550BFF08.4000703@csdoc.com> Message-ID: <550C00D1.8020703@mostertman.org> Gena Makhomed schreef op 20-3-2015 om 12:05: > On 20.03.2015 12:36, Dewangga Bachrul Alam wrote: > >> You'll _never_ reach http request since you set HSTS configuration :) >> If you still want some http request on your web server, disable your >> HSTS directive. (see Daniel statement on previous email). > > 1. HSTS enabled only on domain name www.example.com > on domain name example.com - no HSTS, no https and no redirects. > > 2. disabling HSTS is bad idea. > HSTS should be enabled on https servers. > > 3. please do not top post. > thank you. > 1. Any website will want www. and non-www to show the same website. This can not be done in your configuration. 2. 
If any user goes to https://example.com/ instead of https://www.example.com/ it goes to the default website on 443, being www.example.com in this case. If that certificate is valid for example.com, the connection is built, and the HSTS is re-set in any browser for example.com and you will end up on SSL time and time again. 3. I never said I thought it _should_ be disabled. In fact, I think https:// should always be used if possible, and http:// should be avoided at pretty much all times. 4. HSTS does not _need_ to be enabled for secure connections to work, it's a "very nice to have". But not mandatory. In his case, it probably gives more trouble than it's worth. However, I do agree that it _should_, like you said. But again, in his configuration that might not be possible to have the best possible solution for what's being wished for. From gmm at csdoc.com Fri Mar 20 11:41:06 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Fri, 20 Mar 2015 13:41:06 +0200 Subject: https to http error "too many redirects" In-Reply-To: <550C00D1.8020703@mostertman.org> References: <550BE9F5.6050903@mostertman.org> <550BF2FB.8060708@csdoc.com> <550BF846.2070306@xtremenitro.org> <550BFF08.4000703@csdoc.com> <550C00D1.8020703@mostertman.org> Message-ID: <550C0752.9060801@csdoc.com> On 20.03.2015 13:13, Dani?l Mostertman wrote: >>> You'll _never_ reach http request since you set HSTS configuration :) >>> If you still want some http request on your web server, disable your >>> HSTS directive. (see Daniel statement on previous email). >> >> 1. HSTS enabled only on domain name www.example.com >> on domain name example.com - no HSTS, no https and no redirects. >> >> 2. disabling HSTS is bad idea. >> HSTS should be enabled on https servers. >> >> 3. please do not top post. >> thank you. >> > > 1. Any website will want www. and non-www to show the same website. This > can not be done in your configuration. 
http://example.com and http://www.example.com show the same site: server { listen 80; server_name example.com; location / { return 301 https://www.example.com$request_uri; } location = /mobile/PayOnlyResult.do { ... # HTTP-only } location = /kor/tel.do { ... # HTTP-only } } exception are done only for two uri, which are HTTP-only. > 2. If any user goes to https://example.com/ instead of > https://www.example.com/ it goes to the default website on 443, being > www.example.com in this case. If that certificate is valid for > example.com, the connection is built, and the HSTS is re-set in any > browser for example.com and you will end up on SSL time and time again. No problem, server { listen 443 default_server; server_name example.com; location / { return 301 https://www.example.com$request_uri; } location = /mobile/PayOnlyResult.do { return 301 http://example.com$request_uri; } location = /kor/tel.do { return 301 http://example.com$request_uri; } } server { listen 443 ssl; server_name www.example.com; # HSTS (15768000 seconds = 6 months) add_header Strict-Transport-Security max-age=15768000; ... # HTTPS-only } HTTPS-site example.com is default site and does not have HSTS. > 3. I never said I thought it _should_ be disabled. In fact, I think > https:// should always be used if possible, and http:// should be > avoided at pretty much all times. Agree, I don't know why topic starter need such strange configuration. > 4. HSTS does not _need_ to be enabled for secure connections to work, > it's a "very nice to have". But not mandatory. In his case, it probably > gives more trouble than it's worth. However, I do agree that it > _should_, like you said. But again, in his configuration that might not > be possible to have the best possible solution for what's being wished for. I can't agree with you what disabling HSTS on HTTPS-sites is the best possible way. 
My way of solution may be more simple, if for HTTP-only server use other name, for example, public.example.com or legacy.example.com or static.example.com or something like this. In this case, www.example.com and example.com can be both HTTPS-sites, without exceptions. -- Best regards, Gena From nginx-forum at nginx.us Fri Mar 20 17:57:46 2015 From: nginx-forum at nginx.us (ankneo) Date: Fri, 20 Mar 2015 13:57:46 -0400 Subject: Intermittent SSL Handshake Errors In-Reply-To: References: Message-ID: <3841706c7d09ea19e6b3baeb9391b66f.NginxMailingListEnglish@forum.nginx.org> I am seeing similar error as well. It is showing up for lot of people and am not sure why it is happening and if actually the clients facing the error are actually able to browse through the website or not. Can someone please help me understanding that is it safe to downgrade to the earlier version of libssl? and does it solve the problem of client unable to connect (if that happens) in this case? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256373,257497#msg-257497 From benfell at mail.parts-unknown.org Fri Mar 20 18:01:10 2015 From: benfell at mail.parts-unknown.org (David Benfell) Date: Fri, 20 Mar 2015 11:01:10 -0700 Subject: stripping www and forcing ssl Message-ID: <20150320180110.GA72503@home.parts-unknown.org> Hi all, I am attempting to strip www. and force SSL. 
Here are the blocks I'm using: server { listen 50.250.218.168:80; listen 50.250.218.168:443 ssl; listen [2001:470:67:2b5::10]:80; listen [2001:470:67:2b5::10]:443 ssl; server_name www.disunitedstates.org; include ssl_common; access_log /var/log/nginx/disunitedstates.org/access.log; error_log /var/log/nginx/disunitedstates.org/error.log; return 301 https://disunitedstates.org$request_uri; } server { listen 50.250.218.168:80; listen [2001:470:67:2b5::10]:80; server_name disunitedstates.org; access_log /var/log/nginx/disunitedstates.org/access.log; error_log /var/log/nginx/disunitedstates.org/error.log; return 301 https://disunitedstates.org$request_uri; } I have a separate server block for actually serving the site. But when one tries to access http://disunitedstates.org, one gets a 400 error, "The plain HTTP request was sent to HTTPS port." The information I'm finding out on the web about this is confusing and contradictory. How should this be done? Thanks! -- David Benfell See https://parts-unknown.org/node/2 if you don't understand the attachment. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From nginx-forum at nginx.us Fri Mar 20 18:15:42 2015 From: nginx-forum at nginx.us (tempspace) Date: Fri, 20 Mar 2015 14:15:42 -0400 Subject: Intermittent SSL Handshake Errors In-Reply-To: <3841706c7d09ea19e6b3baeb9391b66f.NginxMailingListEnglish@forum.nginx.org> References: <3841706c7d09ea19e6b3baeb9391b66f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7757d25fc59ff89f5e7f6d46f9f29261.NginxMailingListEnglish@forum.nginx.org> I had to start looking at this issue again now that yet another openssl security issue. Now that I know I can go back to a working setup just by downgrading SSL, I am able to gather more information. This morning, I updated the libssl libraries and restarted nginx, and the errors started flooding back. 
This time, I took a packet capture to see what was happening and what I could correlate. I run a set of servers that handle API requests from a mobile phone application, and every single client that produced this error was running iOS. In the packet capture, we offer the same cipher that the clients always use without a problem, but for some reason, some of our iPhone clients have issues (not all.) I have been unable to discern a pattern, but it's always iPhones and doesn't seem to have anything to do with the device model or the OS version. I haven't found a single Android instance of the IP's that show up in our error logs, and we have slightly more Android devices than iOS devices. We get the Client Hello which has a list of 37 potential ciphers for TLS 1.2. We send the server hello and offer the normal cipher. The client, instead of continuing on, immediately sends a FIN, ACK. It then tries to connect again over TLS 1.0, gives the client hello, we send the ACK and almost immediately, WE send a FIN, ACK to the client. Since it's an API and there are multiple requests being made from the client, not every one will fail. Some negotiate SSL just fine, others do not. I'm still digging through the packet captures to try and figure out any other patterns. As soon as I downgrade libssl, everything works fine. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256373,257499#msg-257499 From reallfqq-nginx at yahoo.fr Fri Mar 20 18:35:49 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 20 Mar 2015 19:35:49 +0100 Subject: stripping www and forcing ssl In-Reply-To: <20150320180110.GA72503@home.parts-unknown.org> References: <20150320180110.GA72503@home.parts-unknown.org> Message-ID: You have a duplicate listen directive with same IP address and same port in both server blocks. I doubt that is a valid configuration. Have you checked nginx -t and error logs on reload/start? 
I suggest you have a server block listening for HTTP on port 80 and another block reponsible for HTTPS traffic listening on 443, and then redirecting the HTTP block to the HTTPS one. --- *B. R.* On Fri, Mar 20, 2015 at 7:01 PM, David Benfell < benfell at mail.parts-unknown.org> wrote: > Hi all, > > I am attempting to strip www. and force SSL. Here are the blocks I'm > using: > > server { > listen 50.250.218.168:80; > listen 50.250.218.168:443 ssl; > listen [2001:470:67:2b5::10]:80; > listen [2001:470:67:2b5::10]:443 ssl; > > server_name www.disunitedstates.org; > include ssl_common; > > access_log > /var/log/nginx/disunitedstates.org/access.log; > error_log > /var/log/nginx/disunitedstates.org/error.log; > > return 301 https://disunitedstates.org$request_uri; > } > > server { > listen 50.250.218.168:80; > listen [2001:470:67:2b5::10]:80; > > server_name disunitedstates.org; > > access_log > /var/log/nginx/disunitedstates.org/access.log; > error_log > /var/log/nginx/disunitedstates.org/error.log; > > return 301 https://disunitedstates.org$request_uri; > } > > I have a separate server block for actually serving the site. > > But when one tries to access http://disunitedstates.org, one gets a > 400 error, "The plain HTTP request was sent to HTTPS port." The > information I'm finding out on the web about this is confusing and > contradictory. > > How should this be done? > > Thanks! > -- > David Benfell > See https://parts-unknown.org/node/2 if you don't understand the > attachment. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Fri Mar 20 18:41:59 2015 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Fri, 20 Mar 2015 19:41:59 +0100 Subject: Default value of gzip_proxied Message-ID: I recently bumped into some trouble with a client caching uncompressed data without understanding where it came from. After long investigation on what appeared to be random, I narrowed it to the gzip_proxied directive. Returned content from the webserver was supposed to be *always* compressed (as compressed data is generally better than uncompressed whenever possible), but when requests coming from clients behind proxies resulted in MISS, the returned content was uncompressed and stored as such in cache... thus serving cached uncompressed data to final clients. Why is the default value of that directive 'off'? What is the problem with sending compressed data to proxies? Why have you decided on such a default value? Thanks, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmm at csdoc.com Fri Mar 20 18:57:10 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Fri, 20 Mar 2015 20:57:10 +0200 Subject: stripping www and forcing ssl In-Reply-To: <20150320180110.GA72503@home.parts-unknown.org> References: <20150320180110.GA72503@home.parts-unknown.org> Message-ID: <550C6D86.50608@csdoc.com> On 20.03.2015 20:01, David Benfell wrote: > I am attempting to strip www. and force SSL. 
Here are the blocks I'm > using: > > server { > listen 50.250.218.168:80; > listen 50.250.218.168:443 ssl; > listen [2001:470:67:2b5::10]:80; > listen [2001:470:67:2b5::10]:443 ssl; > > server_name www.disunitedstates.org; > include ssl_common; > > access_log > /var/log/nginx/disunitedstates.org/access.log; > error_log > /var/log/nginx/disunitedstates.org/error.log; > > return 301 https://disunitedstates.org$request_uri; > } > > server { > listen 50.250.218.168:80; > listen [2001:470:67:2b5::10]:80; > > server_name disunitedstates.org; > > access_log > /var/log/nginx/disunitedstates.org/access.log; > error_log > /var/log/nginx/disunitedstates.org/error.log; > > return 301 https://disunitedstates.org$request_uri; > } > > I have a separate server block for actually serving the site. > > But when one tries to access http://disunitedstates.org, one gets a > 400 error, "The plain HTTP request was sent to HTTPS port." The > information I'm finding out on the web about this is confusing and > contradictory. > > How should this be done? Probably "include ssl_common;" contains "ssl on;" directive, which forces nginx to use HTTPS on 50.250.218.168:80 http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl Just remove "ssl on;" from ssl_common include file and reload nginx. -- Best regards, Gena From gmm at csdoc.com Fri Mar 20 19:01:40 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Fri, 20 Mar 2015 21:01:40 +0200 Subject: stripping www and forcing ssl In-Reply-To: References: <20150320180110.GA72503@home.parts-unknown.org> Message-ID: <550C6E94.6070707@csdoc.com> On 20.03.2015 20:35, B.R. wrote: > You have a duplicate listen directive with same IP address and same port > in both server blocks. > I doubt that is a valid configuration. Yes, this is valid configuration. 
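To illustrate the point with a minimal sketch (using the hostnames from this thread, not the poster's full configuration): two server blocks may share the same listen address and port, and nginx chooses between them by server_name.

```nginx
# Both blocks listen on the same address:port; nginx dispatches on the
# request's Host header via server_name.
server {
    listen 80;
    server_name www.disunitedstates.org;
    return 301 https://disunitedstates.org$request_uri;
}

server {
    listen 80;
    server_name disunitedstates.org;
    return 301 https://disunitedstates.org$request_uri;
}
```

If no server_name matches, the first block defined for that address:port handles the request, unless another block is marked default_server.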
See http://nginx.org/en/docs/http/request_processing.html http://nginx.org/en/docs/http/server_names.html http://nginx.org/en/docs/http/configuring_https_servers.html for details in nginx documentation http://nginx.org/en/docs/ -- Best regards, Gena From m.tokallo at gmail.com Fri Mar 20 19:36:13 2015 From: m.tokallo at gmail.com (Mohammad Tokallo) Date: Fri, 20 Mar 2015 23:06:13 +0330 Subject: enable memcache with nginx Message-ID: Dear Friends i have tried to configure memcache with nginx but still couldn't configure it anybody have experience to configure memcache with nginx please send your configuration file thanks in advance -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmm at csdoc.com Fri Mar 20 19:59:27 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Fri, 20 Mar 2015 21:59:27 +0200 Subject: enable memcache with nginx In-Reply-To: References: Message-ID: <550C7C1F.9060900@csdoc.com> On 20.03.2015 21:36, Mohammad Tokallo wrote: > i have tried to configure memcache with nginx but still couldn't > configure it > anybody have experience to configure memcache with nginx please send > your configuration file You can find example configuration in module documentation: http://nginx.org/en/docs/http/ngx_http_memcached_module.html nginx can only read from memcached and can't write to it. Thereby content into memcached must be placed by backend. -- Best regards, Gena From nginx-forum at nginx.us Fri Mar 20 21:25:48 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Fri, 20 Mar 2015 17:25:48 -0400 Subject: what does fastcgi_keep_conn do? Message-ID: <2f10beb676939d19a4cf6d48ac4c83a3.NginxMailingListEnglish@forum.nginx.org> Hi, I would like nginx to serve all requests of a given TCP connection to the same FCGI server. If a new TCP connection is established, then nginx would select the next UDS FCGI server in round-robin fashion. Can this be achieved with NGINX, and if yes, how? 
I thought turning on fastcgi_keep_conn on would achieve this goal, but it is not what happened. My observation was that each FCGI server took turns receiving a new request even if all the requests are from the same TCP connection. I had: upstream backend { server unix:/tmp/fastcgi/socket1 ...; server unix:/tmp/fastcgi/socket2 ...; keepalive 32; } server { ... location { fastcgi_keep_conn on; fastcgi_pass backend; } Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257508,257508#msg-257508 From nginx-forum at nginx.us Fri Mar 20 21:48:56 2015 From: nginx-forum at nginx.us (gue22) Date: Fri, 20 Mar 2015 17:48:56 -0400 Subject: Latest nginx debug version Message-ID: Hi, I installed nginx-full-dbg on the latest Ubuntu 14.10 and after some tinkering I could add the Dynatrace Agent. While I got nginx 1.6.2 on Ubuntu I tried the same on a Fedora 18 and I end up with 1.2.9. Also: I saw some mention of a debug switch in the help, but I guess this has nothing to do with nginx itself. How do I get the right *and* debug version on Fedora? Thanks for any insight! G. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257509,257509#msg-257509 From nginx-forum at nginx.us Sat Mar 21 04:37:40 2015 From: nginx-forum at nginx.us (imwack) Date: Sat, 21 Mar 2015 00:37:40 -0400 Subject: Nginx mail proxy Message-ID: <2137685ab51744fc6b774da61e727e5e.NginxMailingListEnglish@forum.nginx.org> I want to use nginx as a mail proxy. I am new to nginx and need some help with the configuration; I have run into some problems. I want to use Foxmail through an nginx proxy; this is my configuration. 
mail{ #server_name mailProxy; auth_http localhost:80/php/auth.php; pop3_capabilities LAST TOP USER PIPELINING UIDL; pop3_auth plain apop cram-md5; imap_capabilities IMAP4rev1 UIDPLUS; smtp_capabilities "SIZE 10485760" ENHANCEDSTATUSCODES 8BITMIME DSN; smtp_auth login plain cram-md5; server{ listen 25; protocol smtp; } server{ listen 110; protocol pop3; proxy on; proxy_pass_error_message on; } server{ listen 143; protocol imap; proxy on; } } and my auth script using PHP as follow: But this does not run,when i use telnet test,as follow telnet 192.168.42.132 25 Trying 192.168.42.132... Connected to 192.168.42.132. Escape character is '^]'. 220 wack ESMTP ready auth login 334 VXNlcm5hbWU6 base64(username==) 334 UGFzc3dvcmQ6 base64(password) 451 4.3.2 Internal server error Connection closed by foreign host. what's wrong ,and the error log as follow: 2015/03/21 12:35:39 [error] 55719#0: *151 upstream sent invalid response: "550 insufficient authorization" while reading response from upstream, client: 192.168.42.132, server: 0.0.0.0:25, login: "***@**.**.cn", upstream:***.***.***.***:25 The '*' is my username and backend ip. 192.168.42.132 is my vitual machine ip. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257510,257510#msg-257510 From benfell at mail.parts-unknown.org Sat Mar 21 06:08:27 2015 From: benfell at mail.parts-unknown.org (David Benfell) Date: Fri, 20 Mar 2015 23:08:27 -0700 Subject: SOLVED: Re: stripping www and forcing ssl In-Reply-To: <550C6D86.50608@csdoc.com> References: <20150320180110.GA72503@home.parts-unknown.org> <550C6D86.50608@csdoc.com> Message-ID: <20150321060827.GB21048@home.parts-unknown.org> On Fri, Mar 20, 2015 at 08:57:10PM +0200, Gena Makhomed wrote: > > Probably "include ssl_common;" contains "ssl on;" > directive, which forces nginx to use HTTPS on 50.250.218.168:80 > Yup. This was right. I was shocked because I had thought I had omitted this directive. But I looked and there it was. Thanks very much! 
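In other words, the repaired setup looks roughly like this (a sketch only; the actual contents of the ssl_common file are not shown in the thread, so the certificate directives and paths below are assumptions):

```nginx
# ssl_common (separate include file) -- shared TLS settings.
# Note: no "ssl on;" here. With "ssl on;" present, nginx would expect TLS
# even on the port-80 listeners, producing the 400 error
# "The plain HTTP request was sent to HTTPS port".
ssl_certificate     /etc/nginx/ssl/cert.pem;   # assumed path
ssl_certificate_key /etc/nginx/ssl/key.pem;    # assumed path

# main configuration:
server {
    listen 50.250.218.168:80;        # plain HTTP
    listen 50.250.218.168:443 ssl;   # TLS enabled only on this socket
    server_name www.disunitedstates.org;
    include ssl_common;
    return 301 https://disunitedstates.org$request_uri;
}
```

The per-listen `ssl` parameter is the replacement for the older server-wide `ssl on;` directive, and lets one server block serve both plain and TLS traffic.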
-- David Benfell See https://parts-unknown.org/node/2 if you don't understand the attachment. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From reallfqq-nginx at yahoo.fr Sat Mar 21 10:43:17 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 21 Mar 2015 11:43:17 +0100 Subject: what does fastcgi_keep_conn do? In-Reply-To: <2f10beb676939d19a4cf6d48ac4c83a3.NginxMailingListEnglish@forum.nginx.org> References: <2f10beb676939d19a4cf6d48ac4c83a3.NginxMailingListEnglish@forum.nginx.org> Message-ID: 'Keep connection' does not mean what you think it means. It does not tie a client connection to a backend connection. To do that, you will need something like ip_hash or more advanced session mechanisms (which are sadly not available in FOSS... yet?). Read the docs on fastcgi_keep_conn, which say it ensures the FastCGI connection is not closed after being used (closing it is the normal behaviour). Use that in combination with keepalive, as instructed. That means connections to the backend will remain open after being used, so the next time the webserver addresses them, it won't need to open a new connection. --- *B. R.* On Fri, Mar 20, 2015 at 10:25 PM, nginxuser100 wrote: > Hi, I would like nginx to serve all requests of a given TCP connection to > the same FCGI server. If a new TCP connection is established, then nginx > would select the next UDS FCGI server in round-robin fashion. > > Can this be achieved with NGINX, and if yes, how? > > I thought turning on fastcgi_keep_conn on would achieve this goal, but it > is > not what happened. My obervation was that each FCGI server took turn > receiving a new request even if all the requests are from the same TCP > connection. > > I had: > > upstream backend { > server unix:/tmp/fastcgi/socket1 ...; > server unix:/tmp/fastcgi/socket2 ...; > keepalive 32; > } > > server { > ... 
> location { > fastcgi_keep_conn on; > fastcgi_pass backend; > } > > Thank you. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,257508,257508#msg-257508 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Mar 21 14:21:32 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 21 Mar 2015 17:21:32 +0300 Subject: Default value of gzip_proxied In-Reply-To: References: Message-ID: <20150321142132.GP88631@mdounin.ru> Hello! On Fri, Mar 20, 2015 at 07:41:59PM +0100, B.R. wrote: > I recently bumped into some trouble with a client caching uncompressed data > without understanding where it came from. > > After long investigation on what appeared to be random, I narrowed it to > the gzip_proxied > > directive. Return content from webserver was supposed to be *always* > compressed (as compressed data is generally better than uncompressed > whenever possible), but when requests coming from clients behind proxies > resulted in MISS, the returned content was uncompressed and stored as such > in cache... thus serving cached uncompressed data to final clients. > > ?Why is the default value of that directive 'off'? What is the problem with > sneding compressed data to proxies? Why have you decided on such a default > value? Because not all clients support compression, and it's not possible to instruct HTTP/1.0 proxies to serve compressed version only to some clients. In HTTP/1.1 there is a Vary header for this, but nevertheless it's usually bad idea to use it as it causes huge cache duplication. 
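For completeness, the behaviour B.R. expected can still be opted into explicitly; a minimal sketch, with the caveats Maxim describes (the Vary header lets HTTP/1.1 caches keep separate variants, at the cost of the cache duplication mentioned above):

```nginx
gzip         on;
# Also compress responses to requests that arrive via a proxy.
# The default is "off" precisely because an HTTP/1.0 proxy may cache the
# compressed variant and serve it to clients that cannot decompress it.
gzip_proxied any;
# Send "Vary: Accept-Encoding" so HTTP/1.1 caches store both variants.
gzip_vary    on;
```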
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Sat Mar 21 14:36:36 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 21 Mar 2015 17:36:36 +0300 Subject: Nginx mail proxy In-Reply-To: <2137685ab51744fc6b774da61e727e5e.NginxMailingListEnglish@forum.nginx.org> References: <2137685ab51744fc6b774da61e727e5e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150321143636.GQ88631@mdounin.ru> Hello! On Sat, Mar 21, 2015 at 12:37:40AM -0400, imwack wrote: > I want to use nginx as a mail proxy.I am new to nginx and need some help > with the configuration, I got some problems. > I want to use Foxmail ,use ngx proxy , this is my configuration. [...] > But this does not run,when i use telnet test,as follow > telnet 192.168.42.132 25 > Trying 192.168.42.132... > Connected to 192.168.42.132. > Escape character is '^]'. > 220 wack ESMTP ready > auth login > 334 VXNlcm5hbWU6 > base64(username==) > 334 UGFzc3dvcmQ6 > base64(password) > 451 4.3.2 Internal server error > Connection closed by foreign host. > > what's wrong ,and the error log as follow: > > 2015/03/21 12:35:39 [error] 55719#0: *151 upstream sent invalid response: > "550 insufficient authorization" while reading response from upstream, > client: 192.168.42.132, server: 0.0.0.0:25, login: "***@**.**.cn", > upstream:***.***.***.***:25 > > The '*' is my username and backend ip. 192.168.42.132 is my vitual machine > ip. When using SMTP, nginx won't try to do any authentication against the backend server, but rather will use XCLIENT command to pass user credentials, see http://nginx.org/r/xclient. You have to instruct your SMTP backend to accept XCLIENT from nginx. 
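On the nginx side, the proxy setup being described looks roughly like this sketch (the listen address and the auth_http endpoint are placeholder assumptions, not the poster's actual configuration):

```nginx
mail {
    # nginx's mail proxy requires an auth_http service that decides
    # which backend to hand the authenticated client to (placeholder URL).
    auth_http 127.0.0.1:9000/auth;

    server {
        listen   25;
        protocol smtp;
        xclient  on;   # pass the real client address/login to the backend
                       # via the XCLIENT command (this is the default)
    }
}
```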
When using Postfix, this can be done with smtpd_authorized_xclient_hosts: http://www.postfix.org/postconf.5.html#smtpd_authorized_xclient_hosts -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Sat Mar 21 14:53:38 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 21 Mar 2015 17:53:38 +0300 Subject: Intermittent SSL Handshake Errors In-Reply-To: <7757d25fc59ff89f5e7f6d46f9f29261.NginxMailingListEnglish@forum.nginx.org> References: <3841706c7d09ea19e6b3baeb9391b66f.NginxMailingListEnglish@forum.nginx.org> <7757d25fc59ff89f5e7f6d46f9f29261.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150321145338.GS88631@mdounin.ru> Hello! On Fri, Mar 20, 2015 at 02:15:42PM -0400, tempspace wrote: > I had to start looking at this issue again now that yet another openssl > security issue. Now that I know I can go back to a working setup just by > downgrading SSL, I am able to gather more information. > > This morning, I updated the libssl libraries and restarted nginx, and the > errors started flooding back. This time, I took a packet capture to see what > was happening and what I could correlate. I run a set of servers that > handle API requests from a mobile phone application, and every single client > that produced this error was running iOS. > > In the packet capture, we offer the same cipher that the clients always use > without a problem, but for some reason, some of our iPhone clients have > issues (not all.) I have been unable to discern a pattern, but it's always > iPhones and doesn't seem to have anything to do with the device model or the > OS version. I haven't found a single Android instance of the IP's that show > up in our error logs, and we have slightly more Android devices than iOS > devices. > > We get the Client Hello which has a list of 37 potential ciphers for TLS > 1.2. We send the server hello and offer the normal cipher. The client, > instead of continuing on, immediately sends a FIN, ACK. 
It then tries to > connect again over TLS 1.0, gives the client hello, we send the ACK and > almost immediately, WE send a FIN, ACK to the client. So it looks like the fallback prevention part works like it should - the inappropriate fallback is prevented. The question now is why the fallback happens at all, that is - why the client sends a FIN. It might be some specific cipher which causes the problem - you may try switching ssl_prefer_server_ciphers to off (the default) to see if it helps, and/or playing with the ciphers supported (again, the default will be a good starting point). -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Sat Mar 21 15:05:05 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 21 Mar 2015 16:05:05 +0100 Subject: Default value of gzip_proxied In-Reply-To: <20150321142132.GP88631@mdounin.ru> References: <20150321142132.GP88631@mdounin.ru> Message-ID: Hello Maxim, So HTTP/1.0 is the reason for all that. Now I also understand why there are those parameters allowing to compress data that should not be cached: nginx as a webserver tries to be smarter than those dumb HTTP/1.0 proxies. I was wondering, though: are there real numbers to back this compatibility thing? Is there not a point in time when a horizon could be set, denying backwards compatibility for older software/standards? HTTP/1.1 is the most used version of the protocol, nginx supports SPDY, HTTP/2.0 is coming... and yet there is strangeness in there for backwards compatibility with HTTP/1.0. That behavior made us cache uncompressed content 'randomly' since the pattern was hard to find/reproduce, and I got a bit of luck determining the condition under which we were caching uncompressed data... What is the ratio of benefits/costs of dropping compatibility (at least partially) with HTTP/1.0? I know I am being naive here, considering that most of the Web is HTTP/1.1-compliant, but how far am I from reality? --- *B. R.* On Sat, Mar 21, 2015 at 3:21 PM, Maxim Dounin wrote: > Hello!
> > On Fri, Mar 20, 2015 at 07:41:59PM +0100, B.R. wrote: > > > I recently bumped into some trouble with a client caching uncompressed > data > > without understanding where it came from. > > > > After long investigation on what appeared to be random, I narrowed it to > > the gzip_proxied > > > > directive. Return content from webserver was supposed to be *always* > > compressed (as compressed data is generally better than uncompressed > > whenever possible), but when requests coming from clients behind proxies > > resulted in MISS, the returned content was uncompressed and stored as > such > > in cache... thus serving cached uncompressed data to final clients. > > > > ?Why is the default value of that directive 'off'? What is the problem > with > > sneding compressed data to proxies? Why have you decided on such a > default > > value? > > Because not all clients support compression, and it's not possible > to instruct HTTP/1.0 proxies to serve compressed version only to > some clients. In HTTP/1.1 there is a Vary header for this, but > nevertheless it's usually bad idea to use it as it causes huge > cache duplication. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Mar 21 15:50:59 2015 From: nginx-forum at nginx.us (tempspace) Date: Sat, 21 Mar 2015 11:50:59 -0400 Subject: Intermittent SSL Handshake Errors In-Reply-To: <20150321145338.GS88631@mdounin.ru> References: <20150321145338.GS88631@mdounin.ru> Message-ID: Maxim, I have been playing with the ciphers as well, and it doesn't appear to be cipher related. It happens for every cipher I've tried. I tried with turning off the prefer on the server, and it uses the same cipher with the prefer on. 
I then turned prefer server ciphers back on, and tailed our access logs which show which cipher was used for the communication. I then went through cipher by cipher, disabled the cipher in our config and restarted nginx each time. None of them had any difference, we're still seeing lots of fallbacks exclusively from our iOS clients. I tried the following ciphers to no avail: ECDHE-RSA-AES256-SHA384 ECDHE-RSA-AES128-SHA256 ECDHE-RSA-AES256-SHA ECDHE-RSA-AES128-SHA DHE-RSA-AES256-SHA256 DHE-RSA-AES256-SHA DHE-RSA-AES128-SHA256 DHE-RSA-AES128-SHA Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256373,257522#msg-257522 From nginx-forum at nginx.us Sat Mar 21 15:57:44 2015 From: nginx-forum at nginx.us (Milos) Date: Sat, 21 Mar 2015 11:57:44 -0400 Subject: I need some urgent rewrites Message-ID: <7c54df17396fff0c6b6a06a50ad4168d.NginxMailingListEnglish@forum.nginx.org> I need some urgent rewrites From: http://www.my-domain.de/kalorientabelle.php/unterkategorie/204-suppen To: http://www.my-domain.de/kalorientabelle/suppen.204/unterkategorie From: http://www.my-domain.de/kalorientabelle.php/unterkategorie/3-vollkornbrot To: http://www.my-domain.de/kalorientabelle/vollkornbrot.3/unterkategorie AND From: http://www.my-domain.de/kalorientabelle.php/produkt/6293-alkopops To: http://www.my-domain.de/kalorientabelle/alkopops.6293/produkt From: http://www.my-domain.de/kalorientabelle.php/produkt/16648-schoko-reiswaffel To: http://www.my-domain.de/kalorientabelle/schoko-reiswaffel.16648/produkt I would be very happy if someone can help me. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257523,257523#msg-257523 From nginx-forum at nginx.us Sat Mar 21 15:59:17 2015 From: nginx-forum at nginx.us (tempspace) Date: Sat, 21 Mar 2015 11:59:17 -0400 Subject: Intermittent SSL Handshake Errors In-Reply-To: References: <20150321145338.GS88631@mdounin.ru> Message-ID: <49a96755f4514f3c01707a08ac0a541e.NginxMailingListEnglish@forum.nginx.org> I should specify that I agree with what is happening. We have clients that are falling back under normal conditions, and the latest libssl, which implemented fallback prevention for TLS, is stopping them. I have downgraded our libssl and I'm looking in my logs, and I see plenty of iOS 8 devices that auto-negotiate to TLS 1.2 but end up with a TLS 1.0 session. When the new libssl is installed, these connections get blocked. Is there a way to turn off the fallback prevention for TLS on the server side while we try to figure out what's happening? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256373,257524#msg-257524 From nginx-forum at nginx.us Sat Mar 21 17:46:55 2015 From: nginx-forum at nginx.us (c0nw0nk) Date: Sat, 21 Mar 2015 13:46:55 -0400 Subject: [ANN] Windows nginx 1.7.11.3 Gryphon In-Reply-To: References: Message-ID: <16babf16c65296ec6c6e6e499f0a43b5.NginxMailingListEnglish@forum.nginx.org> Hey itpp, I am curious whether you know the cause of a bug with your windows nginx builds. It has something to do with the worker processes. For some reason, when I have "worker_processes auto;" I will occasionally receive an unknown web server error (usually meaning a timeout) from cloudflare, but when I set it to "worker_processes 1;" I do not receive the error. Is there some kind of dropped request between nginx and the very first connection to a new worker process? It's like it drops or cancels the first request for the worker process when it's auto. It's very difficult to figure out what has been causing the error.
And with "worker_processes 1;" set, will that single worker ever be restarted or need to restart, or should it be fine like that? I deliver all my traffic just fine with one worker, but that occasional timeout error was annoying everyone. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257453,257525#msg-257525 From nginx-forum at nginx.us Sat Mar 21 19:26:59 2015 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 21 Mar 2015 15:26:59 -0400 Subject: [ANN] Windows nginx 1.7.11.3 Gryphon In-Reply-To: <16babf16c65296ec6c6e6e499f0a43b5.NginxMailingListEnglish@forum.nginx.org> References: <16babf16c65296ec6c6e6e499f0a43b5.NginxMailingListEnglish@forum.nginx.org> Message-ID: Don't use auto; set worker_processes to a value where high performance keeps the cpu(s) around 50% max - this can be 2, 4, etc... Windows server 2012 is AFAIK the only version which can scale cpus where the auto value has any use. Some cpus can easily handle 2-4 workers per cpu, some only 1 per cpu; this varies because there are a number of variables like cpu type, bus type and speed, xeon quads, how many real cpu lines going to each cpu, type of HT, etc.... We run tests with 4 workers on 1 vcpu, which works fine on xen, hv and vb but not on vmware (2 max). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257453,257526#msg-257526 From nginx-forum at nginx.us Sat Mar 21 20:49:54 2015 From: nginx-forum at nginx.us (c0nw0nk) Date: Sat, 21 Mar 2015 16:49:54 -0400 Subject: [ANN] Windows nginx 1.7.11.3 Gryphon In-Reply-To: References: <16babf16c65296ec6c6e6e499f0a43b5.NginxMailingListEnglish@forum.nginx.org> Message-ID: I will leave it at 1; it works fine and I no longer encounter that strange timeout error.
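The advice in this thread amounts to pinning the worker count explicitly rather than relying on auto; as a sketch (the number itself is illustrative and should be tuned per host):

```nginx
# Fixed worker count instead of "worker_processes auto;".
# Raise it only while CPU usage under load stays around 50%.
worker_processes 2;
```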
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257453,257527#msg-257527 From nginx-forum at nginx.us Sat Mar 21 22:03:04 2015 From: nginx-forum at nginx.us (c0nw0nk) Date: Sat, 21 Mar 2015 18:03:04 -0400 Subject: [ANN] Windows nginx 1.7.11.3 Gryphon In-Reply-To: References: Message-ID: <0a748070574395cc7dde9102dad8cd7c.NginxMailingListEnglish@forum.nginx.org> Also, I just saw you added a PHP opcache config file to the latest builds. Check out my post here. http://www.apachelounge.com/viewtopic.php?p=29858#29858 You need to do that to configure Zend OPcache to share its memory with php-cgi processes, and if you use Windows, set an app pool id. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257453,257528#msg-257528 From al-nginx at none.at Sat Mar 21 23:01:52 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Sun, 22 Mar 2015 00:01:52 +0100 Subject: I need some urgent rewrites In-Reply-To: <7c54df17396fff0c6b6a06a50ad4168d.NginxMailingListEnglish@forum.nginx.org> References: <7c54df17396fff0c6b6a06a50ad4168d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2ef25ccd5efcae9c3e15fc30d02a43c6@none.at> Hi. Only syntax checked. Please read the following for further investigation. http://nginx.org/en/docs/varindex.html http://nginx.org/r/location http://nginx.org/r/return https://regex101.com/ Am 21-03-2015 16:57, schrieb Milos: > I need some urgent rewrites > > From: > http://www.my-domain.de/kalorientabelle.php/unterkategorie/204-suppen > To: > http://www.my-domain.de/kalorientabelle/suppen.204/unterkategorie [snip] This should work for all your examples. location ~ (\w+)\.php\/(\w+)\/(\d+)-([\w-]+) { return 302 $scheme://$hostname/$1/$4.$3/$2; } > I would be very happy if someone can help me.
BR Aleks > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,257523,257523#msg-257523 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Sun Mar 22 01:12:58 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 22 Mar 2015 04:12:58 +0300 Subject: Intermittent SSL Handshake Errors In-Reply-To: <49a96755f4514f3c01707a08ac0a541e.NginxMailingListEnglish@forum.nginx.org> References: <20150321145338.GS88631@mdounin.ru> <49a96755f4514f3c01707a08ac0a541e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150322011258.GT88631@mdounin.ru> Hello! On Sat, Mar 21, 2015 at 11:59:17AM -0400, tempspace wrote: > I should specify that I agree with what is happening. We have clients that > are falling back under normal conditions, and the latest libssl that > implemented fallback prevention for TLS is stopping. I have downgraded our > libssl and I'm looking in my logs, and I see plenty of iOS 8 devices that > auto-negotiate to TLS 1.2 that end up with a TLS 1.0 session. When the new > libssl is installed, these connections get blocked. > > Is there a way to turn off the fallback prevention for TLS on the server > side while we try to figure out what's happening? Looking through the OpenSSL code - I don't think it's possible without OpenSSL code changes. The changes would be trivial though. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Sun Mar 22 01:31:17 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 22 Mar 2015 04:31:17 +0300 Subject: Default value of gzip_proxied In-Reply-To: References: <20150321142132.GP88631@mdounin.ru> Message-ID: <20150322013116.GU88631@mdounin.ru> Hello! On Sat, Mar 21, 2015 at 04:05:05PM +0100, B.R. wrote: > Hello Maxim, > > So HTTP/1.0 is the reason of all that.
> Now I also understand why there are those parameters allowing to compress > data that should not be cached: nginx as webserver tries to be smarter than > those dumb HTTP/1.0 proxies. > > I was wondering, though: are there real numbers to back this compatibility > thing? > Is not there a point in time when the horizon could be set, denying > backwards compatibility for older software/standards? > > HTTP/1.1, is the most used version of the protocol, nginx supports SPDY, > HTTP/2.0 is coming... and there are strangeness there for > backwards-compatibility with HTTP/1.0. > That behavior made us cache uncompressed content 'randomly' since the > pattern was hard to find/reproduce, and I got a bit of luck determining the > condition under which we were caching uncompressed data... > > What is the ratio benefits/costs of dropping compatibility (at least > partially) with HTTP/1.0? > I know I am being naive here, considering the most part of the Web is > HTTP/1.1-compliant, but how far am I for reality? There are two problems: - You assume HTTP/1.0 is dying. That's not true. While uncommon nowadays for browsers, it's still widely used by various software. In particular, nginx itself uses it by default when talking to upstream servers. - You assume that the behaviour in question is only needed for HTTP/1.0 clients. That's, again, not true, as using "Vary: Accept-Encoding" isn't a good idea either. As already mentioned, even if correctly supported it will cause cache data duplication. If you don't like the behaviour, you can always configure nginx to do whatever you want. But I don't think the default is worth changing.
-- Maxim Dounin http://nginx.org/ From gmm at csdoc.com Sun Mar 22 02:20:12 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Sun, 22 Mar 2015 04:20:12 +0200 Subject: Default value of gzip_proxied In-Reply-To: <20150322013116.GU88631@mdounin.ru> References: <20150321142132.GP88631@mdounin.ru> <20150322013116.GU88631@mdounin.ru> Message-ID: <550E26DC.7040207@csdoc.com> On 22.03.2015 3:31, Maxim Dounin wrote: > - You assume that the behaviour in question is only needed for > HTTP/1.0 clients. That's, again, not true, as using "Vary: Accept-Encoding" > isn't a good idea either. As already mentioned, even if > correctly supported it will cause cache data duplication. > > If you don't like the behaviour, you can always configure nginx to > do whatever you want. But I don't think the default worth > changing. If gunzip is turned on, can nginx store in cache only one compressed answer from the origin server and correctly provide uncompressed content for proxies and clients which do not support compression? -- Best regards, Gena From mdounin at mdounin.ru Sun Mar 22 03:24:47 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 22 Mar 2015 06:24:47 +0300 Subject: Default value of gzip_proxied In-Reply-To: <550E26DC.7040207@csdoc.com> References: <20150321142132.GP88631@mdounin.ru> <20150322013116.GU88631@mdounin.ru> <550E26DC.7040207@csdoc.com> Message-ID: <20150322032447.GV88631@mdounin.ru> Hello! On Sun, Mar 22, 2015 at 04:20:12AM +0200, Gena Makhomed wrote: > On 22.03.2015 3:31, Maxim Dounin wrote: > > >- You assume that the behaviour in question is only needed for
> > If turn gunzip on - nginx can store in cache only one compressed > answer from origin server and can correctly provide uncompressed > content for proxies and clients, which not support compression ? Yes, though this requires some special configuration. In particular, you have to instruct your backend to return gzip (usually by "proxy_set_header Accept-Encoding gzip;" in nginx). Additionally, if your backend returns "Vary: Accept-Encoding", you'll have to instruct nginx to ignore it when using nginx 1.7.7+ ("proxy_ignore_headers Vary"). -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Sun Mar 22 04:17:07 2015 From: nginx-forum at nginx.us (imwack) Date: Sun, 22 Mar 2015 00:17:07 -0400 Subject: Nginx mail proxy In-Reply-To: <20150321143636.GQ88631@mdounin.ru> References: <20150321143636.GQ88631@mdounin.ru> Message-ID: The SMTP backend is not mine; I use gmail or something else. What should I do? Just "xclient on;"? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257510,257534#msg-257534 From nginx-forum at nginx.us Sun Mar 22 10:35:31 2015 From: nginx-forum at nginx.us (carnagel) Date: Sun, 22 Mar 2015 06:35:31 -0400 Subject: $skip_cache define home page Message-ID: I understand how to skip cache on cookies, POST, query strings, urls containing a string, etc. But how do you define the home page itself in $skip_cache please? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257535,257535#msg-257535 From mdounin at mdounin.ru Sun Mar 22 10:48:30 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 22 Mar 2015 13:48:30 +0300 Subject: Nginx mail proxy In-Reply-To: References: <20150321143636.GQ88631@mdounin.ru> Message-ID: <20150322104830.GW88631@mdounin.ru> Hello! On Sun, Mar 22, 2015 at 12:17:07AM -0400, imwack wrote: > The SMTP backend is not mine, I use gmail or something else, what should i > do? Just : "xclient on;" ? If the backend isn't yours, then the nginx mail proxy is the wrong thing to use.
-- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Sun Mar 22 14:14:22 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 22 Mar 2015 15:14:22 +0100 Subject: Default value of gzip_proxied In-Reply-To: <20150322013116.GU88631@mdounin.ru> References: <20150321142132.GP88631@mdounin.ru> <20150322013116.GU88631@mdounin.ru> Message-ID: I do not get why you focus on the gzip_vary directive, while I was explicitly talking about gzip_proxied. The fact that content supposedly compressed might actually not be, because the request contains a 'Via' header, is the root cause of our trouble... and you just told me it was for HTTP/1.0 compatibility. This behavior, HTTP/1.0 compatibility aside, is strange and disruptive at best. I willingly join you on the fact that a lot of software still uses HTTP/1.0, but I usually distinguish that from the reasons behind it and what it should be. I assume nginx defaults to talking HTTP/1.0 with backends because it is the lowest common denominator. That allows it to handle outdated software, and I can understand that when you wish to be universal. nginx seems to be stuck not knowing which way the wind is blowing, sometimes promoting modernity and sometimes enforcing backwards (yes, HTTP/1.0 means looking backwards) compatibility. While setting default values to be the most interoperable, which I understand perfectly, there should be prominent pointers somewhere to the fact that some directives only exist for such reasons. I would more than welcome a default configuration that introduces commented examples of what modern configuration/usage of nginx should be. 'gzip on' clearly is not enough if you want to send compressed content. How many people know about that? The 'RTFM' stance is no longer valid when multiple directives must be activated at once on a modern infrastructure. nginx configuration was supposed to be lean and clean.
It is, provided that you use an outdated protocol to serve content: the minimal configuration for compatibility is smaller than the one for modern protocols... and you need to dig by yourself to learn that. WTF? --- *B. R.* On Sun, Mar 22, 2015 at 2:31 AM, Maxim Dounin wrote: > Hello! > > On Sat, Mar 21, 2015 at 04:05:05PM +0100, B.R. wrote: > > > Hello Maxim, > > > > So HTTP/1.0 is the reason of all that. > > Now I also understand why there are those parameters allowing to compress > > data that should not be cached: nginx as webserver tries to be smarter > than > > those dumb HTTP/1.0 proxies. > > > > I was wondering, though: are there real numbers to back this > compatibility > > thing? > > Is not there a point in time when the horizon could be set, denying > > backwards compatibility for older software/standards? > > > > HTTP/1.1, is the most used version of the protocol, nginx supports SPDY, > > HTTP/2.0 is coming... and there are strangeness there for > > backwards-compatibility with HTTP/1.0. > > That behavior made us cache uncompressed content 'randomly' since the > > pattern was hard to find/reproduce, and I got a bit of luck determining > the > > condition under which we were caching uncompressed data... > > > > What is the ratio benefits/costs of dropping compatibility (at least > > partially) with HTTP/1.0? > > I know I am being naive here, considering the most part of the Web is > > HTTP/1.1-compliant, but how far am I for reality? > > There are two problems: > > - You assume HTTP/1.0 is dying. That's not true. While uncommon > nowadays for browsers, it's still widely used by various > software. In particular, nginx itself use it by default when > talking to upstream servers. > > - You assume that the behaviour in question is only needed for > HTTP/1.0 clients. That's, again, not true, as using "Vary: > Accept-Encoding" > isn't a good idea either. As already mentioned, even if > correctly supported it will cause cache data duplication.
> > If you don't like the behaviour, you can always configure nginx to > do whatever you want. But I don't think the default worth > changing. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Mar 22 14:26:57 2015 From: nginx-forum at nginx.us (Milos) Date: Sun, 22 Mar 2015 10:26:57 -0400 Subject: I need some urgent rewrites In-Reply-To: <2ef25ccd5efcae9c3e15fc30d02a43c6@none.at> References: <2ef25ccd5efcae9c3e15fc30d02a43c6@none.at> Message-ID: WOW, thanks Aleksandar, that works for me. I hope you can help me with another rewrite. from: http://www.my-domain.de/attachments/hundebilder-hundefotos-fotowettbewerbe/67591d1394397097-hund-monats-april-2014-dsc06022.jpg to: http://www.my-domain.de/attachments/dsc06022-jpg.67591/ OR from http://www.my-domain.de/attachments/alte-fotowettbewerbe-hunde/61955d1389003298-hund-jahre-2013-hund-mai.jpg to http://www.my-domain.de/attachments/hund-mai-jpg.61955/ My attempts have all failed. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257523,257538#msg-257538 From mdounin at mdounin.ru Sun Mar 22 17:06:54 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 22 Mar 2015 20:06:54 +0300 Subject: Default value of gzip_proxied In-Reply-To: References: <20150321142132.GP88631@mdounin.ru> <20150322013116.GU88631@mdounin.ru> Message-ID: <20150322170654.GX88631@mdounin.ru> Hello! On Sun, Mar 22, 2015 at 03:14:22PM +0100, B.R. wrote: > I do not get why you focus on the gzip_vary directive, while I was > explicitely talking about gzip_proxied. > The fact that content supposedly compressed might actually not be because > it contains a 'Via' header is the root cause of our trouble... and you just > told me it was for HTTP/1.0 compatibility.
With HTTP/1.0, there is only one safe option: - don't compress anything for proxies. With HTTP/1.1, there are two options: - don't compress anything for proxies; - compress for proxies, but send Vary to avoid incorrect behaviour. The second option, which becomes available if you don't care about HTTP/1.0 compatibility at all, has the downsides I've talked about. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Sun Mar 22 22:03:16 2015 From: nginx-forum at nginx.us (tunist) Date: Sun, 22 Mar 2015 18:03:16 -0400 Subject: nginx fails to start - nginx.service not found In-Reply-To: <550BE583.4010200@nginx.com> References: <550BE583.4010200@nginx.com> Message-ID: Konstantin Pavlov Wrote: ------------------------------------------------------- > Nginx source code does not contain a service file for systemd. You're > free to re-use the one shipped in the official packages available on > http://nginx.org/en/linux_packages.html#mainline. > > Also, it might be worth rebuilding an official source rpm package - > the > fedora systemd support is in there, it's just we don't build those > packages specifically for fedora linux. Oh, OK - thanks; I used this file from the nginx site and all appears to be OK now: http://wiki.nginx.org/FedoraSystemdServiceFile Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257457,257540#msg-257540 From al-nginx at none.at Mon Mar 23 11:45:50 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Mon, 23 Mar 2015 12:45:50 +0100 Subject: I need some urgent rewrites In-Reply-To: References: <2ef25ccd5efcae9c3e15fc30d02a43c6@none.at> Message-ID: <9137a5c48ee55ca931b7062b6c3c5eb8@none.at> Dear Milos. Am 22-03-2015 15:26, schrieb Milos: > WOW, thanks Aleksandar. that works for my. [snipp] > My attempts have all failed. What were your attempts and how did they fail? Have you also taken a look at this link and the other links in my first answer?
http://nginx.org/en/docs/http/converting_rewrite_rules.html > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,257523,257538#msg-257538 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Mar 23 13:09:07 2015 From: nginx-forum at nginx.us (elronar) Date: Mon, 23 Mar 2015 09:09:07 -0400 Subject: ProxyPass to target need to pass another proxy Message-ID: Hey guys, I have the following plan. I need to configure an nginx via proxypass. The target of the proxy itself is behind a proxy. My config looks like this: upstream @squid { server XXX.YYY.FFF.EEE:3128; } server { listen 80; server_name install.peng.puff.de; location ~ ^/(mirror/ubuntu|debian-peng/ubuntu|ppa/libreoffice/ppa/ubuntu|ppa/natecarlson/maven3/ubuntu|ppa/ondrej/php5-5.6/ubuntu|mirror/apt.puppetlabs.com|VirtualBox/debian|ppa/webupd8team/java/ubuntu)(.*)$ { proxy_pass http://@squid/$scheme://$host$uri; proxy_redirect off; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Request-URI $request_uri; proxy_set_header Host install.peng.puff.de; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } The squid and host entries are concatenated, but squid is not able to parse the result because of the / between @squid and $scheme. So squid tries to connect to: 1426763671.035 0 172.31.2.35 TAG_NONE/400 3706 GET /http://install.peng.puff.de/ppa/webupd8team/java/ubuntu/dists/trusty/main/source/Sources - HIER_NONE/- text/html I also tried to delete the slash, but that attempt was rejected as an invalid config by nginx. Anyone have another idea? In apache2 it is "only" a ProxyRemote * XXX.YYY.FFF.EEE:3128 .
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257562,257562#msg-257562 From francis at daoine.org Mon Mar 23 13:38:08 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 23 Mar 2015 13:38:08 +0000 Subject: ProxyPass to target need to pass another proxy In-Reply-To: References: Message-ID: <20150323133808.GI29618@daoine.org> On Mon, Mar 23, 2015 at 09:09:07AM -0400, elronar wrote: Hi there, > I need to configure an nginx via proxypass. The target of the proxy itself > is behind a proxy. nginx as a client speaks http to a http server. It does not speak http-being-proxied to a http proxy server. > Anyone another idea? Assuming that I've correctly understood your setup: Option 1 - do not use nginx as the client. Option 2 - change your squid to listen for http, not just for http-being-proxied. (This may be called "transparent mode". The squid documentation probably has more.) > In apache2 it is "only" a ProxyRemote * XXX.YYY.FFF.EEE:3128 . apache2 does speak http-being-proxied to a http proxy server. f -- Francis Daly francis at daoine.org From makailol7 at gmail.com Mon Mar 23 13:48:32 2015 From: makailol7 at gmail.com (Makailol Charls) Date: Mon, 23 Mar 2015 19:18:32 +0530 Subject: How to apply concurrent connection limit ? Message-ID: Hello, We have been providing API to our customers and want to apply concurrent connection limit for API calls. Would anybody let us know which module should be used with configuration example? We also need to exclude (whitelist) some IPs from this connection limit and need to allow more connections. Thanks, Bhargav -------------- next part -------------- An HTML attachment was scrubbed... URL: From leave at nixkid.com Mon Mar 23 13:54:02 2015 From: leave at nixkid.com (Pavel Mihaduk) Date: Mon, 23 Mar 2015 16:54:02 +0300 Subject: How to apply concurrent connection limit ? 
In-Reply-To: References: Message-ID: <1704850.4rjl6tooz4@mihaduk-laptop> http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html Hello, We have been providing API to our customers and want to apply concurrent connection limit for API calls. Would anybody let us know which module should be used with configuration example? We also need to exclude (whitelist) some IPs from this connection limit and need to allow more connections. Thanks, Bhargav -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Mon Mar 23 14:10:25 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 23 Mar 2015 19:10:25 +0500 Subject: Internal Server Error !! Message-ID: Hi, Nginx logging mp4 related error intermittently. Following is the log : 2015/03/23 19:01:53 [crit] 8671#0: *782950 pread() "/tunefiles/storage17/files/videos/2014/05/07/13994800482e2b0-360.mp4" failed (22: Invalid argument), client: 182.178.204.162, server: storage17.domain.com, request: "GET /files/videos/2014/05/07/13994800482e2b0-360.mp4?start=31.832 HTTP/1.1", host: "storage17.domain.com", referrer: " http://static.tune.pk/tune_player/tune.swf?v2" We've changed nginx-1.6.2 banner to as follows : nginx version: tune-webserver/1.0.4 built by gcc 4.7.2 (Debian 4.7.2-5) configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --user=nginx --group=nginx --with-http_flv_module --with-http_mp4_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-ld-opt='-L /usr/lib/x86_64-linux-gnu' Could anyone please assist me regarding this issue? Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From steve at greengecko.co.nz Mon Mar 23 20:13:50 2015 From: steve at greengecko.co.nz (Steve Holdoway) Date: Tue, 24 Mar 2015 09:13:50 +1300 Subject: disable file uploads Message-ID: <1427141630.3304.54.camel@steve-new> Is there any way to stop / disable random file uploads... for example, I'm having 'fun' with mail relays being uploaded to the cache area of a wordpress site? Can't think of anything off the top of my head that would do it. Cheers, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From francis at daoine.org Mon Mar 23 22:52:17 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 23 Mar 2015 22:52:17 +0000 Subject: disable file uploads In-Reply-To: <1427141630.3304.54.camel@steve-new> References: <1427141630.3304.54.camel@steve-new> Message-ID: <20150323225217.GJ29618@daoine.org> On Tue, Mar 24, 2015 at 09:13:50AM +1300, Steve Holdoway wrote: Hi there, > Is there any way to stop / disable random file uploads... for example, > I'm having 'fun' with mail relays being uploaded to the cache area of a > wordpress site? What the difference between a request that is a file upload and a request that is not a file upload, on your system? Are there some specific urls you want to block? Do you want to block all POST requests? > Can't think of anything off the top of my head that would do it. Would it be simpler for you to configure your wordpress to disallow file uploads? f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Mar 23 22:59:00 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 23 Mar 2015 22:59:00 +0000 Subject: $skip_cache define home page In-Reply-To: References: Message-ID: <20150323225900.GK29618@daoine.org> On Sun, Mar 22, 2015 at 06:35:31AM -0400, carnagel wrote: Hi there, > I understand how to skip cache on cookies, POST, query strings, urls > containing string etc How do you skip cache on urls containing strings? 
> But how do you define the home page itself in $skip_cache please? What url or urls is "the home page"? f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Mar 23 23:23:59 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 23 Mar 2015 23:23:59 +0000 Subject: Rewrite undecoded URL In-Reply-To: <74b8e1cea216e821a88c46b00ef9c780.NginxMailingListEnglish@forum.nginx.org> References: <74b8e1cea216e821a88c46b00ef9c780.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150323232359.GL29618@daoine.org> On Thu, Mar 19, 2015 at 11:52:41AM -0400, youngde811 wrote: Hi there, > Hello. We are trying to use the nginx rewrite rule, without the application > of URL decoding. The relevant portion of our test configuration is: I'm afraid that I am not sure what response you want nginx to give to your incoming request. But if you want to mangle a variable, "map" is usually a good thing to use. For example map $request_uri $strip_the_slash_p { default ""; ~^/p(?P<one>/.*) $one; } and then you can later use the variable where it is set, such as location /p/ { return 301 $strip_the_slash_p; } f -- Francis Daly francis at daoine.org From steve at greengecko.co.nz Mon Mar 23 23:47:38 2015 From: steve at greengecko.co.nz (Steve Holdoway) Date: Tue, 24 Mar 2015 12:47:38 +1300 Subject: disable file uploads In-Reply-To: <20150323225217.GJ29618@daoine.org> References: <1427141630.3304.54.camel@steve-new> <20150323225217.GJ29618@daoine.org> Message-ID: <1427154458.3304.59.camel@steve-new> On Mon, 2015-03-23 at 22:52 +0000, Francis Daly wrote: > On Tue, Mar 24, 2015 at 09:13:50AM +1300, Steve Holdoway wrote: > > Hi there, > > > Is there any way to stop / disable random file uploads... for example, > > I'm having 'fun' with mail relays being uploaded to the cache area of a > > wordpress site? > > What the difference between a request that is a file upload and a request > that is not a file upload, on your system? > > Are there some specific urls you want to block? 
Do you want to block > all POST requests? > > > Can't think of anything off the top of my head that would do it. > > Would it be simpler for you to configure your wordpress to disallow > file uploads? > > f I would like to block at web server level if possible, seems the most sensible to me. This is what I currently use for wordpress ( after this morning lol ) # set the static ones first, then the catchall # Directives to send expires headers and turn off 404 error logging. location ~* ^/(?:uploads|files|cache|plugins)/.*\.(png|gif|jpg| jpeg|css|js|swf|ico|txt|xml|bmp|pdf|doc|docx|ppt|pptx|zip|woff|ttf|otf| xls|myo|qbb|pst|dat|qbx|bc7|cf7)$ { expires 24h; log_not_found off; } location ~* ^/wp-content/(files|uploads|cache|plugins)/.*.(|php| js|swf)$ { types { } default_type text/plain; } I think I should be able to simplify it by having the block before a straight catchall with no extensions listed, which would help ( although a zero expiry on .html would probably be beneficial ). Cheers, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From francis at daoine.org Tue Mar 24 00:00:45 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 24 Mar 2015 00:00:45 +0000 Subject: disable file uploads In-Reply-To: <1427154458.3304.59.camel@steve-new> References: <1427141630.3304.54.camel@steve-new> <20150323225217.GJ29618@daoine.org> <1427154458.3304.59.camel@steve-new> Message-ID: <20150324000045.GM29618@daoine.org> On Tue, Mar 24, 2015 at 12:47:38PM +1300, Steve Holdoway wrote: > On Mon, 2015-03-23 at 22:52 +0000, Francis Daly wrote: > > On Tue, Mar 24, 2015 at 09:13:50AM +1300, Steve Holdoway wrote: Hi there, > > > Is there any way to stop / disable random file uploads... for example, > > > I'm having 'fun' with mail relays being uploaded to the cache area of a > > > wordpress site? 
> > > > What the difference between a request that is a file upload and a request > > that is not a file upload, on your system? > # set the static ones first, then the catchall > # Directives to send expires headers and turn off 404 error > logging. > location ~* ^/(?:uploads|files|cache|plugins)/.*\.(png|gif|jpg| > jpeg|css|js|swf|ico|txt|xml|bmp|pdf|doc|docx|ppt|pptx|zip|woff|ttf|otf| > xls|myo|qbb|pst|dat|qbx|bc7|cf7)$ { > expires 24h; > log_not_found off; > } For requests that match this location block, serve from the filesystem. > location ~* ^/wp-content/(files|uploads|cache|plugins)/.*.(|php| > js|swf)$ { > types { } > default_type text/plain; > } For requests that match this location block, serve from the filesystem. None of that seems to say "handle file uploads". I confess I'm somewhat confused about what your question is. What request do you make of nginx, that does not give you the response that you want? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Mar 24 00:42:39 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Mon, 23 Mar 2015 20:42:39 -0400 Subject: end-to-end disconnect notification? Message-ID: <069c51bae2c7ccf98e9eeb7feb90ae0d.NginxMailingListEnglish@forum.nginx.org> Hi, given client --(tcp)-->nginx --(fcgi)-->fcgi server --(tcp)--> back-end server if the client initiates a TCP disconnect, is there a way for NGINX to carry out the termination to the fcgi server? Or if the back-end server disconnects, how can the fcgi server communicate the disconnect all the way to nginx and the client? From what I observed, a client could send a TCP FIN, but NGINX will keep the FCGX connection up. Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257618,257618#msg-257618 From nginx-forum at nginx.us Tue Mar 24 00:43:27 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Mon, 23 Mar 2015 20:43:27 -0400 Subject: what does fastcgi_keep_conn do? 
In-Reply-To: References: Message-ID: <76eab7230400e99f6f58bd312276748d.NginxMailingListEnglish@forum.nginx.org> Thank you, that did the trick. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257508,257619#msg-257619 From steve at greengecko.co.nz Tue Mar 24 01:15:10 2015 From: steve at greengecko.co.nz (Steve Holdoway) Date: Tue, 24 Mar 2015 14:15:10 +1300 Subject: disable file uploads In-Reply-To: <20150324000045.GM29618@daoine.org> References: <1427141630.3304.54.camel@steve-new> <20150323225217.GJ29618@daoine.org> <1427154458.3304.59.camel@steve-new> <20150324000045.GM29618@daoine.org> Message-ID: <1427159710.3304.73.camel@steve-new> On Tue, 2015-03-24 at 00:00 +0000, Francis Daly wrote: > On Tue, Mar 24, 2015 at 12:47:38PM +1300, Steve Holdoway wrote: > > On Mon, 2015-03-23 at 22:52 +0000, Francis Daly wrote: > > > On Tue, Mar 24, 2015 at 09:13:50AM +1300, Steve Holdoway wrote: > > Hi there, > > > > > Is there any way to stop / disable random file uploads... for example, > > > > I'm having 'fun' with mail relays being uploaded to the cache area of a > > > > wordpress site? > > > > > > What the difference between a request that is a file upload and a request > > > that is not a file upload, on your system? > > > # set the static ones first, then the catchall > > # Directives to send expires headers and turn off 404 error > > logging. > > location ~* ^/(?:uploads|files|cache|plugins)/.*\.(png|gif|jpg| > > jpeg|css|js|swf|ico|txt|xml|bmp|pdf|doc|docx|ppt|pptx|zip|woff|ttf|otf| > > xls|myo|qbb|pst|dat|qbx|bc7|cf7)$ { > > expires 24h; > > log_not_found off; > > } > > For requests that match this location block, serve from the filesystem. > > > location ~* ^/wp-content/(files|uploads|cache|plugins)/.*.(|php| > > js|swf)$ { > > types { } > > default_type text/plain; > > } > > For requests that match this location block, serve from the filesystem. > > None of that seems to say "handle file uploads". 
> > I confess I'm somewhat confused about what your question is. > > What request do you make of nginx, that does not give you the response > that you want? > > f Sorry, This is the best block I can find, where the intention is that php files are just served as text, not processed, which should be good and annoying for the users as well. As I said, I can't work out how on earth to stop them being uploaded in the first place. Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From rpaprocki at fearnothingproductions.net Tue Mar 24 02:57:07 2015 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Mon, 23 Mar 2015 19:57:07 -0700 Subject: disable file uploads In-Reply-To: <1427159710.3304.73.camel@steve-new> References: <1427141630.3304.54.camel@steve-new> <20150323225217.GJ29618@daoine.org> <1427154458.3304.59.camel@steve-new> <20150324000045.GM29618@daoine.org> <1427159710.3304.73.camel@steve-new> Message-ID: <5B0D1D91-5EDE-4601-AF85-B23FEF7259E8@fearnothingproductions.net> Sounds like you either have a vulnerable web application or hole in your systems security. If the root of your problem is that your having content uploaded to your server without your consent, you're asking the wrong question. If your app does allow for arbitrary file upload, you can disallow certain file extensions, but that should be handled in whatever Wordpress plugin you're using. > On Mar 23, 2015, at 18:15, Steve Holdoway wrote: > >> On Tue, 2015-03-24 at 00:00 +0000, Francis Daly wrote: >>> On Tue, Mar 24, 2015 at 12:47:38PM +1300, Steve Holdoway wrote: >>>> On Mon, 2015-03-23 at 22:52 +0000, Francis Daly wrote: >>>> On Tue, Mar 24, 2015 at 09:13:50AM +1300, Steve Holdoway wrote: >> >> Hi there, >> >>>>> Is there any way to stop / disable random file uploads... for example, >>>>> I'm having 'fun' with mail relays being uploaded to the cache area of a >>>>> wordpress site? 
>>>> >>>> What the difference between a request that is a file upload and a request >>>> that is not a file upload, on your system? >> >>> # set the static ones first, then the catchall >>> # Directives to send expires headers and turn off 404 error >>> logging. >>> location ~* ^/(?:uploads|files|cache|plugins)/.*\.(png|gif|jpg| >>> jpeg|css|js|swf|ico|txt|xml|bmp|pdf|doc|docx|ppt|pptx|zip|woff|ttf|otf| >>> xls|myo|qbb|pst|dat|qbx|bc7|cf7)$ { >>> expires 24h; >>> log_not_found off; >>> } >> >> For requests that match this location block, serve from the filesystem. >> >>> location ~* ^/wp-content/(files|uploads|cache|plugins)/.*.(|php| >>> js|swf)$ { >>> types { } >>> default_type text/plain; >>> } >> >> For requests that match this location block, serve from the filesystem. >> >> None of that seems to say "handle file uploads". >> >> I confess I'm somewhat confused about what your question is. >> >> What request do you make of nginx, that does not give you the response >> that you want? >> >> f > Sorry, > > This is the best block I can find, where the intention is that php files > are just served as text, not processed, which should be good and > annoying for the users as well. > > As I said, I can't work out how on earth to stop them being uploaded in > the first place. 
> > Steve > > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From steve at greengecko.co.nz Tue Mar 24 03:15:35 2015 From: steve at greengecko.co.nz (Steve Holdoway) Date: Tue, 24 Mar 2015 16:15:35 +1300 Subject: disable file uploads In-Reply-To: <5B0D1D91-5EDE-4601-AF85-B23FEF7259E8@fearnothingproductions.net> References: <1427141630.3304.54.camel@steve-new> <20150323225217.GJ29618@daoine.org> <1427154458.3304.59.camel@steve-new> <20150324000045.GM29618@daoine.org> <1427159710.3304.73.camel@steve-new> <5B0D1D91-5EDE-4601-AF85-B23FEF7259E8@fearnothingproductions.net> Message-ID: <1427166935.3304.77.camel@steve-new> On Mon, 2015-03-23 at 19:57 -0700, Robert Paprocki wrote: > Sounds like you either have a vulnerable web application or hole in your systems security. If the root of your problem is that your having content uploaded to your server without your consent, you're asking the wrong question. > > If your app does allow for arbitrary file upload, you can disallow certain file extensions, but that should be handled in whatever Wordpress plugin you're using. > Well, I'm going for the multiple levels of protection approach, but am trying to mate that with a 'simple to maintain' methodology. So, yes I'd like to do both, but without being heavy-handed on the website owners. Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From makailol7 at gmail.com Tue Mar 24 03:23:34 2015 From: makailol7 at gmail.com (Makailol Charls) Date: Tue, 24 Mar 2015 08:53:34 +0530 Subject: How to apply concurrent connection limit ? In-Reply-To: <1704850.4rjl6tooz4@mihaduk-laptop> References: <1704850.4rjl6tooz4@mihaduk-laptop> Message-ID: Hey thanks for quick reply. 
As I mentioned in previous email, we also need to exclude some IPs from this connection limit so would you provide some example of IP whitelisting? Thanks, On Mon, Mar 23, 2015 at 7:24 PM, Pavel Mihaduk wrote: > http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html > > > > > On 23 March 2015 19:18:32 Makailol Charls wrote: > > Hello, > > > We have been providing API to our customers and want to apply concurrent > connection limit for API calls. Would anybody let us know which module > should be used with configuration example? > > > We also need to exclude (whitelist) some IPs from this connection limit > and need to allow more connections. > > > Thanks, > > Bhargav > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From niteshnarayanlalleo at gmail.com Tue Mar 24 07:33:11 2015 From: niteshnarayanlalleo at gmail.com (nitesh narayan lal) Date: Tue, 24 Mar 2015 13:03:11 +0530 Subject: Compiling and using NGINX in multi thread mode Message-ID: Hi, I hope that I am posting at the right place. I am trying to figure out if multi-thread mode is supported with the latest release, and if so then what I need to do for using it? I am trying to compile it from its source code for PowerPC architecture. But I have seen that the following is been commented out in the auto/options file. #--with-threads=*) USE_THREADS="$value" ;; #--with-threads) USE_THREADS="pthreads" ;; Due to which I am bit confused if its supported or not! -- Regards Nitesh Narayan Lal http://www.niteshnarayanlal.org/ From firstawhois at gmail.com Tue Mar 24 07:39:23 2015 From: firstawhois at gmail.com (=?UTF-8?B?5byg5aWV5pmu?=) Date: Tue, 24 Mar 2015 15:39:23 +0800 Subject: Why subtraction is used to compare in nginx_rbtree.c? Message-ID: When I study the rbtree of nginx I can't understand the comments. 
Can someone give me a concrete example? Thanks :) for ( ;; ) { /* * Timer values * 1) are spread in small range, usually several minutes, * 2) and overflow each 49 days, if milliseconds are stored in 32 bits. * The comparison takes into account that overflow. */ /* node->key < temp->key */ p = ((ngx_rbtree_key_int_t) (node->key - temp->key) < 0) ? &temp->left : &temp->right; if (*p == sentinel) { break; } temp = *p; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx at 2xlp.com Tue Mar 24 15:33:41 2015 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Tue, 24 Mar 2015 11:33:41 -0400 Subject: proper way to redirect from http to https w/query string notifier Message-ID: i need to redirecting from http to https, and append a "source" attribute for tracking (we're trying to figure out how the wrong requests are coming in) this seems to work: if ($query_string){ return 301 https://$host$request_uri&source=server1 ; } return 301 https://$host$request_uri?source=server1 ; I'm just wondering if there is a more appropriate way From mdounin at mdounin.ru Tue Mar 24 15:42:34 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Mar 2015 18:42:34 +0300 Subject: Why subtraction is used to compare in nginx_rbtree.c? In-Reply-To: References: Message-ID: <20150324154234.GR88631@mdounin.ru> Hello! On Tue, Mar 24, 2015 at 03:39:23PM +0800, 张奕普 wrote: > When I study the rbtree of nginx I can't understand the comments. Can > someone give me a concrete example? Thanks :) [...] > /* > * Timer values > * 1) are spread in small range, usually several minutes, > * 2) and overflow each 49 days, if milliseconds are stored in 32 bits. > * The comparison takes into account that overflow. > */ > > /* node->key < temp->key */ > > p = ((ngx_rbtree_key_int_t) (node->key - temp->key) < 0) > ? &temp->left : &temp->right; Consider (assuming 32-bit keys): node->key = 4294967295 temp->key = 0 The 0 value is bigger, because it's (4294967295 + 1). 
And that's what the code checks. The (ngx_rbtree_key_int_t) (node->key - temp->key) evaluates to (ngx_rbtree_key_int_t) (4294967295 - 0) and becomes negative after casting to signed (note: this conversion result is actually implementation-defined, but this is how it works in practice). -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Mar 24 16:22:00 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Mar 2015 19:22:00 +0300 Subject: nginx-1.7.11 Message-ID: <20150324162200.GS88631@mdounin.ru> Changes with nginx 1.7.11 24 Mar 2015 *) Change: the "sendfile" parameter of the "aio" directive is deprecated; now nginx automatically uses AIO to pre-load data for sendfile if both "aio" and "sendfile" directives are used. *) Feature: experimental thread pools support. *) Feature: the "proxy_request_buffering", "fastcgi_request_buffering", "scgi_request_buffering", and "uwsgi_request_buffering" directives. *) Feature: request body filters experimental API. *) Feature: client SSL certificates support in mail proxy. Thanks to Sven Peter, Franck Levionnois, and Filipe Da Silva. *) Feature: startup speedup when using the "hash ... consistent" directive in the upstream block. Thanks to Wai Keen Woon. *) Feature: debug logging into a cyclic memory buffer. *) Bugfix: in hash table handling. Thanks to Chris West. *) Bugfix: in the "proxy_cache_revalidate" directive. *) Bugfix: SSL connections might hang if deferred accept or the "proxy_protocol" parameter of the "listen" directive were used. Thanks to James Hamlin. *) Bugfix: the $upstream_response_time variable might contain a wrong value if the "image_filter" directive was used. *) Bugfix: in integer overflow handling. Thanks to Régis Leroy. *) Bugfix: it was not possible to enable SSLv3 with LibreSSL. *) Bugfix: the "ignoring stale global SSL error ... called a function you should not call" alerts appeared in logs when using LibreSSL. 
*) Bugfix: certificates specified by the "ssl_client_certificate" and "ssl_trusted_certificate" directives were inadvertently used to automatically construct certificate chains. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Mar 24 16:50:55 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Mar 2015 19:50:55 +0300 Subject: Compiling and using NGINX in multi thread mode In-Reply-To: References: Message-ID: <20150324165055.GW88631@mdounin.ru> Hello! On Tue, Mar 24, 2015 at 01:03:11PM +0530, nitesh narayan lal wrote: > Hi, > > I hope that I am posting at the right place. > I am trying to figure out if multi-thread mode is supported with the > latest release, and if so then what I need to do for using it? > I am trying to compile it from its source code for PowerPC architecture. > But I have seen that the following is been commented out in the > auto/options file. > #--with-threads=*) USE_THREADS="$value" ;; > #--with-threads) USE_THREADS="pthreads" ;; > Due to which I am bit confused if its supported or not! There is no support for threads in previous releases of nginx - only some obsolete code from previous experiments. In just released 1.7.11 an experimental support for thread pools was introduced though. More details can be found here: http://nginx.org/r/aio In short: you have to compile nginx with --with-threads (which is supported now), and then use "aio threads". 
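[Editor's note: Maxim's recipe — build with --with-threads, then use "aio threads" — can be sketched as a minimal configuration. The directive names follow the 1.7.11 documentation; the pool name and sizes below are illustrative, not recommendations.]

```nginx
# main context (nginx built with: ./configure --with-threads)
thread_pool iopool threads=32 max_queue=65536;

http {
    server {
        listen 80;
        location /downloads/ {
            sendfile on;
            aio      threads=iopool;   # offload blocking file reads to the pool
        }
    }
}
```

A bare "aio threads;" uses the built-in "default" pool; naming a pool only matters when different locations need separately sized pools.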
-- Maxim Dounin http://nginx.org/ From francis at daoine.org Tue Mar 24 17:32:43 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 24 Mar 2015 17:32:43 +0000 Subject: proper way to redirect from http to https w/query string notifier In-Reply-To: References: Message-ID: <20150324173243.GN29618@daoine.org> On Tue, Mar 24, 2015 at 11:33:41AM -0400, Jonathan Vanasco wrote: Hi there, > if ($query_string){ > return 301 https://$host$request_uri&source=server1 ; > } > return 301 https://$host$request_uri?source=server1 ; > > I'm just wondering if there is a more appropriate way If your backend will accept /request?source=server1 and /request?&source=server1 as being equivalent, then you could use the $is_args variable and just always return 301 https://$host$request_uri$is_args&source=server1; f -- Francis Daly francis at daoine.org From me at myconan.net Tue Mar 24 17:50:29 2015 From: me at myconan.net (Edho Arief) Date: Wed, 25 Mar 2015 02:50:29 +0900 Subject: proper way to redirect from http to https w/query string notifier In-Reply-To: <20150324173243.GN29618@daoine.org> References: <20150324173243.GN29618@daoine.org> Message-ID: On Wed, Mar 25, 2015 at 2:32 AM, Francis Daly wrote: > On Tue, Mar 24, 2015 at 11:33:41AM -0400, Jonathan Vanasco wrote: > > Hi there, > >> if ($query_string){ >> return 301 https://$host$request_uri&source=server1 ; >> } >> return 301 https://$host$request_uri?source=server1 ; >> >> I'm just wondering if there is a more appropriate way > > If your backend will accept /request?source=server1 and > /request?&source=server1 as being equivalent, then you could use the > $is_args variable and just always > > return 301 https://$host$request_uri$is_args&source=server1; > that looks wrong since when there's argument: $request_uri: /path/name?arg=uments $is_args: ? whereas when there's no argument: $request_uri: /path/name $is_args: (now imagine when your return is used) . 
One possible solution would be just `$host$uri?source=server1&$args` From gmm at csdoc.com Tue Mar 24 18:10:08 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Tue, 24 Mar 2015 20:10:08 +0200 Subject: proper way to redirect from http to https w/query string notifier In-Reply-To: References: Message-ID: <5511A880.6060706@csdoc.com> On 24.03.2015 17:33, Jonathan Vanasco wrote: > i need to redirecting from http to https, > and append a "source" attribute for tracking > (we're trying to figure out how the wrong requests are coming in) Probably you can do such tracking just looking at Referer request header > this seems to work: > if ($query_string){ > return 301 https://$host$request_uri&source=server1 ; > } > return 301 https://$host$request_uri?source=server1 ; > > I'm just wondering if there is a more appropriate way Yes, you can use $uri variable and rewrite directive for this: http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#rewrite | If a replacement string includes the new request arguments, | the previous request arguments are appended after them. rewrite ^ https://$host$uri?source=server1 permanent; -- Best regards, Gena From reallfqq-nginx at yahoo.fr Tue Mar 24 18:11:17 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 24 Mar 2015 19:11:17 +0100 Subject: Default value of gzip_proxied In-Reply-To: <20150322170654.GX88631@mdounin.ru> References: <20150321142132.GP88631@mdounin.ru> <20150322013116.GU88631@mdounin.ru> <20150322170654.GX88631@mdounin.ru> Message-ID: Hi Maxim, There is still something I do not get... The gzip_proxied default value is set to honor the HTTP/1.0 protocol (which does not have the Vary header and thus is unable to cache different versions of a document) in some proxies. However, the gzip_http_version default value is set so that only HTTP/1.1 requests are being compressed... Thus with the default setting it is impossible to compress requests advertising HTTP/1.0. 
The RFC dictates: Intermediaries that process HTTP messages (i.e., all intermediaries other than those acting as a tunnel) MUST send their own HTTP-Version in forwarded messages. In other words, they MUST NOT blindly forward the first line of an HTTP message without ensuring that the protocol version matches what the intermediary understands, and is at least conditionally compliant to, for both the receiving and sending of messages. 'tunnel' is considered different as a 'proxy', as section 2.3 indicates: There are three common forms of HTTP intermediary: proxy, gateway, and tunnel. Thus, any HTTP/1.0 proxy should be seen with a HTTP/1.0 protocol version header... and thus should naturally get an uncompressed version of the page. Non-compliant proxies can be bogus in thousands of way, so there is no point in trying to satisfy them anyway. In the light of these elements, I am still wondering why the default behavior of the gzip module for HTTP/1.1 requests going through a (HTTP/1.1) proxy is to send a disturbing uncompressed version of the page. --- *B. R.* On Sun, Mar 22, 2015 at 6:06 PM, Maxim Dounin wrote: > Hello! > > On Sun, Mar 22, 2015 at 03:14:22PM +0100, B.R. wrote: > > > I do not get why you focus on the gzip_vary directive, while I was > > explicitely talking about gzip_proxied. > > The fact that content supposedly compressed might actually not be because > > it contains a 'Via' header is the root cause of our trouble... and you > just > > told me it was for HTTP/1.0 compatibility. > > With HTTP/1.0, there is only one safe option: > > - don't compress anything for proxies. > > With HTTP/1.1, there are two options: > > - don't compress anything for proxies; > > - compress for proxies, but send Vary to avoid incorrect behaviour. > > The second options, which becomes available if you don't care > about HTTP/1.0 compatibility at all, has its downsides I've > talked about. 
> > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Mar 24 19:26:19 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 24 Mar 2015 19:26:19 +0000 Subject: proper way to redirect from http to https w/query string notifier In-Reply-To: References: <20150324173243.GN29618@daoine.org> Message-ID: <20150324192619.GO29618@daoine.org> On Wed, Mar 25, 2015 at 02:50:29AM +0900, Edho Arief wrote: > On Wed, Mar 25, 2015 at 2:32 AM, Francis Daly wrote: Hi there, > > If your backend will accept /request?source=server1 and > > /request?&source=server1 as being equivalent, then you could use the > > $is_args variable and just always > > > > return 301 https://$host$request_uri$is_args&source=server1; > > > > that looks wrong since when there's argument: Ah, yes, you are correct - I had it backwards. I guess one could set something using "map" to be "?" if $is_args is empty and "&" if $is_args is ?, and build a correct "return" line from that -- but the original "if ($query_string)" is probably simpler at that point. > One possible solution would be just `$host$uri?source=server1&$args` > > That will work in the common case, but $uri has been percent-unescaped so may not be suitable to send as-is. Thanks, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Tue Mar 24 19:31:38 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Mar 2015 22:31:38 +0300 Subject: Default value of gzip_proxied In-Reply-To: References: <20150321142132.GP88631@mdounin.ru> <20150322013116.GU88631@mdounin.ru> <20150322170654.GX88631@mdounin.ru> Message-ID: <20150324193138.GA88631@mdounin.ru> Hello! On Tue, Mar 24, 2015 at 07:11:17PM +0100, B.R. wrote: > Hi Maxim, > > There is still something I do not get... 
> > The gzip_proxied > > default value is set to honor the HTTP/1.0 protocol (which does not have > the Vary header and thus is unable to cache different versions of a > document) in some proxies. You are still misunderstanding things. It's one of the two possible approaches to handle things even if we forget about HTTP/1.0 completely. > However, the gzip_http_version > > default value is set so that only HTTP/1.1 requests are being compressed... > Thus with the default setting it is impossible to compress requests > advertising HTTP/1.0. > > The RFC > > dictates: > > Intermediaries that process HTTP messages (i.e., all intermediaries > other than those acting as a tunnel) MUST send their own HTTP-Version > in forwarded messages. In other words, they MUST NOT blindly forward > the first line of an HTTP message without ensuring that the protocol > version matches what the intermediary understands, and is at least > conditionally compliant to, for both the receiving and sending of > messages. As you can see from the paragraph you quoted, nginx only knows HTTP version of the intermediary it got the request from. That is, there is no guarantee that there are no HTTP/1.0 proxies along the request/response chain. 
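[Editor's note: the trade-off Maxim describes can be made concrete. The module defaults implement the first option (never compress responses to proxied requests); opting into the second looks roughly like the sketch below, which assumes every cache along the chain speaks HTTP/1.1 and honors Vary.]

```nginx
gzip               on;
gzip_http_version  1.1;   # default: only HTTP/1.1 requests are compressed
gzip_proxied       any;   # default is "off": no compression when a Via header is present
gzip_vary          on;    # default is "off": add "Vary: Accept-Encoding"
```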
-- Maxim Dounin http://nginx.org/ From steve at greengecko.co.nz Tue Mar 24 20:04:18 2015 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 25 Mar 2015 09:04:18 +1300 Subject: disable file uploads In-Reply-To: <1427166935.3304.77.camel@steve-new> References: <1427141630.3304.54.camel@steve-new> <20150323225217.GJ29618@daoine.org> <1427154458.3304.59.camel@steve-new> <20150324000045.GM29618@daoine.org> <1427159710.3304.73.camel@steve-new> <5B0D1D91-5EDE-4601-AF85-B23FEF7259E8@fearnothingproductions.net> <1427166935.3304.77.camel@steve-new> Message-ID: <1427227458.3304.100.camel@steve-new> On Tue, 2015-03-24 at 16:15 +1300, Steve Holdoway wrote: > On Mon, 2015-03-23 at 19:57 -0700, Robert Paprocki wrote: > > Sounds like you either have a vulnerable web application or hole in your systems security. If the root of your problem is that your having content uploaded to your server without your consent, you're asking the wrong question. > > > > If your app does allow for arbitrary file upload, you can disallow certain file extensions, but that should be handled in whatever Wordpress plugin you're using. > > > Well, I'm going for the multiple levels of protection approach, but am > trying to mate that with a 'simple to maintain' methodology. > > So, yes I'd like to do both, but without being heavy-handed on the > website owners. > > > Steve Just had another attack on a drupal site. Should I resort to weird ownership / permissions at a system level? That just makes it really difficult for the client to keep their site current, which is pretty counter-productive. I did work out a couple of scripts for Magento to chown nobody / chattr +i to lock a site down when in 'production mode' and vv, but it is still an imposition. 
Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From francis at daoine.org Tue Mar 24 20:36:00 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 24 Mar 2015 20:36:00 +0000 Subject: disable file uploads In-Reply-To: <1427227458.3304.100.camel@steve-new> References: <1427141630.3304.54.camel@steve-new> <20150323225217.GJ29618@daoine.org> <1427154458.3304.59.camel@steve-new> <20150324000045.GM29618@daoine.org> <1427159710.3304.73.camel@steve-new> <5B0D1D91-5EDE-4601-AF85-B23FEF7259E8@fearnothingproductions.net> <1427166935.3304.77.camel@steve-new> <1427227458.3304.100.camel@steve-new> Message-ID: <20150324203600.GP29618@daoine.org> On Wed, Mar 25, 2015 at 09:04:18AM +1300, Steve Holdoway wrote: Hi there, > Just had another attack on a drupal site. Should I resort to weird > ownership / permissions at a system level? From what I've read in the thread, you seem to have two possible approaches. One is "stop the unwanted files from being uploaded". To do that, you will need to know how the unwanted files are uploaded -- if they don't go through nginx, no nginx config will block them. (If they *do* go through nginx, then there may be some correlation between file modification times and nginx request logs which indicates what request leads to the files being uploaded.) Are there ftp or scp or other logs indicating how these files are put onto your server? The other is "stop the unwanted files from being served"; but I think you also indicated that the unwanted files were being actively executed on your server. > That just makes it really > difficult for the client to keep their site current, which is pretty > counter-productive. More counter-productive than the reputation damage of running an exploited server? You're in damage-control mode. Turn everything off, or make everything read-only, until you can find out what has happened and can make it right.
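[Editor's note] On the nginx side, the "stop the unwanted files from being served" approach and the "make everything read-only" lockdown might be sketched as follows (the upload path is a typical Drupal files directory used here as a placeholder; adjust to the actual site layout):

```nginx
# Never hand files from the upload directory to the PHP backend;
# anything there is served as static content at most.
location ~ ^/sites/default/files/.*\.php$ {
    deny all;
}

# Damage-control mode: reject everything except read-only methods
# for the whole vhost.
location / {
    limit_except GET HEAD {
        deny all;
    }
}
```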
Good luck identifying the cause, f -- Francis Daly francis at daoine.org From nginx at 2xlp.com Tue Mar 24 22:23:34 2015 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Tue, 24 Mar 2015 18:23:34 -0400 Subject: proper way to redirect from http to https w/query string notifier In-Reply-To: <5511A880.6060706@csdoc.com> References: <5511A880.6060706@csdoc.com> Message-ID: <158A1FCE-A7E7-4241-BA40-247094A7283E@2xlp.com> On Mar 24, 2015, at 2:10 PM, Gena Makhomed wrote: > Probably you can do such tracking just looking at Referer request header Long story short - we actually are doing that. This is just to get stats into the HTTPS log analyzer, which is a different system and much easier for us to deploy changes to. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx at 2xlp.com Tue Mar 24 22:24:49 2015 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Tue, 24 Mar 2015 18:24:49 -0400 Subject: proper way to redirect from http to https w/query string notifier In-Reply-To: <20150324192619.GO29618@daoine.org> References: <20150324173243.GN29618@daoine.org> <20150324192619.GO29618@daoine.org> Message-ID: On Mar 24, 2015, at 3:26 PM, Francis Daly wrote: > but the original "if ($query_string)" is probably simpler at > that point. thanks for the help! it's great having a second set of eyes on this! 
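[Editor's note] For the record, the map-based construction Francis described earlier in this thread could be written as follows (an untested sketch; "source=server1" is the tracking parameter from the thread):

```nginx
# "?" when the request has no query string, "&" when it already has one,
# so the appended tracking parameter is always separated correctly.
map $is_args $source_sep {
    default "?";
    "?"     "&";
}

server {
    listen 80;
    # $request_uri already contains the original (still percent-escaped)
    # query string, so the separator from the map is all that is needed.
    return 301 https://$host$request_uri${source_sep}source=server1;
}
```

This avoids the percent-unescaping caveat of $uri, since $request_uri is passed through untouched.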
From nginx at 2xlp.com Tue Mar 24 22:40:03 2015 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Tue, 24 Mar 2015 18:40:03 -0400 Subject: disable file uploads In-Reply-To: <1427166935.3304.77.camel@steve-new> References: <1427141630.3304.54.camel@steve-new> <20150323225217.GJ29618@daoine.org> <1427154458.3304.59.camel@steve-new> <20150324000045.GM29618@daoine.org> <1427159710.3304.73.camel@steve-new> <5B0D1D91-5EDE-4601-AF85-B23FEF7259E8@fearnothingproductions.net> <1427166935.3304.77.camel@steve-new> Message-ID: On Mar 23, 2015, at 11:15 PM, Steve Holdoway wrote: > Well, I'm going for the multiple levels of protection approach, but am > trying to mate that with a 'simple to maintain' methodology. > > So, yes I'd like to do both, but without being heavy-handed on the > website owners. I understand the frustration of this. You don't need to have compromised software to be affected by it. Once someone finds out you have wordpress installed, you become subject to a lot of attacks and random POSTs -- as scripters try to exploit known issues. If you can do this -- one of the simplest things to do is to put as much of the wordpress "dashboard" behind a httpauth block in nginx, and disable POST everywhere but there. I've seen some large properties heavily configure wordpress to run on "admin.example.com" behind heavy auth, and then have "public.domain.com" simply handle GET requests. That may not work on your setup though. If you're using the internal wordpress comments tool or any of their api/web hooks, you'd need to open up those urls to POST -- but you can limit it to something arbitrarily small (e.g. 1k or less) There are also a few integration how-tos for using nginx with fail2ban. From reallfqq-nginx at yahoo.fr Tue Mar 24 22:59:24 2015 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Tue, 24 Mar 2015 23:59:24 +0100 Subject: Default value of gzip_proxied In-Reply-To: <20150324193138.GA88631@mdounin.ru> References: <20150321142132.GP88631@mdounin.ru> <20150322013116.GU88631@mdounin.ru> <20150322170654.GX88631@mdounin.ru> <20150324193138.GA88631@mdounin.ru> Message-ID: Hi, > > The gzip_proxied > > > > default value is set to honor the HTTP/1.0 protocol (which does not have > > the Vary header and thus is unable to cache different versions of a > > document) in some proxies. > > You are still misunderstanding things. It's one of the two > possible approaches to handle things even if we forget about > HTTP/1.0 completely. > Well, the only two possible approaches in 1.0 are to send compressed or uncompressed data. Clients supporting the compressed version will understand the uncompressed one, but the reverse might not be true. So if you are not able to make the answer being served from cache vary (which is the case in 1.0), you are actually stuck with a single option, the only common denominator: no compression at all. Right? > > However, the gzip_http_version > > < > http://nginx.org/en/docs/http/ngx_http_gzip_module.html#gzip_http_version> > > default value is set so that only HTTP/1.1 requests are being > compressed... > > Thus with the default setting it is impossible to compress requests > > advertising HTTP/1.0. > > > > The RFC > > < > http://tools.ietf.org/html/draft-ietf-httpbis-p1-messaging-14#section-2.5> > > dictates: > > > > Intermediaries that process HTTP messages (i.e., all intermediaries > > other than those acting as a tunnel) MUST send their own HTTP-Version > > in forwarded messages. In other words, they MUST NOT blindly forward > > the first line of an HTTP message without ensuring that the protocol > > version matches what the intermediary understands, and is at least > > conditionally compliant to, for both the receiving and sending of > > messages.
> > As you can see from the paragraph you quoted, nginx only knows > HTTP version of the intermediary it got the request from. That > is, there is no guarantee that there are no HTTP/1.0 proxies along > the request/response chain. > > > > The text I quoted means that at the end of a chain of intermediaries, you are ensured that you will end up with the greatest common denominator, i.e. if a single element of the intermediaries chain does not handle 1.1, it is required to forward the request with a 1.0 version header, which will then be left untouched by following intermediaries (as 1.0 is the smallest version available). An intermediary seeing a 1.1 request coming in but not supporting that version is required to step down to a version it understands, meaning 1.0. It should not forward 1.1. If nginx sees 1.1 coming, it is my understanding that every intermediary supports *at least* 1.1, whatever number of intermediaries we are talking about. What is it I do not get? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From baipeng512 at 163.com Wed Mar 25 03:55:14 2015 From: baipeng512 at 163.com (baipeng) Date: Wed, 25 Mar 2015 11:55:14 +0800 (CST) Subject: most of value of $request_time is 0.000 Message-ID: <10e46a34.7ec7.14c4f11e414.Coremail.baipeng512@163.com> Our version of nginx is nginx/1.0.8 and we have written a third-party module to handle the http request. Because the process model of our third-party module is synchronous, the nginx worker process is blocked until one request is accomplished. The average time cost of one request is about 50ms. But after we set $request_time in nginx.conf, most of the values of $request_time are 0.000. Even when we set timer_resolution 1ms in nginx.conf, most values of $request_time are still 0.000. Can anyone tell me why this occurs? Thx. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From niteshnarayanlalleo at gmail.com Wed Mar 25 06:00:01 2015 From: niteshnarayanlalleo at gmail.com (nitesh narayan lal) Date: Wed, 25 Mar 2015 11:30:01 +0530 Subject: Compiling and using NGINX in multi thread mode In-Reply-To: <20150324165055.GW88631@mdounin.ru> References: <20150324165055.GW88631@mdounin.ru> Message-ID: Hi Maxim, On Tue, Mar 24, 2015 at 10:20 PM, Maxim Dounin wrote: > Hello! > > On Tue, Mar 24, 2015 at 01:03:11PM +0530, nitesh narayan lal wrote: > > > Hi, > > > > I hope that I am posting at the right place. > > I am trying to figure out if multi-thread mode is supported with the > > latest release, and if so then what I need to do for using it? > > I am trying to compile it from its source code for PowerPC architecture. > > But I have seen that the following is been commented out in the > > auto/options file. > > #--with-threads=*) USE_THREADS="$value" ;; > > #--with-threads) USE_THREADS="pthreads" ;; > > Due to which I am bit confused if its supported or not! > > There is no support for threads in previous releases of nginx - > only some obsolete code from previous experiments. > > In just released 1.7.11 an experimental support for thread pools > was introduced though. More details can be found here: > > http://nginx.org/r/aio > > In short: you have to compile nginx with --with-threads (which is > supported now), and then use "aio threads". > Thanks a bunch for informing this. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Regards Nitesh Narayan Lal http://www.niteshnarayanlal.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Mar 25 08:40:48 2015 From: nginx-forum at nginx.us (mastercan) Date: Wed, 25 Mar 2015 04:40:48 -0400 Subject: [calling all patch XPerts !] 
[PATCH] RSA+DSA+ECC bundles In-Reply-To: <5433F296.9000506@riseup.net> References: <5433F296.9000506@riseup.net> Message-ID: It would be great if the official nginx had support for multiple certificates. Some bigger sites are already deploying ECDSA certificates. To be able to support older clients while using ECDSA we need multi certificate support. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253440,257667#msg-257667 From devel at jasonwoods.me.uk Wed Mar 25 12:04:03 2015 From: devel at jasonwoods.me.uk (Jason Woods) Date: Wed, 25 Mar 2015 12:04:03 +0000 Subject: "gzip on" duplicates Content-Encoding header if an empty one already exists Message-ID: Hi, I have a (probably dodgy) application that is sending out uncompressed XML with the following header. That is, an empty Content-Encoding header. Content-Encoding: This works fine, until I enable gzip on Nginx 1.6.2 latest (which is a proxy to the application.) Nginx compresses the XML, and adds ANOTHER Content-Encoding header, containing "gzip". I end up with this response: Content-Encoding: Content-Encoding: gzip This seems to break on Safari and Google Chrome (not tested other browsers.) They seem to ignore the latter header, and assume that content is not compressed, and try to render the binary compressed output. Is this an issue in the client implementations, an issue in the Nginx GZIP implementation, an issue in the upstream application, or a mixture of all 3? Looking at Nginx 1.6.2's ngx_http_gzip_filter_module.c lines 246 to 255 (which I believe is the correct place) it checks for existence of a Content-Encoding header with a positive length (non-zero) - so it looks like if any other Content-Encoding was already specified, Nginx GZIP does not do anything and does not duplicate header. So it seems the case of an empty Content-Encoding slips through. Should this be the case? Should it remove the existing blank header first, or just not GZIP if it exists and is empty? 
Thanks in advance, Jason From kworthington at gmail.com Wed Mar 25 13:47:33 2015 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 25 Mar 2015 09:47:33 -0400 Subject: [nginx-announce] nginx-1.7.11 In-Reply-To: <20150324162207.GT88631@mdounin.ru> References: <20150324162207.GT88631@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.7.11 for Windows http://goo.gl/DIqKtV (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Mar 24, 2015 at 12:22 PM, Maxim Dounin wrote: > Changes with nginx 1.7.11 24 Mar > 2015 > > *) Change: the "sendfile" parameter of the "aio" directive is > deprecated; now nginx automatically uses AIO to pre-load data for > sendfile if both "aio" and "sendfile" directives are used. > > *) Feature: experimental thread pools support. > > *) Feature: the "proxy_request_buffering", "fastcgi_request_buffering", > "scgi_request_buffering", and "uwsgi_request_buffering" directives. > > *) Feature: request body filters experimental API. > > *) Feature: client SSL certificates support in mail proxy. > Thanks to Sven Peter, Franck Levionnois, and Filipe Da Silva. > > *) Feature: startup speedup when using the "hash ... consistent" > directive in the upstream block. > Thanks to Wai Keen Woon. > > *) Feature: debug logging into a cyclic memory buffer. > > *) Bugfix: in hash table handling. > Thanks to Chris West. > > *) Bugfix: in the "proxy_cache_revalidate" directive. 
> > *) Bugfix: SSL connections might hang if deferred accept or the > "proxy_protocol" parameter of the "listen" directive were used. > Thanks to James Hamlin. > > *) Bugfix: the $upstream_response_time variable might contain a wrong > value if the "image_filter" directive was used. > > *) Bugfix: in integer overflow handling. > Thanks to Régis Leroy. > > *) Bugfix: it was not possible to enable SSLv3 with LibreSSL. > > *) Bugfix: the "ignoring stale global SSL error ... called a function > you should not call" alerts appeared in logs when using LibreSSL. > > *) Bugfix: certificates specified by the "ssl_client_certificate" and > "ssl_trusted_certificate" directives were inadvertently used to > automatically construct certificate chains. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Mar 25 15:03:12 2015 From: nginx-forum at nginx.us (asarnoldas) Date: Wed, 25 Mar 2015 11:03:12 -0400 Subject: How to apply concurrent connection limit ? In-Reply-To: References: Message-ID: I am searching for a similar thing and can't find anything... Is it possible to handle all of this in nginx (setting/evaluating headers, limiting connections, IP whitelisting)? https://developer.github.com/v3/rate_limit/ I want to throttle access to our API and send extra headers; based on the limit, say 100 req/minute for 1 IP, you would get a 429 HTTP error code ( Too many requests ) and an extra header X-RateLimit-Reset: would state how much time is left until the limit expires.
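[Editor's note] Stock nginx cannot compute the dynamic X-RateLimit-Remaining/X-RateLimit-Reset counters (that typically needs something like the Lua module), but the throttling-plus-whitelist part can be sketched with the limit_req machinery. The networks and rate below are placeholder values:

```nginx
# Whitelisted networks get value 0; everyone else gets 1.
geo $limited {
    default        1;
    10.0.0.0/8     0;   # placeholder whitelist networks
    192.168.0.0/16 0;
}

# An empty key is ignored by limit_req, so whitelisted clients
# are never throttled.
map $limited $limit_key {
    1 $binary_remote_addr;
    0 "";
}

limit_req_zone $limit_key zone=api:10m rate=100r/m;

server {
    location /api/ {
        limit_req        zone=api burst=20 nodelay;
        limit_req_status 429;   # "Too Many Requests" instead of the 503 default
    }
}
```

The geo-plus-map pattern is the whitelisting approach referenced later in this thread.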
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257572,257677#msg-257677 From nginx-forum at nginx.us Wed Mar 25 15:09:29 2015 From: nginx-forum at nginx.us (asarnoldas) Date: Wed, 25 Mar 2015 11:09:29 -0400 Subject: API ratelimiting, setting extra headers X-RateLimit-Limit X-RateLimit-Remaining X-RateLimit-Reset Message-ID: <0689177eaff70bf59b6dae73e1f02f85.NginxMailingListEnglish@forum.nginx.org> I am searching for API ratelimiting and can't find anything... Is it possible to handle all of this in nginx (setting/evaluating headers, limiting connections, IP whitelisting)? I want to throttle access to our API and send extra headers; based on the limit, say 100 req/minute for 1 IP, you would get a 429 HTTP error code ( Too many requests ) and an extra header X-RateLimit-Reset: would state how much time is left until the limit expires. X-RateLimit-Limit: 60 X-RateLimit-Remaining: 56 X-RateLimit-Reset: 54 Like github does and documents it: https://developer.github.com/v3/rate_limit/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257678,257678#msg-257678 From semenukha at gmail.com Wed Mar 25 19:08:12 2015 From: semenukha at gmail.com (Styopa Semenukha) Date: Wed, 25 Mar 2015 15:08:12 -0400 Subject: "gzip on" duplicates Content-Encoding header if an empty one already exists In-Reply-To: References: Message-ID: <1453824.6b2BFrKz7F@tornado> Hi Jason, Probably discarding the Content-Encoding header from the upstream will resolve this: http://nginx.org/r/proxy_hide_header On Wednesday, March 25, 2015 12:04:03 PM Jason Woods wrote: > Hi, > > I have a (probably dodgy) application that is sending out uncompressed XML > with the following header. That is, an empty Content-Encoding header. > > Content-Encoding: > > This works fine, until I enable gzip on Nginx 1.6.2 latest (which is a proxy > to the application.) Nginx compresses the XML, and adds ANOTHER > Content-Encoding header, containing "gzip".
I end up with this response: > > Content-Encoding: > Content-Encoding: gzip > > This seems to break on Safari and Google Chrome (not tested other browsers.) > They seem to ignore the latter header, and assume that content is not > compressed, and try to render the binary compressed output. Is this an > issue in the client implementations, an issue in the Nginx GZIP > implementation, an issue in the upstream application, or a mixture of all > 3? > > Looking at Nginx 1.6.2's ngx_http_gzip_filter_module.c lines 246 to 255 > (which I believe is the correct place) it checks for existence of a > Content-Encoding header with a positive length (non-zero) - so it looks > like if any other Content-Encoding was already specified, Nginx GZIP does > not do anything and does not duplicate header. So it seems the case of an > empty Content-Encoding slips through. Should this be the case? Should it > remove the existing blank header first, or just not GZIP if it exists and > is empty? > > Thanks in advance, > > Jason > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Best regards, Styopa Semenukha. From mdounin at mdounin.ru Wed Mar 25 20:00:49 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Mar 2015 23:00:49 +0300 Subject: most of value of $request_time is 0.000 In-Reply-To: <10e46a34.7ec7.14c4f11e414.Coremail.baipeng512@163.com> References: <10e46a34.7ec7.14c4f11e414.Coremail.baipeng512@163.com> Message-ID: <20150325200048.GC88631@mdounin.ru> Hello! On Wed, Mar 25, 2015 at 11:55:14AM +0800, baipeng wrote: > Our version of nginx is nginx/1.0.8 and we have write a third > party module to handle the http request. Because the process > model of our third party module is synchronous so the nginx > worker process will be blocked until one request accomplished. 
> The average time cost of one request is about 50ms.But After we > set $request_time in nginx.conf, the most of the value of > $request_time is 0.000.Even we set timer_resolution 1ms in > nginx.conf most of value of $request_time is still 0.000. > Are there anyone can tell me why it occured? Time as seen by nginx is only updated once per event loop iteration, so if you block processing for a while and then finalize the request - this time won't be visible in $request_time. The timer_resolution may change things on some platforms, but it's not designed to do so. Instead, it's to _reduce_ timer resolution compared to what nginx does by default, see http://nginx.org/r/timer_resolution. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Mar 25 21:26:10 2015 From: nginx-forum at nginx.us (mastercan) Date: Wed, 25 Mar 2015 17:26:10 -0400 Subject: proper setup for forward secrecy In-Reply-To: <20120924144151.GA40452@mdounin.ru> References: <20120924144151.GA40452@mdounin.ru> Message-ID: <8dd6cb23eecabd54351ca10eb6b81093.NginxMailingListEnglish@forum.nginx.org> This topic is 3 years old by now. Has something changed on OpenSSL key generation since then? Does anybody know? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229538,257690#msg-257690 From mdounin at mdounin.ru Wed Mar 25 21:36:55 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 Mar 2015 00:36:55 +0300 Subject: Default value of gzip_proxied In-Reply-To: References: <20150321142132.GP88631@mdounin.ru> <20150322013116.GU88631@mdounin.ru> <20150322170654.GX88631@mdounin.ru> <20150324193138.GA88631@mdounin.ru> Message-ID: <20150325213655.GD88631@mdounin.ru> Hello! On Tue, Mar 24, 2015 at 11:59:24PM +0100, B.R. wrote: > > > The gzip_proxied > > > > > > default value is set to honor the HTTP/1.0 protocol (which does not have > > > the Vary header and thus is unable to cache different versions of a > > > document) in some proxies. > > > > You are still misunderstanding things. 
It's one of the two > > possible approaches to handle things even if we forget about > > HTTP/1.0 completely. > > > > ?Well the 2 only possible approaches in 1.0 are to send compressed or > uncompressed data. > Client supporting compressed version? > > ?will understand the uncompressed one but the reverse might not be true. > So if you are not able to make the answer being served from cache to vary > (which is the case in 1.0), you are actually stuck with a single option, > the only common denominator: no compression at all. Right?? Yes, the only option if we care about HTTP/1.0 is to avoid compression for proxies. > > > However, the gzip_http_version > > > < > > http://nginx.org/en/docs/http/ngx_http_gzip_module.html#gzip_http_version> > > > default value is set so that only HTTP/1.1 requests are being > > compressed... > > > Thus with the default setting it is impossible to compress requests > > > advertising HTTP/1.0. > > > > > > The RFC > > > < > > http://tools.ietf.org/html/draft-ietf-httpbis-p1-messaging-14#section-2.5> > > > dictates: > > > > > > Intermediaries that process HTTP messages (i.e., all intermediaries > > > other than those acting as a tunnel) MUST send their own HTTP-Version > > > in forwarded messages. In other words, they MUST NOT blindly forward > > > the first line of an HTTP message without ensuring that the protocol > > > version matches what the intermediary understands, and is at least > > > conditionally compliant to, for both the receiving and sending of > > > messages. > > > > As you can see from the paragraph you quoted, nginx only knows > > HTTP version of the intermediary it got the request from. That > > is, there is no guarantee that there are no HTTP/1.0 proxies along > > the request/response chain. 
> > > > ?The text I quoted means than at the end of a chain of intermediaries, you > are ensured that you will end up with the greatest common denominator, ie > if a single element of the intermediaries chain does not handle 1.1, he his > required to forward the request with a 1.0 version header, which will then > be left untouched by following intermediaries (as 1.0 is the smallest > version available)?. > An intermediary seing a 1.1 request coming in but not supporting that > version is required to step down to a version it understands, meaning 1.0. > It should not forward 1.1. > > If nginx sees 1.1 coming, it is my understanding that every intermediary > supports *at least* 1.1, whatever number of intermediaries we are talking > about. No. The text ensures that nginx will see HTTP/1.0 if the last proxy doesn't understand HTTP/1.1. There is no requirement to preserve supported versions untouched. Moreover, first sentence requires intermediaries to use their _own_ version. And RFC2616 explicitly requires the same, http://tools.ietf.org/html/rfc2616#section-3.1: Due to interoperability problems with HTTP/1.0 proxies discovered since the publication of RFC 2068[33], caching proxies MUST, gateways MAY, and tunnels MUST NOT upgrade the request to the highest version they support. That is, in a request/response chain like this: client -> proxy1 -> proxy2 -> nginx If proxy1 supports only HTTP/1.0, but proxy2 supports HTTP/1.1, nginx will see an HTTP/1.1 request. -- Maxim Dounin http://nginx.org/ From francis at daoine.org Wed Mar 25 22:33:49 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 25 Mar 2015 22:33:49 +0000 Subject: How to apply concurrent connection limit ? 
In-Reply-To: References: <1704850.4rjl6tooz4@mihaduk-laptop> Message-ID: <20150325223349.GQ29618@daoine.org> On Tue, Mar 24, 2015 at 08:53:34AM +0530, Makailol Charls wrote: Hi there, > As I mentioned in previous email, we also need to exclude some IPs from > this connection limit so would you provide some example of IP whitelisting? http://mailman.nginx.org/pipermail/nginx/2012-July/034790.html linked from last December's http://mailman.nginx.org/pipermail/nginx/2014-December/046207.html There is a bit more background in each thread there. f -- Francis Daly francis at daoine.org From kipcoul at gmail.com Thu Mar 26 12:12:34 2015 From: kipcoul at gmail.com (Kip Coul) Date: Thu, 26 Mar 2015 13:12:34 +0100 Subject: Transform POST request to GET and pass body in URL Message-ID: Hello, I am using Varnish as a cache and reverse-proxy to distribute requests between different backend workers. The workers expect some parameters that can be passed either through GET or POST. The way Varnish works is by caching and distributing requests based on the URL. So all GET requests are fine, because the parameters are in the URL. However, POST requests are not, because parameters are within the body, so requests are not cached and are distributed to one backend worker only, which then gets all the load. What I'd like to do is transform POST requests to GET requests, by encoding the POST body content in the URL and passing a subsequent GET to the Varnish cache. Do you think this is possible? If so, how? Many thanks for your help! Regards, Kip From siefke_listen at web.de Thu Mar 26 13:15:44 2015 From: siefke_listen at web.de (Silvio Siefke) Date: Thu, 26 Mar 2015 14:15:44 +0100 Subject: Nginx with Mailman Message-ID: Hello, I am trying to run mailman on nginx via fcgiwrap. The socket is present on the system and has the correct permissions, but the log says it cannot be found.
server { listen 80; listen [::]:80; server_name lists; root /usr/lib64/mailman/cgi-bin; access_log /var/log/nginx/lists.access.log; error_log /var/log/nginx/lists.error.log; location = / { rewrite ^ /listinfo permanent; } location / { fastcgi_split_path_info ^(/[^/]*)(.*)$; fastcgi_pass unix:/run/lists.sock; include /etc/nginx/configuration/fastcgi.conf; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; } location /icons { alias /usr/lib64/mailman/icons; } location /archives { alias /var/lib64/mailman/archives/public; autoindex on; } } 2015/03/26 14:13:17 [crit] 13209#0: *21 connect() to unix:/run/list.sock failed (2: No such file or directory) while connecting to upstream, client: 87.161.141.92, server: lists.local, request: "GET /listinfo HTTP/1.1", upstream: "fastcgi://unix:/run/lists.sock:", host: "lists.local" ks3374456 nginx # ls -l /run | grep lists srwxr-xr-x 1 nginx nginx 0 26. Mär 13:01 lists.sock-1 Does anyone have an idea what is going wrong? Thank You & Nice Day Silvio -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguelmclara at gmail.com Thu Mar 26 13:23:54 2015 From: miguelmclara at gmail.com (Miguel Clara) Date: Thu, 26 Mar 2015 13:23:54 +0000 Subject: Nginx with Mailman In-Reply-To: References: Message-ID: <7B591F2C-E83F-45AD-9C67-3BC9217E0953@gmail.com> On March 26, 2015 1:15:44 PM WET, Silvio Siefke wrote: >Hello, > >i try to run mailman on nginx over fcgiwrap. If you can I would suggest uWsgi module instead of fcgiwrap. The sock is present on >system >and has correct rights, but >log say me can not find.
> >server { >listen 80; >listen [::]:80; >server_name lists; >root /usr/lib64/mailman/cgi-bin; > >access_log /var/log/nginx/lists.access.log; >error_log /var/log/nginx/lists.error.log; > > location = / { > rewrite ^ /listinfo permanent; > } > > location / { > fastcgi_split_path_info ^(/[^/]*)(.*)$; > fastcgi_pass unix:/run/lists.sock; > include /etc/nginx/configuration/fastcgi.conf; > fastcgi_param PATH_INFO $fastcgi_path_info; > fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; > } > > location /icons { > alias /usr/lib64/mailman/icons; > } > > location /archives { > alias /var/lib64/mailman/archives/public; > autoindex on; > } >} > > > >2015/03/26 14:13:17 [crit] 13209#0: *21 connect() to >unix:/run/list.sock >failed (2: No such file or directory) while connecting to upstream, >client: >87.161.141.92, server: lists.local, request: "GET /listinfo HTTP/1.1", >upstream: "fastcgi://unix:/run/lists.sock:", host: "lists.local" > > >ks3374456 nginx # ls -l /run | grep lists >srwxr-xr-x 1 nginx nginx 0 26. Mär 13:01 lists.sock-1 > > >Has someone idea what goes wrong? > What user is nginx set to run as in nginx.conf? >Thank You & Nice Day >Silvio > > >------------------------------------------------------------------------ > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -- Sent from my Android device with K-9 Mail. Please excuse my brevity. From siefke_listen at web.de Thu Mar 26 14:09:10 2015 From: siefke_listen at web.de (Silvio Siefke) Date: Thu, 26 Mar 2015 15:09:10 +0100 Subject: Nginx with Mailman In-Reply-To: <7B591F2C-E83F-45AD-9C67-3BC9217E0953@gmail.com> References: <7B591F2C-E83F-45AD-9C67-3BC9217E0953@gmail.com> Message-ID: Yes, in the nginx config the user is nginx. Thank You & Nice Day 2015-03-26 14:23 GMT+01:00 Miguel Clara : > > On March 26, 2015 1:15:44 PM WET, Silvio Siefke > wrote: > >Hello, > > > >i try to run mailman on nginx over fcgiwrap.
> > If you can I would suggest uWsgi module instead of fcgiwrap. > > The sock is present on > >system > >and has correct rights, but > >log say me can not find. > > > >server { > >listen 80; > >listen [::]:80; > >server_name lists; > >root /usr/lib64/mailman/cgi-bin; > > > >access_log /var/log/nginx/lists.access.log; > >error_log /var/log/nginx/lists.error.log; > > > > location = / { > > rewrite ^ /listinfo permanent; > > } > > > > location / { > > fastcgi_split_path_info ^(/[^/]*)(.*)$; > > fastcgi_pass unix:/run/lists.sock; > > include /etc/nginx/configuration/fastcgi.conf; > > fastcgi_param PATH_INFO $fastcgi_path_info; > > fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; > > } > > > > location /icons { > > alias /usr/lib64/mailman/icons; > > } > > > > location /archives { > > alias /var/lib64/mailman/archives/public; > > autoindex on; > > } > >} > > > > > > > >2015/03/26 14:13:17 [crit] 13209#0: *21 connect() to > >unix:/run/list.sock > >failed (2: No such file or directory) while connecting to upstream, > >client: > >87.161.141.92, server: lists.local, request: "GET /listinfo HTTP/1.1", > >upstream: "fastcgi://unix:/run/lists.sock:", host: "lists.local" > > > > > >ks3374456 nginx # ls -l /run | grep lists > >srwxr-xr-x 1 nginx nginx 0 26. Mär 13:01 lists.sock-1 > > > > > >Has someone idea what goes wrong? > > > > Whats the user nginx is set run use on nginx. conf? > > >Thank You & Nice Day > >Silvio > > > > > >------------------------------------------------------------------------ > > > >_______________________________________________ > >nginx mailing list > >nginx at nginx.org > >http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Sent from my Android device with K-9 Mail. Please excuse my brevity. > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Thu Mar 26 14:23:11 2015 From: nginx-forum at nginx.us (ender) Date: Thu, 26 Mar 2015 10:23:11 -0400 Subject: Request_time is always 0.000 Message-ID: <0d46c9b333d89b7a6ae4092b720cdefb.NginxMailingListEnglish@forum.nginx.org> Hello, I need to log transaction time, so I simply added $request_time to my log_format directive. However, the value in the access.log is always 0.000, even if the server is under heavy load (siege shows transaction times up to 30 sec). This is just a basic server with no more than one page for testing. Nginx is running under OpenBSD 5.2; I tried both 1.0.15 (the default package) and 1.7.6 built from the source on the nginx site. What am I missing? Thanks, Andrea Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257699,257699#msg-257699 From francis at daoine.org Thu Mar 26 16:27:17 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 26 Mar 2015 16:27:17 +0000 Subject: Nginx with Mailman In-Reply-To: References: Message-ID: <20150326162717.GR29618@daoine.org> On Thu, Mar 26, 2015 at 02:15:44PM +0100, Silvio Siefke wrote: Hi there, > i try to run mailman on nginx over fcgiwrap. > 2015/03/26 14:13:17 [crit] 13209#0: *21 connect() to unix:/run/list.sock > failed (2: No such file or directory) while connecting to upstream, client: > 87.161.141.92, server: lists.local, request: "GET /listinfo HTTP/1.1", > upstream: "fastcgi://unix:/run/lists.sock:", host: "lists.local" When you request /listinfo, what file on your filesystem do you want your fastcgi server to process? (From the perspective of the fastcgi server, in case any kind of chrooting is involved.) What fastcgi_param does your fastcgi server (fcgiwrap) use to determine the file to process? (It is usually SCRIPT_FILENAME; but some fastcgi servers use something different.) f -- Francis Daly francis at daoine.org From reallfqq-nginx at yahoo.fr Thu Mar 26 17:50:24 2015 From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Thu, 26 Mar 2015 18:50:24 +0100 Subject: Default value of gzip_proxied In-Reply-To: <20150325213655.GD88631@mdounin.ru> References: <20150321142132.GP88631@mdounin.ru> <20150322013116.GU88631@mdounin.ru> <20150322170654.GX88631@mdounin.ru> <20150324193138.GA88631@mdounin.ru> <20150325213655.GD88631@mdounin.ru> Message-ID: Hello Maxim, Thanks for taking the time to explain it all! It seems HTTP/1.0 interoperability is seriously flawed... Anyhow, now I understand nginx' default behavior, which makes sense. Our needs are very specific, since nginx is hidden behind an internal cache, but the general case, real front-end, is safe. :o) Thanks again, --- *B. R.* On Wed, Mar 25, 2015 at 10:36 PM, Maxim Dounin wrote: > Hello! > > On Tue, Mar 24, 2015 at 11:59:24PM +0100, B.R. wrote: > > > > > The gzip_proxied > > > > < > http://nginx.org/en/docs/http/ngx_http_gzip_module.html#gzip_proxied> > > > > default value is set to honor the HTTP/1.0 protocol (which does not > have > > > > the Vary header and thus is unable to cache different versions of a > > > > document) in some proxies. > > > > > > You are still misunderstanding things. It's one of the two > > > possible approaches to handle things even if we forget about > > > HTTP/1.0 completely. > > > > > > > Well the 2 only possible approaches in 1.0 are to send compressed or > > uncompressed data. Client supporting compressed version > > > > will understand the uncompressed one but the reverse might not be true. > > So if you are not able to make the answer being served from cache to vary > > (which is the case in 1.0), you are actually stuck with a single option, > > the only common denominator: no compression at all. Right? > > Yes, the only option if we care about HTTP/1.0 is to avoid > compression for proxies.
> > > > However, the gzip_http_version > > > > < > > > > http://nginx.org/en/docs/http/ngx_http_gzip_module.html#gzip_http_version> > > > > default value is set so that only HTTP/1.1 requests are being > > > compressed... > > > > Thus with the default setting it is impossible to compress requests > > > > advertising HTTP/1.0. > > > > > > > > The RFC > > > > < > > > > http://tools.ietf.org/html/draft-ietf-httpbis-p1-messaging-14#section-2.5> > > > > dictates: > > > > > > > > Intermediaries that process HTTP messages (i.e., all > intermediaries > > > > other than those acting as a tunnel) MUST send their own > HTTP-Version > > > > in forwarded messages. In other words, they MUST NOT blindly > forward > > > > the first line of an HTTP message without ensuring that the > protocol > > > > version matches what the intermediary understands, and is at least > > > > conditionally compliant to, for both the receiving and sending of > > > > messages. > > > > > > As you can see from the paragraph you quoted, nginx only knows > > > HTTP version of the intermediary it got the request from. That > > > is, there is no guarantee that there are no HTTP/1.0 proxies along > > > the request/response chain. > > > > > > > The text I quoted means that at the end of a chain of intermediaries, > you > > are ensured that you will end up with the greatest common denominator, i.e. > > if a single element of the intermediaries chain does not handle 1.1, it > is > > required to forward the request with a 1.0 version header, which will > then > > be left untouched by following intermediaries (as 1.0 is the smallest > > version available). > > An intermediary seeing a 1.1 request coming in but not supporting that > > version is required to step down to a version it understands, meaning > 1.0. > > It should not forward 1.1. > > > > If nginx sees 1.1 coming, it is my understanding that every intermediary > > supports *at least* 1.1, whatever number of intermediaries we are talking > > about.
> > No. The text ensures that nginx will see HTTP/1.0 if the last > proxy doesn't understand HTTP/1.1. There is no requirement to > preserve supported versions untouched. Moreover, first sentence > requires intermediaries to use their _own_ version. And RFC2616 > explicitly requires the same, > http://tools.ietf.org/html/rfc2616#section-3.1: > > Due to interoperability problems with HTTP/1.0 proxies discovered > since the publication of RFC 2068[33], caching proxies MUST, gateways > MAY, and tunnels MUST NOT upgrade the request to the highest version > they support. > > That is, in a request/response chain like this: > > client -> proxy1 -> proxy2 -> nginx > > If proxy1 supports only HTTP/1.0, but proxy2 supports HTTP/1.1, > nginx will see an HTTP/1.1 request. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Mar 26 18:11:25 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Thu, 26 Mar 2015 14:11:25 -0400 Subject: nginx - fastcgi / life Message-ID: <163c927f6ec535cb4923aecd13041d9e.NginxMailingListEnglish@forum.nginx.org> Hi, FASTCGI is 'built in' NGINX. Can someone from the NGINX organization confirm that there is no plan to retire the FASTCGI support in NGINX? Thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257704,257704#msg-257704 From nginx-forum at nginx.us Thu Mar 26 18:41:39 2015 From: nginx-forum at nginx.us (ankneo) Date: Thu, 26 Mar 2015 14:41:39 -0400 Subject: Intermittent SSL Handshake Errors In-Reply-To: <7757d25fc59ff89f5e7f6d46f9f29261.NginxMailingListEnglish@forum.nginx.org> References: <3841706c7d09ea19e6b3baeb9391b66f.NginxMailingListEnglish@forum.nginx.org> <7757d25fc59ff89f5e7f6d46f9f29261.NginxMailingListEnglish@forum.nginx.org> Message-ID: That surely helps. 
So as of now the only way to resolve the issue is going back to the u12 version of libssl? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256373,257705#msg-257705 From lists-nginx at swsystem.co.uk Thu Mar 26 19:05:46 2015 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Thu, 26 Mar 2015 19:05:46 +0000 Subject: Nginx with Mailman In-Reply-To: References: Message-ID: <5514588A.5010408@swsystem.co.uk> There seems to be a naming issue for the socket. nginx is configured to use /run/lists.sock, yet your ls shows lists.sock-1. Steve. On 26/03/2015 13:15, Silvio Siefke wrote: > Hello, > > i try to run mailman on nginx over fcgiwrap. The sock is present on > system and has correct rights, but > log say me can not find. > > > server { > listen 80; > listen [::]:80; > server_name lists; > root /usr/lib64/mailman/cgi-bin; > > access_log /var/log/nginx/lists.access.log; > error_log /var/log/nginx/lists.error.log; > > location = / { > rewrite ^ /listinfo permanent; > } > > location / { > fastcgi_split_path_info ^(/[^/]*)(.*)$; > fastcgi_pass unix:/run/lists.sock; > include /etc/nginx/configuration/fastcgi.conf; > fastcgi_param PATH_INFO $fastcgi_path_info; > fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; > } > > location /icons { > alias /usr/lib64/mailman/icons; > } > > location /archives { > alias /var/lib64/mailman/archives/public; > autoindex on; > } > } > > > > 2015/03/26 14:13:17 [crit] 13209#0: *21 connect() to > unix:/run/list.sock failed (2: No such file or directory) while > connecting to upstream, client: 87.161.141.92, server: lists.local, > request: "GET /listinfo HTTP/1.1", upstream: > "fastcgi://unix:/run/lists.sock:", host: "lists.local" > > > ks3374456 nginx # ls -l /run | grep lists > srwxr-xr-x 1 nginx nginx 0 26. Mär 13:01 lists.sock-1 > > > Has someone idea what goes wrong?
> > Thank You & Nice Day > Silvio > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From semenukha at gmail.com Thu Mar 26 19:35:27 2015 From: semenukha at gmail.com (Styopa Semenukha) Date: Thu, 26 Mar 2015 15:35:27 -0400 Subject: Nginx with Mailman In-Reply-To: References: Message-ID: <3078087.bqCMP2xp4N@tornado> Hi, Your config refers to the file "lists.sock" but your error log complains of "list.sock". Looks like your configuration has changed since Nginx read it. Stop the Nginx service and ensure there's no other Nginx process running (e.g. pgrep nginx). Then start a clean instance of the service. On Thursday, March 26, 2015 02:15:44 PM Silvio Siefke wrote: > Hello, > > i try to run mailman on nginx over fcgiwrap. The sock is present on system > and has correct rights, but > log say me can not find. > > > server { > listen 80; > listen [::]:80; > server_name lists; > root /usr/lib64/mailman/cgi-bin; > > access_log /var/log/nginx/lists.access.log; > error_log /var/log/nginx/lists.error.log; > > location = / { > rewrite ^ /listinfo permanent; > } > > location / { > fastcgi_split_path_info ^(/[^/]*)(.*)$; > fastcgi_pass unix:/run/lists.sock; > include /etc/nginx/configuration/fastcgi.conf; > fastcgi_param PATH_INFO $fastcgi_path_info; > fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; > } > > location /icons { > alias /usr/lib64/mailman/icons; > } > > location /archives { > alias /var/lib64/mailman/archives/public; > autoindex on; > } > } > > > > 2015/03/26 14:13:17 [crit] 13209#0: *21 connect() to unix:/run/list.sock > failed (2: No such file or directory) while connecting to upstream, client: > 87.161.141.92, server: lists.local, request: "GET /listinfo HTTP/1.1", > upstream: "fastcgi://unix:/run/lists.sock:", host: "lists.local" > > > ks3374456 nginx # ls -l /run | grep lists
> srwxr-xr-x 1 nginx nginx 0 26. Mär 13:01 lists.sock-1 > > > Has someone idea what goes wrong? > > Thank You & Nice Day > Silvio -- Best regards, Styopa Semenukha. From cyrus_the_great at riseup.net Fri Mar 27 07:34:27 2015 From: cyrus_the_great at riseup.net (Cyrus) Date: Fri, 27 Mar 2015 07:34:27 +0000 Subject: Using naxsi as a "circuit breaker" Message-ID: <55150803.1060304@riseup.net> I only know the basics of naxsi, but bear with me. I want to automatically have virtual hosts blocked when they are getting too many new requests. I can't find information on solving this particular problem with it. It might be that naxsi isn't even the best solution. I want to define virtual hosts and how many requests they can have in a second, or some other unit of time. It would be useful because I run a shared webserver, with quite a few people using it. I had issues recently with a site leaping into popularity. This caused resource constraints and lots of 500 internal errors, breaking the server for other customers. This was not a DDOS attack. From nginx-forum at nginx.us Fri Mar 27 08:03:31 2015 From: nginx-forum at nginx.us (mex) Date: Fri, 27 Mar 2015 04:03:31 -0400 Subject: Using naxsi as a "circuit breaker" In-Reply-To: <55150803.1060304@riseup.net> References: <55150803.1060304@riseup.net> Message-ID: <0d19505e903043977319c719666f3933.NginxMailingListEnglish@forum.nginx.org> Hello, what does naxsi have to do with it?
You probably wanted to talk about nginx; naxsi is a 3rd-party module extending nginx with WAF features. For your problem you might want to check http://nginx.org/en/docs/http/ngx_http_limit_req_module.html cheers, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257710,257711#msg-257711 From nginx-forum at nginx.us Fri Mar 27 08:21:28 2015 From: nginx-forum at nginx.us (datanasov) Date: Fri, 27 Mar 2015 04:21:28 -0400 Subject: Parse JSON POST request into nginx variable In-Reply-To: <9b0283fab62b8173a9f29b732e9609c9.NginxMailingListEnglish@forum.nginx.org> References: <9b0283fab62b8173a9f29b732e9609c9.NginxMailingListEnglish@forum.nginx.org> Message-ID: justink101 Wrote: ------------------------------------------------------- > How can I read a POST request body which is JSON and get a property? > I need to read a property and use it as a variable in proxy_pass. > > Pseudo code: > > $post_request_body = '{"account": "test.mydomain.com", > "more-stuff": "here"}'; > // I want to get > $account = "test.mydomain.com"; > proxy_pass $account/rest/of/url/here; Hello, I've written a simple configuration example with a Lua script that processes a posted JSON body and takes a different action based on key/value; you can find it here: http://www.energy-bg.org/processing-json-with-nginx-and-lua/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250864,257712#msg-257712 From shahzaib.cb at gmail.com Fri Mar 27 09:44:40 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Fri, 27 Mar 2015 14:44:40 +0500 Subject: Internal Server Error !! In-Reply-To: References: Message-ID: Hi, Just want to inform that we're still facing this issue with mp4. Regards. Shahzaib On Mon, Mar 23, 2015 at 7:10 PM, shahzaib shahzaib wrote: > Hi, > > Nginx is logging an mp4-related error intermittently.
Following is the log: > > 2015/03/23 19:01:53 [crit] 8671#0: *782950 pread() > "/tunefiles/storage17/files/videos/2014/05/07/13994800482e2b0-360.mp4" > failed (22: Invalid argument), client: 182.178.204.162, server: > storage17.domain.com, request: "GET > /files/videos/2014/05/07/13994800482e2b0-360.mp4?start=31.832 HTTP/1.1", > host: "storage17.domain.com", referrer: " > http://static.tune.pk/tune_player/tune.swf?v2" > > We've changed the nginx-1.6.2 banner as follows: > > nginx version: tune-webserver/1.0.4 > built by gcc 4.7.2 (Debian 4.7.2-5) > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > --lock-path=/var/run/nginx.lock --user=nginx --group=nginx > --with-http_flv_module --with-http_mp4_module --with-file-aio --with-ipv6 > --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions > -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' > --with-ld-opt='-L /usr/lib/x86_64-linux-gnu' > > Could anyone please assist me regarding this issue? > > Regards. > Shahzaib > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From list_nginx at bluerosetech.com Fri Mar 27 21:31:40 2015 From: list_nginx at bluerosetech.com (Mel Pilgrim) Date: Fri, 27 Mar 2015 14:31:40 -0700 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? In-Reply-To: <20150317144335.GS88631@mdounin.ru> References: <20150317134958.GO88631@mdounin.ru> <550833A5.8010000@mostertman.org> <20150317144335.GS88631@mdounin.ru> Message-ID: <5515CC3C.2070509@bluerosetech.com> On 2015-03-17 07:43, Maxim Dounin wrote: > - As previously said, there are no plans to support neither HTTP/2 > nor SPDY for upstream connections. Which of the two will be supported, or will both be supported, and is there any code in development we can access to test out and provide feedback?
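For context on the SPDY support discussed in this thread: nginx 1.7.x already shipped SPDY/3.1 for client-facing connections when built with the spdy module, enabled per listen socket (the same mechanism appears in a configuration later in this digest). A minimal sketch follows; the server name and certificate paths are placeholders, not taken from any message here:

```nginx
# Minimal sketch: SPDY/3.1 on an nginx 1.7.x binary built with
# --with-http_spdy_module. server_name and certificate paths are
# placeholder values.
server {
    listen 443 ssl spdy;     # 'spdy' is added alongside 'ssl' on the listen socket
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}
```

The `spdy` listen parameter was later superseded by `http2` when nginx gained HTTP/2 support (nginx 1.9.5).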
From nginx-forum at nginx.us Sat Mar 28 00:38:03 2015 From: nginx-forum at nginx.us (dominus.ceo) Date: Fri, 27 Mar 2015 20:38:03 -0400 Subject: Nginx - SMTP no authentication In-Reply-To: References: <20091010102854.GJ79672@mdounin.ru> Message-ID: I'm in the same situation. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,12609,257723#msg-257723 From nginx-forum at nginx.us Sat Mar 28 00:39:53 2015 From: nginx-forum at nginx.us (carnagel) Date: Fri, 27 Mar 2015 20:39:53 -0400 Subject: Rewrites needed Message-ID: <5460ad6462b56a48167604f190680057.NginxMailingListEnglish@forum.nginx.org> Hi - I need some rewrite help please. http://mysite.com/?page=2 to http://mysite.com/page/2/ and http://mysite.com/?s=searchterm&page=2 to http://mysite.com/page/2/?s=searchterm Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257724,257724#msg-257724 From lists at ruby-forum.com Sat Mar 28 08:22:47 2015 From: lists at ruby-forum.com (Qiang Li) Date: Sat, 28 Mar 2015 09:22:47 +0100 Subject: passenger logging agent error in deployment with passenger+nginx Message-ID: I installed passenger+nginx on CentOS 6.5 by: gem install passenger passenger-install-nginx-module everything is ok during the installation, then I run: sudo ~/nginx/sbin/nginx and get the error message:
nginx: [alert] Unable to start the Phusion Passenger watchdog because it encountered the following error during startup: Unable to start the Phusion Passenger logging agent: it seems to have crashed during startup for an unknown reason, with exit code 1 (-1: Unknown error) Also, when I run: sudo ~/nginx/sbin/nginx -t I get the same message: nginx: the configuration file /home/lq/nginx/conf/nginx.conf syntax is ok nginx: [alert] Unable to start the Phusion Passenger watchdog because it encountered the following error during startup: Unable to start the Phusion Passenger logging agent: it seems to have crashed during startup for an unknown reason, with exit code 1 (-1: Unknown error) nginx: configuration file /home/lq/nginx/conf/nginx.conf test is successful If I run nginx or passenger standalone, both of them are ok (no error when I run passenger). But it seems they can't work together. Can anyone tell me something about it? thanks! My system is: ruby 2.1.0 rails 4.2.1 gem 2.4.6 passenger 5.0.5 nginx 1.6.2 -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Sat Mar 28 20:27:44 2015 From: nginx-forum at nginx.us (George) Date: Sat, 28 Mar 2015 16:27:44 -0400 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? In-Reply-To: References: Message-ID: Definitely looking forward to Nginx's implementation of HTTP/2 as it's one of the missing pieces in comparison tests with other HTTP/2 enabled web servers like h2o and OpenLiteSpeed https://h2ohttp2.centminmod.com/webpagetests1.html :) Do some folks have access to experimental builds of Nginx with HTTP/2 already? As I have seen 2 sites so far which show Nginx headers but being served via h2-14 in Chrome/Opera 28 so far.
cheers George Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256561,257730#msg-257730 From nginx-forum at nginx.us Sun Mar 29 05:51:15 2015 From: nginx-forum at nginx.us (cubicdaiya) Date: Sun, 29 Mar 2015 01:51:15 -0400 Subject: about proxy_request_buffering Message-ID: <95a28ed575a6583da2bf9f5e7f383a9c.NginxMailingListEnglish@forum.nginx.org> Hello. Though I'm trying to apply 'proxy_request_buffering off;' for unbuffered uploading, the buffering still seems to be enabled, judging from error.log. # nginx.conf server { listen 443 ssl spdy; server_name example.com; location /upload { proxy_request_buffering off; proxy_pass http://upload_backend; } } # error.log 2015/03/29 14:02:20 [warn] 6965#0: *1 a client request body is buffered to a temporary file /etc/nginx/client_body_temp/0000000001, client: x.x.x.x, server: example.com, request: "POST /upload HTTP/1.1", host: "example.com" The warning above is not output when SPDY is not enabled. Is proxy_request_buffering always enabled when SPDY is enabled? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257731,257731#msg-257731 From nginx-forum at nginx.us Sun Mar 29 12:15:08 2015 From: nginx-forum at nginx.us (degoya) Date: Sun, 29 Mar 2015 08:15:08 -0400 Subject: 502 Gateway Timeout with error exited on signal 7 (SIGBUS) after clearing cache (nginx with php5-fpm) Message-ID: <31eb2929297ce19b68ecf23e6768100d.NginxMailingListEnglish@forum.nginx.org> Hello, i've got the problem that when i load a page multiple times (i.e. 10 tabs with autoreload), only some of the pages load; others get 502. it doesn't depend on the content of the page. when i open the same page in all tabs it's the same situation. about 3 out of 10 get 502 errors. The 502 error comes directly when i force the autoload for all pages, so there is no timeout causing the error, because it only takes less than a second to give the 502. I noticed that this only happens when the cache is cleared in the cms and needs a rebuild on pageload.
if the page is cached i don't get any 502 errors anymore. When i disable the caching in the modx backend i get the same errors as with manually clearing the cache. installed on an openVZ VPS with 24GB Ram and 12 Cores, also tested on a physical machine with 32GB ram and 8 cores. both managed with ISPconfig3. PHP 5.4.39-1~dotdeb.1 (fpm-fcgi) (built: Mar 22 2015 08:08:54) nginx/1.6.2 mysql The page is done with Modx CMS Framework 2.3.3 Error Log: 2015/03/26 00:09:16 [error] 27345#0: *231 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 178.203.23.132, ser$ 2015/03/26 00:09:16 [error] 27345#0: *229 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 178.203.23.132, ser$ 2015/03/26 00:09:16 [error] 27345#0: *234 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 178.203.23.132, ser$ PHP-FPM log [25-Mar-2015 23:54:30.875237] DEBUG: pid 28694, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool web91] currently 0 active children, 6 spare children [25-Mar-2015 23:54:30.875247] DEBUG: pid 28694, fpm_pctl_perform_idle_server_maintenance(), line 379: [pool web32] currently 0 active children, 2 spare children, 2 running children. Spawning rate 1 [25-Mar-2015 23:54:30.875257] DEBUG: pid 28694, fpm_pctl_perform_idle_server_maintenance(), line 379: [pool apps] currently 0 active children, 2 spare children, 2 running children.
Spawning rate 1 [25-Mar-2015 23:54:31.398167] DEBUG: pid 28694, fpm_event_loop(), line 419: event module triggered 1 events [25-Mar-2015 23:54:31.487717] DEBUG: pid 28694, fpm_got_signal(), line 76: received SIGCHLD [25-Mar-2015 23:54:31.487757] WARNING: pid 28694, fpm_children_bury(), line 252: [pool web91] child 28735 exited on signal 7 (SIGBUS) after 57.721563 seconds from start [25-Mar-2015 23:54:31.490246] NOTICE: pid 28694, fpm_children_make(), line 421: [pool web91] child 28783 started [25-Mar-2015 23:54:31.490269] DEBUG: pid 28694, fpm_event_loop(), line 419: event module triggered 1 events [25-Mar-2015 23:54:31.587862] DEBUG: pid 28694, fpm_got_signal(), line 76: received SIGCHLD [25-Mar-2015 23:54:31.587906] WARNING: pid 28694, fpm_children_bury(), line 252: [pool web91] child 28740 exited on signal 7 (SIGBUS) after 47.234370 seconds from start [25-Mar-2015 23:54:31.590430] NOTICE: pid 28694, fpm_children_make(), line 421: [pool web91] child 28784 started [25-Mar-2015 23:54:31.590460] WARNING: pid 28694, fpm_children_bury(), line 252: [pool web91] child 28741 exited on signal 7 (SIGBUS) after 42.284682 seconds from start nginx.conf tcp_nopush on; tcp_nodelay on; types_hash_max_size 2048; server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; server_tokens off; sendfile on; keepalive_timeout 65; charset utf-8; client_max_body_size 64m; client_body_buffer_size 128k; client_body_timeout 300s; large_client_header_buffers 4 16k; server_names_hash_bucket_size 512; server_names_hash_max_size 2048; fastcgi_buffers 256 16k; fastcgi_buffer_size 128k; fastcgi_read_timeout 5m; fastcgi_max_temp_file_size 0; php-fpm.conf emergency_restart_threshold = 60 emergency_restart_interval = 1m process_control_timeout = 60s rlimit_files = 65536 rlimit_core = unlimited anyone got a idea where to look for the source of the problem Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257732,257732#msg-257732 From luky-37 at 
hotmail.com Sun Mar 29 12:32:39 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Sun, 29 Mar 2015 14:32:39 +0200 Subject: 502 Gateway Timeout with error exited on signal 7 (SIGBUS) after clearing cache (nginx with php5-fpm) In-Reply-To: <31eb2929297ce19b68ecf23e6768100d.NginxMailingListEnglish@forum.nginx.org> References: <31eb2929297ce19b68ecf23e6768100d.NginxMailingListEnglish@forum.nginx.org> Message-ID: > installed on a openVZ VPS with 24GB Ram and 12 Cores also tested on a > physical machine with 32GB ram and 8 cores. both managed with ISPconfig3. > > PHP 5.4.39-1~dotdeb.1 (fpm-fcgi) (built: Mar 22 2015 08:08:54) > nginx/1.6.2 > mysql PHP crashes, report the problem to whoever is providing support for this dotdeb package or install the proper package from your distribution. Debian stable ships PHP 5.4.39, it doesn't make any sense to use dotdeb. From nginx-forum at nginx.us Sun Mar 29 13:17:18 2015 From: nginx-forum at nginx.us (degoya) Date: Sun, 29 Mar 2015 09:17:18 -0400 Subject: 502 Gateway Timeout with error exited on signal 7 (SIGBUS) after clearing cache (nginx with php5-fpm) In-Reply-To: References: Message-ID: Lukas Tribus Wrote: ------------------------------------------------------- > > installed on a openVZ VPS with 24GB Ram and 12 Cores also tested on > a > > physical machine with 32GB ram and 8 cores. both managed with > ISPconfig3. > > > > PHP 5.4.39-1~dotdeb.1 (fpm-fcgi) (built: Mar 22 2015 08:08:54) > > nginx/1.6.2 > > mysql > > PHP crashes, report the problem to whoever is providing support for > this > dotdeb package or install the proper package from your distribution. > > Debian stable ships PHP 5.4.39, it doesn't make any sense to use > dotdeb. i've removed the dotdeb php package and installed the php that is shipped by debian.
PHP 5.4.39-0+deb7u2 (fpm-fcgi) (built: Mar 25 2015 08:35:25) Copyright (c) 1997-2014 The PHP Group Zend Engine v2.4.0, Copyright (c) 1998-2014 Zend Technologies still the same result: [29-Mar-2015 15:13:55.673640] DEBUG: pid 29675, fpm_got_signal(), line 76: received SIGCHLD [29-Mar-2015 15:13:55.673690] WARNING: pid 29675, fpm_children_bury(), line 252: [pool web92] child 29711 exited on signal 7 (SIGBUS) after 169.674900 seconds from start [29-Mar-2015 15:13:55.674695] NOTICE: pid 29675, fpm_children_make(), line 421: [pool web92] child 29859 started [29-Mar-2015 15:13:55.674734] DEBUG: pid 29675, fpm_event_loop(), line 419: event module triggered 1 events [29-Mar-2015 15:13:55.715464] DEBUG: pid 29675, fpm_got_signal(), line 76: received SIGCHLD [29-Mar-2015 15:13:55.715498] WARNING: pid 29675, fpm_children_bury(), line 252: [pool web92] child 29779 exited on signal 7 (SIGBUS) after 102.643864 seconds from start [29-Mar-2015 15:13:55.716048] NOTICE: pid 29675, fpm_children_make(), line 421: [pool web92] child 29860 started [29-Mar-2015 15:13:55.716063] DEBUG: pid 29675, fpm_event_loop(), line 419: event module triggered 1 events [29-Mar-2015 15:13:56.181544] DEBUG: pid 29675, fpm_pctl_perform_idle_server_maintenance(), line 379: [pool web92] currently 5 active children, 0 spare children, 5 running children. Spawning rate 1 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257732,257734#msg-257734 From luky-37 at hotmail.com Sun Mar 29 13:29:13 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Sun, 29 Mar 2015 15:29:13 +0200 Subject: 502 Gateway Timeout with error exited on signal 7 (SIGBUS) after clearing cache (nginx with php5-fpm) In-Reply-To: References: , Message-ID: >> Debian stable ships PHP 5.4.39, it doesn't make any sense to use >> dotdeb. > > i've gemoved the dotdeb php package and installed the php that is shipped by > debian. 
Get a coredump and open a debian bug with the backtrace, here's how to do it: https://rtcamp.com/tutorials/php/core-dump-php5-fpm/ Although I'm not sure if this works in an openvz container ... Anyway, it's not an nginx problem, you would be better off talking to php people. From sarah at nginx.com Sun Mar 29 21:25:31 2015 From: sarah at nginx.com (Sarah Novotny) Date: Sun, 29 Mar 2015 14:25:31 -0700 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? In-Reply-To: References: Message-ID: <0145D419-FA77-4CCE-8F99-009274A5E2ED@nginx.com> It's awesome to see you all excited about HTTP/2. We don't have any test builds (that I know of) in the wild, so I'm not sure what you're seeing, George. As we get closer to a release, there *may* be opportunities for test builds. If you're interested in that, please send me an email offlist and I'll see what I can do. sarah > On Mar 28, 2015, at 1:27 PM, George wrote: > > Definitely looking forward to Nginx's implementation of HTTP/2 as it's one > of the missing pieces in comparison test with other HTTP/2 enabled web > servers like h2o and OpenLiteSpeed > https://h2ohttp2.centminmod.com/webpagetests1.html :) > > Do some folks have access to experimental builds of Nginx with HTTP/2 > already ? As I have seen 2 sites so far which show Nginx headers but being
>
> cheers
>
> George
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256561,257730#msg-257730
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

From nginx-forum at nginx.us Mon Mar 30 00:49:11 2015
From: nginx-forum at nginx.us (patrickshan)
Date: Sun, 29 Mar 2015 20:49:11 -0400
Subject: about proxy_request_buffering
In-Reply-To: <95a28ed575a6583da2bf9f5e7f383a9c.NginxMailingListEnglish@forum.nginx.org>
References: <95a28ed575a6583da2bf9f5e7f383a9c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <113fc02c8e7a5d1d6e14f767fc607f1c.NginxMailingListEnglish@forum.nginx.org>

Same problem here. I suspect it's automatically disabled when SPDY is enabled. I also found an issue filed for Tengine: https://github.com/alibaba/tengine/issues/444. Might be a similar issue?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257731,257740#msg-257740

From nginx-forum at nginx.us Mon Mar 30 07:14:13 2015
From: nginx-forum at nginx.us (oweng)
Date: Mon, 30 Mar 2015 03:14:13 -0400
Subject: nginx - fastcgi / life
In-Reply-To: <163c927f6ec535cb4923aecd13041d9e.NginxMailingListEnglish@forum.nginx.org>
References: <163c927f6ec535cb4923aecd13041d9e.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <51230415a4ace75f2c12e801ff0239e7.NginxMailingListEnglish@forum.nginx.org>

Hi nginxuser100,

There are no plans to retire support for FastCGI.

Best regards

Owen Garrett
owen at nginx.com

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257704,257741#msg-257741

From black.fledermaus at arcor.de Mon Mar 30 08:45:49 2015
From: black.fledermaus at arcor.de (basti)
Date: Mon, 30 Mar 2015 10:45:49 +0200
Subject: allow access to certain client addresses or use auth_basic
Message-ID: <55190D3D.1060503@arcor.de>

Hello mailing list,

is there a way to do the following in an nginx server or location config?

1. allow access to certain client addresses
2.
if the IP is not in the list, allow access via ngx_http_auth_basic_module

Thanks for any help.

Best Regards,
Basti

From reallfqq-nginx at yahoo.fr Mon Mar 30 14:08:50 2015
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 30 Mar 2015 16:08:50 +0200
Subject: 502 Gateway Timeout with error exited on signal 7 (SIGBUS) after clearing cache (nginx with php5-fpm)
In-Reply-To:
References:
Message-ID:

*As Lukas rightly pointed out, you are definitely in the wrong place to seek help for PHP-FPM.*

SIGBUS indicates a problem with the way the worker deals with memory. It is highly likely this is caused by some faulty module. http://serverfault.com/a/474725

One of the names that immediately comes to mind is APC. Try starting PHP-FPM with as many extensions as possible removed, and if you use APC, think about replacing it with APCu (user data cache) and Zend OPcache (opcode cache). If this does not help, please ask PHP-FPM-related communities.
---
*B. R.*

On Sun, Mar 29, 2015 at 3:29 PM, Lukas Tribus wrote:

> >> Debian stable ships PHP 5.4.39, it doesn't make any sense to use
> >> dotdeb.
> >
> > I've removed the dotdeb php package and installed the php that is
> shipped by
> > debian.
>
> Get a coredump and open a Debian bug with the backtrace; here's how to
> do it:
> https://rtcamp.com/tutorials/php/core-dump-php5-fpm/
>
> Although I'm not sure if this works in an OpenVZ container ...
>
> Anyway, it's not an nginx problem; you would be better off talking to
> PHP people.
>
-------------- next part --------------
An HTML attachment was scrubbed...
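The core-dump route Lukas points to usually boils down to two settings. A hedged sketch follows; the pool file path and pool name are assumptions for this particular Debian setup, so check your own distribution's layout:

```ini
; hypothetical pool file: /etc/php5/fpm/pool.d/web92.conf
; let a crashing worker write a core file of any size
rlimit_core = unlimited
```

You would also point the kernel at a writable location, e.g. `sysctl kernel.core_pattern=/tmp/core-%e-%p`, restart PHP-FPM, and after the next SIGBUS open the core in gdb (something like `gdb /usr/sbin/php5-fpm /tmp/core-php5-fpm-NNNN`, then `bt`) to get the backtrace a Debian bug report needs.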
URL:

From kdo88 at free.fr Mon Mar 30 14:39:03 2015
From: kdo88 at free.fr (kdo88 free)
Date: Mon, 30 Mar 2015 16:39:03 +0200
Subject: experimental build of Nginx with HTTP/2
Message-ID: <55196007.6070907@free.fr>

Hello Sarah Novotny,

Like George in the Nginx Forum message (see http://forum.nginx.org/read.php?2,256561,257737#msg-257737), I'm excited about HTTP/2 and I'm interested in an experimental build too. May I have access to it?

Thanks in advance

Best regards

kdo88
forum.nginx.org registered user

---
This email has been checked for viruses by Avast antivirus software.
http://www.avast.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lists at ruby-forum.com Mon Mar 30 17:11:26 2015
From: lists at ruby-forum.com (Mark Asysteo)
Date: Mon, 30 Mar 2015 19:11:26 +0200
Subject: NGINX gateway problem
Message-ID: <59a7f9b872675023d3aac9fce3a8928c@ruby-forum.com>

Hey! I have a problem with my Ruby site http://www.asysteo.pl. It was working fine, but today I did a PHP update and got this error. I double-checked all the port 9000 connections, and they are still active. Any smart ideas from the gurus on this forum? ;)

--
Posted via http://www.ruby-forum.com/.

From peter.volkov at gmail.com Mon Mar 30 17:26:13 2015
From: peter.volkov at gmail.com (Peter Volkov)
Date: Mon, 30 Mar 2015 20:26:13 +0300
Subject: try_files is broken with geoip?
Message-ID:

Hi! We are experiencing a problem: if we use a geoip variable inside a location, try_files does not work and only the root location is checked for files.
So the problem is reproducible with the following virtual server configuration (full configuration in attachment):

==================
geo $dontsecure {
    default 0;
    172.16.11.0/24 1;
    127.0.0.0/24 1;
}

server {
    listen 172.16.11.31 default_server;
    listen 127.0.0.1 default_server;
    access_log /var/log/nginx/production-dvr.access_log main;
    error_log /var/log/nginx/production-dvr.error_log debug;

    location ~ /([^/]*?)(_lang_[0-9])?/.*\.ts {
        if ($dontsecure) { set $reject_access 0; }   # <-- problematic lines

        root /tmp/test/;
        try_files /dir1$uri /dir2$uri =404;
    }
}
==================

/tmp/test has the following files:
/tmp/test/dir2/test/1.ts - file
/tmp/test/dir1/test - directory

Now if we request http://172.16.11.31/test/1.ts, nginx returns 404 until I comment out the marked "if" lines. In the debug log (debug-log-nginx.txt in attachment) it's clear that the try_files directive is doing nothing and only /tmp/test/test/1.ts is looked up. If I comment out those lines, nginx works as expected and looks up /tmp/test/dir1/test/1.ts or, if that does not exist, the /tmp/test/dir2/test/1.ts file.
nginx version: nginx/1.7.11 TLS SNI support enabled configure arguments: --prefix=/usr --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log --pid-path=/run/nginx.pid --lock-path=/run/lock/nginx.lock --with-cc-opt=-I/usr/include --with-ld-opt=-L/usr/lib64 --http-log-path=/var/log/nginx/access_log --http-client-body-temp-path=/var/lib/nginx/tmp/client --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --with-debug --with-ipv6 --with-pcre --with-http_flv_module --with-http_geoip_module --with-http_mp4_module --with-http_secure_link_module --with-http_realip_module --add-module=external_module/nginx-rtmp-module-1.1.7 --add-module=external_module/nginx-push-stream-module-0.4.1 --with-http_ssl_module --without-mail_imap_module --without-mail_pop3_module --without-mail_smtp_module --user=nginx --group=nginx -- Peter. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- 2015/03/30 19:40:27 [debug] 2758#0: *363781 http process request line 2015/03/30 19:40:27 [debug] 2758#0: *363781 http request line: "GET /test/1.ts HTTP/1.1" 2015/03/30 19:40:27 [debug] 2758#0: *363781 http uri: "/test/1.ts" 2015/03/30 19:40:27 [debug] 2758#0: *363781 http args: "" 2015/03/30 19:40:27 [debug] 2758#0: *363781 http exten: "ts" 2015/03/30 19:40:27 [debug] 2758#0: *363781 http process request header line 2015/03/30 19:40:27 [debug] 2758#0: *363781 http header: "Host: 172.16.11.31" 2015/03/30 19:40:27 [debug] 2758#0: *363781 http header: "User-Agent: HTTPie/0.9.2" 2015/03/30 19:40:27 [debug] 2758#0: *363781 http header: "Accept-Encoding: gzip, deflate" 2015/03/30 19:40:27 [debug] 2758#0: *363781 http header: "Accept: */*" 2015/03/30 19:40:27 [debug] 2758#0: *363781 http header: "Connection: keep-alive" 2015/03/30 19:40:27 [debug] 2758#0: *363781 http header done 2015/03/30 19:40:27 [debug] 2758#0: *363781 event timer del: 895: 1427734227110 2015/03/30 19:40:27 [debug] 2758#0: *363781 generic phase: 0 2015/03/30 19:40:27 [debug] 2758#0: *363781 rewrite phase: 1 2015/03/30 19:40:27 [debug] 2758#0: *363781 test location: ~ "/([^/]*?)(_lang_[0-9])?/.*\.ts" 2015/03/30 19:40:27 [debug] 2758#0: *363781 using configuration "/([^/]*?)(_lang_[0-9])?/.*\.ts" 2015/03/30 19:40:27 [debug] 2758#0: *363781 http cl:-1 max:1048576 2015/03/30 19:40:27 [debug] 2758#0: *363781 rewrite phase: 3 2015/03/30 19:40:27 [debug] 2758#0: *363781 posix_memalign: 00007F4406DF7650:4096 @16 2015/03/30 19:40:27 [debug] 2758#0: *363781 http script var 2015/03/30 19:40:27 [debug] 2758#0: *363781 http geo started: 172.16.11.32 2015/03/30 19:40:27 [debug] 2758#0: *363781 http geo: 1 2015/03/30 19:40:27 [debug] 2758#0: *363781 http script var: "1" 2015/03/30 19:40:27 [debug] 2758#0: *363781 http script if 2015/03/30 19:40:27 [debug] 2758#0: *363781 http script value: "0" 2015/03/30 19:40:27 [debug] 2758#0: *363781 http script set $reject_access 
2015/03/30 19:40:27 [debug] 2758#0: *363781 post rewrite phase: 4 2015/03/30 19:40:27 [debug] 2758#0: *363781 generic phase: 5 2015/03/30 19:40:27 [debug] 2758#0: *363781 generic phase: 6 2015/03/30 19:40:27 [debug] 2758#0: *363781 generic phase: 7 2015/03/30 19:40:27 [debug] 2758#0: *363781 access phase: 8 2015/03/30 19:40:27 [debug] 2758#0: *363781 access phase: 9 2015/03/30 19:40:27 [debug] 2758#0: *363781 post access phase: 10 2015/03/30 19:40:27 [debug] 2758#0: *363781 try files phase: 11 2015/03/30 19:40:27 [debug] 2758#0: *363781 content phase: 12 2015/03/30 19:40:27 [debug] 2758#0: *363781 content phase: 13 2015/03/30 19:40:27 [debug] 2758#0: *363781 content phase: 14 2015/03/30 19:40:27 [debug] 2758#0: *363781 http filename: "/tmp/test/test/1.ts" 2015/03/30 19:40:27 [debug] 2758#0: *363781 add cleanup: 00007F4406DF7610 2015/03/30 19:40:27 [error] 2758#0: *363781 open() "/tmp/test/test/1.ts" failed (2: No such file or directory), client: 172.16.11.32, server: , request: "GET /test/1.ts HTTP/1.1", host: "172.16.11.31" 2015/03/30 19:40:27 [debug] 2758#0: *363781 http finalize request: 404, "/test/1.ts?" a:1, c:1 2015/03/30 19:40:27 [debug] 2758#0: *363781 http special response: 404, "/test/1.ts?" 
2015/03/30 19:40:27 [debug] 2758#0: *363781 http set discard body 2015/03/30 19:40:27 [debug] 2758#0: *363781 HTTP/1.1 404 Not Found Server: nginx/1.7.11 Date: Mon, 30 Mar 2015 16:40:27 GMT Content-Type: text/html Content-Length: 169 Connection: keep-alive Keep-Alive: timeout=20 2015/03/30 19:40:27 [debug] 2758#0: *363781 write new buf t:1 f:0 00007F4406DF77B8, pos 00007F4406DF77B8, size: 179 file: 0, size: 0 2015/03/30 19:40:27 [debug] 2758#0: *363781 http write filter: l:0 f:0 s:179 2015/03/30 19:40:27 [debug] 2758#0: *363781 http write filter limit 0 2015/03/30 19:40:27 [debug] 2758#0: *363781 writev: 179 of 179 2015/03/30 19:40:27 [debug] 2758#0: *363781 http write filter 0000000000000000 2015/03/30 19:40:27 [debug] 2758#0: *363781 http output filter "/test/1.ts?" 2015/03/30 19:40:27 [debug] 2758#0: *363781 http copy filter: "/test/1.ts?" 2015/03/30 19:40:27 [debug] 2758#0: *363781 http postpone filter "/test/1.ts?" 00007F4406DF7890 2015/03/30 19:40:27 [debug] 2758#0: *363781 write new buf t:0 f:0 0000000000000000, pos 00007F440555A5A0, size: 116 file: 0, size: 0 2015/03/30 19:40:27 [debug] 2758#0: *363781 write new buf t:0 f:0 0000000000000000, pos 00007F440555ACE0, size: 53 file: 0, size: 0 2015/03/30 19:40:27 [debug] 2758#0: *363781 http write filter: l:1 f:0 s:169 2015/03/30 19:40:27 [debug] 2758#0: *363781 http write filter limit 0 2015/03/30 19:40:27 [debug] 2758#0: *363781 writev: 169 of 169 2015/03/30 19:40:27 [debug] 2758#0: *363781 http write filter 0000000000000000 2015/03/30 19:40:27 [debug] 2758#0: *363781 http copy filter: 0 "/test/1.ts?" 2015/03/30 19:40:27 [debug] 2758#0: *363781 http finalize request: 0, "/test/1.ts?" 
a:1, c:1 2015/03/30 19:40:27 [debug] 2758#0: *363781 set http keepalive handler 2015/03/30 19:40:27 [debug] 2758#0: *363781 http close request 2015/03/30 19:40:27 [debug] 2758#0: *363781 http log handler 2015/03/30 19:40:27 [debug] 2758#0: *363781 free: 00007F4406DF6640, unused: 0 2015/03/30 19:40:27 [debug] 2758#0: *363781 free: 00007F4406DF7650, unused: 3055 2015/03/30 19:40:27 [debug] 2758#0: *363781 free: 00007F4406DF4640 2015/03/30 19:40:27 [debug] 2758#0: *363781 hc free: 0000000000000000 0 2015/03/30 19:40:27 [debug] 2758#0: *363781 hc busy: 0000000000000000 0 2015/03/30 19:40:27 [debug] 2758#0: *363781 tcp_nodelay 2015/03/30 19:40:27 [debug] 2758#0: *363781 reusable connection: 1 2015/03/30 19:40:27 [debug] 2758#0: *363781 event timer add: 895: 75000:1427733702110 2015/03/30 19:40:27 [debug] 2758#0: *363781 post event 00007F4406CF54F0 2015/03/30 19:40:27 [debug] 2758#0: *363781 delete posted event 00007F4406CF54F0 2015/03/30 19:40:27 [debug] 2758#0: *363781 http keepalive handler 2015/03/30 19:40:27 [debug] 2758#0: *363781 malloc: 00007F4406DF4640:1024 2015/03/30 19:40:27 [debug] 2758#0: *363781 recv: fd:895 -1 of 1024 2015/03/30 19:40:27 [debug] 2758#0: *363781 recv() not ready (11: Resource temporarily unavailable) 2015/03/30 19:40:27 [debug] 2758#0: *363781 free: 00007F4406DF4640 2015/03/30 19:40:27 [debug] 2758#0: *363781 post event 00007F4406CF54F0 2015/03/30 19:40:27 [debug] 2758#0: *363781 delete posted event 00007F4406CF54F0 2015/03/30 19:40:27 [debug] 2758#0: *363781 http keepalive handler 2015/03/30 19:40:27 [debug] 2758#0: *363781 malloc: 00007F4406DF6EC0:1024 2015/03/30 19:40:27 [debug] 2758#0: *363781 recv: fd:895 0 of 1024 2015/03/30 19:40:27 [info] 2758#0: *363781 client 172.16.11.32 closed keepalive connection 2015/03/30 19:40:27 [debug] 2758#0: *363781 close http connection: 895 2015/03/30 19:40:27 [debug] 2758#0: *363781 event timer del: 895: 1427733702110 2015/03/30 19:40:27 [debug] 2758#0: *363781 reusable connection: 0 2015/03/30 
19:40:27 [debug] 2758#0: *363781 free: 00007F4406DF6EC0 2015/03/30 19:40:27 [debug] 2758#0: *363781 free: 00007F4406DF4420, unused: 8 2015/03/30 19:40:27 [debug] 2758#0: *363781 free: 00007F4406DF4530, unused: 72

From nginx-forum at nginx.us Mon Mar 30 17:39:03 2015
From: nginx-forum at nginx.us (itpp2012)
Date: Mon, 30 Mar 2015 13:39:03 -0400
Subject: NGINX gateway problem
In-Reply-To: <59a7f9b872675023d3aac9fce3a8928c@ruby-forum.com>
References: <59a7f9b872675023d3aac9fce3a8928c@ruby-forum.com>
Message-ID: <2df2eb070f6fa03445d1418c90db6a4a.NginxMailingListEnglish@forum.nginx.org>

Did you restart nginx? It might be that the upstream has marked your PHP backend as down during the update.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257752,257754#msg-257754

From nginx-forum at nginx.us Mon Mar 30 17:45:06 2015
From: nginx-forum at nginx.us (itpp2012)
Date: Mon, 30 Mar 2015 13:45:06 -0400
Subject: try_files is broken with geoip?
In-Reply-To:
References:
Message-ID: <10a3047eb546e1060cce958031442f7b.NginxMailingListEnglish@forum.nginx.org>

Peter Volkov Wrote:
[...]
root /tmp/test/;
"/tmp/test/test/1.ts" -> 404

/tmp/test has the following files:
/tmp/test/dir2/test/1.ts - file
/tmp/test/dir1/test - directory

looks up /tmp/test/dir1/test/1.ts or if it does not exist
/tmp/test/dir2/test/1.ts file.
[...]

I think you first need to sort out WHERE stuff is (root+uri), as your 404 is valid given the other paths you have mentioned.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257753,257755#msg-257755

From peter.volkov at gmail.com Mon Mar 30 19:15:59 2015
From: peter.volkov at gmail.com (Peter Volkov)
Date: Mon, 30 Mar 2015 22:15:59 +0300
Subject: try_files is broken with geoip?
In-Reply-To: <10a3047eb546e1060cce958031442f7b.NginxMailingListEnglish@forum.nginx.org>
References: <10a3047eb546e1060cce958031442f7b.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

On Mon, Mar 30, 2015 at 8:45 PM, itpp2012 wrote:

> Peter Volkov Wrote:
> [...]
> root /tmp/test/;
> "/tmp/test/test/1.ts" -> 404
>
> /tmp/test has the following files:
> /tmp/test/dir2/test/1.ts - file
> /tmp/test/dir1/test - directory
>
> looks up /tmp/test/dir1/test/1.ts or if it does not exist
> /tmp/test/dir2/test/1.ts file.
> [...]
>
> I think you first need to sort out WHERE stuff is (root+uri), as your 404 is
> valid given the other paths you have mentioned.
>

Sorry, I didn't understand your answer. Could you explain what you mean, please?

That was an example configuration, a fragment of a more complex one. We use the *try_files /dir1$uri /dir2$uri =404;* directive to look up the file first in dir1 and, if the file does not exist in dir1/test/, then look it up in dir2/test. So 1.ts could be either dir1/test/1.ts or dir2/test/1.ts. I think it's quite a valid configuration. The problem is that it does not work when combined with geoip.

So two questions are here:
1. Are there any problems with this config?
2. Are there any workarounds/fixes for this problem?

--
Peter.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From francis at daoine.org Mon Mar 30 19:18:52 2015
From: francis at daoine.org (Francis Daly)
Date: Mon, 30 Mar 2015 20:18:52 +0100
Subject: try_files is broken with geoip?
In-Reply-To:
References:
Message-ID: <20150330191852.GS29618@daoine.org>

On Mon, Mar 30, 2015 at 08:26:13PM +0300, Peter Volkov wrote:

Hi there,

> We are experiencing a problem: if we use a geoip variable inside a location,
> try_files does not work and only the root location is checked for files.

That is working as expected for "if" inside "location". It's unrelated to geoip.

http://wiki.nginx.org/IfIsEvil

Third item:

# try_files won't work due to if
location /if-try-files {
    try_files /file @fallback;

    set $true 1;

    if ($true) {
        # nothing
    }
}

You'll want to redesign whatever you are doing not to use "if" inside "location" for anything other than "return" or effective equivalents.
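One common redesign along the lines Francis describes is to move the decision out of the location with a map, which is evaluated lazily and does not interfere with try_files the way a rewrite-phase "if" does. An untested sketch, reusing Peter's variable names and assuming $reject_access is only read elsewhere in the fuller config:

```nginx
# http{} level: compute $reject_access without a rewrite-phase "if"
map $dontsecure $reject_access {
    1       0;   # trusted networks from the geo block: do not reject
    default 1;
}

server {
    listen 172.16.11.31 default_server;

    location ~ /([^/]*?)(_lang_[0-9])?/.*\.ts {
        root /tmp/test/;
        try_files /dir1$uri /dir2$uri =404;   # no "if" left to break it
    }
}
```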
f
--
Francis Daly        francis at daoine.org

From francis at daoine.org Mon Mar 30 19:21:02 2015
From: francis at daoine.org (Francis Daly)
Date: Mon, 30 Mar 2015 20:21:02 +0100
Subject: allow access to certain client addresses or use auth_basic
In-Reply-To: <55190D3D.1060503@arcor.de>
References: <55190D3D.1060503@arcor.de>
Message-ID: <20150330192102.GT29618@daoine.org>

On Mon, Mar 30, 2015 at 10:45:49AM +0200, basti wrote:

Hi there,

> is there a way to do the following in an nginx server or location config?
>
> 1. allow access to certain client addresses
> 2. if the IP is not in the list, allow access via ngx_http_auth_basic_module

Yes.

http://nginx.org/r/satisfy

f
--
Francis Daly        francis at daoine.org

From peter.volkov at gmail.com Mon Mar 30 19:50:31 2015
From: peter.volkov at gmail.com (Peter Volkov)
Date: Mon, 30 Mar 2015 22:50:31 +0300
Subject: try_files is broken with geoip?
In-Reply-To: <20150330191852.GS29618@daoine.org>
References: <20150330191852.GS29618@daoine.org>
Message-ID:

On Mon, Mar 30, 2015 at 10:18 PM, Francis Daly wrote:

> On Mon, Mar 30, 2015 at 08:26:13PM +0300, Peter Volkov wrote:
>
> Hi there,
>
> > We are experiencing a problem: if we use a geoip variable inside a location,
> > try_files does not work and only the root location is checked for files.
>
> That is working as expected for "if" inside "location". It's unrelated
> to geoip.
>
> http://wiki.nginx.org/IfIsEvil
>
> Third item:
>
> # try_files won't work due to if
> location /if-try-files {
>     try_files /file @fallback;
>
>     set $true 1;
>
>     if ($true) {
>         # nothing
>     }
> }
>
> You'll want to redesign whatever you are doing not to use "if" inside
> "location" for anything other than "return" or effective equivalents.

Hell, true! Thank you, Francis!
--
Peter.
-------------- next part --------------
An HTML attachment was scrubbed...
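Going back to the satisfy pointer Francis gave for the allow-or-auth_basic question: a minimal sketch of what that setup typically looks like. The address range and file paths here are placeholders, not values from the thread:

```nginx
location / {
    satisfy any;                       # either check below is enough

    allow 192.168.1.0/24;              # 1. whitelisted client addresses
    deny  all;

    auth_basic           "Restricted";  # 2. everyone else authenticates
    auth_basic_user_file /etc/nginx/.htpasswd;
}
```

With satisfy any, a request from the allowed range passes without credentials; any other client is challenged for basic auth.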
URL:

From nginx-forum at nginx.us Mon Mar 30 21:11:44 2015
From: nginx-forum at nginx.us (itpp2012)
Date: Mon, 30 Mar 2015 17:11:44 -0400
Subject: try_files is broken with geoip?
In-Reply-To:
References:
Message-ID: <605bc686e0498508d54cf1c1bab5f063.NginxMailingListEnglish@forum.nginx.org>

Peter Volkov Wrote:
-------------------------------------------------------
> On Mon, Mar 30, 2015 at 8:45 PM, itpp2012
> wrote:
>
> > Peter Volkov Wrote:
> > [...]
> > root /tmp/test/;
> > "/tmp/test/test/1.ts" -> 404
> >
> > /tmp/test has the following files:
> > /tmp/test/dir2/test/1.ts - file
> > /tmp/test/dir1/test - directory
> >
> > looks up /tmp/test/dir1/test/1.ts or if it does not exist
> > /tmp/test/dir2/test/1.ts file.
> > [...]
> >
> > I think you first need to sort out WHERE stuff is (root+uri), as your 404 is
> > valid given the other paths you have mentioned.
>
> Sorry, I didn't understand your answer. Could you explain what you
> mean, please?

The 404 is about /tmp/test/test/1.ts (root+uri); however, every other path you want to test for is not in there, so it's always going to fail. Maybe if you set root /tmp/;, try_files could work.

> We use the *try_files /dir1$uri /dir2$uri =404;* directive to look up the file first
> in dir1 and, if the file does not exist in dir1/test/, then look it up in dir2/test.

You could do a fallback 'try_files $uri $uri/ @fallback;' and then test again with a different root setting.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257753,257762#msg-257762

From devel at jasonwoods.me.uk Tue Mar 31 10:18:55 2015
From: devel at jasonwoods.me.uk (Jason Woods)
Date: Tue, 31 Mar 2015 11:18:55 +0100
Subject: "gzip on" duplicates Content-Encoding header if an empty one already exists
In-Reply-To: <1453824.6b2BFrKz7F@tornado>
References: <1453824.6b2BFrKz7F@tornado>
Message-ID: <7CC87235-C8A2-41A1-BAEA-234A9AA62DA0@jasonwoods.me.uk>

> On Wednesday, March 25, 2015 12:04:03 PM Jason Woods wrote:
>> Hi,
>>
>> I have a (probably dodgy) application that is sending out uncompressed XML
>> with the following header. That is, an empty Content-Encoding header.
>>
>> Content-Encoding:
>>
>> This works fine, until I enable gzip on Nginx 1.6.2 latest (which is a proxy
>> to the application.) Nginx compresses the XML, and adds ANOTHER
>> Content-Encoding header, containing "gzip". I end up with this response:
>>
>> Content-Encoding:
>> Content-Encoding: gzip
>>
>> This seems to break on Safari and Google Chrome (I have not tested other
>> browsers.) They seem to ignore the latter header, assume that the content is
>> not compressed, and try to render the binary compressed output. Is this an
>> issue in the client implementations, an issue in the Nginx GZIP
>> implementation, an issue in the upstream application, or a mixture of all
>> 3?
>>
>> Looking at Nginx 1.6.2's ngx_http_gzip_filter_module.c lines 246 to 255
>> (which I believe is the correct place) it checks for the existence of a
>> Content-Encoding header with a positive (non-zero) length - so it looks
>> like if any other Content-Encoding was already specified, Nginx GZIP does
>> not do anything and does not duplicate the header. So it seems the case of an
>> empty Content-Encoding slips through. Should this be the case? Should it
>> remove the existing blank header first, or just not GZIP if it exists and
>> is empty?
>>
>> Thanks in advance,
>>
>> Jason

On 25 Mar 2015, at 19:08, Styopa Semenukha wrote:
>
> Probably discarding the Content-Encoding header from the upstream will
> resolve this:
> http://nginx.org/r/proxy_hide_header
> --
> Best regards,
> Styopa Semenukha.

(Apologies - resending on-list.)

Thanks, I'll give that a go! Though I do think this might be incorrect Nginx behaviour, as the header should be modified, not duplicated. I'd like to find out if that is the case. Hopefully an easy fix that will save others time down the line!

Jason
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Tue Mar 31 23:31:15 2015
From: nginx-forum at nginx.us (cubicdaiya)
Date: Tue, 31 Mar 2015 19:31:15 -0400
Subject: about proxy_request_buffering
In-Reply-To: <113fc02c8e7a5d1d6e14f767fc607f1c.NginxMailingListEnglish@forum.nginx.org>
References: <95a28ed575a6583da2bf9f5e7f383a9c.NginxMailingListEnglish@forum.nginx.org> <113fc02c8e7a5d1d6e14f767fc607f1c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <904b195261283f1336e10be76c9f86e4.NginxMailingListEnglish@forum.nginx.org>

> Same problem here. I suspect it's automatically disabled when SPDY is
> enabled. I also found an issue filed for Tengine:
> https://github.com/alibaba/tengine/issues/444. Might be a similar issue?

Thanks for your comment. I've looked at the issue now. Probably so.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257731,257776#msg-257776
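To close out the empty Content-Encoding thread above with something concrete: a hedged sketch of Styopa's proxy_hide_header workaround. The upstream address and MIME types are placeholders, and this is only safe if the upstream never sends a real (non-empty) encoding:

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;

    # drop the upstream's (empty) Content-Encoding header so the gzip
    # filter sees none and emits exactly one "Content-Encoding: gzip"
    proxy_hide_header Content-Encoding;

    gzip on;
    gzip_types application/xml text/xml;
}
```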