From nginx-forum at nginx.us Tue Sep 1 06:26:44 2015 From: nginx-forum at nginx.us (xfeep) Date: Tue, 01 Sep 2015 02:26:44 -0400 Subject: [ANN] Nginx-Clojure v0.4.2 Message-ID: <43a1ab4213c839dfeca37861b53d35e6.NginxMailingListEnglish@forum.nginx.org> Nginx-Clojure 0.4.2 (2015-08-31) 1. New Feature: Support Sente (issue #87, see this PR (https://github.com/ptaoussanis/sente/pull/160)) 2. New Feature: Per-message Compression Extensions (PMCEs) for WebSocket (issue #88) 3. New Feature: Add add-aggregated-listener! to make handling small but fragmented WebSocket messages easier in Clojure 4. Enhancement: Support building on a Linux ARM machine 5. Bug Fix: WebSocket and Server Channel do not work with some Ring middlewares (issue #89) 6. Bug Fix: Autodetect jvm_path doesn't work sometimes Web Site http://nginx-clojure.github.io/ Source Hosted on Github https://github.com/nginx-clojure/nginx-clojure Google Group (mailing list) https://groups.google.com/forum/#!forum/nginx-clojure Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261346,261346#msg-261346 From nginx-forum at nginx.us Tue Sep 1 12:10:39 2015 From: nginx-forum at nginx.us (nginxsantos) Date: Tue, 01 Sep 2015 08:10:39 -0400 Subject: Parsing HTTP Response Message-ID: <7f7d464826b4fa7bdf0ceeb8a234ed89.NginxMailingListEnglish@forum.nginx.org> I have been using Tengine's health check and it does not support parsing the health check response (like what commercial Nginx provides). Is there any other third party module which I can use to parse the HTTP health check response? Any help on this would be appreciated. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261359,261359#msg-261359 From nginx-forum at nginx.us Tue Sep 1 12:55:36 2015 From: nginx-forum at nginx.us (reddwarf) Date: Tue, 01 Sep 2015 08:55:36 -0400 Subject: dynamically rate_limit ? 
Message-ID: <42996df0e1778599f4c78610730c99bd.NginxMailingListEnglish@forum.nginx.org> Hi folks, I'm after a way to dynamically adjust bandwidth in nginx. Current vhost setup is: maximum connections 10 max connection per client ip = 2 maximum rate per connection = 8Mbit Ideally, 5 clients x 2 connections x 8Mbit rate = 80 Mbit = 10 clients x 1 connection x 8Mbit rate What I would prefer is to set the limit_rate statement as a dynamic function of maximum shared bandwidth divided by the number of active connections; therefore, the first connection would get the full 80Mbit, the second connection 40Mbit, and so on. I did some searching about this a year ago and couldn't find anything matching this functionality. Has anything changed that would enable the above functionality? Thanks, Miro Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261362,261362#msg-261362 From francis at daoine.org Tue Sep 1 18:19:17 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 1 Sep 2015 19:19:17 +0100 Subject: Create a single config for multiple Apache virtual hosts. In-Reply-To: References: Message-ID: <20150901181917.GC3177@daoine.org> On Wed, Aug 26, 2015 at 06:48:18AM -0400, YemSalat wrote: Hi there, > I am trying to run nginx as reverse proxy for Apache, running multiple > virtual hosts (domains) on the same ip. > > I wanted to know if it is possible to have a single nginx config, that would > pass the correct url/hostname/path to Apache, without having to create a > separate server block for each domain. If nginx is just reverse-proxying everything, then you probably just want to send "Host:" using proxy_set_header (http://nginx.org/r/proxy_set_header), but otherwise there is nothing special to do. > For example if all domain directories > are the same as their hostnames: > /var/www/mydomain.com/ > /var/www/anotherdomain.org/ > ... Since nginx isn't touching the filesystem, it doesn't matter. 
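A single catch-all server along those lines might look roughly like this (a sketch only; the backend address 127.0.0.1:8080 is an assumption, not something stated in this thread):

```nginx
server {
    listen 80 default_server;
    server_name _;

    location / {
        # Pass the original Host header through so Apache can select
        # the matching virtual host; one server{} covers all domains.
        proxy_set_header Host $host;
        # Let upstream see the real client address (see the note below
        # about upstream knowing the client IP).
        proxy_set_header X-Real-IP $remote_addr;
        # Backend address is illustrative.
        proxy_pass http://127.0.0.1:8080;
    }
}
```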
You'll have to decide what kind of logging you want -- possibly just adding "$host" to each line in a single access.log will be enough. > If it is - are there any potential issues with this setup? For simple things, none spring to mind. If you care about upstream (apache) knowing the client IP address, you'll have to allow for that. Good luck with it, f -- Francis Daly francis at daoine.org From fsantiago at deviltracks.net Tue Sep 1 20:41:20 2015 From: fsantiago at deviltracks.net (fsantiago at deviltracks.net) Date: Tue, 01 Sep 2015 16:41:20 -0400 Subject: nginx v1.8.0 / redirects Message-ID: <2adfa7035803406cda8f4bcd15b4b294@deviltracks.net> How do I best: 1.> redirect > www. a.) if www is already present, skip to step 2 2.> redirect http://www. request > https://...... ??????? Thanks. - Fabian S. From francis at daoine.org Tue Sep 1 21:16:19 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 1 Sep 2015 22:16:19 +0100 Subject: nginx v1.8.0 / redirects In-Reply-To: <2adfa7035803406cda8f4bcd15b4b294@deviltracks.net> References: <2adfa7035803406cda8f4bcd15b4b294@deviltracks.net> Message-ID: <20150901211619.GD3177@daoine.org> On Tue, Sep 01, 2015 at 04:41:20PM -0400, fsantiago at deviltracks.net wrote: Hi there, > 1.> redirect > www. > a.) if www is already present, skip to step 2 > > 2.> redirect http://www. request > https://...... http://nginx.org/en/docs/http/server_names.html Use two server{} blocks. One matches only www.* and redirects to https://$host$request_uri The other redirects to http://www.$host$request_uri f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Sep 1 21:37:23 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 1 Sep 2015 22:37:23 +0100 Subject: Redirect on specific threshold !! 
In-Reply-To: References: <2182722.jmPLUFHago@vbart-workstation> <20150616213030.GC23844@daoine.org> Message-ID: <20150901213723.GE3177@daoine.org> On Sat, Aug 29, 2015 at 04:57:19PM +0500, shahzaib shahzaib wrote: Hi there, > Sorry got back to this thread after long time. First of all, thanks to > all for suggestions. Alright, i have also checked with rate_limit module, > should this work as well or it should be only limit_conn (to parse > error_log and constructing redirect URL). I think the answers already given were different, depending on different understanding of your requirements. Perhaps if you can re-state (or clarify) them, you will get a more specific answer. For what it's worth, what I think you want is a tool to read the access logs from storage.domain.com, copy files to cache.domain.com, change the nginx config on storage.domain.com, and restart nginx on storage.domain.com. None of which involves any special modules or config within nginx. f -- Francis Daly francis at daoine.org From fsantiago at deviltracks.net Wed Sep 2 01:33:11 2015 From: fsantiago at deviltracks.net (fsantiago at deviltracks.net) Date: Tue, 01 Sep 2015 21:33:11 -0400 Subject: nginx v1.8.0 / redirects In-Reply-To: <20150901211619.GD3177@daoine.org> References: <2adfa7035803406cda8f4bcd15b4b294@deviltracks.net> <20150901211619.GD3177@daoine.org> Message-ID: <8abf350047d676e04c641823c7cd37ab@deviltracks.net> thanks. worked like a charm! On 2015-09-01 17:16, Francis Daly wrote: > On Tue, Sep 01, 2015 at 04:41:20PM -0400, fsantiago at deviltracks.net > wrote: > > Hi there, > >> 1.> redirect > www. >> a.) if www is already present, skip to step 2 >> >> 2.> redirect http://www. request > https://...... > > http://nginx.org/en/docs/http/server_names.html > > Use two server{} blocks. 
> > One matches only www.* and redirects to https://$host$request_uri > > The other redirects to http://www.$host$request_uri > > f From nginx-forum at nginx.us Wed Sep 2 03:18:32 2015 From: nginx-forum at nginx.us (log) Date: Tue, 01 Sep 2015 23:18:32 -0400 Subject: How to cache js/css request containing a question mark? In-Reply-To: <4029e61e51f2990de4068734de5e275c.NginxMailingListEnglish@forum.nginx.org> References: <4029e61e51f2990de4068734de5e275c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <40cb60259a4c46eb3e332a7ebb2c0a87.NginxMailingListEnglish@forum.nginx.org> Thank you, Biazus! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261338,261373#msg-261373 From nginx-forum at nginx.us Wed Sep 2 06:40:14 2015 From: nginx-forum at nginx.us (log) Date: Wed, 02 Sep 2015 02:40:14 -0400 Subject: How to cache js/css request containing a question mark? In-Reply-To: <4029e61e51f2990de4068734de5e275c.NginxMailingListEnglish@forum.nginx.org> References: <4029e61e51f2990de4068734de5e275c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <71f5f5fdd9e6e49a918df5ccc5e35f55.NginxMailingListEnglish@forum.nginx.org> Well, unfortunately this is not working... http://example.com/style.css?ver=4.3 http://example.com/jquery-migrate.min.js?ver=1.2.1 biazus Wrote: ------------------------------------------------------- > Please try to remove $ in the end of the expression: > > something like this: > > location ~ .*\.(js|css) { > expires 7d; > } > > Also, make sure you are using args in the cache key: > > proxy_cache_key "$host$uri$is_args$args"; > > Regards, > Biazus Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261338,261374#msg-261374 From reallfqq-nginx at yahoo.fr Wed Sep 2 07:02:16 2015 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Wed, 2 Sep 2015 09:02:16 +0200 Subject: nginx v1.8.0 / redirects In-Reply-To: <8abf350047d676e04c641823c7cd37ab@deviltracks.net> References: <2adfa7035803406cda8f4bcd15b4b294@deviltracks.net> <20150901211619.GD3177@daoine.org> <8abf350047d676e04c641823c7cd37ab@deviltracks.net> Message-ID: I would suggest you avoid multiple redirects in the case a client connects with http://domain, because your current setup will make the client follow this flow: http://domain > http://www.domain > https://www.domain This will hurt your TTFB. I suggest your first redirect should point directly to the HTTPS scheme. --- *B. R.* On Wed, Sep 2, 2015 at 3:33 AM, wrote: > thanks. worked like a charm! > > On 2015-09-01 17:16, Francis Daly wrote: > >> On Tue, Sep 01, 2015 at 04:41:20PM -0400, fsantiago at deviltracks.net >> wrote: >> >> Hi there, >> >> 1.> redirect > www. >>> a.) if www is already present, skip to step 2 >>> >>> 2.> redirect http://www. request > https://...... >>> >> >> http://nginx.org/en/docs/http/server_names.html >> >> Use two server{} blocks. >> >> One matches only www.* and redirects to https://$host$request_uri >> >> The other redirects to http://www.$host$request_uri >> >> f >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Wed Sep 2 07:06:52 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 2 Sep 2015 09:06:52 +0200 Subject: How to cache js/css request containing a question mark? In-Reply-To: <71f5f5fdd9e6e49a918df5ccc5e35f55.NginxMailingListEnglish@forum.nginx.org> References: <4029e61e51f2990de4068734de5e275c.NginxMailingListEnglish@forum.nginx.org> <71f5f5fdd9e6e49a918df5ccc5e35f55.NginxMailingListEnglish@forum.nginx.org> Message-ID: You will need to be a little bit more specific than 'this is not working' to get some help. 
http://www.catb.org/esr/faqs/smart-questions.html Btw, the documentation shows the default value for proxy_cache_key if none is provided, and it already takes arguments (and thus the arguments' separator) into account: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_key http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_uri --- *B. R.* On Wed, Sep 2, 2015 at 8:40 AM, log wrote: > Well, unfortunately this is not working... > > http://example.com/style.css?ver=4.3 > http://example.com/jquery-migrate.min.js?ver=1.2.1 > > > biazus Wrote: > ------------------------------------------------------- > > Please try to remove $ in the end of the expression: > > > > something like this: > > > > location ~ .*\.(js|css) { > > expires 7d; > > } > > > > Also, make sure you are using args in the cache key: > > > > proxy_cache_key "$host$uri$is_args$args"; > > > > Regards, > > Biazus > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,261338,261374#msg-261374 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Sep 2 08:54:42 2015 From: nginx-forum at nginx.us (Govind) Date: Wed, 02 Sep 2015 04:54:42 -0400 Subject: About setting cookie to capture j_username and j_password Message-ID: <46e4c0e49e32303003ac2589aac178cf.NginxMailingListEnglish@forum.nginx.org> Hi, We are using NGINX as a proxy server and redirecting to Amazon ELB to access the Agile application. We are unable to access the application because j_username & j_password are not sent with the cookie to the proxy server. Is there a way to set up a cookie to capture j_username & j_password? 
Please help Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261377,261377#msg-261377 From mdounin at mdounin.ru Wed Sep 2 16:18:21 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Sep 2015 19:18:21 +0300 Subject: Implementing proxy_cache_lock when updating items In-Reply-To: <669839883a65c0f998c9ca149ec7ff6d.NginxMailingListEnglish@forum.nginx.org> References: <669839883a65c0f998c9ca149ec7ff6d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150902161821.GP72232@mdounin.ru> Hello! On Mon, Aug 31, 2015 at 09:16:00AM -0400, footplus wrote: > Hello, > > I am currently implementing a caching proxy with many short-lived items, > expiring at a specific date (Expires header set at an absolute time between > 10 and 30 seconds in the future by the origin). For various reasons, my > cache is multi-level (edge, intermediate 2, intermediate 1, origin) and > needs to make the items expire at the edge at exactly the time set in the > Expires header. When the item expires, i want an updated version of the item > to be available at the same URL. > > I have been able to make it work, and I'm using proxy_cache_lock at every > cache level to ensure i'm not hammering the origin servers nor the proxy > server. As documented, this works perfectly for items not present in the > cache. > > I am also using proxy_cache_use_stale updating to avoid this hammering also > in the case of already in-cache items. > > My problems begin when an in-cache item expires. 2 top-level caches (let's > name them E1,E2 for example) request an updated fragment from the below > level (INT). The fragment is requested by INT from below for the first > request, but for the second request, a stale fragment is sent (according to > proxy_cache_use_stale setting, with the UPDATING status). So far, all is > working according to the docs. 
The problem is that fragments in the UPDATING > status are stale, and cannot be cached at all by E1,E2.., and this can be > very impacting for INT, because all the requests made on E1/E2 are now > proxied to INT directly, until INT has a fresh version of the item installed > in cache (this is quite a short duration, but in testing this generates > bursts of 15 to 50 requests in the meantime). > > Is there a way to implement the proxy_cache_lock to make it work also for > expired in-cache items in the configuration ? If not, can you suggest a way > to implement this (i'm not familiar with nginx's source, but i'm willing to > dig into it) ? Instead, you may consider using "proxy_cache_use_stale updating" in combination with one of the following: - Return some fixed short expiration time for stale responses returned by INT. This will ensure that edge servers will cache it for at least some time, and won't try to request new versions over and over. It should be possible to do so with the "expires" directive and a map{} from $upstream_cache_status, e.g.: map $upstream_cache_status $expires { default ""; STALE "10s"; } expires $expires; Alternatively, if your edge servers use nginx, you can do map $upstream_cache_status $expires { default ""; STALE "10"; } add_header X-Accel-Expires $expires; and it will apply to edge servers only, and won't try to change any other response headers. - Ensure that the version cached by INT will be considered stale before it will become uncacheable according to HTTP headers. This can be done either with proxy_ignore_headers + proxy_cache_valid, or using the X-Accel-Expires header returned by backends. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Sep 2 17:32:36 2015 From: nginx-forum at nginx.us (maplesyrupandrew) Date: Wed, 02 Sep 2015 13:32:36 -0400 Subject: Why is NGINX serving a 404 here? 
In-Reply-To: References: Message-ID: <24878db649c3362c2b7b14b1d7764cd2.NginxMailingListEnglish@forum.nginx.org> Thanks for the response! I made that change, and I think what was really giving the 404 was that I had changed permissions on the `index.html` file, but not on the var/www/ folder, which I needed so that the nginx user group could access it. Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261327,261395#msg-261395 From nginx-forum at nginx.us Wed Sep 2 17:37:49 2015 From: nginx-forum at nginx.us (maplesyrupandrew) Date: Wed, 02 Sep 2015 13:37:49 -0400 Subject: How to send all these requests to the same file when I have an Angular state based router? Message-ID: I'm using the Angular ui-router which uses states to control the routes. Meaning that all requests should serve the same index.html file, and the JavaScript worries about loading in appropriate content. The .htaccess rules that control the same thing are below: RewriteEngine On # Required to allow direct-linking of pages so they can be processed by Angular RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !index RewriteRule (.*) index.html [L] I'm trying to achieve the same effect, but I'm serving content through nginx. I tried to achieve this by adding the third location block in the nginx config below, however, this didn't seem to do the trick (404s). It tries to catch all routes that are not /auth. What am I missing here? server { listen 80 default_server; root /var/www/..../dist; index index.html index.html; # Make site accessible from http://localhost/ server_name _; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. 
try_files $uri $uri/ =404; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /auth{ proxy_pass http://auth; } location /^(?!auth$).* { try_files $uri /var/www/..../dist/index.html; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261396,261396#msg-261396 From francis at daoine.org Wed Sep 2 19:04:35 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 2 Sep 2015 20:04:35 +0100 Subject: How to send all these requests to the same file when I have an Angular state based router? In-Reply-To: References: Message-ID: <20150902190435.GF3177@daoine.org> On Wed, Sep 02, 2015 at 01:37:49PM -0400, maplesyrupandrew wrote: Hi there, > I'm using the Angular ui-router which uses states to control the routes. > > Meaning that all requests should serve the same index.html file, and the > JavaScript worries about loading in appropriate content. From those words, and the Subject: line above, I'm not actually sure what it is that you want to do. "all requests to the same file" is one thing; but does not seem to be what the rest of your mail implies. > The .htaccess rules that control the same thing are below: > > RewriteEngine On > # Required to allow direct-linking of pages so they can be processed by > Angular > RewriteCond %{REQUEST_FILENAME} !-f > RewriteCond %{REQUEST_FILENAME} !-d > RewriteCond %{REQUEST_URI} !index RewriteRule (.*) index.html [L] > I think that means "send the file, else send the directory index, else send the fallback /index.html url". Is that correct? Because if so, that's what try_files is for. > I'm trying to achieve the same effect, but I'm serving content through > nginx. I tried to achieve this by adding the third location block in the > nginx config below, however, this didn't seem to do the trick (404s). It > tries to catch all routes that are not /auth. > > What am I missing here? 
If you have "location /{}" and "location /auth{}", then the first one will match all requests that do not start "/auth". Your third location seems odd. (It's a prefix location, since it does not start with ~.) > index index.html index.html; That line probably doesn't do much useful. > server_name _; That line probably doesn't do much useful. > location / { > # First attempt to serve request as file, then > # as directory, then fall back to displaying a 404. > try_files $uri $uri/ =404; But you want "try as file, then as directory, then fall back to /index.html", no? try_files $uri $uri/ /index.html; > } > > location /auth{ > proxy_pass http://auth; > } > > location /^(?!auth$).* { > try_files $uri /var/www/..../dist/index.html; > } Remove that location{} altogether. It probably won't do any harm, as it is unlikely to match any request. But it is confusing. f -- Francis Daly francis at daoine.org From tomnyberg at gmail.com Thu Sep 3 04:44:09 2015 From: tomnyberg at gmail.com (Thomas Nyberg) Date: Thu, 03 Sep 2015 00:44:09 -0400 Subject: How to edit url and pass forward to wsgi? Message-ID: <55E7D019.4070804@gmail.com> Hello, If I have the following directive: location ~ /staging/dog/.*/info/cat { rewrite /staging/(.+) /$1 break; include uwsgi_params; uwsgi_pass 127.0.0.1:3130; } then a call to `http://127.0.0.1/staging/doc/v0.2/info/cat` gets passed through to my wsgi handler fine. Now if I leave that directive in place, but put the following one before it, then get "502 Bad Gateway": location ~ /dog/.*/info/cat { include uwsgi_params; uwsgi_pass 127.0.0.1:3030; } Nothing else was changed. The route is different, the port is different. Why would this affect the other one? Is it that I'm somehow "breaking" out and _then_ matching the other one? My question is: how do I _not_ break out? I've tried removing "break" but the effect seems exactly the same (i.e. undesired). How do I make it so that the url is rewritten and then pass immediately on? I.e. 
I want to _not_ leave the location box once I've matched. Is this possible? I've searched the internet for a long time and read the docs here: http://nginx.org/en/docs/http/ngx_http_rewrite_module.html but I've had no success so far. Thanks a lot for any help. Cheers, Thomas From nginx-forum at nginx.us Thu Sep 3 07:38:06 2015 From: nginx-forum at nginx.us (footplus) Date: Thu, 03 Sep 2015 03:38:06 -0400 Subject: Implementing proxy_cache_lock when updating items In-Reply-To: <20150902161821.GP72232@mdounin.ru> References: <20150902161821.GP72232@mdounin.ru> Message-ID: <56257a7a39da7eec8dce51e1f464de51.NginxMailingListEnglish@forum.nginx.org> Thanks for your reply. I'm currently using this mechanism (small variation of yours) to work around the limitation. On every intermediate cache server, i'm using the following (whole chain is using nginx). map $upstream_cache_status $accel_expires_from_upstream_cache_status { default ""; STALE 1; UPDATING 1; } more_set_headers "X-Accel-Expires: $accel_expires_from_upstream_cache_status"; (the key is using UPDATING also, because proxy_cache_use_stale is set to updating only on our setup). So far I think we can manage to use this work-around for our setup, but it has the drawback of potentially serving slightly out of date content. In our case, said items are ~1k files, with a TTL of 2~10s, and they MUST be fresh for our apps to work correctly. We're considering using an artificial "freshener" on intermediary caches, but i fear we can't very efficiently do this in our case, due to the nature of these files (HLS video playlists, locations and bitrates changing upon business decisions). Also, it would not be very practical to refresh hundreds of files with 2s TTL at 1s intervals on 3 cache layers, if as we expect a good part of them are not always asked by upstream. Thanks for the suggestion. 
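For reference, the other approach suggested earlier in the thread (making the intermediate cache consider items stale before they become uncacheable, via proxy_ignore_headers + proxy_cache_valid) would look roughly like this on INT; the 2s validity here is illustrative, not a value from our setup:

```nginx
# Ignore the origin's freshness headers for caching purposes, and
# instead treat 200 responses as fresh for a short fixed period,
# so INT revalidates before edge copies become uncacheable.
proxy_ignore_headers Expires Cache-Control;
proxy_cache_valid 200 2s;
```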
I'm still looking for a way to hard lock the updating items however :) Best regards, Aurélien Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261333,261402#msg-261402 From francis at daoine.org Thu Sep 3 07:44:11 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 3 Sep 2015 08:44:11 +0100 Subject: How to edit url and pass forward to wsgi? In-Reply-To: <55E7D019.4070804@gmail.com> References: <55E7D019.4070804@gmail.com> Message-ID: <20150903074411.GH3177@daoine.org> On Thu, Sep 03, 2015 at 12:44:09AM -0400, Thomas Nyberg wrote: Hi there, > location ~ /staging/dog/.*/info/cat { "~" means "regex match". You haven't anchored the regex, so a request for /staging/dog/x/info/cat will match this, but so will a request for /a/staging/dog/x/info/cat/b > directive in place, but put the following one before it, then get > "502 Bad Gateway": > > location ~ /dog/.*/info/cat { This also is matched by a request for /staging/dog/x/info/cat. For regex locations, first match wins. So this location is chosen to process this request. > Nothing else was changed. The route is different, the port is > different. Why would this affect the other one? Is it that I'm > somehow "breaking" out and _then_ matching the other one? No. Your request matches the first regex location, so uses the first regex location. Possibly you want location ~ ^/dog/.*/info/cat or location ~ ^/dog/.*/info/cat$ or maybe just location ^~ /dog/ depending on what the full plan is. > I.e. I want to _not_ leave the location box once I've matched. > Is this possible? Yes, it's what you are already doing. You're just not in the location block you think you are in. http://nginx.org/r/location f -- Francis Daly francis at daoine.org From tomnyberg at gmail.com Thu Sep 3 12:40:36 2015 From: tomnyberg at gmail.com (Thomas Nyberg) Date: Thu, 03 Sep 2015 08:40:36 -0400 Subject: How to edit url and pass forward to wsgi? 
In-Reply-To: <20150903074411.GH3177@daoine.org> References: <55E7D019.4070804@gmail.com> <20150903074411.GH3177@daoine.org> Message-ID: <55E83FC4.5070808@gmail.com> Thank you very much for the response. It's working now. I didn't realize that the regular expressions needed anchoring. I'm used to regular expressions where '.*' is needed for the functionality you refer to. On a related note, is there some way to log the location choices that are made? I tried using a debug logging mode, but it was far too low-level (on the level of memory allocations). Of course I could have the routes' outputs go to certain files to figure it out, but if there was a way to log something like "taking route `/staging/doc/.*/info/cat`" it would make things much easier. Thanks for the help! On 09/03/2015 03:44 AM, Francis Daly wrote: > On Thu, Sep 03, 2015 at 12:44:09AM -0400, Thomas Nyberg wrote: > > Hi there, > >> location ~ /staging/dog/.*/info/cat { > > "~" means "regex match". > > You haven't anchored the regex, so a request for /staging/dog/x/info/cat > will match this, but so will a request for /a/staging/dog/x/info/cat/b > >> directive in place, but put the following one before it, then get >> "502 Bad Gateway": >> >> location ~ /dog/.*/info/cat { > > This also is matched by a request for /staging/dog/x/info/cat. > > For regex locations, first match wins. So this location is chosen to > process this request. > >> Nothing else was changed. The route is different, the port is >> different. Why would this affect the other one? Is it that I'm >> somehow "breaking" out and _then_ matching the other one? > > No. Your request matches the first regex location, so uses the first > regex location. > > Possibly you want > > location ~ ^/dog/.*/info/cat > > or > > location ~ ^/dog/.*/info/cat$ > > or maybe just > > location ^~ /dog/ > > depending on what the full plan is. > >> I.e. I want to _not_ leave the location box once I've matched. >> Is this possible? 
> > Yes, it's what you are already doing. > > You're just not in the location block you think you are in. > > http://nginx.org/r/location > > f > From fsantiago at deviltracks.net Thu Sep 3 14:40:05 2015 From: fsantiago at deviltracks.net (fsantiago at deviltracks.net) Date: Thu, 03 Sep 2015 10:40:05 -0400 Subject: nginx v1.8.0 / redirects In-Reply-To: References: <2adfa7035803406cda8f4bcd15b4b294@deviltracks.net> <20150901211619.GD3177@daoine.org> <8abf350047d676e04c641823c7cd37ab@deviltracks.net> Message-ID: <24b91799a904903eac7612a81d8e8157@deviltracks.net> I hear what you're saying. i will do that instead and test it out. thank you. -- Fabian S. On 2015-09-02 03:02, B.R. wrote: > I would suggest you avoid multiple redirects in the case a client connects with http://domain [1], because your current setup will make the client following this flow: > http://domain [1] > http://www.domain [2] > https://www.domain [3] > > This will hurt your TTFB. I suggest your first redirect should directly point to the HTTPS scheme. > > --- > B. R. > > On Wed, Sep 2, 2015 at 3:33 AM, wrote: > thanks. worked like a charm! > > On 2015-09-01 17:16, Francis Daly wrote: > On Tue, Sep 01, 2015 at 04:41:20PM -0400, fsantiago at deviltracks.net wrote: > > Hi there, > > 1.> redirect > www. > a.) if www is already present, skip to step 2 > > 2.> redirect http://www [4]. request > https://...... > http://nginx.org/en/docs/http/server_names.html [5] > > Use two server{} blocks. > > One matches only www.* and redirects to https://$host$request_uri > > The other redirects to http://www. 
[6]$host$request_uri > > f _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx [7] _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx [7] Links: ------ [1] http://domain [2] http://www.domain [3] https://www.domain [4] http://www [5] http://nginx.org/en/docs/http/server_names.html [6] http://www. [7] http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Sep 3 16:01:42 2015 From: nginx-forum at nginx.us (mex) Date: Thu, 03 Sep 2015 12:01:42 -0400 Subject: logging access in stream module Message-ID: hi, is there a way to log access (ip, date, size of payload) within the stream module? I found only error_log configurable for the stream so far. cheers, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261411,261411#msg-261411 From francis at daoine.org Thu Sep 3 18:47:19 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 3 Sep 2015 19:47:19 +0100 Subject: How to edit url and pass forward to wsgi? In-Reply-To: <55E83FC4.5070808@gmail.com> References: <55E7D019.4070804@gmail.com> <20150903074411.GH3177@daoine.org> <55E83FC4.5070808@gmail.com> Message-ID: <20150903184719.GI3177@daoine.org> On Thu, Sep 03, 2015 at 08:40:36AM -0400, Thomas Nyberg wrote: Hi there, > On a related note, is there some way to log the location choices > that are made? I tried using a debug logging mode, but it was far > too low-level (on the level of memory allocations). Of course I > could have the routes' outputs go to certain files to figure it out, > but if there was a way to log something like "taking route > `/staging/doc/.*/info/cat`" it would make things much easier. The debug log should have a bunch of "test location:" lines, followed by one "using configuration" line. 
That's the location that is being used for this request. Other than that, the rules at http://nginx.org/r/location and at http://nginx.org/en/docs/http/request_processing.html should make it straightforward to determine which location is used for this request. And now that you know that regexes are not implicitly anchored, it should be clear what is going on. It's even easier if there are only prefix locations in use. Cheers, f -- Francis Daly francis at daoine.org From emailgrant at gmail.com Fri Sep 4 04:24:05 2015 From: emailgrant at gmail.com (Grant) Date: Thu, 3 Sep 2015 21:24:05 -0700 Subject: gzip disrupts users? Message-ID: Over the years, whenever I've enabled gzip compression on my web server I've seen website conversions drop. I just enabled compression on nginx again and I noticed conversions drop again: gzip on; gzip_disable msie6; gzip_types application/javascript text/css; Any thoughts on this? Is there more config I should consider? - Grant From reallfqq-nginx at yahoo.fr Fri Sep 4 07:09:26 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 4 Sep 2015 09:09:26 +0200 Subject: gzip disrupts users? In-Reply-To: References: Message-ID: You could use online tools/crawlers to validate the content of your pages. If nothing is wrong, I suppose what you think you see might be an observation bias. You will need to define 'conversions'. If it means, as I think, people visit your website but buy less, that means they display some pages... so nothing wrong with your nginx I suppose. There is nothing wrong with the configuration snippet you provided anyway. --- *B. R.* On Fri, Sep 4, 2015 at 6:24 AM, Grant wrote: > Over the years, whenever I've enabled gzip compression on my web > server I've seen website conversions drop. I just enabled compression > on nginx again and I noticed conversions drop again: > > gzip on; > gzip_disable msie6; > gzip_types application/javascript text/css; > > Any thoughts on this? Is there more config I should consider? 
> > - Grant > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brentgclarklist at gmail.com Fri Sep 4 10:04:48 2015 From: brentgclarklist at gmail.com (Brent Clark) Date: Fri, 4 Sep 2015 12:04:48 +0200 Subject: Nginx as a reverse caching / load balancing solution - verification of setup needed. Message-ID: <55E96CC0.4080602@gmail.com> Good day Guys. I would like to ask if someone could please verify my configs: http://pastebin.com/8Xk63RYD http://pastebin.com/EfNSpvMV I have an Nginx server sitting in front of two Apache servers. I'm using Nginx as a reverse caching / load balancing solution. But what I'm trying to achieve is to cache only images, CSS and JS. All in all it appears it all works. The other question I would like to ask is: does the order of location blocks matter? If anyone can help, it would be appreciated. Thanks Brent From krebs.seb at gmail.com Fri Sep 4 11:29:56 2015 From: krebs.seb at gmail.com (Sebastian Krebs) Date: Fri, 4 Sep 2015 13:29:56 +0200 Subject: Nginx as a reverse caching / load balancing solution - verification of setup needed. In-Reply-To: <55E96CC0.4080602@gmail.com> References: <55E96CC0.4080602@gmail.com> Message-ID: 2015-09-04 12:04 GMT+02:00 Brent Clark : > Good day Guys. > > I would like to ask if someone could please verify my configs : > http://pastebin.com/8Xk63RYD > http://pastebin.com/EfNSpvMV > > I have a Nginx server sitting in front of two Apache servers. > I'm using Nginx as a reverse caching / load balancing solution. > But what I'm trying achieve is to cache only for images, css and Js. > > All in all it appears it all works. > > The other question I would like to ask is, does the order of location > matter? > Depends. For regular expression based locations it does, for prefix based locations it doesn't.
See http://nginx.org/en/docs/http/ngx_http_core_module.html#location A location can either be defined by a prefix string, or by a regular > expression. Regular expressions are specified with the preceding "~*" > modifier (for case-insensitive matching), or the "~" modifier (for > case-sensitive matching). *To find location matching a given request, > nginx first checks locations defined using the prefix strings (prefix > locations). Among them, the location with the longest matching prefix is > selected and remembered. Then regular expressions are checked, in the order > of their appearance in the configuration file. The search of regular > expressions terminates on the first match, and the corresponding > configuration is used. If no match with a regular expression is found then > the configuration of the prefix location remembered earlier is used.* > > > If anyone can help, it would be appreciated. > > Thanks > Brent > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- github.com/KingCrunch -------------- next part -------------- An HTML attachment was scrubbed... URL: From brentgclarklist at gmail.com Fri Sep 4 12:01:54 2015 From: brentgclarklist at gmail.com (Brent Clark) Date: Fri, 4 Sep 2015 14:01:54 +0200 Subject: Nginx as a reverse caching / load balancing solution - verification of setup needed. In-Reply-To: References: <55E96CC0.4080602@gmail.com> Message-ID: <55E98832.4020006@gmail.com> Thank you ever so much. Regards Brent On 04/09/2015 13:29, Sebastian Krebs wrote: > > > 2015-09-04 12:04 GMT+02:00 Brent Clark >: > > Good day Guys. > > I would like to ask if someone could please verify my configs : > http://pastebin.com/8Xk63RYD > http://pastebin.com/EfNSpvMV > > I have a Nginx server sitting in front of two Apache servers. > I'm using Nginx as a reverse caching / load balancing solution.
> But what I'm trying achieve is to cache only for images, css and Js. > > All in all it appears it all works. > > The other question I would like to ask is, does the order of location > matter? > > > Depends. For regular expression based location is does, for prefix > based locations it doesn't. > See http://nginx.org/en/docs/http/ngx_http_core_module.html#location > > A location can either be defined by a prefix string, or by a > regular expression. Regular expressions are specified with the > preceding ?~*? modifier (for case-insensitive matching), or the > ?~? modifier (for case-sensitive matching). *To find location > matching a given request, nginx first checks locations defined > using the prefix strings (prefix locations). Among them, the > location with the longest matching prefix is selected and > remembered. Then regular expressions are checked, in the order of > their appearance in the configuration file. The search of regular > expressions terminates on the first match, and the corresponding > configuration is used. If no match with a regular expression is > found then the configuration of the prefix location remembered > earlier is used.* > > > > > If anyone can help, it would be appreciated. > > Thanks > Brent > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > -- > github.com/KingCrunch > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Fri Sep 4 12:42:54 2015 From: nginx-forum at nginx.us (donatasm) Date: Fri, 04 Sep 2015 08:42:54 -0400 Subject: Proxy module buffering and timeouts Message-ID: <93b1f60b19e181bdddc77bb9a54dfd50.NginxMailingListEnglish@forum.nginx.org> Given the following nginx config: --- master_process off; daemon off; events { worker_connections 16384; } http { error_log stderr debug; access_log on; log_not_found on; client_body_buffer_size 64k; client_body_in_single_buffer on; upstream nodes { server 127.0.0.1:8000 max_fails=0; server 127.0.0.1:8001 max_fails=0; server 127.0.0.1:8002 max_fails=0; keepalive 16384; } server { listen *:7070 backlog=16384 reuseport; keepalive_requests 2147483647; location /demo { proxy_pass http://nodes; proxy_connect_timeout 1ms; proxy_read_timeout 1ms; proxy_send_timeout 1ms; proxy_buffering on; } } } --- When requesting nginx server I get either the response from the upstream server: curl -i http://localhost:7070/demo HTTP/1.1 200 OK Server: nginx/1.9.4 Date: Fri, 04 Sep 2015 12:33:00 GMT Content-Type: text/plain Content-Length: 27 Connection: keep-alive {"message": "Hello World!"} or a timeout response: curl -i http://localhost:7070/demo HTTP/1.1 504 Gateway Time-out Server: nginx/1.9.4 Date: Fri, 04 Sep 2015 12:24:34 GMT Content-Type: text/html Content-Length: 182 Connection: keep-alive 504 Gateway Time-out

504 Gateway Time-out
nginx/1.9.4
but also i sometimes randomly get partially cut responses: curl -i http://localhost:7070/demo HTTP/1.1 200 OK Server: nginx/1.9.4 Date: Fri, 04 Sep 2015 12:24:35 GMT Content-Type: text/plain Content-Length: 27 Connection: keep-alive curl: (18) transfer closed with 27 bytes remaining to read How this can be fixed? Since proxy buffering is on, i expect nginx always return either 502 error page on upstream timeout or a response from an upstream. Here's a simple nodejs script of upstream nodes to reproduce the case: var http = require('http'); var util = require('util'); var cluster = require('cluster'); var SERVER_COUNT = 8; var HELLO_WORLD = '"message": "Hello World!"}'; var LOCALHOST = '127.0.0.1'; var PORT = 8000; function simple(request, response) { response.writeHead(200, { 'Content-Type': 'text/plain', 'Content-Length': HELLO_WORLD.length + 1 }); response.write('{'); response.end(HELLO_WORLD); } function createServer(port, handler) { http.createServer(handler).listen(port, LOCALHOST); util.log(util.format('Server running at http://%s:%d/', LOCALHOST, port)); } if (cluster.isMaster) { for (var c = 0; c < SERVER_COUNT; c++) { cluster.fork({ port: PORT + c }) } } else { createServer(process.env.port, simple); } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261424,261424#msg-261424 From mdounin at mdounin.ru Fri Sep 4 13:58:06 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 4 Sep 2015 16:58:06 +0300 Subject: Proxy module buffering and timeouts In-Reply-To: <93b1f60b19e181bdddc77bb9a54dfd50.NginxMailingListEnglish@forum.nginx.org> References: <93b1f60b19e181bdddc77bb9a54dfd50.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150904135806.GB72232@mdounin.ru> Hello! On Fri, Sep 04, 2015 at 08:42:54AM -0400, donatasm wrote: [...] 
> but also i sometimes randomly get partially cut responses: > > curl -i http://localhost:7070/demo > > HTTP/1.1 200 OK > Server: nginx/1.9.4 > Date: Fri, 04 Sep 2015 12:24:35 GMT > Content-Type: text/plain > Content-Length: 27 > Connection: keep-alive > > curl: (18) transfer closed with 27 bytes remaining to read > > How this can be fixed? By fixing your backend to return a full response or not return it at all. > Since proxy buffering is on, i expect nginx always > return either 502 error page on upstream timeout or a response from an > upstream. No, this is a wrong expectation. Buffering means that nginx will avoid doing extra work for partially filled body buffers, and it is allowed to buffer parts of a response in the filter chain. No attempt will be made to obtain a full response and check its length (and such behaviour is not possible at all if the response is big enough). As long as a response header is received from an upstream server, it will be passed to the client, and then nginx will start proxying the response body. An error can be returned only if the header was not yet passed to the client. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Sep 4 14:28:44 2015 From: nginx-forum at nginx.us (donatasm) Date: Fri, 04 Sep 2015 10:28:44 -0400 Subject: Proxy module buffering and timeouts In-Reply-To: <20150904135806.GB72232@mdounin.ru> References: <20150904135806.GB72232@mdounin.ru> Message-ID: Ok, thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261424,261428#msg-261428 From nginx-forum at nginx.us Fri Sep 4 16:56:24 2015 From: nginx-forum at nginx.us (goldfluss) Date: Fri, 04 Sep 2015 12:56:24 -0400 Subject: Handler modules : Content Handler vs Content Phase Handler Message-ID: Hello !
I'm developing some handler modules for nginx and I'm wondering why there are two kinds of handlers: - Content Phase Handler - Content Handler I read this interesting blog: http://www.nginxguts.com/2011/01/phases/ As far as I understood, the content handler can only be called once in a location configuration while the content phase handler is called every time. Is there a reason to prefer one type over the other? My last question is whether there is a way to develop content phase handler modules that are not called on every request, but only when they are activated in the nginx.conf file? Thanks in advance for helping me to understand this! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261433,261433#msg-261433 From reallfqq-nginx at yahoo.fr Fri Sep 4 18:00:35 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 4 Sep 2015 20:00:35 +0200 Subject: Nginx as a reverse caching / load balancing solution - verification of setup needed. In-Reply-To: <55E98832.4020006@gmail.com> References: <55E96CC0.4080602@gmail.com> <55E98832.4020006@gmail.com> Message-ID: Igor recommends using prefix locations as much as possible. If you were to use regex locations, you might want to embed them into prefix ones so order only matters at the leaves of the tree, which helps avoid conflicts and makes your configuration as scalable as possible. Here is a presentation by Igor himself about the latter: https://youtu.be/YWRYbLKsS0I --- *B. R.* On Fri, Sep 4, 2015 at 2:01 PM, Brent Clark wrote: > Thank you ever so much. > > Regards > Brent > > > On 04/09/2015 13:29, Sebastian Krebs wrote: > > > > 2015-09-04 12:04 GMT+02:00 Brent Clark : > >> Good day Guys. >> >> I would like to ask if someone could please verify my configs : >> http://pastebin.com/8Xk63RYD >> http://pastebin.com/EfNSpvMV >> >> I have a Nginx server sitting in front of two Apache servers. >> I'm using Nginx as a reverse caching / load balancing solution.
>> But what I'm trying achieve is to cache only for images, css and Js. >> >> All in all it appears it all works. >> >> The other question I would like to ask is, does the order of location >> matter? >> > > Depends. For regular expression based location is does, for prefix based > locations it doesn't. See > > http://nginx.org/en/docs/http/ngx_http_core_module.html#location > > A location can either be defined by a prefix string, or by a regular >> expression. Regular expressions are specified with the preceding ?~*? >> modifier (for case-insensitive matching), or the ?~? modifier (for >> case-sensitive matching). *To find location matching a given request, >> nginx first checks locations defined using the prefix strings (prefix >> locations). Among them, the location with the longest matching prefix is >> selected and remembered. Then regular expressions are checked, in the order >> of their appearance in the configuration file. The search of regular >> expressions terminates on the first match, and the corresponding >> configuration is used. If no match with a regular expression is found then >> the configuration of the prefix location remembered earlier is used.* >> > > >> >> If anyone can help, it would be appreciated. >> >> Thanks >> Brent >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > github.com/KingCrunch > > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Fri Sep 4 18:47:12 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 4 Sep 2015 21:47:12 +0300 Subject: Handler modules : Content Handler vs Content Phase Handler In-Reply-To: References: Message-ID: <20150904184711.GH72232@mdounin.ru> Hello! On Fri, Sep 04, 2015 at 12:56:24PM -0400, goldfluss wrote: > Hello ! > I'm developping some handler modules for nginx and I'm wondering why they > are two kinds of handlers : > - Content Phase Handler > - Content Handler > > I read this interesting blog : http://www.nginxguts.com/2011/01/phases/ > As far as I understood, the content handler could only be called once in a > location configuration and the content phase handler is called everytime. > Is there a reason to prefer one type instead of another? > > My last question is if there is a solution to develop content phase handler > modules without calling them each time, but only if they are activated in > the nginx.conf file? Content handlers are unconditionally called for a location, and they override any default content phase handlers. Hence content handlers are used by such modules as proxy, fastcgi, memcached, empty_gif and so on - when all (or almost all) requests in a location are expected to be handled by a particular module. In contrast, content phase handlers are called in order, much like any other phase handlers. There is no way to avoid calling a content phase handler in a particular configuration. Instead, a handler should check the configuration itself, and return NGX_DECLINED if it's not configured to do anything. That is, you should use content handlers for modules like proxy, when you want requests to be handled in a particular way in a given location. And you can use content phase handlers (which are more expensive compared to content handlers), when you want to implement something more generic and selective, i.e., to only handle certain requests, like index or autoindex modules. 
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Sat Sep 5 14:27:51 2015 From: nginx-forum at nginx.us (goldfluss) Date: Sat, 05 Sep 2015 10:27:51 -0400 Subject: Handler modules : Content Handler vs Content Phase Handler In-Reply-To: <20150904184711.GH72232@mdounin.ru> References: <20150904184711.GH72232@mdounin.ru> Message-ID: <3e00e871f30b3bb3a531747e3328aea0.NginxMailingListEnglish@forum.nginx.org> Thanks for your detailed reply! Have a nice weekend. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261433,261449#msg-261449 From lists at ruby-forum.com Sat Sep 5 16:19:48 2015 From: lists at ruby-forum.com (Sanjeev Kumar) Date: Sat, 05 Sep 2015 18:19:48 +0200 Subject: Css file 403 error in nginx Message-ID: <8c052d086e7c05a6e1c3cc833373065c@ruby-forum.com> Hello, I enabled Python CGI on my nginx server. I created a simple Python page on localhost and it shows in the browser, but when I downloaded an HTML template and converted the HTML file to a .py file, the CSS stopped loading: a 403 error pops up for the CSS file. Please reply as soon as possible, it is very urgent. -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Sat Sep 5 16:37:07 2015 From: lists at ruby-forum.com (Sanjeev Kumar) Date: Sat, 05 Sep 2015 18:37:07 +0200 Subject: Execute python files with Nginx In-Reply-To: References: Message-ID: <2d8beed4db5a33b21d29861705fe97ef@ruby-forum.com> Hello Nitin, please check this configuration. It should work for you.
server { listen localhost:8080; listen [::]:8060 ipv6only=on; root /var/www; index index.html index.htm index.py; location /html/ { # Disable gzip (it makes scripts feel slower since they have to complete # before getting gzipped) gzip off; # Optionally override the root for this location (commented out: the # server-level root /var/www is used) #root /var/www; # Fastcgi socket fastcgi_pass unix:/var/run/fcgiwrap.socket; # Fastcgi parameters, include the standard ones include /etc/nginx/fastcgi_params; # Adjust non standard parameters (SCRIPT_FILENAME) fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Sun Sep 6 17:16:10 2015 From: lists at ruby-forum.com (=?UTF-8?B?UXXDom4=?= =?UTF-8?B?IFRo4bup?=) Date: Sun, 06 Sep 2015 19:16:10 +0200 Subject: htaccess to nginx conversion? In-Reply-To: <2bc4c568e0c7b6f20379d3d47a964248.NginxMailingListEnglish@forum.nginx.org> References: <2bc4c568e0c7b6f20379d3d47a964248.NginxMailingListEnglish@forum.nginx.org> Message-ID: You can try the L2MP Stack (http://l2mp.ml). It supports htaccess, and Litespeed is 6x faster than Apache. -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Mon Sep 7 07:19:07 2015 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 07 Sep 2015 03:19:07 -0400 Subject: android apk in mime.types Message-ID: <962f6d9ff0fd4608ac94921f6661fffc.NginxMailingListEnglish@forum.nginx.org> Firefox/IE sometimes gets an android apk as text, this forces octet, anyone see any issues?
conf/mime.types line 64: application/octet-stream iso img; - application/octet-stream msi msp msm; + application/octet-stream apk msi msp msm; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261464,261464#msg-261464 From e1c1bac6253dc54a1e89ddc046585792 at posteo.net Mon Sep 7 07:25:01 2015 From: e1c1bac6253dc54a1e89ddc046585792 at posteo.net (Philipp) Date: Mon, 07 Sep 2015 09:25:01 +0200 Subject: android apk in mime.types In-Reply-To: <962f6d9ff0fd4608ac94921f6661fffc.NginxMailingListEnglish@forum.nginx.org> References: <962f6d9ff0fd4608ac94921f6661fffc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6831eb82b99d113083ae7cd6815243dc@posteo.de> On 07.09.2015 09:19, itpp2012 wrote: > Firefox/IE sometimes gets an android apk as text, this forces octet, > anyone > see any issues? > > conf/mime.types > line 64: > application/octet-stream iso img; > - application/octet-stream msi msp msm; > + application/octet-stream apk msi msp msm; If introducing, why not the official one? application/vnd.android.package-archive apk; Pleases mobile browsers/downloaders a bit more? From reallfqq-nginx at yahoo.fr Mon Sep 7 07:35:02 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 7 Sep 2015 09:35:02 +0200 Subject: htaccess to nginx conversion? In-Reply-To: References: <2bc4c568e0c7b6f20379d3d47a964248.NginxMailingListEnglish@forum.nginx.org> Message-ID: Isn't coming on a product ML and dropping a 1-liner with doubtful assumptions about another product trolling? --- *B. R.* On Sun, Sep 6, 2015 at 7:16 PM, Quân Thứ wrote: > You can try L2MP Stack (http://l2mp.ml) It support htaccess and > Litespeed 6x Faster than Apache > > -- > Posted via http://www.ruby-forum.com/. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sb at nginx.com Mon Sep 7 14:09:43 2015 From: sb at nginx.com (Sergey Budnevitch) Date: Mon, 7 Sep 2015 17:09:43 +0300 Subject: authentication on trac.nginx.org Message-ID: <31D0AB99-06BF-41EE-9062-D437E7B24B98@nginx.com> Hello. As you know we used openid to authenticate users on trac.nginx.org. Unfortunately many openid providers vanished or ceased to support openid. Last week I added oauth-based authentication instead of openid one, with four auth providers: google, yandex, github and stack exchange. Old google accounts with gmail addresses were converted to the new format, but other were kept intact. If you are an author of the ticket, comment or was subscribed to the ticket update, please write me off list, I?ll link old and new account. From nginx-forum at nginx.us Mon Sep 7 14:17:22 2015 From: nginx-forum at nginx.us (173279834462) Date: Mon, 07 Sep 2015 10:17:22 -0400 Subject: OCSP stapling: automatic updates Message-ID: Hello, nginx is not updating the ocsp response cache: This Update: Sep 5 08:36:32 2015 GMT Next Update: Sep 7 08:36:32 2015 GMT It is 16:09, so the cache is 8h behind. How would you diagnose and solve this problem? A related question is the duration of the cache. The local server uses 2 days, as shown above. How would you change this duration to, say, 8 days? This is an example of an 8 days cache: >echo QUIT | openssl s_client -CAfile /etc/ssl/ca-bundle.pem -connect ssllabs.com:443 -servername ssllabs.com -tlsextdebug -status 2>&1 | grep -A 17 'OCSP response:' | grep -B 17 'Next Update' OCSP response: ====================================== OCSP Response Data: OCSP Response Status: successful (0x0) Response Type: Basic OCSP Response Version: 1 (0x0) Responder Id: C = US, O = "Entrust, Inc.", OU = See www.entrust.net/legal-terms, OU = "(c) 2012 Entrust, Inc. 
- for authorized use only", CN = Entrust Certification Authority - L1K, CN = OCSP1 Produced At: Sep 7 02:16:10 2015 GMT Responses: Certificate ID: Hash Algorithm: sha1 Issuer Name Hash: CC6D221CF6B4552C2F87915F5AFEF0E1EECE83CC Issuer Key Hash: 82A27074DDBC533FCF7BD4F7CD7FA760C60A4CBF Serial Number: 50D359F0 Cert Status: good This Update: Sep 6 06:29:30 2015 GMT Next Update: Sep 14 02:16:10 2015 GMT <--------------------- 8 days Thank you, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261473,261473#msg-261473 From nginx-forum at nginx.us Mon Sep 7 15:20:33 2015 From: nginx-forum at nginx.us (strtwtsn) Date: Mon, 07 Sep 2015 11:20:33 -0400 Subject: Content-length missing from Nginx headers Message-ID: <96947b7b6367c726610515bbdd243e7d.NginxMailingListEnglish@forum.nginx.org> Hi When browsing one of our websites the content-length field header is not shown, even with gzip turned off. This causes issues with chunked_transfer_encoding and kaspersky av. How can we get the content-length to show? This is a ruby on rails app using passenger. Thanks Stuart Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261476,261476#msg-261476 From kristofer at cybernetik.net Mon Sep 7 16:23:32 2015 From: kristofer at cybernetik.net (Kristofer Pettijohn) Date: Mon, 7 Sep 2015 11:23:32 -0500 (CDT) Subject: root and alias with php5-fpm Message-ID: <1461931871.9639892.1441643012556.JavaMail.zimbra@cybernetik.net> I am having a difficult time finding a solution for this. I have PHP applications that may be referenced to from several websites, either from root locations or sub locations. For example: server { server_name domain.com; location / { ... unrelated stuff ... } location /grant { root /apps/grant/; index index.php index.html; try_files $uri $uri/ /grant/index.php?$args; location ~ \.php$ { include fastcgi_params; fastcgi_pass unix:/local/sockets/grant.sock; } } } and then it may be somewhere else like this: server { server_name another-domain.com; location / { ... 
unrelated stuff ... } location /employ { root /apps/grant/; index index.php index.html; try_files $uri $uri/ /employ/index.php?$args; location ~ \.php$ { include fastcgi_params; fastcgi_pass unix:/local/sockets/grant.sock; } } } So the only two things different are the "location" and "try_files". However, I know that "root" will append the URI to the path, so it will try /apps/grant/grant/ and /apps/grant/employ/ when looking for files. When I use "alias", it seems that try_files tries looking for index.php in the context of "location /" on each. The only way I can seem to resolve this is by creating a symbolic link at /apps/grant/grant/ and /apps/grant/employ/ pointing back to /apps/rant/, which I do not want. I just want each location in each server to see /apps/grant/ as the root, and for try_files to process the index.php file in the base of that location last. I have Lua compiled in, so I'm not sure if there are any tricks I can do with that to get this to work. I'm not sure what I'm missing. Can someone provide some guidance? -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristofer at cybernetik.net Mon Sep 7 16:48:40 2015 From: kristofer at cybernetik.net (Kristofer Pettijohn) Date: Mon, 7 Sep 2015 11:48:40 -0500 (CDT) Subject: root and alias with php5-fpm In-Reply-To: <1461931871.9639892.1441643012556.JavaMail.zimbra@cybernetik.net> References: <1461931871.9639892.1441643012556.JavaMail.zimbra@cybernetik.net> Message-ID: <06E852A6-AF78-4F79-8B23-C752571DA779@cybernetik.net> Is this the issue I might be experiencing when I try to use the alias directive? https://trac.nginx.org/nginx/ticket/97 > On Sep 7, 2015, at 11:23 AM, Kristofer Pettijohn wrote: > > I am having a difficult time finding a solution for this. > > I have PHP applications that may be referenced to from several websites, either from root locations or sub locations. > > For example: > > server { > server_name domain.com; > location / { > ... 
unrelated stuff ... > } > location /grant { > root /apps/grant/; > index index.php index.html; > try_files $uri $uri/ /grant/index.php?$args; > > location ~ \.php$ { > include fastcgi_params; > fastcgi_pass unix:/local/sockets/grant.sock; > } > } > } > > and then it may be somewhere else like this: > > server { > server_name another-domain.com; > location / { > ... unrelated stuff ... > } > location /employ { > root /apps/grant/; > index index.php index.html; > try_files $uri $uri/ /employ/index.php?$args; > > location ~ \.php$ { > include fastcgi_params; > fastcgi_pass unix:/local/sockets/grant.sock; > } > } > } > > So the only two things different are the "location" and "try_files". However, I know that "root" will append the URI to the path, so it will try /apps/grant/grant/ and /apps/grant/employ/ when looking for files. When I use "alias", it seems that try_files tries looking for index.php in the context of "location /" on each. > > The only way I can seem to resolve this is by creating a symbolic link at /apps/grant/grant/ and /apps/grant/employ/ pointing back to /apps/rant/, which I do not want. I just want each location in each server to see /apps/grant/ as the root, and for try_files to process the index.php file in the base of that location last. > > I have Lua compiled in, so I'm not sure if there are any tricks I can do with that to get this to work. I'm not sure what I'm missing. > > Can someone provide some guidance? > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Sep 7 17:28:20 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Sep 2015 20:28:20 +0300 Subject: OCSP stapling: automatic updates In-Reply-To: References: Message-ID: <20150907172820.GE52312@mdounin.ru> Hello! 
On Mon, Sep 07, 2015 at 10:17:22AM -0400, 173279834462 wrote: > Hello, > > nginx is not updating the ocsp response cache: > > This Update: Sep 5 08:36:32 2015 GMT > Next Update: Sep 7 08:36:32 2015 GMT > > It is 16:09, so the cache is 8h behind. > > How would you diagnose and solve this problem? OCSP responses are re-requested by nginx after 1 hour, older responses may be returned only if there are no requests for OCSP stapling for a long time. If you consistently see an expired response - this likely means that it's what OCSP responder of your CA returns. Also, as of nginx 1.9.2, there are checks to avoid returning expired OCSP responses as this confuses some browsers. You may want to upgrade if you see expired responses returned. > A related question is the duration of the cache. > The local server uses 2 days, as shown above. > How would you change this duration to, say, 8 days? "This Update" and "Next Update" aren't something nginx controls, they are returned by OCSP responder of your CA. -- Maxim Dounin http://nginx.org/ From francis at daoine.org Mon Sep 7 21:23:53 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 7 Sep 2015 22:23:53 +0100 Subject: root and alias with php5-fpm In-Reply-To: <1461931871.9639892.1441643012556.JavaMail.zimbra@cybernetik.net> References: <1461931871.9639892.1441643012556.JavaMail.zimbra@cybernetik.net> Message-ID: <20150907212353.GQ3177@daoine.org> On Mon, Sep 07, 2015 at 11:23:32AM -0500, Kristofer Pettijohn wrote: Hi there, > I just want each location in each > server to see /apps/grant/ as the root, and for try_files to process > the index.php file in the base of that location last. If I've understood you correctly, what you describe is not what your current configuration does. Assume that files called "yes" do exist on your filesystem, and files called "no" do not exist. 
What response do you want for requests for each of: /grant/one/yes.txt /grant/one/no.txt /grant/one/yes.php /grant/one/no.php And do you get that response in each case? If not, does the difference matter? > Can someone provide some guidance? I suggest using a named location for handling the "not there" fallback -- either as the final argument to try_files, or perhaps as "error_page 404 = @fallback". Then location @fallback { fastcgi_param SCRIPT_FILENAME /apps/grant/index.php; include fastcgi_params; fastcgi_pass unix:/local/sockets/grant.sock; } Test which order of the first two directives works in your FastCGI server. f -- Francis Daly francis at daoine.org From kristofer at cybernetik.net Mon Sep 7 23:48:57 2015 From: kristofer at cybernetik.net (Kristofer Pettijohn) Date: Mon, 7 Sep 2015 18:48:57 -0500 Subject: root and alias with php5-fpm In-Reply-To: <20150907212353.GQ3177@daoine.org> References: <1461931871.9639892.1441643012556.JavaMail.zimbra@cybernetik.net> <20150907212353.GQ3177@daoine.org> Message-ID: Thanks for the response. > I suggest using a named location for handling the "not there" fallback > -- either as the final argument to try_files, or perhaps as "error_page > 404 = @fallback". > > Then > > location @fallback { > fastcgi_param SCRIPT_FILENAME /apps/grant/index.php; > include fastcgi_params; > fastcgi_pass unix:/local/sockets/grant.sock; > } > That is what I am attempting to do with my try_files directive: try_files $uri $uri/ /grant/index.php?$args; If the file does not exist, I want it to try "index.php" in the root of the grant folder, so in the try_files directive I have "/grant/index.php?$args" so that it tries using the proper location. At least that's how I understand try_files... From nginx-forum at nginx.us Tue Sep 8 00:16:36 2015 From: nginx-forum at nginx.us (maplesyrupandrew) Date: Mon, 07 Sep 2015 20:16:36 -0400 Subject: How to send all these requests to the same file when I have an Angular state based router?
In-Reply-To: <20150902190435.GF3177@daoine.org> References: <20150902190435.GF3177@daoine.org> Message-ID: <22ec0745ac6517f30808c464859bd2bd.NginxMailingListEnglish@forum.nginx.org> Thank you Francis - the "try" line was key :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261396,261495#msg-261495 From francis at daoine.org Tue Sep 8 07:44:49 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 8 Sep 2015 08:44:49 +0100 Subject: root and alias with php5-fpm In-Reply-To: References: <1461931871.9639892.1441643012556.JavaMail.zimbra@cybernetik.net> <20150907212353.GQ3177@daoine.org> Message-ID: <20150908074449.GR3177@daoine.org> On Mon, Sep 07, 2015 at 06:48:57PM -0500, Kristofer Pettijohn wrote: Hi there, > > I suggest using a named location for handling the "not there" fallback > > -- either as the final argument to try_files, or perhaps as "error_page > > 404 = @fallback". > > > > Then > > > > location @fallback { > > fastcgi_param SCRIPT_FILENAME /apps/grant/index.php; > > include fastcgi_params; > > fastcgi_pass unix:/local/sockets/grant.sock; > > } > > > > That is what I am attempting to do with my try_files directive: > > try_files $uri $uri/ /grant/index.php?$args; > > If the file does not exist, I want it to try ?index.php? in the root of the grant folder, so in the try_files directive I have ?/grant/index.php?$args? so that it tries using the proper location. At least that?s how I understand try_files? No - the last argument to try_files is different to the other arguments. If this last argument is used, you get an internal redirect to the url /grant/index.php?$args, and the appropriate location{} to handle that request is chosen from scratch. Which shows exactly what you see, which is not what you want. Does "try_files $uri $uri/ @fallback;" do what you want? (Rename @fallback to @grant or @grantindex if you like.) 
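Putting the suggestion above together with the config earlier in the thread, a minimal sketch might look like this (the /apps/grant/ path and the socket are taken from the thread; the exact location layout is an assumption):

```nginx
location /grant/ {
    alias /apps/grant/;
    # serve an existing file or directory; otherwise jump to the named location
    try_files $uri $uri/ @grantindex;
}

location @grantindex {
    fastcgi_param SCRIPT_FILENAME /apps/grant/index.php;
    include fastcgi_params;
    fastcgi_pass unix:/local/sockets/grant.sock;
}
```

Unlike a final "/grant/index.php?$args" argument, the named location does not trigger a fresh location match, so the fallback request stays on the intended fastcgi handler.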
f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Sep 8 12:00:45 2015 From: nginx-forum at nginx.us (StSch) Date: Tue, 08 Sep 2015 08:00:45 -0400 Subject: Redirect from HTTP to HTTPS does not work In-Reply-To: <20150811073555.GT23844@daoine.org> References: <20150811073555.GT23844@daoine.org> Message-ID: <74b05861d507a04e1f0c2a0cada7d2a0.NginxMailingListEnglish@forum.nginx.org> > Use "curl -i" to make the http request. See the response. > Use (for example) $host instead of $server_name. Thank you very much for your immediate response and sorry for my late reply. "curl -i" was very useful and using "$host" instead of "$server_name" indeed solved my problem. Thank you very much for your help. Greetings from South-Germany, Steffen Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260913,261506#msg-261506 From kristofer at cybernetik.net Tue Sep 8 14:23:08 2015 From: kristofer at cybernetik.net (Kristofer Pettijohn) Date: Tue, 8 Sep 2015 09:23:08 -0500 Subject: root and alias with php5-fpm In-Reply-To: <20150908074449.GR3177@daoine.org> References: <1461931871.9639892.1441643012556.JavaMail.zimbra@cybernetik.net> <20150907212353.GQ3177@daoine.org> <20150908074449.GR3177@daoine.org> Message-ID: <55EEEF4C.80202@cybernetik.net> > No - the last argument to try_files is different to the other arguments. > > If this last argument is used, you get an internal redirect to the url > /grant/index.php?$args, and the appropriate location{} to handle that > request is chosen from scratch. > > Which shows exactly what you see, which is not what you want. When I do that, and use "root" instead of "alias" inside of "location /grant", it works. However, inside of the path I need to create a symlink "ln -s . grant/" with how the root directive looks for any static files. This is the part I am trying to avoid. Which tells me that I should be using "alias" instead of "root" But then if I use "alias", it breaks completely, which I do not understand. 
If I use @fallback, I see the same exact behavior with root vs. alias, as I see when I use "/grant/index.php?$args" in try_files. From reallfqq-nginx at yahoo.fr Tue Sep 8 15:19:55 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 8 Sep 2015 17:19:55 +0200 Subject: Compiling nginx Message-ID: Hello, I noticed the documentation page about compiling nginx is not up-to-date, making it impossible to have a clear view about which modules are included by default (i.e. with no --with-* or --without-* options at configure time). One such example is the limit_req module which seems to be included by default, although it is not clear: not mentioned in the module's docs page nor on the compiling instructions one. I only saw one mention of a specific option to remove that module on the Wiki's modules list, which would mean the module is integrated by default. Where could you find an exhaustive official list of such configure-time options? Thanks, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Sep 8 15:26:13 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 08 Sep 2015 18:26:13 +0300 Subject: Compiling nginx In-Reply-To: References: Message-ID: <2846710.xPSARdKCS6@vbart-workstation> On Tuesday 08 September 2015 17:19:55 B.R. wrote: > Hello, > > I noticed the documentation page about compiling nginx > is not up-to-date, making it > impossible to have a clear view about which modules are included by default > (i.e. with no --with-* or --without-* options at configure time). > > One such example is the limit_req module > which seems > to be included by default, although it is not clear: not mentioned in the > module's docs page nor on the compiling instructions one. > I only saw one mention of a specific option to remove that module on the > Wiki's modules list, which would mean the > module is integrated by default.
> > Where could you find an exhaustive official list of such configure-time > options? ./configure --help wbr, Valentin V. Bartenev From nginx-forum at nginx.us Tue Sep 8 16:14:19 2015 From: nginx-forum at nginx.us (justink101) Date: Tue, 08 Sep 2015 12:14:19 -0400 Subject: NginxPlus error: zero size buf in output Message-ID: <72cedaa00078f2da1649dee09f6214e2.NginxMailingListEnglish@forum.nginx.org> Hello, saw this logged in the error log in our NginxPlus (nginx/1.7.11 (nginx-plus-extras-r6-p1)) load balancer. Any ideas? 2015/09/08 14:31:02 [alert] 2399#0: *452322 zero size buf in output t:0 r:0 f:1 0000000000000000 0000000000000000-0000000000000000 0000000002F51428 0-0 while sending request to upstream Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261516,261516#msg-261516 From mdounin at mdounin.ru Tue Sep 8 18:13:39 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 Sep 2015 21:13:39 +0300 Subject: NginxPlus error: zero size buf in output In-Reply-To: <72cedaa00078f2da1649dee09f6214e2.NginxMailingListEnglish@forum.nginx.org> References: <72cedaa00078f2da1649dee09f6214e2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150908181339.GK52312@mdounin.ru> Hello! On Tue, Sep 08, 2015 at 12:14:19PM -0400, justink101 wrote: > Hello, saw this logged in the error log in our NginxPlus (nginx/1.7.11 > (nginx-plus-extras-r6-p1)) load balancer. Any ideas? > > 2015/09/08 14:31:02 [alert] 2399#0: *452322 zero size buf in output t:0 r:0 > f:1 0000000000000000 0000000000000000-0000000000000000 0000000002F51428 0-0 > while sending request to upstream This message suggests there is a bug somewhere. In particular, if you use 3rd party modules ("extras" suggests this may be the case), it may be a bug in a 3rd party module. The message should contain additional information to help identify the request that triggered the bug. Also, as you are using nginx-plus, you may want to contact technical support.
-- Maxim Dounin http://nginx.org/ From francis at daoine.org Wed Sep 9 07:45:26 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 9 Sep 2015 08:45:26 +0100 Subject: root and alias with php5-fpm In-Reply-To: <55EEEF4C.80202@cybernetik.net> References: <1461931871.9639892.1441643012556.JavaMail.zimbra@cybernetik.net> <20150907212353.GQ3177@daoine.org> <20150908074449.GR3177@daoine.org> <55EEEF4C.80202@cybernetik.net> Message-ID: <20150909074526.GS3177@daoine.org> On Tue, Sep 08, 2015 at 09:23:08AM -0500, Kristofer Pettijohn wrote: Hi there, > When I do that, and use "root" instead of "alias" inside of > "location /grant", it works. However, inside of the path I need to > create a symlink "ln -s . grant/" with how the root directive looks > for any static files. This is the part I am trying to avoid. Which > tells me that I should be using "alias" instead of "root" I confess that I have become confused about what behaviour you want, and what behaviour you see, and what configuration you are using when you see the behaviour that you see. When you make a request for /grant/one/yes.txt, what file on your filesystem do you want nginx to serve? When you make a request for /grant/one/yes.php, what file on your filesystem do you want nginx to tell the fastcgi server to process? When you make a request for /grant/one/no.txt, what file on your filesystem do you want nginx to tell the fastcgi server to process (because the file does not exist)? And which of those do not do what you want, using your current configuration? > But then if I use "alias", it breaks completely, which I do not understand. > > If I use @fallback, I see the same exact behavior with root vs. > alias, as I see when I use "/grant/index.php?$args" in try_files. This confuses me too. There is no "root" or "alias" in the suggested @fallback location. So I suspect that I am misunderstanding something. 
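For readers following along, the root/alias distinction this thread hinges on can be sketched as follows (paths borrowed from the thread; the mappings follow the documented semantics of each directive):

```nginx
# "root" appends the FULL request URI to the given path:
location /grant/ {
    root /apps/grant;      # /grant/one/yes.txt -> /apps/grant/grant/one/yes.txt
}

# "alias" REPLACES the matched location prefix with the given path:
location /grant/ {
    alias /apps/grant/;    # /grant/one/yes.txt -> /apps/grant/one/yes.txt
}
```

The first mapping is what the "ln -s . grant/" symlink mentioned earlier works around; the second needs no symlink.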
f -- Francis Daly francis at daoine.org From lists at ruby-forum.com Wed Sep 9 10:38:18 2015 From: lists at ruby-forum.com (Nguyen Nhat Khang) Date: Wed, 09 Sep 2015 12:38:18 +0200 Subject: proxy_cache_use_stale updating Message-ID: <0149af5a84cf661f5b7e08c608f95da8@ruby-forum.com> I have one storage server using nginx and one cache file server using nginx. The following are my configuration files: 1. storage_server.conf(ip address 192.168.1.10): server { listen 80; listen [::]:80 ipv6only=on; server_name _; location / { return 403; } location ~ ^/cache/ { root /var/my_file_storage; directio 1m; directio_alignment 8k; output_buffers 1 1m; try_files $request_uri =404; } } 2. cache_server.conf(ip address 192.168.1.2): proxy_cache_path /nginx-cache/cache-level1/cache levels=1 keys_zone=CacheLVL1:10m inactive=12h max_size=5G; server { listen 80; listen [::]:80 ipv6only=on; server_name _; location / { return 403; } location ~ ^/cache/ { proxy_pass http://192.168.1.10:80$request_uri; # request_uri is path to file on Storage server. proxy_cache CacheLVL1; proxy_cache_key $request_uri; proxy_cache_valid 200 30d; proxy_temp_path /nginx-cache/cache-level1/temp; proxy_cache_use_stale updating; proxy_max_temp_file_size 0; add_header X-Proxy-Cache $upstream_cache_status; proxy_set_header Range $http_range; proxy_set_header If-Range $http_if_range; } } /etc/nginx/nginx.conf ...{ ... log_format upstreamlog '[$time_local] $remote_addr to $upstream_addr $upstream_cache_status'; access_log /var/log/nginx/cache.log upstreamlog; ... } I've read about proxy_cache_use_stale updating, but I do not understand how it works: 1. http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_use_stale: the updating parameter permits using a stale cached response if it is currently being updated. This allows minimizing the number of accesses to proxied servers when updating cached data. 2.
https://www.ruby-forum.com/topic/212402#new: "If I understand this right if I use proxy_cache_use_stale updating and If I have 1000 users trying to access expired cached information. It will only send one request to backend server to update the cache ?" I request multiple times and received multiple files in the /nginx-cache/cache-level1/temp folder 000000xx format. I think I have the wrong configuration in my configuration file because it works unlike what I've read. I never saw the $upstream_cache_status UPDATING in cache.log. Is it related to "X-Accel-Expires", "Expires", "Cache-Control" or not? Can someone explain proxy_cache_use_stale updating? Thanks for your help! -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Wed Sep 9 11:09:05 2015 From: nginx-forum at nginx.us (kay) Date: Wed, 09 Sep 2015 07:09:05 -0400 Subject: Websockets proxy "Broken pipe" Message-ID: I have a problem with nginx and websockets proxy. Here is the message I receive: [error] 20999#0: *1997296 send() failed (32: Broken pipe) while proxying upgraded connection, client: 10.0.25.47, server: example.com, request: "GET /xmpp/ HTTP/1.1", upstream: "http://192.168.122.8:5280/xmpp/", host: "example.com" Here is my config: location /xmpp { proxy_pass http://192.168.122.8:5280; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } When ejabberd has a big history log, nginx returns the error message.
nginx -V nginx version: nginx/1.4.6 (Ubuntu) built by gcc 4.8.2 (Ubuntu 4.8.2-19ubuntu1) TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261527,261527#msg-261527 From mdounin at mdounin.ru Wed Sep 9 16:31:23 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Sep 2015 19:31:23 +0300 Subject: Websockets proxy "Broken pipe" In-Reply-To: References: Message-ID: <20150909163123.GL52312@mdounin.ru> Hello! On Wed, Sep 09, 2015 at 07:09:05AM -0400, kay wrote: > I have a problem with nginx and websockets proxy. 
> > Here is the message I receive: > > [error] 20999#0: *1997296 send() failed (32: Broken pipe) while proxying > upgraded connection, client: 10.0.25.47, server: example.com, request: "GET > /xmpp/ HTTP/1.1", upstream: "http://192.168.122.8:5280/xmpp/", host: > "example.com" > > Here is my config: > > location /xmpp { > proxy_pass http://192.168.122.8:5280; > proxy_http_version 1.1; > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Connection "upgrade"; > } > > When ejabberd has a big history log, nginx returns the error message. The message suggests that either client or upstream server unexpectedly closed connection while nginx was proxying their data to each other after a connection upgrade. This may happen due to incorrect client (or upstream server) behaviour, but is unlikely to indicate any problem in nginx itself. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Sep 9 17:30:06 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Sep 2015 20:30:06 +0300 Subject: proxy_cache_use_stale updating In-Reply-To: <0149af5a84cf661f5b7e08c608f95da8@ruby-forum.com> References: <0149af5a84cf661f5b7e08c608f95da8@ruby-forum.com> Message-ID: <20150909173006.GO52312@mdounin.ru> Hello! On Wed, Sep 09, 2015 at 12:38:18PM +0200, Nguyen Nhat Khang wrote: > I've one storage server using nginx, one cache file server using nginx. > The following are my configuration files: [...] > proxy_cache_key $request_uri; > proxy_cache_valid 200 30d; > proxy_temp_path /nginx-cache/cache-level1/temp; > proxy_cache_use_stale updating; [...] > I request multiple times and received multiple files in the > /nginx-cache/cache-level1/temp folder 000000xx format. I think I have > the wrong configuration in my configuration file because it works unlike > what I've read. I never saw the $upstream_cache_status UPDATING in > cache.log. The "proxy_cache_use_stale updating" only works when _updating_ cache items, i.e., when a cache item becomes stale.
As your proxy_cache_valid is set to 30 days, you should be able to see it working only after 30 days (if at all, as resources may be removed from cache due to inactivity). If you want to reduce load when initially loading resources, consider proxy_cache_lock, see http://nginx.org/r/proxy_cache_lock. -- Maxim Dounin http://nginx.org/ From ahutchings at nginx.com Wed Sep 9 23:06:17 2015 From: ahutchings at nginx.com (Andrew Hutchings) Date: Thu, 10 Sep 2015 00:06:17 +0100 Subject: New NGINX wiki Message-ID: <55F0BB69.2040100@nginx.com> Hi all, The developer relations team along with several members of the community have been working towards creating a new public NGINX wiki. This wiki's source files are stored on GitHub in an easy to modify format called reStructuredText (rst). The source for this has just been made public at: https://github.com/nginxinc/nginx-wiki The end result will eventually be on nginx.com with the current wiki redirecting relevant links to the new one. We have migrated pretty much all of the most accessed content on the site but there may be some recent edits that are not there yet. We have now opened this up so that we can ask anyone who is interested to play with this, contribute edits. Let us know what works and what doesn't, etc... We love all kinds of contributions from questions to bug reports to commits :) Wiki edits can happen straight on GitHub's website which will generate a pull request or via. the usual fork / pull request method as can be seen at: https://github.com/nginxinc/nginx-wiki/blob/master/source/contributing/github.rst This also gives you basic instructions on how to compile and test the wiki locally. Over the next few days I'll also add information to the README to make it easier to get started. Please feel free to come to me or anyone on the developer relations team if you have any questions. Many thanks to all of you for making the NGINX community awesome! 
Kind Regards -- Andrew Hutchings (LinuxJedi) Senior Developer Advocate, Nginx Inc. Discover best practices for building & delivering apps at scale. nginx.conf 2015: Sept. 22-24, San Francisco. http://nginx.com/nginxconf From sarah at nginx.com Wed Sep 9 23:56:28 2015 From: sarah at nginx.com (Sarah Novotny) Date: Wed, 9 Sep 2015 16:56:28 -0700 Subject: Join us this month at nginx.conf 2015 Message-ID: Hi All, We hope that you'll join us this month for the upcoming NGINX user conference, nginx.conf 2015, September 22-24 at Fort Mason in San Francisco. There are a lot of amazing talks from people like you who are building cool shi+ with NGINX. Our guest speakers at nginx.conf 2015 will help you learn how to: - Build a high-performance app architecture to support large numbers of concurrent users - Achieve zero downtime, even when you are moving apps to the cloud - Make continuous delivery faster and easier - Utilize HTTPS, web encryption, and more to protect and secure your sites and apps - Deploy and optimize containers in production - Gain deep insights into what's happening in your environment - Design, develop, and deploy scalable microservices architectures You can see the full list of speakers and topics here: http://bit.ly/1NE1qHD. Don't forget about the community member discount. Please use and share this discount code to get 25% off conference tickets: NG15ORG See you soon in San Francisco! Sarah -- Sarah Novotny Developer Advocacy, Nginx Inc. From ahutchings at nginx.com Thu Sep 10 08:56:47 2015 From: ahutchings at nginx.com (Andrew Hutchings) Date: Thu, 10 Sep 2015 09:56:47 +0100 Subject: Join us this month at nginx.conf 2015 In-Reply-To: References: Message-ID: <55F145CF.7040101@nginx.com> I'll also be stuffing my suitcase as much as I can with cool hardware to play with at the Google Cloud sponsored Hackday event on the Tuesday of the conference. This is aimed at the developers / devops / sysadmins out there who won't be doing our training classes.
It will be a great day to hack on any NGINX related things, chat / ask questions with our engineers and other like-minded people in the industry. Lunch will be provided, and of course I will have NGINX stickers for developer laptops :) It should be a fun, casual event and we look forward to seeing you there! Kind Regards Andrew On 10/09/15 00:56, Sarah Novotny wrote: > > Hi All, > > We hope that you'll join us this month for the upcoming NGINX user conference, nginx.conf 2015, September 22-24 at Fort Mason in San Francisco. > > There are a lot of amazing talks from people like you who are building cool shi+ with NGINX. > > Our guest speakers at nginx.conf 2015 will help you learn how to: > - Build a high-performance app architecture to support large numbers of concurrent users > - Achieve zero downtime, even when you are moving apps to the cloud > - Make continuous delivery faster and easier > - Utilize HTTPS, web encryption, and more to protect and secure your sites and apps > - Deploy and optimize containers in production > - Gain deep insights into what's happening in your environment > - Design, develop, and deploy scalable microservices architectures > > You can see the full list of speakers and topics here: http://bit.ly/1NE1qHD. > > Don't forget about the community member discount. Please use and share this discount code to get 25% off conference tickets: NG15ORG > > See you soon in San Francisco! > > Sarah > -- Andrew Hutchings (LinuxJedi) Senior Developer Advocate, Nginx Inc. Discover best practices for building & delivering apps at scale. nginx.conf 2015: Sept. 22-24, San Francisco. http://nginx.com/nginxconf From 1989.gaurav at googlemail.com Thu Sep 10 13:21:18 2015 From: 1989.gaurav at googlemail.com (gaurav gupta) Date: Thu, 10 Sep 2015 18:51:18 +0530 Subject: Remove request from nginx error logs Message-ID: Hello Folks, How can we remove the request URL from being logged in nginx error logs?
For example it looks something like: 2015/09/01 15:26:03 [error] 30547#0: *208725 upstream prematurely closed connection while reading response header from upstream, client: 123.123.50.44, server: test.example.com, request: "GET /v1.3/status.json?...." Is it possible to drop the request from the log (if present) so it looks something like: 2015/09/01 15:26:03 [error] 30547#0: *208725 upstream prematurely closed connection while reading response header from upstream, client: 123.123.50.44, server: test.example.com I was able to configure access logs but couldn't find a way to customize error logs. If there is no way to drop just the request, is it possible to drop complete error log entries matching a particular format, similar to what https://github.com/cfsego/ngx_log_if does for access logs? I see that error_log is part of nginx core https://github.com/cfsego/ngx_log_if/blob/master/ngx_http_aclog_bypass_module.c, but would it be possible to extend it by creating a new nginx module? I am really sorry since I am new to nginx code/module and this might sound stupid. Any suggestion/direction to achieve this is really appreciated. -- Thanks & Regards, Gaurav Gupta 7676-999-350 "Quality is never an accident. It is always result of intelligent effort" - John Ruskin -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Sep 10 14:33:00 2015 From: nginx-forum at nginx.us (pcfreak30) Date: Thu, 10 Sep 2015 10:33:00 -0400 Subject: Pretty printer for the Nginx config? In-Reply-To: References: Message-ID: After frustration on this, I decided to make a simple web service to solve it. Please see http://www.nginxformatter.com :). Enjoy :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250211,261555#msg-261555 From nginx-forum at nginx.us Thu Sep 10 17:30:46 2015 From: nginx-forum at nginx.us (biazus) Date: Thu, 10 Sep 2015 13:30:46 -0400 Subject: cache file has too long header (bug) ?
Message-ID: <653329db67fc16d1ba52f8982b9db6d3.NginxMailingListEnglish@forum.nginx.org> Hey Guys, I've been using nginx 1.8.0 for a couple of months, and I noticed a critical message in the error log informing that "cache file has too long header". 2015/09/10 17:11:03 [crit] 27245#0: *10686 cache file "/data/smallfiles/http/6/d8/f5df8d6eda60819319688d1bc0cb2d86" has too long header However, as you can see in the example below, there is nothing abnormal with the file: cat /data/smallfiles/http/6/d8/f5df8d6eda60819319688d1bc0cb2d86 KEY: static.myhosthere.com.br/bundles/1234567890123456789012?v=saUVIhIUFNX8o0JpMT9rspK0l6klR4JQBnJXIV1MXkE1 HTTP/1.1 200 OK Cache-Control: public Content-Type: text/javascript; charset=utf-8 Content-Encoding: gzip Expires: Fri, 09 Sep 2016 17:22:25 GMT Last-Modified: Thu, 10 Sep 2015 17:22:25 GMT Vary: Accept-Encoding Server: Microsoft-IIS/7.5 X-AspNet-Version: 4.0.30319 X-Powered-By: ASP.NET Access-Control-Allow-Origin: * Access-Control-Allow-Methods: GET,OPTIONS Access-Control-Allow-Headers: Content-Type X-Frame-Options: SAMEORIGIN Date: Thu, 10 Sep 2015 17:22:24 GMT Content-Length: 1301 I thought it would be the header buffer size, but I'm using a very large value: client_body_buffer_size 128k; client_header_buffer_size 32k; large_client_header_buffers 4 64k; When this error occurs, the object is fetched from the origin again. Any idea? Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261567,261567#msg-261567 From mdounin at mdounin.ru Thu Sep 10 18:18:31 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 Sep 2015 21:18:31 +0300 Subject: cache file has too long header (bug) ? In-Reply-To: <653329db67fc16d1ba52f8982b9db6d3.NginxMailingListEnglish@forum.nginx.org> References: <653329db67fc16d1ba52f8982b9db6d3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150910181831.GW52312@mdounin.ru> Hello!
On Thu, Sep 10, 2015 at 01:30:46PM -0400, biazus wrote: > Hey Guys, > > I've been using nginx 1.8.0 for a couple of months, and I noticed a critical > message in the error log informing that "cache file has too long header". > > 2015/09/10 17:11:03 [crit] 27245#0: *10686 cache file > "/data/smallfiles/http/6/d8/f5df8d6eda60819319688d1bc0cb2d86" has too long > header > > However, as you can see in the example below, there is nothing abnormal > with the file: > > cat /data/smallfiles/http/6/d8/f5df8d6eda60819319688d1bc0cb2d86 [...] The message is logged when nginx detects the problem, ignores the cached file and starts loading another response from an upstream server. So, unfortunately, as long as the message appeared, it's probably too late to look into the file as it's likely already reloaded from a backend. The message itself is expected to appear if a response header stored in the cache file is too big for configured proxy_buffer_size. It may also appear due to a small race condition in nginx cache logic if two different responses are loaded into cache simultaneously, see here: http://hg.nginx.org/nginx/rev/6f97afc238de http://mailman.nginx.org/pipermail/nginx-devel/2011-September/001287.html The message may also indicate that the cache file was corrupted somehow. If you see the message on a regular basis, we may want to investigate further. If it's just a single case in a couple of months, it is probably due to the race condition in question and likely can be ignored safely. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Sep 10 20:52:54 2015 From: nginx-forum at nginx.us (biazus) Date: Thu, 10 Sep 2015 16:52:54 -0400 Subject: cache file has too long header (bug) ? In-Reply-To: <20150910181831.GW52312@mdounin.ru> References: <20150910181831.GW52312@mdounin.ru> Message-ID: <88ce45fd6972348525572116e0934ed9.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, Thank You for your answer.
It really makes sense; however, in the mentioned case, I saw a rise in the number of occurrences just after the migration from nginx 1.6 to nginx 1.8. This behaviour affects < 0.2 % of our requests, but it means hundreds of requests per hour. Also, I can see this implementation isn't new, so I believe another change may have caused this behaviour to start happening. https://github.com/nginx/nginx/commit/64a9f700929dbc8f0730be4f91cc3bbfde8fc3e6 Regarding your comment about the race condition, most of the time the object that caused the message appeared only once in the access log, so I don't think it was caused by concurrency. Best Regards, Biazus Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261567,261569#msg-261569 From nginx-forum at nginx.us Thu Sep 10 22:49:25 2015 From: nginx-forum at nginx.us (SimonHF) Date: Thu, 10 Sep 2015 18:49:25 -0400 Subject: time to read packets for HTTP query Message-ID: <15e018b8315f662dc998dc292c45e775.NginxMailingListEnglish@forum.nginx.org> I'm running a SAAS service running via NGINX and have been running tcpdump to look at the incoming packets for HTTP queries. Many of the HTTP queries are bigger than the MTU of 1,500 bytes and therefore arrive as 2, 3, or 4 packets. I noticed that for some customers there are significant delays between packets. The average size of these delays acts as a kind of fingerprint for each customer. The inter-packet delay varies from a few milliseconds to 100ms plus! Some customers have no delay. There are all shades of grey. When processing a SAAS query I log how long the processing time took etc. So it would be useful to log how long NGINX took to read the HTTP query packets too. Using tcpdump and a script to analyze the packet dump is not very handy. So I'm wondering if there is a mechanism in NGINX to report somehow the total time necessary to read all the packets of a particular HTTP query?
I was thinking that if available, I could add it to the HTTP query in the form of an HTTP header? If not, how easy would it be to implement such a mechanism? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261571,261571#msg-261571 From luky-37 at hotmail.com Fri Sep 11 09:20:39 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 11 Sep 2015 11:20:39 +0200 Subject: time to read packets for HTTP query In-Reply-To: <15e018b8315f662dc998dc292c45e775.NginxMailingListEnglish@forum.nginx.org> References: <15e018b8315f662dc998dc292c45e775.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, > I'm running a SAAS service running via NGINX and have been running tcpdump > to look at the incoming packets for HTTP queries. Many of the HTTP queries > are bigger than the MTU of 1,500 bytes and therefore arrive as 2, 3, or 4 > packets. I noticed that for some customers there are significant delays > between packets. The average size of these delays acts as a kind of > fingerprint for each customers. The inter packet delay various from a few > milliseconds to 100ms plus! Some customers have no delay. There are all > shades of grey. > > When processing a SAAS query I log how long the processing time took etc. So > it would be useful to log how long NGINX took to read the HTTP query packets > too. Using tcpdump and a script to analyze the packet dump is not very > handy. So I'm wondering if there is a mechanism in NGINX to report somehow > the total time necessary to read all the packets of a particular HTTP query? > I was thinking that if available, I could add it to the HTTP query in the > form of an HTTP header? If not, how easy would it be to implement such a > mechanism? What about $request_time [1]? 
Lukas [1] http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_time From nginx-forum at nginx.us Fri Sep 11 09:24:56 2015 From: nginx-forum at nginx.us (strtwtsn) Date: Fri, 11 Sep 2015 05:24:56 -0400 Subject: Multiple limit_req_zone for same site Message-ID: Hi I'm trying to set multiple limit_req_zones for the same site. Is this possible? We have a few areas where clicking on a link seems to generate a lot of 503s so we'd like to up the limit without jeopardizing the stability of the rest of the site. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261573,261573#msg-261573 From nginx-forum at nginx.us Fri Sep 11 11:06:51 2015 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 11 Sep 2015 07:06:51 -0400 Subject: Multiple limit_req_zone for same site In-Reply-To: References: Message-ID: <6a77be3b44734e28031b45384e75c09c.NginxMailingListEnglish@forum.nginx.org> You can define several zones; limit_req_zone $binary_remote_addr zone=flooda:20m rate=128r/s; limit_req_zone $binary_remote_addr zone=floodp:20m rate=64r/s; limit_req_zone $binary_remote_addr zone=floodh:10m rate=64r/s; and use them separately in location(s) /limited/ { limit_req zone=floodh burst=64 nodelay; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261573,261574#msg-261574 From frederik.nosi at postecom.it Fri Sep 11 11:16:39 2015 From: frederik.nosi at postecom.it (Frederik Nosi) Date: Fri, 11 Sep 2015 13:16:39 +0200 Subject: time to read packets for HTTP query In-Reply-To: References: <15e018b8315f662dc998dc292c45e775.NginxMailingListEnglish@forum.nginx.org> Message-ID: <55F2B817.6050402@postecom.it> Hi, On 09/11/2015 11:20 AM, Lukas Tribus wrote: > Hi, > > >> I'm running a SAAS service running via NGINX and have been running tcpdump >> to look at the incoming packets for HTTP queries. Many of the HTTP queries >> are bigger than the MTU of 1,500 bytes and therefore arrive as 2, 3, or 4 >> packets. 
I noticed that for some customers there are significant delays >> between packets. The average size of these delays acts as a kind of >> fingerprint for each customer. The inter-packet delay varies from a few >> milliseconds to 100ms plus! Some customers have no delay. There are all >> shades of grey. >> >> When processing a SAAS query I log how long the processing time took etc. So >> it would be useful to log how long NGINX took to read the HTTP query packets >> too. Using tcpdump and a script to analyze the packet dump is not very >> handy. So I'm wondering if there is a mechanism in NGINX to report somehow >> the total time necessary to read all the packets of a particular HTTP query? >> I was thinking that if available, I could add it to the HTTP query in the >> form of an HTTP header? If not, how easy would it be to implement such a >> mechanism? > What about $request_time [1]? Does not seem to do what the GP asked, from the docs: $request_time request processing time in seconds with a milliseconds resolution (1.3.9, 1.2.6); time elapsed since the first bytes were read from the client Instead, as I read the question, the GP wants to know the difference between the first request packet and the last request packet coming from the client. Not sure if it can be obtained (hope I'm wrong). At least when it's a new tcp connection (first http request from the client, no keepalive) that means that you have to know when a SYN packet came, but that is something that only the TCP stack knows. Having a look at a strace of a live nginx, the new connection call sequence is: accept4 recvfrom [...] recvfrom Maybe the sum of the delays between recvfrom calls in a single request can do; maybe somebody can come up with a patch.
As a sidenote, these other nginx variables seem interesting too: from http://nginx.org/en/docs/http/ngx_http_core_module.html : $tcpinfo_rtt, $tcpinfo_rttvar, $tcpinfo_snd_cwnd, $tcpinfo_rcv_space information about the client TCP connection; available on systems that support the TCP_INFO socket option > > Lukas > > > [1] http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_time > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Frederik From nginx-forum at nginx.us Fri Sep 11 12:00:24 2015 From: nginx-forum at nginx.us (strtwtsn) Date: Fri, 11 Sep 2015 08:00:24 -0400 Subject: Multiple limit_req_zone for same site In-Reply-To: References: Message-ID: <63415c46a6abf76c1aa0c94897093603.NginxMailingListEnglish@forum.nginx.org> Thanks, so if I do location / limit_req_zone and then location /limited/ limit_req_zone, then the first limit won't apply to the second location? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261573,261576#msg-261576 From luky-37 at hotmail.com Fri Sep 11 12:15:25 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 11 Sep 2015 14:15:25 +0200 Subject: time to read packets for HTTP query In-Reply-To: <55F2B817.6050402@postecom.it> References: <15e018b8315f662dc998dc292c45e775.NginxMailingListEnglish@forum.nginx.org>, , <55F2B817.6050402@postecom.it> Message-ID: > Does not seem to do what the GP asked, from the docs: > > $request_time > request processing time in seconds with a milliseconds resolution > (1.3.9, 1.2.6); time elapsed since the first bytes were read from the client "request time" would imply the time (with or without parsing) of the actual HTTP request, imho. In reality $request_time accounts for the complete request, response and logging, so yes, you are right.
This is clearer in [1] than it is in [2]: > between the first bytes were read from the client and > *the log write after the last bytes were sent to the client* Lukas [1] http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format [2] http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_time From nginx-forum at nginx.us Fri Sep 11 12:29:05 2015 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 11 Sep 2015 08:29:05 -0400 Subject: Multiple limit_req_zone for same site In-Reply-To: <63415c46a6abf76c1aa0c94897093603.NginxMailingListEnglish@forum.nginx.org> References: <63415c46a6abf76c1aa0c94897093603.NginxMailingListEnglish@forum.nginx.org> Message-ID: <28d5b4bbc136f3f12a7a596780c6bba2.NginxMailingListEnglish@forum.nginx.org> strtwtsn Wrote: ------------------------------------------------------- > then the first limit won't apply to the second location > > Thanks Once a location match is made, the request will stay inside this location, including its settings. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261573,261578#msg-261578 From mdounin at mdounin.ru Fri Sep 11 12:53:40 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 11 Sep 2015 15:53:40 +0300 Subject: time to read packets for HTTP query In-Reply-To: References: <15e018b8315f662dc998dc292c45e775.NginxMailingListEnglish@forum.nginx.org> <55F2B817.6050402@postecom.it> Message-ID: <20150911125340.GA62755@mdounin.ru> Hello! On Fri, Sep 11, 2015 at 02:15:25PM +0200, Lukas Tribus wrote: > > Does not seem to do what the GP asked, from the docs: > > > > $request_time > > request processing time in seconds with a milliseconds resolution > > (1.3.9, 1.2.6); time elapsed since the first bytes were read from the client > > "request time" would imply the time (with or without parsing) of the > actual HTTP request, imho. > > In reality $request_time accounts for the complete request, response and > logging, so yes, you are right.
While $request_time indeed accounts for the complete request time when used in logs, it can be accessed (and saved) at some intermediate point. E.g., by using something like set $header_time $request_time; one may save the time since the first bytes were read from a client till rewrite rule processing. This is basically identical to "time necessary to read all the packets of a particular HTTP query" that was asked (at least as long as you don't try to count reading a request body). -- Maxim Dounin http://nginx.org/ From frederik.nosi at postecom.it Fri Sep 11 13:37:48 2015 From: frederik.nosi at postecom.it (Frederik Nosi) Date: Fri, 11 Sep 2015 15:37:48 +0200 Subject: time to read packets for HTTP query In-Reply-To: <20150911125340.GA62755@mdounin.ru> References: <15e018b8315f662dc998dc292c45e775.NginxMailingListEnglish@forum.nginx.org> <55F2B817.6050402@postecom.it> <20150911125340.GA62755@mdounin.ru> Message-ID: <55F2D92C.6000807@postecom.it> Hi Maxim, On 09/11/2015 02:53 PM, Maxim Dounin wrote: > Hello! > > On Fri, Sep 11, 2015 at 02:15:25PM +0200, Lukas Tribus wrote: > >>> Does not seem to do what the GP asked, from the docs: >>> >>> $request_time >>> request processing time in seconds with a milliseconds resolution >>> (1.3.9, 1.2.6); time elapsed since the first bytes were read from the client >> "request time" would imply the time (with or without parsing) of the >> actual HTTP request, imho. >> >> In reality $request_time accounts for the complete request, response and >> logging, so yes, you are right. > While $request_time indeed accounts for the complete request time > when used in logs, it can be accessed (and saved) at some > intermediate point. E.g., by using something like > > set $header_time $request_time; > > one may save the time since the first bytes were read from a client till > rewrite rule processing.
This is basically identical to "time > necessary to read all the packets of a particular HTTP query" that > was asked (at least as long as you don't try to count reading a > request body). Thanks for explaining this, seems quite useful! From mdounin at mdounin.ru Fri Sep 11 14:40:27 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 11 Sep 2015 17:40:27 +0300 Subject: cache file has too long header (bug) ? In-Reply-To: <88ce45fd6972348525572116e0934ed9.NginxMailingListEnglish@forum.nginx.org> References: <20150910181831.GW52312@mdounin.ru> <88ce45fd6972348525572116e0934ed9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150911144026.GE62755@mdounin.ru> Hello! On Thu, Sep 10, 2015 at 04:52:54PM -0400, biazus wrote: > Hi Maxim, > > Thank You for your answer. It really makes sense, however, in the > mentioned case, I could see an elevation of the number of occurrences just > after the migration from nginx 1.6 to nginx 1.8. > This behaviour affects < 0.2 % of our requests, but it means hundreds of > requests per hour. If this happens that often you may want to investigate further. Please see http://wiki.nginx.org/Debugging for some basic debugging hints. In this particular case, it should be helpful to see "nginx -V" output, the full configuration and a debugging log that shows how a response was placed into the cache and how it then resulted in the message in question. Some basic things to consider before doing anything: - check if you are able to reproduce the problem without 3rd party modules/patches; - make sure no other programs (including other nginx instances) try to modify files in the cache directory; - make sure there are no hardware (or other low-level) problems on the server in question.
-- Maxim Dounin http://nginx.org/ From francis at daoine.org Sat Sep 12 14:57:45 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 12 Sep 2015 15:57:45 +0100 Subject: Remove request from nginx error logs In-Reply-To: References: Message-ID: <20150912145745.GT3177@daoine.org> On Thu, Sep 10, 2015 at 06:51:18PM +0530, gaurav gupta wrote: Hi there, > How can we remove request url being logged in nginx error logs. For example > it looks something like: I don't think you can control the details of what nginx writes. Perhaps you could process the logs yourself before passing them on to whoever should not see the details? > 2015/09/01 15:26:03 [error] 30547#0: *208725 upstream prematurely closed > connection while reading response header from upstream, client: > 123.123.50.44, server: test.example.com, request: "GET > /v1.3/status.json?...." > > is it possible to drop request from the log(if present) so it looks > something like: > > 2015/09/01 15:26:03 [error] 30547#0: *208725 upstream prematurely closed > connection while reading response header from upstream, client: > 123.123.50.44, server: test.example.com I believe there are a limited number of error log patterns; possibly a script to "s/, request: .*//" would work for you? (Test your logs against the output that you want, to see whether it is enough and not too much.) > I was able to configure access logs but couldn't find a way to customize > error logs. If there is no way to drop just request, is it possible to drop > complete error logs matching a particular format something similar to what > https://github.com/cfsego/ngx_log_if does for access logs. I think that the error logs are deliberately not configurable (other than by setting the error log level). When something goes wrong, you generally want all of the information available, to be able to see what is needed to make it go right. 
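Going back to the substitution idea above, it could run as a small post-processing step before the logs are handed over; a sketch (the sample line is the one from your mail, with the URL shortened):

```shell
# One nginx error-log line, as posted above (URL shortened):
line='2015/09/01 15:26:03 [error] 30547#0: *208725 upstream prematurely closed connection while reading response header from upstream, client: 123.123.50.44, server: test.example.com, request: "GET /v1.3/status.json"'

# Drop everything from ", request:" to the end of the line:
sanitized=$(printf '%s\n' "$line" | sed 's/, request: .*//')
printf '%s\n' "$sanitized"
```

In practice you would feed the whole error log through the same sed expression rather than a single line; test it against your own logs first, since other error messages use slightly different formats.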
And when you ask someone else to interpret the error logs, it is convenient if they don't have to guess which pieces are different due to your configuration. Obviously, you're welcome to write (or encourage someone to write for you) your code to do whatever you want. But I think that stock nginx doesn't have this facility, and won't have this facility. f -- Francis Daly francis at daoine.org From gfrankliu at gmail.com Mon Sep 14 20:48:45 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Mon, 14 Sep 2015 13:48:45 -0700 Subject: ignore "connection: close" from upstream Message-ID: Hi, I have below setup: client -> nginx server A -> proxy server -> real backend server (say, nginx server B) I'd like to have the keepalive connection between nginx server A and proxy server never die. The problem is sometimes the real backend server sends a "Connection: close". For example, I know if nginx is used as the real backend server, it will by default send "Connection: close" after 100 requests. When proxy server passes that response to nginx A, nginx will drop the keepalive link. Since I don't have control over the real backend server, my question is whether it is possible to configure nginx A to ignore the upstream "connection: close" and keep the link alive? Does nginx A have to pass the "Connection: close" to client? I thought nginx and client manage the keepalive separately instead of relying on upstream keepalive. I have control over the proxy server, so I could add a "X-My-Connection: close" or "X-My-Connection: keep-alive" to manage the connections between nginx server A and proxy server. Can nginx be configured to honor the custom header so that we aren't affected by the real backend server? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nik.molnar at consbio.org Mon Sep 14 23:26:52 2015 From: nik.molnar at consbio.org (Nikolas Stevenson-Molnar) Date: Mon, 14 Sep 2015 16:26:52 -0700 Subject: NULL values in Nginx logs? Message-ID: <4142bd1c-5f7b-44de-bbe2-35aaa35bfafd@getmailbird.com> I experienced a problem with a web server earlier today in which it stopped responding to all HTTP requests (they would just hang... not sure if they connected and then hung, or got stuck trying to connect...). I'm not sure this was even an Nginx issue (I had problems SSHing to the server and eventually had to do a hard reboot), but the only thing out of the ordinary I can find in any of the application and system logs is a big block of \0 characters in both the access and error logs for Nginx around the time the problem started. Does anyone know what might cause Nginx to write NULLs to its access and error logs? Thanks, _Nik -------------- next part -------------- An HTML attachment was scrubbed... URL: From youcanpoint at me.com Tue Sep 15 01:22:39 2015 From: youcanpoint at me.com (Tyarko Leander Rodney) Date: Tue, 15 Sep 2015 03:22:39 +0200 Subject: nginx IETF RFC21266 Compliance - 'Proxy-Connection' Message-ID: <4BF5AF5D-1077-43D2-882B-7F56E3CD98A8@me.com> Hi, I've posted this question on the IRC before but had no luck. I have the following problem: I'd like to disable the 'Proxy-Connection' Response Header. I know that the 'Connection' Header is hard coded in ngx_http_header_filter_module.c, but does the same apply to 'Proxy-Connection' (couldn't find it in the sources)? I've tried more_clear_headers from the ngx_headers_more module and proxy_set_header (which both work fine with all other headers). Background: The 'Proxy-Connection' header sadly violates our Server Policy (strict RFC21266 compliance). Kind regards T. Rodney From nginx-forum at nginx.us Tue Sep 15 06:58:03 2015 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 15 Sep 2015 02:58:03 -0400 Subject: NULL values in Nginx logs?
In-Reply-To: <4142bd1c-5f7b-44de-bbe2-35aaa35bfafd@getmailbird.com> References: <4142bd1c-5f7b-44de-bbe2-35aaa35bfafd@getmailbird.com> Message-ID: You mean something like: [09/Sep/2015:20:21:38 +0200] 52.88.xxx.yyy zzzzz - - http "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 [...] \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" 400 181 "-" "-" "-" - Thats just a normal hack attempt and should have a less then zero % impact on nginx. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261609,261611#msg-261611 From nginx-forum at nginx.us Tue Sep 15 07:55:03 2015 From: nginx-forum at nginx.us (kamalakarv) Date: Tue, 15 Sep 2015 03:55:03 -0400 Subject: Nginx not closing connections after applying logjam fix Message-ID: <481034e6eadfad4e24e434b9c8a9a38f.NginxMailingListEnglish@forum.nginx.org> Nginx details: Nginx version : 1.6.2 Java : 1.6 Openssl : 1.0.1 keepalive timeout : 65 ssl_dhparam /etc/nginx/conf/dhparams.pem ( enabled this) Any help appreciated ? Regards KV Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261613,261613#msg-261613 From francis at daoine.org Tue Sep 15 08:01:21 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 15 Sep 2015 09:01:21 +0100 Subject: ignore "connection: close" from upstream In-Reply-To: References: Message-ID: <20150915080121.GW3177@daoine.org> On Mon, Sep 14, 2015 at 01:48:45PM -0700, Frank Liu wrote: Hi there, There seems to be a few separate questions in here. I do not have useful answers to all of them. > I have below setup: > > client -> nginx server A -> proxy server -> real backend server (say, > nginx server B) nginx does not speak to a proxy server. 
So either your proxy server is acting as an http server, or your architecture is not going to work very well. > I'd like to have the keepalive connection between nginx server A and proxy > server never die. That's a feature of the two servers -- so if you control them both, you can have it be long-lasting. "never" is ambitious, but you may be able to get "seldom enough". > The problem is sometimes the real backend server sends a > "Connection: close". For example, I know if nginx is used as the real > backend server, it will by default send "Connection: close" after 100 > requests. When proxy server passes that response to nginx A, nginx will > drop the keepalive link. proxy server should not pass that response to nginx A. RFC 2616 s13.5.1 Fix proxy server so that it does not, and your problem might disappear. > Since I don't have control over the real backend server, my question is > whether it is possible to configure nginx A to ignore the upstream > "connection: close" and keep the link alive? Does nginx A have to pass the > "Connection: close" to client? I thought nginx and client manage the > keepalive separately instead of relying on upstream keepalive. If "the thing talking to nginx" says "this connection is closing", it would seem hopeful for nginx to keep talking on that connection -- there is a good chance that the other end of the connection has already closed. > I have control over the proxy server, so I could add a "X-My-Connection: > close" or "X-My-Connection: keep-alive" to manage the connections between > nginx server A and proxy server. Can nginx be configured to honor the > custom header so that we aren't affected by the real backend server? It sounds to me like your proxy server is not doing what a proxy server should do. And, as nginx does not talk to a proxy server, your architecture may be wrong.
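For what it's worth, if the middle hop really does behave as a plain http server, the nginx-A side of upstream keepalive would look something like this (a sketch only; addresses and names invented):

```nginx
upstream middle_proxy {
    server 10.0.0.2:8080;   # the "proxy server" hop, assumed to speak plain HTTP
    keepalive 16;           # idle connections each worker keeps open to it
}

server {
    listen 80;
    location / {
        proxy_pass http://middle_proxy;
        proxy_http_version 1.1;          # upstream keepalive needs HTTP/1.1
        proxy_set_header Connection "";  # don't forward the client's Connection header
    }
}
```

But none of that stops the middle hop itself closing the connection, which is the problem described above.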
f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Sep 15 08:08:37 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 15 Sep 2015 09:08:37 +0100 Subject: nginx IETF RFC21266 Compliance - 'Proxy-Connection' In-Reply-To: <4BF5AF5D-1077-43D2-882B-7F56E3CD98A8@me.com> References: <4BF5AF5D-1077-43D2-882B-7F56E3CD98A8@me.com> Message-ID: <20150915080837.GX3177@daoine.org> On Tue, Sep 15, 2015 at 03:22:39AM +0200, Tyarko Leander Rodney wrote: Hi there, > I've posted this question on the IRC before but had no luck. I have the following problem: > > I'd like to disable the 'Proxy-Connection' Response Header. Which "Proxy-Connection" Response Header is that? > I know that the 'Connection' Header is hard coded in ngx_http_header_filter_module.c, but does the same apply to 'Proxy-Connection' (couldn't find it in the sources)? If it's not in the source, it probably doesn't come from nginx. Can you provide a config and a request/response that shows the behaviour that you don't want to see? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Sep 15 13:33:02 2015 From: nginx-forum at nginx.us (derp14) Date: Tue, 15 Sep 2015 09:33:02 -0400 Subject: reverse proxy + basic authentication Message-ID: <2c83e9ed1ef2c101f78f43837db40cff.NginxMailingListEnglish@forum.nginx.org> Hello, Please excuse me if this has been asked/solved before. I've searched for an answer for some good hours but haven't found one, so I'm trying here. I have a website on some different server, which does not have any authentication. (So it loads directly the private stuff) I have configured nginx as a reverse proxy which is pretty clear, and works fine. Is it possible to configure basic authentication in nginx as the only layer of authentication, and if this is successful continue with the reverse proxy role and load the website from a different server? Thank you!
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261623,261623#msg-261623 From mdounin at mdounin.ru Tue Sep 15 13:40:13 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 Sep 2015 16:40:13 +0300 Subject: nginx IETF RFC21266 Compliance - 'Proxy-Connection' In-Reply-To: <4BF5AF5D-1077-43D2-882B-7F56E3CD98A8@me.com> References: <4BF5AF5D-1077-43D2-882B-7F56E3CD98A8@me.com> Message-ID: <20150915134013.GL62755@mdounin.ru> Hello! On Tue, Sep 15, 2015 at 03:22:39AM +0200, Tyarko Leander Rodney wrote: > Hi, > > I've posted this question on the IRC before but had no luck. I > have the following problem: > > I'd like to disable the 'Proxy-Connection' Response Header. I > know that the 'Connection' Header is hard coded in > ngx_http_header_filter_module.c, but does the same apply to > 'Proxy-Connection' (couldn't find it in the sources)? > > I've tried more_clear_headers from the ngx_headers_more > module and proxy_set_header (which both work fine with all other > headers). > > Background: The 'Proxy-Connection' header sadly violates our Server > Policy (strict RFC21266 compliance). The "Proxy-Connection" header is not something standard and there are no requirements about handling it in any specification I'm aware of, including RFC 2616. (And there is no such thing as "RFC21266", either.) Either way, if you want to stop some header, e.g., Proxy-Connection, from being forwarded by the nginx proxy module, then: - for request headers, use: proxy_set_header Proxy-Connection ""; - for response headers, use proxy_hide_header Proxy-Connection; See here for the documentation: http://nginx.org/r/proxy_set_header http://nginx.org/r/proxy_hide_header -- Maxim Dounin http://nginx.org/ From nik.molnar at consbio.org Tue Sep 15 14:17:50 2015 From: nik.molnar at consbio.org (Nikolas Stevenson-Molnar) Date: Tue, 15 Sep 2015 07:17:50 -0700 Subject: NULL values in Nginx logs?
In-Reply-To: References: <4142bd1c-5f7b-44de-bbe2-35aaa35bfafd@getmailbird.com> Message-ID: Ok, yes, that's what I was seeing. Thanks for the info! _Nik On 9/14/2015 11:58:13 PM, itpp2012 wrote: You mean something like: [09/Sep/2015:20:21:38 +0200] 52.88.xxx.yyy zzzzz - - http "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 [...] \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" 400 181 "-" "-" "-" - Thats just a normal hack attempt and should have a less then zero % impact on nginx. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261609,261611#msg-261611 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Sep 16 10:05:04 2015 From: nginx-forum at nginx.us (kamalakarv) Date: Wed, 16 Sep 2015 06:05:04 -0400 Subject: Nginx waiting connections growing Message-ID: <7784c54efed9ae4ba841cd5882025ed6.NginxMailingListEnglish@forum.nginx.org> Hi All, Active connections: 551 server accepts handled requests 69542 69542 79078 Reading: 0 Writing: 2 Waiting: 524 After I have moved my server from 32 bit Ubuntu 11.04 to Ubuntu 64bit 12.04 and I am observing they is sudden rise in the Waiting and Active connections and these values are growing daily and not comming back? it is a problem? KeepAlive timeout : 65 Nginx : 1.6.2 On previous 32 bit Ubuntu 11.04 machine the values were pretty low and all of sudden I am seeing rise in the metrics any help appricated. 
Regards KV Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261632,261632#msg-261632 From nginx-forum at nginx.us Wed Sep 16 10:09:24 2015 From: nginx-forum at nginx.us (kamalakarv) Date: Wed, 16 Sep 2015 06:09:24 -0400 Subject: Nginx waiting connections growing In-Reply-To: <7784c54efed9ae4ba841cd5882025ed6.NginxMailingListEnglish@forum.nginx.org> References: <7784c54efed9ae4ba841cd5882025ed6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1a047e773bcfda6ab38bc2dc816227a8.NginxMailingListEnglish@forum.nginx.org> I have one more question 24360 ? Ss 0:00 nginx: master process /etc/nginx/sbin/nginx 10479 ? S 0:06 \_ nginx: worker process > ls -l /proc/10479/fd | wc -l 95 Active connections: 551 server accepts handled requests 69542 69542 79078 Reading: 0 Writing: 2 Waiting: 524 -- As per my understanding, if there are 551 active connections there should be the same number of fds (file descriptors) open, right? But in my case it is just 95? Your help is appreciated Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261632,261633#msg-261633 From mdounin at mdounin.ru Wed Sep 16 12:49:58 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 16 Sep 2015 15:49:58 +0300 Subject: Nginx waiting connections growing In-Reply-To: <1a047e773bcfda6ab38bc2dc816227a8.NginxMailingListEnglish@forum.nginx.org> References: <7784c54efed9ae4ba841cd5882025ed6.NginxMailingListEnglish@forum.nginx.org> <1a047e773bcfda6ab38bc2dc816227a8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150916124958.GP62755@mdounin.ru> Hello! On Wed, Sep 16, 2015 at 06:09:24AM -0400, kamalakarv wrote: > I have one more question > > 24360 ? Ss 0:00 nginx: master process /etc/nginx/sbin/nginx > 10479 ?
S 0:06 \_ nginx: worker process > > > ls -l /proc/10479/fd | wc -l > 95 > > Active connections: 551 > server accepts handled requests > 69542 69542 79078 > Reading: 0 Writing: 2 Waiting: 524 > > -- As per my understanding if there are 551 active connections means there > should be same amount of fd ( file descriptors) should be opened right? > but in my case it is just 95 ? It looks like the nginx worker process died for some reason. Try looking into the logs; there should be something like "[alert] worker process ... exited on signal ...". Some very basic things to check are: - if you are seeing the problem without 3rd party modules; - if you are seeing the problem with recent nginx versions. Some more debugging hints can be found at: http://wiki.nginx.org/Debugging -- Maxim Dounin http://nginx.org/ From lists-nginx at swsystem.co.uk Wed Sep 16 13:05:21 2015 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Wed, 16 Sep 2015 14:05:21 +0100 Subject: nginx IETF RFC21266 Compliance - 'Proxy-Connection' In-Reply-To: <4BF5AF5D-1077-43D2-882B-7F56E3CD98A8@me.com> References: <4BF5AF5D-1077-43D2-882B-7F56E3CD98A8@me.com> Message-ID: <2ded2e3252475e7fd542a4fbe7a02c97@swsystem.co.uk> At risk of repeating previous advice, see below ... -------- Original Message -------- Subject: Re: nginx RFC21266 Compliance - 'Proxy-Connection' Date: 25/08/2015 21:21 From: Steve Wilson To: nginx at nginx.org Reply-To: nginx at nginx.org Looking at https://en.wikipedia.org/wiki/List_of_HTTP_header_fields it suggests it's a non-standard request header. You can probably strip this out of the request to the real server with proxy_set_header "Proxy-Connection" ""; Although I'd expect the backend server to ignore invalid request headers rather than bork on the request. Steve. On 15/09/2015 02:22, Tyarko Leander Rodney wrote: > Hi, > > I've posted this question on the IRC before but had no luck. I have > the following problem: > > I'd like to disable the 'Proxy-Connection' Response Header.
I know > that the 'Connection' Header is hard coded in > ngx_http_header_filter_module.c, but does the same apply to > 'Proxy-Connection' (couldn't find it in the sources)? > > I've tried more_clear_headers from the ngx_headers_more module and > proxy_set_header (which both work fine with all other headers). > > Background: The 'Proxy-Connection' header sadly violates our Server Policy > (strict RFC21266 compliance). > > Kind regards > > T. Rodney > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From lists-nginx at swsystem.co.uk Wed Sep 16 13:10:54 2015 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Wed, 16 Sep 2015 14:10:54 +0100 Subject: reverse proxy + basic authentication In-Reply-To: <2c83e9ed1ef2c101f78f43837db40cff.NginxMailingListEnglish@forum.nginx.org> References: <2c83e9ed1ef2c101f78f43837db40cff.NginxMailingListEnglish@forum.nginx.org> Message-ID: Adding the below should remove any authentication headers in the request to the backend server(s). proxy_set_header "Authorization" ""; Steve. On 15/09/2015 14:33, derp14 wrote: > Hello, > > Please excuse me if this has been asked/solved before. I've searched for an > answer for some good hours but haven't found one, so I'm trying here. > > I have a website on some different server, which does not have any > authentication. (So it loads directly the private stuff) > I have configured nginx as a reverse proxy which is pretty clear, and > works > fine. > > Is it possible to configure basic authentication in nginx as the only > layer > of authentication, and if this is successful continue with the reverse > proxy > role and load the website from a different server? > > Thank you!
> > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,261623,261623#msg-261623 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From bruno.premont at restena.lu Wed Sep 16 13:33:13 2015 From: bruno.premont at restena.lu (Bruno =?UTF-8?B?UHLDqW1vbnQ=?=) Date: Wed, 16 Sep 2015 15:33:13 +0200 Subject: Nginx waiting connections growing In-Reply-To: <20150916124958.GP62755@mdounin.ru> References: <7784c54efed9ae4ba841cd5882025ed6.NginxMailingListEnglish@forum.nginx.org> <1a047e773bcfda6ab38bc2dc816227a8.NginxMailingListEnglish@forum.nginx.org> <20150916124958.GP62755@mdounin.ru> Message-ID: <20150916153313.40f17129@pluto.restena.lu> Hello Maxim, Seeing the same issue here, running nginx-1.8 (compiled for i586, against openssl-1.0.1p). Some workers do complain shortly after the daily SIGHUP to reload configuration and rotate logs: 2015/09/02 10:07:14 [notice] 18162#0: exiting 2015/09/02 10:07:14 [alert] 18162#0: *1471655 open socket #147 left in connection 40 2015/09/02 10:07:14 [alert] 18162#0: *1485419 open socket #224 left in connection 44 2015/09/02 10:07:14 [alert] 18162#0: *1548715 open socket #212 left in connection 84 2015/09/02 10:07:14 [alert] 18162#0: *1685585 open socket #61 left in connection 164 2015/09/02 10:07:14 [alert] 18162#0: *1462853 open socket #290 left in connection 202 2015/09/02 10:07:14 [alert] 18162#0: *1687835 open socket #76 left in connection 231 2015/09/02 10:07:14 [alert] 18162#0: *1684533 open socket #62 left in connection 237 2015/09/02 10:07:14 [alert] 18162#0: *1647090 open socket #32 left in connection 255 2015/09/02 10:07:14 [alert] 18162#0: *1598817 open socket #209 left in connection 281 2015/09/02 10:07:14 [alert] 18162#0: *1686652 open socket #166 left in connection 283 2015/09/02 10:07:14 [alert] 18162#0: aborting Of the two nginx frontend servers, only the one with mostly SSL traffic is visibly affected (same binary on both 
servers). I've not seen the issue with 1.7.x releases of nginx (only external module in use is headers_more). Bruno On Wed, 16 Sep 2015 15:49:58 +0300 Maxim Dounin wrote: > Hello! > > On Wed, Sep 16, 2015 at 06:09:24AM -0400, kamalakarv wrote: > > > I have one more question > > > > 24360 ? Ss 0:00 nginx: master process /etc/nginx/sbin/nginx > > 10479 ? S 0:06 \_ nginx: worker process > > > > > ls -l /proc/10479/fd | wc -l > > 95 > > > > Active connections: 551 > > server accepts handled requests > > 69542 69542 79078 > > Reading: 0 Writing: 2 Waiting: 524 > > > > -- As per my understanding if there are 551 active connections means there > > should be same amount of fd ( file descriptors) should be opened right? > > but in my case it is just 95 ? > > It looks like nginx worker process died for some reason. Try > looking into logs, it should have something like "[alert] worker > process ... exited on signal ...". > > Some very basic things to check are: > > - if you are seeing the problem without 3rd party modules; > > - if you are seeing the problem with recent nginx versions. > > Some more debugging hints can be found at: > > http://wiki.nginx.org/Debugging > From mdounin at mdounin.ru Wed Sep 16 15:20:56 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 16 Sep 2015 18:20:56 +0300 Subject: Nginx waiting connections growing In-Reply-To: <20150916153313.40f17129@pluto.restena.lu> References: <7784c54efed9ae4ba841cd5882025ed6.NginxMailingListEnglish@forum.nginx.org> <1a047e773bcfda6ab38bc2dc816227a8.NginxMailingListEnglish@forum.nginx.org> <20150916124958.GP62755@mdounin.ru> <20150916153313.40f17129@pluto.restena.lu> Message-ID: <20150916152056.GR62755@mdounin.ru> Hello! On Wed, Sep 16, 2015 at 03:33:13PM +0200, Bruno Pr?mont wrote: > Seeing the same issue here, running nginx-1.8 (compiled for i586, > against openssl-1.0.1p). 
> > Some workers do complain shortly after the daily SIGHUP to reload > configuration and rotate logs: > 2015/09/02 10:07:14 [notice] 18162#0: exiting > 2015/09/02 10:07:14 [alert] 18162#0: *1471655 open socket #147 left in connection 40 > 2015/09/02 10:07:14 [alert] 18162#0: *1485419 open socket #224 left in connection 44 > 2015/09/02 10:07:14 [alert] 18162#0: *1548715 open socket #212 left in connection 84 > 2015/09/02 10:07:14 [alert] 18162#0: *1685585 open socket #61 left in connection 164 > 2015/09/02 10:07:14 [alert] 18162#0: *1462853 open socket #290 left in connection 202 > 2015/09/02 10:07:14 [alert] 18162#0: *1687835 open socket #76 left in connection 231 > 2015/09/02 10:07:14 [alert] 18162#0: *1684533 open socket #62 left in connection 237 > 2015/09/02 10:07:14 [alert] 18162#0: *1647090 open socket #32 left in connection 255 > 2015/09/02 10:07:14 [alert] 18162#0: *1598817 open socket #209 left in connection 281 > 2015/09/02 10:07:14 [alert] 18162#0: *1686652 open socket #166 left in connection 283 > 2015/09/02 10:07:14 [alert] 18162#0: aborting > > Of the two nginx frontend servers, only the one with mostly SSL > traffic is visibly affected (same binary on both servers). This is likely a different issue, as open sockets are expected to be seen as open file descriptors. If you are using SPDY, please try without it, see these tickets for similar reports: https://trac.nginx.org/nginx/ticket/626 https://trac.nginx.org/nginx/ticket/714 If not, you may want to consider obtaining more information. The http://wiki.nginx.org/Debugging contains some hints about debugging socket leaks as well. > I've not seen the issue with 1.7.x releases of nginx (only external > module in use is headers_more). I wouldn't suppose it's safe, either. In the past it caused segmentation faults at least once. 
-- Maxim Dounin http://nginx.org/ From r at roze.lv Wed Sep 16 16:51:00 2015 From: r at roze.lv (Reinis Rozitis) Date: Wed, 16 Sep 2015 19:51:00 +0300 Subject: http2 Message-ID: <07DA69855DC3462BAF6EF81F616D0F8F@NeiRoze> Hello, will the HTTP/2 support land also in the community edition or only stay as a nginx-plus feature? rr From sarah at nginx.com Wed Sep 16 16:52:42 2015 From: sarah at nginx.com (Sarah Novotny) Date: Wed, 16 Sep 2015 09:52:42 -0700 Subject: http2 In-Reply-To: <07DA69855DC3462BAF6EF81F616D0F8F@NeiRoze> References: <07DA69855DC3462BAF6EF81F616D0F8F@NeiRoze> Message-ID: <783BF6FA-B87D-4B9C-899C-08F86EB7C7E2@nginx.com> Hi! HTTP/2 is available currently for NGINX open source as a patch https://www.nginx.com/blog/early-alpha-patch-http2/ and will be included in the next open source release of mainline. sarah > On Sep 16, 2015, at 9:51 AM, Reinis Rozitis wrote: > > Hello, > will the HTTP/2 support land also in the community edition or only stay as a nginx-plus feature? > > rr > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick at nginx.com Wed Sep 16 16:53:49 2015 From: patrick at nginx.com (Patrick Nommensen) Date: Wed, 16 Sep 2015 09:53:49 -0700 Subject: http2 In-Reply-To: <07DA69855DC3462BAF6EF81F616D0F8F@NeiRoze> References: <07DA69855DC3462BAF6EF81F616D0F8F@NeiRoze> Message-ID: On Wed, Sep 16, 2015 at 9:51 AM, Reinis Rozitis wrote: > Hello, > will the HTTP/2 support land also in the community edition or only stay as > a nginx-plus feature? > HTTP/2 is already 100% open source. It will be committed to the base very soon. 
https://www.nginx.com/blog/early-alpha-patch-http2/ http://nginx.org/patches/http2/ -Patrick > > rr > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Wed Sep 16 17:05:31 2015 From: r at roze.lv (Reinis Rozitis) Date: Wed, 16 Sep 2015 20:05:31 +0300 Subject: http2 In-Reply-To: <783BF6FA-B87D-4B9C-899C-08F86EB7C7E2@nginx.com> References: <07DA69855DC3462BAF6EF81F616D0F8F@NeiRoze> <783BF6FA-B87D-4B9C-899C-08F86EB7C7E2@nginx.com> Message-ID: <5306978BD8414966949964CD9586E554@NeiRoze> > From: Sarah Novotny > > Hi! > > HTTP/2 is available currently for NGINX open source as a patch > https://www.nginx.com/blog/early-alpha-patch-http2/ and will be > included in the next open source release of mainline. > > sarah Thx for clarification .. Somehow have missed this blog entry and the whole "patches-allready-available". rr From maxim at nginx.com Wed Sep 16 17:11:42 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 16 Sep 2015 20:11:42 +0300 Subject: http2 In-Reply-To: <5306978BD8414966949964CD9586E554@NeiRoze> References: <07DA69855DC3462BAF6EF81F616D0F8F@NeiRoze> <783BF6FA-B87D-4B9C-899C-08F86EB7C7E2@nginx.com> <5306978BD8414966949964CD9586E554@NeiRoze> Message-ID: <55F9A2CE.2050306@nginx.com> On 9/16/15 8:05 PM, Reinis Rozitis wrote: >> From: Sarah Novotny >> >> Hi! >> >> HTTP/2 is available currently for NGINX open source as a patch >> https://www.nginx.com/blog/early-alpha-patch-http2/ and will be >> included in the next open source release of mainline. >> >> sarah > > > Thx for clarification .. > Somehow have missed this blog entry and the whole > "patches-allready-available". 
> The first version of the patch was published and announced in nginx-devel@ in August: http://mailman.nginx.org/pipermail/nginx-devel/2015-August/007180.html -- Maxim Konovalov Discover best practices for building & delivering apps at scale. nginx.conf 2015: Sept. 22-24, San Francisco. http://nginx.com/nginxconf From bruno.premont at restena.lu Thu Sep 17 06:17:31 2015 From: bruno.premont at restena.lu (Bruno =?UTF-8?B?UHLDqW1vbnQ=?=) Date: Thu, 17 Sep 2015 08:17:31 +0200 Subject: Nginx waiting connections growing In-Reply-To: <20150916152056.GR62755@mdounin.ru> References: <7784c54efed9ae4ba841cd5882025ed6.NginxMailingListEnglish@forum.nginx.org> <1a047e773bcfda6ab38bc2dc816227a8.NginxMailingListEnglish@forum.nginx.org> <20150916124958.GP62755@mdounin.ru> <20150916153313.40f17129@pluto.restena.lu> <20150916152056.GR62755@mdounin.ru> Message-ID: <20150917081731.352c33c0@pluto.restena.lu> Hello Maxim, On Wed, 16 Sep 2015 18:20:56 +0300 Maxim Dounin wrote: > On Wed, Sep 16, 2015 at 03:33:13PM +0200, Bruno Pr?mont wrote: > > > Seeing the same issue here, running nginx-1.8 (compiled for i586, > > against openssl-1.0.1p). 
> > > > Some workers do complain shortly after the daily SIGHUP to reload > > configuration and rotate logs: > > 2015/09/02 10:07:14 [notice] 18162#0: exiting > > 2015/09/02 10:07:14 [alert] 18162#0: *1471655 open socket #147 left in connection 40 > > 2015/09/02 10:07:14 [alert] 18162#0: *1485419 open socket #224 left in connection 44 > > 2015/09/02 10:07:14 [alert] 18162#0: *1548715 open socket #212 left in connection 84 > > 2015/09/02 10:07:14 [alert] 18162#0: *1685585 open socket #61 left in connection 164 > > 2015/09/02 10:07:14 [alert] 18162#0: *1462853 open socket #290 left in connection 202 > > 2015/09/02 10:07:14 [alert] 18162#0: *1687835 open socket #76 left in connection 231 > > 2015/09/02 10:07:14 [alert] 18162#0: *1684533 open socket #62 left in connection 237 > > 2015/09/02 10:07:14 [alert] 18162#0: *1647090 open socket #32 left in connection 255 > > 2015/09/02 10:07:14 [alert] 18162#0: *1598817 open socket #209 left in connection 281 > > 2015/09/02 10:07:14 [alert] 18162#0: *1686652 open socket #166 left in connection 283 > > 2015/09/02 10:07:14 [alert] 18162#0: aborting > > > > Of the two nginx frontend servers, only the one with mostly SSL > > traffic is visibly affected (same binary on both servers). > > This is likely a different issue, as open sockets are expected to > be seen as open file descriptors. > > If you are using SPDY, please try without it, see these tickets > for similar reports: > > https://trac.nginx.org/nginx/ticket/626 > https://trac.nginx.org/nginx/ticket/714 > > If not, you may want to consider obtaining more information. The > http://wiki.nginx.org/Debugging contains some hints about > debugging socket leaks as well. > > > I've not seen the issue with 1.7.x releases of nginx (only external > > module in use is headers_more). > > I wouldn't suppose it's safe, either. In the past it caused > segmentation faults at least once. 
SPDY is active on all SSL listen directives so it probably is the same issue as reported in those tickets. Just surprising that I've not seen it with previous 1.7.x or earlier releases of nginx. Will have a look at disabled SPDY in the coming days. Bruno From nginx-forum at nginx.us Thu Sep 17 10:05:15 2015 From: nginx-forum at nginx.us (mjordan79) Date: Thu, 17 Sep 2015 06:05:15 -0400 Subject: NGINX + Spark Web UI Message-ID: <3859838a690557c05735ac032e789da4.NginxMailingListEnglish@forum.nginx.org> Hello! I'm trying to set up a reverse proxy (using nginx) for the Spark Web UI. I have 2 machines: 1) Machine A, with a public IP. This machine will be used to access Spark Web UI on the Machine B through its private IP address. 2) Machine B, where Spark is installed (standalone master cluster, 1 worker node and the history server) not accessible from the outside. Basically I want to access the Spark Web UI through my Machine A using the URL: http://machine_A_ip_address/spark Currently I have this setup: http { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; proxy_set_header X-Ssl on; } # Master cluster node upstream app_master { server machine_B_ip_address:8080; } # Slave worker node upstream app_worker { server machine_B_ip_address:8081; } # Job UI upstream app_ui { server machine_B_ip_address:4040; } # History server upstream app_history { server machine_B_ip_address:18080; } I'm really struggling in figuring out a correct location directive to make the whole thing work, not only for accessing all ports using the url /spark but also in making the links in the web app be transformed accordingly. Any help really appreciated. Thank you in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261637,261637#msg-261637 From reallfqq-nginx at yahoo.fr Thu Sep 17 12:03:10 2015 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Thu, 17 Sep 2015 14:03:10 +0200 Subject: NGINX + Spark Web UI In-Reply-To: <3859838a690557c05735ac032e789da4.NginxMailingListEnglish@forum.nginx.org> References: <3859838a690557c05735ac032e789da4.NginxMailingListEnglish@forum.nginx.org> Message-ID: Have a look at the docs: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass --- *B. R.* On Thu, Sep 17, 2015 at 12:05 PM, mjordan79 wrote: > Hello! > I'm trying to set up a reverse proxy (using nginx) for the Spark Web UI. > I have 2 machines: > 1) Machine A, with a public IP. This machine will be used to access > Spark > Web UI on the Machine B through its private IP address. > 2) Machine B, where Spark is installed (standalone master cluster, 1 > worker node and the history server) not accessible from the outside. > > Basically I want to access the Spark Web UI through my Machine A using the > URL: > http://machine_A_ip_address/spark > > Currently I have this setup: > http { > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header Host $http_host; > proxy_set_header X-NginX-Proxy true; > proxy_set_header X-Ssl on; > } > > # Master cluster node > upstream app_master { > server machine_B_ip_address:8080; > } > > # Slave worker node > upstream app_worker { > server machine_B_ip_address:8081; > } > > # Job UI > upstream app_ui { > server machine_B_ip_address:4040; > } > > # History server > upstream app_history { > server machine_B_ip_address:18080; > } > > I'm really struggling in figuring out a correct location directive to make > the whole thing work, not only for accessing all ports using the url /spark > but also in making the links in the web app be transformed accordingly. > > Any help really appreciated. > Thank you in advance. 
> > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,261637,261637#msg-261637 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Sep 17 13:50:21 2015 From: nginx-forum at nginx.us (pwe) Date: Thu, 17 Sep 2015 09:50:21 -0400 Subject: multiple subdomains Message-ID: <573b9f97be2787c4881a79e03091bb4f.NginxMailingListEnglish@forum.nginx.org> Hello, I want to realize the following: mail.domain1.com --> mail.domain1.com mail.domain2.com --> mail.domain2.com mail.domain3.com --> mail.domain3.com mail.domain4.com --> mail.domain4.com mail.domain5.com --> mail.domain5.com It has to be like this, because there is another nginx integrated in the application software ... I already read about variables in proxy_pass but didn't succeed. :( any hints would be appreciated! kind regards, pwe Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261638,261638#msg-261638 From nginx-forum at nginx.us Thu Sep 17 16:19:31 2015 From: nginx-forum at nginx.us (slowhand84) Date: Thu, 17 Sep 2015 12:19:31 -0400 Subject: Memory usage for the ngx_http_limit_req_module module Message-ID: <5c95a3d2c24819000ee713532ee3c7cd.NginxMailingListEnglish@forum.nginx.org> Hello, I'm using the "ngx_http_limit_req" module to limit the service usage. I have a question about the "limit_req_zone" directive: how can I set the correct size for the zone? In the docs I see that "One megabyte zone can keep about 16 thousand 64-byte states", but how can I know how much memory is necessary for each state? Is the state a request in the "leaky bucket"? 
Regards Luca Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261640,261640#msg-261640 From francis at daoine.org Thu Sep 17 19:38:50 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 17 Sep 2015 20:38:50 +0100 Subject: Memory usage for the ngx_http_limit_req_module module In-Reply-To: <5c95a3d2c24819000ee713532ee3c7cd.NginxMailingListEnglish@forum.nginx.org> References: <5c95a3d2c24819000ee713532ee3c7cd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150917193850.GZ3177@daoine.org> On Thu, Sep 17, 2015 at 12:19:31PM -0400, slowhand84 wrote: Hi there, > I'm using the "nxg_http_limit_req" module to limit the service usage. > I've a questions about the "limit_req_zone" directive: how can I set the > correct size for the zone? > In docs I see that "One megabyte zone can keep about 16 thousand 64-byte > states", how can I know how much memory is necessary for each state? Is the > state a request in the "leaky bucket"? Without checking the source, I'd suggest that the state is probably close to "a fixed size, plus the size of the key that you choose". And the documentation suggests that using $binary_remote_addr as a key, the state is about 64 bytes. $binary_remote_addr is 4 or 16 bytes, probably. If you want the real number in your deployment, set a zone of size (say) 1kB; make counted requests with unique keys, and see which one gives you the first "zone is full" (503) response. If the state is 64 bytes, you'd expect the 17th request to fail. 
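The probing experiment described above can be sketched as a config. Everything here (zone name, key, size, and port) is hypothetical, and nginx may enforce a minimum shared-zone size, so use the smallest size it accepts:

```nginx
# Key on a client-chosen query argument so every new ?k= value
# allocates a fresh state in the zone; the rate is set high so that
# rejections indicate a full zone rather than rate limiting.
limit_req_zone $arg_k zone=probe:32k rate=1000r/s;

server {
    listen 8080;

    location /probe {
        limit_req zone=probe;
        return 204;
    }
}
```

Requesting /probe?k=1, /probe?k=2, ... in quick succession until the first 503 (the error log should report an allocation failure for the zone) tells you how many states fit in 32k, and hence roughly how many bytes one state costs for the chosen key.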
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Sep 17 19:47:14 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 17 Sep 2015 20:47:14 +0100 Subject: multiple subdomains In-Reply-To: <573b9f97be2787c4881a79e03091bb4f.NginxMailingListEnglish@forum.nginx.org> References: <573b9f97be2787c4881a79e03091bb4f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150917194714.GA3177@daoine.org> On Thu, Sep 17, 2015 at 09:50:21AM -0400, pwe wrote: Hi there, I think I may have missed some words in your mail; but if you are talking about how to proxy_pass to different internal web servers... > I want to realize the following: > > mail.domain1.com --> mail.domain1.com > mail.domain2.com --> mail.domain2.com > mail.domain3.com --> mail.domain3.com > mail.domain4.com --> mail.domain4.com > mail.domain5.com --> mail.domain5.com server { server_name mail.domain1.com; location / { proxy_pass http://mail.domain1.com; } } and have four other similar server{} blocks. The client machines must resolve mail.domain1.com to this server; this server's system resolver must resolve mail.domain1.com to the address that nginx should talk to. (Or you can hard-code things in nginx.conf.) f -- Francis Daly francis at daoine.org From livingdeadzerg at yandex.ru Fri Sep 18 08:38:19 2015 From: livingdeadzerg at yandex.ru (navern) Date: Fri, 18 Sep 2015 11:38:19 +0300 Subject: https for websocket Message-ID: <55FBCD7B.10108@yandex.ru> Hello, I am configuring websockets with nginx in front-end with this article: https://www.nginx.com/blog/websocket-nginx/. I want to setup nginx with SSL for secure web sockets(wss). I have a question is it right approach to solving this task or websockets has native solution for https setup? 
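On the wss question above: there is no separate WebSocket-native TLS mechanism. wss:// is simply the WebSocket handshake carried over an ssl listener, so the Upgrade/Connection configuration from the linked article applies unchanged. A minimal sketch (hostname, certificate paths, and backend port are hypothetical):

```nginx
server {
    listen 443 ssl;
    server_name ws.example.com;

    ssl_certificate     /etc/nginx/ssl/ws.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/ws.example.com.key;

    location /socket/ {
        proxy_pass http://127.0.0.1:9000;        # backend still speaks plain ws://
        proxy_http_version 1.1;                  # Upgrade requires HTTP/1.1
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 1h;                   # keep idle sockets from timing out
    }
}
```

Clients then connect to wss://ws.example.com/socket/, and nginx terminates TLS in front of the cleartext backend.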
From nginx-forum at nginx.us Fri Sep 18 09:47:03 2015 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 18 Sep 2015 05:47:03 -0400 Subject: http2 observations Message-ID: Some strange things with http2 When using: listen 443 ssl http2; In firefox you need to set these values (which should already be there but may not have the proper settings): network.http.spdy.enabled.http2 = false network.http.spdy.enabled.http2draft = true When both are set true ssl redirects to its root or even to port 80. When using: listen 80 http2; This turns port 80 into a stream (application/octet-stream) Albeit you should not configure http2 with http but nginx does not complain about it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261645,261645#msg-261645 From mdounin at mdounin.ru Fri Sep 18 12:48:45 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 18 Sep 2015 15:48:45 +0300 Subject: http2 observations In-Reply-To: References: Message-ID: <20150918124845.GV62755@mdounin.ru> Hello! On Fri, Sep 18, 2015 at 05:47:03AM -0400, itpp2012 wrote: [...] > When using: > listen 80 http2; > > This turns port 80 into a stream (application/octet-stream) > Albeit you should not configure http2 with http but nginx does not complain > about it. Much like SPDY, HTTP/2 can be used in a "prior knowledge" mode, and this is what relevant configuration does: https://tools.ietf.org/html/rfc7540#section-3.4 This is known to be useful when there is some SSL accelerator in place before nginx. 
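A sketch of that "prior knowledge" arrangement (ports and upstream are hypothetical): an external TLS terminator forwards decrypted traffic to a cleartext listener that speaks only HTTP/2, which is also why pointing an ordinary HTTP/1.x client at such a port yields what looks like an octet-stream:

```nginx
# Cleartext HTTP/2 (h2c, prior knowledge) behind an external SSL accelerator.
# Plain HTTP/1.x clients connecting here will not get a usable response.
upstream app {
    server 127.0.0.1:8080;    # hypothetical application server
}

server {
    listen 8081 http2;        # no "ssl": clients must start in HTTP/2 directly

    location / {
        proxy_pass http://app;
    }
}
```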
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Sep 18 18:45:55 2015 From: nginx-forum at nginx.us (justink101) Date: Fri, 18 Sep 2015 14:45:55 -0400 Subject: Multiple server blocks using spdy, reuseport, deferred Message-ID: <2865aeec6798713c27761d164c427d56.NginxMailingListEnglish@forum.nginx.org> If we have multiple server blocks binding on https using SPDY, reuseport, and deferred nginx fails to start complaining about port already bound: server { listen 443 deferred ssl spdy reuseport; server_name app.foo.com; ... } server { listen 443 deferred ssl spdy reuseport; server_name frontend.bar.com; ... } What is the behavior then if we change to: server { listen 443 deferred ssl spdy reuseport; server_name app.foo.com; ... } server { listen 443 ssl; server_name frontend.bar.com; ... } Will both server blocks use SPDY, reuseport, and deferred, or only the first? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261646,261646#msg-261646 From gfrankliu at gmail.com Fri Sep 18 18:50:16 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Fri, 18 Sep 2015 11:50:16 -0700 Subject: Websocket and proxy pass Message-ID: Hi, If I have upstream running websockets on port1 and non websockets on port2, is it possible to configure nginx reverse proxy to proxy_pass to port1 once it sees websockets incoming? Thanks Frank -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Fri Sep 18 19:09:51 2015 From: nginx-forum at nginx.us (justink101) Date: Fri, 18 Sep 2015 15:09:51 -0400 Subject: Recusive includes Message-ID: <32549dc32946ae2ed0ca778effdbb39f.NginxMailingListEnglish@forum.nginx.org> If I want to include all config files within a directly, and all child directories what is the syntax: If is still: include /etc/nginx/*.conf or is it: include /etc/nginx/**/*.conf Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261647,261647#msg-261647 From nginx-forum at nginx.us Fri Sep 18 19:11:06 2015 From: nginx-forum at nginx.us (justink101) Date: Fri, 18 Sep 2015 15:11:06 -0400 Subject: Recusive includes In-Reply-To: <32549dc32946ae2ed0ca778effdbb39f.NginxMailingListEnglish@forum.nginx.org> References: <32549dc32946ae2ed0ca778effdbb39f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <22240cc6d1166ac7282cae153b94b243.NginxMailingListEnglish@forum.nginx.org> Arg, sorry for the typos. I really wish this forum allowed edits. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261647,261648#msg-261648 From gfrankliu at gmail.com Fri Sep 18 21:35:53 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Fri, 18 Sep 2015 14:35:53 -0700 Subject: realip and remote_port Message-ID: Hi, It seems if I use realip module and X-Forwarded-For header, the $remote_port becomes blank. Any ideas how to work around this? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ahutchings at nginx.com Fri Sep 18 22:39:29 2015 From: ahutchings at nginx.com (Andrew Hutchings) Date: Fri, 18 Sep 2015 23:39:29 +0100 Subject: Multiple server blocks using spdy, reuseport, deferred In-Reply-To: <2865aeec6798713c27761d164c427d56.NginxMailingListEnglish@forum.nginx.org> References: <2865aeec6798713c27761d164c427d56.NginxMailingListEnglish@forum.nginx.org> Message-ID: <55FC92A1.2050009@nginx.com> Hi, >From the documentation (about those two parameters): "The listen directive can have several additional parameters specific to socket-related system calls. These parameters can be specified in any listen directive, but only once for a given address:port pair." It will automatically apply to other directives using that address:port pair. I hope this helps. Kind Regards Andrew On 18/09/15 19:45, justink101 wrote: > If we have multiple server blocks binding on https using SPDY, reuseport, > and deferred nginx fails to start complaining about port already bound: > > server { > listen 443 deferred ssl spdy reuseport; > server_name app.foo.com; > ... > } > > server { > listen 443 deferred ssl spdy reuseport; > server_name frontend.bar.com; > ... > } > > What is the behavior then if we change to: > > server { > listen 443 deferred ssl spdy reuseport; > server_name app.foo.com; > ... > } > > server { > listen 443 ssl; > server_name frontend.bar.com; > ... > } > > Will both server blocks use SPDY, reuseport, and deferred, or only the > first? > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261646,261646#msg-261646 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Andrew Hutchings (LinuxJedi) Senior Developer Advocate, Nginx Inc. Discover best practices for building & delivering apps at scale. nginx.conf 2015: Sept. 22-24, San Francisco. 
http://nginx.com/nginxconf From nginx-forum at nginx.us Sat Sep 19 03:41:21 2015 From: nginx-forum at nginx.us (ha_saeeda) Date: Fri, 18 Sep 2015 23:41:21 -0400 Subject: Set up Proxy Server Message-ID: Dear, I'm trying to set up a proxy server with Nginx. In this case I have two machines both of them on Linux. Machine A: is database server e.g => 192.20.2.100 Machine B: is proxy server e.g=> 192.20.4.50 So I want to access to internet via proxy server like this. DB=>Proxy=>Internet I could set up but the Machine A only access to one URL or Site at same time. How can I access to any URL at internet from Machine A? Setting on Machine B: server { listen 192.20.4.50:80; server_name webtest.com #charset koi8-r; access_log /var/log/nginx/log/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm; client_max_body_size 10m; client_body_buffer_size 128k; proxy_send_timeout 90; proxy_read_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 128k; proxy_buffers 4 256k; proxy_busy_buffers_size 256k; proxy_temp_file_write_size 256k; proxy_connect_timeout 30s; #proxy_pass http://127.0.0.1:80; #proxy_pass http://www.oorsprong.org/; #proxy_pass http://google.com/; proxy_pass http://bing.com/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; Best regards, Saeed. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261649,261649#msg-261649 From nginx-forum at nginx.us Sat Sep 19 06:11:16 2015 From: nginx-forum at nginx.us (mex) Date: Sat, 19 Sep 2015 02:11:16 -0400 Subject: Set up Proxy Server In-Reply-To: References: Message-ID: <1093717fdbee9f95202d81d54e656f2e.NginxMailingListEnglish@forum.nginx.org> > I could set up but the Machine A only access to one URL or Site at > same time. > How can I access to any URL at internet from Machine A? 
> 

nginx is a reverse proxy; what you are looking for is a forward proxy, and you could use Apache or Squid for this. For more information on the differences between reverse and forward proxies, read http://stackoverflow.com/questions/224664/difference-between-proxy-server-and-reverse-proxy-server cheers, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261649,261650#msg-261650 From nginx-forum at nginx.us Sat Sep 19 07:29:42 2015 From: nginx-forum at nginx.us (ha_saeeda) Date: Sat, 19 Sep 2015 03:29:42 -0400 Subject: Set up Proxy Server In-Reply-To: <1093717fdbee9f95202d81d54e656f2e.NginxMailingListEnglish@forum.nginx.org> References: <1093717fdbee9f95202d81d54e656f2e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1de1490f81c5bbd25f9f5a54e6ae10d6.NginxMailingListEnglish@forum.nginx.org> Dear mex, Thanks for this guide, I've solved my problem with Apache Forward Proxy. Regards, Saeed. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261649,261652#msg-261652 From tinhcx at gmail.com Sat Sep 19 13:25:28 2015 From: tinhcx at gmail.com (TINHCX-GMAIL) Date: Sat, 19 Sep 2015 20:25:28 +0700 Subject: How to check valid size of images before crop Message-ID: I am researching the nginx module "http_image_filter_module" and using its crop function, but I wonder how to check the valid size of images before cropping them. _____________________ [E]: tinhcx at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Sep 19 15:05:20 2015 From: nginx-forum at nginx.us (gdarceneaux) Date: Sat, 19 Sep 2015 11:05:20 -0400 Subject: nginx-rtmp-compile-for-windows error??? help In-Reply-To: <41b10799bbc647e809000a296eaad71f.NginxMailingListEnglish@forum.nginx.org> References: <41b10799bbc647e809000a296eaad71f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9a7a77d468ae6a2a3ee0c34a44a4ad58.NginxMailingListEnglish@forum.nginx.org> Alright. It compiles fine w/o RTMP module and no need for OpenSSL. 
I have NASM, and it's in my path. I still get the same error mentioned because when nginx begins it's compile it's trying to do the ms\do_ms.bat instead of the ms\do_nasm.bat. I have looked at various files till I can't see straight and do not know which files to change to make it point to ms\do_nasm.bat to process the assembly language modules. Any help will be appreciated. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259276,261675#msg-261675 From nginx-forum at nginx.us Sat Sep 19 17:36:07 2015 From: nginx-forum at nginx.us (George) Date: Sat, 19 Sep 2015 13:36:07 -0400 Subject: http2 In-Reply-To: <07DA69855DC3462BAF6EF81F616D0F8F@NeiRoze> References: <07DA69855DC3462BAF6EF81F616D0F8F@NeiRoze> Message-ID: <38343f2020e00bd3df2ee80991feac6b.NginxMailingListEnglish@forum.nginx.org> It's already slated for Nginx 1.9.5 community/free edition :) Although I have been playing with HTTP/2 patches since Nginx 1.9.3 :D Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261669,261676#msg-261676 From fsantiago at deviltracks.net Sat Sep 19 22:31:56 2015 From: fsantiago at deviltracks.net (fsantiago at deviltracks.net) Date: Sat, 19 Sep 2015 18:31:56 -0400 Subject: running nginx-running and nginx concurrently Message-ID: <457c490787d922b8b42aa29c19d15e1e@deviltracks.net> Can i install nginx-plus (trial license) concurrently with nginx and run them side by side? - Fabe From mike503 at gmail.com Mon Sep 21 00:25:26 2015 From: mike503 at gmail.com (Michael Shadle) Date: Sun, 20 Sep 2015 17:25:26 -0700 Subject: Trying to use SMTP proxy, but there might be limitations? Message-ID: The goal: To use headers/metadata from the incoming mail message to determine if delivery should be allowed based on the recipients of the message. Example: development/test environments, only allow whitelisted recipients to get messages. I couldn't find any packages, SaaS services or other options out there (except Mandrill with their "rules" capability, but there is no API to manage the whitelist...) I discovered nginx SMTP proxy might actually be able to let me do this though. It would be great to use PHP (since it's my language of choice) to do this - a quick lookup in a database (or cache) - so I liked the possibility of the auth_http option. 
However, I can only test and prove the concept for a single "To: destination" - if there are multiple recipients on the To: line, CC: or Bcc:, nginx still only seems to see one of them. I don't think this is only allowed in SMTP pipelining (which last I checked isn't supported in nginx) I'm not sure there is a way to make it work. It might simply not be supported. Here's my config. It seems to pass things around properly and allow me to send "Auth-Status OK" or "Auth-Status Denied" and properly allow or deny the message. But it doesn't expand the recipient list. http { server { listen 127.0.0.1:8080; server_name localhost; root /var/www; location ~ \.php$ { include snippets/fastcgi-php.conf; fastcgi_pass unix:/var/run/php5-fpm.sock; } } } mail { server_name localhost; auth_http 127.0.0.1:8080/filter.php; xclient off; smtp_capabilities "SIZE 10240000" "VRFY" "ETRN" "ENHANCEDSTATUSCODES" "8BITMIME" "DSN"; smtp_auth none; proxy on; server { listen 25; protocol smtp; } } I examined $_SERVER in PHP: [HTTP_AUTH_METHOD] => none [HTTP_AUTH_USER] => [HTTP_AUTH_PASS] => [HTTP_AUTH_PROTOCOL] => smtp [HTTP_AUTH_LOGIN_ATTEMPT] => 1 [HTTP_CLIENT_IP] => 1.2.3.4 [HTTP_CLIENT_HOST] => [UNAVAILABLE] [HTTP_AUTH_SMTP_HELO] => client-hostname.com [HTTP_AUTH_SMTP_FROM] => MAIL FROM: SIZE=418 [HTTP_AUTH_SMTP_TO] => RCPT TO: ORCPT=rfc822;destination at address.com I was looking around to see if the body of the message or headers came in via stdin, but I can't find much documentation about the SMTP proxy. Also, I'm not sure ultimately it would help me, as I would have to somehow "ignore" the recipients that aren't allowed (which could be any combination, maybe only one is okay, maybe all are okay, maybe 3 out of 5 are okay, etc) I guess at this point my question is ... any ideas? 
From ek at nginx.com Mon Sep 21 09:49:52 2015 From: ek at nginx.com (Ekaterina Kukushkina) Date: Mon, 21 Sep 2015 12:49:52 +0300 Subject: running nginx-running and nginx concurrently In-Reply-To: <457c490787d922b8b42aa29c19d15e1e@deviltracks.net> References: <457c490787d922b8b42aa29c19d15e1e@deviltracks.net> Message-ID: Hello Fabe, Unfortunately, you can't. The 'nginx-plus' is a package name, not a binary/service name, and your current 'nginx' package will be replaced with the 'nginx-plus' package during installation. > On 20 Sep 2015, at 01:31, fsantiago at deviltracks.net wrote: > > Can i install nginx-plus (trial license) concurrently with nginx and run them side by side? > > - Fabe > -- Ekaterina Kukushkina Support Engineer | NGINX, Inc. From rainer at ultra-secure.de Mon Sep 21 10:13:18 2015 From: rainer at ultra-secure.de (Rainer Duffner) Date: Mon, 21 Sep 2015 12:13:18 +0200 Subject: running nginx-running and nginx concurrently In-Reply-To: References: <457c490787d922b8b42aa29c19d15e1e@deviltracks.net> Message-ID: <9896FC28-DC80-4084-908B-227BF4AAE929@ultra-secure.de> > Am 21.09.2015 um 11:49 schrieb Ekaterina Kukushkina : > > Hello Fabe, > > Unfortunately, you can't. > The 'nginx-plus' is a package name not a binary/service name and your > current 'nginx' package will be replaced with 'nginx-plus' package during > installation. > Well, on FreeBSD, the nginx package contains just one binary: /usr/local/sbin/nginx and no libraries. Technically, I'm pretty sure I could unpack that binary from the package and place it just about anywhere in the filesystem (and subsequently run it from there). Of course, I never got the chance to try out nginx-plus - but I assume it's not much different. Though I have trouble understanding the reason behind such a request?
From ek at nginx.com Mon Sep 21 11:04:39 2015 From: ek at nginx.com (Ekaterina Kukushkina) Date: Mon, 21 Sep 2015 14:04:39 +0300 Subject: running nginx-running and nginx concurrently In-Reply-To: <9896FC28-DC80-4084-908B-227BF4AAE929@ultra-secure.de> References: <457c490787d922b8b42aa29c19d15e1e@deviltracks.net> <9896FC28-DC80-4084-908B-227BF4AAE929@ultra-secure.de> Message-ID: <2BFCF7F4-744D-4BF8-9E07-95A427B46F80@nginx.com> Hello Rainer, > On 21 Sep 2015, at 13:13, Rainer Duffner wrote: > > Well, on FreeBSD, the nginx package contains just one binary: /usr/local/sbin/nginx and no libraries. > Technically, I'm pretty sure I could unpack that binary from the package and place it just about anywhere in the filesystem (and subsequently run it from there). Technically, yes. It's possible to unpack the deb, put the binary, configs, etc. into a new hierarchy, and create a start-up script (or run it by hand). But I was talking about the whole package, not the binary. Running two binaries simultaneously is possible. Thank you for clarifying. > > Of course, I never got the chance to try out nginx-plus - but I assume it's not much different. But you can ;) > > Though I have trouble understanding the reason behind such a request? Maybe the currently installed version was compiled with modules which aren't available for nginx-plus(-extras). > -- Ekaterina Kukushkina Support Engineer | NGINX, Inc. From nginx-forum at nginx.us Mon Sep 21 11:34:01 2015 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 21 Sep 2015 07:34:01 -0400 Subject: [ANN] Windows nginx 1.9.5.1 Lizard Message-ID: <93bdda73561c2693cab2dc3681549a86.NginxMailingListEnglish@forum.nginx.org> 10:38 21-9-2015 nginx 1.9.5.1 Lizard Based on nginx 1.9.5 (18-9-2015) with; * Note about using http2 with plain http, https://tools.ietf.org/html/rfc7540#section-3.4 "This is known to be useful when there is some SSL accelerator in place before nginx." tnx.
to Maxim Dounin for this link and explanation
* prove06 will be released shortly for recent changes
+ spdy replaced with http/2 module (check your .conf file(s)) selfcheck: https://www.h2check.org/ (which isn't working atm.) "listen 443 ssl spdy;" => "listen 443 ssl http2;"
+ lua-nginx-module patches for http/2
+ stream examples: load balance your smtp servers stream-smtp-nginx.conf load balance your vpn servers stream-openvpn-nginx.conf
+ lua-nginx-module v0.9.17 (upgraded 15-9-2015)
+ mimetype android.package-archive (apk)
+ pcre-8.37b-r1600 (upgraded 6-9-2015)
+ Source changes back ported
+ Source changes add-on's back ported
+ Changes for nginx_basic: Source changes back ported
* Known broken issues: ajp cache
* Scheduled release: yes
* Additional specifications: see 'Feature list'
* This is the last of the Lizard series, watch out for the new release name
Builds can be found here: http://nginx-win.ecsds.eu/ Follow releases https://twitter.com/nginx4Windows Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261689,261689#msg-261689 From francis at daoine.org Mon Sep 21 11:51:17 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 21 Sep 2015 12:51:17 +0100 Subject: Trying to use SMTP proxy, but there might be limitations? In-Reply-To: References: Message-ID: <20150921115117.GC3177@daoine.org> On Sun, Sep 20, 2015 at 05:25:26PM -0700, Michael Shadle wrote: Hi there, This is all untested by me. > To use headers/metadata from the incoming mail message to determine if > delivery should be allowed based on the recipients of the message. I think that that is not available when using the nginx SMTP (reverse) proxy. You get one MAIL FROM address, plus one RCPT TO address each time that your auth_http url is called. All you can use is the SMTP envelope data. > Example: development/test environments, That should be fine - one nginx mail server{} for dev, one for test. > only allow whitelisted recipients to get messages.
That should be fine - you get each RCPT TO in turn, and your code decides what Auth-Status to return for each one. > However, I can only test and prove the concept for a single "To: > destination" - if there are multiple recipients on the To: line, CC: > or Bcc:, nginx still only seems to see one of them. I don't think this > is only allowed in SMTP pipelining (which last I checked isn't > supported in nginx) Can you set up a test to watch the SMTP traffic that happens? To:, Cc:, Bcc: are all mail client things. Whatever is talking SMTP to your nginx SMTP server should send one MAIL FROM, then multiple lines of RCPT TO, getting a response for each line sent. > I'm not sure there is a way to make it work. It might simply not be supported. > > Here's my config. It seems to pass things around properly and allow me > to send "Auth-Status OK" or "Auth-Status Denied" and properly allow or > deny the message. But it doesn't expand the recipient list. For the SMTP server, there is no recipient list to expand. (At least, in the context you refer to here.) > I examined $_SERVER in PHP: > > [HTTP_AUTH_SMTP_FROM] => MAIL FROM: SIZE=418 > [HTTP_AUTH_SMTP_TO] => RCPT TO: > ORCPT=rfc822;destination at address.com Do you want *that* address to be delivered to? If so, "Auth-Status: OK". After you do that, you should get another request for the next address (I think). > I was looking around to see if the body of the message or headers came > in via stdin, but I can't find much documentation about the SMTP > proxy. Also, I'm not sure ultimately it would help me, as I would have > to somehow "ignore" the recipients that aren't allowed (which could be > any combination, maybe only one is okay, maybe all are okay, maybe 3 > out of 5 are okay, etc) Send the Auth-Status that you want, for each RCPT TO address that you are given. See what breaks. > I guess at this point my question is ... any ideas? What do your logs or "tcpdump" output say happens? What do you want to happen instead? 
Good luck with it, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Mon Sep 21 12:11:47 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Sep 2015 15:11:47 +0300 Subject: nginx-rtmp-compile-for-windows error??? help In-Reply-To: References: <41b10799bbc647e809000a296eaad71f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150921121147.GC62755@mdounin.ru> Hello! On Sat, Sep 19, 2015 at 12:12:57PM -0400, gdarceneaux wrote: > Alright. It compiles fine w/o RTMP module and no need for OpenSSL. I have > NASM, and it's in my path. I still get the same error mentioned because when > nginx begins its compile it's trying to do the ms\do_ms.bat instead of the > ms\do_nasm.bat. I have looked at various files till I can't see straight and > do not know which files to change to make it point to ms\do_nasm.bat to > process the assembly language modules. Any help will be appreciated. As of OpenSSL 1.0.2, OpenSSL fails to compile on Windows using the procedure nginx uses (which used to be the default for years). The build process used by the --with-openssl configure option was adjusted to work around this in nginx 1.9.4, see here: http://hg.nginx.org/nginx/rev/5def760fe95e Please upgrade to the latest nginx version, it should help. Alternatively, you can compile OpenSSL yourself instead of asking nginx to do this. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Sep 21 13:12:15 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Sep 2015 16:12:15 +0300 Subject: Recursive includes In-Reply-To: <32549dc32946ae2ed0ca778effdbb39f.NginxMailingListEnglish@forum.nginx.org> References: <32549dc32946ae2ed0ca778effdbb39f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150921131215.GE62755@mdounin.ru> Hello!
On Fri, Sep 18, 2015 at 03:09:51PM -0400, justink101 wrote:

> If I want to include all config files within a directory, and all child
> directories, what is the syntax:
>
> Is it still:
>
> include /etc/nginx/*.conf
>
> or is it:
>
> include /etc/nginx/**/*.conf

The "include" directive uses OS glob() syntax, so you should use something like:

include /etc/nginx/*/*.conf;

Note that two asterisks are only understood by some shells and won't be different from just one. That is, don't expect recursive behaviour. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Sep 21 13:44:20 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Sep 2015 16:44:20 +0300 Subject: Trying to use SMTP proxy, but there might be limitations? In-Reply-To: <20150921115117.GC3177@daoine.org> References: <20150921115117.GC3177@daoine.org> Message-ID: <20150921134420.GH62755@mdounin.ru> Hello! On Mon, Sep 21, 2015 at 12:51:17PM +0100, Francis Daly wrote: > On Sun, Sep 20, 2015 at 05:25:26PM -0700, Michael Shadle wrote: [...] > > I examined $_SERVER in PHP: > > > > [HTTP_AUTH_SMTP_FROM] => MAIL FROM: SIZE=418 > > [HTTP_AUTH_SMTP_TO] => RCPT TO: > > ORCPT=rfc822;destination at address.com > > Do you want *that* address to be delivered to? If so, "Auth-Status: OK". > > After you do that, you should get another request for the next address > (I think). No, this isn't how it works. With "smtp_auth none;" only the first MAIL FROM with the first RCPT TO is passed to the auth script. Once it responds with "Auth-Status: OK", the connection is passed to the smtp backend returned by the auth script, and an opaque pipe is established - that is, more recipient addresses can be passed there, and even more messages. The backend is expected to be properly configured to do actual recipient checking itself. What the nginx smtp proxy layer does is initial filtering - whether we are willing to talk to this client, or not at all.
-- Maxim Dounin http://nginx.org/ From fsantiago at deviltracks.net Mon Sep 21 14:00:19 2015 From: fsantiago at deviltracks.net (fsantiago at deviltracks.net) Date: Mon, 21 Sep 2015 10:00:19 -0400 Subject: running nginx-running and nginx concurrently In-Reply-To: <2BFCF7F4-744D-4BF8-9E07-95A427B46F80@nginx.com> References: <457c490787d922b8b42aa29c19d15e1e@deviltracks.net> <9896FC28-DC80-4084-908B-227BF4AAE929@ultra-secure.de> <2BFCF7F4-744D-4BF8-9E07-95A427B46F80@nginx.com> Message-ID: <52c61c48fb4da8726941bb411c5314cc@deviltracks.net> Thanks Rainer and Ekaterina. The reason for my request is that I have a production Nginx server (my only server) and I wanted to play with Nginx+ without disturbing my running server. On 2015-09-21 07:04, Ekaterina Kukushkina wrote: > Hello Rainer, > >> On 21 Sep 2015, at 13:13, Rainer Duffner >> wrote: >> >> Well, on FreeBSD, the nginx package contains just one binary: >> /usr/local/sbin/nginx and no libraries. >> Technically, I'm pretty sure I could unpack that binary from the >> package and place it just about anywhere in the filesystem (and >> subsequently run it from there). > > Technically, yes. > It's possible to unpack deb, put binary, configs, etc to new > hierarchy, create start-up script (or run it by hand). > But I said about whole package not binary. > Running two binary simultaneously - possible. > Thank you for clarify. > >> >> Of course, I never got the chance to try out nginx-plus - but I assume >> it's not much different. > > But you can ;) > >> >> Though I have trouble understanding the reason behind such a request? > > May be current installed version was compiled with modules which > aren't available for nginx-plus(-extras). > >> > > -- > Ekaterina Kukushkina > Support Engineer | NGINX, Inc.
> _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ricardo.aravena at coupa.com Mon Sep 21 18:32:09 2015 From: ricardo.aravena at coupa.com (Ricardo Aravena) Date: Mon, 21 Sep 2015 13:32:09 -0500 Subject: Issue with nginx crashing 'segfault' Message-ID: Hi Folks, I was wondering if somebody could shed some light on this issue that we are experiencing.

nginx[22490]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493690 error 4 in libapr-1.so.0.3.9[376e600000+2b000]
nginx[22584]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error 4 in libapr-1.so.0.3.9[376e600000+2b000]
nginx[22582]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error 4 in libapr-1.so.0.3.9[376e600000+2b000]
nginx[22707]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493690 error 4 in libapr-1.so.0.3.9[376e600000+2b000]
nginx[22583]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error 4 in libapr-1.so.0.3.9[376e600000+2b000]
nginx[22806]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error 4 in libapr-1.so.0.3.9[376e600000+2b000]
nginx[22741]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error 4 in libapr-1.so.0.3.9[376e600000+2b000]
nginx[22862]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493690 error 4 in libapr-1.so.0.3.9[376e600000+2b000]
nginx[22866]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error 4 in libapr-1.so.0.3.9[376e600000+2b000]
nginx[22770]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error 4 in libapr-1.so.0.3.9[376e600000+2b000]
nginx[22518]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error 4 in libapr-1.so.0.3.9[376e600000+2b000]
nginx[22896]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error 4 in libapr-1.so.0.3.9[376e600000+2b000]

Basically, the nginx workers are crashing intermittently, causing our application to serve no requests.
We are using nginx with mod_security and mod_passenger. Apparently, this may be related to mod_security but can somebody shed some light as to how to solve it ? Is there a fix in the nginx code base or mod_security code base ? Thanks, Ricardo -- *Follow us on Twitter , LinkedIn , and Google+ * -------------- next part -------------- An HTML attachment was scrubbed... URL: From frederik.nosi at postecom.it Mon Sep 21 18:42:35 2015 From: frederik.nosi at postecom.it (Frederik Nosi) Date: Mon, 21 Sep 2015 20:42:35 +0200 Subject: Issue with nginx crashing 'segfault' In-Reply-To: References: Message-ID: <56004F9B.1020005@postecom.it> Hi, On 09/21/2015 08:32 PM, Ricardo Aravena wrote: > Hi Folks, > > I was wondering of somebody could shed some light on this issue that > we are experiencing. > > nginx[22490]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493690 > error 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22584]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 > error 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22582]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 > error 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22707]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493690 > error 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22583]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 > error 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22806]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 > error 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22741]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 > error 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22862]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493690 > error 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22866]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 > error 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22770]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 > error 4 in libapr-1.so.0.3.9[376e600000+2b000] > 
nginx[22518]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 > error 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22896]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 > error 4 in libapr-1.so.0.3.9[376e600000+2b000] > > Basically, the nginx workers are crashing intermittently, causing our > application to serve no requests. We are using nginx with mod_security > and mod_passenger. Apparently, this may be related to mod_security but > can somebody shed some light as to how to solve it ? Is there a fix in > the nginx code base or mod_security code base ? > This seems strange, as nginx does not use libapr. Since libapr is an Apache library, maybe you're using a mod_security or mod_passenger built for Apache instead of the nginx one? > Thanks, > Ricardo > > > -- > Follow us on Twitter, LinkedIn, and Google+ -------------- next part -------------- An HTML attachment was scrubbed... URL: From stl at wiredrive.com Mon Sep 21 18:49:53 2015 From: stl at wiredrive.com (Scott Larson) Date: Mon, 21 Sep 2015 11:49:53 -0700 Subject: Issue with nginx crashing 'segfault' In-Reply-To: <56004F9B.1020005@postecom.it> References: <56004F9B.1020005@postecom.it> Message-ID: The mod_security module for nginx does require libapr-1. Scott Larson, Lead Systems Administrator | T 310 823 8238 x1106 | M 310 904 8818 On Mon, Sep 21, 2015 at 11:42 AM, Frederik Nosi wrote: > Hi, > > On 09/21/2015 08:32 PM, Ricardo Aravena wrote: > > Hi Folks, > > I was wondering if somebody could shed some light on this issue that we > are experiencing.
> > nginx[22490]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493690 error > 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22584]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error > 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22582]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error > 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22707]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493690 error > 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22583]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error > 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22806]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error > 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22741]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error > 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22862]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493690 error > 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22866]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error > 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22770]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error > 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22518]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error > 4 in libapr-1.so.0.3.9[376e600000+2b000] > nginx[22896]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 error > 4 in libapr-1.so.0.3.9[376e600000+2b000] > > Basically, the nginx workers are crashing intermittently causing our > application no server requests. We are using nginx with mod_security and > mod_passenger. Apparently, this may be related to mod_security but can > somebody shed some light as to how to solve it ? Is there a fix in the > nginx code base or mod_security code base ? > > > This seems strage, as nginx does not use libapr. Being libapr an Apache > library, maybe you're using an mod_security or mod_passenger built for > apache instead of the nginx one? 
> > > > Thanks, > Ricardo > > > -- > * Follow us on Twitter , LinkedIn > , and Google+ > * > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricardo.aravena at coupa.com Mon Sep 21 19:23:07 2015 From: ricardo.aravena at coupa.com (Ricardo Aravena) Date: Mon, 21 Sep 2015 14:23:07 -0500 Subject: Issue with nginx crashing 'segfault' In-Reply-To: References: <56004F9B.1020005@postecom.it> Message-ID: Thanks guys. Is there something in the nginx configs that will allow us to see more logs ? Like for example get a traceback of the error. Cheers, Ricardo On Mon, Sep 21, 2015 at 1:49 PM, Scott Larson wrote: > The mod_security module for nginx does require libapr-1. > > > *[image: userimage]Scott Larson[image: los angeles] > Lead > Systems Administrator[image: wdlogo] [image: > linkedin] [image: facebook] > [image: twitter] > [image: instagram] > T 310 823 8238 x1106 > <310%20823%208238%20x1106> | M 310 904 8818 <310%20904%208818>* > > On Mon, Sep 21, 2015 at 11:42 AM, Frederik Nosi > wrote: > >> Hi, >> >> On 09/21/2015 08:32 PM, Ricardo Aravena wrote: >> >> Hi Folks, >> >> I was wondering of somebody could shed some light on this issue that we >> are experiencing. 
>> >> nginx[22490]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493690 >> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >> nginx[22584]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 >> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >> nginx[22582]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 >> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >> nginx[22707]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493690 >> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >> nginx[22583]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 >> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >> nginx[22806]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 >> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >> nginx[22741]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 >> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >> nginx[22862]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493690 >> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >> nginx[22866]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 >> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >> nginx[22770]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 >> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >> nginx[22518]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 >> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >> nginx[22896]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 >> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >> >> Basically, the nginx workers are crashing intermittently causing our >> application no server requests. We are using nginx with mod_security and >> mod_passenger. Apparently, this may be related to mod_security but can >> somebody shed some light as to how to solve it ? Is there a fix in the >> nginx code base or mod_security code base ? >> >> >> This seems strage, as nginx does not use libapr. 
Being libapr an Apache >> library, maybe you're using an mod_security or mod_passenger built for >> apache instead of the nginx one? >> >> >> >> Thanks, >> Ricardo >> >> >> -- >> * Follow us on Twitter , LinkedIn >> , and Google+ >> * >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Follow us on Twitter , LinkedIn , and Google+ * -------------- next part -------------- An HTML attachment was scrubbed... URL: From stl at wiredrive.com Mon Sep 21 19:30:53 2015 From: stl at wiredrive.com (Scott Larson) Date: Mon, 21 Sep 2015 12:30:53 -0700 Subject: Issue with nginx crashing 'segfault' In-Reply-To: References: <56004F9B.1020005@postecom.it> Message-ID: Is it not creating a core file wherever your working_directory is currently set to? Whenever I see segfaults that's the second thing I'm looking for if there is nothing immediately obvious in the error logs. If you want full blown debugging support you'll likely need to rebuild nginx from source with that option enabled. *[image: userimage]Scott Larson[image: los angeles] Lead Systems Administrator[image: wdlogo] [image: linkedin] [image: facebook] [image: twitter] [image: instagram] T 310 823 8238 x1106 <310%20823%208238%20x1106> | M 310 904 8818 <310%20904%208818>* On Mon, Sep 21, 2015 at 12:23 PM, Ricardo Aravena wrote: > Thanks guys. > > Is there something in the nginx configs that will allow us to see more > logs ? Like for example get a traceback of the error. > > Cheers, > Ricardo > > > On Mon, Sep 21, 2015 at 1:49 PM, Scott Larson wrote: > >> The mod_security module for nginx does require libapr-1. 
>> >> >> *[image: userimage]Scott Larson[image: los angeles] >> Lead >> Systems Administrator[image: wdlogo] [image: >> linkedin] [image: facebook] >> [image: twitter] >> [image: instagram] >> T 310 823 8238 x1106 >> <310%20823%208238%20x1106> | M 310 904 8818 <310%20904%208818>* >> >> On Mon, Sep 21, 2015 at 11:42 AM, Frederik Nosi < >> frederik.nosi at postecom.it> wrote: >> >>> Hi, >>> >>> On 09/21/2015 08:32 PM, Ricardo Aravena wrote: >>> >>> Hi Folks, >>> >>> I was wondering of somebody could shed some light on this issue that we >>> are experiencing. >>> >>> nginx[22490]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493690 >>> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >>> nginx[22584]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 >>> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >>> nginx[22582]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 >>> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >>> nginx[22707]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493690 >>> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >>> nginx[22583]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 >>> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >>> nginx[22806]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 >>> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >>> nginx[22741]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 >>> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >>> nginx[22862]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493690 >>> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >>> nginx[22866]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 >>> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >>> nginx[22770]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 >>> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >>> nginx[22518]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493640 >>> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >>> nginx[22896]: segfault at 10 ip 000000376e616a11 sp 
00007ffed9493640 >>> error 4 in libapr-1.so.0.3.9[376e600000+2b000] >>> >>> Basically, the nginx workers are crashing intermittently causing our >>> application no server requests. We are using nginx with mod_security and >>> mod_passenger. Apparently, this may be related to mod_security but can >>> somebody shed some light as to how to solve it ? Is there a fix in the >>> nginx code base or mod_security code base ? >>> >>> >>> This seems strage, as nginx does not use libapr. Being libapr an Apache >>> library, maybe you're using an mod_security or mod_passenger built for >>> apache instead of the nginx one? >>> >>> >>> >>> Thanks, >>> Ricardo >>> >>> >>> -- >>> * Follow us on Twitter , LinkedIn >>> , and Google+ >>> * >>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > *Follow us on Twitter , LinkedIn > , and Google+ > * > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Sep 21 21:54:56 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 21 Sep 2015 22:54:56 +0100 Subject: Trying to use SMTP proxy, but there might be limitations? 
In-Reply-To: <20150921134420.GH62755@mdounin.ru> References: <20150921115117.GC3177@daoine.org> <20150921134420.GH62755@mdounin.ru> Message-ID: <20150921215456.GD3177@daoine.org> On Mon, Sep 21, 2015 at 04:44:20PM +0300, Maxim Dounin wrote: > On Mon, Sep 21, 2015 at 12:51:17PM +0100, Francis Daly wrote: > > On Sun, Sep 20, 2015 at 05:25:26PM -0700, Michael Shadle wrote: Hi there, > > > [HTTP_AUTH_SMTP_FROM] => MAIL FROM: SIZE=418 > > > [HTTP_AUTH_SMTP_TO] => RCPT TO: > > > ORCPT=rfc822;destination at address.com > > > > Do you want *that* address to be delivered to? If so, "Auth-Status: OK". > > > > After you do that, you should get another request for the next address > > (I think). > > No, this isn't how it works. Ah, thank you for the correction. I should have tested more before responding. > What nginx smtp proxy layer does is initial filtering - whether we are > willing to talk to this client, or not at all. That sounds like a very good one-line summary of the design intention. Now that I look properly, I see that it is indicated towards the end of http://nginx.org/en/docs/mail/ngx_mail_smtp_module.html Thanks, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Mon Sep 21 23:55:12 2015 From: nginx-forum at nginx.us (gdarceneaux) Date: Mon, 21 Sep 2015 19:55:12 -0400 Subject: nginx-rtmp-compile-for-windows error??? help In-Reply-To: <20150921121147.GC62755@mdounin.ru> References: <20150921121147.GC62755@mdounin.ru> Message-ID: <5572606778f7953a3ce2096d1afacb6d.NginxMailingListEnglish@forum.nginx.org> Okay, i'm probably an idiot, but here is what I have done: 1. D/L latest nginx 1.9.4 source from http://hg.nginx.org/nginx/tags 1.9.4.zip 2. pcre8.36 3. zlib 1.2.8 4. openssl 1.0.2d And original configure runs fine. When i run the nmake -f objs/Makefile is when I get error. 
If I read your reply-post correctly, I've done everything you've said, but the openssl compile still fails. According to your post the code should have been adjusted to use NO_ASM in openssl, but it still fails. So please explain how to use an already compiled openssl with an nginx compile where you don't use nginx to compile openssl. I'm not trying to be difficult, but I've read, tried, read some more, and am getting nowhere. I just want to be able to compile and get a working nginx with the rtmp module included. I really do appreciate any help you can give me. Do I need to change some file in the latest nginx to specify NO_ASM? Thanks for your assistance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259276,261707#msg-261707 From mdounin at mdounin.ru Tue Sep 22 02:06:49 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Sep 2015 05:06:49 +0300 Subject: nginx-rtmp-compile-for-windows error??? help In-Reply-To: <5572606778f7953a3ce2096d1afacb6d.NginxMailingListEnglish@forum.nginx.org> References: <20150921121147.GC62755@mdounin.ru> <5572606778f7953a3ce2096d1afacb6d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150922020649.GA7713@mdounin.ru> Hello! On Mon, Sep 21, 2015 at 07:55:12PM -0400, gdarceneaux wrote: > Okay, i'm probably an idiot, but here is what I have done: > 1. D/L latest nginx 1.9.4 source from http://hg.nginx.org/nginx/tags > 1.9.4.zip > 2. pcre8.36 > 3. zlib 1.2.8 > 4. openssl 1.0.2d > > And original configure runs fine. When i run the nmake -f objs/Makefile is > when I get error. If I read your reply-post correctly, I've done everything > you've said, but still fails openssl compile. According to your post the code > should have been adjusted to use NO_ASM in openssl, but it still fails. The important part is the "--with-openssl-opt=no-asm" configure option as shown in the patch linked. If you are using your own ./configure string, not the one from misc/GNUmakefile, make sure to use this option as well.
Also please make sure you are using clean OpenSSL directory. > So please explain how to use an already compiled openssl with an nginx compile > where you don't use nginx to compile openssl. It looks like I was mistaken and this is not something nginx can easily handle on Windows. (On UNIX, you just compile OpenSSL yourself, and then run nginx's ./configure with appropriate --with-cc-opt and --with-ld-opt.) If needed, you still can compile OpenSSL on Windows yourself as long as you use the same prefix as nginx expects. That is, run nginx's configure: ./configure --with-openssl=/path/to/openssl ... Then compile OpenSSL yourself as documented in INSTALL.W32, and using "openssl" prefix: cd /path/to/openssl perl Configure VC-WIN32 no-asm --prefix=openssl ms\do_ms nmake -f ms\ntdll.mak nmake -f ms\ntdll.mak install Then cd back to nginx and run nmake. It should be enough to use the "--with-openssl-opt=no-asm" option though, as recommended above. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Sep 22 02:15:41 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Sep 2015 05:15:41 +0300 Subject: Issue with nginx crashing 'segfault' In-Reply-To: References: Message-ID: <20150922021541.GB7713@mdounin.ru> Hello! On Mon, Sep 21, 2015 at 01:32:09PM -0500, Ricardo Aravena wrote: > Hi Folks, > > I was wondering of somebody could shed some light on this issue that we are > experiencing. > > nginx[22490]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493690 error > 4 in libapr-1.so.0.3.9[376e600000+2b000] [...] > Basically, the nginx workers are crashing intermittently causing our > application no server requests. We are using nginx with mod_security and > mod_passenger. Apparently, this may be related to mod_security but can > somebody shed some light as to how to solve it ? Is there a fix in the > nginx code base or mod_security code base ? The mod_security module for nginx is known to have problems. 
Try looking at the "nginx_refactoring" (or "nginx_refactoring_def") branch of mod_security, it may work for you. -- Maxim Dounin http://nginx.org/ From ricardo.aravena at coupa.com Tue Sep 22 03:41:15 2015 From: ricardo.aravena at coupa.com (Ricardo Aravena) Date: Mon, 21 Sep 2015 22:41:15 -0500 Subject: Issue with nginx crashing 'segfault' In-Reply-To: <20150922021541.GB7713@mdounin.ru> References: <20150922021541.GB7713@mdounin.ru> Message-ID: That's exactly what we tried and seems to be working for us. Thanks! Ricardo On Mon, Sep 21, 2015 at 9:15 PM, Maxim Dounin wrote: > Hello! > > On Mon, Sep 21, 2015 at 01:32:09PM -0500, Ricardo Aravena wrote: > > > Hi Folks, > > > > I was wondering of somebody could shed some light on this issue that we > are > > experiencing. > > > > nginx[22490]: segfault at 10 ip 000000376e616a11 sp 00007ffed9493690 > error > > 4 in libapr-1.so.0.3.9[376e600000+2b000] > > [...] > > > Basically, the nginx workers are crashing intermittently causing our > > application no server requests. We are using nginx with mod_security and > > mod_passenger. Apparently, this may be related to mod_security but can > > somebody shed some light as to how to solve it ? Is there a fix in the > > nginx code base or mod_security code base ? > > The mod_security module for nginx is known to have problems. Try > looking at the "nginx_refactoring" (or "nginx_refactoring_def") > branch of mod_security, it may work for you. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Follow us on Twitter , LinkedIn , and Google+ * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Tue Sep 22 07:59:03 2015 From: nginx-forum at nginx.us (vps4) Date: Tue, 22 Sep 2015 03:59:03 -0400 Subject: nginx 2,upstream question Message-ID: i have 2 backend server A & B, i want the upstream only works with A, when A die then works with B, if A not die , only works with A how can i do Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261712,261712#msg-261712 From me at myconan.net Tue Sep 22 08:01:45 2015 From: me at myconan.net (nanaya) Date: Tue, 22 Sep 2015 17:01:45 +0900 Subject: nginx 2,upstream question In-Reply-To: References: Message-ID: <1442908905.2005338.390188361.088244BC@webmail.messagingengine.com> Hi On Tue, Sep 22, 2015, at 04:59 PM, vps4 wrote: > i have 2 backend server A & B, i want the upstream only works with A, > when A > die then works with B, if A not die , only works with A > how can i do > this? upstream backend { server A; server B backup; } From nginx-forum at nginx.us Tue Sep 22 08:05:36 2015 From: nginx-forum at nginx.us (vps4) Date: Tue, 22 Sep 2015 04:05:36 -0400 Subject: nginx 2,upstream question In-Reply-To: <1442908905.2005338.390188361.088244BC@webmail.messagingengine.com> References: <1442908905.2005338.390188361.088244BC@webmail.messagingengine.com> Message-ID: <69ea617791bff3e47bbd63862b8ea01b.NginxMailingListEnglish@forum.nginx.org> no upstream backend { server A; server B backup; } this will works both of them i want only A B only works when A die Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261712,261714#msg-261714 From nginx-forum at nginx.us Tue Sep 22 08:08:44 2015 From: nginx-forum at nginx.us (vps4) Date: Tue, 22 Sep 2015 04:08:44 -0400 Subject: nginx 2,upstream question In-Reply-To: <1442908905.2005338.390188361.088244BC@webmail.messagingengine.com> References: <1442908905.2005338.390188361.088244BC@webmail.messagingengine.com> Message-ID: <08aa235214c506e06096f6716e6d6274.NginxMailingListEnglish@forum.nginx.org> nanaya Wrote: 
------------------------------------------------------- > Hi > > On Tue, Sep 22, 2015, at 04:59 PM, vps4 wrote: > > i have 2 backend server A & B, i want the upstream only works with > A, > > when A > > die then works with B, if A not die , only works with A > > how can i do > > > > this? > > upstream backend { > server A; > server B backup; > } > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thanks, the backup option does what I want. But it seems that when A is busy, it will use B? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261712,261715#msg-261715 From nginx-forum at nginx.us Tue Sep 22 09:33:57 2015 From: nginx-forum at nginx.us (173279834462) Date: Tue, 22 Sep 2015 05:33:57 -0400 Subject: There is a newer OCSP response but was not provided by the server Message-ID: Hello, nginx is not updating the OCSP response cache. openssl says: [...] Cert Status: good This Update: Sep 9 09:59:46 2015 GMT Next Update: Sep 11 09:59:46 2015 GMT gnutls says "There is a newer OCSP response but was not provided by the server". The configuration says: [...] ssl_stapling on; ssl_stapling_verify on; ssl_stapling_file [...]/ssl/ocsp-response.der; [...] How do you enforce automatic update of the OCSP response cache? Some servers' "next update" occurs later than 48h. How do you enforce, say, a 6-day next update? Thank you for your time. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261716,261716#msg-261716 From mdounin at mdounin.ru Tue Sep 22 13:01:16 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Sep 2015 16:01:16 +0300 Subject: There is a newer OCSP response but was not provided by the server In-Reply-To: References: Message-ID: <20150922130116.GG7713@mdounin.ru> Hello! On Tue, Sep 22, 2015 at 05:33:57AM -0400, 173279834462 wrote: > Hello, > > nginx is not updating the OCSP response cache. > > openssl says: > [...]
> Cert Status: good > This Update: Sep 9 09:59:46 2015 GMT > Next Update: Sep 11 09:59:46 2015 GMT > > gnutls says "There is a newer OCSP response but was not provided by the > server". > > The configuration says: > > [...] > ssl_stapling on; > ssl_stapling_verify on; > ssl_stapling_file [...]/ssl/ocsp-response.der; > [...] > > > How do you enforce automatic update of the OCSP response cache? You are using ssl_stapling_file, that is, nginx will always return content of the file specified and it's you who have to update the file. Quoting docs (http://nginx.org/r/ssl_stapling_file): : When set, the stapled OCSP response will be taken from the : specified file instead of querying the OCSP responder specified in : the server certificate. If you want nginx to fetch OCSP responses for you instead, comment out this directive. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Sep 22 13:10:55 2015 From: nginx-forum at nginx.us (Milos) Date: Tue, 22 Sep 2015 09:10:55 -0400 Subject: Nginx- files currently not available at first load Message-ID: I have a problem with XenForo and I think it's up to my NGINX server configuration. The problem is that sometimes the Googlebot can not access to resources such as images or scripts. Googlebot says the images are currently not available. Example: Here is a Google screenshot: https://xenforo.com/community/attachments/upload_2015-9-22_11-33-36-png.117474/ It looks as if there is a limit for loading files. 
------------------------------------------------------------- Here my nginx.conf: user www-data; worker_processes 8; pid /run/nginx.pid; events { worker_connections 4000; multi_accept on; use epoll; } http { geoip_country /usr/share/GeoIP/GeoIP.dat; map $geoip_country_code $allowed_country { default yes; CN no; TW no; TR no; RU no; IR no; UA no; GE no; TH no; RO no; PH no; BA no; LV no; LT no; EE no; HR no; AL no; RS no; AF no; IN no; BR no; } ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 10; types_hash_max_size 2048; send_timeout 60; server_tokens off; client_max_body_size 100m; client_body_timeout 30; client_header_timeout 30; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; # File Cache Settings ## open_file_cache max=5000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; ## ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } #mail { # # See sample authentication script at: # # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript # # # auth_http localhost/auth.php; # # pop3_capabilities "TOP" "USER"; # # imap_capabilities "IMAP4rev1" "UIDPLUS"; # # server { # listen localhost:110; # protocol pop3; # proxy on; # } # # server { # listen localhost:143; # protocol imap; # proxy on; # } #} ----------------------------------------------------- and here is my 
conf file for this domain: server { listen 5.x.x.x:80; server_name my-website.de; return 301 http://www.my-website.de$request_uri; } server { listen 5.x.x.x:80; server_name www.my-website.de; root /var/www/my-website; index index.php index.html index.htm; access_log /var/log/nginx/my-website.de.access.log; error_log /var/log/nginx/my-website.de.error.log; # Make site accessible from http://localhost/ #server_name localhost; if ($allowed_country = no) { return 444; } location / { try_files $uri $uri/ /index.php?$uri&$args; location ~/admin\.php$ { auth_basic "Administrator Login"; auth_basic_user_file /var/www/htpasswd; root /var/www/my-website; try_files $uri =404; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_read_timeout 300; fastcgi_param HTTP_SCHEME https; include fastcgi_params; } # Media: images, icons, video, audio, HTC location ~* \.(?:jpg|jpeg|gif|png|ico|gz|svg|svgz|mp4|ogg|ogv|webm|htc|woff)$ { expires 1M; access_log off; add_header Cache-Control "public"; } # CSS and Javascript location ~* \.(?:css|js)$ { expires 1y; access_log off; add_header Cache-Control "public"; } location ~ /(internal_data|library) { internal; } location /install { auth_basic "Administrator Login"; auth_basic_user_file /var/www/htpasswd; index index.php index.html index.htm; } location ~ \.php$ { fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_read_timeout 300; fastcgi_send_timeout 180; fastcgi_connect_timeout 60; fastcgi_ignore_client_abort off; fastcgi_intercept_errors on; } } ----------------------------------------------------------- The problem occurs only when the first access. IF I repeat the rendering of the page, there is no problem. Please can someone help me? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261717,261717#msg-261717 From nginx-forum at nginx.us Tue Sep 22 14:57:19 2015 From: nginx-forum at nginx.us (schnix) Date: Tue, 22 Sep 2015 10:57:19 -0400 Subject: variable suggestion - msec_start Message-ID: <66fc0ea0877c2895ca7e04550a5afa32.NginxMailingListEnglish@forum.nginx.org> Hi, can i ask for a variable, $msec_start to provide a timestamp on which nginx was started? this way we could do some logging like $msec_start$connection to get a unique value, even after reload... or does there exist a way we could do that already? thanks alex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261724,261724#msg-261724 From mdounin at mdounin.ru Tue Sep 22 15:20:20 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Sep 2015 18:20:20 +0300 Subject: nginx-1.9.5 Message-ID: <20150922152020.GB13202@mdounin.ru> Changes with nginx 1.9.5 22 Sep 2015 *) Feature: the ngx_http_v2_module (replaces ngx_http_spdy_module). Thanks to Dropbox and Automattic for sponsoring this work. *) Change: now the "output_buffers" directive uses two buffers by default. *) Change: now nginx limits subrequests recursion, not simultaneous subrequests. *) Change: now nginx checks the whole cache key when returning a response from cache. Thanks to Gena Makhomed and Sergey Brester. *) Bugfix: "header already sent" alerts might appear in logs when using cache; the bug had appeared in 1.7.5. *) Bugfix: "writev() failed (4: Interrupted system call)" errors might appear in logs when using CephFS and the "timer_resolution" directive on Linux. *) Bugfix: in invalid configurations handling. Thanks to Markus Linnala. *) Bugfix: a segmentation fault occurred in a worker process if the "sub_filter" directive was used at http level; the bug had appeared in 1.9.4. 
-- Maxim Dounin http://nginx.org/ From r at roze.lv Tue Sep 22 21:00:08 2015 From: r at roze.lv (Reinis Rozitis) Date: Wed, 23 Sep 2015 00:00:08 +0300 Subject: variable suggestion - msec_start In-Reply-To: <66fc0ea0877c2895ca7e04550a5afa32.NginxMailingListEnglish@forum.nginx.org> References: <66fc0ea0877c2895ca7e04550a5afa32.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0446B522ADD64D1A8E41F5EB21E1B922@NeiRoze> > can i ask for a variable, $msec_start to provide a timestamp on which > nginx was started? > this way we could do some logging like $msec_start$connection to get a unique value, even after reload... Depending on the needs you could use $msec (current time with millisecond resolution) and/or $pid (worker process id - obviously can repeat at some point). rr From nginx-forum at nginx.us Tue Sep 22 21:21:27 2015 From: nginx-forum at nginx.us (173279834462) Date: Tue, 22 Sep 2015 17:21:27 -0400 Subject: There is a newer OCSP response but was not provided by the server In-Reply-To: References: Message-ID: <96062bc60d8acfe8c8d29ea4c0698dda.NginxMailingListEnglish@forum.nginx.org> The purpose of the ssl_stapling_file was to prime the cache. Without that file, openssl says "OCSP response: no response sent". For nginx to load the cache by itself, clients have to hit the same worker process a few times. I currently have 8 worker processes, which means that the server needs at least 8 simultaneous clients who are knowledgeable and patient enough to hit the server a few times, purging the cache of their browser each time. This does not seem to work all the time, however. I have a www to non-www redirection with stapling enabled on both. Hitting www does not fill the cache, and I keep seeing "OCSP response: no response sent". Am I missing something?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261716,261744#msg-261744 From aapo.talvensaari at gmail.com Wed Sep 23 03:28:37 2015 From: aapo.talvensaari at gmail.com (Aapo Talvensaari) Date: Wed, 23 Sep 2015 06:28:37 +0300 Subject: Problems with HTTP/2 Message-ID: I tried the 1.9.5 release with http2 and it worked fine, but Ajax request especially were problematic. I did get errors like: net::ERR_SPDY_COMPRESSION_ERROR And the status code was 0. With the former spdy support I didn't have any problems. I'm also using fastcgi and PHP5 in this server where I tried it. What could cause these problems? Regards Aapo -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Sep 23 08:20:44 2015 From: nginx-forum at nginx.us (schnix) Date: Wed, 23 Sep 2015 04:20:44 -0400 Subject: variable suggestion - msec_start In-Reply-To: <0446B522ADD64D1A8E41F5EB21E1B922@NeiRoze> References: <0446B522ADD64D1A8E41F5EB21E1B922@NeiRoze> Message-ID: Thanks for your answer. the connection id is already unique and during keep-alive it will be the same for the same visitor. this is actually what i want, the only problem is that the restart of nginx resets the connection id this makes log analysis complicated having the start time of nginx would solve the problem. manually setting a value for this on every reload is not a very good solution to me :) i could probably write a startup script that includes a file where i increment a value, but asking for that variable could help a lot more :) thanks alex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261724,261749#msg-261749 From nginx-forum at nginx.us Wed Sep 23 08:48:14 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 23 Sep 2015 04:48:14 -0400 Subject: Problems with HTTP/2 In-Reply-To: References: Message-ID: Have you seen this one http://forum.nginx.org/read.php?29,261735,261737#msg-261737 Have you completely removed the spdy module ? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261745,261750#msg-261750 From nginx-forum at nginx.us Wed Sep 23 10:02:14 2015 From: nginx-forum at nginx.us (linsonj) Date: Wed, 23 Sep 2015 06:02:14 -0400 Subject: Trailing slash issue with https redirect - Nginx Message-ID: Hello, I'm using following settings for redirecting all http requests to https Our nginx configuration is as follows server { listen 80; server_name ~^(.*)\.mydomain\.com$; set $servername $1; rewrite ^(.*)$ https://$servername.mydomain.com/$1; error_page 500 502 503 504 /50x.html; } SSL conf file server { listen 443 ssl; server_name ~^(?.+)\.mydomain\.com$; open_file_cache max=1000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; location / { root /var/www/html/WebApps1; } location /server { proxy_pass http://mydomain/server; proxy_set_header Host $subdomain.mydomain.com; proxy_connect_timeout 600; proxy_send_timeout 600; proxy_read_timeout 600; send_timeout 600; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_temp_path /var/nginx/proxy_temp; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Proto https; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header Host $host; proxy_redirect off; proxy_cache sd6; add_header X-Proxy-Cache $upstream_cache_status; proxy_cache_bypass $http_cache_control; } We use wild card DNS. 
When we use https://webapp.mydomain.com, the static pages are loaded from location "/var/www/html/WebApps1" and API requests are forwarded to https://mydomain.com/server Issue is that when I try to access http://webapp.mydomain.com using the current setup, it is redirecting to https://webapp.mydomain.com// (with two trailing slashes at the end of the url). Looking for a solution to remove this double slash issue. I'm not sure what exactly the problem is. Any suggestion would be of great help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261751,261751#msg-261751 From aapo.talvensaari at gmail.com Wed Sep 23 10:08:01 2015 From: aapo.talvensaari at gmail.com (Aapo Talvensaari) Date: Wed, 23 Sep 2015 13:08:01 +0300 Subject: Problems with HTTP/2 In-Reply-To: References: Message-ID: On 23 September 2015 at 11:48, itpp2012 wrote: > Have you seen this one > http://forum.nginx.org/read.php?29,261735,261737#msg-261737 > Have you completely removed the spdy module ? I have seen that, but I'm using the official Ubuntu precise packages from nginx.org, so I kinda think it should be totally removed. I also removed all the spdy settings from configs. It seems that either there are some magical configs somewhere or that http/2 is buggy. Or that it is buggy only with fastcgi + PHP-FPM. Normal requests seem fine, but Backbone.js / jQuery AJAX calls lead to errors. I suspect the POST/PUT requests are the most problematic, because GET requests seem to work. I will do further testing at some point. Still, I never had any of these problems with Nginx+SPDY. Regards Aapo -------------- next part -------------- An HTML attachment was scrubbed...
URL: From me at myconan.net Wed Sep 23 10:10:29 2015 From: me at myconan.net (nanaya) Date: Wed, 23 Sep 2015 19:10:29 +0900 Subject: Trailing slash issue with https redirect - Nginx In-Reply-To: References: Message-ID: <1443003029.84742.391332577.02E485C6@webmail.messagingengine.com> Hi On Wed, Sep 23, 2015, at 07:02 PM, linsonj wrote: > > Issue is that when I try to access http://webapp.mydomain.com using > current > setup, it is redirecting to https://webapp.mydomain.com// ( with two > trailing slash at the end of url). Looking for a solution to remove this > double slash issue. > > I'm not sure what exactly the problem is. Any suggestion would be of > great > help. > Maybe should look closer at this line. > rewrite ^(.*)$ https://$servername.mydomain.com/$1; (additionally, there's better way to do it) From nginx-forum at nginx.us Wed Sep 23 10:34:24 2015 From: nginx-forum at nginx.us (173279834462) Date: Wed, 23 Sep 2015 06:34:24 -0400 Subject: v1.9.5: compiler Message-ID: inflate.c:1507:61: warning: shifting a negative signed value is undefined [-Wshift-negative-value] if (strm == Z_NULL || strm->state == Z_NULL) return -1L << 16; ~~~ ^ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261754,261754#msg-261754 From nginx-forum at nginx.us Wed Sep 23 10:35:57 2015 From: nginx-forum at nginx.us (173279834462) Date: Wed, 23 Sep 2015 06:35:57 -0400 Subject: v1.9.5: compiler In-Reply-To: References: Message-ID: cannot delete - please ignore this thread Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261754,261755#msg-261755 From nginx-forum at nginx.us Wed Sep 23 10:38:13 2015 From: nginx-forum at nginx.us (173279834462) Date: Wed, 23 Sep 2015 06:38:13 -0400 Subject: v1.9.5: compiler warning Message-ID: <9051e02a1ecf6ca5904d578920727005.NginxMailingListEnglish@forum.nginx.org> inflate.c:1507:61: warning: shifting a negative signed value is undefined [-Wshift-negative-value] if (strm == Z_NULL || strm->state == Z_NULL) return -1L << 16; ~~~ ^ Posted 
at Nginx Forum: http://forum.nginx.org/read.php?2,261756,261756#msg-261756 From nginx-forum at nginx.us Wed Sep 23 10:38:54 2015 From: nginx-forum at nginx.us (173279834462) Date: Wed, 23 Sep 2015 06:38:54 -0400 Subject: v1.9.5: compiler warning In-Reply-To: <9051e02a1ecf6ca5904d578920727005.NginxMailingListEnglish@forum.nginx.org> References: <9051e02a1ecf6ca5904d578920727005.NginxMailingListEnglish@forum.nginx.org> Message-ID: <55327159e72d0e9ac4257af62197bac5.NginxMailingListEnglish@forum.nginx.org> I hate this editor... The warning points at the "<< 16" part. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261756,261757#msg-261757 From nginx-forum at nginx.us Wed Sep 23 11:25:31 2015 From: nginx-forum at nginx.us (linsonj) Date: Wed, 23 Sep 2015 07:25:31 -0400 Subject: Trailing slash issue with https redirect - Nginx In-Reply-To: <1443003029.84742.391332577.02E485C6@webmail.messagingengine.com> References: <1443003029.84742.391332577.02E485C6@webmail.messagingengine.com> Message-ID: Yes, the line rewrite ^(.*)$ https://$servername.smartdocsonline.com/$1; could be the reason. Any other way to do this ? or Can I edit the existing rewrite rule to avoid double trailing slash ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261751,261758#msg-261758 From me at myconan.net Wed Sep 23 11:26:39 2015 From: me at myconan.net (nanaya) Date: Wed, 23 Sep 2015 20:26:39 +0900 Subject: Trailing slash issue with https redirect - Nginx In-Reply-To: References: <1443003029.84742.391332577.02E485C6@webmail.messagingengine.com> Message-ID: <1443007599.97406.391384329.398A7D2A@webmail.messagingengine.com> On Wed, Sep 23, 2015, at 08:25 PM, linsonj wrote: > Yes, the line rewrite ^(.*)$ https://$servername.smartdocsonline.com/$1; > could be the reason. > > Any other way to do this ? or Can I edit the existing rewrite rule to > avoid > double trailing slash ? > I suggest finding out what's being captured in there... 
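To make the hint concrete: in `rewrite ^(.*)$ https://$servername.mydomain.com/$1;` the `$1` capture already contains the URI's leading slash, so the literal `/` in the replacement produces `//`. One possible fix is to redirect with `$request_uri` instead, which already starts with a slash (a sketch only -- the named capture and domain are illustrative, not taken from the poster's configuration):

```nginx
server {
    listen 80;
    # capture the subdomain so it can be reused in the redirect target
    server_name ~^(?<subdomain>.+)\.mydomain\.com$;
    # $request_uri already begins with "/", so no extra slash is added
    return 301 https://$subdomain.mydomain.com$request_uri;
}
```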
From nginx-forum at nginx.us Wed Sep 23 11:37:49 2015 From: nginx-forum at nginx.us (locojohn) Date: Wed, 23 Sep 2015 07:37:49 -0400 Subject: nginx-1.9.5 In-Reply-To: <20150922152020.GB13202@mdounin.ru> References: <20150922152020.GB13202@mdounin.ru> Message-ID: <0f9c64aa2bb6627d7affb3148c7120d1.NginxMailingListEnglish@forum.nginx.org> Maxim, How is compression of headers taking place when using the new http_v2 module? Does "spdy_headers_comp" directive have any replacement in the http_v2 module? I looked at the source code but couldn't find any info. Are headers compressed by default now? Andrejs Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261727,261760#msg-261760 From nginx-forum at nginx.us Wed Sep 23 11:47:26 2015 From: nginx-forum at nginx.us (locojohn) Date: Wed, 23 Sep 2015 07:47:26 -0400 Subject: nginx-1.9.5 In-Reply-To: <0f9c64aa2bb6627d7affb3148c7120d1.NginxMailingListEnglish@forum.nginx.org> References: <20150922152020.GB13202@mdounin.ru> <0f9c64aa2bb6627d7affb3148c7120d1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7bdb919246da51a82bd1a70dc5487f3e.NginxMailingListEnglish@forum.nginx.org> I am sorry, I found the answer to my own question: HTTP/2 uses SPDY as a jumping-off point. HTTP/2, however, uses a fixed Huffman code-based header compression algorithm, instead of SPDY's dynamic stream-based compression. This helps to reduce the potential for attacks on the protocol. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261727,261761#msg-261761 From nginx-forum at nginx.us Wed Sep 23 11:55:34 2015 From: nginx-forum at nginx.us (locojohn) Date: Wed, 23 Sep 2015 07:55:34 -0400 Subject: Trailing slash issue with https redirect - Nginx In-Reply-To: References: Message-ID: How about this: server { listen 80; server_name *.mydomain.com; return 301 https://$http_host$request_uri; } Andrejs Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261751,261762#msg-261762 From mdounin at mdounin.ru Wed Sep 23 12:33:19 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 Sep 2015 15:33:19 +0300 Subject: There is a newer OCSP response but was not provided by the server In-Reply-To: <96062bc60d8acfe8c8d29ea4c0698dda.NginxMailingListEnglish@forum.nginx.org> References: <96062bc60d8acfe8c8d29ea4c0698dda.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150923123319.GJ13202@mdounin.ru> Hello! On Tue, Sep 22, 2015 at 05:21:27PM -0400, 173279834462 wrote: > The purpose of the ssl_stapling_file was to prime the cache. Without that > file, openssl says "OCSP response: no response sent". For nginx to load the > cache by itself, clients have to hit the same worker process a few times. I > currently have 8 worker processes, which means that the server needs at > least 8 simultaneous client who are knowledgeable and patient enough to hit > the server a few times, purging the cache of their browser each time. This > does not work seem to work all the times, however. I have a www to non-www > redirection with stapling enabled on both. Hitting www does not fill the > cache, and I keep seeing "OCSP response: no response sent". Am I missing > something? Yes. 
Two basic points: - The ssl_stapling_file directive completely replaces nginx OCSP stapling logic, and it can't be used to only provide some "initial" OCSP response; it is to be used when you want to implement your own OCSP distribution logic (e.g., on a server without direct access to OCSP responder), and/or for debugging. - OCSP responses are loaded once nginx sees connections with Certificate Status Request TLS extension, i.e., a client asks nginx to provide stapled OCSP response (and this happens per-worker). Though not providing an OCSP response isn't a problem at all as OCSP stapling is just an optimization, and there is no need to care about pre-caching things. As long as there are clients who ask your server about an OCSP response - nginx will load it and will provide it to clients as needed. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Sep 23 13:42:32 2015 From: nginx-forum at nginx.us (173279834462) Date: Wed, 23 Sep 2015 09:42:32 -0400 Subject: There is a newer OCSP response but was not provided by the server In-Reply-To: <20150923123319.GJ13202@mdounin.ru> References: <20150923123319.GJ13202@mdounin.ru> Message-ID: > Though not providing an OCSP response isn't a problem at all > as OCSP stapling is just an optimization, and Well, it *is* a problem. Without stapling, each client that hits our server also hits the ocsp server. In our case, the ocsp server is overloaded (StartSSL), and therefore we can help by caching the response and delivering it ourselves. There is another, more general problem: ocsp servers may log the hits. Although this may not happen with StartSSL (we do not know for sure), it is still a concern on privacy of clients and profiling of all sorts. > there is no need to care about pre-caching things. If it works, yes. If it does not work, then we must update manually. One wants to avoid the latter case.
> As long as there are clients who ask your server about an OCSP response >- nginx will load it and will provide it to clients as needed. It is *not* working. Please move on with the wishful thinking. It would be great if things were as you say. In reality, they are not. I think we agree that the following openssl test would be sufficient and good to ask the server about an OCSP response. In practice, nginx is still not delivering as intended. echo QUIT \ | openssl s_client \ -CAfile /etc/ssl/ca-bundle.pem \ -connect $fqdn:443 \ -servername $fqdn \ -tlsextdebug \ -status 2>&1 where fqdn is the server's address. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261716,261767#msg-261767 From mdounin at mdounin.ru Wed Sep 23 14:49:27 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 Sep 2015 17:49:27 +0300 Subject: There is a newer OCSP response but was not provided by the server In-Reply-To: References: <20150923123319.GJ13202@mdounin.ru> Message-ID: <20150923144927.GL13202@mdounin.ru> Hello! On Wed, Sep 23, 2015 at 09:42:32AM -0400, 173279834462 wrote: > > Though not providing an OCSP response isn't a problem at all > > as OCSP stapling is just an optimization, and > > Well. it *is* a problem. > > Without stapling, each client that hits our server also hits the ocsp > server. > In our case, the ocsp server is overloaded (StartSSL), and therefore we > can help by caching the response and delivering it ouselves. The problem here is that OCSP server is overloaded. The fact that nginx isn't able to provide help in this situation to a few first connections is at most lack of optimisation. > There is another, more general problem: ocsp servers may log the hits. > Although this may not happen with StartSSL (we do not know for sure), > it is still a concern on privacy of clients and profiling of all sorts. That's the problem with OCSP, not with nginx. 
Clients who consider this to be a problem can use other ways to check certificate validity, including CRLs and CRLsets. > > there is no need to care about pre-caching things. > > If it works, yes. > If it does not work, then we must update manually. > One wants to avoid the latter case. Consider switching to a CA which works? If your OCSP server is overloaded and not able to respond to requests, nginx won't be able to load an OCSP response as well, and nothing would help. If your OCSP server is responding - at least to some requests - nginx eventually will be able to load a response and will start serving it to clients. > > As long as there are clients who ask your server about an OCSP response > >- nginx will load it and will provide it to clients as needed. > > It is *not* working. Please move on with the wishful thinking. It would be > great if things were as you say. In reality, they are not. > > I think we agree that the following openssl test would be sufficient > and good to ask the server about an OCSP response. In practice, > nginx is still not delivering as intended. > > echo QUIT \ > | openssl s_client \ > -CAfile /etc/ssl/ca-bundle.pem \ > -connect $fqdn:443 \ > -servername $fqdn \ > -tlsextdebug \ > -status 2>&1 > > where fqdn is the server's address. I believe I already explained how it works, but let me repeat. As soon as nginx sees a connection with the Certificate Status Request TLS extension, it will start loading an OCSP response from your CA's OCSP responder. Once the response is loaded, it will be stapled to further connections. It is not possible to return an OCSP response in the connection which was first to request it due to OpenSSL API limitations and the fact that loading a response can take a while. That is, assuming 1 worker process and a just-started nginx, an OCSP staple is expected to be returned on the second connection with "openssl s_client -status ...". 
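[Editor's note: this two-connection behaviour can be observed directly. A rough sketch only — it assumes a live, freshly started single-worker server reachable at $fqdn, so it is not a self-contained script:]

```sh
# Connection 1 triggers the background OCSP fetch; the staple should
# then appear on connection 2 (single worker, freshly started nginx).
for i in 1 2; do
    echo QUIT | openssl s_client \
        -connect "$fqdn:443" -servername "$fqdn" -status 2>&1 \
        | grep -A 2 'OCSP response'
    sleep 2
done
```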
Note though that if your CA OCSP responder is overloaded and not responding, it is likely that nginx won't be able to load a response, much like your clients. In this case nginx will retry loading a response every 5 minutes. As far as I understand, what you are asking about is a persistent cache of OCSP responses, to mitigate your CA's OCSP responder availability issues. This is not a feature currently available in nginx (and you've chosen a very wrong way to ask for a feature in an open source project). -- Maxim Dounin http://nginx.org/ From djczaski at gmail.com Wed Sep 23 15:20:48 2015 From: djczaski at gmail.com (Danomi Czaski) Date: Wed, 23 Sep 2015 11:20:48 -0400 Subject: Nginx Javascript Configuration Message-ID: I read quite a while ago that Nginx plans to move towards a Javascript style configuration file that may have similar functionality to ngx_lua. I'm wondering if there were any announcements at the Nginx Conference this week. From igor at sysoev.ru Wed Sep 23 15:26:44 2015 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 23 Sep 2015 08:26:44 -0700 Subject: Nginx Javascript Configuration In-Reply-To: References: Message-ID: <192967AE-128B-4747-A628-8B3A6134819B@sysoev.ru> It will be announced today. -- Igor Sysoev > On 23 Sep 2015, at 8:20, Danomi Czaski wrote: > > I read quite a while ago that Nginx plans to move towards a Javascript > style configuration file that may have similar functionality to > ngx_lua. I'm wondering if there were any announcements at the Nginx > Conference this week. 
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Sep 23 15:29:13 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 23 Sep 2015 11:29:13 -0400 Subject: There is a newer OCSP response but was not provided by the server In-Reply-To: References: <20150923123319.GJ13202@mdounin.ru> Message-ID: 173279834462 Wrote: ------------------------------------------------------- > > Though not providing an OCSP response isn't a problem at all > > as OCSP stapling is just an optimization, and > > Well, it *is* a problem. > > Without stapling, each client that hits our server also hits the ocsp > server. > In our case, the ocsp server is overloaded (StartSSL), and therefore Google "ocsp response caching nginx" Cache it or prime it (or both). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261716,261774#msg-261774 From pluknet at nginx.com Wed Sep 23 15:31:04 2015 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 23 Sep 2015 18:31:04 +0300 Subject: v1.9.5: compiler warning In-Reply-To: <9051e02a1ecf6ca5904d578920727005.NginxMailingListEnglish@forum.nginx.org> References: <9051e02a1ecf6ca5904d578920727005.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0C8BDAE1-91BA-43D7-8D8F-878B0FF5514F@nginx.com> On Sep 23, 2015, at 1:38 PM, 173279834462 wrote: > inflate.c:1507:61: warning: shifting a negative signed value is undefined > [-Wshift-negative-value] > if (strm == Z_NULL || strm->state == Z_NULL) return -1L << 16; > > ~~~ ^ Looks like you are building nginx with zlib library sources specified manually with the --with-zlib option, and that's an issue in zlib, not nginx. 
If such a warning bothers you, you may want to look at this change: https://github.com/madler/zlib/commit/e54e12 -- Sergey Kandaurov From nginx-forum at nginx.us Wed Sep 23 15:39:13 2015 From: nginx-forum at nginx.us (173279834462) Date: Wed, 23 Sep 2015 11:39:13 -0400 Subject: There is a newer OCSP response but was not provided by the server In-Reply-To: <20150923144927.GL13202@mdounin.ru> References: <20150923144927.GL13202@mdounin.ru> Message-ID: >From my seat, the CA works and NGINX is not returning the OCSP response. In fact, I can generate the stapling manually. Barred the various considerations of what is or is not possible, I think that a more robust solution is in order, for example, nginx could (should at this point?) log the stapling progress, so that sysadmin knows that the process is being executed, possibly with relevant warnings and error messages. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261716,261777#msg-261777 From nginx-forum at nginx.us Wed Sep 23 15:51:34 2015 From: nginx-forum at nginx.us (173279834462) Date: Wed, 23 Sep 2015 11:51:34 -0400 Subject: v1.9.5: compiler warning In-Reply-To: <0C8BDAE1-91BA-43D7-8D8F-878B0FF5514F@nginx.com> References: <0C8BDAE1-91BA-43D7-8D8F-878B0FF5514F@nginx.com> Message-ID: <5539448c0fab8226ca631f0c85f3ebc1.NginxMailingListEnglish@forum.nginx.org> Hot from the oven... Thanks! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261756,261778#msg-261778 From citrin at citrin.ru Wed Sep 23 16:01:09 2015 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Wed, 23 Sep 2015 19:01:09 +0300 Subject: v1.9.5: compiler warning In-Reply-To: <0C8BDAE1-91BA-43D7-8D8F-878B0FF5514F@nginx.com> References: <9051e02a1ecf6ca5904d578920727005.NginxMailingListEnglish@forum.nginx.org> <0C8BDAE1-91BA-43D7-8D8F-878B0FF5514F@nginx.com> Message-ID: <5602CCC5.3080409@citrin.ru> On 09/23/15 18:31, Sergey Kandaurov wrote: > Looks like you are building nginx with zlib library sources specified > manually with the --with-zlib option, and that's an issue in zlib, not nginx. > If such a warning bothers you, you may want to look at this change: > https://github.com/madler/zlib/commit/e54e12 A less obfuscated fix for this warning: https://svnweb.freebsd.org/changeset/base/287541 From mdounin at mdounin.ru Wed Sep 23 16:16:26 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 Sep 2015 19:16:26 +0300 Subject: There is a newer OCSP response but was not provided by the server In-Reply-To: References: <20150923144927.GL13202@mdounin.ru> Message-ID: <20150923161626.GN13202@mdounin.ru> Hello! On Wed, Sep 23, 2015 at 11:39:13AM -0400, 173279834462 wrote: > From my seat, the CA works and NGINX is not returning the > OCSP response. In fact, I can generate the stapling manually. Most problems I've seen with OCSP stapling were about incorrect use of ssl_stapling_verify (without an appropriate set of trusted certificates). Given the symptoms you describe and the fact that the configuration snippet you've quoted contains "ssl_stapling_verify on" (and doesn't contain ssl_trusted_certificate) - it's likely the issue you are facing. > Barred the various considerations of what is or is not possible, > I think that a more robust solution is in order, for example, > nginx could (should at this point?) 
log the stapling progress, > so that sysadmin knows that the process is being executed, > possibly with relevant warnings and error messages. All OCSP stapling errors (including ones related to OCSP response verification) are logged into nginx global error log. Detailed progress can be seen at 'debug' level. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Sep 23 16:49:47 2015 From: nginx-forum at nginx.us (173279834462) Date: Wed, 23 Sep 2015 12:49:47 -0400 Subject: v1.9.5: compiler warning In-Reply-To: <5602CCC5.3080409@citrin.ru> References: <5602CCC5.3080409@citrin.ru> Message-ID: Patch applied to zlib... Zero errors and zero warnings compiling nginx 1.9.5 with clang/llvm 3.7.0. Well done... --- inflate.c.orig 2015-09-23 18:22:54.000000000 +0200 +++ inflate.c 2015-09-23 18:23:45.000000000 +0200 @@ -1504,9 +1504,10 @@ { struct inflate_state FAR *state; - if (strm == Z_NULL || strm->state == Z_NULL) return -1L << 16; + if (strm == Z_NULL || strm->state == Z_NULL) + return (long)(((unsigned long)0 - 1) << 16); state = (struct inflate_state FAR *)strm->state; - return ((long)(state->back) << 16) + + return (long)(((unsigned long)((long)state->back)) << 16) + (state->mode == COPY ? state->length : (state->mode == MATCH ? 
state->was - state->length : 0)); } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261756,261781#msg-261781 From nginx-forum at nginx.us Wed Sep 23 16:53:19 2015 From: nginx-forum at nginx.us (173279834462) Date: Wed, 23 Sep 2015 12:53:19 -0400 Subject: There is a newer OCSP response but was not provided by the server In-Reply-To: <20150923161626.GN13202@mdounin.ru> References: <20150923161626.GN13202@mdounin.ru> Message-ID: I see this: ==> stderr.log <== 2015/09/23 18:33:00 [error] 41509#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: ocsp.startssl.com Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261716,261782#msg-261782 From mdounin at mdounin.ru Wed Sep 23 17:21:36 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 Sep 2015 20:21:36 +0300 Subject: There is a newer OCSP response but was not provided by the server In-Reply-To: References: <20150923161626.GN13202@mdounin.ru> Message-ID: <20150923172136.GO13202@mdounin.ru> Hello! On Wed, Sep 23, 2015 at 12:53:19PM -0400, 173279834462 wrote: > I see this: > > ==> stderr.log <== > 2015/09/23 18:33:00 [error] 41509#0: OCSP_basic_verify() failed (SSL: > error:27069065:OCSP routines:OCSP_basic_verify:certificate verify > error:Verify error:unable to get local issuer certificate) while requesting > certificate status, responder: ocsp.startssl.com So this confirms my guess: you've enabled OCSP response verification but failed to provide appropriate certificates for the verification to succeed. The simplest solution would be to switch off OCSP response verification. Alternatively, provide appropriate certificates via the ssl_trusted_certificate directive, see http://nginx.org/r/ssl_stapling_verify for details. 
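[Editor's note: the two options described above can be sketched in configuration. File names and the resolver address are placeholders for the reader's own setup, not values from this thread:]

```nginx
# Sketch only -- certificate paths below are placeholders.
server {
    listen 443 ssl;

    ssl_certificate         /etc/nginx/ssl/www-bundle.pem;  # server cert + intermediate
    ssl_certificate_key     /etc/nginx/ssl/www.key;

    ssl_stapling            on;
    ssl_stapling_verify     on;
    # Required for verification to succeed: the intermediate and root
    # CA certificates of the server certificate's issuer chain.
    ssl_trusted_certificate /etc/nginx/ssl/ocsp-chain.pem;

    # nginx also needs a resolver to look up the OCSP responder host.
    resolver 127.0.0.1;
}
```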
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Sep 23 17:33:53 2015 From: nginx-forum at nginx.us (173279834462) Date: Wed, 23 Sep 2015 13:33:53 -0400 Subject: There is a newer OCSP response but was not provided by the server In-Reply-To: <20150923172136.GO13202@mdounin.ru> References: <20150923172136.GO13202@mdounin.ru> Message-ID: > The simplest solution would be to switch off OCSP response verification. I have just tried it. It takes two hits from a client to fill the cache of its worker process. There are two problems with this: - the other worker processes are not primed on restart, and therefore clients that require OCSP stapling will print an error instead of rendering the page (my FF does it). - the stapling is not verified... > Alternatively, provide appropriate certificates via the > ssl_trusted_certificate directive, see > http://nginx.org/r/ssl_stapling_verify for details. Yes, done that as well. The ssl_trusted_certificate includes the intermediate and the server's own. However, ... >> For verification to work, the certificate of the server certificate issuer, the root certificate, >> and all intermediate certificates should be configured as trusted using the ssl_trusted_certificate directive. So, nginx wants the root certificate too, which is nonsense. Can't nginx get the root certificate by itself? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261716,261784#msg-261784 From nginx-forum at nginx.us Wed Sep 23 17:35:54 2015 From: nginx-forum at nginx.us (173279834462) Date: Wed, 23 Sep 2015 13:35:54 -0400 Subject: There is a newer OCSP response but was not provided by the server In-Reply-To: References: <20150923172136.GO13202@mdounin.ru> Message-ID: <431a770b679397741eef3bd0511a4f78.NginxMailingListEnglish@forum.nginx.org> After all, the root certificate is part of the local trust store (/etc/ssl/ca-bundle.pem), and nginx knows it (ssl_trusted_certificate points to it). 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261716,261785#msg-261785 From nginx-forum at nginx.us Wed Sep 23 17:39:05 2015 From: nginx-forum at nginx.us (173279834462) Date: Wed, 23 Sep 2015 13:39:05 -0400 Subject: There is a newer OCSP response but was not provided by the server In-Reply-To: References: <20150923172136.GO13202@mdounin.ru> Message-ID: Hold on... ssl_dhparam [...]/ssl/dh2048.pem; ssl_certificate_key [...]/ssl/www.key; ssl_certificate [...]/ssl/www-bundle.pem; ssl_trusted_certificate [...]/ssl/ca-bundle.pem; The intermediate and the server's own are in www-bundle.pem. The local trust store is in ca-bundle.pem; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261716,261786#msg-261786 From djczaski at gmail.com Wed Sep 23 17:39:50 2015 From: djczaski at gmail.com (Danomi Czaski) Date: Wed, 23 Sep 2015 13:39:50 -0400 Subject: Nginx Javascript Configuration In-Reply-To: <192967AE-128B-4747-A628-8B3A6134819B@sysoev.ru> References: <192967AE-128B-4747-A628-8B3A6134819B@sysoev.ru> Message-ID: For those interested: https://www.nginx.com/blog/launching-nginscript-and-looking-ahead On Wed, Sep 23, 2015 at 11:26 AM, Igor Sysoev wrote: > It will be announced today. > > -- > Igor Sysoev > >> On 23 Sep 2015, at 8:20, Danomi Czaski wrote: >> >> I read quite a while ago that Nginx plans to move towards a Javascript >> style configuration file that may have similar functionality to >> ngx_lua. I'm wondering if there were any announcements at the Nginx >> Conference this week. 
>> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Sep 23 17:41:30 2015 From: nginx-forum at nginx.us (173279834462) Date: Wed, 23 Sep 2015 13:41:30 -0400 Subject: There is a newer OCSP response but was not provided by the server In-Reply-To: References: <20150923172136.GO13202@mdounin.ru> Message-ID: Will adjust the files, and see what happens... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261716,261787#msg-261787 From nginx-forum at nginx.us Wed Sep 23 18:22:17 2015 From: nginx-forum at nginx.us (173279834462) Date: Wed, 23 Sep 2015 14:22:17 -0400 Subject: There is a newer OCSP response but was not provided by the server In-Reply-To: References: <20150923172136.GO13202@mdounin.ru> Message-ID: <7e3d27b9419d88bf946687379c46d654.NginxMailingListEnglish@forum.nginx.org> The files are correct as they are: ssl_trusted_certificate includes the intermediate and the root CA, ssl_certificate includes the server's own and the intermediate. The error was ... in a missing ssl_trusted_certificate directive in one of the server clauses. A human error, undetected by nginx. To prevent such errors from happening, considering the complexity of certain configurations and the possibility of human error, it would be very useful to have a static check from nginx, at startup. Moving forward, the server is up and running with > ssl_stapling on; > ssl_stapling_verify on; and no ssl_stapling_file. The last problem standing is ... the priming of the cache for each worker process. When nginx starts, it should prime all of its worker processes. Both the above recommendations are now in the wish list. Thank you for the exchange. I hope it will be useful to others. 
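[Editor's note: the human error described here — one server block silently missing the directive — can be avoided by setting the stapling directives once at http level, where they are inherited by every server block. A sketch with placeholder names:]

```nginx
http {
    # Inherited by all server blocks unless explicitly overridden,
    # so no single vhost can silently miss the trusted-certificate list.
    ssl_stapling            on;
    ssl_stapling_verify     on;
    ssl_trusted_certificate /etc/ssl/ca-bundle.pem;  # placeholder path

    server {
        listen 443 ssl;
        server_name a.example.com;
        # per-vhost ssl_certificate / ssl_certificate_key here
    }

    server {
        listen 443 ssl;
        server_name b.example.com;
        # per-vhost ssl_certificate / ssl_certificate_key here
    }
}
```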
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261716,261790#msg-261790 From vikrant.thakur at gmail.com Wed Sep 23 18:22:28 2015 From: vikrant.thakur at gmail.com (vikrant singh) Date: Wed, 23 Sep 2015 11:22:28 -0700 Subject: Fwd: Config Guidance In-Reply-To: References: Message-ID: It seems I sent this to the wrong mailing list... got no response. So I am forwarding this question to "nginx at nginx.org" ---------- Forwarded message ---------- From: vikrant singh Date: Tue, Sep 22, 2015 at 12:38 PM Subject: Config Guidance To: nginx-forum at nginx.us Hello, I have a quick question on config. On my reverse proxy I need to serve both websocket and normal http requests. For websocket requests I add the following: proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_http_version 1.1; proxy_pass $servAdd; And for normal ones I just do proxy_pass $servAdd; My question is how to unify these two in a single location directive? I can identify a websocket request and add the extra headers in an if block. But as using if is not recommended I am not sure if I should do that. Any advice? Thanks, Vikrant -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Wed Sep 23 18:43:44 2015 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 23 Sep 2015 21:43:44 +0300 Subject: Nginx Javascript Configuration In-Reply-To: References: <192967AE-128B-4747-A628-8B3A6134819B@sysoev.ru> Message-ID: On 23 Sep 2015, at 20:39, Danomi Czaski wrote: > For those interested: > > https://www.nginx.com/blog/launching-nginscript-and-looking-ahead Yes, repository is here: http://hg.nginx.org/njs/ This is a preliminary version. No built-in JS objects, no closures. We appreciate your feedback on the JS interface to nginx internals. 
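[Editor's note: Vikrant's websocket/HTTP unification question above went unanswered in this digest. The commonly recommended pattern avoids "if" by using a map block; a sketch, where $servAdd is the poster's own variable:]

```nginx
# map lives in the http context: Connection becomes "upgrade" when the
# client sent an Upgrade header, and empty otherwise, so one location
# serves both WebSocket and plain HTTP requests without "if".
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      '';
}

server {
    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass $servAdd;
    }
}
```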
-- Igor Sysoev http://nginx.com From maxim at nginx.com Wed Sep 23 18:47:55 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 23 Sep 2015 21:47:55 +0300 Subject: Nginx Javascript Configuration In-Reply-To: References: <192967AE-128B-4747-A628-8B3A6134819B@sysoev.ru> Message-ID: <5602F3DB.70801@nginx.com> On 9/23/15 9:43 PM, Igor Sysoev wrote: > On 23 Sep 2015, at 20:39, Danomi Czaski wrote: > >> For those interested: >> >> https://www.nginx.com/blog/launching-nginscript-and-looking-ahead > > Yes, repository is here: > http://hg.nginx.org/njs/ > > This is preliminary version. > No built-in JS objects, no closures. > We appreciate your feedback on JS interface to nginx internals. > .. and for readers in twitter: we are not going to kill lua or any other great nginx modules. -- Maxim Konovalov From nginx-forum at nginx.us Wed Sep 23 19:29:34 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 23 Sep 2015 15:29:34 -0400 Subject: Nginx Javascript Configuration In-Reply-To: <5602F3DB.70801@nginx.com> References: <5602F3DB.70801@nginx.com> Message-ID: <6e2aa3fe72bdc3ccfabd1247f4cc779d.NginxMailingListEnglish@forum.nginx.org> Is there any module loading order advice? ea. should it be before or after Lua? does/should it matter? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261772,261796#msg-261796 From danilo.moret at descomplica.com.br Wed Sep 23 20:05:49 2015 From: danilo.moret at descomplica.com.br (Danilo Moret) Date: Wed, 23 Sep 2015 17:05:49 -0300 Subject: upstream, aws elb and resolver Message-ID: Hello everyone. 
I'm trying to set up an Nginx proxy on AWS EC2 with the following general layout: mydomain.com > ELB > EC2 Nginx > App's Beanstalk ELB My first configuration was something like this: http { upstream app { server current-app.elasticbeanstalk.com weight 5; server new-app.elasticbeanstalk.com weight 1; } server { listen 80; location / { proxy_pass http://app; } } } But it stopped working twice so far at around the same time after about one or two days. After reading more about that setup I found some suggestions about what could be going on: http://ghost.thekindof.me/nginx-aws-elb-dns-resolution-nginx-resolver-directive-and-black-magic/ https://stackoverflow.com/questions/26956979/error-with-ip-and-nginx-as-reverse-proxy http://gc-taylor.com/blog/2011/11/10/nginx-aws-elb-name-resolution-resolvers/ http://forum.nginx.org/read.php?2,255961,255961#msg-255961 Is it still necessary to use variables to force Nginx to resolve? If yes, to use upstream should I set the servers as variables, or will adding $request_uri do the trick? Bye! -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Wed Sep 23 21:19:06 2015 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 23 Sep 2015 14:19:06 -0700 Subject: Nginx Javascript Configuration In-Reply-To: <6e2aa3fe72bdc3ccfabd1247f4cc779d.NginxMailingListEnglish@forum.nginx.org> References: <5602F3DB.70801@nginx.com> <6e2aa3fe72bdc3ccfabd1247f4cc779d.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 23 Sep 2015, at 12:29, itpp2012 wrote: > > Is there any module loading order advice? e.g. should it be before or after > Lua? does/should it matter? It doesn't matter. -- Igor Sysoev http://nginx.com From nginx-forum at nginx.us Thu Sep 24 01:29:42 2015 From: nginx-forum at nginx.us (gdarceneaux) Date: Wed, 23 Sep 2015 21:29:42 -0400 Subject: nginx-rtmp-compile-for-windows error??? 
help In-Reply-To: <20150922020649.GA7713@mdounin.ru> References: <20150922020649.GA7713@mdounin.ru> Message-ID: <88b605ea8e78f2f2d874a6ff53df7228.NginxMailingListEnglish@forum.nginx.org> Thanks for all of your help. With me though it's 2 steps forward and 1 step back.... It went further in the build process but I received other errors that I'm still researching. Let me ask a hopefully simple question though: Is there a way to compile nginx and the rtmp-module without openssl? I ask because where I'm trying to stream is a closed network, no outside connectivity, and I control who has access manually, so I really don't need https control. Thanks again for your patience, and assistance. Glenn Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259276,261808#msg-261808 From nginx-forum at nginx.us Thu Sep 24 04:39:19 2015 From: nginx-forum at nginx.us (linsonj) Date: Thu, 24 Sep 2015 00:39:19 -0400 Subject: Trailing slash issue with https redirect - Nginx In-Reply-To: References: Message-ID: I was able to resolve the issue using the following rewrite rule. rewrite ^(.*)$ https://$servername.mydomain.com$1; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261751,261810#msg-261810 From livingdeadzerg at yandex.ru Thu Sep 24 11:39:56 2015 From: livingdeadzerg at yandex.ru (navern) Date: Thu, 24 Sep 2015 14:39:56 +0300 Subject: Nginx Javascript Configuration In-Reply-To: References: <5602F3DB.70801@nginx.com> <6e2aa3fe72bdc3ccfabd1247f4cc779d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5603E10C.8040804@yandex.ru> On 24.09.2015 00:19, Igor Sysoev wrote: > On 23 Sep 2015, at 12:29, itpp2012 wrote: >> Is there any module loading order advice? e.g. should it be before or after >> Lua? does/should it matter? > It doesn't matter. > Hello, Could you please clarify: will this module be in the main code base, or should it be installed as a separate module? 
From maxim at nginx.com Thu Sep 24 11:43:00 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 24 Sep 2015 14:43:00 +0300 Subject: Nginx Javascript Configuration In-Reply-To: <5603E10C.8040804@yandex.ru> References: <5602F3DB.70801@nginx.com> <6e2aa3fe72bdc3ccfabd1247f4cc779d.NginxMailingListEnglish@forum.nginx.org> <5603E10C.8040804@yandex.ru> Message-ID: <5603E1C4.2030809@nginx.com> On 9/24/15 2:39 PM, navern wrote: > > On 24.09.2015 00:19, Igor Sysoev wrote: >> 23 ????. 2015 ?., ? 12:29, itpp2012 >> ???????(?): >>> Is there any module loading order advice? ea. should it be before >>> or after >>> Lua? does/should it matter? >> It doesn't matter. >> > Hello, > > Could you please clarify, is this module will be in main code base > or should be installed as separate module? > It should be compiled in as an external module: http://hg.nginx.org/njs/file/11d4d66851ed/README -- Maxim Konovalov From nginx-forum at nginx.us Thu Sep 24 13:00:20 2015 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 24 Sep 2015 09:00:20 -0400 Subject: Nginx Javascript Configuration In-Reply-To: <5603E1C4.2030809@nginx.com> References: <5603E1C4.2030809@nginx.com> Message-ID: Given: unresolved external symbol _nxt_mem_cache_pool_destroy referenced in function _ngx_http_js_cleanup_mem_cache_pool Is libnjs.a required to build or can an existing version (.lib) work as well? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261772,261817#msg-261817 From nginx-forum at nginx.us Thu Sep 24 13:31:20 2015 From: nginx-forum at nginx.us (vbresults) Date: Thu, 24 Sep 2015 09:31:20 -0400 Subject: Preload Files Module for Nginx Message-ID: <320f0d3447807f0d04e3353c08f8ff21.NginxMailingListEnglish@forum.nginx.org> Send your relevant portfolio [past modules made] and a quote to develop and test this module. I will be posting it under my github account for easy access for myself and anyone else that needs it. I won't compromise on the functionality I've requested below even 1 centimeter. 
This module "ngx_preload_files" preloads one or more files on the filesystem into a single variable. This can be used for tasks like concatenating critical css and javascript files and inlining them with a substitution module, all within Nginx. All parameters accept variables in the standard format. This is implemented with a minimal memory and zero performance footprint; all the heavy lifting happens when nginx configuration loads/reloads. == Directives == Syntax: preload_files $variable $files [file1 file2 file3 ...]; Default: ? Context: http, server, location Loads $files into $variable. If any $file is a URI, pull in files with wget or cURL [whatever is available on the system]. --- Syntax: preload_files_context_local $path; Default: ? in server context, $document_root in server context, $uri in location context; Context: http, server, location The relative path/context for local preload_files [within this directive's and child contexts without overrides]. Set to "off" (without quotes) to resolve all $files as absolute paths. --- Syntax: preload_files_context_remote $url; Default: ? Context: http, server, location The relative $url for remote preload_files [within this directive's and child contexts without overrides]. --- Syntax: preload_files_separator $separator; Default: "\n"; Context: http, server, location Defines the separator between preload_files [within this directive's and child contexts without overrides]. --- Syntax: preload_files_substitute $search $replace [$file]; Default: ? Context: http, server, location Search and replace in $files prior to placing contents in $variable [within this directive's and child contexts]. This directive can be used multiple times for different search and replace operations. If optional third parameter $file is passed, search/replace only occurs within that file; $file must *exactly* match it's preload_files $files entry. 
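[Editor's note: the directives specified above are a proposal for a module that does not exist; a hypothetical usage example following the poster's own syntax, with placeholder file names:]

```nginx
# Hypothetical configuration for the proposed (unimplemented)
# ngx_preload_files module, per the directive spec above.
location / {
    preload_files_context_local $document_root/assets;
    preload_files_separator     "\n";
    # Rewrite relative CSS URLs before inlining (placeholder strings).
    preload_files_substitute    "url(../" "url(/assets/" "critical.css";
    preload_files               $critical_assets "critical.css critical.js";

    # $critical_assets could then be inlined by a substitution module.
}
```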
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261818,261818#msg-261818 From arut at nginx.com Thu Sep 24 13:32:29 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 24 Sep 2015 16:32:29 +0300 Subject: Nginx Javascript Configuration In-Reply-To: References: <5603E1C4.2030809@nginx.com> Message-ID: <8A17DCD6-C35D-4D3C-BED8-762DD115E129@nginx.com> Hello, > On 24 Sep 2015, at 16:00, itpp2012 wrote: > > Given: > unresolved external symbol _nxt_mem_cache_pool_destroy referenced in > function _ngx_http_js_cleanup_mem_cache_pool > > Is libnjs.a required to build or can an existing version (.lib) work as > well? Are you trying to build njs on Windows? Windows is not currently supported. -- Roman Arutyunyan From nginx-forum at nginx.us Thu Sep 24 13:36:32 2015 From: nginx-forum at nginx.us (vbresults) Date: Thu, 24 Sep 2015 09:36:32 -0400 Subject: Preload Files Module for Nginx In-Reply-To: <320f0d3447807f0d04e3353c08f8ff21.NginxMailingListEnglish@forum.nginx.org> References: <320f0d3447807f0d04e3353c08f8ff21.NginxMailingListEnglish@forum.nginx.org> Message-ID: Typo; for preload_files_context_local I meant: Default: ? in http context [same effect as "off"], $document_root in server context, $uri in location context. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261818,261820#msg-261820 From nginx-forum at nginx.us Thu Sep 24 14:01:01 2015 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 24 Sep 2015 10:01:01 -0400 Subject: Nginx Javascript Configuration In-Reply-To: <8A17DCD6-C35D-4D3C-BED8-762DD115E129@nginx.com> References: <8A17DCD6-C35D-4D3C-BED8-762DD115E129@nginx.com> Message-ID: Roman Arutyunyan Wrote: ------------------------------------------------------- > Hello, > > > On 24 Sep 2015, at 16:00, itpp2012 wrote: > > > > Given: > > unresolved external symbol _nxt_mem_cache_pool_destroy referenced in > > function _ngx_http_js_cleanup_mem_cache_pool > > > > Is libnjs.a required to build or can an existing version (.lib) work > as > > well? > > Are you trying to build njs on Windows? Of course I am, it builds fine except for this dependency. I can force it to link but not everything works then. > Windows is not currently supported. Not yet. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261772,261821#msg-261821 From zxcvbn4038 at gmail.com Thu Sep 24 18:16:05 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Thu, 24 Sep 2015 14:16:05 -0400 Subject: $upstream_cache_status is EXPIRED? Message-ID: Hello! I'm experimenting with fastcgi caching - I've added $upstream_cache_status to the access log, and I can see that periodically there will be a small cluster of EXPIRED requests for an object. Does EXPIRED imply that the object was fetched from origin each time? ..or that the requests were queued while a request to origin was made? ..or that the expired object was served while an update was fetched? ..or something else? I can understand any of those cases, I just want to be able to explain how many origin fetches were actually done. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pluknet at nginx.com Thu Sep 24 21:23:50 2015 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 25 Sep 2015 00:23:50 +0300 Subject: $upstream_cache_status is EXPIRED? In-Reply-To: References: Message-ID: <553D91A5-66BC-49CD-A17D-1A25C1CC503A@nginx.com> On Sep 24, 2015, at 9:16 PM, CJ Ess wrote: > Hello! > > I'm experimenting with fastcgi caching - I've added $upstream_cache_status to the access log, and I can see that periodically there will be a small cluster of EXPIRED requests for an object. > > Does EXPIRED imply that the object was fetched from origin each time? Correct, but see below. > ..or that the requests were queued while a request to origin was made? For requests queued with proxy_cache_lock, this would be HIT, i.e. after a resource was eventually fetched from origin, queued requests to the same resource are served from the now-populated cache. > ..or that the expired object was served while an update was fetched? That would be UPDATING. > ..or something else? > > I can understand any of those cases, I just want to be able to explain how many origin fetches were actually done. Depending on the fastcgi_cache_revalidate setting, EXPIRED is either simply due to an outdated cached response, or a failed revalidation. Either way, a full response is served from an upstream (origin) server. -- Sergey Kandaurov From adam at jooadam.hu Fri Sep 25 14:20:41 2015 From: adam at jooadam.hu (=?UTF-8?B?Sm/DsyDDgWTDoW0=?=) Date: Fri, 25 Sep 2015 16:20:41 +0200 Subject: Setting headers Message-ID: Hi, Something that has long bothered me, and perhaps has already been discussed, but I haven't found anything on it: why is there no way to set arbitrary headers without using an extension module? I know that the `add_header` directive has been augmented with the `always` flag recently, but there's still no way to freely change headers. I personally think this is something that one could expect from a web server. 
My specific scenario is that I want to implement server-side content-negotiation, by conditionally applying an XSL stylesheet and changing the content-type accordingly. This really made me jump through hoops using the built-in modules and in specific cases simply broke. So I gave in and used Headers More, but now that HTTP/2 support is out, I would like to use it. Thomas Ward's PPA, however, is not yet updated, because the Lua module does not build with the new release (https://twitter.com/teward001/status/646775480931123201), so my only option would be compiling from source, but I would prefer not to lose the benefits of package management. Is there any way to do this within the confines of the built-in modules? Thanks, Ádám From highclass99 at gmail.com Fri Sep 25 18:38:04 2015 From: highclass99 at gmail.com (highclass99) Date: Sat, 26 Sep 2015 03:38:04 +0900 Subject: Nginx for windows in high traffic production environment? Message-ID: Hello, I am curious whether nginx is suitable for Windows in a high-traffic production environment? http://nginx.org/en/docs/windows.html says "Version of nginx for Windows uses the native Win32 API (not the Cygwin emulation layer). Only the select() connection processing method is currently used, so high performance and scalability should not be expected. Due to this and some other known issues version of nginx for Windows is considered to be a beta version." Is this still the case? Google searches show http://nginx-win.ecsds.eu/ as a top result. Is http://nginx-win.ecsds.eu/ performance-wise better than the nginx.org builds for Windows, or just a mostly feature-rich compile? Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Sep 26 13:33:20 2015 From: nginx-forum at nginx.us (gdarceneaux) Date: Sat, 26 Sep 2015 09:33:20 -0400 Subject: nginx-rtmp-compile-for-windows error???
help In-Reply-To: <88b605ea8e78f2f2d874a6ff53df7228.NginxMailingListEnglish@forum.nginx.org> References: <20150922020649.GA7713@mdounin.ru> <88b605ea8e78f2f2d874a6ff53df7228.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0957588a859e776bc1886eef46b7494f.NginxMailingListEnglish@forum.nginx.org> I finally got it to work. I was getting errors at the last from libeay32.lib (which I didn't have) so downloaded it, placed in path and then everything compiled fine. Thanks again for your help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259276,261857#msg-261857 From kworthington at gmail.com Sat Sep 26 14:49:25 2015 From: kworthington at gmail.com (Kevin Worthington) Date: Sat, 26 Sep 2015 10:49:25 -0400 Subject: [nginx-announce] nginx-1.9.5 In-Reply-To: <20150922152026.GC13202@mdounin.ru> References: <20150922152026.GC13202@mdounin.ru> Message-ID: Hello Nginx users, My apologies for the delay - I was at Nginx.Conf 2015 in San Francisco and my Windows machine was inaccessible in NY! Now available: Nginx 1.9.5 for Windows http://tiny.cc/nginxwin195 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Sep 22, 2015 at 8:20 AM, Maxim Dounin wrote: > Changes with nginx 1.9.5 22 Sep > 2015 > > *) Feature: the ngx_http_v2_module (replaces ngx_http_spdy_module). > Thanks to Dropbox and Automattic for sponsoring this work. > > *) Change: now the "output_buffers" directive uses two buffers by > default. > > *) Change: now nginx limits subrequests recursion, not simultaneous > subrequests. 
> > *) Change: now nginx checks the whole cache key when returning a > response from cache. > Thanks to Gena Makhomed and Sergey Brester. > > *) Bugfix: "header already sent" alerts might appear in logs when using > cache; the bug had appeared in 1.7.5. > > *) Bugfix: "writev() failed (4: Interrupted system call)" errors might > appear in logs when using CephFS and the "timer_resolution" > directive > on Linux. > > *) Bugfix: in invalid configurations handling. > Thanks to Markus Linnala. > > *) Bugfix: a segmentation fault occurred in a worker process if the > "sub_filter" directive was used at http level; the bug had appeared > in 1.9.4. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfrankliu at gmail.com Sat Sep 26 17:38:40 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Sat, 26 Sep 2015 10:38:40 -0700 Subject: Keepalive timeout Message-ID: Hi, If I have set the client facing keep alive timeout to 30s, but nginx takes longer to respond a given request due to slow backend in a reverse proxy setup, will nginx drop the client keep alive connection since its been idle for too long, or will nginx keep it until it sends the response ? Thanks Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Sep 26 21:03:15 2015 From: nginx-forum at nginx.us (Per Hansson) Date: Sat, 26 Sep 2015 17:03:15 -0400 Subject: nginx systemd reload service command skips configtest Message-ID: <9767d9ac57b01ac465a6756438c66a4f.NginxMailingListEnglish@forum.nginx.org> Hi, the "nginx.service" file shipped with systemd rpm's both in nginx's stable repository and epel for CentOS7 / RHEL7 do not perform a "configtest" when "systemctl reload nginx" is issued. 
So if there is an error in the configuration file nginx is killed but not started due to the faulty configuration. It's possible to mitigate this in the nginx.service file by having two "ExecReload" commands on separate lines like so: # grep ExecReload /usr/lib/systemd/system/nginx.service ExecReload=/usr/sbin/nginx -t ExecReload=/bin/kill -s HUP $MAINPID This way if the configtest in the first line fails nginx is never killed, the command does not print any output but that is a systemd issue so I think unavoidable. I can't guarantee that this is a supported way to do this but it works for me at least :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261864,261864#msg-261864 From nginx-forum at nginx.us Sun Sep 27 08:32:56 2015 From: nginx-forum at nginx.us (cacrus) Date: Sun, 27 Sep 2015 04:32:56 -0400 Subject: nginx nested location and different basic authentication file Message-ID: <6e396f7da3d700dca5eece4cc49369a1.NginxMailingListEnglish@forum.nginx.org> Hi , I am trying to setup different authentication based on different strings in the $request , here is my case . location /parent/ { set $basic_file /nginx/conf/.htpasswd; if ($request_uri ~ (visualize|dashboard|settings)){ set $basic_file /nginx/conf/.dev_pass; } proxy_pass http:///; auth_basic "Restricted"; auth_basic_user_file $basic_file; } I would like file .htpasswd to be used by default and in case of $request_uri having (visualize|dashboard|settings) , auth should happen via .dev_pass . For some reason the authentication is happening only via htpasswd even if i have (visualize|dashboard|settings) in the URI . Any idea , what am i missing? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261867,261867#msg-261867 From cj.wijtmans at gmail.com Sun Sep 27 15:02:33 2015 From: cj.wijtmans at gmail.com (Christ-Jan Wijtmans) Date: Sun, 27 Sep 2015 17:02:33 +0200 Subject: nginx nested location and different basic authentication file In-Reply-To: <6e396f7da3d700dca5eece4cc49369a1.NginxMailingListEnglish@forum.nginx.org> References: <6e396f7da3d700dca5eece4cc49369a1.NginxMailingListEnglish@forum.nginx.org> Message-ID: Not an nginx expert, but I think you can use a map: http://nginx.org/en/docs/http/ngx_http_map_module.html You don't even need an if. Live long and prosper, Christ-Jan Wijtmans https://github.com/cjwijtmans http://facebook.com/cj.wijtmans http://twitter.com/cjwijtmans On Sun, Sep 27, 2015 at 10:32 AM, cacrus wrote: > Hi , > > I am trying to setup different authentication based on different strings in > the $request , here is my case . > > location /parent/ { > set $basic_file /nginx/conf/.htpasswd; > if ($request_uri ~ (visualize|dashboard|settings)){ > set $basic_file /nginx/conf/.dev_pass; > } > proxy_pass http:///; > auth_basic "Restricted"; > auth_basic_user_file $basic_file; > } > > I would like file .htpasswd to be used by default and in case of > $request_uri having (visualize|dashboard|settings) , auth should happen via > .dev_pass . > > For some reason the authentication is happening only via htpasswd even if > i have (visualize|dashboard|settings) in the URI . > > Any idea , what am i missing?
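The map-based approach suggested above could look roughly like this (an untested sketch; the file paths and URI pattern are taken from the original post, and it assumes auth_basic_user_file accepts variables, as the original "if"/"set" version already relies on):

```nginx
# The map block must sit at http level, outside any server block.
# It replaces the "set" + "if" pair inside the location block.
map $request_uri $basic_file {
    default                          /nginx/conf/.htpasswd;  # used unless a pattern matches
    ~(visualize|dashboard|settings)  /nginx/conf/.dev_pass;  # regex match on the request URI
}
```

The location block then keeps only the proxy_pass, auth_basic and auth_basic_user_file $basic_file; directives, with no "if" involved.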
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261867,261867#msg-261867 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From cj.wijtmans at gmail.com Sun Sep 27 15:06:00 2015 From: cj.wijtmans at gmail.com (Christ-Jan Wijtmans) Date: Sun, 27 Sep 2015 17:06:00 +0200 Subject: nginx systemd reload service command skips configtest In-Reply-To: <9767d9ac57b01ac465a6756438c66a4f.NginxMailingListEnglish@forum.nginx.org> References: <9767d9ac57b01ac465a6756438c66a4f.NginxMailingListEnglish@forum.nginx.org> Message-ID: I support this thought. Live long and prosper, Christ-Jan Wijtmans https://github.com/cjwijtmans http://facebook.com/cj.wijtmans http://twitter.com/cjwijtmans On Sat, Sep 26, 2015 at 11:03 PM, Per Hansson wrote: > Hi, the "nginx.service" file shipped with systemd rpm's both in nginx's > stable repository and epel for CentOS7 / RHEL7 do not perform a "configtest" > when "systemctl reload nginx" is issued. > So if there is an error in the configuration file nginx is killed but not > started due to the faulty configuration. > It's possible to mitigate this in the nginx.service file by having two > "ExecReload" commands on separate lines like so: > # grep ExecReload /usr/lib/systemd/system/nginx.service > ExecReload=/usr/sbin/nginx -t > ExecReload=/bin/kill -s HUP $MAINPID > > This way if the configtest in the first line fails nginx is never killed, > the command does not print any output but that is a systemd issue so I think > unavoidable. 
> > I can't guarantee that this is a supported way to do this but it works for > me at least :) > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261864,261864#msg-261864 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From cj.wijtmans at gmail.com Sun Sep 27 15:14:01 2015 From: cj.wijtmans at gmail.com (Christ-Jan Wijtmans) Date: Sun, 27 Sep 2015 17:14:01 +0200 Subject: Preload Files Module for Nginx In-Reply-To: References: <320f0d3447807f0d04e3353c08f8ff21.NginxMailingListEnglish@forum.nginx.org> Message-ID: Doesnt linux cache files in RAM already? Live long and prosper, Christ-Jan Wijtmans https://github.com/cjwijtmans http://facebook.com/cj.wijtmans http://twitter.com/cjwijtmans On Thu, Sep 24, 2015 at 3:36 PM, vbresults wrote: > Typo; for preload_files_context_local I meant: > > Default: ? in http context [same effect as "off"], $document_root in server > context, $uri in location context. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261818,261820#msg-261820 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From me at myconan.net Sun Sep 27 15:20:04 2015 From: me at myconan.net (nanaya) Date: Mon, 28 Sep 2015 00:20:04 +0900 Subject: nginx systemd reload service command skips configtest In-Reply-To: References: <9767d9ac57b01ac465a6756438c66a4f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1443367204.400022.394717025.43808AA0@webmail.messagingengine.com> > On Sat, Sep 26, 2015 at 11:03 PM, Per Hansson > wrote: > > Hi, the "nginx.service" file shipped with systemd rpm's both in nginx's > > stable repository and epel for CentOS7 / RHEL7 do not perform a "configtest" > > when "systemctl reload nginx" is issued. > > So if there is an error in the configuration file nginx is killed but not > > started due to the faulty configuration. 
Pretty sure the original process won't be stopped in case of a new faulty configuration when it gets a SIGHUP or SIGUSR2... (Unless systemd is doing something else. I only tested by manually sending the signal to the master process.) From cj.wijtmans at gmail.com Sun Sep 27 16:27:18 2015 From: cj.wijtmans at gmail.com (Christ-Jan Wijtmans) Date: Sun, 27 Sep 2015 18:27:18 +0200 Subject: nginx systemd reload service command skips configtest In-Reply-To: <1443367204.400022.394717025.43808AA0@webmail.messagingengine.com> References: <9767d9ac57b01ac465a6756438c66a4f.NginxMailingListEnglish@forum.nginx.org> <1443367204.400022.394717025.43808AA0@webmail.messagingengine.com> Message-ID: I can confirm systemd will stop nginx. Live long and prosper, Christ-Jan Wijtmans https://github.com/cjwijtmans http://facebook.com/cj.wijtmans http://twitter.com/cjwijtmans On Sun, Sep 27, 2015 at 5:20 PM, nanaya wrote: > >> On Sat, Sep 26, 2015 at 11:03 PM, Per Hansson >> wrote: >> > Hi, the "nginx.service" file shipped with systemd rpm's both in nginx's >> > stable repository and epel for CentOS7 / RHEL7 do not perform a "configtest" >> > when "systemctl reload nginx" is issued. >> > So if there is an error in the configuration file nginx is killed but not >> > started due to the faulty configuration. > > Pretty sure the original process won't be stopped in case of a new faulty > configuration when it gets a SIGHUP or SIGUSR2... > > (Unless systemd is doing something else. I only tested by manually sending > the signal to the master process.) > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From gfrankliu at gmail.com Sun Sep 27 17:24:22 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Sun, 27 Sep 2015 10:24:22 -0700 Subject: nginx 1.9.5 and realip Message-ID: Hi all, Just tried the latest 1.9.5 rpm that has the realip module enabled. The $remote_port variable becomes blank. Is that known?
Is there another way I can get the remote_port? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From rikske at deds.nl Mon Sep 28 13:40:53 2015 From: rikske at deds.nl (rikske at deds.nl) Date: Mon, 28 Sep 2015 15:40:53 +0200 Subject: Nginx HTTP/2 module (ALPN) TLS on RHEL 7.* Message-ID: Dear, Does the Nginx HTTP/2 module work on RHEL 7.1 with (ALPN) TLS? It seems like the HTTP/2 module is enabled by default in your RHEL 7.1 based rpm and srpm. Your Nginx website says: "Note that accepting HTTP/2 connections over TLS requires the "Application-Layer Protocol Negotiation" (ALPN) TLS extension support, which is available only since OpenSSL version 1.0.2. Using the "Next Protocol Negotiation" (NPN) TLS extension for this purpose (available since OpenSSL version 1.0.1) is not guaranteed." RHEL 7.1 is using OpenSSL 1.0.1e, with a whole bunch of patches and backports. Can't find anything in the changelog of RHEL 7.1's OpenSSL about ALPN. The only thing I can find is "Support for Application Layer Protocol Negotiation (ALPN) has been added." in RHEL's GnuTLS. Thanks, Regards, Rik Ske From d33tah at gmail.com Mon Sep 28 14:24:21 2015 From: d33tah at gmail.com (Jacek Wielemborek) Date: Mon, 28 Sep 2015 16:24:21 +0200 Subject: http digest + proxy doesn't work on /something?x=3? Message-ID: <56094D95.9060604@gmail.com> List, It took me a while to actually find this mailing list; I have a question regarding the HTTP digest nginx module in combination with proxy_pass. I tried the attached configuration file in combination with nginx-1.6.3 and kept getting asked for a password infinitely when I try to access any resource other than /. How can this problem be solved?
I asked the same question on serverfault.com once: http://serverfault.com/q/717235/143824 Thanks, d33tah -------------- next part -------------- worker_processes 1; events { worker_connections 1024; } daemon off; http { error_log stderr debug; keepalive_timeout 65; gzip on; server { auth_digest_user_file digest; listen 9996; server_name some.domain.net; auth_digest 'Realm'; location / { proxy_pass http://127.0.0.1:8000; } } } -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From d33tah at gmail.com Mon Sep 28 14:29:09 2015 From: d33tah at gmail.com (Jacek Wielemborek) Date: Mon, 28 Sep 2015 16:29:09 +0200 Subject: nginScript - is it fuzzed? Message-ID: <56094EB5.4010409@gmail.com> Hello, I just read this blog post: https://www.nginx.com/blog/launching-nginscript-and-looking-ahead Given that afl-fuzz was already successful in finding security bugs in Nginx [1], I figured I'd ask whether nginScript was or is planned to be fuzzed as well. A quick Google search didn't seem to point me to anything. Cheers, d33tah [1] https://lolware.net/2015/04/28/nginx-fuzzing.html [2] https://www.google.com/search?hl=en&q=nginscript+fuzzing -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From d33tah at gmail.com Mon Sep 28 14:33:10 2015 From: d33tah at gmail.com (Jacek Wielemborek) Date: Mon, 28 Sep 2015 16:33:10 +0200 Subject: nginScript - is it fuzzed? In-Reply-To: <56094EB5.4010409@gmail.com> References: <56094EB5.4010409@gmail.com> Message-ID: <56094FA6.3080504@gmail.com> On 28.09.2015 at 16:29, Jacek Wielemborek wrote: > [1] https://lolware.net/2015/04/28/nginx-fuzzing.html Oh crap, sorry, actually I misread the article and it looks like nothing was found.
Still, running the new language through a fuzzer sounds worthwhile - if anyone is interested in this task, I can provide some pointers on how this can be done. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From mdounin at mdounin.ru Mon Sep 28 14:54:58 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 28 Sep 2015 17:54:58 +0300 Subject: Setting headers In-Reply-To: References: Message-ID: <20150928145458.GU13202@mdounin.ru> Hello! On Fri, Sep 25, 2015 at 04:20:41PM +0200, Joó Ádám wrote: > Something that has long bothered me, and perhaps has already been > discussed, but I haven't found anything on it: why is there no way to > set arbitrary headers without using an extension module? I know that > the `add_header` directive has been augmented with the `always` flag > recently, but there's still no way to freely change headers. I > personally think this is something that one could expect from a web > server. The "add_header" directive allows adding headers, and the "proxy_hide_header" directive allows hiding headers received from an upstream server. These two combined allow mostly arbitrary modifications. If you think these are not enough, please provide specific use cases. > My specific scenario is that I want to implement server-side > content-negotiation, by conditionally applying an XSL stylesheet and > changing the content-type accordingly. When using the nginx XSLT filter module, there should be no need to modify Content-Type at all. Use the xsl:output media-type attribute instead to provide the correct media type from the stylesheet itself.
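The add_header/proxy_hide_header combination described above can be sketched as follows (illustrative only; X-Powered-By is just an example header and "backend" is a placeholder upstream name):

```nginx
location / {
    proxy_pass        http://backend;      # placeholder upstream
    proxy_hide_header X-Powered-By;        # drop the value received from the upstream
    add_header        X-Powered-By nginx;  # then emit a replacement value of our own
}
```

Hiding the upstream header first and then adding a new one effectively replaces it, which covers most header-rewriting needs without a third-party module.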
-- Maxim Dounin http://nginx.org/ From dewanggaba at xtremenitro.org Mon Sep 28 14:54:36 2015 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Mon, 28 Sep 2015 21:54:36 +0700 Subject: Nginx HTTP/2 module (ALPN) TLS on RHEL 7.* In-Reply-To: References: Message-ID: <560954AC.1070409@xtremenitro.org> Hello! On 09/28/2015 08:40 PM, rikske at deds.nl wrote: > Dear, > > Does the Nginx HTTP/2 module work on RHEL 7.1 with (ALPN) TLS? > > It seems like the HTTP/2 module is enabled by default in your RHEL 7.1 > based rpm and srpm. > > Your Nginx website writes about: > > "Note that accepting HTTP/2 connections over TLS requires the > "Application-Layer Protocol Negotiation" (ALPN) TLS extension support, > which is available only since OpenSSL version 1.0.2. Using the "Next > Protocol Negotiation" (NPN) TLS extension for this purpose > (available since OpenSSL version 1.0.1) is not guaranteed." > > RHEL 7.1 is using OpenSSL 1.0.1e, with a whole bunch of patches and > backports. > > Can't find anything in the changelog of RHEL 7.1's OpenSSL about ALPN. > The only thing I can find is "Support for Application Layer Protocol > Negotiation (ALPN) has been added." in RHEL's GnuTLS. Yes, RHEL uses OpenSSL 1.0.1e-42, but I've compiled against OpenSSL 1.0.2d + crypto-policies under CentOS 7, and it was successfully deployed on my sandbox. The rpm was compiled on Fedora 22 and ported to el7 using mock. https://gitlab.com/antituhan/rpms/tree/master. $ openssl version OpenSSL 1.0.2d-fips 9 Jul 2015 $ uname -a Linux 3.10.0-229.14.1.el7.x86_64 #1 SMP Tue Sep 15 15:05:51 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux Enjoy.
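For context, once nginx 1.9.5 is built against an OpenSSL with ALPN support (1.0.2+, as discussed above), enabling HTTP/2 over TLS is a per-listen-socket change (a sketch; the server name and certificate paths are placeholders):

```nginx
server {
    listen 443 ssl http2;   # the "http2" parameter replaces the removed "spdy" one
    server_name example.com;

    ssl_certificate     /etc/ssl/example.crt;  # placeholder paths
    ssl_certificate_key /etc/ssl/example.key;
}
```

With an OpenSSL older than 1.0.2 the same configuration still serves clients, but negotiation falls back to NPN or plain HTTP/1.1, which matches the caveat quoted from the nginx documentation.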
> > Thanks, > > Regards, > > Rik Ske > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From frederik.nosi at postecom.it Mon Sep 28 14:58:55 2015 From: frederik.nosi at postecom.it (Frederik Nosi) Date: Mon, 28 Sep 2015 16:58:55 +0200 Subject: Keepalive timeout In-Reply-To: References: Message-ID: <560955AF.6030003@postecom.it> Hi, On 09/26/2015 07:38 PM, Frank Liu wrote: > Hi, > > If I have set the client facing keep alive timeout to 30s, but nginx > takes longer to respond a given request due to slow backend in > a reverse proxy setup, will nginx drop the client keep alive > connection since it's been idle for too long, or will nginx keep it > until it sends the response ? AFAIK, keepalive works between requests, not within a single request. Within a single request, the timeouts involved are the browser's on the client side and the connection/request timeouts on the server side. > > Thanks > Frank > -------------- next part -------------- An HTML attachment was scrubbed... URL: From frederik.nosi at postecom.it Mon Sep 28 15:00:37 2015 From: frederik.nosi at postecom.it (Frederik Nosi) Date: Mon, 28 Sep 2015 17:00:37 +0200 Subject: Preload Files Module for Nginx In-Reply-To: References: <320f0d3447807f0d04e3353c08f8ff21.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56095615.1000907@postecom.it> Hi, On 09/27/2015 05:14 PM, Christ-Jan Wijtmans wrote: > Doesn't Linux cache files in RAM already? Not before you've read them at least once. > Live long and prosper, > > Christ-Jan Wijtmans > https://github.com/cjwijtmans > http://facebook.com/cj.wijtmans > http://twitter.com/cjwijtmans > > > On Thu, Sep 24, 2015 at 3:36 PM, vbresults wrote: >> Typo; for preload_files_context_local I meant: >> >> Default: ? in http context [same effect as "off"], $document_root in server >> context, $uri in location context.
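The distinction drawn in the keepalive thread above can be illustrated with a minimal configuration (a sketch; "backend" and the timeout values are arbitrary placeholders): keepalive_timeout bounds the idle time between requests on a client connection, while waiting on a slow upstream inside a request is bounded by the proxy timeouts instead.

```nginx
http {
    keepalive_timeout 30s;   # idle time allowed BETWEEN requests on a kept-alive connection

    server {
        location / {
            proxy_pass         http://backend;  # placeholder upstream
            proxy_read_timeout 120s;            # a slow backend response is bounded by this,
                                                # not by keepalive_timeout
        }
    }
}
```

A connection that is mid-request (waiting for the upstream) is not idle, so the keepalive timer does not apply to it.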
>> >> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261818,261820#msg-261820 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From rikske at deds.nl Mon Sep 28 15:15:32 2015 From: rikske at deds.nl (rikske at deds.nl) Date: Mon, 28 Sep 2015 17:15:32 +0200 Subject: Nginx HTTP/2 module (ALPN) TLS on RHEL 7.* In-Reply-To: <560954AC.1070409@xtremenitro.org> References: <560954AC.1070409@xtremenitro.org> Message-ID: Hi, So what you're saying. Nginx HTTP/2 module won't work on RHEL 7.1 with (ALPN) TLS, until you are using OpenSSL version 1.0.2 on RHEL 7.1 in any manner whatsoever? Can anyone confirm this? Thanks, Regards, Rik Ske > Hello! > > On 09/28/2015 08:40 PM, rikske at deds.nl wrote: >> Dear, >> >> Does the Nginx HTTP/2 module work on RHEL 7.1 with (ALPN) TLS? >> >> It seems like the HTTP/2 module is enabled by default in your RHEL 7.1 >> based rpm and srpm. >> >> Your Nginx website writes about: >> >> "Note that accepting HTTP/2 connections over TLS requires the >> ?Application-Layer Protocol Negotiation? (ALPN) TLS extension support, >> which is available only since OpenSSL version 1.0.2. Using the ?Next >> Protocol Negotiation? (NPN) TLS extension for this purpose >> (available since OpenSSL version 1.0.1) is not guaranteed. " >> >> RHEL 7.1 is using OpenSSL 1.0.1e. with a whole bunch of patches and >> backports. >> >> Can't find anything in the changelog of RHEL 7.1's OpenSSL about ALPN. >> The only thing i can find is "Support for Application Layer Protocol >> Negotiation (ALPN) has been added." in RHEL's GnuTLS. > > Yes, RHEL using openssl 1.0.1e-42. But, I've compiled using openssl > 1.0.2d + crypto-policies under centos7. 
And it was success deployed on > my sandbox > > The rpm was compiled on fedora22, and ported to el7 using mock. > > https://gitlab.com/antituhan/rpms/tree/master. > $ openssl version > OpenSSL 1.0.2d-fips 9 Jul 2015 > $ uname -a > Linux 3.10.0-229.14.1.el7.x86_64 #1 SMP Tue Sep 15 15:05:51 > UTC 2015 x86_64 x86_64 x86_64 GNU/Linux > > Enjoy. > > >> >> Thanks, >> >> Regards, >> >> Rik Ske >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Sep 28 15:26:11 2015 From: nginx-forum at nginx.us (Alt) Date: Mon, 28 Sep 2015 11:26:11 -0400 Subject: nginScript - is it fuzzed? In-Reply-To: <56094FA6.3080504@gmail.com> References: <56094FA6.3080504@gmail.com> Message-ID: Hello, Markus Linnala has found at least two bugs with afl-fuzz: http://forum.nginx.org/read.php?29,261583 http://forum.nginx.org/read.php?29,261582 nginScript is very new, I'm sure you can help to test it if you know how to use afl-fuzz. Best Regards, Olivier Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261882,261891#msg-261891 From d33tah at gmail.com Mon Sep 28 15:44:53 2015 From: d33tah at gmail.com (Jacek Wielemborek) Date: Mon, 28 Sep 2015 17:44:53 +0200 Subject: nginScript - is it fuzzed? In-Reply-To: References: <56094FA6.3080504@gmail.com> Message-ID: <56096075.2010405@gmail.com> W dniu 28.09.2015 o 17:26, Alt pisze: > Hello, > > Markus Linnala has found at least two bugs with afl-fuzz: > http://forum.nginx.org/read.php?29,261583 > http://forum.nginx.org/read.php?29,261582 > > nginScript is very new, I'm sure you can help to test it if you know how to > use afl-fuzz. 
> > Best Regards, > Olivier I believe that this kind of work should be done systematically and the developer behind a project will know best how to build test cases to maximize initial code/functional coverage. This is why I would prefer to provide tips on how this can be done instead of performing the process on my own. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From mdounin at mdounin.ru Mon Sep 28 16:06:33 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 28 Sep 2015 19:06:33 +0300 Subject: nginx systemd reload service command skips configtest In-Reply-To: <9767d9ac57b01ac465a6756438c66a4f.NginxMailingListEnglish@forum.nginx.org> References: <9767d9ac57b01ac465a6756438c66a4f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150928160633.GX13202@mdounin.ru> Hello! On Sat, Sep 26, 2015 at 05:03:15PM -0400, Per Hansson wrote: > Hi, the "nginx.service" file shipped with systemd rpm's both in nginx's > stable repository and epel for CentOS7 / RHEL7 do not perform a "configtest" > when "systemctl reload nginx" is issued. > So if there is an error in the configuration file nginx is killed but not > started due to the faulty configuration. > It's possible to mitigate this in the nginx.service file by having two > "ExecReload" commands on separate lines like so: > # grep ExecReload /usr/lib/systemd/system/nginx.service > ExecReload=/usr/sbin/nginx -t > ExecReload=/bin/kill -s HUP $MAINPID > > This way if the configtest in the first line fails nginx is never killed, > the command does not print any output but that is a systemd issue so I think > unavoidable. > > I can't guarantee that this is a supported way to do this but it works for > me at least :) Configuration test is not needed when doing configuration reload. 
During a configuration reload a signal is sent to the nginx master process, and this process handles the rest: it loads the updated configuration, checks it, and if everything is fine and applies cleanly, it starts new worker processes with the updated configuration and asks the old worker processes to exit. If something goes wrong, the master process simply rejects the new configuration, with appropriate errors logged to the error log. That is, a configuration test isn't needed. It's also not enough, as not all configuration changes can be applied by a reload; e.g., you can't change the size of a shared memory zone. Additionally, in some cases doing a configuration test before a configuration reload is just wrong, e.g., if you are in the middle of an upgrade of nginx and the nginx binary on disk is different from the one currently running. The "killed but not started" case isn't something expected to happen, and not something I can reproduce here; I just tested with CentOS 7 and the official nginx package from nginx.org. If this happens for you, please report more details. Also please make sure you've used "reload", not "restart". -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Mon Sep 28 16:44:18 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 28 Sep 2015 19:44:18 +0300 Subject: Problems with HTTP/2 In-Reply-To: References: Message-ID: <6145646.sWsprBaPmf@vbart-workstation> On Wednesday 23 September 2015 06:28:37 Aapo Talvensaari wrote: > I tried the 1.9.5 release with http2 and it worked fine, but Ajax requests > especially were problematic. > > I did get errors like: > net::ERR_SPDY_COMPRESSION_ERROR > > And the status code was 0. With the former spdy support I didn't have any > problems. I'm also using fastcgi and PHP5 in this server where I tried it. > What could cause these problems? > Could you provide a debug log with the problematic request? http://nginx.org/en/docs/debugging_log.html wbr, Valentin V.
Bartenev From dewanggaba at xtremenitro.org Mon Sep 28 17:39:44 2015 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Tue, 29 Sep 2015 00:39:44 +0700 Subject: Nginx HTTP/2 module (ALPN) TLS on RHEL 7.* In-Reply-To: References: <560954AC.1070409@xtremenitro.org> Message-ID: <56097B60.7090701@xtremenitro.org> Like this? nginx version: nginx/1.9.5 built by gcc 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) built with OpenSSL 1.0.2d-fips 9 Jul 2015 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_v2_module --with-http_image_filter_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' Then how do I test whether I am already using ALPN? :) On 09/28/2015 10:15 PM, rikske at deds.nl wrote: > Hi, > > So what you're saying. > > Nginx HTTP/2 module won't work on RHEL 7.1 with (ALPN) TLS, > until you are using OpenSSL version 1.0.2 on RHEL 7.1 in any manner > whatsoever? > > Can anyone confirm this? > > Thanks, > > Regards, > > Rik Ske > >> Hello!
>> >> On 09/28/2015 08:40 PM, rikske at deds.nl wrote: >>> Dear, >>> >>> Does the Nginx HTTP/2 module work on RHEL 7.1 with (ALPN) TLS? >>> >>> It seems like the HTTP/2 module is enabled by default in your RHEL 7.1 >>> based rpm and srpm. >>> >>> Your Nginx website writes about: >>> >>> "Note that accepting HTTP/2 connections over TLS requires the >>> ?Application-Layer Protocol Negotiation? (ALPN) TLS extension support, >>> which is available only since OpenSSL version 1.0.2. Using the ?Next >>> Protocol Negotiation? (NPN) TLS extension for this purpose >>> (available since OpenSSL version 1.0.1) is not guaranteed. " >>> >>> RHEL 7.1 is using OpenSSL 1.0.1e. with a whole bunch of patches and >>> backports. >>> >>> Can't find anything in the changelog of RHEL 7.1's OpenSSL about ALPN. >>> The only thing i can find is "Support for Application Layer Protocol >>> Negotiation (ALPN) has been added." in RHEL's GnuTLS. >> >> Yes, RHEL using openssl 1.0.1e-42. But, I've compiled using openssl >> 1.0.2d + crypto-policies under centos7. And it was success deployed on >> my sandbox >> >> The rpm was compiled on fedora22, and ported to el7 using mock. >> >> https://gitlab.com/antituhan/rpms/tree/master. >> $ openssl version >> OpenSSL 1.0.2d-fips 9 Jul 2015 >> $ uname -a >> Linux 3.10.0-229.14.1.el7.x86_64 #1 SMP Tue Sep 15 15:05:51 >> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux >> >> Enjoy. >> >> >>> >>> Thanks, >>> >>> Regards, >>> >>> Rik Ske >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Screenshot from 2015-09-29 00-38-35.png Type: image/png Size: 61911 bytes Desc: not available URL: From aapo.talvensaari at gmail.com Mon Sep 28 17:59:23 2015 From: aapo.talvensaari at gmail.com (Aapo Talvensaari) Date: Mon, 28 Sep 2015 20:59:23 +0300 Subject: Problems with HTTP/2 In-Reply-To: <6145646.sWsprBaPmf@vbart-workstation> References: <6145646.sWsprBaPmf@vbart-workstation> Message-ID: On 28 September 2015 at 19:44, Valentin V. Bartenev wrote: > On Wednesday 23 September 2015 06:28:37 Aapo Talvensaari wrote: > >> I did get errors like: > >> net::ERR_SPDY_COMPRESSION_ERROR > > Could you provide a debug log with problematic request? > I tried to debug this further, and now I'm closer to what happens. If an Ajax request sends a PUT request with XMLHttpRequest I do get: net::ERR_SPDY_COMPRESSION_ERROR But this happens only when PHP-FPM responds with an error code: From rikske at deds.nl Mon Sep 28 18:13:08 2015 From: rikske at deds.nl (rikske at deds.nl) Date: Mon, 28 Sep 2015 20:13:08 +0200 Subject: Nginx HTTP/2 module (ALPN) TLS on RHEL 7.* In-Reply-To: <56097B60.7090701@xtremenitro.org> References: <560954AC.1070409@xtremenitro.org> <56097B60.7090701@xtremenitro.org> Message-ID: Hi, I don't know. I can't find anything about Nginx, OpenSSL ALPN and/or NPN in the logs. HTTP/2 seems to be running fine here according to my testing tools, but there is nothing about ALPN or NPN. The only thing I can find in their code is that nginx should warn the user in case the end user doesn't provide a suitable OpenSSL. I cannot reproduce that warning. So my question is still applicable. Is the Nginx HTTP/2 module using (ALPN) TLS on RHEL 7.*? Perhaps a Nginx developer can take a look at it? 
Thanks, + if (lsopt->http2 && lsopt->ssl) { ngx_conf_log_error(NGX_LOG_WARN, cf, 0, - "nginx was built without OpenSSL ALPN or NPN " - "support, SPDY is not enabled for %s", lsopt->addr); + "nginx was built with OpenSSL that lacks ALPN " + "and NPN support, HTTP/2 is not enabled for %s", + lsopt->addr); } > Like this? > > nginx version: nginx/1.9.5 > built by gcc 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) > built with OpenSSL 1.0.2d-fips 9 Jul 2015 > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx > --group=nginx --with-http_ssl_module --with-http_realip_module > --with-http_addition_module --with-http_sub_module > --with-http_dav_module --with-http_flv_module --with-http_mp4_module > --with-http_gunzip_module --with-http_v2_module > --with-http_image_filter_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_stub_status_module --with-mail --with-mail_ssl_module > --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wall > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong > --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' > > Then how to test if I am already using APLN ? :) > > On 09/28/2015 10:15 PM, rikske at deds.nl wrote: >> Hi, >> >> So what you're saying. >> >> Nginx HTTP/2 module won't work on RHEL 7.1 with (ALPN) TLS, >> until you are using OpenSSL version 1.0.2 on RHEL 7.1 in any manner >> whatsoever? >> >> Can anyone confirm this? 
>> >> Thanks, >> >> Regards, >> >> Rik Ske >> >>> Hello! >>> >>> On 09/28/2015 08:40 PM, rikske at deds.nl wrote: >>>> Dear, >>>> >>>> Does the Nginx HTTP/2 module work on RHEL 7.1 with (ALPN) TLS? >>>> >>>> It seems like the HTTP/2 module is enabled by default in your RHEL 7.1 >>>> based rpm and srpm. >>>> >>>> Your Nginx website writes about: >>>> >>>> "Note that accepting HTTP/2 connections over TLS requires the >>>> ?Application-Layer Protocol Negotiation? (ALPN) TLS extension support, >>>> which is available only since OpenSSL version 1.0.2. Using the ?Next >>>> Protocol Negotiation? (NPN) TLS extension for this purpose >>>> (available since OpenSSL version 1.0.1) is not guaranteed. " >>>> >>>> RHEL 7.1 is using OpenSSL 1.0.1e. with a whole bunch of patches and >>>> backports. >>>> >>>> Can't find anything in the changelog of RHEL 7.1's OpenSSL about ALPN. >>>> The only thing i can find is "Support for Application Layer Protocol >>>> Negotiation (ALPN) has been added." in RHEL's GnuTLS. >>> >>> Yes, RHEL using openssl 1.0.1e-42. But, I've compiled using openssl >>> 1.0.2d + crypto-policies under centos7. And it was success deployed on >>> my sandbox >>> >>> The rpm was compiled on fedora22, and ported to el7 using mock. >>> >>> https://gitlab.com/antituhan/rpms/tree/master. >>> $ openssl version >>> OpenSSL 1.0.2d-fips 9 Jul 2015 >>> $ uname -a >>> Linux 3.10.0-229.14.1.el7.x86_64 #1 SMP Tue Sep 15 15:05:51 >>> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux >>> >>> Enjoy. 
>>> >>> >>>> >>>> Thanks, >>>> >>>> Regards, >>>> >>>> Rik Ske >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From rikske at deds.nl Mon Sep 28 18:23:23 2015 From: rikske at deds.nl (rikske at deds.nl) Date: Mon, 28 Sep 2015 20:23:23 +0200 Subject: Nginx HTTP/2 module (ALPN) TLS on RHEL 7.* In-Reply-To: References: <560954AC.1070409@xtremenitro.org> <56097B60.7090701@xtremenitro.org> Message-ID: Hello. I would like to add that it is important to get an answer. Google is going to remove SPDY support in Chrome in early 2016. That is 3 months from now. Moreover, NPN support will also be dropped, with ALPN as its successor. Since by far the majority of users use Chrome, and Chrome is upgraded automatically, it's time to take action now and test future server settings, to be (near) future-proof. Thanks, Regards, > Hi, > > I don't know. > Can't find anything about Nginx, OpenSSL ALPN and/or NPN in the logs. > > HTTP/2 seems to be running fine here according to my testing tools. > But there is nothing about ALPN or NPN. > > The only thing i can find in there code is that the Nginx should warn the > user in case, the enduser doesn't provide a valid OpenSSL. > I can not reproduce that warning. > > So my question is still applicable. > > Is the Nginx HTTP/2 module using (ALPN) TLS on RHEL 7.*? > > Perhaps a Nginx developer can take a look at it? 
> > Thanks, > > + if (lsopt->http2 && lsopt->ssl) { > ngx_conf_log_error(NGX_LOG_WARN, cf, 0, > - "nginx was built without OpenSSL ALPN or NPN " > - "support, SPDY is not enabled for %s", > lsopt->addr); > + "nginx was built with OpenSSL that lacks ALPN > " > + "and NPN support, HTTP/2 is not enabled for > %s", > + lsopt->addr); > } > > >> Like this? >> >> nginx version: nginx/1.9.5 >> built by gcc 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) >> built with OpenSSL 1.0.2d-fips 9 Jul 2015 >> TLS SNI support enabled >> configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx >> --conf-path=/etc/nginx/nginx.conf >> --error-log-path=/var/log/nginx/error.log >> --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid >> --lock-path=/var/run/nginx.lock >> --http-client-body-temp-path=/var/cache/nginx/client_temp >> --http-proxy-temp-path=/var/cache/nginx/proxy_temp >> --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp >> --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp >> --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx >> --group=nginx --with-http_ssl_module --with-http_realip_module >> --with-http_addition_module --with-http_sub_module >> --with-http_dav_module --with-http_flv_module --with-http_mp4_module >> --with-http_gunzip_module --with-http_v2_module >> --with-http_image_filter_module --with-http_gzip_static_module >> --with-http_random_index_module --with-http_secure_link_module >> --with-http_stub_status_module --with-mail --with-mail_ssl_module >> --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wall >> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong >> --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' >> >> Then how to test if I am already using APLN ? :) >> >> On 09/28/2015 10:15 PM, rikske at deds.nl wrote: >>> Hi, >>> >>> So what you're saying. 
>>> >>> Nginx HTTP/2 module won't work on RHEL 7.1 with (ALPN) TLS, >>> until you are using OpenSSL version 1.0.2 on RHEL 7.1 in any manner >>> whatsoever? >>> >>> Can anyone confirm this? >>> >>> Thanks, >>> >>> Regards, >>> >>> Rik Ske >>> >>>> Hello! >>>> >>>> On 09/28/2015 08:40 PM, rikske at deds.nl wrote: >>>>> Dear, >>>>> >>>>> Does the Nginx HTTP/2 module work on RHEL 7.1 with (ALPN) TLS? >>>>> >>>>> It seems like the HTTP/2 module is enabled by default in your RHEL >>>>> 7.1 >>>>> based rpm and srpm. >>>>> >>>>> Your Nginx website writes about: >>>>> >>>>> "Note that accepting HTTP/2 connections over TLS requires the >>>>> ?Application-Layer Protocol Negotiation? (ALPN) TLS extension >>>>> support, >>>>> which is available only since OpenSSL version 1.0.2. Using the ?Next >>>>> Protocol Negotiation? (NPN) TLS extension for this purpose >>>>> (available since OpenSSL version 1.0.1) is not guaranteed. " >>>>> >>>>> RHEL 7.1 is using OpenSSL 1.0.1e. with a whole bunch of patches and >>>>> backports. >>>>> >>>>> Can't find anything in the changelog of RHEL 7.1's OpenSSL about >>>>> ALPN. >>>>> The only thing i can find is "Support for Application Layer Protocol >>>>> Negotiation (ALPN) has been added." in RHEL's GnuTLS. >>>> >>>> Yes, RHEL using openssl 1.0.1e-42. But, I've compiled using openssl >>>> 1.0.2d + crypto-policies under centos7. And it was success deployed on >>>> my sandbox >>>> >>>> The rpm was compiled on fedora22, and ported to el7 using mock. >>>> >>>> https://gitlab.com/antituhan/rpms/tree/master. >>>> $ openssl version >>>> OpenSSL 1.0.2d-fips 9 Jul 2015 >>>> $ uname -a >>>> Linux 3.10.0-229.14.1.el7.x86_64 #1 SMP Tue Sep 15 15:05:51 >>>> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux >>>> >>>> Enjoy. 
>>>> >>>> >>>>> >>>>> Thanks, >>>>> >>>>> Regards, >>>>> >>>>> Rik Ske >>>>> >>>>> _______________________________________________ >>>>> nginx mailing list >>>>> nginx at nginx.org >>>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>>> >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ahutchings at nginx.com Mon Sep 28 20:47:13 2015 From: ahutchings at nginx.com (Andrew Hutchings) Date: Mon, 28 Sep 2015 21:47:13 +0100 Subject: Nginx HTTP/2 module (ALPN) TLS on RHEL 7.* In-Reply-To: References: <560954AC.1070409@xtremenitro.org> <56097B60.7090701@xtremenitro.org> Message-ID: <5609A751.5010806@nginx.com> Hi, If you compiled with OpenSSL 1.0.2d then it should have ALPN, otherwise it will fallback to NPN. One way to test is with OpenSSL 1.0.2d: (echo | openssl s_client -alpn h2 -connect example.net:443) | grep ALPN This will respond with something like the following if it is supported: ALPN protocol: h2 The warning you have flagged is only if OpenSSL doesn't support either NPN or ALPN. This means HTTP/2 and SPDY support isn't possible at all (ie. OpenSSL < 1.0.1 or a custom build with NPN/ALPN disabled). Kind Regards Andrew On 28/09/15 19:13, rikske at deds.nl wrote: > Hi, > > I don't know. > Can't find anything about Nginx, OpenSSL ALPN and/or NPN in the logs. > > HTTP/2 seems to be running fine here according to my testing tools. > But there is nothing about ALPN or NPN. 
> > The only thing i can find in there code is that the Nginx should warn the > user in case, the enduser doesn't provide a valid OpenSSL. > I can not reproduce that warning. > > So my question is still applicable. > > Is the Nginx HTTP/2 module using (ALPN) TLS on RHEL 7.*? > > Perhaps a Nginx developer can take a look at it? > > Thanks, > > + if (lsopt->http2 && lsopt->ssl) { > ngx_conf_log_error(NGX_LOG_WARN, cf, 0, > - "nginx was built without OpenSSL ALPN or NPN " > - "support, SPDY is not enabled for %s", > lsopt->addr); > + "nginx was built with OpenSSL that lacks ALPN " > + "and NPN support, HTTP/2 is not enabled for %s", > + lsopt->addr); > } > > >> Like this? >> >> nginx version: nginx/1.9.5 >> built by gcc 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) >> built with OpenSSL 1.0.2d-fips 9 Jul 2015 >> TLS SNI support enabled >> configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx >> --conf-path=/etc/nginx/nginx.conf >> --error-log-path=/var/log/nginx/error.log >> --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid >> --lock-path=/var/run/nginx.lock >> --http-client-body-temp-path=/var/cache/nginx/client_temp >> --http-proxy-temp-path=/var/cache/nginx/proxy_temp >> --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp >> --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp >> --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx >> --group=nginx --with-http_ssl_module --with-http_realip_module >> --with-http_addition_module --with-http_sub_module >> --with-http_dav_module --with-http_flv_module --with-http_mp4_module >> --with-http_gunzip_module --with-http_v2_module >> --with-http_image_filter_module --with-http_gzip_static_module >> --with-http_random_index_module --with-http_secure_link_module >> --with-http_stub_status_module --with-mail --with-mail_ssl_module >> --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wall >> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong >> --param=ssp-buffer-size=4 
-grecord-gcc-switches -m64 -mtune=generic' >> >> Then how to test if I am already using APLN ? :) >> >> On 09/28/2015 10:15 PM, rikske at deds.nl wrote: >>> Hi, >>> >>> So what you're saying. >>> >>> Nginx HTTP/2 module won't work on RHEL 7.1 with (ALPN) TLS, >>> until you are using OpenSSL version 1.0.2 on RHEL 7.1 in any manner >>> whatsoever? >>> >>> Can anyone confirm this? >>> >>> Thanks, >>> >>> Regards, >>> >>> Rik Ske >>> >>>> Hello! >>>> >>>> On 09/28/2015 08:40 PM, rikske at deds.nl wrote: >>>>> Dear, >>>>> >>>>> Does the Nginx HTTP/2 module work on RHEL 7.1 with (ALPN) TLS? >>>>> >>>>> It seems like the HTTP/2 module is enabled by default in your RHEL 7.1 >>>>> based rpm and srpm. >>>>> >>>>> Your Nginx website writes about: >>>>> >>>>> "Note that accepting HTTP/2 connections over TLS requires the >>>>> ?Application-Layer Protocol Negotiation? (ALPN) TLS extension support, >>>>> which is available only since OpenSSL version 1.0.2. Using the ?Next >>>>> Protocol Negotiation? (NPN) TLS extension for this purpose >>>>> (available since OpenSSL version 1.0.1) is not guaranteed. " >>>>> >>>>> RHEL 7.1 is using OpenSSL 1.0.1e. with a whole bunch of patches and >>>>> backports. >>>>> >>>>> Can't find anything in the changelog of RHEL 7.1's OpenSSL about ALPN. >>>>> The only thing i can find is "Support for Application Layer Protocol >>>>> Negotiation (ALPN) has been added." in RHEL's GnuTLS. >>>> >>>> Yes, RHEL using openssl 1.0.1e-42. But, I've compiled using openssl >>>> 1.0.2d + crypto-policies under centos7. And it was success deployed on >>>> my sandbox >>>> >>>> The rpm was compiled on fedora22, and ported to el7 using mock. >>>> >>>> https://gitlab.com/antituhan/rpms/tree/master. >>>> $ openssl version >>>> OpenSSL 1.0.2d-fips 9 Jul 2015 >>>> $ uname -a >>>> Linux 3.10.0-229.14.1.el7.x86_64 #1 SMP Tue Sep 15 15:05:51 >>>> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux >>>> >>>> Enjoy. 
>>>> >>>> >>>>> >>>>> Thanks, >>>>> >>>>> Regards, >>>>> >>>>> Rik Ske >>>>> >>>>> _______________________________________________ >>>>> nginx mailing list >>>>> nginx at nginx.org >>>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>>> >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Andrew Hutchings (LinuxJedi) Senior Developer Advocate, Nginx Inc. From vbart at nginx.com Mon Sep 28 21:01:00 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 29 Sep 2015 00:01 +0300 Subject: Problems with HTTP/2 In-Reply-To: References: <6145646.sWsprBaPmf@vbart-workstation> Message-ID: <6625285.g0zDUmD6kG@vbart-laptop> On Monday 28 September 2015 20:59:23 Aapo Talvensaari wrote: > On 28 September 2015 at 19:44, Valentin V. Bartenev wrote: > > > On Wednesday 23 September 2015 06:28:37 Aapo Talvensaari wrote: > > >> I did get errors like: > > >> net::ERR_SPDY_COMPRESSION_ERROR > > > Could you provide a debug log with problematic request? > > > > I tried to debug this further. And now I'm closer to what happens. > > If Ajax request sends a PUT request with XmlHtttpRequest I do get: > net::ERR_SPDY_COMPRESSION_ERROR > > But this is only when the PHP-FPM reponds with error code: > > header(':', true, 403); > die(json_encode(array( ... ))); What does ":" mean in the header() function? If it returns ":" as a header, then it's the cause of the error. > > > Where that "..." is the array contents json_encoded. 
So it seems to be a > problem with > HTTP error codes and HTTP2. On 200 return codes it works fine. > > On logs, I do not get anything. There is no way to get nothing in the debug log if it is properly configured according to the article: http://nginx.org/en/docs/debugging_log.html wbr, Valentin V. Bartenev From aapo.talvensaari at gmail.com Mon Sep 28 22:06:50 2015 From: aapo.talvensaari at gmail.com (Aapo Talvensaari) Date: Tue, 29 Sep 2015 01:06:50 +0300 Subject: Problems with HTTP/2 In-Reply-To: <6625285.g0zDUmD6kG@vbart-laptop> References: <6145646.sWsprBaPmf@vbart-workstation> <6625285.g0zDUmD6kG@vbart-laptop> Message-ID: On 29 September 2015 at 00:01, Valentin V. Bartenev wrote: > >On Monday 28 September 2015 20:59:23 Aapo Talvensaari wrote: > >> On 28 September 2015 at 19:44, Valentin V. Bartenev > wrote: > >> If Ajax request sends a PUT request with XmlHtttpRequest I do get: > >> net::ERR_SPDY_COMPRESSION_ERROR > >> > >> But this is only when the PHP-FPM reponds with error code: > >> > >> >> header(':', true, 403); > >> die(json_encode(array( ... ))); > > > > What does ":" mean in the header() function? > > If it returns ":" as a header, then it's the cause of the error. > > It just sets the status code. It is normal PHP 5.3 code [1]. I might need to compile nginx by hand to get more debug info. [1] http://php.net/manual/en/function.http-response-code.php#109435 -------------- next part -------------- An HTML attachment was scrubbed... URL: From aapo.talvensaari at gmail.com Mon Sep 28 22:21:21 2015 From: aapo.talvensaari at gmail.com (Aapo Talvensaari) Date: Tue, 29 Sep 2015 01:21:21 +0300 Subject: Problems with HTTP/2 In-Reply-To: References: <6145646.sWsprBaPmf@vbart-workstation> <6625285.g0zDUmD6kG@vbart-laptop> Message-ID: On 29 September 2015 at 01:06, Aapo Talvensaari wrote: > On 29 September 2015 at 00:01, Valentin V. Bartenev wrote: >> What does ":" mean in the header() function? >> If it returns ":" as a header, then it's the cause of the error. 
>> > It just sets the status code. It is normal PHP 5.3 code. I will test tomorrow to see if setting the header differently makes any difference. But it is still puzzling to me that the same code worked fine with SPDY. So maybe SPDY had a looser way of parsing headers? Shouldn't that be followed here as well, as most browsers do: "Be conservative in what you send, be liberal in what you accept". Regards Aapo -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Sep 29 05:16:05 2015 From: nginx-forum at nginx.us (khav) Date: Tue, 29 Sep 2015 01:16:05 -0400 Subject: Http2 not starting up Message-ID: <97575259171ec9fb7b63eb2e3c603018.NginxMailingListEnglish@forum.nginx.org> HTTP2 isn't working for me. I still see HTTP/1.1 in the response headers, but nginx doesn't show any error. I also restarted nginx with no change. server { listen 443 http2 ; ssl on; ssl_certificate /etc/ssl/filterbypass.me.crt; #(or .pem) ssl_certificate_key /etc/ssl/filterbypass.me.key.nopass; server_name filterbypass.me m.filterbypass.me; return 301 $scheme://www.filterbypass.me$request_uri; } server { listen 443 http2 default_server reuseport deferred; #Change to 443 when SSL is on ssl on; ssl_certificate /etc/ssl/filterbypass.me.crt; #(or .pem) ssl_certificate_key /etc/ssl/filterbypass.me.key.nopass; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; #keepalive_timeout 70; #ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM; ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS; ssl_prefer_server_ciphers on; ssl_buffer_size 8k; ssl_session_cache shared:SSL:20m; ssl_dhparam /etc/ssl/dhparam.pem; ssl_session_timeout 45m; ssl_stapling on; ssl_stapling_verify on; ssl_trusted_certificate /etc/ssl/trustchain.crt; resolver 8.8.8.8 8.8.4.4 valid=300s; resolver_timeout 5s; add_header Strict-Transport-Security 
"max-age=31536000; includeSubDomains"; // rest of config goes below } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261910,261910#msg-261910 From nginx-forum at nginx.us Tue Sep 29 06:05:11 2015 From: nginx-forum at nginx.us (khav) Date: Tue, 29 Sep 2015 02:05:11 -0400 Subject: Http2 not starting up In-Reply-To: <97575259171ec9fb7b63eb2e3c603018.NginxMailingListEnglish@forum.nginx.org> References: <97575259171ec9fb7b63eb2e3c603018.NginxMailingListEnglish@forum.nginx.org> Message-ID: Turn out maxcdn havent implemented http2 yet Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261910,261911#msg-261911 From nginx-forum at nginx.us Tue Sep 29 06:21:55 2015 From: nginx-forum at nginx.us (wolfgangpue) Date: Tue, 29 Sep 2015 02:21:55 -0400 Subject: Throughput with Loadbalancer Message-ID: <172c0f53310ad0b9ed6faf90c4e58d86.NginxMailingListEnglish@forum.nginx.org> Hi, I am not sure how the load balancer affect the data throughput. For example: I have two nginx server with 1 Gps network connection. When I configure the first server as nginx load balancer and as upstream server and the second server only as upstream server. Is the second upstream server sending its data directly to the client or is the data routed through the load balancer (first server). When all data is routed through the first server I have only a maximum bandwith of 1 Gbps. If the second server sends his data directly to the client I have a maximum bandwith of 2 Gbps. And if the second server sends its data directly to the server, does the response data contain the ip of the second upstream server? So which one is correct? 
Best Regards Wolfgang Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261912,261912#msg-261912 From lucas at slcoding.com Tue Sep 29 06:32:12 2015 From: lucas at slcoding.com (Lucas Rolff) Date: Tue, 29 Sep 2015 08:32:12 +0200 Subject: Throughput with Loadbalancer In-Reply-To: <172c0f53310ad0b9ed6faf90c4e58d86.NginxMailingListEnglish@forum.nginx.org> References: <172c0f53310ad0b9ed6faf90c4e58d86.NginxMailingListEnglish@forum.nginx.org> Message-ID: <560A306C.6080303@slcoding.com> You'll decrease your capacity to 1 gigabit, because you'll send it out via the load balancer again. Else you need to look for "DSR" (Direct Server Return), I'm not completely sure if nginx actually supports this. > wolfgangpue > 29 Sep 2015 08:21 > Hi, > > I am not sure how the load balancer affect the data throughput. > > For example: I have two nginx server with 1 Gps network connection. > When I configure the first server as nginx load balancer and as upstream > server and the second server only as upstream server. > > Is the second upstream server sending its data directly to the client > or is > the data routed through the load balancer (first server). When all data is > routed through the first server I have only a maximum bandwith of 1 > Gbps. If > the second server sends his data directly to the client I have a maximum > bandwith of 2 Gbps. And if the second server sends its data directly > to the > server, does the response data contain the ip of the second upstream > server? > > So which one is correct? > > Best Regards > Wolfgang > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,261912,261912#msg-261912 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Tue Sep 29 09:25:43 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 29 Sep 2015 10:25:43 +0100 Subject: Problems with HTTP/2 In-Reply-To: References: <6145646.sWsprBaPmf@vbart-workstation> <6625285.g0zDUmD6kG@vbart-laptop> Message-ID: <20150929092543.GF3177@daoine.org> On Tue, Sep 29, 2015 at 01:06:50AM +0300, Aapo Talvensaari wrote: > On 29 September 2015 at 00:01, Valentin V. Bartenev wrote: > >On Monday 28 September 2015 20:59:23 Aapo Talvensaari wrote: > >> On 28 September 2015 at 19:44, Valentin V. Bartenev > > wrote: Hi there, > > >> > >> header(':', true, 403); > > >> die(json_encode(array( ... ))); > > > > > > What does ":" mean in the header() function? > > > If it returns ":" as a header, then it's the cause of the error. > > > > It just sets the status code. It is normal PHP 5.3 code [1]. I might need > to compile nginx by hand to get more debug info. For what it's worth: when I test with a php 5.1.6 and a php 5.3.3, header(':', true, 403); sets the status code and adds a header called :. So the end of the http header looks like X-Powered-By: PHP/5.3.3 :: with the debug log showing http fastcgi header: "X-Powered-By: PHP/5.3.3" http fastcgi parser: 0 http fastcgi header: ":: " Using header("HTTP/1.1 403 Whatever"); sets the status code and does not add the dubious header. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Sep 29 09:48:55 2015 From: nginx-forum at nginx.us (beatnut) Date: Tue, 29 Sep 2015 05:48:55 -0400 Subject: Http2 not starting up In-Reply-To: <97575259171ec9fb7b63eb2e3c603018.NginxMailingListEnglish@forum.nginx.org> References: <97575259171ec9fb7b63eb2e3c603018.NginxMailingListEnglish@forum.nginx.org> Message-ID: <89cef6e9a75fe5af97c1d6aacaa2e621.NginxMailingListEnglish@forum.nginx.org> Try adding the ssl parameter to the listen directive. 
https://www.nginx.com/blog/nginx-1-9-5/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261910,261916#msg-261916 From nginx-forum at nginx.us Tue Sep 29 10:01:26 2015 From: nginx-forum at nginx.us (beatnut) Date: Tue, 29 Sep 2015 06:01:26 -0400 Subject: HTTP/2 cipher suits black list Message-ID: <0db2122689377e2d8bd810302ab2b453.NginxMailingListEnglish@forum.nginx.org> Hello, I'm using the "Intermediate compatibility" cipher suites from https://wiki.mozilla.org/Security/Server_Side_TLS. Could somebody tell me whether these ciphers are compatible with HTTP/2 according to the black list https://tools.ietf.org/html/rfc7540#appendix-A ? Thank you Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261918,261918#msg-261918 From adam at jooadam.hu Tue Sep 29 10:09:37 2015 From: adam at jooadam.hu (=?UTF-8?B?Sm/DsyDDgWTDoW0=?=) Date: Tue, 29 Sep 2015 12:09:37 +0200 Subject: Setting headers In-Reply-To: <20150928145458.GU13202@mdounin.ru> References: <20150928145458.GU13202@mdounin.ru> Message-ID: Maxim, > When using nginx XSLT filter module, it shouldn't be needed to > modify Content-Type at all. Use xsl:output media-type attribute > instead to provide a correct media type from the stylesheet itself. I never knew output had a media-type attribute, thanks! Ádám From aapo.talvensaari at gmail.com Tue Sep 29 10:34:46 2015 From: aapo.talvensaari at gmail.com (Aapo Talvensaari) Date: Tue, 29 Sep 2015 13:34:46 +0300 Subject: Problems with HTTP/2 In-Reply-To: <20150929092543.GF3177@daoine.org> References: <6145646.sWsprBaPmf@vbart-workstation> <6625285.g0zDUmD6kG@vbart-laptop> <20150929092543.GF3177@daoine.org> Message-ID: On 29 September 2015 at 12:25, Francis Daly wrote: > On Tue, Sep 29, 2015 at 01:06:50AM +0300, Aapo Talvensaari wrote: > > It just sets the status code. It is normal PHP 5.3 code [1]. I might need > > to compile nginx by hand to get more debug info. 
> > For what it's worth: > > when I test with a php 5.1.6 and a php 5.3.3, > > header(':', true, 403); > > Using > > header("HTTP/1.1 403 Whatever"); > > sets the status code and does not add the dubious header. Thanks. I know this is probably not the best way to set it (and in this case I can change it), but it seems like this may cause some problems with user code. See, browsers accept it, Nginx HTTP accepts it, Nginx HTTPS accepts it, and even Nginx SPDY accepts it. Only Nginx HTTP/2 will not accept it. And by accepting I mean that browsers and the other Nginx protocols give the correct status code as well. Thanks for debugging this further. I think everyone now knows what is going on. Is this a bug or a feature? Because it might break user code running on Nginx, I might call it a bug. Because it breaks only when these weird headers are in place, I might call it a feature. Regards Aapo -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Tue Sep 29 10:38:04 2015 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 29 Sep 2015 13:38:04 +0300 Subject: Problems with HTTP/2 In-Reply-To: <20150929092543.GF3177@daoine.org> References: <6145646.sWsprBaPmf@vbart-workstation> <6625285.g0zDUmD6kG@vbart-laptop> <20150929092543.GF3177@daoine.org> Message-ID: On Sep 29, 2015, at 12:25 PM, Francis Daly wrote: > On Tue, Sep 29, 2015 at 01:06:50AM +0300, Aapo Talvensaari wrote: >> On 29 September 2015 at 00:01, Valentin V. Bartenev wrote: >>>> On Monday 28 September 2015 20:59:23 Aapo Talvensaari wrote: >>>>> >>>>> >>>> header(':', true, 403); >>>>> die(json_encode(array( ... ))); >>>> >>>> What does ":" mean in the header() function? >>>> If it returns ":" as a header, then it's the cause of the error. >>> >>> It just sets the status code. It is normal PHP 5.3 code [1]. I might need >> to compile nginx by hand to get more debug info. 
> > For what it's worth: > > when I test with a php 5.1.6 and a php 5.3.3, > > header(':', true, 403); > > sets the status code and adds a header called :. > > So the end of the http header looks like > > X-Powered-By: PHP/5.3.3 > :: > > with the debug log showing > > http fastcgi header: "X-Powered-By: PHP/5.3.3" > http fastcgi parser: 0 > http fastcgi header: ":: " So, the header field name output as generated with php (and previously guessed by Valentin), is invalid as per RFC 7230, which is in turn referenced by RFC 7540. : field-name = token : token = 1*tchar : tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." / : "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA Not much to discuss. -- Sergey Kandaurov From aapo.talvensaari at gmail.com Tue Sep 29 11:14:44 2015 From: aapo.talvensaari at gmail.com (Aapo Talvensaari) Date: Tue, 29 Sep 2015 14:14:44 +0300 Subject: Problems with HTTP/2 In-Reply-To: References: <6145646.sWsprBaPmf@vbart-workstation> <6625285.g0zDUmD6kG@vbart-laptop> <20150929092543.GF3177@daoine.org> Message-ID: On 29 September 2015 at 13:38, Sergey Kandaurov wrote: > So, the header field name output as generated with php > (and previously guessed by Valentin), > is invalid as per RFC 7230, which is in turn referenced by RFC 7540. > > : field-name = token > : token = 1*tchar > : tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." / > : "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA > > Not much to discuss. Even if it works just fine with every browser and every protocol in Nginx other than HTTP/2? I kinda agree, but in the spirit of "be liberal in what you accept, be conservative in what you send", the issue may still be worth solving. Also, it should be checked what happens on the Nginx side itself, as this may even be a bug. Why doesn't it reply with something like an internal server error in that case? I mean, in the worst case this might be a source for a security bug (note: I'm not saying it is).
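[Editor's note] The token rule Sergey quotes can be checked mechanically. A minimal shell sketch (the function name is made up for illustration; the regex is just the RFC 7230 tchar set spelled out):

```shell
# Validate a header field-name against the RFC 7230 "token" rule.
# tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." /
#         "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA
is_token() {
  printf '%s' "$1" | grep -Eq "^[!#\$%&'*+.^_\`|~0-9A-Za-z-]+\$"
}

is_token "X-Powered-By" && echo "X-Powered-By is a valid token"
is_token ":"            || echo "':' is not a valid token"
```

Ordinary header names like X-Powered-By pass, while ":" (the name PHP emitted here) fails, which matches the grammar quoted above.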
-------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Sep 29 12:19:39 2015 From: nginx-forum at nginx.us (wolfgangpue) Date: Tue, 29 Sep 2015 08:19:39 -0400 Subject: Throughput with Loadbalancer In-Reply-To: <560A306C.6080303@slcoding.com> References: <560A306C.6080303@slcoding.com> Message-ID: Lucas Rolff Wrote: ------------------------------------------------------- > Else you need to look for "DSR" (Direct Server Return), I'm not > completely sure if nginx actually supports this. Ok, thank you. I think DSR is on a lower level and nginx has no influence on it. I will try another approach. Because both nginx servers contain the same files, I will code my own load balancer in my php frontend system and request data from my nginx servers in round-robin order. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261912,261923#msg-261923 From lucas at slcoding.com Tue Sep 29 12:23:37 2015 From: lucas at slcoding.com (Lucas Rolff) Date: Tue, 29 Sep 2015 14:23:37 +0200 Subject: Throughput with Loadbalancer In-Reply-To: References: <560A306C.6080303@slcoding.com> Message-ID: <560A82C9.4090101@slcoding.com> You could also use multiple A records on a DNS level, and let DNS balance the traffic between the two machines. > wolfgangpue > 29 Sep 2015 14:19 > Lucas Rolff Wrote: > ------------------------------------------------------- > > > Ok, thank you. I think DSR is on a lower level and nginx has no > influence on > it. > > I will try another approach. Because both nginx servers contain the same > files, I will code my own load balancer in my php frontend system and > request > data from my nginx servers in round-robin order.
> > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,261912,261923#msg-261923 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Lucas Rolff > 29 Sep 2015 08:32 > You'll decrease your capacity to 1 gigabit, because you'll send it out > via the load balancer again. > Else you need to look for "DSR" (Direct Server Return), I'm not > completely sure if nginx actually supports this. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Sep 29 14:46:46 2015 From: nginx-forum at nginx.us (footplus) Date: Tue, 29 Sep 2015 10:46:46 -0400 Subject: Delay in proxied requests locked by proxy_cache_lock Message-ID: <0041c6c9bc9cae98237f425bb3c43729.NginxMailingListEnglish@forum.nginx.org> Hello, While doing benchmarks in a configuration using proxy_cache and proxy_cache_lock, I noticed that many requests had a duration very close to 500ms. Upon further analysis, I determined that a huge majority of these requests were actually HITs, and investigated to see where the delay came from. I currently suspect that this particular delay comes from HITs that were delayed by proxy_cache_lock (i.e. they were waiting for an initial request to succeed and cache the object). I found support for this claim in the following code: https://github.com/nginx/nginx/blob/master/src/http/ngx_http_file_cache.c#L449 (and 509). Am I right in assuming that proxy_cache_lock is using a 500ms polling rate? Are there any plans to move to an event-based mechanism? In our use-case (2s video fragments) this represents 25% of the segment duration, so the impact can be significant, especially since this mechanism can act across several layers of caches. We're planning to try using a 10ms interval instead (replacing 500 by 10 in the two lines should suffice, right?) - but would it be interesting to make a real patch to have a configurable value?
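[Editor's note] For reference, the lock behaviour being benchmarked is driven by directives like the following (a sketch with placeholder zone and upstream names; the 500ms wait interval itself is hardcoded in ngx_http_file_cache.c and is not exposed as a directive):

```nginx
proxy_cache_path /var/cache/nginx keys_zone=fragments:10m;

server {
    listen 80;

    location / {
        proxy_cache      fragments;
        # Only one request populates a missing cache entry; the others wait.
        proxy_cache_lock on;
        # Upper bound on how long a waiting request blocks before being
        # passed to the upstream; while waiting, cache readiness is polled
        # at the hardcoded 500ms interval discussed above.
        proxy_cache_lock_timeout 5s;
        proxy_pass http://origin_upstream;
    }
}
```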
Is there another work-around? Thanks, Best regards, Aurélien Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261927,261927#msg-261927 From francis at daoine.org Tue Sep 29 20:01:55 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 29 Sep 2015 21:01:55 +0100 Subject: http digest + proxy doesn't work on /something?x=3? In-Reply-To: <56094D95.9060604@gmail.com> References: <56094D95.9060604@gmail.com> Message-ID: <20150929200155.GG3177@daoine.org> On Mon, Sep 28, 2015 at 04:24:21PM +0200, Jacek Wielemborek wrote: Hi there, > I have a question > regarding the HTTP digest nginx module in combination with proxy_pass. I > tried the attached configuration file in combination with nginx-1.6.3 > and kept getting asked for a password infinitely when I try to access any > resource other than /. How can this problem be solved? There's only one nginx-1.6.3. There's not only one HTTP digest nginx module. Which specific module are you using? I tried testing with the one from https://github.com/samizdatco/nginx-http-auth-digest/archive/master.zip, which is linked from https://github.com/samizdatco/nginx-http-auth-digest I found that "/" failed, but "/index.html" succeeded. "/file.html" succeeded with and without proxy_pass. "/file.html?k=v" failed with and without proxy_pass. My "digest" file has the contents a:Realm:ec0b1e4f2b9976181ef458cd8e14e735 which (hopefully) corresponds to user=a, password=a I guess that the module version I used and the version of nginx I used may have some important differences. Or maybe my client implements the digest authentication calculation in a different way to what this module does; presumably not both of them are correct, if that is the case. So, in your case: * what module do you use? * what digest file do you test with (make any one that you are happy to share, like the above one) * what requests do "work", and what do not? * does proxy_pass make a difference in whether it works?
(if not, simplify your config) * does your port-8000 upstream server do its own authentication? Those details might allow someone else to more fully investigate. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Sep 29 20:20:35 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 29 Sep 2015 21:20:35 +0100 Subject: nginx nested location and different basic authentication file In-Reply-To: <6e396f7da3d700dca5eece4cc49369a1.NginxMailingListEnglish@forum.nginx.org> References: <6e396f7da3d700dca5eece4cc49369a1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150929202035.GH3177@daoine.org> On Sun, Sep 27, 2015 at 04:32:56AM -0400, cacrus wrote: Hi there, Your Subject: mentions "nested location"; but your config doesn't seem to show any explicit nesting. > location /parent/ { > set $basic_file /nginx/conf/.htpasswd; > if ($request_uri ~ (visualize|dashboard|settings)){ > set $basic_file /nginx/conf/.dev_pass; > } I was going to say "You're using _if_ inside _location_, which you shouldn't do unless you know why you shouldn't do it". But when I use something very similar in a nginx-1.9.1 system, it works for me. /parent/file.html requires the password from .htpasswd; /parent/file.html?dashboard requires the password from .dev_pass. So I'm unable to reproduce your problem, which means that I can't demonstrate that any "fix" would actually make it work for you. My first suggestion would be to move those four config lines to be outside of all location{}s, so have them at server{} level. When I did that in my test system, it stayed working. If that does work for you, then you could try to achieve the same using a map (at http{} level) as already suggested - set a variable for the password-file to be one thing by default, and have a separate value for the request_uri values that you care about, and then use that variable in your config. 
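[Editor's note] The map-based variant Francis mentions can be sketched like this (the file paths are copied from the thread; the upstream name is a placeholder):

```nginx
# At http{} level: pick the password file based on the request URI.
map $request_uri $basic_file {
    default                          /nginx/conf/.htpasswd;
    ~(visualize|dashboard|settings)  /nginx/conf/.dev_pass;
}

server {
    location /parent/ {
        auth_basic           "Restricted";
        auth_basic_user_file $basic_file;
        proxy_pass           http://backend;
    }
}
```

This keeps the per-request logic out of the location{} block entirely, avoiding the "if inside location" caveat.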
> proxy_pass http:///; > auth_basic "Restricted"; > auth_basic_user_file $basic_file; > } > > I would like file .htpasswd to be used by default and in case of > $request_uri having (visualize|dashboard|settings), auth should happen via > .dev_pass. > > For some reason the authentication is happening only via htpasswd even if > I have (visualize|dashboard|settings) in the URI. > > Any idea what I am missing? _if_ inside _location_ may not do what you expect. Especially if there is more than one _if_ in there; but your example config does not show that. But your config works for me in 1.9.1. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Sep 29 21:27:16 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 29 Sep 2015 22:27:16 +0100 Subject: nginx 1.9.5 and realip In-Reply-To: References: Message-ID: <20150929212716.GI3177@daoine.org> On Sun, Sep 27, 2015 at 10:24:22AM -0700, Frank Liu wrote: Hi there, > Just tried the latest 1.9.5 rpm that has the realip module enabled. The > $remote_port variable becomes blank. Is that known? > Is there another way I can get the remote_port? I think that there is not. The connection comes from srcip:srcport to dstip:dstport. By using the realip module, you ask nginx to replace srcip anywhere it might care what value it has. Once you do that, I think that srcport is meaningless. You can see the connecting ip:port in the debug log, on the accept: line; so I guess that in principle, a module that ran before realip (or a modified realip) could make it available somewhere. But I don't think it can be done without getting code written. Good luck with it, f -- Francis Daly francis at daoine.org From vbart at nginx.com Tue Sep 29 23:07:06 2015 From: vbart at nginx.com (Valentin V.
Bartenev) Date: Wed, 30 Sep 2015 02:07:06 +0300 Subject: Problems with HTTP/2 In-Reply-To: References: Message-ID: <5443621.176RmXsMbV@vbart-laptop> On Tuesday 29 September 2015 14:14:44 Aapo Talvensaari wrote: > On 29 September 2015 at 13:38, Sergey Kandaurov wrote: > > So, the header field name output as generated with php > > (and previously guessed by Valentin), > > is invalid as per RFC 7230, which is in turn referenced by RFC 7540. > > > > : field-name = token > > : token = 1*tchar > > : tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." / > > : "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA > > > > Not much to discuss. > > Even if it works just fine with every browser and every protocol in > Nginx other than > HTTP/2? I kinda agree, but in the spirit of "be liberal in what you accept, be > conservative in what you send", the issue may still be > worth solving. > > Also, it should be checked what happens on the Nginx side itself, as this may even > be a bug. Why doesn't it reply with something like an internal server > error in that case? I mean, in the worst case this might be a source for a security bug (note: I'm not saying it is). NGINX doesn't reply with anything special about this header in HTTP/2. The behavior you see is on the client side. wbr, Valentin V. Bartenev From vbart at nginx.com Tue Sep 29 23:20:58 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 30 Sep 2015 02:20:58 +0300 Subject: Problems with HTTP/2 In-Reply-To: References: <20150929092543.GF3177@daoine.org> Message-ID: <68758048.MXPLF0Bz13@vbart-laptop> On Tuesday 29 September 2015 13:34:46 Aapo Talvensaari wrote: > On 29 September 2015 at 12:25, Francis Daly wrote: > > On Tue, Sep 29, 2015 at 01:06:50AM +0300, Aapo Talvensaari wrote: > > > It just sets the status code. It is normal PHP 5.3 code [1]. I might need > > > to compile nginx by hand to get more debug info.
> > For what it's worth: > > when I test with a php 5.1.6 and a php 5.3.3, > > header(':', true, 403); > > Using > > header("HTTP/1.1 403 Whatever"); > > sets the status code and does not add the dubious header. > > > Thanks. I know this is probably not the best way to set it (and in this case > I can change it), but it seems like this may cause some problems with user code. > See, the browsers accept it, the Nginx HTTP accepts it, Nginx HTTPS accepts > it, and even Nginx SPDY accepts it. Only Nginx HTTP/2 will not accept it. The error that you have seen isn't from NGINX, it's generated by the browser. The behavior of NGINX is consistent across all protocols, but the reaction of the specific browser that you tested seems not to be. > And by accepting I mean that browsers and other Nginx protocols give the correct > status code as well. It's not about the status code. Note that the function you are calling in PHP does two things: 1. sets the status code; 2. adds an invalid header. Points 1 and 2 aren't related to each other. > > Thanks for debugging this further. I think everyone now has the knowledge of > what is going on. > > Is this a bug or a feature? Yes, it's a bug. But it's in the application that generates the invalid response. The result of such an invalid response is undefined - it may work or may not, depending on the client's behavior. NGINX tolerates it, some browsers do not. > Because it might break user code running on > Nginx, I might call it a bug. The code was already broken, it was a coincidence that it worked. wbr, Valentin V. Bartenev From aapo.talvensaari at gmail.com Tue Sep 29 23:35:58 2015 From: aapo.talvensaari at gmail.com (Aapo Talvensaari) Date: Wed, 30 Sep 2015 02:35:58 +0300 Subject: Problems with HTTP/2 In-Reply-To: <68758048.MXPLF0Bz13@vbart-laptop> References: <20150929092543.GF3177@daoine.org> <68758048.MXPLF0Bz13@vbart-laptop> Message-ID: On 30 September 2015 at 02:20, Valentin V.
Bartenev wrote: > The code was already broken, it was a coincidence that it worked. Thanks for the patience. Case closed. I just wanted to be sure. It seems like Chrome is stricter about HTTP/2 headers than others. Regards Aapo -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Sep 30 04:50:25 2015 From: nginx-forum at nginx.us (bbogdan) Date: Wed, 30 Sep 2015 00:50:25 -0400 Subject: 1.9.5 slower than 1.9.4 or 1.8.0 on static files Message-ID: <87e3fb99c7e1d91e3f834d1cc401ef44.NginxMailingListEnglish@forum.nginx.org> I just migrated a new customer to nginx/php-fpm and noticed a considerable delay with requests. nginx version 1.9.5 compiled from rpm with http/2 hosting: linode Virtualizer: KVM I noticed that the site was slower, with an avg ttfb of ~1.1s. Only when I got to test static assets did I discover that the time to first byte for js/css was the same 1.1s. Switching to nginx 1.8 brings the response time back to a normal ~0.145s. Did I stumble on a bug? Anything I can give you guys to troubleshoot this and figure out why 1.9.5 is slower? Bogdan Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261944,261944#msg-261944 From nginx-forum at nginx.us Wed Sep 30 09:35:01 2015 From: nginx-forum at nginx.us (cacrus) Date: Wed, 30 Sep 2015 05:35:01 -0400 Subject: nginx nested location and different basic authentication file In-Reply-To: <20150929202035.GH3177@daoine.org> References: <20150929202035.GH3177@daoine.org> Message-ID: <06cc1e16ec4e3b50b0caed99e7b77ca2.NginxMailingListEnglish@forum.nginx.org> Thanks for the replies. I figured out the issue. This was not related to the Nginx config. The requests I was trying to route had an issue. Btw, I shifted to map as well and both configurations worked. Thanks for your comments, I appreciate it.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261867,261948#msg-261948 From fsantiago at deviltracks.net Wed Sep 30 10:32:09 2015 From: fsantiago at deviltracks.net (=?utf-8?q?fsantiago=40deviltracks=2Enet?=) Date: Wed, 30 Sep 2015 06:32:09 -0400 Subject: 1.9.5 slower than 1.9.4 or 1.8.0 on static files In-Reply-To: <87e3fb99c7e1d91e3f834d1cc401ef44.NginxMailingListEnglish@forum.nginx.org> References: <87e3fb99c7e1d91e3f834d1cc401ef44.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1D9557CD-E6DA-4281-98A6-8FC660496DDE@deviltracks.net> I second this and have seen the same performance hit. Sincerely, Fabian Santiago Sent from my iPhone > On Sep 30, 2015, at 12:50 AM, bbogdan wrote: > > I just migrated a new customer to nginx/php-fpm and noticed a considerable > delay with requests. > > nginx version 1.9.5 compiled from rpm with http/2 > > hosting: linode > Virtualizer: KVM > > > I noticed that the site was slower, with an avg ttfb of ~1.1s. > Only when I got to test static assets did I discover that the time to first > byte for js/css was the same 1.1s. > > Switching to nginx 1.8 brings the response time back to a normal ~0.145s. > > Did I stumble on a bug? > > Anything I can give you guys to troubleshoot this and figure out why 1.9.5 is > slower?
> > Bogdan > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261944,261944#msg-261944 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Sep 30 10:47:04 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 30 Sep 2015 06:47:04 -0400 Subject: 1.9.5 slower than 1.9.4 or 1.8.0 on static files In-Reply-To: <1D9557CD-E6DA-4281-98A6-8FC660496DDE@deviltracks.net> References: <1D9557CD-E6DA-4281-98A6-8FC660496DDE@deviltracks.net> Message-ID: <8fc3c1184b3d93594adf10b6ed5a21b0.NginxMailingListEnglish@forum.nginx.org> Can you check the log files for MISS cache entries, if you are using caching on these static files? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261944,261950#msg-261950 From rikske at deds.nl Wed Sep 30 12:45:57 2015 From: rikske at deds.nl (rikske at deds.nl) Date: Wed, 30 Sep 2015 14:45:57 +0200 Subject: Nginx HTTP/2 module (ALPN) TLS on RHEL 7.* In-Reply-To: <5609A751.5010806@nginx.com> References: <560954AC.1070409@xtremenitro.org> <56097B60.7090701@xtremenitro.org> <5609A751.5010806@nginx.com> Message-ID: <7803de238fb5a4b10319cc287b754686.squirrel@deds.nl> Hi Andrew, Thanks for your reply. I am familiar with the command but had forgotten that if ALPN is not supported by the OpenSSL client you also can't test it. This sounds strange, but I thought my original command was wrong, because the program always accepted all the options. The OpenSSL 1.0.1 client returns no error and gives no report that ALPN is unsupported when you test with it. At least, I thought ALPN was supported by Red Hat, but it is not supported by Red Hat's EL7 OpenSSL implementation. That's the answer to my question. So I recompiled the Nginx SRPM with the latest OpenSSL and it's working now. Andrew, Nginx is now compiled with version 1.0.2 and the server is still running 1.0.1. This can't cause problems, can it?
Thanks again Andrew, Regards, Rik Ske > Hi, > > If you compiled with OpenSSL 1.0.2d then it should have ALPN, otherwise > it will fall back to NPN. One way to test is with OpenSSL 1.0.2d: > > (echo | openssl s_client -alpn h2 -connect example.net:443) | grep ALPN > > This will respond with something like the following if it is supported: > > ALPN protocol: h2 > > The warning you have flagged is only if OpenSSL doesn't support either > NPN or ALPN. This means HTTP/2 and SPDY support isn't possible at all > (i.e. OpenSSL < 1.0.1 or a custom build with NPN/ALPN disabled). > > Kind Regards > Andrew > > On 28/09/15 19:13, rikske at deds.nl wrote: >> Hi, >> >> I don't know. >> Can't find anything about Nginx, OpenSSL ALPN and/or NPN in the logs. >> >> HTTP/2 seems to be running fine here according to my testing tools. >> But there is nothing about ALPN or NPN. >> >> The only thing I can find in their code is that Nginx should warn >> the >> user in case the end user doesn't provide a valid OpenSSL. >> I can not reproduce that warning. >> >> So my question is still applicable. >> >> Is the Nginx HTTP/2 module using (ALPN) TLS on RHEL 7.*? >> >> Perhaps a Nginx developer can take a look at it? >> >> Thanks, >> >> + if (lsopt->http2 && lsopt->ssl) { >> ngx_conf_log_error(NGX_LOG_WARN, cf, 0, >> - "nginx was built without OpenSSL ALPN or NPN >> " >> - "support, SPDY is not enabled for %s", >> lsopt->addr); >> + "nginx was built with OpenSSL that lacks >> ALPN " >> + "and NPN support, HTTP/2 is not enabled for >> %s", >> + lsopt->addr); >> } >> >> >>> Like this?
>>> >>> nginx version: nginx/1.9.5 >>> built by gcc 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) >>> built with OpenSSL 1.0.2d-fips 9 Jul 2015 >>> TLS SNI support enabled >>> configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx >>> --conf-path=/etc/nginx/nginx.conf >>> --error-log-path=/var/log/nginx/error.log >>> --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid >>> --lock-path=/var/run/nginx.lock >>> --http-client-body-temp-path=/var/cache/nginx/client_temp >>> --http-proxy-temp-path=/var/cache/nginx/proxy_temp >>> --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp >>> --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp >>> --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx >>> --group=nginx --with-http_ssl_module --with-http_realip_module >>> --with-http_addition_module --with-http_sub_module >>> --with-http_dav_module --with-http_flv_module --with-http_mp4_module >>> --with-http_gunzip_module --with-http_v2_module >>> --with-http_image_filter_module --with-http_gzip_static_module >>> --with-http_random_index_module --with-http_secure_link_module >>> --with-http_stub_status_module --with-mail --with-mail_ssl_module >>> --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wall >>> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong >>> --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' >>> >>> Then how to test if I am already using ALPN? :) >>> >>> On 09/28/2015 10:15 PM, rikske at deds.nl wrote: >>>> Hi, >>>> >>>> So what you're saying. >>>> >>>> Nginx HTTP/2 module won't work on RHEL 7.1 with (ALPN) TLS, >>>> until you are using OpenSSL version 1.0.2 on RHEL 7.1 in any manner >>>> whatsoever? >>>> >>>> Can anyone confirm this? >>>> >>>> Thanks, >>>> >>>> Regards, >>>> >>>> Rik Ske >>>> >>>>> Hello! >>>>> >>>>> On 09/28/2015 08:40 PM, rikske at deds.nl wrote: >>>>>> Dear, >>>>>> >>>>>> Does the Nginx HTTP/2 module work on RHEL 7.1 with (ALPN) TLS?
>>>>>> >>>>>> It seems like the HTTP/2 module is enabled by default in your RHEL >>>>>> 7.1 >>>>>> based rpm and srpm. >>>>>> >>>>>> Your Nginx website writes about: >>>>>> >>>>>> "Note that accepting HTTP/2 connections over TLS requires the >>>>>> “Application-Layer Protocol Negotiation” (ALPN) TLS extension >>>>>> support, >>>>>> which is available only since OpenSSL version 1.0.2. Using the “Next >>>>>> Protocol Negotiation” (NPN) TLS extension for this purpose >>>>>> (available since OpenSSL version 1.0.1) is not guaranteed." >>>>>> >>>>>> RHEL 7.1 is using OpenSSL 1.0.1e with a whole bunch of patches and >>>>>> backports. >>>>>> >>>>>> Can't find anything in the changelog of RHEL 7.1's OpenSSL about >>>>>> ALPN. >>>>>> The only thing I can find is "Support for Application Layer Protocol >>>>>> Negotiation (ALPN) has been added." in RHEL's GnuTLS. >>>>> >>>>> Yes, RHEL is using openssl 1.0.1e-42. But I've compiled using openssl >>>>> 1.0.2d + crypto-policies under centos7. And it was successfully deployed >>>>> on >>>>> my sandbox. >>>>> >>>>> The rpm was compiled on fedora22, and ported to el7 using mock. >>>>> >>>>> https://gitlab.com/antituhan/rpms/tree/master. >>>>> $ openssl version >>>>> OpenSSL 1.0.2d-fips 9 Jul 2015 >>>>> $ uname -a >>>>> Linux 3.10.0-229.14.1.el7.x86_64 #1 SMP Tue Sep 15 15:05:51 >>>>> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux >>>>> >>>>> Enjoy.
>>>>>> >>>>>> Thanks, >>>>>> >>>>>> Regards, >>>>>> >>>>>> Rik Ske > > -- > Andrew Hutchings (LinuxJedi) > Senior Developer Advocate, Nginx Inc. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Sep 30 16:55:27 2015 From: nginx-forum at nginx.us (bbogdan) Date: Wed, 30 Sep 2015 12:55:27 -0400 Subject: 1.9.5 slower than 1.9.4 or 1.8.0 on static files In-Reply-To: <8fc3c1184b3d93594adf10b6ed5a21b0.NginxMailingListEnglish@forum.nginx.org> References: <1D9557CD-E6DA-4281-98A6-8FC660496DDE@deviltracks.net> <8fc3c1184b3d93594adf10b6ed5a21b0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4e69492c4b224f6cb3938baba551f1f5.NginxMailingListEnglish@forum.nginx.org> No cache was enabled. Accessing the files directly on the drive took a lot longer on 1.9.5 than on previous versions. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261944,261955#msg-261955
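[Editor's note] A quick way to quantify "slower" for a static asset is curl's write-out timers. A sketch (shown against a local file:// URL so it runs anywhere; the same -w format works against the http:// assets on the 1.8 and 1.9.5 servers, making the regression easy to compare):

```shell
# Print connect time and time-to-first-byte using curl's write-out timers.
# Swap the placeholder URL for e.g. http://example.com/static/app.js when
# comparing the two nginx versions.
url="file:///etc/passwd"
curl -o /dev/null -s -w "connect: %{time_connect}s  ttfb: %{time_starttransfer}s\n" "$url"
```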