From jiangmuhui at gmail.com Fri Sep 1 16:59:04 2017
From: jiangmuhui at gmail.com (Muhui Jiang)
Date: Sat, 2 Sep 2017 00:59:04 +0800
Subject: HTTP2 DATA frames with 0 length
Message-ID: 

Hi,

I am using nginx 1.9.15. I noticed that when I make an HTTP/2 request, nginx first sends the DATA frames that carry the object, but ends with a DATA frame whose length is zero, carrying the end-of-data flag.

I am curious why it is designed this way; I think this extra DATA frame is unnecessary. Maybe nginx has already fixed this. If so, could you tell me the exact version? Many thanks.

Regards,
Muhui

From vbart at nginx.com Fri Sep 1 20:32:26 2017
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Fri, 01 Sep 2017 23:32:26 +0300
Subject: HTTP2 DATA frames with 0 length
In-Reply-To: 
References: 
Message-ID: <2257102.MrgPifYNUk@vbart-laptop>

On Friday, 1 September 2017, 19:59:04 MSK Muhui Jiang wrote:
> Hi
>
> I am using nginx 1.9.15. I noticed that when I make an HTTP/2 request,
> nginx first sends the DATA frames that carry the object, but ends with a
> DATA frame whose length is zero, carrying the end-of-data flag.
>
> I am curious why it is designed this way; I think this extra DATA frame is
> unnecessary. Maybe nginx has already fixed this. If so, could you tell me
> the exact version? Many thanks.
>

Request processing in nginx is quite complex. There can be many data sources, modules, filter modules, and so on. The HTTP/2 module works quite straightforwardly: if it sees the end of the buffer chain, it adds the END_STREAM flag; otherwise, it doesn't.

So whether you see END_STREAM in a separate DATA frame or not depends on many factors and your configuration.

wbr, Valentin V.
Bartenev

From jiangmuhui at gmail.com Sat Sep 2 05:51:57 2017
From: jiangmuhui at gmail.com (Muhui Jiang)
Date: Sat, 2 Sep 2017 13:51:57 +0800
Subject: HTTP2 DATA frames with 0 length
In-Reply-To: <2257102.MrgPifYNUk@vbart-laptop>
References: <2257102.MrgPifYNUk@vbart-laptop>
Message-ID: 

Hi Valentin,

Thanks for your email.

"if it sees the end of the buffer chain, it adds the END_STREAM flag"
========================
Does the buffer chain here mean the SSL buffer? What I am curious about is why nginx needs to send a DATA frame whose payload length is zero at all; nginx could set the END_STREAM flag on the previous DATA frame instead.

I am doing some research work using nginx, and I would like nginx to return only one DATA frame when I request an object. However, no matter how small the object is, nginx always adds an extra DATA frame whose length is zero, plus the END_STREAM flag. Do you have any suggestions, or could you propose a configuration under which nginx returns only one DATA frame per stream? Many thanks.

Regards,
Muhui

2017-09-02 4:32 GMT+08:00 Valentin V. Bartenev :
> On Friday, 1 September 2017, 19:59:04 MSK Muhui Jiang wrote:
> > Hi
> >
> > I am using nginx 1.9.15. I noticed that when I make an HTTP/2 request,
> > nginx first sends the DATA frames that carry the object, but ends with a
> > DATA frame whose length is zero, carrying the end-of-data flag.
> >
> > I am curious why it is designed this way; I think this extra DATA frame
> > is unnecessary. Maybe nginx has already fixed this. If so, could you
> > tell me the exact version? Many thanks.
> >
>
> Request processing in nginx is quite complex. There can be many data
> sources, modules, filter modules, and so on. The HTTP/2 module works quite
> straightforwardly: if it sees the end of the buffer chain, it adds the
> END_STREAM flag; otherwise, it doesn't.
>
> So, whether you see END_STREAM in a separate DATA frame or not depends on
> many factors and your configuration.
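For anyone who wants to check this behaviour on the wire: the END_STREAM flag lives in the fixed 9-byte frame header defined by RFC 7540, so a zero-length DATA frame is just a header with no payload. A minimal sketch in plain Python of decoding such a header (the example bytes are constructed by hand for illustration, not captured from nginx):

```python
import struct

# RFC 7540 section 4.1: every HTTP/2 frame starts with a 9-byte header:
# 24-bit payload length, 8-bit type, 8-bit flags, then one reserved bit
# plus a 31-bit stream identifier.
DATA_FRAME = 0x0
FLAG_END_STREAM = 0x1

def parse_frame_header(raw: bytes):
    """Decode a 9-byte HTTP/2 frame header into its four fields."""
    if len(raw) < 9:
        raise ValueError("need at least 9 header bytes")
    length = int.from_bytes(raw[0:3], "big")
    frame_type = raw[3]
    flags = raw[4]
    stream_id = struct.unpack(">I", raw[5:9])[0] & 0x7FFFFFFF
    return length, frame_type, flags, stream_id

# A zero-length DATA frame on stream 1 whose only job is END_STREAM:
header = bytes([0, 0, 0, DATA_FRAME, FLAG_END_STREAM, 0, 0, 0, 1])
length, frame_type, flags, stream_id = parse_frame_header(header)
print(length, frame_type == DATA_FRAME, bool(flags & FLAG_END_STREAM), stream_id)
# prints: 0 True True 1
```

Feeding captured frame headers through a parser like this shows whether END_STREAM arrived on the last data-bearing frame or on a trailing empty one.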
> wbr, Valentin V. Bartenev
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Sat Sep 2 19:30:47 2017
From: nginx-forum at forum.nginx.org (foly)
Date: Sat, 02 Sep 2017 15:30:47 -0400
Subject: nginx https server logs
Message-ID: 

Hello,

When I connect to an HTTP site, the server log shows all requests; the number of requests is the same as the number of log entries.

When I connect to an HTTPS site (with Let's Encrypt, for example), the server log shows only the first request: I can refresh the page again and again without new entries appearing in the server log.

Is this a misconfiguration? I followed a manual to set up Let's Encrypt with nginx.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276227,276227#msg-276227

From nginx-forum at forum.nginx.org Sat Sep 2 19:53:52 2017
From: nginx-forum at forum.nginx.org (ivy)
Date: Sat, 02 Sep 2017 15:53:52 -0400
Subject: Separated reverse proxy for different users
In-Reply-To: <20170830180937.GC20907@daoine.org>
References: <20170830180937.GC20907@daoine.org>
Message-ID: <2168554542960e56a35c147ad6e46cbe.NginxMailingListEnglish@forum.nginx.org>

Hi Francis,

Thanks for your reply.
I added a default value to the map file and replaced "localhost" with 127.0.0.1, so currently the map file looks like:

ivy 10080;
john 10081;
default 65355;

The conf file looks like:

map $remote_user $rp_port {
    include /home/secure/reverse_proxy.map;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    set $auth_status 100;
    server_name localhost;
    root /usr/share/nginx/html;
    include /etc/nginx/default.d/*.conf;

    location / {
        try_files $uri $uri/ =404;
        auth_basic "restricted content";
        auth_basic_user_file "/home/secure/.passwords";
        auth_request_set $auth_status $upstream_status;

        proxy_pass http://127.0.01:$rp_port
    }
}

This gave me the following error:

2017/09/02 12:46:32 [error] 26959#26959: *1905 connect() failed (111: Connection refused) while connecting to upstream, client: client_ip, server: ..., request: "POST / HTTP/1.1", upstream: "http://server_ip:10081/", host: "server_ip", referrer: "http://server_ip/"

I added the URI to the proxy_pass line:

proxy_pass http://127.0.0.1:$rp_port$uri;

Among many iterative experiments I found that $uri and $request_uri give the same result:

- On a plain root request (like my.site.info) the needed page is loaded.

client_ip - ivy [02/Sep/2017:14:59:43 -0400] "GET / HTTP/1.1" 200 33185 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36" "-"

- However, on a request for any sub-location (like my.site.info/about) the proxy_pass generates a redirect to itself.

client_ip - ivy [02/Sep/2017:14:59:47 -0400] "GET /sysinfo/ HTTP/1.1" 404 571 "http://server_ip/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36" "-"

This page (https://www.jethrocarr.com/2013/11/02/nginx-reverse-proxies-and-dns-resolution/) provides a number of workarounds based on changing upstreams. I tried all of them with the same result as above: sub-locations give error 404.

I'd be glad to try more ideas. Thank you.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276150,276228#msg-276228 From ish at lsl.digital Sun Sep 3 06:19:09 2017 From: ish at lsl.digital (Ish Sookun) Date: Sun, 3 Sep 2017 10:19:09 +0400 Subject: nginx https server logs In-Reply-To: References: Message-ID: <09d4dabd-75f2-5112-32bd-6301226ad6ef@lsl.digital> Hi Foly, On 09/02/2017 11:30 PM, foly wrote: > When I connect to HTTPS site (with Let's Encrypt, for example) server log > shows only first request. > I mean that I can refresh page again and again without new entries in server > log. > > Is it misconfiguration? What's the status code for the refreshed page? Are you getting HTTP 200? If not, could you verify whether subsequent requests are not cached by your browser or proxy (if you're behind one). Regards, Ish Sookun From francis at daoine.org Sun Sep 3 09:16:58 2017 From: francis at daoine.org (Francis Daly) Date: Sun, 3 Sep 2017 10:16:58 +0100 Subject: Separated reverse proxy for different users In-Reply-To: <2168554542960e56a35c147ad6e46cbe.NginxMailingListEnglish@forum.nginx.org> References: <20170830180937.GC20907@daoine.org> <2168554542960e56a35c147ad6e46cbe.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170903091658.GD20907@daoine.org> On Sat, Sep 02, 2017 at 03:53:52PM -0400, ivy wrote: Hi there, there are a few things here I'm not sure about. > location / { > try_files $uri $uri/ =404; Why is that line there? That probably says "return 404 to most requests". You report that you get a 404 to most requests. Remove that line if you cannot say what you think it should be doing. > auth_basic "restricted content"; > auth_basic_user_file "/home/secure/.passwords"; > auth_request_set $auth_status $upstream_status; > > proxy_pass http://127.0.01:$rp_port If you copy-paste'd that line, you possibly have some typos in your config. If you transcribed that line, then this is an indication of why you should not transcribe. 
> 2017/09/02 12:46:32 [error] 26959#26959: *1905 connect() failed (111:
> Connection refused) while connecting to upstream, client: client_ip, server:
> ..., request: "POST / HTTP/1.1", upstream: "http://server_ip:10081/", host:

"10081" corresponds to "john", yes?

Your proxy_pass line wanted to talk to 127.0.0.1, but the log line says server_ip. I suspect that you are not testing with the configuration/logs that you are showing here.

Anyway: the log line says that the server on 10081 is not running. Is the server on 10081 running? If not, get it running before you test again.

> I added URI in the proxy_pass line:
> proxy_pass http://127.0.0.1:$rp_port$uri;

That should not be necessary, if the first problems are solved.

> - However, on request of any sub-location (like: my.site.info/about) the
> proxy_pass generates redirect to itself.

Just for clarity: a 404 is not a redirect to itself. The 404 probably comes from your try_files line, before proxy_pass takes effect. Your upstream server on port 10081 probably shows nothing in its logs for this request.

> Here
> (https://www.jethrocarr.com/2013/11/02/nginx-reverse-proxies-and-dns-resolution/)
> provided number of workarounds with changing upstreams. I tried all of them
> with the same result as above - sub-locations give error 404.

I don't see any suggestions on that page that are relevant to you; you don't have varying hostnames in your proxy_pass directives, unless I have missed something.

> I'd glad to try more ideas.

Remove the try_files line; and if something remains imperfect, build a test system that does not have any secret names or addresses and show the actual tested configuration, request, and logged result.
Good luck with it,

f
--
Francis Daly        francis at daoine.org

From ish at lsl.digital Sun Sep 3 11:17:17 2017
From: ish at lsl.digital (Ish Sookun)
Date: Sun, 3 Sep 2017 15:17:17 +0400
Subject: Separated reverse proxy for different users
In-Reply-To: <2168554542960e56a35c147ad6e46cbe.NginxMailingListEnglish@forum.nginx.org>
References: <20170830180937.GC20907@daoine.org> <2168554542960e56a35c147ad6e46cbe.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hi Ivy,

On 09/02/2017 11:53 PM, ivy wrote:
> location / {
>     try_files $uri $uri/ =404;
>     auth_basic "restricted content";
>     auth_basic_user_file "/home/secure/.passwords";
>     auth_request_set $auth_status $upstream_status;
>
>     proxy_pass http://127.0.01:$rp_port
> }

The above try_files directive would return only the files & directories that are present within the document_root. Any other request would hit a 404 error. The requests do not seem to reach the proxy_pass.

If all requests are to be served by the proxy_pass, I suggest commenting/removing the try_files directive. E.g.

location / {
    auth_basic "restricted content";
    auth_basic_user_file "/home/secure/.passwords";
    auth_request_set $auth_status $upstream_status;

    proxy_pass http://127.0.0.1:$rp_port;
}

Regards,

Ish Sookun

From nginx-forum at forum.nginx.org Sun Sep 3 15:51:28 2017
From: nginx-forum at forum.nginx.org (foly)
Date: Sun, 03 Sep 2017 11:51:28 -0400
Subject: nginx https server logs
In-Reply-To: <09d4dabd-75f2-5112-32bd-6301226ad6ef@lsl.digital>
References: <09d4dabd-75f2-5112-32bd-6301226ad6ef@lsl.digital>
Message-ID: <14f2051849d137ce632bfdc30f1f9593.NginxMailingListEnglish@forum.nginx.org>

Hello, Ish Sookun

The status code for the HTTP connection is 200, and it is written to the server log each time I refresh the page. The status code for the HTTPS connection is also 200, but it is written to the server log only once; then I can refresh again and again with no new log entries. That looks very strange.

I found this using this modified version of nginx:
https://github.com/NeusoftSecurity/SEnginx

I have not tried official releases for a while.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276227,276238#msg-276238

From nginx-forum at forum.nginx.org Mon Sep 4 19:10:01 2017
From: nginx-forum at forum.nginx.org (Hasta-La-Vista_NN)
Date: Mon, 04 Sep 2017 15:10:01 -0400
Subject: How to change/add cookie in http request for sticky?
Message-ID: 

Hi all.

My situation is the following: I'm building a backend in Java and using nginx as a balancer (sticky module). Sticky uses a cookie to identify which server the browser is currently working with. The browser sends requests such as POST, OPTIONS, GET, and so on, and all of these requests contain the cookie with the route parameter. The route parameter is first generated by the sticky module, and the browser then uses it for the following requests.

My problem now is that the frontend developers created a DELETE request without the cookie. But they can pass the route parameter as a parameter of the request, something like this:

DELETE http://myserver.com/winter/user/john?route=12345

So this request can be sent to the wrong upstream, because it has no cookie. My solution is to copy the parameter from the URL into the cookie in nginx, but I don't know how to do it. My nginx:

upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    sticky secure httponly;
}

server {
    ...
    location / {
        proxy_pass http://backend;
        ...
    }
}

proxy_set_header Cookie "route=$arg_route";

Would this help? Or is there another way?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276252,276252#msg-276252

From mail at kilian-ries.de Mon Sep 4 20:46:48 2017
From: mail at kilian-ries.de (Kilian Ries)
Date: Mon, 4 Sep 2017 20:46:48 +0000
Subject: No Upstream Proxy Headers
In-Reply-To: 
References: 
Message-ID: <1D29EB36-B460-4AE8-B776-315EEEE39382@kilian-ries.de>

Hello,

I'm running nginx (version nginx/1.13.1) with two vhosts with exactly the same configuration.
The only difference is the upstream section: each vhost points to a different upstream server / IP. My configuration looks like this:

###
...
location / {
    proxy_pass http://IP_ADDRESS;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $proxy_protocol_addr;
    proxy_set_header X-Forwarded-For $proxy_protocol_addr;
}
...
###

vhost_1 works without any problem, and I can see both proxy headers in the tcpdump and in my upstream apache access logs. vhost_2 doesn't show me any x-forwarded headers, neither in the upstream apache nor in the tcpdump (which is running locally on my nginx host). So it looks like the proxy headers are not attached to the HTTP request at all.

Additionally, I can see via tcpdump that the HTTP protocol (GET request) to vhost_2 (broken) is 1.0 and to vhost_1 (working) is 1.1. However, both responses are 1.1, so both upstream apaches are capable of HTTP 1.1.

Does somebody know why my nginx is sending its HTTP request to vhost_2 via HTTP 1.0 and not 1.1? And if so, could it be that proxy_set_header does not work with HTTP 1.0? That's the only difference I'm seeing in my setup / tcpdump ... could there be any other reason why nginx is not attaching the upstream headers for my vhost_2?

Thanks

Greets
Kilian

From mdounin at mdounin.ru Tue Sep 5 15:42:32 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 5 Sep 2017 18:42:32 +0300
Subject: nginx-1.13.5
Message-ID: <20170905154232.GQ93611@mdounin.ru>

Changes with nginx 1.13.5                                        05 Sep 2017

*) Feature: the $ssl_client_escaped_cert variable.

*) Bugfix: the "ssl_session_ticket_key" directive and the "include" parameter of the "geo" directive did not work on Windows.

*) Bugfix: incorrect response length was returned on 32-bit platforms when requesting more than 4 gigabytes with multiple ranges.
*) Bugfix: the "expires modified" directive and processing of the "If-Range" request header line did not use the response last modification time if proxying without caching was used. -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Sep 5 19:58:38 2017 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 5 Sep 2017 15:58:38 -0400 Subject: [nginx-announce] nginx-1.13.5 In-Reply-To: <20170905154236.GR93611@mdounin.ru> References: <20170905154236.GR93611@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.13.5 for Windows https://kevinworthington.com/nginxwin1135 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Sep 5, 2017 at 11:42 AM, Maxim Dounin wrote: > Changes with nginx 1.13.5 05 Sep > 2017 > > *) Feature: the $ssl_client_escaped_cert variable. > > *) Bugfix: the "ssl_session_ticket_key" directive and the "include" > parameter of the "geo" directive did not work on Windows. > > *) Bugfix: incorrect response length was returned on 32-bit platforms > when requesting more than 4 gigabytes with multiple ranges. > > *) Bugfix: the "expires modified" directive and processing of the > "If-Range" request header line did not use the response last > modification time if proxying without caching was used. 
>
> -- 
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx-announce mailing list
> nginx-announce at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-announce
>

From chase at sillevis.net Wed Sep 6 10:15:33 2017
From: chase at sillevis.net (Chase Sillevis)
Date: Wed, 6 Sep 2017 12:15:33 +0200
Subject: nginx-1.12.1 ssl_session_timeout overwritten by default_server
Message-ID: <0e478cb5-607e-407d-8dee-668adaf88d8c@Spark>

Hi Nginx,

Today I ran into the case that the value for ssl_session_timeout was overwritten by a different server block (namely, the one with default_server). After asking around on IRC, it seems that this is more or less expected behaviour ("I suspect because TLS/SSL is done before the HTTP protocol"); however, I am left wondering which other variables, besides ssl_session_timeout, I should worry about here.

And is this indeed expected behaviour?

Thanks in advance.

Kind regards,
Chase Sillevis

From mdounin at mdounin.ru Wed Sep 6 13:37:04 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 6 Sep 2017 16:37:04 +0300
Subject: nginx-1.12.1 ssl_session_timeout overwritten by default_server
In-Reply-To: <0e478cb5-607e-407d-8dee-668adaf88d8c@Spark>
References: <0e478cb5-607e-407d-8dee-668adaf88d8c@Spark>
Message-ID: <20170906133704.GX93611@mdounin.ru>

Hello!

On Wed, Sep 06, 2017 at 12:15:33PM +0200, Chase Sillevis via nginx wrote:

> Today I ran into the case that the value for
> ssl_session_timeout was overwritten by a different server block
> (namely, the one with default_server). After asking around on
> IRC, it seems that this is more or less expected behaviour
> ("I suspect because TLS/SSL is done before the HTTP protocol");
> however, I am left wondering which other variables, besides
> ssl_session_timeout, I should worry about here.
>
> And is this indeed expected behaviour?

When using SSL and name-based virtual servers, there are two basic cases to consider:

1. The client is not using the Server Name Indication (SNI) TLS extension. This is rare nowadays, though it still happens. In this case, the whole SSL handshake happens before the name the client tries to access is even known, and all ssl_* settings will be applied from the default server.

2. The client is using SNI. In this case, the name the client tries to connect to is known in advance, and it is possible to apply some of the ssl_* settings from the relevant name-based virtual server. Most notably, the appropriate SSL certificate will be used. It is not possible to apply all settings though, mostly due to OpenSSL limitations. In particular:

- session resumption happens before the SNI callback, and hence all session-related settings will be used from the default server (ssl_session_*);

- the protocol will be fixed by OpenSSL before the SNI extension is parsed, and hence ssl_protocols will be used from the default server;

- ssl_ecdh_curve will be used from the default server (https://trac.nginx.org/nginx/ticket/1089).

-- 
Maxim Dounin
http://nginx.org/

From nginx-forum at forum.nginx.org Wed Sep 6 21:20:11 2017
From: nginx-forum at forum.nginx.org (Pavel Murzakov)
Date: Wed, 06 Sep 2017 17:20:11 -0400
Subject: [PATCH] support for UNIX socket in abstract namespace
In-Reply-To: <2ac039d5c22fa068710fc16c39472fda.NginxMailingListEnglish@forum.nginx.org>
References: <20110523183726.GE39527@sysoev.ru> <2ac039d5c22fa068710fc16c39472fda.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Is there any chance that abstract namespace support will be added for unix sockets?
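For readers unfamiliar with the feature being requested: in the Linux abstract namespace, a UNIX socket address begins with a NUL byte instead of a filesystem path, so no socket file is created and nothing has to be unlinked on shutdown. A minimal sketch in Python (the socket name here is invented for illustration, and this works only on Linux):

```python
import socket

def make_abstract_listener(name: str) -> socket.socket:
    """Bind a listening UNIX socket in the Linux abstract namespace.

    A leading NUL byte in the address selects the abstract namespace:
    the name exists only in kernel memory, so no filesystem entry is
    created and the address disappears when the last socket is closed.
    """
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.bind("\0" + name)  # "\0" prefix selects the abstract namespace
    sock.listen(8)
    return sock

listener = make_abstract_listener("nginx-abstract-demo")  # invented name
# getsockname() reports abstract addresses as bytes, NUL prefix included.
print(listener.getsockname())
listener.close()
```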
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,200086,276278#msg-276278

From josecarlos at coresystems.es Thu Sep 7 00:37:24 2017
From: josecarlos at coresystems.es (José Carlos Sánchez)
Date: Thu, 7 Sep 2017 02:37:24 +0200 (CEST)
Subject: Modules needed to reverse proxy
In-Reply-To: <1763702393.2199.1504744339543.JavaMail.josecarlos@MacBook-Pro-de-Jose-2.local>
Message-ID: <1423573082.2223.1504744642296.JavaMail.josecarlos@MacBook-Pro-de-Jose-2.local>

Hi,

I need to recompile nginx to include the ModSecurity module, and I want to take the opportunity to avoid compiling unnecessary modules. I use nginx on Debian 9.1, and I use it only as a reverse proxy. Does anyone have the list of modules needed for reverse proxying?

Thanks in advance

-- 
Jose Carlos Sanchez Ramirez
josecarlos at coresystems.es
Bravo Murillo, 101 28020 Madrid
T. +34915349910 M. +34626487867
http://www.coresystems.es
Download the CA certificate

In accordance with Organic Law 15/99 on Data Protection, we inform you that the data contained in this document form part of a file owned by Core Systems Servicios y Soluciones, S.L. You may at any time exercise your rights of access, rectification, objection and, where applicable, cancellation, by writing to Core Systems, c/ Bravo Murillo, 101, 7º, 28020 Madrid.

From anoopalias01 at gmail.com Thu Sep 7 09:52:26 2017
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Thu, 7 Sep 2017 15:22:26 +0530
Subject: Too many connections in waiting state
Message-ID: 

Hi,

I sometimes see too many connections in the waiting state on nginx.
This often gets cleared by a restart, but otherwise the connections pile up.

###################
Active connections: 4930
server accepts handled requests
442071 442071 584163
Reading: 2 Writing: 539 Waiting: 4420
#######################

[root at web1 ~]# grep keep /etc/nginx/conf.d/http_settings_custom.conf
keepalive_timeout 10s;
keepalive_requests 200;
keepalive_disable msie6 safari;
########################

[root at web1 ~]# nginx -V
nginx version: nginx/1.13.3
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
built with LibreSSL 2.5.5
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules --with-pcre=./pcre-8.41 --with-pcre-jit --with-zlib=./zlib-1.2.11 --with-openssl=./libressl-2.5.5 --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log --http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nobody --group=nobody --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --add-dynamic-module=naxsi-http2/naxsi_src --with-file-aio --with-threads --with-stream --with-stream_ssl_module --with-http_slice_module --with-compat --with-http_v2_module --with-http_geoip_module=dynamic --add-dynamic-module=ngx_pagespeed-1.12.34.2-stable --add-dynamic-module=/usr/local/rvm/gems/ruby-2.4.1/gems/passenger-5.1.8/src/nginx_module --add-dynamic-module=ngx_brotli
--add-dynamic-module=echo-nginx-module-0.60 --add-dynamic-module=headers-more-nginx-module-0.32 --add-dynamic-module=ngx_http_redis-0.3.8 --add-dynamic-module=redis2-nginx-module --add-dynamic-module=srcache-nginx-module-0.31 --add-dynamic-module=ngx_devel_kit-0.3.0 --add-dynamic-module=set-misc-nginx-module-0.31 --add-dynamic-module=testcookie-nginx-module --add-dynamic-module=ModSecurity-nginx --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --with-ld-opt=-Wl,-E
#######################

What could be causing this? The server is quite capable, and this happens only rarely.

-- 
*Anoop P Alias*

From lucas at lucasrolff.com Thu Sep 7 10:29:39 2017
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Thu, 7 Sep 2017 10:29:39 +0000
Subject: Too many connections in waiting state
In-Reply-To: 
References: 
Message-ID: <0D804C76-8FD0-4E19-A273-19D0777636F3@lucasrolff.com>

Check if any of the sites you run on the server get crawled by any crawlers around the time you see an increase. I know that a crawler such as Screaming Frog doesn't handle servers that are capable of http2 connections and have it activated for sites that are getting crawled, and will result in connections with a "waiting" state in nginx.

It might be that there are other tools that behave the same way, but I'd personally look into what kind of traffic/requests happened that increased the waiting state a lot.

Best Regards,

From: nginx on behalf of Anoop Alias
Reply-To: "nginx at nginx.org"
Date: Thursday, 7 September 2017 at 11.52
To: Nginx
Subject: Too many connections in waiting state

Hi,

I see sometimes too many waiting connections on nginx .
This often gets cleared on a restart , but otherwise pileup ################### Active connections: 4930 server accepts handled requests 442071 442071 584163 Reading: 2 Writing: 539 Waiting: 4420 ####################### [root at web1 ~]# grep keep /etc/nginx/conf.d/http_settings_custom.conf keepalive_timeout 10s; keepalive_requests 200; keepalive_disable msie6 safari; ######################## [root at web1 ~]# nginx -V nginx version: nginx/1.13.3 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) built with LibreSSL 2.5.5 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules --with-pcre=./pcre-8.41 --with-pcre-jit --with-zlib=./zlib-1.2.11 --with-openssl=./libressl-2.5.5 --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log --http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nobody --group=nobody --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --add-dynamic-module=naxsi-http2/naxsi_src --with-file-aio --with-threads --with-stream --with-stream_ssl_module --with-http_slice_module --with-compat --with-http_v2_module --with-http_geoip_module=dynamic --add-dynamic-module=ngx_pagespeed-1.12.34.2-stable --add-dynamic-module=/usr/local/rvm/gems/ruby-2.4.1/gems/passenger-5.1.8/src/nginx_module --add-dynamic-module=ngx_brotli 
--add-dynamic-module=echo-nginx-module-0.60 --add-dynamic-module=headers-more-nginx-module-0.32 --add-dynamic-module=ngx_http_redis-0.3.8 --add-dynamic-module=redis2-nginx-module --add-dynamic-module=srcache-nginx-module-0.31 --add-dynamic-module=ngx_devel_kit-0.3.0 --add-dynamic-module=set-misc-nginx-module-0.31 --add-dynamic-module=testcookie-nginx-module --add-dynamic-module=ModSecurity-nginx --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --with-ld-opt=-Wl,-E ####################### What could be causing this? The server is quite capable and this happens only rarely -- Anoop P Alias -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Thu Sep 7 10:58:41 2017 From: anoopalias01 at gmail.com (Anoop Alias) Date: Thu, 7 Sep 2017 16:28:41 +0530 Subject: Too many connections in waiting state In-Reply-To: <0D804C76-8FD0-4E19-A273-19D0777636F3@lucasrolff.com> References: <0D804C76-8FD0-4E19-A273-19D0777636F3@lucasrolff.com> Message-ID: Doing strace on a nginx child in the shutdown state i get ################## strace -p 23846 strace: Process 23846 attached restart_syscall(<... 
resuming interrupted futex ...> ) = -1 ETIMEDOUT (Connection timed out) futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0 futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5395, {1504781553, 30288000}, ffffffff ) = -1 ETIMEDOUT (Connection timed out) futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0 futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5397, {1504781554, 30408000}, ffffffff) = -1 ETIMEDOUT (Connection timed out) futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0 futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5399, {1504781555, 30535000}, ffffffff) = -1 ETIMEDOUT (Connection timed out) futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0 futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5401, {1504781556, 30675000}, ffffffff) = -1 ETIMEDOUT (Connection timed out) futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0 futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5403, {1504781557, 30767000}, ffffffff) = -1 ETIMEDOUT (Connection timed out) futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0 futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5405, {1504781558, 30889000}, ffffffff) = -1 ETIMEDOUT (Connection timed out) futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0 futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5407, {1504781559, 30980000}, ffffffff) = -1 ETIMEDOUT (Connection timed out) futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0 futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5409, {1504781560, 31099000}, ffffffff) = -1 ETIMEDOUT (Connection timed out) futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0 futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5411, {1504781561, 31210000}, ffffffff) = -1 ETIMEDOUT (Connection timed out) futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0 futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5413, {1504781562, 31317000}, ffffffff) = -1 ETIMEDOUT (Connection timed out) futex(0x7a6c0078, 
FUTEX_WAKE_PRIVATE, 1) = 0 futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5415, {1504781563, 31428000}, ffffffff) = -1 ETIMEDOUT (Connection timed out) futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0 futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5417, {1504781564, 31575000}, ffffffff) = -1 ETIMEDOUT (Connection timed out) futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0 futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5419, {1504781565, 31678000}, ffffffff) = -1 ETIMEDOUT (Connection timed out) futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0 futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5421, {1504781566, 31828000}, ffffffff) = -1 ETIMEDOUT (Connection timed out) futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0 futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5423, {1504781567, 31941000}, ffffffff) = -1 ETIMEDOUT (Connection timed out) futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0 futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5425, {1504781568, 32085000}, ffffffff) = -1 ETIMEDOUT (Connection timed out) ############################################### On Thu, Sep 7, 2017 at 3:59 PM, Lucas Rolff wrote: > Check if any of the sites you run on the server get crawled by any > crawlers around the time you see an increase? I know that a crawler such > as Screaming Frog doesn't handle servers that have HTTP/2 enabled for the > sites being crawled, and this will result in connections in a "waiting" > state in nginx. > > > > There might be other tools that behave the same way, but I'd > personally look into what kind of traffic/requests happened around the time > the waiting state increased a lot.
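To make "look into what kind of traffic" concrete, one option is to poll the stub_status counters over time and correlate the Waiting number with crawls or traffic bursts. The sketch below only parses stub_status text with awk; the /nginx_status URL in the comment is an assumption, so point it at wherever your own stub_status location is configured:

```shell
# Sketch: extract the connection-state counters from nginx stub_status
# output. Against a live server you would feed it from something like:
#   curl -s http://127.0.0.1/nginx_status | parse_stub_status
parse_stub_status() {
  awk '
    /^Active connections:/ { print "active=" $3 }
    /^Reading:/            { print "reading=" $2; print "writing=" $4; print "waiting=" $6 }
  '
}

# Fed with the status snapshot quoted in this thread:
parse_stub_status <<'EOF'
Active connections: 4930
server accepts handled requests
 442071 442071 584163
Reading: 2 Writing: 539 Waiting: 4420
EOF
```

Running this prints active=4930, reading=2, writing=539 and waiting=4420; re-running it in a loop (e.g. under watch) shows whether Waiting climbs together with a crawl.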
> > > > Best Regards, > > > > *From: *nginx on behalf of Anoop Alias < > anoopalias01 at gmail.com> > *Reply-To: *"nginx at nginx.org" > *Date: *Thursday, 7 September 2017 at 11.52 > *To: *Nginx > *Subject: *Too many connections in waiting state > > > > Hi, > > > > I see sometimes too many waiting connections on nginx . > > > > This often gets cleared on a restart , but otherwise pileup > > > > ################### > > Active connections: 4930 > > > > server accepts handled requests > > > > 442071 442071 584163 > > > > Reading: 2 Writing: 539 Waiting: 4420 > > > > ####################### > > [root at web1 ~]# grep keep /etc/nginx/conf.d/http_settings_custom.conf > > keepalive_timeout 10s; > > keepalive_requests 200; > > keepalive_disable msie6 safari; > > ######################## > > > > [root at web1 ~]# nginx -V > > nginx version: nginx/1.13.3 > > built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) > > built with LibreSSL 2.5.5 > > TLS SNI support enabled > > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --modules-path=/etc/nginx/modules --with-pcre=./pcre-8.41 --with-pcre-jit > --with-zlib=./zlib-1.2.11 --with-openssl=./libressl-2.5.5 > --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log > --http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid > --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nobody > --group=nobody --with-http_ssl_module --with-http_realip_module > --with-http_addition_module --with-http_sub_module --with-http_dav_module > --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module > --with-http_gzip_static_module --with-http_random_index_module > --with-http_secure_link_module --with-http_stub_status_module > 
--with-http_auth_request_module --add-dynamic-module=naxsi-http2/naxsi_src > --with-file-aio --with-threads --with-stream --with-stream_ssl_module > --with-http_slice_module --with-compat --with-http_v2_module > --with-http_geoip_module=dynamic --add-dynamic-module=ngx_pagespeed-1.12.34.2-stable > --add-dynamic-module=/usr/local/rvm/gems/ruby-2.4.1/ > gems/passenger-5.1.8/src/nginx_module --add-dynamic-module=ngx_brotli > --add-dynamic-module=echo-nginx-module-0.60 --add-dynamic-module=headers-more-nginx-module-0.32 > --add-dynamic-module=ngx_http_redis-0.3.8 --add-dynamic-module=redis2-nginx-module > --add-dynamic-module=srcache-nginx-module-0.31 --add-dynamic-module=ngx_devel_kit-0.3.0 > --add-dynamic-module=set-misc-nginx-module-0.31 --add-dynamic-module=testcookie-nginx-module > --add-dynamic-module=ModSecurity-nginx --with-cc-opt='-O2 -g -pipe -Wall > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong > --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' > --with-ld-opt=-Wl,-E > > ####################### > > > > > > What could be causing this? The server is quite capable and this happens > only rarely > > > > > > -- > > *Anoop P Alias* > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Sep 7 13:32:27 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 7 Sep 2017 16:32:27 +0300 Subject: [PATH] support for UNIX socket in abstract namespace In-Reply-To: References: <20110523183726.GE39527@sysoev.ru> <2ac039d5c22fa068710fc16c39472fda.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170907133227.GE93611@mdounin.ru> Hello! On Wed, Sep 06, 2017 at 05:20:11PM -0400, Pavel Murzakov wrote: > Is there any chance that abstract namespace support will be added for unix > sockets? 
You may try the patch here; it likely still applies: http://mailman.nginx.org/pipermail/nginx-devel/2016-October/008878.html -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Sep 7 14:03:51 2017 From: nginx-forum at forum.nginx.org (Pavel Murzakov) Date: Thu, 07 Sep 2017 10:03:51 -0400 Subject: [PATH] support for UNIX socket in abstract namespace In-Reply-To: <20170907133227.GE93611@mdounin.ru> References: <20170907133227.GE93611@mdounin.ru> Message-ID: Thank you. I knew that there was a patch; I was just wondering whether it's going to be merged or not, because it's handier when you don't need to patch anything. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,200086,276296#msg-276296 From mdounin at mdounin.ru Thu Sep 7 15:41:39 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 7 Sep 2017 18:41:39 +0300 Subject: [PATH] support for UNIX socket in abstract namespace In-Reply-To: References: <20170907133227.GE93611@mdounin.ru> Message-ID: <20170907154139.GH93611@mdounin.ru> Hello! On Thu, Sep 07, 2017 at 10:03:51AM -0400, Pavel Murzakov wrote: > Thank you. > I knew that there was a patch. Just wondering whether it's going to be > merged or not cause it's more handy when you don't need to patch anything. Unlikely, unless there is some positive feedback on the patch. You may help make this happen by testing the patch and providing some feedback. -- Maxim Dounin http://nginx.org/ From ish at lsl.digital Sat Sep 9 13:34:42 2017 From: ish at lsl.digital (Ish Sookun) Date: Sat, 9 Sep 2017 17:34:42 +0400 Subject: Conditional based cache control Message-ID: <6b425395-7316-7850-40f9-fae013b5ea11@lsl.digital> Hello, I need to write a config that will set the "Cache-Control" value of a resource based on the resource's age as returned by the origin server. E.g. if a resource is older than X hours, then the Cache-Control should have a specific value. Anybody ever done that?
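Roughly, I imagine something along these lines (an untested sketch assuming the ngx_http_lua_module; "http://origin", the 24-hour threshold and the max-age values are all placeholders):

```nginx
# Untested sketch: choose Cache-Control from the age implied by the
# upstream's Last-Modified header. Requires ngx_http_lua_module (OpenResty);
# "http://origin" and the 24-hour threshold are placeholders.
location / {
    proxy_pass http://origin;

    header_filter_by_lua_block {
        local lm = ngx.header["Last-Modified"]
        if lm then
            local born = ngx.parse_http_time(lm)
            if born and (ngx.time() - born) > 24 * 3600 then
                -- older than 24 hours: cache aggressively
                ngx.header["Cache-Control"] = "public, max-age=86400"
            else
                -- fresh resource: short TTL
                ngx.header["Cache-Control"] = "public, max-age=300"
            end
        end
    }
}
```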
I am hoping to be able to find something by starting with the "Last-Modified" value that is returned by the origin server. Regards, Ish Sookun From zchao1995 at gmail.com Sat Sep 9 14:30:50 2017 From: zchao1995 at gmail.com (Zhang Chao) Date: Sat, 9 Sep 2017 10:30:50 -0400 Subject: Conditional based cache control In-Reply-To: <6b425395-7316-7850-40f9-fae013b5ea11@lsl.digital> References: <6b425395-7316-7850-40f9-fae013b5ea11@lsl.digital> Message-ID: Hi! I think you can consider ngx_lua; you can achieve your purpose easily with it. :) On 9 September 2017 at 21:36:06, Ish Sookun (ish at lsl.digital) wrote: Hello, I need to write a config that will set the "Cache-Control" value of a resource based on the age of the latter returned by the origin server. E.g if a resource is older than X hours then the Cache-Control should have a specific value. Anybody ever done that? I am hoping to be able to find something by starting with the "Last-Modified" value that is returned by the origin server. Regards, Ish Sookun _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From ish at lsl.digital Sat Sep 9 18:38:19 2017 From: ish at lsl.digital (Ish Sookun) Date: Sat, 9 Sep 2017 22:38:19 +0400 Subject: Conditional based cache control In-Reply-To: References: <6b425395-7316-7850-40f9-fae013b5ea11@lsl.digital> Message-ID: Hi Zhang, Thank you for the suggestion. I will look into ngx_http_lua_module. Regards, Ish Sookun On 09/09/2017 06:30 PM, Zhang Chao wrote: > Hi! > > I think you can consider the ngx_lua, you can achieve your purpose easily > by this. :) > > > On 9 September 2017 at 21:36:06, Ish Sookun (ish at lsl.digital) wrote: > > Hello, > > I need to write a config that will set the "Cache-Control" value of a > resource based on the age of the latter returned by the origin server.
> E.g if a resource is older than X hours then the Cache-Control should > have a specific value. > > Anybody ever done that? > > I am hoping to be able to find something by starting with the > "Last-Modified" value that is returned by the origin server. > > Regards, > > Ish Sookun > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at forum.nginx.org Mon Sep 11 10:56:56 2017 From: nginx-forum at forum.nginx.org (rgentil) Date: Mon, 11 Sep 2017 06:56:56 -0400 Subject: cookie rename nginx Message-ID: Hi, I am new to this forum and I am also new to working with nginx; I know the basics. I was wondering if someone could help me write a configuration to rename a cookie in an HTTP response header. I have the following Apache configuration for that, but I am unable to convert it to nginx.

ProxyPass http://internal-elb.eu-west-1.elb.amazonaws.com/
ProxyPassReverse http://internal-elb.eu-west-1.elb.amazonaws.com/
# Change from AWSELB cookie to INTELB
Header edit Set-Cookie "AWSELB=([0-9a-zA-Z\-]*);" "INTELB=$1;"
# When the client request arrives with AWSELB cookie name, change it to OLD
RequestHeader edit Cookie "AWSELB=([0-9a-zA-Z\-]*)" "OLD=$1"
# Rename the INTELB to AWSELB again so the reverse proxy can send the request to internal elb
RequestHeader edit Cookie "INTELB=([0-9a-zA-Z\-]*)" "AWSELB=$1"
# Everything works as expected.
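The closest I have managed to sketch in nginx so far is the request-side half below (untested; the variable name $intelb_value is just something I made up). As far as I can tell, stock nginx has no directive that edits the cookie name inside a Set-Cookie response header (proxy_cookie_domain and proxy_cookie_path only rewrite the domain and path), so the "Header edit Set-Cookie" rule seems to need ngx_lua or a similar module.

```nginx
# Untested sketch, request side only: pick the INTELB value the client
# sends back and re-submit it to the upstream under the AWSELB name.
# Note: proxy_set_header here replaces the whole Cookie header, so any
# other cookies the client sends would be dropped.
map $http_cookie $intelb_value {
    default "";
    "~INTELB=(?<v>[0-9a-zA-Z-]+)" $v;
}

server {
    listen 80;

    location / {
        proxy_set_header Cookie "AWSELB=$intelb_value";
        proxy_pass http://internal-elb.eu-west-1.elb.amazonaws.com/;
    }
}
```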
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276319,276319#msg-276319 From nginx-forum at forum.nginx.org Mon Sep 11 13:21:49 2017 From: nginx-forum at forum.nginx.org (tbs) Date: Mon, 11 Sep 2017 09:21:49 -0400 Subject: MP4 module with pseudo streaming + proxy_cache Message-ID: <336bb73cfbb4517e7c7c322c967989ed.NginxMailingListEnglish@forum.nginx.org> Hello, I am trying to set up proxy_cache with proxy_pass while serving mp4 files for my flash player. I am having a hard time getting the mp4 module to work with this combination: it works fine for local files, but with proxy_pass/proxy_cache it doesn't work. I see that CDN providers are using this method successfully, so I wonder if I am missing a step. I found some outdated posts indicating it wouldn't work, but if CDN providers are doing it, I am sure it should work in a newer version of nginx. Anyone? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276322,276322#msg-276322 From arut at nginx.com Mon Sep 11 14:38:53 2017 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 11 Sep 2017 17:38:53 +0300 Subject: MP4 module with pseudo streaming + proxy_cache In-Reply-To: <336bb73cfbb4517e7c7c322c967989ed.NginxMailingListEnglish@forum.nginx.org> References: <336bb73cfbb4517e7c7c322c967989ed.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170911143853.GD1508@Romans-MacBook-Air.local> Hi, On Mon, Sep 11, 2017 at 09:21:49AM -0400, tbs wrote: > Hello, > > I am trying to setup proxy cache with proxy pass and serving mp4 files for > my flash player. i am having hard time getting the mp4 module to work with > this combination, it works fine for local files but proxy pass/cache it > doesn't work The mp4 module currently does not work with the cache. A common workaround is proxy_pass + proxy_store + try_files. However, the first request (which caches the file) will not be processed by the mp4 module anyway. > i see the CDN providers are using this method and working, wondering if i
I searched found some outdated post indicating it wouldnt > work but if CDN providers doing it i am sure it should be working new > version of nginx. > > anyone? > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276322,276322#msg-276322 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From gk at leniwiec.biz Mon Sep 11 14:47:34 2017 From: gk at leniwiec.biz (Grzegorz Kulewski) Date: Mon, 11 Sep 2017 16:47:34 +0200 Subject: Zero/random file module Message-ID: <59B6A206.4000602@leniwiec.biz> Hello, In my company we often want to monitor transfer speeds. To do that we upload 1M-10G zero/random file to a web server and then we set up some monitoring to time the download. Or we do the download by hand during troubleshooting sessions. The downside of this is that we need to upload and keep those files on disk and sometimes disk is a very limited resource. That's why I am wondering if somebody could develop (and include in mainline) a new nginx module that after configuration similar to this: location = /100mb.test { big_file 100M zero; } or: location = /1m.random { big_file 1M random; } would serve such file in chosen location. Of course the quality of random data does not need to be high - we only need something that compresses poorly - so any simple and fast userspace generator should be enough. Thank you in advance. -- Grzegorz Kulewski From mdounin at mdounin.ru Mon Sep 11 15:34:12 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 Sep 2017 18:34:12 +0300 Subject: Zero/random file module In-Reply-To: <59B6A206.4000602@leniwiec.biz> References: <59B6A206.4000602@leniwiec.biz> Message-ID: <20170911153412.GA58595@mdounin.ru> Hello! On Mon, Sep 11, 2017 at 04:47:34PM +0200, Grzegorz Kulewski wrote: > In my company we often want to monitor transfer speeds. 
To do > that we upload 1M-10G zero/random file to a web server and then > we set up some monitoring to time the download. Or we do the > download by hand during troubleshooting sessions. > > The downside of this is that we need to upload and keep those > files on disk and sometimes disk is a very limited resource. Just a side note: on most operating systems you can create sparse files, which do not occupy disk space. For example:

$ truncate -s 10g foo
$ ls -lah foo
-rw-r--r-- 1 mdounin mdounin 10G Sep 11 18:08 foo
$ du -sh foo
 96K foo

This allows creating huge zero-filled files even in very constrained environments. -- Maxim Dounin http://nginx.org/ From crazibri at gmail.com Tue Sep 12 04:29:00 2017 From: crazibri at gmail.com (Brian) Date: Mon, 11 Sep 2017 23:29:00 -0500 Subject: ssl_preread_server_name not extracted Message-ID: I have the following file named test.stream, which is included via nginx.conf: stream { include /etc/nginx/conf.d/*.stream; } The ssl_preread_server_name variable is not being extracted, and I'm running nginx/1.13.5 (via the CentOS 7 nginx repo). Any idea what's going on here? tcpdump shows the SNI field.
nginx -V nginx version: nginx/1.13.5 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS SNI support enabled map $ssl_preread_server_name $name { cm.example.com cm; ut.example.com ut; } upstream ut { server 10.0.0.76:9000; } upstream cm { server 10.0.0.61:9000; } log_format stream_routing '$remote_addr [$time_local] ' 'with SNI name "$ssl_preread_server_name" ' 'proxying to "$name" ' '$protocol $status $bytes_sent $bytes_received ' '$session_time'; server { listen 443 ssl; #Certificate & Key .PEM Format ssl_certificate /etc/ssl/certs/internal_back.crt; ssl_certificate_key /etc/ssl/certs/internal_back.key; #CIPHERS include /etc/nginx/conf.d/tcp.common; proxy_pass $name; ssl_preread on; access_log /var/log/nginx/stream.log stream_routing; error_log /var/log/nginx/stream-error.log debug; } stream.log shows: 107.0.0.186 [11/Sep/2017:20:30:22 -0700] with SNI name "" proxying to "" TCP 500 0 0 0.066 107.0.0.186 [11/Sep/2017:20:30:22 -0700] with SNI name "" proxying to "" TCP 500 0 0 0.048 Thank you, Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Tue Sep 12 09:40:34 2017 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 12 Sep 2017 12:40:34 +0300 Subject: ssl_preread_server_name not extracted In-Reply-To: References: Message-ID: <13A0EEF9-9591-447D-AEFE-43D9C3D65AC2@nginx.com> > On 12 Sep 2017, at 07:29, Brian wrote: > > I have the following file named test.stream which is being included via nginx.conf stream { include /etc/nginx/conf.d/*.stream; } > > the ssl_preread_server_name variable is not being extracted and I?m running Nginx/1.13.5 (via centos 7 nginx repo). Any idea whats going on here? tcpdump shows the SNI field. 
> > nginx -V > nginx version: nginx/1.13.5 > built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) > built with OpenSSL 1.0.1e-fips 11 Feb 2013 > TLS SNI support enabled > > > map $ssl_preread_server_name $name { > cm.example.com cm; > ut.example.com ut; > } > upstream ut { > server 10.0.0.76:9000; > } > upstream cm { > server 10.0.0.61:9000; > } > > log_format stream_routing '$remote_addr [$time_local] ' > 'with SNI name "$ssl_preread_server_name" ' > 'proxying to "$name" ' > '$protocol $status $bytes_sent $bytes_received ' > '$session_time'; > > server { > listen 443 ssl; > > #Certificate & Key .PEM Format > ssl_certificate /etc/ssl/certs/internal_back.crt; > ssl_certificate_key /etc/ssl/certs/internal_back.key; > #CIPHERS > include /etc/nginx/conf.d/tcp.common; > > proxy_pass $name; > ssl_preread on; > access_log /var/log/nginx/stream.log stream_routing; > error_log /var/log/nginx/stream-error.log debug; > } > > This is not going to work. ssl_preread isn't designed to work with an SSL-terminated connection, as shown in your snippet; i.e. it won't work with "listen .. ssl", since it would then parse SSL/TLS Application Data, not the Client Hello. See for details: https://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html OTOH, once SSL is terminated, you may use the $ssl_server_name variable: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#var_ssl_server_name You could also exclude map{} by using $ssl_server_name in proxy_pass.
: upstream cm.example.com {
: server 10.0.0.61:9000;
: }
: upstream ut.example.com {
: server 10.0.0.76:9000;
: }
: server {
: listen 443 ssl;
:
: proxy_pass $ssl_server_name;
: }

The above simplification works with $ssl_preread_server_name as well:

: upstream cm.example.com {
: server 10.0.0.61:9000;
: }
: upstream ut.example.com {
: server 10.0.0.76:9000;
: }
: server {
: listen 443;
:
: proxy_pass $ssl_preread_server_name;
: }

OTOH, you may still want a map{} to provide a default value if the client didn't send SNI, e.g.:

: map $ssl_preread_server_name $name {
: "" default.fallback.value;
: default $ssl_preread_server_name;
: }

-- Sergey Kandaurov From nginx-forum at forum.nginx.org Tue Sep 12 12:56:43 2017 From: nginx-forum at forum.nginx.org (tbs) Date: Tue, 12 Sep 2017 08:56:43 -0400 Subject: MP4 module with pseudo streaming + proxy_cache In-Reply-To: <20170911143853.GD1508@Romans-MacBook-Air.local> References: <20170911143853.GD1508@Romans-MacBook-Air.local> Message-ID: <3b1ac21ecdb56326f6921705313a03ea.NginxMailingListEnglish@forum.nginx.org> Thanks for your comment, Roman. Do you know how these guys did it?
> > https://www.maxcdn.com/one/tutorial/pseudo-streaming-maxcdn/ Based on what is written on this page, they have their own module which behaves similarly to the standard nginx slice module. Besides, it looks like they heavily patched nginx (at least the mp4 module). -- Roman Arutyunyan From nginx-forum at forum.nginx.org Tue Sep 12 14:08:44 2017 From: nginx-forum at forum.nginx.org (tbs) Date: Tue, 12 Sep 2017 10:08:44 -0400 Subject: MP4 module with pseudo streaming + proxy_cache In-Reply-To: <336bb73cfbb4517e7c7c322c967989ed.NginxMailingListEnglish@forum.nginx.org> References: <336bb73cfbb4517e7c7c322c967989ed.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5d72bbcba05158aa324f3ef8ce870a6a.NginxMailingListEnglish@forum.nginx.org> Is it too much to ask for nginx to implement this feature if others can do it via their own developers? I couldn't find developers who are familiar with this; I looked already. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276322,276334#msg-276334 From lucas at slcoding.com Tue Sep 12 15:21:59 2017 From: lucas at slcoding.com (Lucas Rolff) Date: Tue, 12 Sep 2017 17:21:59 +0200 Subject: MP4 module with pseudo streaming + proxy_cache In-Reply-To: <5d72bbcba05158aa324f3ef8ce870a6a.NginxMailingListEnglish@forum.nginx.org> References: <336bb73cfbb4517e7c7c322c967989ed.NginxMailingListEnglish@forum.nginx.org> <5d72bbcba05158aa324f3ef8ce870a6a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <59B7FB97.6090106@slcoding.com> > is it too much to ask for nginx to implement this It depends on what you want to get implemented. You can just have a location block in nginx handling mp4 and then use the slice module, as Roman already mentioned; this will cause the initial chunk (which contains the MOOV atom) to be loaded pretty quickly even for big files, and thus enable pseudo streaming rather quickly (if the mp4 is not encoded with the MOOV atom at the end, which happens in so many cases).
The only problem you'll have can be invalidating the cache of a file if you use the slice module, since you basically have to calculate every cache entry that you want to remove from the cache (starts from 0 and increments the number of bytes that you've set in the slice size), the only thing making it hard is the very last slice since this will be equal or less than your slice size. So you can do it already out of the box using the slice module. Now, sure it would be nice for the mp4 module to support pseudo streaming for files that are not yet in the cache - this however requires nginx to be aware of where to seek in a file that is not yet on the filesystem - it can be done, but I don't think it's super pretty. tbs wrote: > is it too much to ask for nginx to implement this feature if others can do > it via their own developers? > > i couldn't find developer that are familiar with this, i looked already. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276322,276334#msg-276334 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From arut at nginx.com Tue Sep 12 15:33:24 2017 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 12 Sep 2017 18:33:24 +0300 Subject: MP4 module with pseudo streaming + proxy_cache In-Reply-To: <59B7FB97.6090106@slcoding.com> References: <336bb73cfbb4517e7c7c322c967989ed.NginxMailingListEnglish@forum.nginx.org> <5d72bbcba05158aa324f3ef8ce870a6a.NginxMailingListEnglish@forum.nginx.org> <59B7FB97.6090106@slcoding.com> Message-ID: <20170912153324.GF1508@Romans-MacBook-Air.local> On Tue, Sep 12, 2017 at 05:21:59PM +0200, Lucas Rolff wrote: > > is it too much to ask for nginx to implement this > > It depends on what you want to get implemented. 
> > You can just have a location block in nginx handing mp4 and then using the > slice module as Roman already mentioned, this will cause the initial chunk > (which contains the MOOV atom) to be loaded pretty quickly even for big > files, and thus enable pseudo streaming rather quickly (if the mp4 is not > encoded with the MOOV atom in the end which happens in so many cases). > > The only problem you'll have can be invalidating the cache of a file if you > use the slice module, since you basically have to calculate every cache > entry that you want to remove from the cache (starts from 0 and increments > the number of bytes that you've set in the slice size), the only thing > making it hard is the very last slice since this will be equal or less than > your slice size. > > So you can do it already out of the box using the slice module. > > Now, sure it would be nice for the mp4 module to support pseudo streaming > for files that are not yet in the cache - this however requires nginx to be > aware of where to seek in a file that is not yet on the filesystem - it can > be done, but I don't think it's super pretty. In fact, we came up with a pretty solution long ago. The mp4 module should be a filter. But reimplementing it does not seem to be easy. > tbs wrote: > >is it too much to ask for nginx to implement this feature if others can do > >it via their own developers? > > > >i couldn't find developer that are familiar with this, i looked already. 
> > > >Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276322,276334#msg-276334 > > > >_______________________________________________ > >nginx mailing list > >nginx at nginx.org > >http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From siefke_listen at web.de Tue Sep 12 20:36:07 2017 From: siefke_listen at web.de (siefke_listen at web.de) Date: Tue, 12 Sep 2017 22:36:07 +0200 Subject: add headers / gixy Message-ID: <20170912223607.dbd3d0185dd4aa05ddc20d22@web.de> Hi, I came across a blog article discussing a few add_header statements. I ran a few online tests, and the server seems to consistently ignore all of the add_header settings. I found the tool Gixy and got the same result. Now I am asking myself: how do I set the add_header directives correctly? Thank you for your help Silvio ---- # gixy /etc/nginx/nginx.conf ==================== Results =================== >> Problem: [add_header_redefinition] Nested "add_header" drops parent headers. Description: "add_header" replaces ALL parent headers.
See documentation: http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header Additional info: https://github.com/yandex/gixy/blob/master/docs/en/plugins/addheaderredefinition.md Reason: Parent headers "x-frame-options", "x-xss-protection", "x-content-type-options" was dropped in current level Pseudo config:

include /etc/nginx/sites-enabled/silviosiefke.de.conf;

server {
    server_name silviosiefke.de www.silviosiefke.de;
    add_header Referrer-Policy no-referrer;
    add_header X-Frame-Options SAMEORIGIN always;
    add_header X-Content-Type-Options nosniff always;
    add_header X-XSS-Protection 1; mode=block always;
    add_header Strict-Transport-Security max-age=31536000 always;
    add_header Cache-Control no-transform;
    include /etc/nginx/inc/basic.conf;
    include /etc/nginx/inc/location/expires.conf;

    location ~* \.(?:manifest|appcache|html?|xml|json)$ {
        add_header Cache-Control max-age=0;
    }
    location ~* \.(?:rss|atom)$ {
        add_header Cache-Control max-age=3600;
    }
    location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|mp4|ogg|ogv|webm|htc)$ {
        add_header Cache-Control max-age=2592000;
    }
    location ~* \.svgz$ {
        add_header Cache-Control max-age=2592000;
    }
    location ~* \.(?:css|js)$ {
        add_header Cache-Control max-age=31536000;
    }
    include /etc/nginx/inc/location/cross-domain-fonts.conf;
    location ~* \.(?:ttf|ttc|otf|eot|woff|woff2)$ {
        add_header Cache-Control max-age=2592000;
    }
}

-- Silvio Siefke -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From zchao1995 at gmail.com Wed Sep 13 02:03:46 2017 From: zchao1995 at gmail.com (Zhang Chao) Date: Tue, 12 Sep 2017 22:03:46 -0400 Subject: add headers / gixy In-Reply-To: <20170912223607.dbd3d0185dd4aa05ddc20d22@web.de> References: <20170912223607.dbd3d0185dd4aa05ddc20d22@web.de> Message-ID: Hi! Here is the description of add_header; it may help you.
> Adds the specified field to a response header provided that the response code equals 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308. The value can contain variables. > There could be several add_header directives. These directives are inherited from the previous level if and only if there are no add_header directives defined on the current level. http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header On 13 September 2017 at 04:36:40, siefke_listen at web.de (siefke_listen at web.de) wrote: Hi, I've encountered a blog article on a few add header statements. I had done a few online tests and it seems to be consistently ignoring all add header specs. I found the tool Gixy and here the same result. Now I ask me how do I set the Add header instructions correctly? Thank you for help Silvio ---- # gixy /etc/nginx/nginx.conf ==================== Results =================== >> Problem: [add_header_redefinition] Nested "add_header" drops parent headers. Description: "add_header" replaces ALL parent headers. 
See documentation: http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header Additional info: https://github.com/yandex/gixy/blob/master/docs/en/plugins/addheaderredefinition.md Reason: Parent headers "x-frame-options", "x-xss-protection", "x-content-type-options" was dropped in current level Pseudo config: include /etc/nginx/sites-enabled/silviosiefke.de.conf; server { server_name silviosiefke.de www.silviosiefke.de; add_header Referrer-Policy no-referrer; add_header X-Frame-Options SAMEORIGIN always; add_header X-Content-Type-Options nosniff always; add_header X-XSS-Protection 1; mode=block always; add_header Strict-Transport-Security max-age=31536000 always; add_header Cache-Control no-transform; include /etc/nginx/inc/basic.conf; include /etc/nginx/inc/location/expires.conf; location ~* \.(?:manifest|appcache|html?|xml|json)$ { add_header Cache-Control max-age=0; } location ~* \.(?:rss|atom)$ { add_header Cache-Control max-age=3600; } location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|mp4|ogg|ogv|webm|htc)$ { add_header Cache-Control max-age=2592000; } location ~* \.svgz$ { add_header Cache-Control max-age=2592000; } location ~* \.(?:css|js)$ { add_header Cache-Control max-age=31536000; } include /etc/nginx/inc/location/cross-domain-fonts.conf; location ~* \.(?:ttf|ttc|otf|eot|woff|woff2)$ { add_header Cache-Control max-age=2592000; } } -- Silvio Siefke _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
From luky-37 at hotmail.com Wed Sep 13 08:45:07 2017 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 13 Sep 2017 08:45:07 +0000 Subject: AW: MP4 module with pseudo streaming + proxy_cache In-Reply-To: <3b1ac21ecdb56326f6921705313a03ea.NginxMailingListEnglish@forum.nginx.org> References: <20170911143853.GD1508@Romans-MacBook-Air.local>, <3b1ac21ecdb56326f6921705313a03ea.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, > thanks for your comment Roman, do you know how these guys did it? > https://www.maxcdn.com/one/tutorial/pseudo-streaming-maxcdn/ Why is pseudo streaming still a thing? With HTML5 video players, everything is handled with RFC-compliant range requests, and HTML5 video should be supported by pretty much every browser in a stable release [1]. On Linux, you need to install a codec like ffmpeg, but that still beats Flash. I suggest you invest your time in HTML5 video and current technologies, instead of the obsolete Adobe Flash with its URI-based pseudo streaming. cheers, lukas [1] https://en.wikipedia.org/wiki/HTML5_video#Browser_support From nginx-forum at forum.nginx.org Wed Sep 13 10:07:34 2017 From: nginx-forum at forum.nginx.org (sivasara) Date: Wed, 13 Sep 2017 06:07:34 -0400 Subject: proxy cache lock responses always has 500ms delay Message-ID: Greetings everybody, I have the following config. I issue 3 simultaneous requests: 1 goes back to the upstream and the other 2 wait on proxy_cache_lock. After the first request completes, I am always seeing a 500ms delay on the proxy_cache_locked requests. Is this expected behavior, or am I missing something? Any help would be appreciated.
-nginx config: user nginx; worker_processes 1; error_log logs/error.log; pid /var/run/nginx.pid; events {} http { proxy_cache_path /tmp/local_cache keys_zone=local_cache:250m levels=1:2 inactive=8s; proxy_temp_path /tmp/proxy_temp; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for" ' '$upstream_addr $upstream_response_time $request_time $upstream_cache_status'; access_log logs/access.log main; server { listen 80; server_name localhost; location / { proxy_cache local_cache; proxy_cache_valid 200 1m; proxy_cache_lock on; proxy_pass http://192.168.126.22:9095; } } } ab -n 4 -c 3 http://192.168.126.22/test access.log: 192.168.126.22 - - [13/Sep/2017:15:14:36 +0530] "GET /test HTTP/1.0" 200 12 "-" "ApacheBench/2.3" "-" 192.168.126.22:9095 0.003 0.004 MISS 192.168.126.22 - - [13/Sep/2017:15:14:36 +0530] "GET /test HTTP/1.0" 200 12 "-" "ApacheBench/2.3" "-" - - 0.000 HIT 192.168.126.22 - - [13/Sep/2017:15:14:36 +0530] "GET /test HTTP/1.0" 200 12 "-" "ApacheBench/2.3" "-" - - 0.502 HIT 192.168.126.22 - - [13/Sep/2017:15:14:36 +0530] "GET /test HTTP/1.0" 200 12 "-" "ApacheBench/2.3" "-" - - 0.502 HIT upstream: curl http://192.168.126.22:9095/test Hello world! 
nginx: nginx -V nginx version: nginx/1.10.2 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_perl_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-google_perftools_module --with-debug --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic' --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E' Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276344,276344#msg-276344 From arut at nginx.com Wed Sep 13 11:38:22 2017 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 13 Sep 2017 14:38:22 +0300 Subject: proxy cache lock 
responses always has 500ms delay In-Reply-To: References: Message-ID: <20170913113822.GG1508@Romans-MacBook-Air.local> Hello, On Wed, Sep 13, 2017 at 06:07:34AM -0400, sivasara wrote: > Greetings everbody, > > I have the following config. I give 3 simulatneous requests and 1 goes back > to the upstream and the 2 of them are in proxy_cache_lock. After the first > request completes, I am always seeing 500ms delay with proxy_cache_locked > requests. Is this expected behavior or am i missing something. > Any help would be appreciated. Yes, this is the expected behavior. Each proxy_cache_locked request waits for cache entry to be unlocked by 500ms intervals. If you're unlucky, you'll get additional near-500ms delay for locked requests. > -nginx config: > user nginx; > worker_processes 1; > error_log logs/error.log; > pid /var/run/nginx.pid; > events {} > http { > proxy_cache_path /tmp/local_cache keys_zone=local_cache:250m levels=1:2 > inactive=8s; > proxy_temp_path /tmp/proxy_temp; > > log_format main '$remote_addr - $remote_user [$time_local] "$request" > ' > '$status $body_bytes_sent "$http_referer" ' > '"$http_user_agent" "$http_x_forwarded_for" ' > '$upstream_addr $upstream_response_time $request_time > $upstream_cache_status'; > > access_log logs/access.log main; > server { > listen 80; > server_name localhost; > > location / { > proxy_cache local_cache; > proxy_cache_valid 200 1m; > proxy_cache_lock on; > proxy_pass http://192.168.126.22:9095; > } > > } > } > > ab -n 4 -c 3 http://192.168.126.22/test > access.log: > 192.168.126.22 - - [13/Sep/2017:15:14:36 +0530] "GET /test HTTP/1.0" 200 12 > "-" "ApacheBench/2.3" "-" 192.168.126.22:9095 0.003 0.004 MISS > 192.168.126.22 - - [13/Sep/2017:15:14:36 +0530] "GET /test HTTP/1.0" 200 12 > "-" "ApacheBench/2.3" "-" - - 0.000 HIT > 192.168.126.22 - - [13/Sep/2017:15:14:36 +0530] "GET /test HTTP/1.0" 200 12 > "-" "ApacheBench/2.3" "-" - - 0.502 HIT > 192.168.126.22 - - [13/Sep/2017:15:14:36 +0530] "GET /test HTTP/1.0" 200 
12 > "-" "ApacheBench/2.3" "-" - - 0.502 HIT > > upstream: > curl http://192.168.126.22:9095/test > Hello world! > > nginx: > nginx -V > nginx version: nginx/1.10.2 > built by gcc 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) > built with OpenSSL 1.0.1e-fips 11 Feb 2013 > TLS SNI support enabled > configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx > --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log > --http-client-body-temp-path=/var/lib/nginx/tmp/client_body > --http-proxy-temp-path=/var/lib/nginx/tmp/proxy > --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi > --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi > --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/run/nginx.pid > --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx > --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_v2_module > --with-http_realip_module --with-http_addition_module > --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic > --with-http_geoip_module=dynamic --with-http_sub_module > --with-http_dav_module --with-http_flv_module --with-http_mp4_module > --with-http_gunzip_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_degradation_module --with-http_slice_module > --with-http_stub_status_module --with-http_perl_module=dynamic > --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit > --with-stream=dynamic --with-stream_ssl_module > --with-google_perftools_module --with-debug --with-cc-opt='-O2 -g -pipe > -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong > --param=ssp-buffer-size=4 -grecord-gcc-switches > -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic' > --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld > -Wl,-E' > > Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,276344,276344#msg-276344 -- Roman Arutyunyan From nginx-forum at forum.nginx.org Wed Sep 13 11:47:25 2017 From: nginx-forum at forum.nginx.org (sivasara) Date: Wed, 13 Sep 2017 07:47:25 -0400 Subject: proxy cache lock responses always has 500ms delay In-Reply-To: <20170913113822.GG1508@Romans-MacBook-Air.local> References: <20170913113822.GG1508@Romans-MacBook-Air.local> Message-ID: <58599b6736ccbc285ab601b399de178a.NginxMailingListEnglish@forum.nginx.org> Ah.. thanks for the reply. 500ms seems too large. Is there any way to decrease this wait time? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276344,276347#msg-276347 From nginx-forum at forum.nginx.org Wed Sep 13 12:05:14 2017 From: nginx-forum at forum.nginx.org (anish10dec) Date: Wed, 13 Sep 2017 08:05:14 -0400 Subject: Secure Link Md5 Implementation In-Reply-To: References: Message-ID: <0a62900aeb5e807bd16b864d9e7ae6e0.NginxMailingListEnglish@forum.nginx.org> Any update, please? How can we use two secret keys with secure_link_md5? The primary key is used by the application build already in production, and the secondary key by the new application build that has been rolled out with the changed secret.
So the application should work in both scenarios in the meantime, until all users have updated to the new build. Please help. Inside a location or server block: secure_link $arg_tok,$arg_e; secure_link_md5 "primarysecret$arg_tok$arg_e"; secure_link_md5 "secondarysecret$arg_tok$arg_e"; if ($secure_link = "") {return 405;} if ($secure_link = "0"){return 410;} This gives an error because secure_link_md5 is used twice within one location block. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275668,276348#msg-276348 From arut at nginx.com Wed Sep 13 12:09:55 2017 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 13 Sep 2017 15:09:55 +0300 Subject: proxy cache lock responses always has 500ms delay In-Reply-To: <58599b6736ccbc285ab601b399de178a.NginxMailingListEnglish@forum.nginx.org> References: <20170913113822.GG1508@Romans-MacBook-Air.local> <58599b6736ccbc285ab601b399de178a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170913120955.GH1508@Romans-MacBook-Air.local> On Wed, Sep 13, 2017 at 07:47:25AM -0400, sivasara wrote: > Ah.. thanks for the reply. > 500ms seems too large. Is there any way to decrease this wait time? Currently there is no way to change the 500ms interval to a different value. What you can do is reduce proxy_cache_lock_timeout (5s by default) to make the requests which wait longer than that proceed with uncached proxying. Once this timeout expires, the request gets one last chance to find the resource already unlocked.
If the upstream response times are contained within proxy_cache_lock_timeout, this should work perfectly. Thank you for the help. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276344,276350#msg-276350 From fares.jaradeh at nuance.com Wed Sep 13 20:25:26 2017 From: fares.jaradeh at nuance.com (Jaradeh, Fares) Date: Wed, 13 Sep 2017 20:25:26 +0000 Subject: Support Request for Preserving Proxied server original Chunk sizes In-Reply-To: References: Message-ID: Hello, I am contacting you for support regarding https://stackoverflow.com/questions/46165415/configure-nginx-proxy-to-preserve-the-chunk-size-sent-from-the-proxied-backend-s While trying to use the nginx ingress controller in Kubernetes, we ran into an issue related to the behavior of the nginx proxy. The proxied backend server (our NGW) returns audio to the client as multipart HTTP chunks, and some of our clients (already in the field) have so far expected every single HTTP chunk to contain a full part of an audio segment, i.e. a single audio segment may not be broken across several HTTP chunks. For our use case, we had to set proxy_buffering off so that nginx sends back results as soon as possible and does not increase the CPL. But as a side effect, we noticed that nginx does NOT try to preserve the chunk sizes sent back by the proxied server (NGW), which causes clients in the field to break. So a single audio part that the NGW sends back as one HTTP chunk of size 24838 may be broken into 5 or 6 chunks returned by the nginx proxy to the client, with the total size amounting to the same thing. We suspect this is due to the speed of reading/writing the responses on nginx: it may read a single chunk in separate calls (for example 12000 bytes, then another 11000 bytes, then another 1838 bytes), and because buffering is off, these pieces are sent back as independent chunks to the client.
We understand that the behavior of nginx is fully compliant with HTTP chunking, and that the client app should really be implemented so that it does not assume a coupling between chunks and full audio segment parts. BUT customer apps already in the field cannot be helped, and enabling proxy_buffering on is not an option, as it delays responses until ALL the audio segments are ready (from all chunks) and breaks the one-chunk-to-one-audio-segment rule as well. Could you please advise if there is any way to control the nginx buffering so that it buffers per chunk, i.e. does not send back received data until a full HTTP chunk has been read? Thank you for your support Regards Fares From francis at daoine.org Wed Sep 13 21:38:09 2017 From: francis at daoine.org (Francis Daly) Date: Wed, 13 Sep 2017 22:38:09 +0100 Subject: Secure Link Md5 Implementation In-Reply-To: <0a62900aeb5e807bd16b864d9e7ae6e0.NginxMailingListEnglish@forum.nginx.org> References: <0a62900aeb5e807bd16b864d9e7ae6e0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170913213809.GI20907@daoine.org> On Wed, Sep 13, 2017 at 08:05:14AM -0400, anish10dec wrote: Hi there, > How to use two secret Keys for Secure Link Md5. The stock nginx secure_link module does not support multiple/alternate keys. I have not tested it, but I guess that in your primary 'if ($secure_link = "") {' section you could "rewrite ^ /ex/$uri;" and then have a new location{} that matches those requests and uses the secondary secret key in its config. (Or you could rewrite the module to do what you want; but that is probably much more work.)
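An untested sketch of that rewrite idea, reusing the directives from the question (the /ex/ prefix, return codes, and alias path are placeholders):

```nginx
location / {
    secure_link $arg_tok,$arg_e;
    secure_link_md5 "primarysecret$arg_tok$arg_e";
    # Primary key failed: retry the same URI under /ex/ with the
    # secondary key instead of rejecting outright.
    if ($secure_link = "") { rewrite ^ /ex$uri last; }
    if ($secure_link = "0") { return 410; }
}

location /ex/ {
    internal;   # only reachable via the internal rewrite above
    secure_link $arg_tok,$arg_e;
    secure_link_md5 "secondarysecret$arg_tok$arg_e";
    if ($secure_link = "") { return 405; }
    if ($secure_link = "0") { return 410; }
    # Serve the real file by stripping the /ex prefix again.
    alias /var/www/html/;
}
```

The query arguments survive the rewrite, so $arg_tok and $arg_e are evaluated again in the second location with the other key.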
Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Sep 14 01:19:49 2017 From: nginx-forum at forum.nginx.org (rnburn) Date: Wed, 13 Sep 2017 21:19:49 -0400 Subject: OpenTracing Module Message-ID: Hello Everyone, I've been working on an NGINX module [github.com/rnburn/nginx-opentracing] to add support for OpenTracing [http://opentracing.io]. It uses OpenTracing's C++ API [http://github.com/opentracing/opentracing-cpp] and attaches handlers to the NGX_HTTP_PREACCESS_PHASE and NGX_HTTP_LOG_PHASE phases to start/stop spans that track the requests handled. It currently supports LightStep's C++ tracer [github.com/lightstep/lightstep-tracer-cpp] and a C++ version of Zipkin's tracer [github.com/rnburn/zipkin-cpp-opentracing]. I put together a simple example [github.com/rnburn/nginx-opentracing/tree/master/example/trivial] that shows it interoperating with a Go server traced with Zipkin, and wrote up a description of a more complicated example [github.com/rnburn/nginx-opentracing/blob/master/doc/Tutorial.md] showing it working with a Node server using LightStep. If anyone would like to try it or has any feedback, let me know. Ryan Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276370,276370#msg-276370 From junaid.malik at confiz.com Thu Sep 14 11:06:30 2017 From: junaid.malik at confiz.com (Junaid Malik) Date: Thu, 14 Sep 2017 11:06:30 +0000 Subject: Http2 enabled on all virtual host settings automatically Message-ID: Hello Guys, We recently upgraded nginx from nginx/1.9.12 to nginx/1.13.2; details of the nginx/1.13.2 build are given below. We are facing a problem where HTTP/2 is automatically enabled on bsa1.example.com even though we only enabled http2 on dsa1.example.com. The nginx configurations of both sites are given below.
Supported urls of different Nginx configurations are given below respectively 1 - https://dsa1.example.com/forums/user_avatar/www.example.com/cooltahir/25/1497380_1.png 2 - https://bsa1.example.com/blog/wp-content/plugins/ultimate-responsive-image-slider/css/slider-pro.css?ver=4.6.1 Site to verify Http2 protocol https://tools.keycdn.com/http2-test ---------------------------------------------- nginx version ---------------------------------------------- nginx version: nginx/1.13.2 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) built with OpenSSL 1.0.2k 26 Jan 2017 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-openssl=openssl-1.0.2k --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --with-ld-opt= 
----------------------------------- dsa.conf ----------------------------------- proxy_cache_path /var/www/example_dsa/ levels=2:2:2 keys_zone=pakwheels-dsa:50m max_size=300m inactive=525600m loader_files=400; server { listen 80; listen 443 ssl http2; # Enable SSL #ssl_certificate /etc/nginx/certs/pakwheels_with_subdomains.pem; #ssl_certificate_key /etc/nginx/certs/example_with_subdomains.key; ssl_certificate /etc/nginx/certs/pakwheels_with_subdomains_renew_28_august.pem; ssl_certificate_key /etc/nginx/certs/example_with_subdomains_renew_28_august.key; ssl_session_timeout 10m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES; ssl_prefer_server_ciphers on; server_name dsa1.example.com dsa2.example.com dsa3.example.com dsa4.example.com; rewrite ^/forums/forums/(.*)$ /forums/$1 permanent; location / { gzip on; gzip_min_length 100; gzip_types text/plain text/xml application/xml text/css text/javascript application/javascript application/x-javascript text/x-component application/json application/xhtml+xml application/rss+xml application/atom+xml application/vnd.ms-fontobject image/svg+xml application/x-font-ttf font/opentype application/octet-stream; gzip_comp_level 1; gzip_disable "MSIE [1-6]\."; expires 12M; # ProxySettings proxy_cache_lock off; proxy_set_header Accept-Encoding ""; add_header X-Cache $upstream_cache_status; add_header 'Access-Control-Allow-Origin' '*'; #proxy_ignore_headers Vary; proxy_ignore_headers Set-Cookie; resolver 213.133.100.100 213.133.99.99 213.133.98.98; set $backend www.example.com; proxy_pass https://$backend$request_uri; #proxy_set_header Authorization "Basic cGFrYm9hcmQ6M3YzbnR1cjNzMDA3"; #proxy_pass_header Authorization; proxy_pass_header P3P; proxy_cache_min_uses 1; proxy_cache pakwheels-dsa; proxy_cache_valid 200 365d; proxy_cache_valid any 2s; proxy_cache_key pwstatic.pakwheels0""""$uri$is_args$args; proxy_intercept_errors on; error_page 403 = @no_image; error_page 404 = @no_image; 
error_page 400 = @no_image; proxy_hide_header x-amz-id-2; proxy_hide_header x-amz-request-id; # END ProxySettings } location @no_image { return 404 ''; add_header Content-Type text/plain; } # Only for nginx-naxsi : process denied requests #location /RequestDenied { # For example, return an error code #return 418; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/www; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} location /status { # Turn on nginx stats stub_status on; # I do not need logs for stats access_log off; # Security: Only allow access from 192.168.1.100 IP # allow 127.0.0.1; allow 148.251.76.7; # Send rest of the world to /dev/null # deny all; } } bsa -------------------------------------------------------------------- proxy_cache_path /var/www/example_bsa/ levels=2:2:2 keys_zone=pakwheels-bsa:50m max_size=1000m inactive=525600m loader_files=400; server { listen 80; listen 443 ssl; # Enable SSL #ssl_certificate /etc/nginx/certs/pakwheels_with_subdomains.pem; #ssl_certificate_key /etc/nginx/certs/example_with_subdomains.key; ssl_certificate /etc/nginx/certs/pakwheels_with_subdomains_renew_28_august.pem; ssl_certificate_key /etc/nginx/certs/example_with_subdomains_renew_28_august.key; ssl_session_cache shared:SSL:200m; ssl_buffer_size 8k; ssl_session_timeout 1440m; #ssl_session_tickets off; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES; ssl_prefer_server_ciphers on; server_name bsa1.example.com bsa2.example.com bsa3.example.com bsa4.example.com; location / { gzip on; gzip_min_length 100; gzip_types text/plain text/xml application/xml text/css text/javascript application/javascript application/x-javascript text/x-component 
application/json application/xhtml+xml application/rss+xml application/atom+xml application/vnd.ms-fontobject image/svg+xml application/x-font-ttf font/opentype application/octet-stream; gzip_comp_level 1; gzip_disable "MSIE [1-6]\."; expires 12M; # ProxySettings proxy_cache_lock off; proxy_set_header Accept-Encoding ""; add_header X-Cache $upstream_cache_status; #proxy_ignore_headers Vary; proxy_ignore_headers Set-Cookie; resolver 213.133.100.100 213.133.99.99 213.133.98.98; set $backend staticn.example.com; proxy_pass https://$backend$request_uri; #proxy_set_header Authorization "Basic cGFrYm9hcmQ6M3YzbnR1cjNzMDA3"; #proxy_pass_header Authorization; proxy_pass_header P3P; proxy_cache_min_uses 1; proxy_cache pakwheels-bsa; proxy_cache_valid 200 365d; proxy_cache_valid any 2s; proxy_cache_key pwstatic.pakwheels0""""$uri$is_args$args; proxy_intercept_errors on; error_page 403 = @no_image; error_page 404 = @no_image; error_page 400 = @no_image; proxy_hide_header x-amz-id-2; proxy_hide_header x-amz-request-id; # END ProxySettings } location @no_image { return 404 ''; add_header Content-Type text/plain; } # Only for nginx-naxsi : process denied requests #location /RequestDenied { # For example, return an error code #return 418; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/www; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} location /status { # Turn on nginx stats stub_status on; # I do not need logs for stats #access_log off; # Security: Only allow access from 192.168.1.100 IP # #allow 127.0.0.1; # Send rest of the world to /dev/null # allow 88.99.211.10; deny all; } } Regads, Junaid -------------- next part -------------- An HTML attachment was scrubbed... 
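A minimal sketch of the situation in the two configs above (hostnames shortened, certificates omitted): both server blocks bind the same *:443 address, and the http2 parameter of "listen" is a property of that shared listening socket, not of one virtual host:

```nginx
# Both server blocks share the same 0.0.0.0:443 listening socket.
server {
    listen 443 ssl http2;           # http2 applies to the socket...
    server_name dsa1.example.com;
}

server {
    listen 443 ssl;                 # ...so this vhost is also served over
    server_name bsa1.example.com;   # HTTP/2, despite no http2 flag here.
}
```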
From mdounin at mdounin.ru Thu Sep 14 13:06:36 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 14 Sep 2017 16:06:36 +0300 Subject: Http2 enabled on all virtual host settings automatically In-Reply-To: References: Message-ID: <20170914130635.GO58595@mdounin.ru> Hello! On Thu, Sep 14, 2017 at 11:06:30AM +0000, Junaid Malik wrote: > We recently upgraded Nginx from nginx/1.9.12 - nginx/1.13.2, > details of nginx/1.13.2 supported modules are given below. We > are facing problem of automatic enabling of HTTP2 protocol on > bsa1.example.com as we only enabled http2 on dsa1.example.com. > Nginx configurations of both sites are given below. HTTP/2 is enabled on a listening socket, not in a particular server block. Quoting http://nginx.org/r/listen: : The http2 parameter (1.9.5) configures the port to accept HTTP/2 connections. : Normally, for this to work the ssl parameter should be specified as well, but : nginx can also be configured to accept HTTP/2 connections without SSL. -- Maxim Dounin http://nginx.org/ From tseveendorj at gmail.com Thu Sep 14 15:54:23 2017 From: tseveendorj at gmail.com (tseveendorj) Date: Thu, 14 Sep 2017 23:54:23 +0800 Subject: Too many redirects Message-ID: <6a123c32-9b43-2a22-df50-d39296276bc0@gmail.com> Hi, I configured http (www, non-www) to https (non-www) and https (www) to https (non-www). It is working fine. But when I add a geoip redirection in location /, I get "too many redirects". This server is behind a load balancer; the LB redirects 80 to 81 and 443 to 80. server { listen 81; server_name www.example.com example.com; return 301 https://example.com$request_uri; } server { listen 80; server_name www.example.com; return 301 https://example.com$request_uri; } server { listen 80; ## listen for ipv4; this line is default and implied server_name example.com; ...
} I added the geoip redirection in nginx.conf: geoip_country /usr/share/GeoIP/GeoIP.dat; sites-enabled/example.com: location / { # AWS load balancer access log off if ($ignore_ua) { access_log off; return 200; } index index.html; if ($geoip_country_code != "JP") { return 301 https://example.com/en/; } } location = /en/ { index index.html; try_files $uri $uri/ =404; } What I am trying to do: if the request comes from outside JP, send it to /en/; otherwise serve /index.html. BR, Tseveen From lucas at lucasrolff.com Thu Sep 14 16:34:09 2017 From: lucas at lucasrolff.com (Lucas Rolff) Date: Thu, 14 Sep 2017 16:34:09 +0000 Subject: nginx cache growing well above max_size threshold Message-ID: Hi guys, I have a minor question. I have an nginx box using proxy_cache; it has a keys zone of 40 gigabytes (so it can cache 320 million files), a max_size of 1500 gigabytes for the cache, and inactive set to 30 days. However, we experience that nginx goes well above the defined limit - in our case the max size is 1500 gigabytes, but the cache directory grows well above 1700 gigabytes. There's a total of 42,000,000 files currently on the system, meaning the average file size is about 43 kilobytes. Normally I know that nginx can go slightly above the limit until the cache manager purges the files, but it stays at about 1700 gigabytes constantly unless we manually clear out the cache. I see there's a change in 1.13.1 that ignores long locked cache entries; is it possible that this bugfix actually fixes the above issue? Upgrading is rather time consuming and we have to keep nginx versions consistent across the platform, so I wonder if anyone has some pointers on whether the above bugfix would solve our issue (currently the custom nginx version is based on nginx 1.10.3). Best Regards,
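For reference, a cache zone with the figures described above would look roughly like this (path and zone name are made up; per the nginx docs, one megabyte of keys zone stores about 8 thousand keys, so 40960m covers roughly 320 million entries):

```nginx
# ~40 GB keys zone, 1500 GB on-disk limit enforced by the cache
# manager process, entries evicted after 30 days without access.
proxy_cache_path /var/cache/nginx/proxy levels=1:2
                 keys_zone=big_cache:40960m
                 max_size=1500g
                 inactive=30d;
```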
From mdounin at mdounin.ru Thu Sep 14 16:55:57 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 14 Sep 2017 19:55:57 +0300 Subject: nginx cache growing well above max_size threshold In-Reply-To: References: Message-ID: <20170914165557.GU58595@mdounin.ru> Hello! On Thu, Sep 14, 2017 at 04:34:09PM +0000, Lucas Rolff wrote: > I have a minor question, so I have an nginx box using > proxy_cache, it has a key zone of 40 gigabyte (so it can cache > 320 million files), a max_size of 1500 gigabyte for the cache > and the inactive set to 30 days. > > However we experience that nginx goes well above the defined > limit - in our case the max size is 1500 gigabyte, but the cache > directory takes goes well above 1700 gigabyte. > > There's a total of 42.000.000 files currently on the system, > meaning the average filesize is about 43 kilobyte. > > Normally I know that nginx can go slightly above the limit, > until the cache manager purges the files, but it stays at about > 1700 gigabyte constantly unless we manually clear out the size. > > I see there's a change in 1.13.1 that ignores long locked cache > entries, is it possible that this bugfix actually fixes above > issue? > > Upgrading is rather time consuming and we have to ensure nginx > versions across the platform, so I wonder if anyone has some > pointers if the above bugfix would maybe solve our issue. > (currently the custom nginx version is based on nginx 1.10.3). https://trac.nginx.org/nginx/ticket/1163 TL;DR: This behaviour indicates there is a problem somewhere, likely socket leaks or process crashes. Reports suggest it might be related to HTTP/2. The change in 1.13.1 doesn't fix the root cause, but will allow nginx to keep the cache under max_size regardless of the problem.
-- Maxim Dounin http://nginx.org/ From lucas at lucasrolff.com Thu Sep 14 17:09:14 2017 From: lucas at lucasrolff.com (Lucas Rolff) Date: Thu, 14 Sep 2017 17:09:14 +0000 Subject: nginx cache growing well above max_size threshold In-Reply-To: <20170914165557.GU58595@mdounin.ru> References: , <20170914165557.GU58595@mdounin.ru> Message-ID: Okay cool, I'll give it a try In our case we do not run http2 on the machines since haproxy runs in front as well (which doesn't support http2) I'll also try enable a bit more verbose logging on one of the machines to see what the logs say Thanks a lot Maxim! Best regards, Lucas Rolff Get Outlook for iOS ________________________________ From: nginx on behalf of Maxim Dounin Sent: Thursday, September 14, 2017 6:55:57 PM To: nginx at nginx.org Subject: Re: nginx cache growing well above max_size threshold Hello! On Thu, Sep 14, 2017 at 04:34:09PM +0000, Lucas Rolff wrote: > I have a minor question, so I have an nginx box using > proxy_cache, it has a key zone of 40 gigabyte (so it can cache > 320 million files), a max_size of 1500 gigabyte for the cache > and the inactive set to 30 days. > > However we experience that nginx goes well above the defined > limit - in our case the max size is 1500 gigabyte, but the cache > directory takes goes well above 1700 gigabyte. > > There's a total of 42.000.000 files currently on the system, > meaning the average filesize is about 43 kilobyte. > > Normally I know that nginx can go slightly above the limit, > until the cache manager purges the files, but it stays at about > 1700 gigabyte constantly unless we manually clear out the size. > > I see there's a change in 1.13.1 that ignores long locked cache > entries, is it possible that this bugfix actually fixes above > issue? > > Upgrading is rather time consuming and we have to ensure nginx > versions across the platform, so I wonder if anyone has some > pointers if the above bugfix would maybe solve our issue. 
> (currently the custom nginx version is based on nginx 1.10.3).

https://trac.nginx.org/nginx/ticket/1163

TL;DR:

This behaviour indicate there is a problem somewhere, likely socket leaks or process crashes. Reports suggests it might be related to HTTP/2. The change in 1.13.1 don't fix the root cause, but will allow nginx to keep cache under max_size regardless of the problem.

-- 
Maxim Dounin
http://nginx.org/
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jeff.dyke at gmail.com  Thu Sep 14 18:10:16 2017
From: jeff.dyke at gmail.com (Jeff Dyke)
Date: Thu, 14 Sep 2017 14:10:16 -0400
Subject: nginx cache growing well above max_size threshold
In-Reply-To: 
References: <20170914165557.GU58595@mdounin.ru>
Message-ID: 

You can actually run H/2 through HAProxy, using ALPN to determine whether the client understands H/2. I have the following (snippet of a) config that sends traffic to different nginx ports based on the ALPN response.

frontend https
    mode tcp
    bind 0.0.0.0:443 ssl crt /etc/haproxy/certs alpn h2,http/1.1 ecdhe secp384r1
    http-response set-header Strict-Transport-Security "max-age=16000000; includeSubDomains; preload;"
    timeout http-request 10s
    # send all HTTP/2 traffic to a specific backend
    use_backend http2-nodes if { ssl_fc_alpn -i h2 }
    # send HTTP/1.1 and HTTP/1.0 to default, which don't speak HTTP/2
    default_backend http1-nodes

backend http1-nodes
    mode http
    balance roundrobin
    default-server inter 1s fall 2
    server web01 10.1.1.12:80 check send-proxy
    server web03 10.1.1.14:80 check send-proxy

backend http2-nodes
    mode tcp
    balance roundrobin
    default-server inter 1s fall 2
    server web01 10.1.1.12:81 check send-proxy
    server web03 10.1.1.14:81 check send-proxy

Sounds like you may not want to complicate this system ATM, but just throwing it out there.
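The nginx side of such a split can look roughly like the following. This is a sketch, not from Jeff's setup: the cleartext-HTTP/2 listener on port 81 and the trusted-proxy range are assumptions inferred from the backend definitions above, and both listeners accept the PROXY protocol because the servers use `send-proxy`:

```nginx
server {
    listen 80 proxy_protocol;        # HTTP/1.x from the "http1-nodes" backend
    set_real_ip_from 10.1.1.0/24;    # trust the HAProxy host (assumed range)
    real_ip_header proxy_protocol;
    # ... rest of the vhost ...
}

server {
    listen 81 http2 proxy_protocol;  # cleartext HTTP/2; TLS ends at HAProxy
    set_real_ip_from 10.1.1.0/24;
    real_ip_header proxy_protocol;
    # ... rest of the vhost ...
}
```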
It's worked really well for me; I have had it live for about a year.

On Thu, Sep 14, 2017 at 1:09 PM, Lucas Rolff wrote:

> Okay cool, I'll give it a try
>
> In our case we do not run http2 on the machines since haproxy runs in
> front as well (which doesn't support http2)
>
> I'll also try enable a bit more verbose logging on one of the machines to
> see what the logs say
>
> Thanks a lot Maxim!
>
> Best regards,
> Lucas Rolff
>
> Get Outlook for iOS
> ------------------------------
> *From:* nginx on behalf of Maxim Dounin <
> mdounin at mdounin.ru>
> *Sent:* Thursday, September 14, 2017 6:55:57 PM
> *To:* nginx at nginx.org
> *Subject:* Re: nginx cache growing well above max_size threshold
>
> Hello!
>
> On Thu, Sep 14, 2017 at 04:34:09PM +0000, Lucas Rolff wrote:
>
> > I have a minor question, so I have an nginx box using
> > proxy_cache, it has a key zone of 40 gigabyte (so it can cache
> > 320 million files), a max_size of 1500 gigabyte for the cache
> > and the inactive set to 30 days.
> >
> > However we experience that nginx goes well above the defined
> > limit - in our case the max size is 1500 gigabyte, but the cache
> > directory takes goes well above 1700 gigabyte.
> >
> > There's a total of 42.000.000 files currently on the system,
> > meaning the average filesize is about 43 kilobyte.
> >
> > Normally I know that nginx can go slightly above the limit,
> > until the cache manager purges the files, but it stays at about
> > 1700 gigabyte constantly unless we manually clear out the size.
> >
> > I see there's a change in 1.13.1 that ignores long locked cache
> > entries, is it possible that this bugfix actually fixes above
> > issue?
> >
> > Upgrading is rather time consuming and we have to ensure nginx
> > versions across the platform, so I wonder if anyone has some
> > pointers if the above bugfix would maybe solve our issue.
> > (currently the custom nginx version is based on nginx 1.10.3).
>
> https://trac.nginx.org/nginx/ticket/1163
>
> TL;DR:
>
> This behaviour indicate there is a problem somewhere, likely
> socket leaks or process crashes. Reports suggests it might be
> related to HTTP/2. The change in 1.13.1 don't fix the root cause,
> but will allow nginx to keep cache under max_size regardless of
> the problem.
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chris at cretaforce.gr  Thu Sep 14 19:57:39 2017
From: chris at cretaforce.gr (Christos Chatzaras)
Date: Thu, 14 Sep 2017 22:57:39 +0300
Subject: empty user-agent and logs
Message-ID: <9F0203D7-EB59-4897-885C-7B0532D32F14@cretaforce.gr>

curl -A "-" https://hostname/index.php

and

curl -A "" https://hostname/index.php

are logged with:

xxx.xxx.xxx.xxx - - [14/Sep/2017:22:47:09 +0300] "GET /index.php HTTP/1.1" 200 26039 "-" "-"

There is no difference between an empty user-agent and a user-agent containing a dash. Any idea why it shows the empty user agents with a dash?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From francis at daoine.org  Thu Sep 14 21:44:48 2017
From: francis at daoine.org (Francis Daly)
Date: Thu, 14 Sep 2017 22:44:48 +0100
Subject: Too many redirects
In-Reply-To: <6a123c32-9b43-2a22-df50-d39296276bc0@gmail.com>
References: <6a123c32-9b43-2a22-df50-d39296276bc0@gmail.com>
Message-ID: <20170914214448.GM20907@daoine.org>

On Thu, Sep 14, 2017 at 11:54:23PM +0800, tseveendorj wrote:

Hi there,

> location / {
> index index.html;
> if ($geoip_country_code != "JP") { return 301
> https://example.com/en/; }
> }
> location = /en/ {

Change that to "location /en/ {" or "location ^~ /en/ {".
> index index.html; > try_files $uri $uri/ =404; > } > > I'm trying to if request from other than JP request go to /en/ if > not /index.html You request /en/, which is processed in the second location. That does an internal redirect to /en/index.html, which is processed in the first location and returns a 301 redirect to /en/, so you have a loop. Change "location =" to break the loop, so that the request to /en/index.html is not handled in the first location. f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Sep 14 21:57:07 2017 From: francis at daoine.org (Francis Daly) Date: Thu, 14 Sep 2017 22:57:07 +0100 Subject: empty user-agent and logs In-Reply-To: <9F0203D7-EB59-4897-885C-7B0532D32F14@cretaforce.gr> References: <9F0203D7-EB59-4897-885C-7B0532D32F14@cretaforce.gr> Message-ID: <20170914215707.GN20907@daoine.org> On Thu, Sep 14, 2017 at 10:57:39PM +0300, Christos Chatzaras wrote: Hi there, > xxx.xxx.xxx.xxx - - [14/Sep/2017:22:47:09 +0300] "GET /index.php HTTP/1.1" 200 26039 "-" "-" > > There is not difference if there is an empty user-agent or a user-agent with a dash. > > Any idea it shows the empty user agents with dash? In the "common log format" widely used by web servers, the "hyphen" in the output indicates that the requested piece of information is not available. nginx does this in the function ngx_http_log_variable() in the file src/http/modules/ngx_http_log_module.c. If the difference between empty and dash matters to you, that is where you would probably change it. f -- Francis Daly francis at daoine.org From tseveendorj at gmail.com Thu Sep 14 22:07:30 2017 From: tseveendorj at gmail.com (Tseveendorj Ochirlantuu) Date: Fri, 15 Sep 2017 06:07:30 +0800 Subject: Too many redirects In-Reply-To: <20170914214448.GM20907@daoine.org> References: <6a123c32-9b43-2a22-df50-d39296276bc0@gmail.com> <20170914214448.GM20907@daoine.org> Message-ID: Thank you Francis Daly. It works. 
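Putting Francis's advice together, the working pair of locations for this thread looks like the following sketch (the GeoIP check, hostname, and try_files line are from the quoted configuration; `^~` is one of the two forms Francis suggested):

```nginx
location / {
    index index.html;
    if ($geoip_country_code != "JP") {
        return 301 https://example.com/en/;
    }
}

# A prefix match (instead of "location = /en/") also catches
# /en/index.html after the internal redirect, so the request never
# falls back into "location /" and the redirect loop is broken.
location ^~ /en/ {
    index index.html;
    try_files $uri $uri/ =404;
}
```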
On 15 Sep 2017 5:45 am, "Francis Daly" wrote: On Thu, Sep 14, 2017 at 11:54:23PM +0800, tseveendorj wrote: Hi there, > location / { > index index.html; > if ($geoip_country_code != "JP") { return 301 > https://example.com/en/; } > } > location = /en/ { Change that to "location /en/ {" or "location ^~ /en/ {". > index index.html; > try_files $uri $uri/ =404; > } > > I'm trying to if request from other than JP request go to /en/ if > not /index.html You request /en/, which is processed in the second location. That does an internal redirect to /en/index.html, which is processed in the first location and returns a 301 redirect to /en/, so you have a loop. Change "location =" to break the loop, so that the request to /en/index.html is not handled in the first location. f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Sep 14 22:09:51 2017 From: francis at daoine.org (Francis Daly) Date: Thu, 14 Sep 2017 23:09:51 +0100 Subject: No Upstream Proxy Headers In-Reply-To: <1D29EB36-B460-4AE8-B776-315EEEE39382@kilian-ries.de> References: <1D29EB36-B460-4AE8-B776-315EEEE39382@kilian-ries.de> Message-ID: <20170914220951.GO20907@daoine.org> On Mon, Sep 04, 2017 at 08:46:48PM +0000, Kilian Ries wrote: Hi there, > i'm running a nginx (version: nginx/1.13.1) with two vhosts with exact the same configuration. The only difference is the upstream section: each vhosts points to a different upstream server / ip. > "nginx -T" will show the full configuration used. That might help show other differences in the running configs. > proxy_set_header X-Real-IP $proxy_protocol_addr; > proxy_set_header X-Forwarded-For $proxy_protocol_addr; > vhost_1 works without any problem and i can see both proxy_headers in the tcpdump and in my upstream-apache access-logs. 
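A configuration-level alternative for the empty-user-agent question: instead of changing ngx_http_log_variable() in the source, the empty value can be mapped to a sentinel before logging. A sketch, assuming a `map` block can be added in the http context; the "(empty)" marker is an arbitrary choice:

```nginx
map $http_user_agent $ua_for_log {
    ""      "(empty)";          # missing or empty header gets a distinct marker
    default $http_user_agent;   # anything else is logged unchanged
}

log_format ua_aware '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$ua_for_log"';

access_log /var/log/nginx/access.log ua_aware;
```

With this, a request that really sends `User-Agent: -` still logs `-`, while a missing or empty header logs `(empty)`.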
"Normal" web clients don't speak the proxy protocol.

Can you describe a complete test case that someone else can use to see the problem you see?

f
-- 
Francis Daly        francis at daoine.org

From francis at daoine.org  Thu Sep 14 22:35:10 2017
From: francis at daoine.org (Francis Daly)
Date: Thu, 14 Sep 2017 23:35:10 +0100
Subject: Modules needed to reverse proxy
In-Reply-To: <1423573082.2223.1504744642296.JavaMail.josecarlos@MacBook-Pro-de-Jose-2.local>
References: <1763702393.2199.1504744339543.JavaMail.josecarlos@MacBook-Pro-de-Jose-2.local> <1423573082.2223.1504744642296.JavaMail.josecarlos@MacBook-Pro-de-Jose-2.local>
Message-ID: <20170914223510.GP20907@daoine.org>

On Thu, Sep 07, 2017 at 02:37:24AM +0200, Jose Carlos Sánchez wrote:

Hi there,

> Hi, i need to recompile nginx to include modsecurity module and i want to take advantage of not compiling unnecessary modules.
> Someone has the list of modules needed to do reverse proxy?

./configure --help | grep -o -- --without'[^ ]'*

Use all of those except for --without-http (because you want to listen for incoming http) and --without-http_proxy_module (because you want to reverse-proxy to an upstream server).

After that, "nginx -t" will tell you if your config needs any other modules -- for example, if you use "location ~", you might not want to exclude pcre.

f
-- 
Francis Daly        francis at daoine.org

From rodrigo at blizhost.com  Fri Sep 15 07:29:18 2017
From: rodrigo at blizhost.com (Rodrigo Gomes)
Date: Fri, 15 Sep 2017 04:29:18 -0300
Subject: OpenSSL 1.0.2 - CentOS 7.4
Message-ID: 

Hello guys,

RedHat released yesterday (September 14) CentOS 7.4, which includes version 1.0.2 of OpenSSL.

Now all it takes is the Nginx RPM to be compiled with the latest version of OpenSSL to work with ALPN.

Do you have a forecast when we will have these features supported in the official repository?

-------------- next part --------------
An HTML attachment was scrubbed...
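Francis's one-liner can be wrapped into a small script that prints the switches to pass to ./configure, with the two a reverse proxy needs filtered out. The help text below is a short stand-in for the real `./configure --help` output (which is much longer), and the anchored grep patterns are an adaptation of his command; the filtering step is the part being illustrated:

```shell
#!/bin/sh
# Stand-in for a few lines of `./configure --help` (the real list is longer).
help_text='  --without-http_gzip_module      disable ngx_http_gzip_module
  --without-http                  disable HTTP server
  --without-http_proxy_module     disable ngx_http_proxy_module
  --without-mail_pop3_module      disable ngx_mail_pop3_module'

# Keep every --without-* switch except the two a reverse proxy needs:
# --without-http and --without-http_proxy_module.
printf '%s\n' "$help_text" \
  | grep -o -- '--without[^ ]*' \
  | grep -v -e '^--without-http$' -e '^--without-http_proxy_module$'
```

Run against a real nginx source tree, the `printf` line would be replaced by `./configure --help`, as in Francis's message.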
URL: From thresh at nginx.com Fri Sep 15 09:00:54 2017 From: thresh at nginx.com (Konstantin Pavlov) Date: Fri, 15 Sep 2017 12:00:54 +0300 Subject: OpenSSL 1.0.2 - CentOS 7.4 In-Reply-To: References: Message-ID: <1033f7fe-2075-69c7-c904-0d1e076eeee0@nginx.com> Hello Rodrigo, On 15/09/2017 10:29, Rodrigo Gomes wrote: > ??Hello guys, > > RedHat released yesterday (September 14) CentOS 7.4, which includes version 1.0.2 of OpenSSL. > > Now all it takes is the Nginx RPM to be compiled with the latest version of OpenSSL to work with ALPN. > > Do you have a forecast when we will have these features supported in the official repository?? We're working on it. Next nginx release will have a rpm built specifically for CentOS/RHEL 7.4, but hopefully we'll also provide current stable and mainline a bit sooner. Stay tuned. -- Konstantin Pavlov www.nginx.com From nginx-forum at forum.nginx.org Fri Sep 15 12:29:20 2017 From: nginx-forum at forum.nginx.org (sandyman) Date: Fri, 15 Sep 2017 08:29:20 -0400 Subject: location settings for Django Uwsgi Nginx Configuration for production Message-ID: <6920cb07f7e74e1f5d6650b0e2c6f007.NginxMailingListEnglish@forum.nginx.org> Django settings STATICFILES_DIRS = [STATIC_DIR, ] STATIC_DIR = os.path.join(BASE_DIR, 'static') STATIC_ROOT = '/home/sandyman/production/django_project/assets/' STATIC_URL = '/assets/' local machine example file /home/sandyman/development/django_project/css/bootstrap.min.css Template {% load staticfiles %} href="{% static "css/bootstrap.min.css" %}" Production environment ran python manage.py collectstatic this succesfully created all files in sub directories of **/home/sandyman/production/django_project/assets/** e.g. 
/home/sandyman/production/django_project/assets/css/bootstrap.min.css

** NGINX configuration **

server {
    server_name sandyman.xyz www.sandyman.xyz;
    location /static {
        alias /home/sandyman/production/django_project/assets/;
    }
}

** GET Request **

Request URL: http://www.sandyman.xyz/static/css/bootstrap.min.css
Request Method: GET
**Status Code: 404 NOT FOUND**

The file bootstrap.min.css is located at

/home/sandyman/production/django_project/assets/css/bootstrap.min.css

Please assist me. I've tried many iterations of values for the nginx location, but no success.

django

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276402,276402#msg-276402

From nginx-forum at forum.nginx.org  Fri Sep 15 12:38:46 2017
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Fri, 15 Sep 2017 08:38:46 -0400
Subject: location settings for Django Uwsgi Nginx Configuration for production
In-Reply-To: <6920cb07f7e74e1f5d6650b0e2c6f007.NginxMailingListEnglish@forum.nginx.org>
References: <6920cb07f7e74e1f5d6650b0e2c6f007.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Have a look in the nginx error logfile to see where it expects the file to be.
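One thing worth checking in configs like the one above, independent of this particular 404: with a prefix location and `alias`, keeping the trailing slashes consistent makes the mapping from URI to filesystem path unambiguous. A sketch using the paths from the post:

```nginx
# With matching trailing slashes, the matched prefix is swapped cleanly
# for the alias path:
#   /static/css/bootstrap.min.css
#     -> /home/sandyman/production/django_project/assets/css/bootstrap.min.css
location /static/ {
    alias /home/sandyman/production/django_project/assets/;
}
```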
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276402,276403#msg-276403

From nginx-forum at forum.nginx.org  Fri Sep 15 13:43:45 2017
From: nginx-forum at forum.nginx.org (sandyman)
Date: Fri, 15 Sep 2017 09:43:45 -0400
Subject: location settings for Django Uwsgi Nginx Configuration for production
In-Reply-To: 
References: <6920cb07f7e74e1f5d6650b0e2c6f007.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

The error logs and access logs contain nothing. As it is a hosting company, I don't seem to have privileges to enable debugging; I'm not allowed to sudo.

error_log logs/error.log;
access_log logs/access.log;

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276402,276404#msg-276404

From nginx-forum at forum.nginx.org  Fri Sep 15 17:40:03 2017
From: nginx-forum at forum.nginx.org (ivy)
Date: Fri, 15 Sep 2017 13:40:03 -0400
Subject: Separated reverse proxy for different users
In-Reply-To: <20170903091658.GD20907@daoine.org>
References: <20170903091658.GD20907@daoine.org>
Message-ID: 

Bingo!

> try_files $uri $uri/ =404;

This line was inherited from the default configuration of nginx. As a newbie I am afraid to change anything I don't completely understand.

Thank you very much for the help, Francis. :-)

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276150,276406#msg-276406

From nginx-forum at forum.nginx.org  Fri Sep 15 17:44:00 2017
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Fri, 15 Sep 2017 13:44:00 -0400
Subject: location settings for Django Uwsgi Nginx Configuration for production
In-Reply-To: 
References: <6920cb07f7e74e1f5d6650b0e2c6f007.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

A 404 is logged unless logging has been disabled; ask the hosting company where the logs are, and check nginx.conf for what is logged and where.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276402,276407#msg-276407 From francis at daoine.org Sat Sep 16 10:43:49 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 16 Sep 2017 11:43:49 +0100 Subject: location settings for Django Uwsgi Nginx Configuration for production In-Reply-To: <6920cb07f7e74e1f5d6650b0e2c6f007.NginxMailingListEnglish@forum.nginx.org> References: <6920cb07f7e74e1f5d6650b0e2c6f007.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170916104349.GR20907@daoine.org> On Fri, Sep 15, 2017 at 08:29:20AM -0400, sandyman wrote: Hi there, > **server { > server_name sandyman.xyz www.sandyman.xyz ; > location /static { > alias /home/sandyman/production/django_project/assets/; > } > }** > ** GET Request ** > > Request URL:http://www.sandyman.xyz/static/css/bootstrap.min.css > Request Method:GET > **Status Code:404 NOT FOUND** > > file bootstrap is located in > > /home/sandyman/production/django_project/assets/css/bootstrap.min.css That nginx config with that request for that url, works for me -- http 200 with the content of the file. If there is nothing in nginx logs, that suggests that maybe you are not connecting to the ip:port that uses this config; or perhaps that there is more config that is relevant that you are not showing. You can prove that you are using the config that you think by, for example, adding a snippet like location = /test-this/ { return 200 "Yes, this is correct\n"; } and then requesting http://www.sandyman.xyz/test-this/ f -- Francis Daly francis at daoine.org From tkadm30 at yandex.com Sun Sep 17 12:18:51 2017 From: tkadm30 at yandex.com (Etienne Robillard) Date: Sun, 17 Sep 2017 08:18:51 -0400 Subject: mediawiki, php-fpm, and nginx Message-ID: <74ddc13a-3a32-a29c-f6d5-f3d2293f3172@yandex.com> Hi, I'm trying to configure nginx with php-fpm to run mediawiki in a distinct location (/wiki). 
Here's my config: # configuration file /etc/nginx/nginx.conf: user www-data; worker_processes 4; pid /run/nginx.pid; events { worker_connections 512; multi_accept on; use epoll; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 80; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # SSL Settings ## ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip off; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript; ## # Virtual Host Configs ## #isotopesoftware.ca: #include /etc/nginx/conf.d/development.conf; include /etc/nginx/conf.d/isotoperesearch.conf; #include /etc/nginx/sites-enabled/*; } server { # static medias web server configuration, for development # and testing purposes. 
listen 80; server_name localhost; error_log /var/log/nginx/error_log; #debug root /home/www/isotoperesearch.ca; #autoindex on; client_max_body_size 5m; client_body_timeout 60; location / { # # host and port to fastcgi server #uwsgi_pass django; # 8808=gthc.org; 8801=tm #include uwsgi_params; fastcgi_pass 127.0.0.1:8808; include fastcgi_params; } # debug url rewriting to the error log rewrite_log on; location /media { autoindex on; gzip on; } location /pub { autoindex on; gzip on; } location /webalizer { autoindex on; gzip on; #auth_basic "Private Property"; #auth_basic_user_file /etc/nginx/.htpasswd; allow 67.68.76.70; deny all; } location /documentation { autoindex on; gzip on; } location /moin_static184 { autoindex on; gzip on; } location /favicon.ico { empty_gif; } location /robots.txt { root /home/www/isotopesoftware.ca; } location /sitemap.xml { root /home/www/isotopesoftware.ca; } #location /public_html { # root /home/www/; # autoindex on; #} # redirect server error pages to the static page /50x.html #error_page 404 /404.html; #error_page 403 /403.html; #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /var/www/nginx-default; #} include conf.d/mediawiki.conf; #include conf.d/livestore.conf; } # configuration file /etc/nginx/fastcgi_params: fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; #fastcgi_param REMOTE_USER $remote_user; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT 
$server_port; fastcgi_param SERVER_NAME $server_name; #XXX #fastcgi_param HTTP_IF_NONE_MATCH $http_if_none_match; #fastcgi_param HTTP_IF_MODIFIED_SINCE $http_if_modified_since; # PHP only, required if PHP was built with --enable-force-cgi-redirect # fastcgi_param REDIRECT_STATUS 200; fastcgi_send_timeout 90; fastcgi_read_timeout 90; fastcgi_connect_timeout 40; #fastcgi_cache_valid 200 304 10m; #fastcgi_buffer_size 128k; #fastcgi_buffers 8 128k; #fastcgi_busy_buffers_size 256k; #fastcgi_temp_file_write_size 256k; # configuration file /etc/nginx/conf.d/mediawiki.conf: location = /wiki { root /home/www/isotoperesearch.ca/wiki; fastcgi_index index.php; index index.php; include fastcgi_params; fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; } #location @mediawiki { # rewrite ^/(.*)$ /index.php; #} The issue is that the default "/" location is masking the fastcgi_pass directive in the wiki block. Is there any ways to run php-fpm in a location block ? Thank you in advance, Etienne From anoopalias01 at gmail.com Sun Sep 17 12:28:31 2017 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sun, 17 Sep 2017 17:58:31 +0530 Subject: mediawiki, php-fpm, and nginx In-Reply-To: <74ddc13a-3a32-a29c-f6d5-f3d2293f3172@yandex.com> References: <74ddc13a-3a32-a29c-f6d5-f3d2293f3172@yandex.com> Message-ID: try changing ############################## location = /wiki { root /home/www/isotoperesearch.ca/wiki; fastcgi_index index.php; index index.php; include fastcgi_params; fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; } ############################## to ################################# location /wiki/ { # root /home/www/isotoperesearch.ca/wiki; fastcgi_index index.php; index /wiki/index.php; include fastcgi_params; fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; } ######################################3 On Sun, Sep 17, 2017 at 5:48 PM, Etienne Robillard wrote: > Hi, > > I'm trying to configure nginx with php-fpm to run mediawiki in a distinct > location (/wiki). 
> > Here's my config: > > # configuration file /etc/nginx/nginx.conf: > user www-data; > worker_processes 4; > pid /run/nginx.pid; > > events { > worker_connections 512; > multi_accept on; > use epoll; > } > > http { > > ## > # Basic Settings > ## > > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 80; > types_hash_max_size 2048; > # server_tokens off; > > # server_names_hash_bucket_size 64; > # server_name_in_redirect off; > > include /etc/nginx/mime.types; > default_type application/octet-stream; > > ## > # SSL Settings > ## > > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE > ssl_prefer_server_ciphers on; > > ## > # Logging Settings > ## > > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log; > > ## > # Gzip Settings > ## > > gzip off; > gzip_disable "msie6"; > > # gzip_vary on; > # gzip_proxied any; > # gzip_comp_level 6; > # gzip_buffers 16 8k; > # gzip_http_version 1.1; > # gzip_types text/plain text/css application/json > application/javascript text/xml application/xml application/xml+rss > text/javascript; > > ## > # Virtual Host Configs > ## > > #isotopesoftware.ca: > #include /etc/nginx/conf.d/development.conf; > include /etc/nginx/conf.d/isotoperesearch.conf; > #include /etc/nginx/sites-enabled/*; > } > > server { > > # static medias web server configuration, for development > # and testing purposes. 
> > listen 80; > server_name localhost; > error_log /var/log/nginx/error_log; #debug > root /home/www/isotoperesearch.ca; > #autoindex on; > client_max_body_size 5m; > client_body_timeout 60; > > location / { > # # host and port to fastcgi server > #uwsgi_pass django; # 8808=gthc.org; 8801=tm > #include uwsgi_params; > fastcgi_pass 127.0.0.1:8808; > include fastcgi_params; > } > > > # debug url rewriting to the error log > rewrite_log on; > > location /media { > autoindex on; > gzip on; > } > > location /pub { > autoindex on; > gzip on; > } > > location /webalizer { > autoindex on; > gzip on; > #auth_basic "Private Property"; > #auth_basic_user_file /etc/nginx/.htpasswd; > allow 67.68.76.70; > deny all; > } > > location /documentation { > autoindex on; > gzip on; > } > > location /moin_static184 { > autoindex on; > gzip on; > } > location /favicon.ico { > empty_gif; > } > location /robots.txt { > root /home/www/isotopesoftware.ca; > } > location /sitemap.xml { > root /home/www/isotopesoftware.ca; > } > > #location /public_html { > # root /home/www/; > # autoindex on; > #} > # redirect server error pages to the static page /50x.html > #error_page 404 /404.html; > #error_page 403 /403.html; > #error_page 500 502 503 504 /50x.html; > #location = /50x.html { > # root /var/www/nginx-default; > #} > > include conf.d/mediawiki.conf; > #include conf.d/livestore.conf; > } > > > # configuration file /etc/nginx/fastcgi_params: > fastcgi_param PATH_INFO $fastcgi_script_name; > fastcgi_param QUERY_STRING $query_string; > fastcgi_param REQUEST_METHOD $request_method; > fastcgi_param CONTENT_TYPE $content_type; > fastcgi_param CONTENT_LENGTH $content_length; > > fastcgi_param SCRIPT_NAME $fastcgi_script_name; > fastcgi_param REQUEST_URI $request_uri; > fastcgi_param DOCUMENT_URI $document_uri; > fastcgi_param DOCUMENT_ROOT $document_root; > fastcgi_param SERVER_PROTOCOL $server_protocol; > > fastcgi_param GATEWAY_INTERFACE CGI/1.1; > fastcgi_param SERVER_SOFTWARE nginx; > > 
fastcgi_param REMOTE_ADDR $remote_addr; > fastcgi_param REMOTE_PORT $remote_port; > #fastcgi_param REMOTE_USER $remote_user; > fastcgi_param SERVER_ADDR $server_addr; > fastcgi_param SERVER_PORT $server_port; > fastcgi_param SERVER_NAME $server_name; > > > #XXX > #fastcgi_param HTTP_IF_NONE_MATCH $http_if_none_match; > #fastcgi_param HTTP_IF_MODIFIED_SINCE $http_if_modified_since; > > > # PHP only, required if PHP was built with --enable-force-cgi-redirect > # fastcgi_param REDIRECT_STATUS 200; > > fastcgi_send_timeout 90; > fastcgi_read_timeout 90; > fastcgi_connect_timeout 40; > #fastcgi_cache_valid 200 304 10m; > #fastcgi_buffer_size 128k; > #fastcgi_buffers 8 128k; > #fastcgi_busy_buffers_size 256k; > #fastcgi_temp_file_write_size 256k; > > > # configuration file /etc/nginx/conf.d/mediawiki.conf: > > > location = /wiki { > root /home/www/isotoperesearch.ca/wiki; > fastcgi_index index.php; > index index.php; > include fastcgi_params; > fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; > > } > > #location @mediawiki { > # rewrite ^/(.*)$ /index.php; > #} > > > The issue is that the default "/" location is masking the fastcgi_pass > directive in the wiki block. > > Is there any ways to run php-fpm in a location block ? > > > Thank you in advance, > > Etienne > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... 
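Whichever prefix form is used, one more thing is worth checking: the fastcgi_params file quoted in this thread sets PATH_INFO and SCRIPT_NAME but never SCRIPT_FILENAME, which php-fpm needs in order to locate the script on disk. A nested-location sketch that adds it (the socket path comes from the original post; the nesting itself is an assumption, one common way to lay this out):

```nginx
location /wiki/ {
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        # The quoted fastcgi_params defines no SCRIPT_FILENAME, so
        # php-fpm would have no on-disk path to execute.
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    }
}
```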
URL: 

From nginx-forum at forum.nginx.org  Sun Sep 17 13:16:12 2017
From: nginx-forum at forum.nginx.org (sandyman)
Date: Sun, 17 Sep 2017 09:16:12 -0400
Subject: location settings for Django Uwsgi Nginx Configuration for production
In-Reply-To: <20170916104349.GR20907@daoine.org>
References: <20170916104349.GR20907@daoine.org>
Message-ID: 

Hello Francis,

Thank you for your help this far. What happened was: I set DEBUG=True in the Django settings, plus urlpatterns += staticfiles_urlpatterns(). This allowed the files to be served, but in a non-secure setting, so I'm back to where I started.

I added to my config:

location = /test-this/ { return 200 "Yes, this is correct\n"; }

I stopped and started nginx, restarted the server, and then requested http://www.sandyman.xyz/test-this/. I get the 404:

Request URL: http://www.asandhu.xyz/test-this/
Request Method: GET
Status Code: 404 NOT FOUND

I'm stuck again.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276402,276419#msg-276419

From tkadm30 at yandex.com  Sun Sep 17 13:25:06 2017
From: tkadm30 at yandex.com (Etienne Robillard)
Date: Sun, 17 Sep 2017 09:25:06 -0400
Subject: mediawiki, php-fpm, and nginx
In-Reply-To: 
References: <74ddc13a-3a32-a29c-f6d5-f3d2293f3172@yandex.com>
Message-ID: <233f83eb-ac3f-9167-7272-6aed41a37d03@yandex.com>

Hi Anoop,

Not sure why, but php-fpm now returns a blank page.
Here's my error_log:

2017/09/17 09:16:02 [debug] 21872#21872: epoll add event: fd:7 op:1 ev:00002001
2017/09/17 09:16:02 [debug] 21874#21874: epoll add event: fd:7 op:1 ev:00002001
2017/09/17 09:16:02 [debug] 21873#21873: epoll add event: fd:7 op:1 ev:00002001
2017/09/17 09:16:02 [debug] 21871#21871: epoll add event: fd:7 op:1 ev:00002001
2017/09/17 09:16:09 [debug] 21871#21871: accept on 0.0.0.0:80, ready: 1
2017/09/17 09:16:09 [debug] 21873#21873: accept on 0.0.0.0:80, ready: 1
2017/09/17 09:16:09 [debug] 21871#21871: accept() not ready (11: Resource temporarily unavailable)
2017/09/17 09:16:09 [debug] 21873#21873: posix_memalign: 0998ACC0:256 @16
2017/09/17 09:16:09 [debug] 21873#21873: *7 accept: 127.0.0.1:56704 fd:4
2017/09/17 09:16:09 [debug] 21873#21873: *7 event timer add: 4: 60000:2415675759
2017/09/17 09:16:09 [debug] 21873#21873: *7 reusable connection: 1
2017/09/17 09:16:09 [debug] 21873#21873: *7 epoll add event: fd:4 op:1 ev:80002001
2017/09/17 09:16:09 [debug] 21873#21873: accept() not ready (11: Resource temporarily unavailable)
2017/09/17 09:16:09 [debug] 21873#21873: *7 http wait request handler
2017/09/17 09:16:09 [debug] 21873#21873: *7 malloc: 099AEEE8:1024
2017/09/17 09:16:09 [debug] 21873#21873: *7 recv: eof:0, avail:1
2017/09/17 09:16:09 [debug] 21873#21873: *7 recv: fd:4 134 of 1024
2017/09/17 09:16:09 [debug] 21873#21873: *7 reusable connection: 0
2017/09/17 09:16:09 [debug] 21873#21873: *7 posix_memalign: 09992DB0:4096 @16
2017/09/17 09:16:09 [debug] 21873#21873: *7 http process request line
2017/09/17 09:16:09 [debug] 21873#21873: *7 http request line: "GET /wiki/ HTTP/1.1"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http uri: "/wiki/"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http args: ""
2017/09/17 09:16:09 [debug] 21873#21873: *7 http exten: ""
2017/09/17 09:16:09 [debug] 21873#21873: *7 http process request header line
2017/09/17 09:16:09 [debug] 21873#21873: *7 http header: "TE: deflate,gzip;q=0.3"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http header: "Connection: TE, close"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http header: "Host: localhost"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http header: "User-Agent: lwp-request/6.15 libwww-perl/6.15"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http header done
2017/09/17 09:16:09 [debug] 21873#21873: *7 event timer del: 4: 2415675759
2017/09/17 09:16:09 [debug] 21873#21873: *7 generic phase: 0
2017/09/17 09:16:09 [debug] 21873#21873: *7 rewrite phase: 1
2017/09/17 09:16:09 [debug] 21873#21873: *7 test location: "/pub"
2017/09/17 09:16:09 [debug] 21873#21873: *7 test location: "/webalizer"
2017/09/17 09:16:09 [debug] 21873#21873: *7 test location: "/wiki"
2017/09/17 09:16:09 [debug] 21873#21873: *7 test location: ~ "\.php$"
2017/09/17 09:16:09 [debug] 21873#21873: *7 using configuration "/wiki"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http cl:-1 max:5242880
2017/09/17 09:16:09 [debug] 21873#21873: *7 rewrite phase: 3
2017/09/17 09:16:09 [debug] 21873#21873: *7 post rewrite phase: 4
2017/09/17 09:16:09 [debug] 21873#21873: *7 generic phase: 5
2017/09/17 09:16:09 [debug] 21873#21873: *7 generic phase: 6
2017/09/17 09:16:09 [debug] 21873#21873: *7 generic phase: 7
2017/09/17 09:16:09 [debug] 21873#21873: *7 access phase: 8
2017/09/17 09:16:09 [debug] 21873#21873: *7 access phase: 9
2017/09/17 09:16:09 [debug] 21873#21873: *7 access phase: 10
2017/09/17 09:16:09 [debug] 21873#21873: *7 post access phase: 11
2017/09/17 09:16:09 [debug] 21873#21873: *7 content phase: 12
2017/09/17 09:16:09 [debug] 21873#21873: *7 open index "/home/www/isotoperesearch.ca/wiki/index.php"
2017/09/17 09:16:09 [debug] 21873#21873: *7 internal redirect: "/wiki/index.php?"
2017/09/17 09:16:09 [debug] 21873#21873: *7 rewrite phase: 1
2017/09/17 09:16:09 [debug] 21873#21873: *7 test location: "/pub"
2017/09/17 09:16:09 [debug] 21873#21873: *7 test location: "/webalizer"
2017/09/17 09:16:09 [debug] 21873#21873: *7 test location: "/wiki"
2017/09/17 09:16:09 [debug] 21873#21873: *7 test location: ~ "\.php$"
2017/09/17 09:16:09 [debug] 21873#21873: *7 using configuration "\.php$"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http cl:-1 max:5242880
2017/09/17 09:16:09 [debug] 21873#21873: *7 rewrite phase: 3
2017/09/17 09:16:09 [debug] 21873#21873: *7 post rewrite phase: 4
2017/09/17 09:16:09 [debug] 21873#21873: *7 generic phase: 5
2017/09/17 09:16:09 [debug] 21873#21873: *7 generic phase: 6
2017/09/17 09:16:09 [debug] 21873#21873: *7 generic phase: 7
2017/09/17 09:16:09 [debug] 21873#21873: *7 access phase: 8
2017/09/17 09:16:09 [debug] 21873#21873: *7 access phase: 9
2017/09/17 09:16:09 [debug] 21873#21873: *7 access phase: 10
2017/09/17 09:16:09 [debug] 21873#21873: *7 post access phase: 11
2017/09/17 09:16:09 [debug] 21873#21873: *7 http init upstream, client timer: 0
2017/09/17 09:16:09 [debug] 21873#21873: *7 epoll add event: fd:4 op:3 ev:80002005
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "PATH_INFO"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "/wiki/index.php"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "PATH_INFO: /wiki/index.php"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "QUERY_STRING"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "QUERY_STRING: "
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "REQUEST_METHOD"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "GET"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "REQUEST_METHOD: GET"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "CONTENT_TYPE"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "CONTENT_TYPE: "
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "CONTENT_LENGTH"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "CONTENT_LENGTH: "
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "SCRIPT_NAME"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "/wiki/index.php"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "SCRIPT_NAME: /wiki/index.php"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "REQUEST_URI"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "/wiki/"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "REQUEST_URI: /wiki/"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "DOCUMENT_URI"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "/wiki/index.php"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "DOCUMENT_URI: /wiki/index.php"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "DOCUMENT_ROOT"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "/home/www/isotoperesearch.ca"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "DOCUMENT_ROOT: /home/www/isotoperesearch.ca"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "SERVER_PROTOCOL"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "HTTP/1.1"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "SERVER_PROTOCOL: HTTP/1.1"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "GATEWAY_INTERFACE"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "CGI/1.1"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "GATEWAY_INTERFACE: CGI/1.1"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "SERVER_SOFTWARE"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "nginx"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "SERVER_SOFTWARE: nginx"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "REMOTE_ADDR"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "127.0.0.1"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "REMOTE_ADDR: 127.0.0.1"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "REMOTE_PORT"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "56704"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "REMOTE_PORT: 56704"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "SERVER_ADDR"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "127.0.0.1"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "SERVER_ADDR: 127.0.0.1"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "SERVER_PORT"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "80"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "SERVER_PORT: 80"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "SERVER_NAME"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "localhost"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "SERVER_NAME: localhost"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "HTTP_TE: deflate,gzip;q=0.3"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "HTTP_CONNECTION: TE, close"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "HTTP_HOST: localhost"
2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "HTTP_USER_AGENT: lwp-request/6.15 libwww-perl/6.15"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http cleanup add: 09993C3C
2017/09/17 09:16:09 [debug] 21873#21873: *7 get rr peer, try: 1
2017/09/17 09:16:09 [debug] 21873#21873: *7 stream socket 6
2017/09/17 09:16:09 [debug] 21873#21873: *7 epoll add connection: fd:6 ev:80002005
2017/09/17 09:16:09 [debug] 21873#21873: *7 connect to unix:/var/run/php/php7.0-fpm.sock, fd:6 #8
2017/09/17 09:16:09 [debug] 21873#21873: *7 connected
2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream connect: 0
2017/09/17 09:16:09 [debug] 21873#21873: *7 posix_memalign: 0998AA00:128 @16
2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream send request
2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream send request body
2017/09/17 09:16:09 [debug] 21873#21873: *7 chain writer buf fl:0 s:544
2017/09/17 09:16:09 [debug] 21873#21873: *7 chain writer in: 09993C5C
2017/09/17 09:16:09 [debug] 21873#21873: *7 writev: 544 of 544
2017/09/17 09:16:09 [debug] 21873#21873: *7 chain writer out: 00000000
2017/09/17 09:16:09 [debug] 21873#21873: *7 event timer add: 6: 90000:2415705830
2017/09/17 09:16:09 [debug] 21873#21873: *7 http finalize request: -4, "/wiki/index.php?" a:1, c:3
2017/09/17 09:16:09 [debug] 21873#21873: *7 http request count:3 blk:0
2017/09/17 09:16:09 [debug] 21873#21873: *7 http finalize request: -4, "/wiki/index.php?" a:1, c:2
2017/09/17 09:16:09 [debug] 21873#21873: *7 http request count:2 blk:0
2017/09/17 09:16:09 [debug] 21873#21873: *7 http run request: "/wiki/index.php?"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream check client, write event:1, "/wiki/index.php"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream request: "/wiki/index.php?"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream dummy handler
2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream request: "/wiki/index.php?"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream process header
2017/09/17 09:16:09 [debug] 21873#21873: *7 malloc: 09999E18:4096
2017/09/17 09:16:09 [debug] 21873#21873: *7 recv: eof:1, avail:1
2017/09/17 09:16:09 [debug] 21873#21873: *7 recv: fd:6 72 of 4096
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 01
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 06
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 01
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 2A
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 06
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record length: 42
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi parser: 0
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi header: "Content-type: text/html; charset=UTF-8"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi parser: 1
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi header done
2017/09/17 09:16:09 [debug] 21873#21873: *7 posix_memalign: 0998F000:4096 @16
2017/09/17 09:16:09 [debug] 21873#21873: *7 HTTP/1.1 200 OK
Server: nginx/1.12.0
Date: Sun, 17 Sep 2017 13:16:09 GMT
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: close

2017/09/17 09:16:09 [debug] 21873#21873: *7 write new buf t:1 f:0 0998F010, pos 0998F010, size: 165 file: 0, size: 0
2017/09/17 09:16:09 [debug] 21873#21873: *7 http write filter: l:0 f:0 s:165
2017/09/17 09:16:09 [debug] 21873#21873: *7 http cacheable: 0
2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream process upstream
2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe read upstream: 1
2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe preread: 22
2017/09/17 09:16:09 [debug] 21873#21873: *7 readv: eof:1, avail:0
2017/09/17 09:16:09 [debug] 21873#21873: *7 readv: 1, last:4024
2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe recv chain: 0
2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe buf free s:0 t:1 f:0 09999E18, pos 09999E4A, size: 22 file: 0, size: 0
2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe length: -1
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 01
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 03
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 01
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 08
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record length: 8
2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi sent end request
2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 09999E18
2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe write downstream: 1
2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe write downstream done
2017/09/17 09:16:09 [debug] 21873#21873: *7 event timer: 6, old: 2415705830, new: 2415705831
2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream exit: 00000000
2017/09/17 09:16:09 [debug] 21873#21873: *7 finalize http upstream request: 0
2017/09/17 09:16:09 [debug] 21873#21873: *7 finalize http fastcgi request
2017/09/17 09:16:09 [debug] 21873#21873: *7 free rr peer 1 0
2017/09/17 09:16:09 [debug] 21873#21873: *7 close http upstream connection: 6
2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 0998AA00, unused: 88
2017/09/17 09:16:09 [debug] 21873#21873: *7 event timer del: 6: 2415705830
2017/09/17 09:16:09 [debug] 21873#21873: *7 reusable connection: 0
2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream temp fd: -1
2017/09/17 09:16:09 [debug] 21873#21873: *7 http output filter "/wiki/index.php?"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http copy filter: "/wiki/index.php?"
2017/09/17 09:16:09 [debug] 21873#21873: *7 image filter
2017/09/17 09:16:09 [debug] 21873#21873: *7 http postpone filter "/wiki/index.php?" BFEBBC94
2017/09/17 09:16:09 [debug] 21873#21873: *7 http chunk: 0
2017/09/17 09:16:09 [debug] 21873#21873: *7 write old buf t:1 f:0 0998F010, pos 0998F010, size: 165 file: 0, size: 0
2017/09/17 09:16:09 [debug] 21873#21873: *7 write new buf t:0 f:0 00000000, pos 080F0A8B, size: 5 file: 0, size: 0
2017/09/17 09:16:09 [debug] 21873#21873: *7 http write filter: l:1 f:0 s:170
2017/09/17 09:16:09 [debug] 21873#21873: *7 http write filter limit 0
2017/09/17 09:16:09 [debug] 21873#21873: *7 writev: 170 of 170
2017/09/17 09:16:09 [debug] 21873#21873: *7 http write filter 00000000
2017/09/17 09:16:09 [debug] 21873#21873: *7 http copy filter: 0 "/wiki/index.php?"
2017/09/17 09:16:09 [debug] 21873#21873: *7 http finalize request: 0, "/wiki/index.php?" a:1, c:1
2017/09/17 09:16:09 [debug] 21873#21873: *7 http request count:1 blk:0
2017/09/17 09:16:09 [debug] 21873#21873: *7 http close request
2017/09/17 09:16:09 [debug] 21873#21873: *7 http log handler
2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 00000000
2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 09992DB0, unused: 4
2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 0998F000, unused: 3418
2017/09/17 09:16:09 [debug] 21873#21873: *7 close http connection: 4
2017/09/17 09:16:09 [debug] 21873#21873: *7 reusable connection: 0
2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 099AEEE8
2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 0998ACC0, unused: 24

On 2017-09-17 at 08:28, Anoop Alias wrote:
> try changing
>
> ##############################
>
> location = /wiki {
>       root /home/www/isotoperesearch.ca/wiki;
>       fastcgi_index index.php;
>       index index.php;
>       include fastcgi_params;
>       fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
>
> }
>
> ##############################
> to
>
> #################################
>
> location /wiki/ {
>       # root /home/www/isotoperesearch.ca/wiki;
>       fastcgi_index index.php;
>       index /wiki/index.php;
>       include fastcgi_params;
>       fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
>
> }
>
> #################################
>
> On Sun, Sep 17, 2017 at 5:48 PM, Etienne Robillard wrote:
>
> Hi,
>
> I'm trying to configure nginx with php-fpm to run mediawiki in a
> distinct location (/wiki).
>
> Here's my config:
>
> # configuration file /etc/nginx/nginx.conf:
> user www-data;
> worker_processes 4;
> pid /run/nginx.pid;
>
> events {
>     worker_connections 512;
>     multi_accept on;
>     use epoll;
> }
>
> http {
>
>     ##
>     # Basic Settings
>     ##
>
>     sendfile on;
>     tcp_nopush on;
>     tcp_nodelay on;
>     keepalive_timeout 80;
>     types_hash_max_size 2048;
>     # server_tokens off;
>
>     # server_names_hash_bucket_size 64;
>     # server_name_in_redirect off;
>
>     include /etc/nginx/mime.types;
>     default_type application/octet-stream;
>
>     ##
>     # SSL Settings
>     ##
>
>     ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
>     ssl_prefer_server_ciphers on;
>
>     ##
>     # Logging Settings
>     ##
>
>     access_log /var/log/nginx/access.log;
>     error_log /var/log/nginx/error.log;
>
>     ##
>     # Gzip Settings
>     ##
>
>     gzip off;
>     gzip_disable "msie6";
>
>     # gzip_vary on;
>     # gzip_proxied any;
>     # gzip_comp_level 6;
>     # gzip_buffers 16 8k;
>     # gzip_http_version 1.1;
>     # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
>
>     ##
>     # Virtual Host Configs
>     ##
>
>     #isotopesoftware.ca:
>     #include /etc/nginx/conf.d/development.conf;
>     include /etc/nginx/conf.d/isotoperesearch.conf;
>     #include /etc/nginx/sites-enabled/*;
> }
>
> server {
>
>     # static medias web server configuration, for development
>     # and testing purposes.
>
>     listen       80;
>     server_name  localhost;
>     error_log /var/log/nginx/error_log; #debug
>     root /home/www/isotoperesearch.ca;
>     #autoindex on;
>     client_max_body_size 5m;
>     client_body_timeout 60;
>
>     location / {
>     #    # host and port to fastcgi server
>         #uwsgi_pass django; # 8808=gthc.org; 8801=tm
>         #include uwsgi_params;
>         fastcgi_pass 127.0.0.1:8808;
>         include fastcgi_params;
>     }
>
>     # debug url rewriting to the error log
>     rewrite_log on;
>
>     location /media {
>         autoindex on;
>         gzip on;
>     }
>
>     location /pub {
>         autoindex on;
>         gzip on;
>     }
>
>     location /webalizer {
>         autoindex on;
>         gzip on;
>         #auth_basic "Private Property";
>         #auth_basic_user_file /etc/nginx/.htpasswd;
>         allow 67.68.76.70;
>         deny all;
>     }
>
>     location /documentation {
>         autoindex on;
>         gzip on;
>     }
>
>     location /moin_static184 {
>         autoindex on;
>         gzip on;
>     }
>     location /favicon.ico {
>         empty_gif;
>     }
>     location /robots.txt {
>         root /home/www/isotopesoftware.ca;
>     }
>     location /sitemap.xml {
>         root /home/www/isotopesoftware.ca;
>     }
>
>     #location /public_html {
>     # root /home/www/;
>     # autoindex on;
>     #}
>     # redirect server error pages to the static page /50x.html
>     #error_page 404 /404.html;
>     #error_page 403 /403.html;
>     #error_page 500 502 503 504 /50x.html;
>     #location = /50x.html {
>     #    root /var/www/nginx-default;
>     #}
>
>     include conf.d/mediawiki.conf;
>     #include conf.d/livestore.conf;
> }
>
>
> # configuration file /etc/nginx/fastcgi_params:
> fastcgi_param  PATH_INFO          $fastcgi_script_name;
> fastcgi_param  QUERY_STRING       $query_string;
> fastcgi_param  REQUEST_METHOD     $request_method;
> fastcgi_param  CONTENT_TYPE       $content_type;
> fastcgi_param  CONTENT_LENGTH     $content_length;
>
> fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
> fastcgi_param  REQUEST_URI        $request_uri;
> fastcgi_param  DOCUMENT_URI       $document_uri;
> fastcgi_param  DOCUMENT_ROOT      $document_root;
> fastcgi_param  SERVER_PROTOCOL    $server_protocol;
>
> fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
> fastcgi_param  SERVER_SOFTWARE    nginx;
>
> fastcgi_param  REMOTE_ADDR        $remote_addr;
> fastcgi_param  REMOTE_PORT        $remote_port;
> #fastcgi_param  REMOTE_USER       $remote_user;
> fastcgi_param  SERVER_ADDR        $server_addr;
> fastcgi_param  SERVER_PORT        $server_port;
> fastcgi_param  SERVER_NAME        $server_name;
>
> #XXX
> #fastcgi_param HTTP_IF_NONE_MATCH $http_if_none_match;
> #fastcgi_param HTTP_IF_MODIFIED_SINCE $http_if_modified_since;
>
> # PHP only, required if PHP was built with --enable-force-cgi-redirect
> # fastcgi_param  REDIRECT_STATUS    200;
>
> fastcgi_send_timeout 90;
> fastcgi_read_timeout 90;
> fastcgi_connect_timeout 40;
> #fastcgi_cache_valid 200 304 10m;
> #fastcgi_buffer_size 128k;
> #fastcgi_buffers 8 128k;
> #fastcgi_busy_buffers_size 256k;
> #fastcgi_temp_file_write_size 256k;
>
>
> # configuration file /etc/nginx/conf.d/mediawiki.conf:
>
> location = /wiki {
>       root /home/www/isotoperesearch.ca/wiki;
>       fastcgi_index index.php;
>       index index.php;
>       include fastcgi_params;
>       fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
>
> }
>
> #location @mediawiki {
> #    rewrite ^/(.*)$ /index.php;
> #}
>
> The issue is that the default "/" location is masking the
> fastcgi_pass directive in the wiki block.
>
> Is there any ways to run php-fpm in a location block ?
>
> Thank you in advance,
>
> Etienne
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
> --
> *Anoop P Alias*

--
Etienne Robillard
tkadm30 at yandex.com
http://www.isotopesoftware.ca/

From anoopalias01 at gmail.com  Sun Sep 17 13:36:19 2017
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Sun, 17 Sep 2017 19:06:19 +0530
Subject: mediawiki, php-fpm, and nginx
In-Reply-To: <233f83eb-ac3f-9167-7272-6aed41a37d03@yandex.com>
References: <74ddc13a-3a32-a29c-f6d5-f3d2293f3172@yandex.com>
 <233f83eb-ac3f-9167-7272-6aed41a37d03@yandex.com>
Message-ID: 

Hi Etienne,

Assuming you want mediawiki to be served from /wiki/:

##########################
server {
    root /path/to/directory/excluding/wiki;
    ...

    location /wiki/ {
        try_files $uri $uri/ /wiki/index.php?$query_string;

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        }
    }
}
##########################

Note that the ~ \.php$ location is nested inside location /wiki/, and the
root directive appears only once in the server {} block.

Good Luck

On Sun, Sep 17, 2017 at 6:55 PM, Etienne Robillard wrote:
> Hi Anoop,
>
> Not sure why, but php-fpm now returns a blank page.
> Here's my error_log:
>
> [debug log identical to the one reproduced in full above; trimmed]
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 reusable connection: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream temp fd: -1 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http output filter "/wiki/index.php?" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http copy filter: "/wiki/index.php?" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 image filter > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http postpone filter "/wiki/index.php?" BFEBBC94 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http chunk: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 write old buf t:1 f:0 0998F010, pos 0998F010, size: 165 file: 0, size: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 write new buf t:0 f:0 00000000, pos 080F0A8B, size: 5 file: 0, size: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http write filter: l:1 f:0 s:170 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http write filter limit 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 writev: 170 of 170 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http write filter 00000000 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http copy filter: 0 "/wiki/index.php?" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http finalize request: 0, "/wiki/index.php?" a:1, c:1 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http request count:1 blk:0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http close request > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http log handler > 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 00000000 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 09992DB0, unused: 4 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 0998F000, unused: 3418 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 close http connection: 4 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 reusable connection: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 099AEEE8 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 0998ACC0, unused: 24 > > > On 2017-09-17 08:28, Anoop Alias wrote: > > try changing > > ############################## > > location = /wiki { > root /home/www/isotoperesearch.ca/wiki; > fastcgi_index index.php; > index index.php; > include fastcgi_params; > fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; > > } > > ############################## > to > > ################################# > > location /wiki/ { > # root /home/www/isotoperesearch.ca/wiki; > fastcgi_index index.php; > index /wiki/index.php; > include fastcgi_params; > fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; > > } > > ################################# > > On Sun, Sep 17, 2017 at 5:48 PM, Etienne Robillard > wrote: > >> Hi, >> >> I'm trying to configure nginx with php-fpm to run mediawiki in a distinct >> location (/wiki). >> >> Here's my config: >> >> # configuration file /etc/nginx/nginx.conf: >> user www-data; >> worker_processes 4; >> pid /run/nginx.pid; >> >> events { >> worker_connections 512; >> multi_accept on; >> use epoll; >> } >> >> http { >> >> ## >> # Basic Settings >> ## >> >> sendfile on; >> tcp_nopush on; >> tcp_nodelay on; >> keepalive_timeout 80; >> types_hash_max_size 2048; >> # server_tokens off; >> >> # server_names_hash_bucket_size 64; >> # server_name_in_redirect off; >> >> include /etc/nginx/mime.types; >> default_type application/octet-stream; >> >> ## >> # SSL Settings >> ## >> >> ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE >> ssl_prefer_server_ciphers on; >> >> ## >> # Logging Settings >> ## >> >> access_log /var/log/nginx/access.log; >> error_log /var/log/nginx/error.log; >> >> ## >> # Gzip Settings >> ## >> >> gzip off; >> gzip_disable "msie6"; >> >> # gzip_vary on; >> # gzip_proxied any; >> # gzip_comp_level 6; >> # gzip_buffers 16 8k; >> # gzip_http_version 1.1; >> # gzip_types text/plain text/css application/json >> application/javascript text/xml application/xml application/xml+rss >> text/javascript; >> >> ## >> # Virtual Host Configs >> ## >> >> #isotopesoftware.ca: >> 
#include /etc/nginx/conf.d/development.conf; >> include /etc/nginx/conf.d/isotoperesearch.conf; >> #include /etc/nginx/sites-enabled/*; >> } >> >> server { >> >> # static medias web server configuration, for development >> # and testing purposes. >> >> listen 80; >> server_name localhost; >> error_log /var/log/nginx/error_log; #debug >> root /home/www/isotoperesearch.ca; >> #autoindex on; >> client_max_body_size 5m; >> client_body_timeout 60; >> >> location / { >> # # host and port to fastcgi server >> #uwsgi_pass django; # 8808=gthc.org; 8801=tm >> #include uwsgi_params; >> fastcgi_pass 127.0.0.1:8808; >> include fastcgi_params; >> } >> >> >> # debug url rewriting to the error log >> rewrite_log on; >> >> location /media { >> autoindex on; >> gzip on; >> } >> >> location /pub { >> autoindex on; >> gzip on; >> } >> >> location /webalizer { >> autoindex on; >> gzip on; >> #auth_basic "Private Property"; >> #auth_basic_user_file /etc/nginx/.htpasswd; >> allow 67.68.76.70; >> deny all; >> } >> >> location /documentation { >> autoindex on; >> gzip on; >> } >> >> location /moin_static184 { >> autoindex on; >> gzip on; >> } >> location /favicon.ico { >> empty_gif; >> } >> location /robots.txt { >> root /home/www/isotopesoftware.ca; >> } >> location /sitemap.xml { >> root /home/www/isotopesoftware.ca; >> } >> >> #location /public_html { >> # root /home/www/; >> # autoindex on; >> #} >> # redirect server error pages to the static page /50x.html >> #error_page 404 /404.html; >> #error_page 403 /403.html; >> #error_page 500 502 503 504 /50x.html; >> #location = /50x.html { >> # root /var/www/nginx-default; >> #} >> >> include conf.d/mediawiki.conf; >> #include conf.d/livestore.conf; >> } >> >> >> # configuration file /etc/nginx/fastcgi_params: >> fastcgi_param PATH_INFO $fastcgi_script_name; >> fastcgi_param QUERY_STRING $query_string; >> fastcgi_param REQUEST_METHOD $request_method; >> fastcgi_param CONTENT_TYPE $content_type; >> fastcgi_param CONTENT_LENGTH 
$content_length; >> >> fastcgi_param SCRIPT_NAME $fastcgi_script_name; >> fastcgi_param REQUEST_URI $request_uri; >> fastcgi_param DOCUMENT_URI $document_uri; >> fastcgi_param DOCUMENT_ROOT $document_root; >> fastcgi_param SERVER_PROTOCOL $server_protocol; >> >> fastcgi_param GATEWAY_INTERFACE CGI/1.1; >> fastcgi_param SERVER_SOFTWARE nginx; >> >> fastcgi_param REMOTE_ADDR $remote_addr; >> fastcgi_param REMOTE_PORT $remote_port; >> #fastcgi_param REMOTE_USER $remote_user; >> fastcgi_param SERVER_ADDR $server_addr; >> fastcgi_param SERVER_PORT $server_port; >> fastcgi_param SERVER_NAME $server_name; >> >> >> #XXX >> #fastcgi_param HTTP_IF_NONE_MATCH $http_if_none_match; >> #fastcgi_param HTTP_IF_MODIFIED_SINCE $http_if_modified_since; >> >> >> # PHP only, required if PHP was built with --enable-force-cgi-redirect >> # fastcgi_param REDIRECT_STATUS 200; >> >> fastcgi_send_timeout 90; >> fastcgi_read_timeout 90; >> fastcgi_connect_timeout 40; >> #fastcgi_cache_valid 200 304 10m; >> #fastcgi_buffer_size 128k; >> #fastcgi_buffers 8 128k; >> #fastcgi_busy_buffers_size 256k; >> #fastcgi_temp_file_write_size 256k; >> >> >> # configuration file /etc/nginx/conf.d/mediawiki.conf: >> >> >> location = /wiki { >> root /home/www/isotoperesearch.ca/wiki; >> fastcgi_index index.php; >> index index.php; >> include fastcgi_params; >> fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; >> >> } >> >> #location @mediawiki { >> # rewrite ^/(.*)$ /index.php; >> #} >> >> >> The issue is that the default "/" location is masking the fastcgi_pass >> directive in the wiki block. >> >> Is there any way to run php-fpm in a location block? 
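[One thing worth noting about the configuration above: neither the quoted fastcgi_params file nor the /wiki block ever sets SCRIPT_FILENAME, which PHP-FPM uses to locate the script on disk. A minimal sketch of a prefix location that scopes PHP-FPM to /wiki, reusing the socket path from the config above; the try_files fallback is an assumption about MediaWiki's entry point, not taken from the thread:

```nginx
location /wiki/ {
    index index.php;
    # hand non-file URLs to MediaWiki's front controller (assumed entry point)
    try_files $uri $uri/ /wiki/index.php?$args;

    location ~ \.php$ {
        try_files $uri =404;
        include fastcgi_params;
        # php-fpm resolves the script from SCRIPT_FILENAME;
        # the fastcgi_params file shown above does not set it
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    }
}
```

Because "location /wiki/" is a longer prefix than "location /", it wins the prefix match, and the nested regex location then handles .php requests under /wiki/, so the fastcgi_pass in the "/" block no longer masks it.]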
>> >> Thank you in advance, >> >> Etienne >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > -- > *Anoop P Alias* > > -- > Etienne Robillard > tkadm30 at yandex.com > http://www.isotopesoftware.ca/ -- *Anoop P Alias* From tkadm30 at yandex.com Sun Sep 17 14:13:51 2017 From: tkadm30 at yandex.com (Etienne Robillard) Date: Sun, 17 Sep 2017 10:13:51 -0400 Subject: mediawiki, php-fpm, and nginx In-Reply-To: References: <74ddc13a-3a32-a29c-f6d5-f3d2293f3172@yandex.com> <233f83eb-ac3f-9167-7272-6aed41a37d03@yandex.com> Message-ID: <6d77265c-7450-bb6d-1d1f-fe786b5aeb8f@yandex.com> Hi Anoop, What value should I set cgi.fix_pathinfo to in php.ini? E On 2017-09-17 09:36, Anoop Alias wrote: > Hi Etienne, > > Assuming you want mediawiki to be served from /wiki/ > > ########################## > server { > > root /path/to/directory/excluding/wiki; > .. > .. > ... > location /wiki/ { > > try_files $uri $uri/ /wiki/index.php?$query_string; > > location ~ \.php$ { > > try_files $uri =404; > fastcgi_index index.php; > include fastcgi_params; > fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; > > } > > } > > ########################## > > note the ~ \.php location is inside location /wiki/ and the root > directive comes only once in the server {} > > Good Luck > > On Sun, Sep 17, 2017 at 6:55 PM, Etienne Robillard > wrote: > > Hi Anoop, > > Not sure why, but php-fpm now returns a blank page.
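[On the cgi.fix_pathinfo question above: when nginx passes an exact SCRIPT_FILENAME and guards PHP locations with "try_files $uri =404" as in the suggested block, the usual advice is to disable PATH_INFO guessing. A sketch of the relevant php.ini line; the file path is distribution-dependent and an assumption here:

```ini
; e.g. /etc/php/7.0/fpm/php.ini (exact path varies by distribution)
; 0 = PHP will not walk back along the path to guess which file to
; run, so a request like /upload.png/x.php cannot execute upload.png
; as PHP when the front end passes an exact SCRIPT_FILENAME
cgi.fix_pathinfo=0
```

Reload php-fpm after changing it for the setting to take effect.]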
> > Here's my error_log: > > [snip: debug log identical to the one posted earlier in the thread] > > > On 2017-09-17 08:28, Anoop Alias wrote: >> try changing >> >> [snip: same suggested configuration and full nginx.conf as quoted earlier in the thread] >> >> The issue is that the default "/" location is masking the fastcgi_pass >> directive in the wiki block. >> >> Is there any way to run php-fpm in a location block? >> >> Thank you in advance, >> >> Etienne > > -- > Etienne Robillard > tkadm30 at yandex.com > http://www.isotopesoftware.ca/ > > -- > *Anoop P Alias* -- Etienne Robillard tkadm30 at yandex.com http://www.isotopesoftware.ca/ From nginx-forum at forum.nginx.org Sun Sep 17 14:49:09 2017 From: nginx-forum at forum.nginx.org (sandyman) Date: Sun, 17 Sep 2017 10:49:09 -0400 Subject: location settings for Django Uwsgi Nginx Configuration for production In-Reply-To: <20170916104349.GR20907@daoine.org> References: <20170916104349.GR20907@daoine.org> Message-ID: <8e55bb57b334260d413f4e1b71441626.NginxMailingListEnglish@forum.nginx.org> 2 hours later still struggling but getting some error logs now location /foo { return 200 "yikes ";} returns yikes .. 
However, www.asandhu.xyz returns a 502. The log file reports either
"uwsgi.sock failed (13: Permission denied)" when the socket is mode 664 and
owned by me, or "(111: Connection refused)" when it is chmod 777 and owned
by me.

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,276402,276423#msg-276423

From francis at daoine.org  Sun Sep 17 17:33:09 2017
From: francis at daoine.org (Francis Daly)
Date: Sun, 17 Sep 2017 18:33:09 +0100
Subject: location settings for Django Uwsgi Nginx Configuration for production
In-Reply-To: 
References: <20170916104349.GR20907@daoine.org>
Message-ID: <20170917173309.GS20907@daoine.org>

On Sun, Sep 17, 2017 at 09:16:12AM -0400, sandyman wrote:

Hi there,

> I'm back to where I started. I added to my config
>
> location = /test-this/ { return 200 "Yes, this is correct\n"; }
>
> I stopped and started nginx, and restarted the server
>
> and then requesting http://www.sandyman.xyz/test-this/
>
> I get the 404

That says that the nginx.conf that the running nginx is actually using is
not the nginx.conf that you are editing. Until you can edit the correct
section of the file, you will not be able to fix things.

> Request URL: http://www.asandhu.xyz/test-this/

Note that "sandyman" and "asandhu" are not the same thing. Perhaps you are
not connecting to the IP address that you want to be connecting to?
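The two failure modes reported above differ in a useful way: "(13: Permission denied)" means the nginx worker was not allowed to open the socket file at all, while "(111: Connection refused)" means the file was reachable but no process was accepting connections on it. A minimal Python sketch of the 111 case (the temporary path is illustrative, not the real uwsgi.sock):

```python
import errno
import os
import socket
import tempfile

# errno 111 (ECONNREFUSED): the socket file exists, but no process is
# accepting connections on it -- exactly what nginx reports when the
# uwsgi/php-fpm backend is down or bound to a different path.
tmpdir = tempfile.mkdtemp()
sock_path = os.path.join(tmpdir, "uwsgi.sock")  # illustrative path

# Bind a listener and immediately close it, leaving a stale socket file.
listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
listener.bind(sock_path)
listener.close()

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    client.connect(sock_path)
    result = None
except OSError as exc:
    result = exc.errno
finally:
    client.close()

print(result == errno.ECONNREFUSED)  # True on Linux: "(111: Connection refused)"
```

If a backend were actually listening on that path, connect() would succeed, so a persistent 111 usually means the backend is not running, or is bound to a different socket path than the one nginx connects to.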
--
Francis Daly        francis at daoine.org

From tkadm30 at yandex.com  Sun Sep 17 17:34:20 2017
From: tkadm30 at yandex.com (Etienne Robillard)
Date: Sun, 17 Sep 2017 13:34:20 -0400
Subject: mediawiki, php-fpm, and nginx
In-Reply-To: 
References: <74ddc13a-3a32-a29c-f6d5-f3d2293f3172@yandex.com>
 <233f83eb-ac3f-9167-7272-6aed41a37d03@yandex.com>
Message-ID: 

Hi Anoop,

It's working fine now :)

Here's my final config:

location /wiki {
    try_files $uri $uri /wiki/index.php;
    location ~ \.php$ {
        fastcgi_param HTTP_PROXY "";
        try_files $uri =404;
        fastcgi_index index.php;
        include fastcgi_params;
        #fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
    }
}

Thank you very much for your help!

Etienne

On 17/09/17 09:36 AM, Anoop Alias wrote:
> Hi Etienne,
>
> Assuming you want mediawiki to be served from /wiki/
>
> ##########################
> server {
>
>     root /path/to/directory/excluding/wiki;
>     ...
>     location /wiki/ {
>
>         try_files $uri $uri /wiki/index.php?query_string;
>
>         location ~ \.php$ {
>             try_files $uri =404;
>             fastcgi_index index.php;
>             include fastcgi_params;
>             fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
>         }
>     }
>
> ##########################
>
> Note the ~ \.php location is inside location /wiki/, and the root
> directive only comes once in the server {}.
>
> Good Luck
>
> On Sun, Sep 17, 2017 at 6:55 PM, Etienne Robillard wrote:
>
> Hi Anoop,
>
> Not sure why, but php-fpm now returns a blank page.
> > Here's my error_log: > > 2017/09/17 09:16:02 [debug] 21872#21872: epoll add event: fd:7 op:1 ev:00002001 > 2017/09/17 09:16:02 [debug] 21874#21874: epoll add event: fd:7 op:1 ev:00002001 > 2017/09/17 09:16:02 [debug] 21873#21873: epoll add event: fd:7 op:1 ev:00002001 > 2017/09/17 09:16:02 [debug] 21871#21871: epoll add event: fd:7 op:1 ev:00002001 > 2017/09/17 09:16:09 [debug] 21871#21871: accept on0.0.0.0:80 , ready: 1 > 2017/09/17 09:16:09 [debug] 21873#21873: accept on0.0.0.0:80 , ready: 1 > 2017/09/17 09:16:09 [debug] 21871#21871: accept() not ready (11: Resource temporarily unavailable) > 2017/09/17 09:16:09 [debug] 21873#21873: posix_memalign: 0998ACC0:256 @16 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 accept:127.0.0.1:56704 fd:4 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 event timer add: 4: 60000:2415675759 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 reusable connection: 1 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 epoll add event: fd:4 op:1 ev:80002001 > 2017/09/17 09:16:09 [debug] 21873#21873: accept() not ready (11: Resource temporarily unavailable) > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http wait request handler > 2017/09/17 09:16:09 [debug] 21873#21873: *7 malloc: 099AEEE8:1024 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 recv: eof:0, avail:1 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 recv: fd:4 134 of 1024 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 reusable connection: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 posix_memalign: 09992DB0:4096 @16 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http process request line > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http request line: "GET//wiki// HTTP/1.1" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http uri: "//wiki//" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http args: "" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http exten: "" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http process request header line > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http 
header: "TE: deflate,gzip;q=0.3" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http header: "Connection: TE, close" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http header: "Host: localhost" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http header: "User-Agent: lwp-request/6.15 libwww-perl/6.15" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http header done > 2017/09/17 09:16:09 [debug] 21873#21873: *7 event timer del: 4: 2415675759 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 generic phase: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 rewrite phase: 1 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 test location: "/pub" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 test location: "/webalizer" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 test location: "/wiki" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 test location: ~ "\.php$" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 using configuration "/wiki" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http cl:-1 max:5242880 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 rewrite phase: 3 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 post rewrite phase: 4 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 generic phase: 5 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 generic phase: 6 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 generic phase: 7 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 access phase: 8 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 access phase: 9 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 access phase: 10 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 post access phase: 11 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 content phase: 12 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 open index "/home/www/isotoperesearch.ca/wiki/index.php > " > 2017/09/17 09:16:09 [debug] 21873#21873: *7 internal redirect: "/wiki/index.php?" 
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 rewrite phase: 1 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 test location: "/pub" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 test location: "/webalizer" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 test location: "/wiki" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 test location: ~ "\.php$" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 using configuration "\.php$" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http cl:-1 max:5242880 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 rewrite phase: 3 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 post rewrite phase: 4 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 generic phase: 5 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 generic phase: 6 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 generic phase: 7 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 access phase: 8 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 access phase: 9 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 access phase: 10 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 post access phase: 11 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http init upstream, client timer: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 epoll add event: fd:4 op:3 ev:80002005 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "PATH_INFO" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "/wiki/index.php" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "PATH_INFO: /wiki/index.php" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "QUERY_STRING" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "QUERY_STRING: " > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "REQUEST_METHOD" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "GET" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "REQUEST_METHOD: GET" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "CONTENT_TYPE" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 
fastcgi param: "CONTENT_TYPE: " > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "CONTENT_LENGTH" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "CONTENT_LENGTH: " > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "SCRIPT_NAME" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "/wiki/index.php" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "SCRIPT_NAME: /wiki/index.php" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "REQUEST_URI" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "//wiki//" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "REQUEST_URI://wiki//" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "DOCUMENT_URI" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "/wiki/index.php" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "DOCUMENT_URI: /wiki/index.php" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "DOCUMENT_ROOT" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "/home/www/isotoperesearch.ca " > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "DOCUMENT_ROOT: /home/www/isotoperesearch.ca " > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "SERVER_PROTOCOL" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "HTTP/1.1" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "SERVER_PROTOCOL: HTTP/1.1" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "GATEWAY_INTERFACE" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "CGI/1.1" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "GATEWAY_INTERFACE: CGI/1.1" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "SERVER_SOFTWARE" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "nginx" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "SERVER_SOFTWARE: nginx" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http 
script copy: "REMOTE_ADDR" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "127.0.0.1" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "REMOTE_ADDR: 127.0.0.1" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "REMOTE_PORT" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "56704" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "REMOTE_PORT: 56704" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "SERVER_ADDR" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "127.0.0.1" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "SERVER_ADDR: 127.0.0.1" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "SERVER_PORT" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "80" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "SERVER_PORT: 80" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script copy: "SERVER_NAME" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http script var: "localhost" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "SERVER_NAME: localhost" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "HTTP_TE: deflate,gzip;q=0.3" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "HTTP_CONNECTION: TE, close" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "HTTP_HOST: localhost" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 fastcgi param: "HTTP_USER_AGENT: lwp-request/6.15 libwww-perl/6.15" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http cleanup add: 09993C3C > 2017/09/17 09:16:09 [debug] 21873#21873: *7 get rr peer, try: 1 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 stream socket 6 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 epoll add connection: fd:6 ev:80002005 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 connect to unix:/var/run/php/php7.0-fpm.sock, fd:6 #8 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 connected > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http 
upstream connect: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 posix_memalign: 0998AA00:128 @16 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream send request > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream send request body > 2017/09/17 09:16:09 [debug] 21873#21873: *7 chain writer buf fl:0 s:544 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 chain writer in: 09993C5C > 2017/09/17 09:16:09 [debug] 21873#21873: *7 writev: 544 of 544 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 chain writer out: 00000000 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 event timer add: 6: 90000:2415705830 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http finalize request: -4, "/wiki/index.php?" a:1, c:3 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http request count:3 blk:0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http finalize request: -4, "/wiki/index.php?" a:1, c:2 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http request count:2 blk:0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http run request: "/wiki/index.php?" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream check client, write event:1, "/wiki/index.php" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream request: "/wiki/index.php?" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream dummy handler > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream request: "/wiki/index.php?" 
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream process header > 2017/09/17 09:16:09 [debug] 21873#21873: *7 malloc: 09999E18:4096 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 recv: eof:1, avail:1 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 recv: fd:6 72 of 4096 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 01 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 06 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 01 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 2A > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 06 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record length: 42 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi parser: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi header: "Content-type: text/html; charset=UTF-8" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi parser: 1 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi header done > 2017/09/17 09:16:09 [debug] 21873#21873: *7 posix_memalign: 0998F000:4096 @16 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 HTTP/1.1 200 OK > Server: nginx/1.12.0 > Date: Sun, 17 Sep 2017 13:16:09 GMT > Content-Type: text/html; charset=UTF-8 > Transfer-Encoding: chunked > Connection: close > > 2017/09/17 09:16:09 [debug] 21873#21873: *7 write new buf t:1 f:0 0998F010, pos 0998F010, size: 165 file: 0, size: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http write filter: l:0 f:0 s:165 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http cacheable: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream process upstream > 2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe read upstream: 1 > 2017/09/17 09:16:09 
[debug] 21873#21873: *7 pipe preread: 22 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 readv: eof:1, avail:0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 readv: 1, last:4024 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe recv chain: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe buf free s:0 t:1 f:0 09999E18, pos 09999E4A, size: 22 file: 0, size: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe length: -1 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 01 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 03 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 01 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 08 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record length: 8 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi sent end request > 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 09999E18 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe write downstream: 1 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe write downstream done > 2017/09/17 09:16:09 [debug] 21873#21873: *7 event timer: 6, old: 2415705830, new: 2415705831 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream exit: 00000000 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 finalize http upstream request: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 finalize http fastcgi request > 2017/09/17 09:16:09 [debug] 21873#21873: *7 free rr peer 1 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 close http upstream connection: 6 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 0998AA00, unused: 88 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 event timer del: 6: 2415705830 
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 reusable connection: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream temp fd: -1 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http output filter "/wiki/index.php?" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http copy filter: "/wiki/index.php?" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 image filter > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http postpone filter "/wiki/index.php?" BFEBBC94 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http chunk: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 write old buf t:1 f:0 0998F010, pos 0998F010, size: 165 file: 0, size: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 write new buf t:0 f:0 00000000, pos 080F0A8B, size: 5 file: 0, size: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http write filter: l:1 f:0 s:170 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http write filter limit 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 writev: 170 of 170 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http write filter 00000000 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http copy filter: 0 "/wiki/index.php?" > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http finalize request: 0, "/wiki/index.php?" a:1, c:1 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http request count:1 blk:0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http close request > 2017/09/17 09:16:09 [debug] 21873#21873: *7 http log handler > 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 00000000 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 09992DB0, unused: 4 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 0998F000, unused: 3418 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 close http connection: 4 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 reusable connection: 0 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 099AEEE8 > 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 0998ACC0, unused: 24 > > Le 2017-09-17 ? 
08:28, Anoop Alias a ?crit : >> try changing >> ############################## >> location = /wiki { >> root /home/www/isotoperesearch.ca/wiki >> ; >> fastcgi_index index.php; >> index index.php; >> include fastcgi_params; >> fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; >> } >> ############################## >> to >> ################################# >> location /wiki/ { >> # root /home/www/isotoperesearch.ca/wiki >> ; >> fastcgi_index index.php; >> index /wiki/index.php; >> include fastcgi_params; >> fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; >> } >> ######################################3 >> On Sun, Sep 17, 2017 at 5:48 PM, Etienne Robillard >> > wrote: >> >> Hi, I'm trying to configure nginx with php-fpm to run >> mediawiki in a distinct location (/wiki). Here's my config: # >> configuration file /etc/nginx/nginx.conf: user www-data; >> worker_processes 4; pid /run/nginx.pid; events { >> worker_connections 512; multi_accept on; use epoll; } >> http { ## # Basic Settings ## sendfile on; >> tcp_nopush on; tcp_nodelay on; keepalive_timeout >> 80; types_hash_max_size 2048; # server_tokens off; >> # server_names_hash_bucket_size 64; # >> server_name_in_redirect off; include >> /etc/nginx/mime.types; default_type >> application/octet-stream; ## # SSL Settings ## >> ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: >> POODLE ssl_prefer_server_ciphers on; ## # Logging >> Settings ## access_log /var/log/nginx/access.log; >> error_log /var/log/nginx/error.log; ## # Gzip >> Settings ## gzip off; gzip_disable "msie6"; # >> gzip_vary on; # gzip_proxied any; # gzip_comp_level >> 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; >> # gzip_types text/plain text/css application/json >> application/javascript text/xml application/xml >> application/xml+rss text/javascript; ## # Virtual >> Host Configs ## #isotopesoftware.ca >> : #include >> /etc/nginx/conf.d/development.conf; include >> /etc/nginx/conf.d/isotoperesearch.conf; #include >> 
/etc/nginx/sites-enabled/*; } server { # static medias >> web server configuration, for development # and testing >> purposes. listen 80; server_name localhost; >> error_log /var/log/nginx/error_log; #debug root >> /home/www/isotoperesearch.ca ; >> #autoindex on; client_max_body_size 5m; >> client_body_timeout 60; location / { # # host and >> port to fastcgi server #uwsgi_pass django; # >> 8808=gthc.org ; 8801=tm #include >> uwsgi_params; fastcgi_pass 127.0.0.1:8808 >> ; include fastcgi_params; >> } # debug url rewriting to the error log rewrite_log >> on; location /media { autoindex on; gzip >> on; } location /pub { autoindex on; >> gzip on; } location /webalizer { autoindex >> on; gzip on; #auth_basic "Private Property"; >> #auth_basic_user_file /etc/nginx/.htpasswd; allow >> 67.68.76.70; deny all; } location /documentation >> { autoindex on; gzip on; } location >> /moin_static184 { autoindex on; gzip on; } >> location /favicon.ico { empty_gif; } location >> /robots.txt { root /home/www/isotopesoftware.ca >> ; } location /sitemap.xml >> { root /home/www/isotopesoftware.ca >> ; } #location /public_html >> { # root /home/www/; # autoindex on; #} # >> redirect server error pages to the static page /50x.html >> #error_page 404 /404.html; #error_page 403 /403.html; >> #error_page 500 502 503 504 /50x.html; #location = >> /50x.html { # root /var/www/nginx-default; #} >> include conf.d/mediawiki.conf; #include >> conf.d/livestore.conf; } # configuration file >> /etc/nginx/fastcgi_params: fastcgi_param PATH_INFO >> $fastcgi_script_name; fastcgi_param QUERY_STRING >> $query_string; fastcgi_param REQUEST_METHOD >> $request_method; fastcgi_param CONTENT_TYPE >> $content_type; fastcgi_param CONTENT_LENGTH >> $content_length; fastcgi_param SCRIPT_NAME >> $fastcgi_script_name; fastcgi_param REQUEST_URI >> $request_uri; fastcgi_param DOCUMENT_URI >> $document_uri; fastcgi_param DOCUMENT_ROOT >> $document_root; fastcgi_param SERVER_PROTOCOL >> $server_protocol; fastcgi_param 
GATEWAY_INTERFACE CGI/1.1; >> fastcgi_param SERVER_SOFTWARE nginx; fastcgi_param >> REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT >> $remote_port; #fastcgi_param REMOTE_USER >> $remote_user; fastcgi_param SERVER_ADDR $server_addr; >> fastcgi_param SERVER_PORT $server_port; >> fastcgi_param SERVER_NAME $server_name; #XXX >> #fastcgi_param HTTP_IF_NONE_MATCH $http_if_none_match; >> #fastcgi_param HTTP_IF_MODIFIED_SINCE >> $http_if_modified_since; # PHP only, required if PHP was >> built with --enable-force-cgi-redirect # fastcgi_param >> REDIRECT_STATUS 200; fastcgi_send_timeout 90; >> fastcgi_read_timeout 90; fastcgi_connect_timeout 40; >> #fastcgi_cache_valid 200 304 10m; #fastcgi_buffer_size 128k; >> #fastcgi_buffers 8 128k; #fastcgi_busy_buffers_size 256k; >> #fastcgi_temp_file_write_size 256k; # configuration file >> /etc/nginx/conf.d/mediawiki.co nf: >> location = /wiki { root >> /home/www/isotoperesearch.ca/wiki >> ; fastcgi_index >> index.php; index index.php; include >> fastcgi_params; fastcgi_pass >> unix:/var/run/php/php7.0-fpm.sock; } #location @mediawiki { >> # rewrite ^/(.*)$ /index.php; #} The issue is that the >> default "/" location is masking the fastcgi_pass directive in >> the wiki block. Is there any ways to run php-fpm in a >> location block ? Thank you in advance, Etienne >> _______________________________________________ nginx mailing >> list nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> -- >> *Anoop P Alias* >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -- > Etienne Robillard > tkadm30 at yandex.com > http://www.isotopesoftware.ca/ > > -- > *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From nginx-forum at forum.nginx.org  Mon Sep 18 06:43:37 2017
From: nginx-forum at forum.nginx.org (sandyman)
Date: Mon, 18 Sep 2017 02:43:37 -0400
Subject: (111: Connection refused) while connecting to upstream NGINX Uwsgi
Message-ID: <67bb166b7de46c1186de9e4d68efcf47.NginxMailingListEnglish@forum.nginx.org>

After struggling for almost a week on this, I feel I need to reach out for
assistance. Please help.

I am trying to deploy my project to a production environment.

In terms of permissions, I have read all the forums and changed all
permissions, i.e. 777 etc.:

srwxrwxrwx uwsgi.sock (owned by me, full access) !!

I've checked over and over all the directory structures, etc. I've switched
from unix sockets to http, but still no joy.

Exact error:

2017/09/18 06:32:56 [error] 15451#0: *1 connect() to
unix:////home/workspace/project//tmp/uwsgi.sock failed (111: Connection
refused) while connecting to upstream, client: 1933.247.239.160, server:
website.xyz, request: "GET / HTTP/1.0", upstream:
"uwsgi://unix:////home/workspace/project/tmp/uwsgi.sock:", host:
"www.website.xyz"

Nginx configuration:

upstream _django {
    server unix:////home/workspace/project/tmp/uwsgi.sock;
}

server {
    listen 62032;
    server_name website.xyz www.website.xyz;

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /test-this { return 200 "Yes, this is correct\n"; }
    location /foo { return 200 "YIKES what a load of codswollop"; }
    root /home/workspace/project;

    location /static {
        alias /home/workspace/project/testsite/assets;
    }

    location /assets {
        root /home/workspace/project/testsite/assets;
    }

    location / {
        include /home/workspace/project/uwsgi_params;
        # include uwsgi parameters.
        uwsgi_pass _django;
        # tell nginx to communicate with uwsgi through the unix socket
        # /run/uwsgi/sock.
    }

uwsgi ini file:

# project.ini file
[uwsgi]
chdir = /home/workspace/project/testsite
module=testsite.wsgi:application
socket = /home/workspace/project/uwsgi.sock
chmod-socket = 666
daemonize = /home/workspace/project/tmp/uwsgi.log
protocol = http
master = true
vacuum=true
max-requests=5000
processes = 10

start script:

#! /bin/bash
PIDFILE=/home/workspace/project/startselvacura.pid

source /home/workspace/project/venv/bin/activate
uwsgi --ini /home/workspace/project/uwsgi-prod.ini --venv
/home/workspace/project/venv --pidfile $PIDFILE

Running https://www.asandhu.xyz/foo does return the expected result.

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,276427,276427#msg-276427

From anoopalias01 at gmail.com  Mon Sep 18 06:48:06 2017
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Mon, 18 Sep 2017 12:18:06 +0530
Subject: (111: Connection refused) while connecting to upstream NGINX Uwsgi
In-Reply-To: <67bb166b7de46c1186de9e4d68efcf47.NginxMailingListEnglish@forum.nginx.org>
References: <67bb166b7de46c1186de9e4d68efcf47.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

What is the output of

ls -l /home/workspace/project/tmp/uwsgi.sock

and

ps aux|grep nginx

On Mon, Sep 18, 2017 at 12:13 PM, sandyman wrote:
> after struggling for almost a week on this I feel I need to reach out for
> assistance . Please Help
>
> I am trying to deploy my project to a production environment
>
> In terms of permissions I have read all the forums changed all permissions
> i.e. 777 etc etc
> srwxrwxrwx uwsgi.sock (owned by me full access ) !!
> I've checked over and over all the directory structures etc. I've switched
> from unix sockets to http but still
> no joy .
> > exact error > > > 2017/09/18 06:32:56 [error] 15451#0: *1 connect() to > unix:////home/workspace/project// > tmp/uwsgi.sock failed (111: Connection refused) while connecting to > upstream, client: 1933 > .247.239.160, server: website.xyz, request: "GET / HTTP/1.0", upstream: > "uwsgi://unix://// > /home/workspace/project/tmp/uwsgi.sock:", host: "www.website.xyz" > > > Nginx configuration: > > upstream _django { > server unix:////home/workspace/project/tmp/uwsgi.sock; > } > > server { > listen 62032; > server_name website.xyz www.website.xyz ; > > location = /favicon.ico { access_log off; log_not_found off; } > location = /test-this { return 200 "Yes, this is correct\n"; } > location /foo { return 200 "YIKES what a load of codswollop";} > root /home/workspace/project; > > location /static { > alias /home/workspace/project/testsite/assets; > } > > location /assets { > root /home/workspace/project/testsite/assets; > } > > location / { > include /home/workspace/project/uwsgi_params; > #include uwsgi parameters. > uwsgi_pass _django; > #tell nginx to communicate with uwsgi though unix socket > /run/uwsgi/sock. > } > > > > > uwsgi ini file > > # project.ini file > [uwsgi] > chdir = /home/workspace/project/testsite > module=testsite.wsgi:application > socket = /home/workspace/project/uwsgi.sock > chmod-socket = 666 > daemonize = /home/workspace/project/tmp/uwsgi.log > protocol = http > master = true > vacuum=true > max-requests=5000 > processes = 10 > > > start script > > #! /bin/bash > PIDFILE=/home/workspace/project/startselvacura.pid > > source /home/workspace/project/venv/bin/activate > uwsgi --ini /home/workspace/project/uwsgi-prod.ini --venv > /home/workspace/project/venv --pidfile $PIDFILE > ~ > > > > running https://www.asandhu.xyz/foo > > does return the expected result : > > Posted at Nginx Forum: https://forum.nginx.org/read. 
> php?2,276427,276427#msg-276427 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Sep 18 11:07:41 2017 From: nginx-forum at forum.nginx.org (pavelz) Date: Mon, 18 Sep 2017 07:07:41 -0400 Subject: Memory usage doubles on reload In-Reply-To: <5c2a707889b9468eb0db26186fb10882.NginxMailingListEnglish@forum.nginx.org> References: <20140307102705.GU34696@mdounin.ru> <5c2a707889b9468eb0db26186fb10882.NginxMailingListEnglish@forum.nginx.org> Message-ID: <29968e2a3ac9753860e0011200df86a4.NginxMailingListEnglish@forum.nginx.org> Hi PetrHolik, Did you find a solution for forcibly releasing memory after an Nginx reload? We are now experiencing the same problem. After an Nginx reload, the memory consumption of the Nginx processes grows to 10 GB. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,248163,276430#msg-276430 From francis at daoine.org Mon Sep 18 15:18:42 2017 From: francis at daoine.org (Francis Daly) Date: Mon, 18 Sep 2017 16:18:42 +0100 Subject: (111: Connection refused) while connecting to upstream NGINX Uwsgi In-Reply-To: <67bb166b7de46c1186de9e4d68efcf47.NginxMailingListEnglish@forum.nginx.org> References: <67bb166b7de46c1186de9e4d68efcf47.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170918151842.GT20907@daoine.org> On Mon, Sep 18, 2017 at 02:43:37AM -0400, sandyman wrote: Hi there, > I am trying to deploy my project to a production environment It looks like you have been trying lots of different things, and changing some things each time. It will probably be simpler for you if you find one tutorial on how to use nginx to front django, and follow exactly that one. Do not try to mix pieces from two or more until you understand what every config line means; otherwise you will probably become confused.
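For concreteness, a self-consistent minimal pairing — an illustrative sketch, not the poster's actual setup — would use one socket path and the native uwsgi protocol on both sides:

```nginx
# nginx side: point uwsgi_pass at a single, cleanly written socket path
upstream _django {
    server unix:/home/workspace/project/tmp/uwsgi.sock;
}

server {
    listen 62032;

    location / {
        include uwsgi_params;
        uwsgi_pass _django;
    }
}

# matching uwsgi ini (shown here as comments): the same socket path,
# and no "protocol = http", so uwsgi speaks the binary uwsgi protocol
# that uwsgi_pass expects:
#
#   [uwsgi]
#   module = testsite.wsgi:application
#   socket = /home/workspace/project/tmp/uwsgi.sock
#   chmod-socket = 666
```

If both sides agree on the path and the protocol, a remaining 111 usually means the uwsgi process is not actually listening on that socket.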
> In terms of permissions I have read all the forums changed all permissions > i.e. 777 etc etc > srwxrwxrwx uwsgi.sock (owned by me full access ) !! > I've checked over and over all the directory structures etc. Ive swtiched > from unix sockets to http but still > no joy . First: decide whether you want django to present itself on a unix socket, or on a tcp port. Next: decide whether you want django to present itself as a uwsgi service, or as an http service. Then: configure django as the server, and configure nginx as the client talking to that server. > 2017/09/18 06:32:56 [error] 15451#0: *1 connect() to > unix:////home/workspace/project// > tmp/uwsgi.sock failed (111: Connection refused) while connecting to There are lots of / characters there; they probably are not all needed. > upstream, client: 1933 > .247.239.160, server: website.xyz, request: "GET / HTTP/1.0", upstream: > "uwsgi://unix://// > /home/workspace/project/tmp/uwsgi.sock:", host: "www.website.xyz" That says that nginx thinks that django is presented as a uwsgi service, on the unix socket /home/workspace/project/tmp/uwsgi.sock. > [uwsgi] > socket = /home/workspace/project/uwsgi.sock > protocol = http That seems to say that django thinks that django presents itself as an http service, on the unix socket /home/workspace/project/uwsgi.sock. The two socket names are different. The two protocols are different. The client is unlikely to successfully talk to the server, under those conditions. f -- Francis Daly francis at daoine.org From ph.gras at worldonline.fr Mon Sep 18 21:26:37 2017 From: ph.gras at worldonline.fr (Ph. Gras) Date: Mon, 18 Sep 2017 23:26:37 +0200 Subject: I can't access to archives of my mailman lists : Message-ID: <0029F785-AAF9-4FA4-8560-F79EC3903D28@worldonline.fr> Hello there, unfortunately I'm not able to access the archives (pipermail) of my mailman lists.
I'm using Nginx with a fcgiwrap gateway interface : :/etc/nginx/sites-available# curl -I http://poste.enpret.com/pipermail/rs-idf-discuter/index.html HTTP/1.1 403 Forbidden Server: nginx Date: Sun, 17 Sep 2017 17:00:25 GMT Content-Type: text/html Content-Length: 162 Connection: keep-alive :/etc/nginx/sites-available# curl -I http://poste.enpret.com/pipermail/rs-idf-discuter/ HTTP/1.1 403 Forbidden Server: nginx Date: Sun, 17 Sep 2017 17:01:14 GMT Content-Type: text/plain Connection: keep-alive :/etc/nginx/sites-available# curl -I http://poste.enpret.com/ HTTP/1.1 301 Moved Permanently Server: nginx Date: Sun, 17 Sep 2017 17:01:30 GMT Content-Type: text/html Content-Length: 178 Location: http://poste.enpret.com/listinfo Connection: keep-alive X-Robots-Tag: noindex, noarchive :/etc/nginx/sites-available# curl -I http://poste.enpret.com/listinfo HTTP/1.1 200 OK Server: nginx Date: Sun, 17 Sep 2017 17:01:48 GMT Content-Type: text/html; charset=utf-8 Connection: keep-alive Cache-control: no-cache X-Robots-Tag: noindex, noarchive :/etc/nginx/sites-available# vi poste.enpret.com [?] location /pipermail { # alias /var/lib/mailman/archives/private; alias /var/lib/mailman/archives/public; # root /var/lib/mailman/archives/public/; autoindex on; allow all; } [?] :/etc/nginx/sites-available# ls -al /var/lib/mailman/archives/public total 8 drwxrwsr-x 2 list list 4096 sept. 16 18:47 . drwxrwsr-x 4 list list 4096 nov. 11 2016 .. lrwxrwxrwx 1 list list 47 sept. 16 18:47 formation-idf -> /var/lib/mailman/archives/private/formation-idf lrwxrwxrwx 1 list list 41 sept. 15 23:07 mailman -> /var/lib/mailman/archives/private/mailman lrwxrwxrwx 1 list list 49 sept. 16 18:44 rs-idf-discuter -> /var/lib/mailman/archives/private/rs-idf-discuter :/etc/nginx/sites-available# ls -al /var/lib/mailman/archives/private total 32 drwxrws--- 8 list list 4096 sept. 16 20:13 . drwxrwsr-x 4 list list 4096 nov. 11 2016 .. drwxrwsr-x 7 list list 4096 sept. 
5 03:27 formation-idf drwxrwsr-x 2 list list 4096 sept. 16 20:17 formation-idf.mbox drwxrwsr-x 2 list list 4096 sept. 15 23:07 mailman drwxrwsr-x 2 list list 4096 sept. 15 23:07 mailman.mbox drwxrwsr-x 23 list list 4096 sept. 17 18:17 rs-idf-discuter drwxrwsr-x 2 list list 4096 sept. 16 20:21 rs-idf-discuter.mbox :/etc/nginx/sites-available# ls -al /var/lib/mailman/archives/private/mailman total 12 drwxrwsr-x 2 list list 4096 sept. 15 23:07 . drwxrws--- 8 list list 4096 sept. 16 20:13 .. -rw-rw-r-- 1 list list 504 sept. 15 23:07 index.html # tail -2 /var/log/nginx/poste.enpret.com.error.log 2017/09/18 23:09:38 [error] 32621#0: *68276 "/var/lib/mailman/archives/public/rs-idf-discuter/index.html" is forbidden (13: Permission denied), client: 83.204.192.114, server: poste.enpret.com, request: "GET /pipermail/rs-idf-discuter/ HTTP/1.1", host: "poste.enpret.com", referrer: "http://poste.enpret.com/admin/rs-idf-discuter/members" 2017/09/18 23:10:44 [error] 32621#0: *68276 "/var/lib/mailman/archives/public/rs-idf-discuter/index.html" is forbidden (13: Permission denied), client: 83.204.192.114, server: poste.enpret.com, request: "GET /pipermail/rs-idf-discuter/ HTTP/1.1", host: "poste.enpret.com", referrer: "http://poste.enpret.com/admin/rs-idf-discuter/gateway" Does someone have an idea of this trouble ? Regards, Ph. Gras From francis at daoine.org Mon Sep 18 22:20:17 2017 From: francis at daoine.org (Francis Daly) Date: Mon, 18 Sep 2017 23:20:17 +0100 Subject: I can't access to archives of my mailman lists : In-Reply-To: <0029F785-AAF9-4FA4-8560-F79EC3903D28@worldonline.fr> References: <0029F785-AAF9-4FA4-8560-F79EC3903D28@worldonline.fr> Message-ID: <20170918222017.GU20907@daoine.org> On Mon, Sep 18, 2017 at 11:26:37PM +0200, Ph. 
Gras wrote: Hi there, > :/etc/nginx/sites-available# curl -I http://poste.enpret.com/pipermail/rs-idf-discuter/index.html > HTTP/1.1 403 Forbidden > location /pipermail { > alias /var/lib/mailman/archives/public; What do each of the following show? (If "Z" is not liked by your "ls", repeat the commands without it.) Does the user that your nginx runs as have at least "x" access to every directory, and "r" to the final file? ls -lLdZ / ls -lLdZ /var ls -lLdZ /var/lib ls -lLdZ /var/lib/mailman ls -lLdZ /var/lib/mailman/archives ls -lLdZ /var/lib/mailman/archives/public ls -lLdZ /var/lib/mailman/archives/public/rs-idf-discuter ls -lLZ /var/lib/mailman/archives/public/rs-idf-discuter/index.html > Does someone have an idea of this trouble ? >From what you have shown, if your nginx user is not "list" or in the group "list", it will possibly not have access to /var/lib/mailman/archives/private. f -- Francis Daly francis at daoine.org From nick.urbanik at optusnet.com.au Tue Sep 19 00:46:17 2017 From: nick.urbanik at optusnet.com.au (Nick Urbanik) Date: Tue, 19 Sep 2017 10:46:17 +1000 Subject: ngx_slab_alloc() failed: no memory in cache keys zone Message-ID: <20170919004617.GB28872@nick.optusnet.com.au> Dear Folks, We have this message repeatedly, despite increasing keys_zone size by a factor of three to: proxy_cache_path /srv/mycache levels=1:2 keys_zone=myzone:150m inactive=15d; This caused the errors to stop for three or four hours and then back again. The cache itself is quite big: sudo du -s /srv/mycache 1826303379 /srv/mycache and we have many messages like this in our logs: unlink() "/srv/mycache/c/fc/4e8755ecadf5a82ec7208c16d8ddbfcc" failed (2: No such file or directory) Any suggestions most welcome. -- Nick Urbanik http://nicku.org 808-71011 nick.urbanik at optusnet.com.au GPG: 7FFA CDC7 5A77 0558 DC7A 790A 16DF EC5B BB9D 2C24 ID: BB9D2C24 I disclaim, therefore I am. 
From nick.urbanik at optusnet.com.au Tue Sep 19 01:16:59 2017 From: nick.urbanik at optusnet.com.au (Nick Urbanik) Date: Tue, 19 Sep 2017 11:16:59 +1000 Subject: ngx_slab_alloc() failed: no memory in cache keys zone In-Reply-To: <20170919004617.GB28872@nick.optusnet.com.au> References: <20170919004617.GB28872@nick.optusnet.com.au> Message-ID: <20170919011659.GC28872@nick.optusnet.com.au> Dear Folks, On 19/09/17 10:46 +1000, Nick Urbanik wrote: >We have this message repeatedly, despite increasing keys_zone size by >a factor of three to: > >proxy_cache_path /srv/mycache levels=1:2 keys_zone=myzone:150m inactive=15d; > >This caused the errors to stop for three or four hours and then back >again. > >The cache itself is quite big: >sudo du -s /srv/mycache >1826303379 /srv/mycache > >and we have many messages like this in our logs: >unlink() "/srv/mycache/c/fc/4e8755ecadf5a82ec7208c16d8ddbfcc" failed (2: No such file or directory) > >Any suggestions most welcome. I omitted important details: $ rpm -q nginx nginx-1.6.2-1.el6.ngx.x86_64 nginx.conf: $ sanitise-nginx /etc/nginx/nginx.conf user nginx; worker_processes 8; worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 4096; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $request_time $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for" $sent_http_content_type ' 'CACHE KEY(s/ph/u) => $scheme $proxy_host $uri -> REQUEST_URI: $request_uri ' 'UPSTREAM => $upstream_cache_status $upstream_status $upstream_addr'; access_log /var/log/nginx/access.log main; sendfile on; keepalive_timeout 65; proxy_buffering on; # default proxy_cache_path /srv/mycache levels=1:2 keys_zone=mycache:150m inactive=15d; proxy_temp_path /srv/mycache_tmp/my.server; proxy_cache_key 
$scheme$proxy_host$uri; proxy_max_temp_file_size 4096m; proxy_connect_timeout 5s; # how long before upstream dead proxy_next_upstream error timeout invalid_header; # default proxy_buffers 10 1M; upstream mycache_origin { server my.origin.com:80 max_fails=1 fail_timeout=180; } include /etc/nginx/conf.d/acl.conf; include /etc/nginx/conf.d/default.conf; } I reduced the time proxy_cache_valid 200 90d by half from 180d: $ sanitise-nginx /etc/nginx/conf.d/default.conf server { listen 80 default_server; server_name my.server.name.com; location /server-status { access_log off; stub_status on; } set $range $http_range; if ($range = "bytes=0-") { set $range ""; } location / { proxy_cache mycache; # cache keys db proxy_cache_valid 200 90d; # http 200 proxy_cache_valid 404 1m; # http 404 proxy_cache_valid any 10m; # any other proxy_ignore_headers "Cache-Control"; proxy_ignore_headers "Expires"; proxy_set_header Host my.origin.com; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Upstream $upstream_cache_status; proxy_set_header Range $range; proxy_no_cache 0; # do cache proxy_cache_bypass 0; # take from cache first proxy_pass http://mycache_origin; } error_page 404 /404.html; location = /404.html { access_log off; root /usr/share/nginx/html; } error_page 500 502 503 504 /50x.html; location = /50x.html { access_log off; root /usr/share/nginx/html; } } If any other details would help, please let me know. -- Nick Urbanik http://nicku.org 808-71011 nick.urbanik at optusnet.com.au GPG: 7FFA CDC7 5A77 0558 DC7A 790A 16DF EC5B BB9D 2C24 ID: BB9D2C24 I disclaim, therefore I am. 
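As a rough sanity check on the numbers above — a sketch, where the ~8000-keys-per-megabyte figure is the estimate given in the proxy_cache_path documentation and the 500 KB average object size is purely an assumed illustration:

```python
# Back-of-envelope sizing for proxy_cache_path keys_zone, using the
# nginx documentation's estimate of ~8000 keys per 1 MB of zone memory.
KEYS_PER_MB = 8000

def max_entries(zone_mb: int) -> int:
    """Approximate number of cached items a keys_zone can track."""
    return zone_mb * KEYS_PER_MB

def zone_mb_needed(cache_bytes: float, avg_object_bytes: float) -> float:
    """Approximate zone size (MB) needed to track every cached object."""
    return (cache_bytes / avg_object_bytes) / KEYS_PER_MB

# keys_zone=myzone:150m can track roughly:
print(max_entries(150))  # 1200000 entries

# du -s reported 1826303379 (KB); with an assumed 500 KB average object:
print(round(zone_mb_needed(1826303379 * 1024, 500 * 1024)))  # ~457 MB
```

On those (assumed) numbers the 150 MB zone runs out of slab memory long before the disk cache fills, which matches the ngx_slab_alloc() errors; a larger keys_zone or a shorter inactive= would be the first things to try.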
From aldernetwork at gmail.com Tue Sep 19 01:49:30 2017 From: aldernetwork at gmail.com (Alder Netw) Date: Mon, 18 Sep 2017 18:49:30 -0700 Subject: nginxize an existing process Message-ID: Wondering if anyone has experimented with extending an existing process by spawning a thread running a thin layer of nginx code, so the process can serve http and support a rest interface? Thanks, - Alder -------------- next part -------------- An HTML attachment was scrubbed... URL: From deepakpant at hcl.com Tue Sep 19 06:41:26 2017 From: deepakpant at hcl.com (Deepak Pant.) Date: Tue, 19 Sep 2017 06:41:26 +0000 Subject: error logs Message-ID: Hi All, I am getting the errors below; can anyone let me know how to resolve them? 2017/09/02 23:26:05 [crit] 28974#0: *2333672 SSL_do_handshake() failed (SSL: error:1408A0D7:SSL routines:SSL3_GET_CLIENT_HELLO:required cipher missing) while SSL handshaking, client: 10.210.128.122, server: 0.0.0.0:443 2017/09/02 23:26:06 [crit] 28974#0: *2333853 SSL_do_handshake() failed (SSL: error:1408A0D7:SSL routines:SSL3_GET_CLIENT_HELLO:required cipher missing) while SSL handshaking, client: 10.210.128.122, server: 0.0.0.0:443 2017/09/02 23:29:38 [crit] 28973#0: *2338320 SSL_do_handshake() failed (SSL: error:14094085:SSL routines:SSL3_READ_BYTES:ccs received early) while SSL handshaking, client: 10.210.128.122, server: 0.0.0.0:443 ::DISCLAIMER:: ---------------------------------------------------------------------------------------------------------------------------------------------------- The contents of this e-mail and any attachment(s) are confidential and intended for the named recipient(s) only. E-mail transmission is not guaranteed to be secure or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or may contain viruses in transmission. The e-mail and its contents (with or without referred errors) shall therefore not attach any liability on the originator or HCL or its affiliates.
Views or opinions, if any, presented in this email are solely those of the author and may not necessarily reflect the views or opinions of HCL or its affiliates. Any form of reproduction, dissemination, copying, disclosure, modification, distribution and / or publication of this message without the prior written consent of authorized representative of HCL is strictly prohibited. If you have received this email in error please delete it and notify the sender immediately. Before opening any email and/or attachments, please check them for viruses and other defects. ---------------------------------------------------------------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ph.gras at worldonline.fr Tue Sep 19 11:42:21 2017 From: ph.gras at worldonline.fr (Ph. Gras) Date: Tue, 19 Sep 2017 13:42:21 +0200 Subject: I can't access to archives of my mailman lists : In-Reply-To: <20170918222017.GU20907@daoine.org> References: <0029F785-AAF9-4FA4-8560-F79EC3903D28@worldonline.fr> <20170918222017.GU20907@daoine.org> Message-ID: <1C077AD0-AA5B-4499-8D99-7EFB7C7F1D5F@worldonline.fr> Thanks Francis, > From what you have shown, if your nginx user is not "list" > or in the group "list", it will possibly not have access to > /var/lib/mailman/archives/private. # ls -lLZ /var/lib/mailman/archives/public/rs-idf-discuter/index.html -rw-rw-r-- 1 list list ? 8736 sept. 17 16:44 /var/lib/mailman/archives/public/rs-idf-discuter/index.html # ls -lLdZ /var/lib/mailman/archives/public/rs-idf-discuter drwxrwsr-x 23 list list ? 4096 sept. 17 18:17 /var/lib/mailman/archives/public/rs-idf-discuter # ls -lLdZ /var/lib/mailman/archives/public drwxrwsr-x 2 list list ? 4096 sept. 16 18:47 /var/lib/mailman/archives/public # ls -lLdZ /var/lib/mailman/archives drwxrwsr-x 4 list list ? 4096 nov. 11 2016 /var/lib/mailman/archives # ls -lLdZ /var/lib/mailman drwxrwsr-x 8 list list ? 
4096 nov. 11 2016 /var/lib/mailman # ls -lLdZ /var/lib drwxr-xr-x 47 root root ? 4096 sept. 9 23:36 /var/lib # ls -lLdZ /var drwxr-xr-x 14 root root ? 4096 sept. 9 23:37 /var # ls -lLdZ / drwxr-xr-x 23 root root ? 4096 sept. 11 12:59 / # adduser www-data list Ajout de l'utilisateur « www-data » au groupe « list »... Ajout de l'utilisateur www-data au groupe list Fait. # vi /etc/group list:x:38:www-data # curl -I http://poste.enpret.com/pipermail/rs-idf-discuter/ HTTP/1.1 200 OK Server: nginx Date: Tue, 19 Sep 2017 11:35:04 GMT Content-Type: text/html Content-Length: 8736 Last-Modified: Sun, 17 Sep 2017 14:44:03 GMT Connection: keep-alive ETag: "59be8a33-2220" X-Robots-Tag: noindex, noarchive Accept-Ranges: bytes It works ;-) Ph. Gras From pluknet at nginx.com Tue Sep 19 11:47:06 2017 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 19 Sep 2017 14:47:06 +0300 Subject: error logs In-Reply-To: References: Message-ID: <82E1DEF6-89ED-41A9-A545-C1DF7F1FFB41@nginx.com> > On 19 Sep 2017, at 09:41, Deepak Pant. wrote: > > Hi All, > > I am getting below error can anyone let me know how to resolve it > > > 2017/09/02 23:26:05 [crit] 28974#0: *2333672 SSL_do_handshake() failed (SSL: error:1408A0D7:SSL routines:SSL3_GET_CLIENT_HELLO:required cipher missing) while SSL handshaking, client: 10.210.128.122, server: 0.0.0.0:443 > 2017/09/02 23:26:06 [crit] 28974#0: *2333853 SSL_do_handshake() failed (SSL: error:1408A0D7:SSL routines:SSL3_GET_CLIENT_HELLO:required cipher missing) while SSL handshaking, client: 10.210.128.122, server: 0.0.0.0:443 That means that a client is resuming a session with ciphers which don't include the cipher from the original session.
> 2017/09/02 23:29:38 [crit] 28973#0: *2338320 SSL_do_handshake() failed (SSL: error:14094085:SSL routines:SSL3_READ_BYTES:ccs received early) while SSL handshaking, client: 10.210.128.122, server: 0.0.0.0:443 > -- Sergey Kandaurov From francis at daoine.org Tue Sep 19 11:57:19 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 19 Sep 2017 12:57:19 +0100 Subject: I can't access to archives of my mailman lists : In-Reply-To: <1C077AD0-AA5B-4499-8D99-7EFB7C7F1D5F@worldonline.fr> References: <0029F785-AAF9-4FA4-8560-F79EC3903D28@worldonline.fr> <20170918222017.GU20907@daoine.org> <1C077AD0-AA5B-4499-8D99-7EFB7C7F1D5F@worldonline.fr> Message-ID: <20170919115719.GV20907@daoine.org> On Tue, Sep 19, 2017 at 01:42:21PM +0200, Ph. Gras wrote: Hi there, > > From what you have shown, if your nginx user is not "list" > > or in the group "list", it will possibly not have access to > > /var/lib/mailman/archives/private. Good that it works. As it happens, the "ls -lLdZ" commands all make it look like it should have worked -- but because one directory was a symlink to a readable directory inside a non-readable directory, it failed. If you had run the "ls" commands as the user www-data, it may have shown more quickly where the issue was. > # adduser www-data list > Ajout de l'utilisateur ? www-data ? au groupe ? list ?... > Ajout de l'utilisateur www-data au groupe list > It works ;-) And that's the good part :-) Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Sep 19 18:17:52 2017 From: nginx-forum at forum.nginx.org (mblancett) Date: Tue, 19 Sep 2017 14:17:52 -0400 Subject: Options for selective logging Message-ID: <80c6f81276ad1e253389a6a2c07de399.NginxMailingListEnglish@forum.nginx.org> I am looking for ways to target every Nth request into a very busy proxy within an nginx configuration. 
This particular proxy is extremely busy and receives POSTs to a single URI, and taking an approach like sharding by IP would not be the kind of traffic sample we?re after. The long term goal here is to replay some small amount (like 0.05%) of requests into a separate test environment. Currently I?m logging the entire request to ramdisk and using an every minute logrotation script in python to get the small proportion of requests I need, then using python ?requests? to replay them against the separate environment. This works, but the proxy underperforms its neighbors in the dns pool noticeably, and the RAM requirement is just too high for this to be sustainable long-term. I?d much prefer to find some way to have nginx only log the data that is necessary. I?ve seen that there is an http_mirror command that came out very recently which is nearly perfect for my needs, but that leaves the problem of only mirroring a percentage of the traffic. Thanks for your suggestions. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276452,276452#msg-276452 From peter_booth at me.com Tue Sep 19 21:23:00 2017 From: peter_booth at me.com (Peter Booth) Date: Tue, 19 Sep 2017 17:23:00 -0400 Subject: Options for selective logging In-Reply-To: <80c6f81276ad1e253389a6a2c07de399.NginxMailingListEnglish@forum.nginx.org> References: <80c6f81276ad1e253389a6a2c07de399.NginxMailingListEnglish@forum.nginx.org> Message-ID: What is your ultimate goal? You say that you want to replay 0.05% of traffic into a test environment. Are you wanting to capture real world data on a one off or ongoing basis? You say that this particular proxy is very busy. How busy? Is it hosted on a physical host or a virtual machine? If physical, do you own the physical environment? If you do, then you can capture the (entire) content with a network tap or by adding a spanning port to your switch, without affecting your proxy. 
Assuming you don't have control then I see from the docs that ngx_http_log_module has an if parameter that skips logging when the condition evaluates to zero or an empty string. This implies that if you define a variable that equals the request_id or time in milliseconds mod X then you can sample by time or by request number. Sent from my iPhone > On Sep 19, 2017, at 2:17 PM, mblancett wrote: > > I am looking for ways to target every Nth request into a very busy proxy > within an nginx configuration. This particular proxy is extremely busy and > receives POSTs to a single URI, and taking an approach like sharding by IP > would not be the kind of traffic sample we're after. > > The long term goal here is to replay some small amount (like 0.05%) of > requests into a separate test environment. Currently I'm logging the entire > request to ramdisk and using an every minute logrotation script in python to > get the small proportion of requests I need, then using python "requests" to > replay them against the separate environment. This works, but the proxy > underperforms its neighbors in the dns pool noticeably, and the RAM > requirement is just too high for this to be sustainable long-term. > > I'd much prefer to find some way to have nginx only log the data that is > necessary. I've seen that there is an http_mirror command that came out very > recently which is nearly perfect for my needs, but that leaves the problem > of only mirroring a percentage of the traffic. > > Thanks for your suggestions. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276452,276452#msg-276452 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From iippolitov at nginx.com Wed Sep 20 10:13:25 2017 From: iippolitov at nginx.com (Igor A.
Ippolitov) Date: Wed, 20 Sep 2017 13:13:25 +0300 Subject: Options for selective logging In-Reply-To: <80c6f81276ad1e253389a6a2c07de399.NginxMailingListEnglish@forum.nginx.org> References: <80c6f81276ad1e253389a6a2c07de399.NginxMailingListEnglish@forum.nginx.org> Message-ID: <81568370-2893-7d85-fa58-903d2a735d78@nginx.com> Let me reply with a link: http://nginx.org/en/docs/http/ngx_http_split_clients_module.html You can either use split_clients to change upstream or to trigger logging with the 'if' option of 'access_log'. On 19.09.2017 21:17, mblancett wrote: > I am looking for ways to target every Nth request into a very busy proxy > within an nginx configuration. This particular proxy is extremely busy and > receives POSTs to a single URI, and taking an approach like sharding by IP > would not be the kind of traffic sample we're after. > > The long term goal here is to replay some small amount (like 0.05%) of > requests into a separate test environment. Currently I'm logging the entire > request to ramdisk and using an every minute logrotation script in python to > get the small proportion of requests I need, then using python "requests" to > replay them against the separate environment. This works, but the proxy > underperforms its neighbors in the dns pool noticeably, and the RAM > requirement is just too high for this to be sustainable long-term. > > I'd much prefer to find some way to have nginx only log the data that is > necessary. I've seen that there is an http_mirror command that came out very > recently which is nearly perfect for my needs, but that leaves the problem > of only mirroring a percentage of the traffic. > > Thanks for your suggestions.
> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276452,276452#msg-276452 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From luky-37 at hotmail.com Wed Sep 20 11:49:30 2017 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 20 Sep 2017 11:49:30 +0000 Subject: AW: Memory usage doubles on reload In-Reply-To: <29968e2a3ac9753860e0011200df86a4.NginxMailingListEnglish@forum.nginx.org> References: <20140307102705.GU34696@mdounin.ru> <5c2a707889b9468eb0db26186fb10882.NginxMailingListEnglish@forum.nginx.org>, <29968e2a3ac9753860e0011200df86a4.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, starting with nginx 1.11.11 you can use worker_shutdown_timeout to limit the amount of time workers stall the shutdown. However, you will still see increased memory usage: a soft reload always costs extra memory while the old and new workers coexist. If you cannot accept that, then you have to stop soft reloading and do a hard restart instead (send TERM or INT to the master process): http://nginx.org/en/docs/control.html From nick.urbanik at optusnet.com.au Wed Sep 20 23:34:21 2017 From: nick.urbanik at optusnet.com.au (Nick Urbanik) Date: Thu, 21 Sep 2017 09:34:21 +1000 Subject: ngx_slab_alloc() failed: no memory in cache keys zone In-Reply-To: <20170919011659.GC28872@nick.optusnet.com.au> References: <20170919004617.GB28872@nick.optusnet.com.au> <20170919011659.GC28872@nick.optusnet.com.au> Message-ID: <20170920233421.GB24332@nick.optusnet.com.au> Dear Folks, On 19/09/17 11:16 +1000, Nick Urbanik wrote: >On 19/09/17 10:46 +1000, Nick Urbanik wrote: >>We have this message repeatedly, despite increasing keys_zone size by >>a factor of three to: >> >>proxy_cache_path /srv/mycache levels=1:2 keys_zone=myzone:150m inactive=15d;
-- Nick Urbanik http://nicku.org 808-71011 nick.urbanik at optusnet.com.au GPG: 7FFA CDC7 5A77 0558 DC7A 790A 16DF EC5B BB9D 2C24 ID: BB9D2C24 I disclaim, therefore I am. From nate at usamm.com Wed Sep 20 23:47:50 2017 From: nate at usamm.com (Nathan Zabaldo) Date: Wed, 20 Sep 2017 16:47:50 -0700 Subject: Migrate Apache directives to Nginx Message-ID: Question on SF: https://serverfault.com/questions/874730/convert-apache-mod-proxy-p-to-nginx-equivalent My PHP MVC platform CodeIgniter performs routing based on the REQUEST_URI. In Apache, you cannot change the REQUEST_URI environment variable. So, in Apache, I made use of the [P]proxy flag. Sending the rewritten URL using mod_proxy took care of that and it just works. The browser does not redirect and the information is returned. How do I do the same in Nginx? Or is there a way to change the REQUEST_URI without proxy? The *APACHE DIRECTIVES* take a URL like: https://www.example.com/_img/h_32/w_36/test.jpg and converts it to https://www.example.com/img/i/_img/h_32/w_36/test.jpg. Then once rewritten, it sends the request using mod_proxy and RewriteCond makes sure to not call RewriteRule again if the URL has /img/i in it. Thus avoiding the loop of death. *APACHE DIRECTIVES* RewriteCond %{REQUEST_URI} (h|w|fm|trim|fit|pad|border|or|bg)_ RewriteCond %{REQUEST_URI} !^(.*)/img/i/ RewriteRule "(.*)\.(jpg|png)$" /index.php/img/i/$1.$2 [P] *NGINX DIRECTIVES SO FAR*: You can see in *BLOCK 1* the regex to match and a possible rewrite. Where do I go next? root /var/www/vhosts/example.com/htdocs; index index.php index.html; # *BLOCK 1* location ~* \/(h|w|fm|trim|fit|pad|border|or|bg)_.*\.(jpg|png)$ { #rewrite "(.*)\.(jpg|png)$" index.php/img/i$1.$2; #doesn't change the REQUEST_URI #do I do proxy pass here? #How do I avoid the regex matching again? 
} location / { try_files $uri $uri/ /index.php$uri?$args; } location ~ ^(.+\.php)(.*)$ { fastcgi_pass 127.0.0.1:9000; fastcgi_param CI_ENV production; #CI environment constant fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Wed Sep 20 23:50:11 2017 From: peter_booth at me.com (Peter Booth) Date: Wed, 20 Sep 2017 19:50:11 -0400 Subject: ngx_slab_alloc() failed: no memory in cache keys zone In-Reply-To: <20170920233421.GB24332@nick.optusnet.com.au> References: <20170919004617.GB28872@nick.optusnet.com.au> <20170919011659.GC28872@nick.optusnet.com.au> <20170920233421.GB24332@nick.optusnet.com.au> Message-ID: Lots of questions: What are the upstream requests? Are you logging hits and misses for the cache - what's the hit ratio? What size are the objects that you are serving? How many files are there in your cache? What OS and what hardware are you using? If it's Linux can you show the results of the following: cat /proc/cpuinfo | tail -30 cat /proc/meminfo Sent from my iPhone > On Sep 20, 2017, at 7:34 PM, Nick Urbanik wrote: > > Dear Folks, > >> On 19/09/17 11:16 +1000, Nick Urbanik wrote: >>> On 19/09/17 10:46 +1000, Nick Urbanik wrote: >>> We have this message repeatedly, despite increasing keys_zone size by >>> a factor of three to: >>> >>> proxy_cache_path /srv/mycache levels=1:2 keys_zone=myzone:150m inactive=15d; > > Is it possible that we are way underspecifying the shared memory in keys_zone? > -- > Nick Urbanik http://nicku.org 808-71011 nick.urbanik at optusnet.com.au > GPG: 7FFA CDC7 5A77 0558 DC7A 790A 16DF EC5B BB9D 2C24 ID: BB9D2C24 > I disclaim, therefore I am. 
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nate at usamm.com Thu Sep 21 16:18:45 2017
From: nate at usamm.com (Nathan Zabaldo)
Date: Thu, 21 Sep 2017 09:18:45 -0700
Subject: 404 on try_files
Message-ID: 

I'm going nuts on this. Any help would be much appreciated.

The $request_uri /h_32/w_36/test.jpg needs to be routed to
/index.php/img/i/h_32/w_36/test.jpg

index.php will route the request to the "img" controller and "i" method,
then process the image and return it. However, my MVC works off of the
REQUEST_URI, so simply rewriting the URL will not work; the REQUEST_URI
needs to be modified.

You can see in the last location block that I'm passing in the modified
REQUEST_URI, but Nginx is trying to open
/var/www/vhosts/ezrshop.com/htdocs/h_32/w_36/test.jpg (see **Error Logs**
below) and throwing a 404. Shouldn't Nginx be sending it to index.php for
processing? Why the 404?

In a browser, if I go directly to
https://www.example.com/img/i/h_32/w_36/test.jpg the page comes up just
fine. If I try to go to https://www.example.com/h_32/w_36/test.jpg in my
browser, I get a 404 and the **Error Logs** you can see below.
root /var/www/vhosts/example.com/htdocs;
index index.php index.html;

set $request_url $request_uri;

location ~ (h|w|fm|trim|fit|pad|border|or|bg)_.*\.(jpg|png)$ {
    if ($request_uri !~ "/img/i/") {
        set $request_url /index.php/img/i$1.$2;
    }
    try_files $uri $uri/ /index.php/img/i$1.$2;
}

location / {
    try_files $uri $uri/ /index.php$uri?$args;
}

location ~ ^(.+\.php)(.*)$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_param CI_ENV production; #CI environment constant
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
    fastcgi_param REQUEST_URI $request_url;
}

**Error Logs:**

**Log 1:** "/img/i/" does not match "/h_32/w_36/test.jpg", request: "GET /h_32/w_36/test.jpg HTTP/1.1"

**Log 2:** open() "/var/www/vhosts/ezrshop.com/htdocs/h_32/w_36/test.jpg" failed (2: No such file or directory), request: "GET /h_32/w_36/test.jpg HTTP/1.1"

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From peter.wright at icmcapital.co.uk Fri Sep 22 14:04:54 2017
From: peter.wright at icmcapital.co.uk (peter.wright at icmcapital.co.uk)
Date: Fri, 22 Sep 2017 15:04:54 +0100
Subject: Private Key issue
Message-ID: <001c01d333ab$c4a2bca0$4de835e0$@icmcapital.co.uk>

nginx: [emerg] SSL_CTX_use_PrivateKey_file("/etc/ssl/private/access.uat.icmcapital.co.uk.key") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line:Expecting: ANY PRIVATE KEY error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib)
nginx: configuration file /etc/nginx/nginx.conf test failed

root at uat-nginx01:~# nginx: [emerg] SSL_CTX_use_PrivateKey_file("/etc/ssl/private/access.uat.icmcapital.co.uk.key") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line:Expecting: ANY PRIVATE KEY error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib)
-bash: syntax error near unexpected token `('
root at uat-nginx01:~# nginx: configuration file /etc/nginx/nginx.conf test failed

-------------- next part --------------
An HTML attachment was 
scrubbed...
URL: 

From amirkekh at gmail.com Sat Sep 23 08:58:05 2017
From: amirkekh at gmail.com (Amir Keshavarz)
Date: Sat, 23 Sep 2017 13:28:05 +0430
Subject: Scaling nginx caching storage
Message-ID: 

Hello,
Since nginx stores some cache metadata in memory, is there any way to share
a cache directory between two nginx instances?

If it can't be done, what do you think is the best way to go when we need
to scale the nginx caching storage?

Thanks

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lucas at lucasrolff.com Sat Sep 23 09:18:53 2017
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Sat, 23 Sep 2017 09:18:53 +0000
Subject: Scaling nginx caching storage
In-Reply-To: References: Message-ID: 

> is there any way to share a cache directory between two nginx instances ?

> If it can't be done what do you think is the best way to go when we need to scale the nginx caching storage ?

One is about using the same storage for two nginx instances; the other is
about scaling the nginx cache storage. I believe they're two different
things.

There's nothing that prevents you from having two nginx instances reading
from the same cache storage - however, you will get into scenarios where
both machines try to write (let's say they try to cache the same file on
both nginx instances), and you might have some issues.

Why exactly would you need two instances to share the same storage?
And what scale do you mean by scaling the nginx caching storage?

Currently there's really only a limit to your disk size and the size of
your keys_zone - if you have 50 terabytes of storage, just set the
keys_zone size to be big enough to contain the number of files you want to
manage (you can store about 8,000 files per 1 megabyte).
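To make that rule of thumb concrete, here is a hypothetical proxy_cache_path declaration sized for roughly ten million cached objects. The path, zone name, and sizes below are illustrative assumptions, not recommendations:

```nginx
# Rough sizing sketch (assumed numbers): to track ~10,000,000 cached
# objects at ~8,000 keys per megabyte of keys_zone, the zone needs
# about 10,000,000 / 8,000 = 1,250 MB of shared memory.
proxy_cache_path /var/cache/nginx levels=1:2
                 keys_zone=bigcache:1250m
                 max_size=45000g
                 inactive=15d;
```

If the keys_zone is too small for the number of objects on disk, the cache manager evicts least-recently-used entries to make room, which shows up as unexpected misses.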
From: nginx on behalf of Amir Keshavarz
Reply-To: "nginx at nginx.org"
Date: Saturday, 23 September 2017 at 10.58
To: "nginx at nginx.org"
Subject: Scaling nginx caching storage

Hello,
Since nginx stores some cache metadata in memory, is there any way to share
a cache directory between two nginx instances?

If it can't be done, what do you think is the best way to go when we need
to scale the nginx caching storage?

Thanks

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From amirkekh at gmail.com Sat Sep 23 09:48:14 2017
From: amirkekh at gmail.com (Amir Keshavarz)
Date: Sat, 23 Sep 2017 14:18:14 +0430
Subject: Scaling nginx caching storage
In-Reply-To: References: Message-ID: 

Sorry for the confusion.

My problem is that I need to cache items as much as possible, so even if
one node had the storage capacity to satisfy my needs, it couldn't handle
all the requests, and we can't afford multiple nginx nodes each requesting
an item from our main server whenever it is requested on a different nginx
node.

For that problem I have a few scenarios, but they either have huge overhead
on our servers and our network or are not suitable for a sensitive
production environment because they cause weird problems (sharing storage).

But at this point I'm beginning to wonder if it's even worth it. Should I
settle for having multiple nginx nodes requesting the same item from our
upstream server?

On Sat, Sep 23, 2017 at 1:48 PM, Lucas Rolff wrote:
> > is there any way to share a cache directory between two nginx instances ?
> > If it can't be done what do you think is the best way to go when we need
> to scale the nginx caching storage ?
>
> One is about using same storage for two nginx instances, the other one is
> scaling the nginx cache storage.
> I believe it's two different things.
>
> There's nothing that prevents you from having two nginx instances reading
> from the same cache storage -
however you will get into scenarios where if > you try to write from both machines (Let?s say it tries to cache the same > file on both nginx instances), you might have some issues. > > > > Why exactly would you need two instances to share the same storage? > > And what scale do you mean by scaling the nginx caching storage? > > > > Currently there?s really only a limit to your disk size and the size of > your keys_zone ? if you have 50 terabytes of storage, just set the > keys_zone size to be big enough to contain the amount of files you wanna > manage (you can store about 8000 files per 1 megabyte). > > > > > > > > *From: *nginx on behalf of Amir Keshavarz < > amirkekh at gmail.com> > *Reply-To: *"nginx at nginx.org" > *Date: *Saturday, 23 September 2017 at 10.58 > *To: *"nginx at nginx.org" > *Subject: *Scaling nginx caching storage > > > > Hello, > > Since nginx stores some cache metadata in memory , is there any way to > share a cache directory between two nginx instances ? > > > > If it can't be done what do you think is the best way to go when we need > to scale the nginx caching storage ? > > > > Thanks > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucas at lucasrolff.com Sat Sep 23 10:07:37 2017 From: lucas at lucasrolff.com (Lucas Rolff) Date: Sat, 23 Sep 2017 10:07:37 +0000 Subject: Scaling nginx caching storage In-Reply-To: References: Message-ID: > if one node had the storage capacity to satisfy my needs it couldn't handle all the requests What amount of requests / traffic are we talking about, and which kind of hardware do you use? You can make nginx serve 20+ gigabit of traffic from a single machine if the content is right, or 50k+ req/s > But at this point i'm beginning to think if it's even worth it . 
Should I settle for having multiple nginx nodes requesting the same item
from our upstream server?

If you're offloading 99.xx% of the content to nginx anyway, a few extra
requests to the upstream shouldn't really matter much.

You could even have multiple layers of nginx to lower the number of
upstream connections going to the server - so with, let's say, 10 nginx
instances, you could use 1-2 nginx instances as the upstream, and on those
1-2 nginx instances use the actual upstream.

Generally speaking, you'll have downsides with sharing storage or cache
between multiple servers; it just adds a lot of complexity to minimize the
cost, and then it might turn out you actually do not save anything anyway.

Best Regards,
Lucas

From: nginx on behalf of Amir Keshavarz
Reply-To: "nginx at nginx.org"
Date: Saturday, 23 September 2017 at 11.48
To: "nginx at nginx.org"
Subject: Re: Scaling nginx caching storage

Sorry for the confusion.

My problem is that I need to cache items as much as possible, so even if
one node had the storage capacity to satisfy my needs it couldn't handle
all the requests, and we can't afford multiple nginx nodes each requesting
an item from our main server whenever it is requested on a different nginx
node.

For that problem I have a few scenarios, but they either have huge overhead
on our servers and our network or are not suitable for a sensitive
production environment because they cause weird problems (sharing storage).

But at this point I'm beginning to wonder if it's even worth it. Should I
settle for having multiple nginx nodes requesting the same item from our
upstream server?

On Sat, Sep 23, 2017 at 1:48 PM, Lucas Rolff > wrote:
> is there any way to share a cache directory between two nginx instances ?
> If it can't be done what do you think is the best way to go when we need to scale the nginx caching storage ?

One is about using the same storage for two nginx instances, the other one
is scaling the nginx cache storage. I believe it's two different things.
There's nothing that prevents you from having two nginx instances reading
from the same cache storage - however, you will get into scenarios where
both machines try to write (let's say they try to cache the same file on
both nginx instances), and you might have some issues.

Why exactly would you need two instances to share the same storage?
And what scale do you mean by scaling the nginx caching storage?

Currently there's really only a limit to your disk size and the size of
your keys_zone - if you have 50 terabytes of storage, just set the
keys_zone size to be big enough to contain the number of files you want to
manage (you can store about 8,000 files per 1 megabyte).

From: nginx > on behalf of Amir Keshavarz >
Reply-To: "nginx at nginx.org" >
Date: Saturday, 23 September 2017 at 10.58
To: "nginx at nginx.org" >
Subject: Scaling nginx caching storage

Hello,
Since nginx stores some cache metadata in memory, is there any way to share
a cache directory between two nginx instances?

If it can't be done, what do you think is the best way to go when we need
to scale the nginx caching storage?

Thanks

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From amirkekh at gmail.com Sat Sep 23 19:52:00 2017
From: amirkekh at gmail.com (Amir Keshavarz)
Date: Sun, 24 Sep 2017 00:22:00 +0430
Subject: Scaling nginx caching storage
In-Reply-To: References: Message-ID: 

We currently have ~30k req/s, but our network is growing very fast, so I
need to make sure our architecture is scalable. After some research I've
decided to go with individual nginx nodes for now. If we run into too many
requests to our upstream, I'll probably set up the multi-layer architecture
you mentioned.

Thank you for your help.
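The multi-layer setup Lucas described could look roughly like this on one of the edge nodes. Every address, path, and zone name below is made up for illustration; the intermediate tier would itself cache and proxy_pass to the real origin:

```nginx
proxy_cache_path /var/cache/nginx/edge levels=1:2 keys_zone=edge:100m inactive=7d;

# Intermediate nginx cache tier (illustrative addresses); misses on the
# edge are fetched from here instead of hitting the origin directly.
upstream cache_tier {
    server 10.0.0.11;
    server 10.0.0.12;
}

server {
    listen 80;

    location / {
        proxy_cache edge;
        proxy_cache_valid 200 301 302 1h;
        # Collapse concurrent misses for the same key into one upstream fetch.
        proxy_cache_lock on;
        proxy_pass http://cache_tier;
    }
}
```

With proxy_cache_lock enabled, only one request per cache key populates the cache at a time, which further limits duplicate requests reaching the upstream.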
On Sat, Sep 23, 2017 at 2:37 PM, Lucas Rolff wrote: > > if one node had the storage capacity to satisfy my needs it couldn't > handle all the requests > > > > What amount of requests / traffic are we talking about, and which kind of > hardware do you use? > You can make nginx serve 20+ gigabit of traffic from a single machine if > the content is right, or 50k+ req/s > > > > > But at this point i'm beginning to think if it's even worth it . Should > i settle for having multiple nginx nodes requesting the same item to our > upstream server ? > > > > If you?re offloading 99.xx% of the content to nginx anyway, a few extra > requests to the upstream shouldn?t really matter much. > > You could even have multiple layers of nginx to lower the amount of > upstream connections going to the server ? so on your let?s say 10 nginx > instances, you could use 1-2 nginx instances as upstream, and on those 1-2 > nginx instances use the actual upstream. > > > > Generally speaking you?ll have downsides with sharing storage or cache > between multiple servers, and it just adds a lot of complexity to minimize > the cost and then it might turn out you actually do not save anything > anyway. > > > > Best Regards, > > Lucas > > > > *From: *nginx on behalf of Amir Keshavarz < > amirkekh at gmail.com> > *Reply-To: *"nginx at nginx.org" > *Date: *Saturday, 23 September 2017 at 11.48 > *To: *"nginx at nginx.org" > *Subject: *Re: Scaling nginx caching storage > > > > Sorry for the confusion . > > My problem is that i need to cache items as much as possible so even if > one node had the storage capacity to satisfy my needs it couldn't handle > all the requests and we can't afford multiple nginx nodes request to our > main server each time an item is requested on a different nginx node . 
> > > > For that problem i have afew scenarios but they either have huge overhead > on our servers and our network or are not suitable for sensitive > production env because it causes weird problems ( sharing storage ) . > > > > But at this point i'm beginning to think if it's even worth it . Should i > settle for having multiple nginx nodes requesting the same item to our > upstream server ? > > > > > > On Sat, Sep 23, 2017 at 1:48 PM, Lucas Rolff wrote: > > > is there any way to share a cache directory between two nginx instances ? > > > If it can't be done what do you think is the best way to go when we need > to scale the nginx caching storage ? > > > > One is about using same storage for two nginx instances, the other one is > scaling the nginx cache storage. > > I believe it?s two different things. > > > > There?s nothing that prevents you from having two nginx instances reading > from the same cache storage ? however you will get into scenarios where if > you try to write from both machines (Let?s say it tries to cache the same > file on both nginx instances), you might have some issues. > > > > Why exactly would you need two instances to share the same storage? > > And what scale do you mean by scaling the nginx caching storage? > > > > Currently there?s really only a limit to your disk size and the size of > your keys_zone ? if you have 50 terabytes of storage, just set the > keys_zone size to be big enough to contain the amount of files you wanna > manage (you can store about 8000 files per 1 megabyte). > > > > > > > > *From: *nginx on behalf of Amir Keshavarz < > amirkekh at gmail.com> > *Reply-To: *"nginx at nginx.org" > *Date: *Saturday, 23 September 2017 at 10.58 > *To: *"nginx at nginx.org" > *Subject: *Scaling nginx caching storage > > > > Hello, > > Since nginx stores some cache metadata in memory , is there any way to > share a cache directory between two nginx instances ? 
> > > > If it can't be done what do you think is the best way to go when we need > to scale the nginx caching storage ? > > > > Thanks > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Sep 23 21:40:32 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 24 Sep 2017 00:40:32 +0300 Subject: Private Key issue In-Reply-To: <001c01d333ab$c4a2bca0$4de835e0$@icmcapital.co.uk> References: <001c01d333ab$c4a2bca0$4de835e0$@icmcapital.co.uk> Message-ID: <20170923214032.GF58595@mdounin.ru> Hello! On Fri, Sep 22, 2017 at 03:04:54PM +0100, peter.wright at icmcapital.co.uk wrote: > nginx: [emerg] > SSL_CTX_use_PrivateKey_file("/etc/ssl/private/access.uat.icmcapital.co.uk.ke > y") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start > line:Expecting: ANY PRIVATE KEY error:140B0009:SSL > routines:SSL_CTX_use_PrivateKey_file:PEM lib) > > nginx: configuration file /etc/nginx/nginx.conf test failed The private key file in question is corrupted. Inspect it manually to find out what's wrong. 
-- Maxim Dounin http://nginx.org/

From nginx-forum at forum.nginx.org Sun Sep 24 10:17:39 2017
From: nginx-forum at forum.nginx.org (PetrHolik)
Date: Sun, 24 Sep 2017 06:17:39 -0400
Subject: Memory usage doubles on reload
In-Reply-To: <29968e2a3ac9753860e0011200df86a4.NginxMailingListEnglish@forum.nginx.org>
References: <20140307102705.GU34696@mdounin.ru> <5c2a707889b9468eb0db26186fb10882.NginxMailingListEnglish@forum.nginx.org> <29968e2a3ac9753860e0011200df86a4.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <2fb9ce1c00b1696efcbc76d6ffdd499c.NginxMailingListEnglish@forum.nginx.org>

Hello, unfortunately no :( we have doubled server memory to 128 GB.

Sincerely
Petr Holik

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,248163,276499#msg-276499

From luky-37 at hotmail.com Sun Sep 24 14:34:24 2017
From: luky-37 at hotmail.com (Lukas Tribus)
Date: Sun, 24 Sep 2017 14:34:24 +0000
Subject: AW: Scaling nginx caching storage
In-Reply-To: References: , Message-ID: 

> After some researching i've decided to go with individual nginx
> nodes for now . If we encounter too much request to our
> upstream, i'm gonna set up the multi layer architecture you
> mentioned probably

While multiple layers of nginx cache may help with bandwidth, they waste a
huge amount of storage by caching the same object on multiple layers.

What I'd suggest instead is to set up a load balancer with URI hashing in
front, so the cache hit ratio is as high as possible without multiple
layers caching the same object.

Lukas

From amirkekh at gmail.com Sun Sep 24 15:14:32 2017
From: amirkekh at gmail.com (Amir Keshavarz)
Date: Sun, 24 Sep 2017 19:44:32 +0430
Subject: Scaling nginx caching storage
In-Reply-To: References: Message-ID: 

> What I'd suggest instead is setup a load balancer with URI hashing
> in front of it, so the cache hit ratio is as high as possible without
> multiple layers caching the same object.
We can also combine LB and cache nodes in one machine, as explained on the
nginx blog, and that could be very efficient and scalable. I should monitor
our network for a few weeks, but I think overall this design would be very
good for us.

On Sun, Sep 24, 2017 at 7:04 PM, Lukas Tribus wrote:
> > After some researching i've decided to go with individual nginx
> > nodes for now . If we encounter too much request to our
> > upstream, i'm gonna set up the multi layer architecture you
> > mentioned probably
>
> While multi layers of nginx cache may help with bandwidth, it
> wastes huge amount of storage while caching the same object
> on multiple layers.
>
> What I'd suggest instead is setup a load balancer with URI hashing
> in front of it, so the cache hit ratio is as high as possible without
> multiple layers caching the same object.
>
> Lukas
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org Sun Sep 24 17:48:53 2017
From: nginx-forum at forum.nginx.org (Dabnis)
Date: Sun, 24 Sep 2017 13:48:53 -0400
Subject: OpenSSL 1.0.2 - CentOS 7.4
In-Reply-To: <1033f7fe-2075-69c7-c904-0d1e076eeee0@nginx.com>
References: <1033f7fe-2075-69c7-c904-0d1e076eeee0@nginx.com>
Message-ID: 

Can't wait for the update with OpenSSL 1.0.2 for full HTTP/2 functionality!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276398,276505#msg-276505 From nginx-forum at forum.nginx.org Mon Sep 25 11:17:08 2017 From: nginx-forum at forum.nginx.org (garyc) Date: Mon, 25 Sep 2017 07:17:08 -0400 Subject: auth_request called multiple times for same single request Message-ID: Apologies, I posted this issue to the wrong list (php-fpm), the link is: > https://forum.nginx.org/read.php?3,276451,276475#msg-276475 It has debug log file extracts that seem to suggest the location redirect to a static error page is attempted but doesn't resolve. Many thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276512,276512#msg-276512 From nginx-forum at forum.nginx.org Mon Sep 25 12:04:02 2017 From: nginx-forum at forum.nginx.org (vikas027) Date: Mon, 25 Sep 2017 08:04:02 -0400 Subject: Two Way SSL - client SSL certificate verify error Message-ID: <90e541ebd6cec482b04b8193a61e9d0c.NginxMailingListEnglish@forum.nginx.org> I am testing out two-way SSL and I have configured a Root CA, Intermediate CA and created a server and client certificates which are signed by Intermediate CA. 
This is my configuration file ------------------------------------------------------------------ server { listen 443; server_name server.test.com; ssl on; # App Cert plus Intermediate CA Cert ssl_certificate /root/ca/intermediate/certs/server_plus_intermediate.chain.pem; # Application Key ssl_certificate_key /root/ca/intermediate/private/server.test.com.key.pem; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; error_log /var/log/nginx/massl.log debug; ssl_client_certificate /root/ca/certs/ca.cert.pem; ssl_verify_client on; location / { root /usr/share/nginx/massl; index index.html index.htm; } } ------------------------------------------------------------------ If I use the above config and pass the client certificate (also signed by the same Intermediate CA) and key in curl or openssl s_client, I get below error in /var/log/nginx/massl.log 2017/09/25 21:49:15 [info] 94#94: *9 client SSL certificate verify error: (21:unable to verify the first certificate) while reading client request headers, client: 1.6.0.30, server: server.test.com, request: "GET / HTTP/1.0", host: "server.test.com" I don't have any certificate error in 'openssl s_client' log. Here is the short and debug log https://gist.github.com/vikas027/6c2225c34bb705d83df3547ac9f7467a I understand that I am missing Intermediate CA certificate in client chain, but I am not sure how to pass it. I have tried it adding intermediate CA in 'ssl_client_certificate' parameter in vain. Additionally, everything works fine if I use certificate (and corresponding key) of RootCA and Intermediate CA.. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276514,276514#msg-276514 From adam at anschwa.com Mon Sep 25 19:11:36 2017 From: adam at anschwa.com (Adam Schwartz) Date: Mon, 25 Sep 2017 15:11:36 -0400 Subject: load balancing algorithms Message-ID: Hello, I?m Adam and I?ve been researching load balancing for my undergraduate senior project. I?m particularly interested in the behavior of the ?power of two choices? 
and join-idle-queue algorithms on Nginx.

I've found that the `ngx_http_upstream_module` specifies `least_conn` and
`least_time` load balancing methods, but otherwise incorporates
round-robin. I was curious whether the Nginx community had ever discussed
implementing other load balancing methods?

Thanks!
-Adam

From kyle at ifsight.com Mon Sep 25 19:12:52 2017
From: kyle at ifsight.com (Kyle Sloan)
Date: Mon, 25 Sep 2017 14:12:52 -0500
Subject: variable map for fastcgi_pass
Message-ID: <0BFBBA2E-2A60-407E-B126-A9A4418B7178@ifsight.com>

Hello,

I am trying to use a map based on hostnames to determine whether a domain
should fastcgi_pass to a php5 or php7 container, but am having problems.

My map looks like:

map $host $php_proxy_container {
    hostnames;
    default         "php5fpm:9000";
    www.example.com "php7fpm:9000";
}

My fastcgi file looks like:

fastcgi_pass $php_proxy_container;

I have also tried moving the :9000 from the mapping to the fastcgi_pass so
it looks like fastcgi_pass $php_proxy_container:9000, but alas that did
not work either. I am using maps elsewhere in this same format and they
work.

I get a generic 502 Bad Gateway and nothing more in the HTTP log. Changing
the fastcgi_pass line to the exact value from the map and restarting does
work.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vbart at nginx.com Mon Sep 25 19:14:04 2017
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Mon, 25 Sep 2017 22:14:04 +0300
Subject: auth_request called multiple times for same single request
In-Reply-To: References: Message-ID: <5146525.IvJy3cprdl@vbart-workstation>

On Monday 25 September 2017 07:17:08 garyc wrote:
> Apologies, I posted this issue to the wrong list (php-fpm), the link is:
>
> > https://forum.nginx.org/read.php?3,276451,276475#msg-276475
>
> It has debug log file extracts that seem to suggest the location redirect to
> a static error page is attempted but doesn't resolve.
>
> Many thanks
[..]
I don't see from your logs that it's the same single request.

2017/09/21 12:09:31 [debug] 22090#0: *1 http terminate request count:1
2017/09/21 12:09:31 [debug] 22090#0: *1 http terminate cleanup count:1 blk:0
2017/09/21 12:09:31 [debug] 22090#0: *1 http posted request: "/lowDiskSpace.html?"
2017/09/21 12:09:31 [debug] 22090#0: *1 http terminate handler count:1
2017/09/21 12:09:31 [debug] 22090#0: *1 http request count:1 blk:0
2017/09/21 12:09:31 [debug] 22090#0: *1 http close request

Here's where the request processing was completed.

2017/09/21 12:09:31 [debug] 22090#0: *1 http log handler
2017/09/21 12:09:31 [debug] 22090#0: *1 run cleanup: 0000000000EE4FA0
2017/09/21 12:09:31 [debug] 22090#0: *1 file cleanup: fd:21
2017/09/21 12:09:31 [debug] 22090#0: *1 free: 0000000000EE57E0
2017/09/21 12:09:31 [debug] 22090#0: *1 free: 0000000000E92E20, unused: 4
2017/09/21 12:09:31 [debug] 22090#0: *1 free: 0000000000EE37C0, unused: 3
2017/09/21 12:09:31 [debug] 22090#0: *1 free: 0000000000EE47D0, unused: 1304
2017/09/21 12:09:31 [debug] 22090#0: *1 close http connection: 3
2017/09/21 12:09:31 [debug] 22090#0: *1 event timer del: 3: 1505995771900
2017/09/21 12:09:31 [debug] 22090#0: *1 reusable connection: 0

Here's where connection #1 was closed.

2017/09/21 12:09:31 [debug] 22090#0: *60 http keepalive handler
2017/09/21 12:09:31 [debug] 22090#0: *60 malloc: 0000000000EA46D0:1024
2017/09/21 12:09:31 [debug] 22090#0: *60 recv: eof:0, avail:1
2017/09/21 12:09:31 [debug] 22090#0: *60 recv: fd:22 1024 of 1024
2017/09/21 12:09:31 [debug] 22090#0: *60 reusable connection: 0
2017/09/21 12:09:31 [debug] 22090#0: *60 posix_memalign: 0000000000E92E20:4096 @16
2017/09/21 12:09:31 [debug] 22090#0: *60 event timer del: 22: 1505995796403
2017/09/21 12:09:31 [debug] 22090#0: *60 http process request line

Here's a new request received in a different connection, #60.

wbr, Valentin V. Bartenev

From vbart at nginx.com Mon Sep 25 19:33:54 2017
From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Mon, 25 Sep 2017 22:33:54 +0300 Subject: auth_request called multiple times for same single request In-Reply-To: <5146525.IvJy3cprdl@vbart-workstation> References: <5146525.IvJy3cprdl@vbart-workstation> Message-ID: <1547172.dY13I7ScEQ@vbart-workstation> On Monday 25 September 2017 22:14:04 Valentin V. Bartenev wrote: > On Monday 25 September 2017 07:17:08 garyc wrote: > > Apologies, I posted this issue to the wrong list (php-fpm), the link is: > > > > > https://forum.nginx.org/read.php?3,276451,276475#msg-276475 > > > > It has debug log file extracts that seem to suggest the location redirect to > > a static error page is attempted but doesn't resolve. > > > > Many thanks > > > [..] > > I don't see from your logs that it's the same single request. > > > 2017/09/21 12:09:31 [debug] 22090#0: *1 http terminate request count:1 > 2017/09/21 12:09:31 [debug] 22090#0: *1 http terminate cleanup count:1 blk:0 > 2017/09/21 12:09:31 [debug] 22090#0: *1 http posted request: "/lowDiskSpace.html?" > 2017/09/21 12:09:31 [debug] 22090#0: *1 http terminate handler count:1 > 2017/09/21 12:09:31 [debug] 22090#0: *1 http request count:1 blk:0 > 2017/09/21 12:09:31 [debug] 22090#0: *1 http close request > > Here's the request processing was completed. 
> > 2017/09/21 12:09:31 [debug] 22090#0: *1 http log handler > 2017/09/21 12:09:31 [debug] 22090#0: *1 run cleanup: 0000000000EE4FA0 > 2017/09/21 12:09:31 [debug] 22090#0: *1 file cleanup: fd:21 > 2017/09/21 12:09:31 [debug] 22090#0: *1 free: 0000000000EE57E0 > 2017/09/21 12:09:31 [debug] 22090#0: *1 free: 0000000000E92E20, unused: 4 > 2017/09/21 12:09:31 [debug] 22090#0: *1 free: 0000000000EE37C0, unused: 3 > 2017/09/21 12:09:31 [debug] 22090#0: *1 free: 0000000000EE47D0, unused: 1304 > 2017/09/21 12:09:31 [debug] 22090#0: *1 close http connection: 3 > 2017/09/21 12:09:31 [debug] 22090#0: *1 event timer del: 3: 1505995771900 > 2017/09/21 12:09:31 [debug] 22090#0: *1 reusable connection: 0 > > Here's the connection #1 was closed. > > 2017/09/21 12:09:31 [debug] 22090#0: *60 http keepalive handler > 2017/09/21 12:09:31 [debug] 22090#0: *60 malloc: 0000000000EA46D0:1024 > 2017/09/21 12:09:31 [debug] 22090#0: *60 recv: eof:0, avail:1 > 2017/09/21 12:09:31 [debug] 22090#0: *60 recv: fd:22 1024 of 1024 > 2017/09/21 12:09:31 [debug] 22090#0: *60 reusable connection: 0 > 2017/09/21 12:09:31 [debug] 22090#0: *60 posix_memalign: 0000000000E92E20:4096 @16 > 2017/09/21 12:09:31 [debug] 22090#0: *60 event timer del: 22: 1505995796403 > 2017/09/21 12:09:31 [debug] 22090#0: *60 http process request line > > Here's a new request received in different connection #60. > [..] Your Chrome doesn't stop sending data after it received the 413 response. It continues sending while nginx continues reading up to the "lingering_time": http://nginx.org/en/docs/http/ngx_http_core_module.html#lingering_time After 30 seconds of default lingering time nginx closes the connection. Chrome tries to repeat request again, and again, probably 4 or 5 times. That's what actually happens as I can assume based on my knowledge and from the parts of your debug logs. wbr, Valentin V. 
Bartenev From pluknet at nginx.com Mon Sep 25 21:13:18 2017 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 26 Sep 2017 00:13:18 +0300 Subject: variable map for fastcgi_pass In-Reply-To: <0BFBBA2E-2A60-407E-B126-A9A4418B7178@ifsight.com> References: <0BFBBA2E-2A60-407E-B126-A9A4418B7178@ifsight.com> Message-ID: <5CAAB38E-8ECF-45E0-9FC2-8598287BC5BC@nginx.com> > On 25 Sep 2017, at 22:12, Kyle Sloan wrote: > > Hello, > > I am trying to use a MAP function based on HOSTNAMES to determine if this domain should fastcgi_pass to a php5 or php7 container, but am having problems. > > My map looks like > > map $host $php_proxy_container { > hostnames; > > default "php5fpm:9000"; > > www.example.com "php7fpm:9000"; > } > > > My fastcgi file looks like > > fastcgi_pass $php_proxy_container; > > > [..] > > I get a generic 502 bad gateway, and nothing more in the http log. Changing the fastcgi_pass line to the exact value in the map and restarting does work. > If variable used within fastcgi_pass evaluates to something like hostname:port, you need to define resolver to resolve that name. http://nginx.org/r/resolver OTOH, if that name is used within fastcgi_pass literally, it would be resolved at startup by system resolver instead. -- Sergey Kandaurov From kyle at ifsight.com Mon Sep 25 21:16:58 2017 From: kyle at ifsight.com (Kyle Sloan) Date: Mon, 25 Sep 2017 16:16:58 -0500 Subject: variable map for fastcgi_pass In-Reply-To: <5CAAB38E-8ECF-45E0-9FC2-8598287BC5BC@nginx.com> References: <0BFBBA2E-2A60-407E-B126-A9A4418B7178@ifsight.com> <5CAAB38E-8ECF-45E0-9FC2-8598287BC5BC@nginx.com> Message-ID: Thanks for the response. Both the variables respond/work when they are not maps/variables and just set. I will try out the resolver and see if I can make any progress with it. 
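Reading Sergey's answer back into Kyle's map setup, a sketch of the combination might look like the following. The resolver address (127.0.0.11, which is, for example, Docker's embedded DNS) and the container names are assumptions for illustration, not tested values:

```nginx
map $host $php_proxy_container {
    hostnames;
    default         "php5fpm:9000";
    www.example.com "php7fpm:9000";
}

server {
    listen 80;

    # Because fastcgi_pass receives a variable, the hostname is resolved
    # at request time rather than at startup, which requires a resolver.
    # 127.0.0.11 is an assumed address (e.g. Docker's embedded DNS).
    resolver 127.0.0.11 valid=30s;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass $php_proxy_container;
    }
}
```

Without the resolver directive, a variable-based fastcgi_pass to a hostname can fail at request time even though the literal form worked, which would match the 502 behavior described above.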
Kyle Sloan DevOps Engineer | Interpersonal Frequency kyle at ifsight.com https://ifsight.com > On Sep 25, 2017, at 4:13 PM, Sergey Kandaurov wrote: > >> >> On 25 Sep 2017, at 22:12, Kyle Sloan wrote: >> >> Hello, >> >> I am trying to use a MAP function based on HOSTNAMES to determine if this domain should fastcgi_pass to a php5 or php7 container, but am having problems. >> >> My map looks like >> >> map $host $php_proxy_container { >> hostnames; >> >> default "php5fpm:9000"; >> >> www.example.com "php7fpm:9000"; >> } >> >> >> My fastcgi file looks like >> >> fastcgi_pass $php_proxy_container; >> >> >> [..] >> >> I get a generic 502 bad gateway, and nothing more in the http log. Changing the fastcgi_pass line to the exact value in the map and restarting does work. >> > > If variable used within fastcgi_pass evaluates to something like > hostname:port, you need to define resolver to resolve that name. > http://nginx.org/r/resolver > > OTOH, if that name is used within fastcgi_pass literally, > it would be resolved at startup by system resolver instead. > > -- > Sergey Kandaurov > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Sep 26 00:40:10 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Sep 2017 03:40:10 +0300 Subject: load balancing algorithms In-Reply-To: References: Message-ID: <20170926004010.GF19617@mdounin.ru> Hello! On Mon, Sep 25, 2017 at 03:11:36PM -0400, Adam Schwartz wrote: > I?m Adam and I?ve been researching load balancing for my > undergraduate senior project. I?m particularly interested in the > behavior of the ?power of two choices? and join-idle-queue > algorithms on Nginx. > > I?ve found that the `ngx_http_upstream_module` specifies a > `least_conn` and `least_time` load balancing method, but > otherwise incorporates round-robin. 
> > I was curious if the Nginx community had ever discussed > implementing other load balancing methods? I don't remember any more or less serious discussions, at least since the introduction of the least_conn and hash balancing methods. On the other hand, there is an API in nginx that allows implementing any load balancing algorithm needed. And there are some 3rd-party load balancing modules available. As for the algorithms you've mentioned, "power of two choices" seems to be better than random, though it does not look like it is beneficial even compared to round-robin. Something similar to join-idle-queue can probably be emulated by using least_time + max_conns=1 on each server + queue (though queue is only available in nginx-plus). -- Maxim Dounin http://nginx.org/ From gk at leniwiec.biz Tue Sep 26 01:48:57 2017 From: gk at leniwiec.biz (Grzegorz Kulewski) Date: Tue, 26 Sep 2017 03:48:57 +0200 Subject: OCSP stapling and resolver Message-ID: <59C9B209.9060804@leniwiec.biz> Hello, Is resolver in nginx still needed for OCSP stapling? I am getting a warning from nginx if resolver is not supplied but at the same time both Qualys and openssl s_client output suggest OCSP stapling is working. Strange. -- Grzegorz Kulewski gk at leniwiec.biz +48 663 92 88 95 From sca at andreasschulze.de Tue Sep 26 07:23:59 2017 From: sca at andreasschulze.de (A. Schulze) Date: Tue, 26 Sep 2017 09:23:59 +0200 Subject: OCSP stapling and resolver In-Reply-To: <59C9B209.9060804@leniwiec.biz> Message-ID: <20170926092359.Horde.DVEPXf2FX9LIGpV1D9TCRxJ@andreasschulze.de> Grzegorz Kulewski: > Hello, > > Is resolver in nginx still needed for OCSP stapling? > > I am getting a warning from nginx if resolver is not supplied but at > the same time both Qualys and openssl s_client output suggest OCSP > stapling is working.
Strange. There are two options: (1) let nginx fetch the OCSP response from the CA server itself, or (2) fetch it offline and point nginx at the data via ssl_stapling_file. Option 1 requires a resolver and serves the first response after a restart without OCSP data; option 2 requires a resolver outside nginx (but not inside it) and some scripting, but delivers OCSP data starting with the very first response. > > -- > Grzegorz Kulewski > gk at leniwiec.biz > +48 663 92 88 95 From nginx-forum at forum.nginx.org Tue Sep 26 09:07:25 2017 From: nginx-forum at forum.nginx.org (garyc) Date: Tue, 26 Sep 2017 05:07:25 -0400 Subject: auth_request called multiple times for same single request In-Reply-To: <1547172.dY13I7ScEQ@vbart-workstation> References: <1547172.dY13I7ScEQ@vbart-workstation> Message-ID: <96cd475cf737121a2ce854cb6502424c.NginxMailingListEnglish@forum.nginx.org> Hello, thanks for explaining. Can I ask: in the 5MB scenario (client accepted the 413 response) the logs show: 2017/09/21 12:06:41 [debug] 21560#0: *1 http run request: "/pcapLowDiskSpace.html?" 2017/09/21 12:06:41 [debug] 21560#0: *1 http read discarded body 2017/09/21 12:06:41 [debug] 21560#0: *1 recv: eof:0, avail:1 2017/09/21 12:06:41 [debug] 21560#0: *1 recv: fd:3 2159 of 2159 2017/09/21 12:06:41 [debug] 21560#0: *1 http finalize request: -4, "/pcapLowDiskSpace.html?"
a:1, c:1 2017/09/21 12:06:41 [debug] 21560#0: *1 set http keepalive handler 2017/09/21 12:06:41 [debug] 21560#0: *1 http close request 2017/09/21 12:06:41 [debug] 21560#0: *1 http log handler 2017/09/21 12:06:41 [debug] 21560#0: *1 run cleanup: 0000000001904FA0 2017/09/21 12:06:41 [debug] 21560#0: *1 file cleanup: fd:21 2017/09/21 12:06:41 [debug] 21560#0: *1 free: 00000000019057E0 2017/09/21 12:06:41 [debug] 21560#0: *1 free: 00000000018B2E20, unused: 4 2017/09/21 12:06:41 [debug] 21560#0: *1 free: 00000000019037C0, unused: 3 2017/09/21 12:06:41 [debug] 21560#0: *1 free: 00000000019047D0, unused: 1304 2017/09/21 12:06:41 [debug] 21560#0: *1 free: 00000000018B4450 2017/09/21 12:06:41 [debug] 21560#0: *1 hc free: 0000000000000000 2017/09/21 12:06:41 [debug] 21560#0: *1 hc busy: 0000000000000000 0 2017/09/21 12:06:41 [debug] 21560#0: *1 reusable connection: 1 Is the above line >2017/09/21 12:06:41 [debug] 21560#0: *1 http finalize request: -4, "/pcapLowDiskSpace.html?" a:1, c:1 an indication that the response was accepted by the client? hence the http request is closed and the connection is set reusable? Comparing with the 1GB scenario: 2017/09/21 12:09:31 [debug] 22090#0: *1 http run request: "/pcapLowDiskSpace.html?" 2017/09/21 12:09:31 [debug] 22090#0: *1 http finalize request: -1, "/pcapLowDiskSpace.html?" a:1, c:1 2017/09/21 12:09:31 [debug] 22090#0: *1 http terminate request count:1 2017/09/21 12:09:31 [debug] 22090#0: *1 http terminate cleanup count:1 blk:0 2017/09/21 12:09:31 [debug] 22090#0: *1 http posted request: "/pcapLowDiskSpace.html?" 
2017/09/21 12:09:31 [debug] 22090#0: *1 http terminate handler count:1 2017/09/21 12:09:31 [debug] 22090#0: *1 http request count:1 blk:0 2017/09/21 12:09:31 [debug] 22090#0: *1 http close request 2017/09/21 12:09:31 [debug] 22090#0: *1 http log handler 2017/09/21 12:09:31 [debug] 22090#0: *1 run cleanup: 0000000000EE4FA0 2017/09/21 12:09:31 [debug] 22090#0: *1 file cleanup: fd:21 2017/09/21 12:09:31 [debug] 22090#0: *1 free: 0000000000EE57E0 2017/09/21 12:09:31 [debug] 22090#0: *1 free: 0000000000E92E20, unused: 4 2017/09/21 12:09:31 [debug] 22090#0: *1 free: 0000000000EE37C0, unused: 3 2017/09/21 12:09:31 [debug] 22090#0: *1 free: 0000000000EE47D0, unused: 1304 2017/09/21 12:09:31 [debug] 22090#0: *1 close http connection: 3 2017/09/21 12:09:31 [debug] 22090#0: *1 event timer del: 3: 1505995771900 2017/09/21 12:09:31 [debug] 22090#0: *1 reusable connection: 0 Do the lines > 2017/09/21 12:09:31 [debug] 22090#0: *1 http finalize request: -1, "/pcapLowDiskSpace.html?" a:1, c:1 > 2017/09/21 12:09:31 [debug] 22090#0: *1 http posted request: "/pcapLowDiskSpace.html?" indicate that the request was unacknowledged by the client hence the http request is closed along with the connection: > 2017/09/21 12:09:31 [debug] 22090#0: *1 close http connection: 3 and it is marked as non re-usable? > 2017/09/21 12:09:31 [debug] 22090#0: *1 reusable connection: 0 I will have a closer look at our client code, we use a third party angular library to manage our uploads and as far as i can tell (from the network debug window in chrome) a single POST request is sent in all scenarios; in the 1.1GB file case the POST request times out with an ERR_CONNECTION_RESET error. 
Many thanks Gary Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276512,276541#msg-276541 From nginx-forum at forum.nginx.org Tue Sep 26 10:56:50 2017 From: nginx-forum at forum.nginx.org (fengx) Date: Tue, 26 Sep 2017 06:56:50 -0400 Subject: 'real_ip_header proxy_protocol' don't change the client address Message-ID: <32fc9c65780afa3f16880dcc8cb95da9.NginxMailingListEnglish@forum.nginx.org> Hello, I have enabled proxy_protocol with 'listen 8080 proxy_protocol' and can get the right client address from the $proxy_protocol_addr variable. I also set 'real_ip_header proxy_protocol', but it doesn't change the $remote_addr variable. The documentation at http://nginx.org/en/docs/http/ngx_http_realip_module.html#real_ip_header says: 'The proxy_protocol parameter (1.5.12) changes the client address to the one from the PROXY protocol header.' Am I missing something else? Thanks Xiaofeng Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276542,276542#msg-276542 From nginx-forum at forum.nginx.org Tue Sep 26 11:02:33 2017 From: nginx-forum at forum.nginx.org (fengx) Date: Tue, 26 Sep 2017 07:02:33 -0400 Subject: 'real_ip_header proxy_protocol' don't change the client address In-Reply-To: <32fc9c65780afa3f16880dcc8cb95da9.NginxMailingListEnglish@forum.nginx.org> References: <32fc9c65780afa3f16880dcc8cb95da9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <39a95256c762ec2926c76b24b6ad1edb.NginxMailingListEnglish@forum.nginx.org> I have the settings as follows: real_ip_header proxy_protocol; real_ip_recursive on; set_real_ip_from 192.168.1.0/24; For example, when I send a request to nginx from 10.0.0.1, $proxy_protocol_addr prints 10.0.0.1, which is the original client, but $remote_addr prints 192.168.1.1, which is our proxy.
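For comparison, here is a minimal configuration under which $remote_addr is expected to be rewritten from the PROXY protocol header. The addresses mirror the ones in this thread; the key points are that the realip directives must be in effect for the server block that accepts the proxied connection, and set_real_ip_from must cover the address the proxy actually connects from:

```nginx
server {
    # The connection arrives from the proxy with a PROXY protocol preamble.
    listen 8080 proxy_protocol;

    # Trust PROXY protocol information only from the proxy's network.
    set_real_ip_from 192.168.1.0/24;
    real_ip_header   proxy_protocol;

    location / {
        # With the directives above, $remote_addr should show the
        # original client (e.g. 10.0.0.1), not the proxy (192.168.1.1).
        return 200 "$proxy_protocol_addr $remote_addr\n";
    }
}
```

As far as I understand the realip module, real_ip_recursive only matters for headers that can carry a chain of addresses (such as X-Forwarded-For); the PROXY protocol preamble carries a single address, so that directive should be irrelevant here.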
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276542,276543#msg-276543 From pankaj at releasemanager.in Tue Sep 26 11:31:29 2017 From: pankaj at releasemanager.in (pankaj at releasemanager.in) Date: Tue, 26 Sep 2017 04:31:29 -0700 Subject: Nginx splitting one single request's into multiple requests to upstream. (version 1.13.3) Message-ID: <20170926043129.0a504150a66b62e4c9ddb6488e6496fb.a5fd526cab.wbe@email13.godaddy.com> An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Sep 26 12:34:50 2017 From: nginx-forum at forum.nginx.org (vikas027) Date: Tue, 26 Sep 2017 08:34:50 -0400 Subject: Two Way SSL - client SSL certificate verify error In-Reply-To: <90e541ebd6cec482b04b8193a61e9d0c.NginxMailingListEnglish@forum.nginx.org> References: <90e541ebd6cec482b04b8193a61e9d0c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2c6ef640d9f67c2b04421c0d74c2a6c1.NginxMailingListEnglish@forum.nginx.org> This stands resolved now. Please visit this thread https://serverfault.com/questions/875229/two-way-ssl-error-400-the-ssl-certificate-error-just-for-client-certificate/875547 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276514,276546#msg-276546 From peter_booth at me.com Tue Sep 26 12:37:38 2017 From: peter_booth at me.com (Peter Booth) Date: Tue, 26 Sep 2017 08:37:38 -0400 Subject: Nginx splitting one single request's into multiple requests to upstream. (version 1.13.3) In-Reply-To: <20170926043129.0a504150a66b62e4c9ddb6488e6496fb.a5fd526cab.wbe@email13.godaddy.com> References: <20170926043129.0a504150a66b62e4c9ddb6488e6496fb.a5fd526cab.wbe@email13.godaddy.com> Message-ID: Pankaj, I can't understand exactly what you are saying, but I'm confident that there will be a way for nginx to work for you, provided you ask the question in a clear, unambiguous fashion. Is your application behind nginx, such that nginx is POSTING to the app? Or is your application making the request to nginx which is in front of another back-end?
If so, what is the back-end? How much data is being sent in the POST? Who creates the JSON doc? Peter Are you familiar with the material in https://www.codeproject.com/Articles/648526/All-about-http-chunked-responses > On Sep 26, 2017, at 7:31 AM, pankaj at releasemanager.in wrote: > > Transfer-Encoding: chunked -------------- next part -------------- An HTML attachment was scrubbed... URL: From rainer at ultra-secure.de Tue Sep 26 12:42:41 2017 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Tue, 26 Sep 2017 14:42:41 +0200 Subject: server_name that starts with a number Message-ID: <5c45254902bc49305fb9c99d72ed166b@ultra-secure.de> Hi, I have a website that has a server_name that starts with a number (or two numbers, actually). I also have a catchall default_server configured with the server_name "_". Now, it seems when the server_name starts with a number, it's ignored and requests are routed to the default server. Can someone explain this? How do I fix this? From mdounin at mdounin.ru Tue Sep 26 13:20:40 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Sep 2017 16:20:40 +0300 Subject: OCSP stapling and resolver In-Reply-To: <59C9B209.9060804@leniwiec.biz> References: <59C9B209.9060804@leniwiec.biz> Message-ID: <20170926132040.GK19617@mdounin.ru> Hello! On Tue, Sep 26, 2017 at 03:48:57AM +0200, Grzegorz Kulewski wrote: > Is resolver in nginx still needed for OCSP stapling? Yes. > I am getting a warning from nginx if resolver is not supplied > but at the same time both Qualys and openssl s_client output > suggest OCSP stapling is working. Strange. The warning means that nginx will use IP addresses of the OCSP responder obtained during configuration parsing, and it won't be able to switch to different IP addresses. That is, everything will work unless the OCSP responder is moved to different IP addresses.
-- Maxim Dounin http://nginx.org/ From adam at anschwa.com Tue Sep 26 13:43:37 2017 From: adam at anschwa.com (Adam Schwartz) Date: Tue, 26 Sep 2017 09:43:37 -0400 Subject: load balancing algorithms In-Reply-To: References: Message-ID: <3E8AB450-F4EF-41D9-8909-D4BA6D8743E0@anschwa.com> > On the other hand, there is API in nginx which allows to implement > any load balancing algorithm needed. Cool! I was looking for something like that. > As for the algorithms you've mentioned, "power of two choices" seems > to be better than random, though it does not look like it is > beneficial even compared to round-robin. Are there specific reasons or example you have that show round-robin is sufficient? Thanks for this great info, -Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Sep 26 13:57:51 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Sep 2017 16:57:51 +0300 Subject: 'real_ip_header proxy_protocol' don't change the client address In-Reply-To: <39a95256c762ec2926c76b24b6ad1edb.NginxMailingListEnglish@forum.nginx.org> References: <32fc9c65780afa3f16880dcc8cb95da9.NginxMailingListEnglish@forum.nginx.org> <39a95256c762ec2926c76b24b6ad1edb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170926135751.GL19617@mdounin.ru> Hello! On Tue, Sep 26, 2017 at 07:02:33AM -0400, fengx wrote: > I have the setting as follow: > > real_ip_header proxy_protocol; > real_ip_recursive on; > set_real_ip_from 192.168.1.0/24; > > For example, when I send request to nginx from 10.0.0.1, > $proxy_protocol_addr prints 10.0.0.1, which is the original client, but > $remote_addr prints 192.168.1.1 which is our proxy. Please provide full configuration you are using for tests. 
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Sep 26 14:15:28 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Sep 2017 17:15:28 +0300 Subject: server_name that starts with a number In-Reply-To: <5c45254902bc49305fb9c99d72ed166b@ultra-secure.de> References: <5c45254902bc49305fb9c99d72ed166b@ultra-secure.de> Message-ID: <20170926141528.GM19617@mdounin.ru> Hello! On Tue, Sep 26, 2017 at 02:42:41PM +0200, rainer at ultra-secure.de wrote: > I have a website that has a server_name that starts with a number (or > two numbers, actually). > > I also have a catchall default_server configured with the server_name > "_". > > Now, it seems when the server_name starts with a number, it's ignored > and requests are routed to the default server. Just a quick test, with the only the following server{} blocks in the http{} section: server { listen 8080; server_name _; return 200 $server_name\n; } server { listen 8080; server_name 12test.example.com; return 200 $server_name\n; } Testing it with curl: $ curl -H 'Host: foo.example.com' http://127.0.0.1:8080/ _ $ curl -H 'Host: 12test.example.com' http://127.0.0.1:8080/ 12test.example.com So clearly it selects correct server based on the name provided in the request, even when there are names starting with a number. > Can someone explain this? > > How do I fix this? Most likely explanations are, in no particular order: - you've configured it wrong; - you are testing it wrong. To find out what exactly happened and how to fix this, please provide the following: - full yet minimal configuration you are able to reproduce the behaviour in question (providing "nginx -T" output might be a good idea); - steps you use to test things. Note well that testing with browsers is generally a bad idea, as browsers tend to cache responses. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Tue Sep 26 14:18:51 2017 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Tue, 26 Sep 2017 17:18:51 +0300 Subject: auth_request called multiple times for same single request In-Reply-To: <96cd475cf737121a2ce854cb6502424c.NginxMailingListEnglish@forum.nginx.org> References: <1547172.dY13I7ScEQ@vbart-workstation> <96cd475cf737121a2ce854cb6502424c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2829766.NTrrYNjEiL@vbart-workstation> On Tuesday 26 September 2017 05:07:25 garyc wrote: [..] > > indicate that the request was unacknowledged by the client hence the http > request is closed along with the connection: > > > 2017/09/21 12:09:31 [debug] 22090#0: *1 close http connection: 3 > > and it is marked as non re-usable? > > > 2017/09/21 12:09:31 [debug] 22090#0: *1 reusable connection: 0 > There's no such thing as "response was accepted" or "was unacknowledged". The difference between these two cases is that in the first case the whole request was received by nginx and the connection went into a keep-alive state. In the second case, 30 seconds after the response was sent by nginx, the request body still wasn't received and nginx had nothing to do but close the connection. > > I will have a closer look at our client code, we use a third party angular > library to manage our uploads and as far as i can tell (from the network > debug window in chrome) a single POST request is sent in all scenarios; in > the 1.1GB file case the POST request times out with an ERR_CONNECTION_RESET > error. > I suggest you use tcpdump for investigation, and something simple like curl as a client. The network debug window in Chrome is a toy that doesn't show you the real picture. wbr, Valentin V.
Bartenev From rainer at ultra-secure.de Tue Sep 26 14:47:48 2017 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Tue, 26 Sep 2017 16:47:48 +0200 Subject: server_name that starts with a number In-Reply-To: <20170926141528.GM19617@mdounin.ru> References: <5c45254902bc49305fb9c99d72ed166b@ultra-secure.de> <20170926141528.GM19617@mdounin.ru> Message-ID: <3559d988151252cc73fedf081042529e@ultra-secure.de> Am 2017-09-26 16:15, schrieb Maxim Dounin: > Hello! > Note well that testing with browsers is generally a bad idea, as > browsers tend to cache responses. I almost always test with curl. I can see that the nginx access log of the vhost where the requests are supposed to show up is empty. They do show up in the request log of the default_server (_). The server_name is 20.domain.com, BTW. I will try to reproduce it and send the config. In this case, nginx is only a reverse-proxy for apache (on the same host). Because even the default vhost has a location to proxy_pass most traffic to 127.0.0.1:8080, it wouldn't matter. But the customer seems to need the ".well-known" directory for his own purposes and on the default vhost, it clashes with the .well-known location there (which I can't remove). From mdounin at mdounin.ru Tue Sep 26 15:07:08 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Sep 2017 18:07:08 +0300 Subject: load balancing algorithms In-Reply-To: <3E8AB450-F4EF-41D9-8909-D4BA6D8743E0@anschwa.com> References: <3E8AB450-F4EF-41D9-8909-D4BA6D8743E0@anschwa.com> Message-ID: <20170926150708.GN19617@mdounin.ru> Hello! On Tue, Sep 26, 2017 at 09:43:37AM -0400, Adam Schwartz wrote: > > On the other hand, there is API in nginx which allows to implement > > any load balancing algorithm needed. > > Cool! I was looking for something like that. > > > As for the algorithms you've mentioned, "power of two choices" seems > > to be better than random, though it does not look like it is > > beneficial even compared to round-robin. 
> > Are there specific reasons or example you have that show round-robin is sufficient? As long as we are assuming single load balancer and identical requests, round-robin essentially means that we place equal load on all balanced servers, and this is better than what "power of two choices" can provide. Certainly things will be different if requests are not equal, though this is what least_conn is expected to address (and again, it does so better than just testing two choices). As far as I understand, "Power of two choices" might be beneficial if you need to optimize time spent on comparing the load on different balanced servers, for example, when balancing very large number of servers. Or when trying to do distributed balancing with multiple load balancers, and query loads directly from balanced servers - but this is not something nginx currently tries to address. -- Maxim Dounin http://nginx.org/ From gk at leniwiec.biz Tue Sep 26 15:24:26 2017 From: gk at leniwiec.biz (Grzegorz Kulewski) Date: Tue, 26 Sep 2017 17:24:26 +0200 Subject: OCSP stapling and resolver In-Reply-To: <20170926132040.GK19617@mdounin.ru> References: <59C9B209.9060804@leniwiec.biz> <20170926132040.GK19617@mdounin.ru> Message-ID: <59CA712A.2080205@leniwiec.biz> W dniu 26.09.2017 15:20, Maxim Dounin pisze: > Hello! > > On Tue, Sep 26, 2017 at 03:48:57AM +0200, Grzegorz Kulewski wrote: > >> Is resolver in nginx still needed for OCSP stapling? > > Yes. > >> I am getting a warning from nginx if resolver is not supplied >> but at the same time both Qualys and openssl s_client output >> suggest OCSP stapling is working. Strange. > > The warning means that nginx will use IP addresses of the OCSP > responder obtained during configuration parsing, and it won't be > able to switch to different IP addresses. That is, everything > will work unless OCSP responder will be moved to different IP > addresses. Thank you very much for this explanation. 
I know that this behavior is compatible with the proxy_pass resolving policy, but wouldn't it be better to fail fast in this scenario? Doing what nginx is currently doing is bound to surprise some people, especially if must-staple is used. If you think it's not possible to change it, then maybe the warning can be improved to say exactly what you said? Also, maybe there should be some new configuration directive like: i_really_want_to_resolve_only_at_startup yes; set to no by default - so the user will be forced to be aware of the problem? -- Grzegorz Kulewski From nginx-forum at forum.nginx.org Tue Sep 26 15:40:20 2017 From: nginx-forum at forum.nginx.org (garyc) Date: Tue, 26 Sep 2017 11:40:20 -0400 Subject: auth_request called multiple times for same single request In-Reply-To: <2829766.NTrrYNjEiL@vbart-workstation> References: <2829766.NTrrYNjEiL@vbart-workstation> Message-ID: <8ba84bac1ba769148cbf55fd08d5b46a.NginxMailingListEnglish@forum.nginx.org> Ok, thanks, I will look into tcpdump. In your opinion, in principle, is what I am attempting feasible? >In the second case, 30 seconds after the response was sent by nginx, the >request body still wasn't received and nginx had nothing to do than just >close the connection. This suggests to me that although I am attempting to intercept the request headers and reject the request if there is insufficient space (auth_request), the whole request body must be received anyway; so if the file is large enough that 30 seconds isn't enough time to receive it all, the connection will be closed. Even if I can increase this timeout, I really need to find a solution that protects against the entire file being received unless there is space for it; these files can be up to 10GB in size. I perhaps need to look again at a two-stage process to validate the available space (GET) before attempting the upload (POST).
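A rough sketch of what that two-stage flow could look like in nginx terms. The endpoint names (/check-space, /upload) and the backend on port 9000 are hypothetical, invented purely for illustration; auth_request still guards the upload against clients that skip the first stage, with the caveat discussed in this thread that it cannot stop an in-flight request body:

```nginx
server {
    listen 80;

    # Stage 1: the client sends GET /check-space (e.g. with the intended
    # upload size in a header) and only starts the POST if this returns 200.
    location = /check-space {
        proxy_pass http://127.0.0.1:9000;   # assumed space-checking service
    }

    # Stage 2: the actual upload.
    location = /upload {
        client_max_body_size 10g;
        auth_request /check-space-internal;
        proxy_pass http://127.0.0.1:9000;
    }

    # auth_request subrequests must not forward the request body.
    location = /check-space-internal {
        internal;
        proxy_pass http://127.0.0.1:9000/check-space;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Upload-Size $http_content_length;
    }
}
```

Passing the original Content-Length to the checker via a custom header (X-Upload-Size here, an assumed name) lets the backend decide whether the upload would fit before any body bytes are accepted on the application side.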
Many thanks Gary Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276512,276558#msg-276558 From gk at leniwiec.biz Tue Sep 26 16:15:11 2017 From: gk at leniwiec.biz (Grzegorz Kulewski) Date: Tue, 26 Sep 2017 18:15:11 +0200 Subject: proxy_cache_bypass and non-200 response Message-ID: <59CA7D0F.2080708@leniwiec.biz> Hello, I think I found a bug or undocumented feature in nginx. Or it's just me being stupid. I am debugging the following case: 1. I have an upstream that sometimes returns HTTP 200 and sometimes returns HTTP 401 and both codes are OK from my point of view. Both are returned with X-Accel-Expires allowing for caching. 2. I have an nginx caching proxy server that uses that upstream and caches the responses. It also changes 401 to 403 using error_page. 3. Now if there is cache MISS for the URL and upstream returns 401 it gets cached by nginx correctly. 4. But if there is already HTTP 200 response in the cache and the request bypasses the cache and the upstream returns 401, the cache seems to ignore 401 and keeps old HTTP 200 response. This makes it impossible to re-cache HTTP 200 response with a newer (401 but still newer) response. Possibly same problem applies to other non-200 responses but right now only 401 interests me. I am not using any directives like proxy_cache_use_stale or similar. Only proxy_pass, proxy_cache and proxy_cache_bypass. I set proxy_cache_key to $remote_addr. I am using nginx version 1.12.1 from your Ubuntu xenial repository. 
-- Grzegorz Kulewski From nginx-forum at forum.nginx.org Wed Sep 27 09:16:35 2017 From: nginx-forum at forum.nginx.org (garyc) Date: Wed, 27 Sep 2017 05:16:35 -0400 Subject: auth_request called multiple times for same single request In-Reply-To: <8ba84bac1ba769148cbf55fd08d5b46a.NginxMailingListEnglish@forum.nginx.org> References: <2829766.NTrrYNjEiL@vbart-workstation> <8ba84bac1ba769148cbf55fd08d5b46a.NginxMailingListEnglish@forum.nginx.org> Message-ID: I had a look around for expected behavior during large http 1.x POST requests, it looks like the standards suggest that browsers should monitor for 413 responses and terminate the body transmission but they don't: https://stackoverflow.com/questions/18367824/how-to-cancel-http-upload-from-data-events/18370751#18370751 Chrome and Firefox have issues 4+ years old on this, doesn't look likely that they will be addressed any time soon. I think a two stage approach may be better for our scenario, too bad, this almost worked! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276512,276571#msg-276571 From nginx-forum at forum.nginx.org Wed Sep 27 10:57:00 2017 From: nginx-forum at forum.nginx.org (garyc) Date: Wed, 27 Sep 2017 06:57:00 -0400 Subject: auth_request called multiple times for same single request In-Reply-To: References: <2829766.NTrrYNjEiL@vbart-workstation> <8ba84bac1ba769148cbf55fd08d5b46a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <313bb8712d7c17bc9307b069ce4f4bda.NginxMailingListEnglish@forum.nginx.org> Looks like support for 100 Continue: https://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.2.3 may have covered our scenario, I found an old ticket on this https://trac.nginx.org/nginx/ticket/493#no1 I guess there is no intention to support this in a future release? We can live with a 2 stage process for now, may look to move to http 2 for other reasons in the future, we can address this properly then. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276512,276573#msg-276573 From mdounin at mdounin.ru Wed Sep 27 13:32:01 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 27 Sep 2017 16:32:01 +0300 Subject: auth_request called multiple times for same single request In-Reply-To: <313bb8712d7c17bc9307b069ce4f4bda.NginxMailingListEnglish@forum.nginx.org> References: <2829766.NTrrYNjEiL@vbart-workstation> <8ba84bac1ba769148cbf55fd08d5b46a.NginxMailingListEnglish@forum.nginx.org> <313bb8712d7c17bc9307b069ce4f4bda.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170927133201.GW19617@mdounin.ru> Hello! On Wed, Sep 27, 2017 at 06:57:00AM -0400, garyc wrote: > Looks like support for 100 Continue: > > https://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.2.3 > > may have covered our scenario, I found an old ticket on this > > https://trac.nginx.org/nginx/ticket/493#no1 Support for 100-continue is present in nginx. And it is expected to work fine with auth_request. (The particular ticket is about not processing 100-continue but delegating it to a backend server, this is not something nginx supports.) The problem is that browsers don't use 100-continue, and hence it won't help in your case. -- Maxim Dounin http://nginx.org/ From adam at anschwa.com Wed Sep 27 13:38:33 2017 From: adam at anschwa.com (Adam Schwartz) Date: Wed, 27 Sep 2017 09:38:33 -0400 Subject: load balancing algorithms In-Reply-To: References: Message-ID: Hi, > Certainly things will be different if > requests are not equal, though this is what least_conn is expected > to address (and again, it does so better than just testing two > choices). Awesome, I hope to address this issue in my research. My suspicion is that round-robin and random will continue to do pretty well under typical web server demands. However, these other algorithms look promising and hopefully like you?ve said, can be implemented using a combination of existing methods. 
Thank you very much for your replies, -Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Sep 27 16:06:24 2017 From: nginx-forum at forum.nginx.org (garyc) Date: Wed, 27 Sep 2017 12:06:24 -0400 Subject: auth_request called multiple times for same single request In-Reply-To: <20170927133201.GW19617@mdounin.ru> References: <20170927133201.GW19617@mdounin.ru> Message-ID: <130c3aa53d44ea6a9b9f57691d4f2e4e.NginxMailingListEnglish@forum.nginx.org> Thanks, understood. Got a bit excited when I realized our client wasn't sending the 'Expect: 100-continue' header in our POST but as you have pointed out even with this header Chrome and Firefox do not stop sending the body. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276512,276579#msg-276579 From vbart at nginx.com Wed Sep 27 16:40:46 2017 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 27 Sep 2017 19:40:46 +0300 Subject: auth_request called multiple times for same single request In-Reply-To: <313bb8712d7c17bc9307b069ce4f4bda.NginxMailingListEnglish@forum.nginx.org> References: <2829766.NTrrYNjEiL@vbart-workstation> <313bb8712d7c17bc9307b069ce4f4bda.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2747577.zrDXh6v35E@vbart-workstation> On Wednesday 27 September 2017 06:57:00 garyc wrote: [..] > We can live with a 2 stage process for now, may look to move to http 2 for > other reasons in the future, we can address this properly then. > [..] Well, Chrome behavior with HTTP/2 isn't better. Here's a workaround that we had to add in order to support Chrome: http://hg.nginx.org/nginx/rev/8df664ebe037 wbr, Valentin V. 
Bartenev From mdounin at mdounin.ru Wed Sep 27 17:53:58 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 27 Sep 2017 20:53:58 +0300 Subject: OCSP stapling and resolver In-Reply-To: <59CA712A.2080205@leniwiec.biz> References: <59C9B209.9060804@leniwiec.biz> <20170926132040.GK19617@mdounin.ru> <59CA712A.2080205@leniwiec.biz> Message-ID: <20170927175358.GD19617@mdounin.ru> Hello! On Tue, Sep 26, 2017 at 05:24:26PM +0200, Grzegorz Kulewski wrote: > On 26.09.2017 15:20, Maxim Dounin wrote: > > Hello! > > > > On Tue, Sep 26, 2017 at 03:48:57AM +0200, Grzegorz Kulewski > > wrote: > > > >> Is resolver in nginx still needed for OCSP stapling? > > > > Yes. > > > >> I am getting a warning from nginx if resolver is not supplied > >> but at the same time both Qualys and openssl s_client output > >> suggest OCSP stapling is working. Strange. > > > > The warning means that nginx will use IP addresses of the OCSP > > responder obtained during configuration parsing, and it won't > > be able to switch to different IP addresses. That is, > > everything will work unless the OCSP responder is moved to > > different IP addresses. > > Thank you very much for this explanation. > > I know that this behavior is compatible with the proxy_pass > resolving policy but wouldn't it be better to fail fast in this > scenario? Doing what nginx is currently doing is bound to > surprise some people, especially if must staple is used. Even if implemented (this probably won't be trivial for various reasons), this would limit functionality if you don't have a DNS server on hand and nevertheless want to use OCSP stapling, assuming IP addresses won't change fast enough, or you are using an IP address for the OCSP responder. As for must staple, it was discussed more than once that must staple requirements are quite different from the ones needed for OCSP stapling as an optimization technique, as implemented in nginx.
> If you think it's not possible to change it then maybe the > warning can be improved to say exactly what you said? Looks a little bit long for a warning to me. -- Maxim Dounin http://nginx.org/ From aldernetwork at gmail.com Wed Sep 27 20:17:35 2017 From: aldernetwork at gmail.com (Alder Netw) Date: Wed, 27 Sep 2017 13:17:35 -0700 Subject: Quick question about using kill -USR1 to recreate access.log In-Reply-To: <20140520113307.GS1849@mdounin.ru> References: <65023f2e14bf1cf76f1e530874e319ff.NginxMailingListEnglish@forum.nginx.org> <20140520113307.GS1849@mdounin.ru> Message-ID: Hi Maxim, We came across a case where kill -USR1 doesn't cause nginx to reopen the access.log, and we need to run nginx with "daemon off" and "master-process off". Is that a known issue and is there any workaround? On Tue, May 20, 2014 at 4:33 AM, Maxim Dounin wrote: > Hello! > > On Mon, May 19, 2014 at 03:06:06PM -0400, samingrassia wrote: > > > Thanks to everyone in advance! > > > > I have a cron that runs the following: > > > > mv $NGINX_ACCESS_LOG $ACCESS_LOG_DROPBOX/$LOG_FILENAME > > kill -USR1 `cat $NGINX_PID` > > > > My question is: during the time between the mv and the kill, are there any log > > writes being discarded, or are they being stacked in memory and > > dumped into the new access.log after it is recreated? > > Unless you are trying to move logs to a different filesystem, > logging will continue to the old file till USR1 is processed. > > From nginx's point of view, the "mv" command does nothing - as nginx > has an open file descriptor, it will continue to write to it, and log > lines will appear in the (old) file - the file which now has a > new name. After USR1 nginx will reopen the log, and will continue > further logging to a new file.
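The open-descriptor behaviour described above can be demonstrated without nginx at all: a process that holds a file open keeps appending to the same inode after the file is renamed, which is why log lines keep landing in the moved file until USR1 makes nginx reopen its logs. A small sketch (the file names here are arbitrary):

```shell
# A background writer stands in for an nginx worker holding access.log open.
( for i in 1 2 3 4 5; do echo "line $i"; sleep 0.2; done ) > demo.log &
writer=$!
sleep 0.3                      # let the first couple of lines be written
mv demo.log demo-rotated.log   # the writer's fd still points at this inode
wait "$writer"
wc -l < demo-rotated.log       # all 5 lines ended up in the renamed file
```

Only an explicit reopen - which is what USR1 triggers in nginx - makes the process start a fresh file under the old name.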
> > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Sep 28 02:38:45 2017 From: nginx-forum at forum.nginx.org (fengx) Date: Wed, 27 Sep 2017 22:38:45 -0400 Subject: 'real_ip_header proxy_protocol' don't change the client address In-Reply-To: <20170926135751.GL19617@mdounin.ru> References: <20170926135751.GL19617@mdounin.ru> Message-ID: <5508c91e9151689b5ee448e119c67801.NginxMailingListEnglish@forum.nginx.org> The config is rather simple, as follows. My test version is 1.7.2, a bit old; I can't upgrade to the latest one in our production for now. Anyway, I think it should work in 1.7.2 because the documentation says proxy_protocol was introduced in 1.5.12.

http {
    log_format combined '$proxy_protocol_addr - $remote_addr - $remote_user [$time_local] '
                        '"$request" $status $body_bytes_sent '
                        '"$http_referer" "$http_user_agent"';
    ...

    server {
        server_name www.abc.com;

        listen 80;
        listen 8181 proxy_protocol;

        real_ip_header proxy_protocol;
        real_ip_recursive on;
        set_real_ip_from 192.168.1.0/24;

        location / {
            ...
        }
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276542,276590#msg-276590 From nginx-forum at forum.nginx.org Thu Sep 28 08:38:32 2017 From: nginx-forum at forum.nginx.org (garyc) Date: Thu, 28 Sep 2017 04:38:32 -0400 Subject: auth_request called multiple times for same single request In-Reply-To: <2747577.zrDXh6v35E@vbart-workstation> References: <2747577.zrDXh6v35E@vbart-workstation> Message-ID: <8f3a837d4058ad697dcd822003b2ccfc.NginxMailingListEnglish@forum.nginx.org> Thanks for the warning! I have to admit I was a little surprised to learn that, arguably, the two major browsers don't implement 100-continue and have no plan to do so.
I know our scenario is far from typical but POST uploads are surely widespread. Thanks to all for your help on this; the support provided by this forum is really great and much appreciated. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276512,276594#msg-276594 From mdounin at mdounin.ru Thu Sep 28 13:54:41 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 28 Sep 2017 16:54:41 +0300 Subject: 'real_ip_header proxy_protocol' don't change the client address In-Reply-To: <5508c91e9151689b5ee448e119c67801.NginxMailingListEnglish@forum.nginx.org> References: <20170926135751.GL19617@mdounin.ru> <5508c91e9151689b5ee448e119c67801.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170928135441.GF19617@mdounin.ru> Hello! On Wed, Sep 27, 2017 at 10:38:45PM -0400, fengx wrote: > The config is rather simple as following. My test version is 1.7.2, a bit > old. I can't upgrade to the latest one in our production for now. Anyway I > think it should work in 1.7.2 because the document says proxy_protocol was > introduced from 1.5.12. > > http { > log_format combined '$proxy_protocol_addr - $remote_addr - $remote_user > [$time_local] ' > '"$request" $status $body_bytes_sent ' > '"$http_referer" "$http_user_agent"'; > ... > > server { > server_name www.abc.com; > > listen 80; > listen 8181 proxy_protocol; > > real_ip_header proxy_protocol; > real_ip_recursive on; > set_real_ip_from 192.168.1.0/24; > > location / { > ... > } > } > } And how do you test? Exactly the same config, and even with exactly the same version of nginx, works fine here:

$ telnet localhost 8181
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
PROXY TCP4 10.0.0.1 10.0.0.2 1 2
GET / HTTP/1.0

HTTP/1.1 200 OK
Server: nginx/1.7.2
Date: Thu, 28 Sep 2017 13:48:09 GMT
Content-Type: text/plain
Content-Length: 19
Connection: close

10.0.0.1 127.0.0.1
Connection closed by foreign host.
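For repeat testing, the request typed into telnet above can be prepared non-interactively. This is only a sketch: the addresses and port match Maxim's example, and the commented-out nc invocation assumes a netcat variant is installed and nginx is listening on 8181 with proxy_protocol:

```shell
# PROXY protocol v1 is a single CRLF-terminated text line sent before the
# regular HTTP request.
printf 'PROXY TCP4 10.0.0.1 10.0.0.2 1 2\r\nGET / HTTP/1.0\r\n\r\n' > request.txt
# nc localhost 8181 < request.txt   # send it to a live proxy_protocol listener
head -c 32 request.txt              # the 32-byte PROXY line, minus its CRLF
```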
Where the response body is a result of return 200 "$proxy_protocol_addr $remote_addr\n"; in location /. Corresponding log line: 10.0.0.1 - 127.0.0.1 - - [28/Sep/2017:16:48:09 +0300] "GET / HTTP/1.0" 200 19 "-" "-" Note well that it is not a good idea to run nginx 1.7.2 in production. It is a long-obsolete version of the mainline branch; it has not been supported for more than 3 years now and has known security issues, see http://nginx.org/en/security_advisories.html. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Sep 28 14:08:01 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 28 Sep 2017 17:08:01 +0300 Subject: 'real_ip_header proxy_protocol' don't change the client address In-Reply-To: <20170928135441.GF19617@mdounin.ru> References: <20170926135751.GL19617@mdounin.ru> <5508c91e9151689b5ee448e119c67801.NginxMailingListEnglish@forum.nginx.org> <20170928135441.GF19617@mdounin.ru> Message-ID: <20170928140801.GG19617@mdounin.ru> Hello! On Thu, Sep 28, 2017 at 04:54:41PM +0300, Maxim Dounin wrote: > Hello! > > On Wed, Sep 27, 2017 at 10:38:45PM -0400, fengx wrote: > > > The config is rather simple as following. My test version is 1.7.2, a bit > > old. I can't upgrade to the latest one in our production for now. Anyway I > > think it should work in 1.7.2 because the document says proxy_protocol was > > introduced from 1.5.12. > > > > http { > > log_format combined '$proxy_protocol_addr - $remote_addr - $remote_user > > [$time_local] ' > > '"$request" $status $body_bytes_sent ' > > '"$http_referer" "$http_user_agent"'; > > ... > > > > server { > > server_name www.abc.com; > > > > listen 80; > > listen 8181 proxy_protocol; > > > > real_ip_header proxy_protocol; > > real_ip_recursive on; > > set_real_ip_from 192.168.1.0/24; > > > > location / { > > ... > > } > > } > > } > > And how do you test?
> > Exactly the same config, and even with exactly the same version of > nginx works fine here: Uhm, and one more note: exactly the same configuration will produce the following error: nginx: [emerg] duplicate "log_format" name "combined" in ... It might be the reason why you are seeing an incorrect address: you are testing with some different configuration instead. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Sep 28 14:53:29 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 28 Sep 2017 17:53:29 +0300 Subject: Quick question about using kill -USR1 to recreate access.log In-Reply-To: References: <65023f2e14bf1cf76f1e530874e319ff.NginxMailingListEnglish@forum.nginx.org> <20140520113307.GS1849@mdounin.ru> Message-ID: <20170928145329.GH19617@mdounin.ru> Hello! On Wed, Sep 27, 2017 at 01:17:35PM -0700, Alder Netw wrote: > We came across a case where kill -USR1 doesn't cause nginx to reopen the > access.log. And we need to run nginx with "daemon off" and "master-process > off". Is that a known issue and is there any workaround? Works fine here, just tested. Note though that the "master_process" directive is intended for use by developers only, see http://nginx.org/r/master_process. It has various known issues - in particular, it cannot correctly handle configuration reloads - and using it for anything but development is a bad idea. Don't use it unless you understand what you are doing and are prepared to deal with the resulting issues yourself.
-- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Sep 28 16:59:15 2017 From: nginx-forum at forum.nginx.org (fabian_uy) Date: Thu, 28 Sep 2017 12:59:15 -0400 Subject: v1.1.19 Https SSL Stream Timeout and 502 Message-ID: <88faa29b3cec9c909a54739f0406f676.NginxMailingListEnglish@forum.nginx.org> Hello, I have nginx v1.1.19. I'm trying to configure a reverse proxy to an outside IP, and I am having problems which are registered in the log as: connect() failed (110: Connection timed out) while connecting to upstream and SSL_do_handshake() failed (SSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol), depending on whether I add the :443 port number after the IP in the upstream, e.g. externalip:443. Some help identifying the wrong parameter, or what more I need to add, would be appreciated. Thanks in advance. The config file is:

server {
    listen ip:443 ssl;
    ssl on;
    root /var/www;
    ssl_certificate /etc/nginx/certs/3/server.crt;
    ssl_certificate_key /etc/nginx/certs/3/server.key;
    ssl_protocols SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers RC4:HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    keepalive_timeout 60;
    ssl_session_cache shared:SSL:10m;
    ssl_verify_client off;
    proxy_ssl_session_reuse on;
    ssl_session_timeout 10m;
    large_client_header_buffers 4 32K;
    access_log /var/log/nginx/ssl.access.log combinedhackmultiple;
    error_log /var/log/nginx/ssl.error.log;
    location / {
        proxy_pass https://nametoacessexternalssl;
    }
}

And the upstream is:

upstream nametoacessexternalssl {
    server externalipaddress:443;
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276607,276607#msg-276607 From nginx-forum at forum.nginx.org Thu Sep 28 18:37:10 2017 From: nginx-forum at forum.nginx.org (fabian_uy) Date: Thu, 28 Sep 2017 14:37:10 -0400 Subject: SOLVED Re: v1.1.19 Https SSL Stream Timeout and 502 In-Reply-To: <88faa29b3cec9c909a54739f0406f676.NginxMailingListEnglish@forum.nginx.org> References: <88faa29b3cec9c909a54739f0406f676.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <46a41057874651e6c2d88f844e85ca0d.NginxMailingListEnglish@forum.nginx.org> SOLVED Firewall ISSUE. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276607,276608#msg-276608 From nginx-forum at forum.nginx.org Fri Sep 29 09:00:22 2017 From: nginx-forum at forum.nginx.org (rnmx18) Date: Fri, 29 Sep 2017 05:00:22 -0400 Subject: Can NGINX cache metadata get updated automatically, if file is added through backdoor by another NGINX proxy-cache? Message-ID: <6428544b235ba16050f794b843b2e8f0.NginxMailingListEnglish@forum.nginx.org> Hi, I have a use-case where one NGINX (say NGINX-A) is set up as a reverse proxy, with caching enabled (say in /mnt/disk2/pubRoot, with zone name "cacheA"). However, I have another NGINX (say NGINX-B) which also runs in parallel, and caches its content in /mnt/disk2/frontstore (with zone name "cacheB"). Additionally, there is another application which monitors this "frontstore" and copies its content to "pubRoot". So, effectively, any content that NGINX-B caches in frontstore becomes available in the cache-path which is configured for NGINX-A. However, NGINX-A cannot get a HIT for it when it receives a request, as its metadata (cacheA) does not have the information, since it didn't cache it there in the first place. Is there a mechanism by which an NGINX-A cache lookup can get a HIT in such a case? A work-around I can think of is: if NGINX-A is restarted, it will build its cache based on the available files in its cache-path. This way, it will get its metadata populated with the file in pubRoot. From then on, a new request may result in a cache-HIT. Thanks Rajesh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276611,276611#msg-276611 From nginx-forum at forum.nginx.org Fri Sep 29 09:58:33 2017 From: nginx-forum at forum.nginx.org (rnmx18) Date: Fri, 29 Sep 2017 05:58:33 -0400 Subject: Any method for NGINX (as a web-server) to skip metadata and serve content from cached file?
Message-ID: <338e5c8354015a7f29ecb16c44fb59e2.NginxMailingListEnglish@forum.nginx.org> Hi, While caching files on disk, NGINX-proxy adds certain metadata at the beginning of the file. Can such files be served by NGINX (acting as a server)? Is there a method to skip the metadata part and just serve the content from such a cached file? Apologies if this is a dumb question. Just starting to get familiar with the basics here. Thanks Rajesh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276612,276612#msg-276612 From yks0000 at gmail.com Fri Sep 29 11:38:58 2017 From: yks0000 at gmail.com (Yogesh Sharma) Date: Fri, 29 Sep 2017 17:08:58 +0530 Subject: Nginx CPU Issue : Remain at 100% Message-ID: Team, I am using nginx as a reverse proxy, where I see that once CPU usage goes up for nginx, it never comes down and remains there forever until we kill that worker. We tried tweaking worker_processes to the number of CPUs we have, but it did not help. Any suggestion in this regard will help. *Version:* nginx-1.10.1-1.el6 Below is the config:

user nginx;
worker_processes 2;
pid /var/run/nginx.pid;
worker_rlimit_nofile 65535;

events {
    worker_connections 15000;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format xxxxxxxxx';
    sendfile on;
    server_tokens off;
    resolver 127.0.0.1 valid=30s;
    resolver_timeout 1s;
    keepalive_requests 75;
    keepalive_timeout 5 5;
    server_names_hash_bucket_size 128;
    server_names_hash_max_size 1024;
    proxy_buffers 4 32K;
    proxy_buffer_size 32k;
    proxy_connect_timeout 5s;
    proxy_read_timeout 305s;
    proxy_set_header Host $http_host;
    ignore_invalid_headers off;
    underscores_in_headers on;
    client_header_buffer_size 15k;
    client_body_buffer_size 16K;
    client_max_body_size 300M;
    send_timeout 305s;
    include /etc/nginx/conf.d/*.conf;
}

*Thanks & Regards, Yogesh Sharma* -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Fri Sep 29 11:50:21 2017 From: vbart at nginx.com (Valentin V.
Bartenev) Date: Fri, 29 Sep 2017 14:50:21 +0300 Subject: Nginx CPU Issue : Remain at 100% In-Reply-To: References: Message-ID: <10538001.t6D0DEl2ps@vbart-laptop> On Friday, 29 Sep 2017 14:38:58 MSK Yogesh Sharma wrote: > Team, > > I am using nginx as Reverse proxy, where I see that once CPU goes up for > Nginx it never comes down and remain there forever until we kill that > worker. We tried tweaking worker_processes to number of cpu we have, but it > did not helped. > > Any suggestion in this regards will help. > > *Version:* nginx-1.10.1-1.el6 > [..] First of all, you should update nginx to a current, supported version: 1.13.5 or 1.12.1. There's a big chance that your problem has already been fixed. wbr, Valentin V. Bartenev From r at roze.lv Fri Sep 29 13:16:29 2017 From: r at roze.lv (Reinis Rozitis) Date: Fri, 29 Sep 2017 16:16:29 +0300 Subject: Any method for NGINX (as a web-server) to skip metadata and serve content from cached file? In-Reply-To: <338e5c8354015a7f29ecb16c44fb59e2.NginxMailingListEnglish@forum.nginx.org> References: <338e5c8354015a7f29ecb16c44fb59e2.NginxMailingListEnglish@forum.nginx.org> Message-ID: > While caching files on disk, NGINX-proxy adds certain metadata at the > beginning of the file. > > Can such files be served by NGINX (acting as a server)? Is there a method to > skip the metadata part and just serve the content from such a cached > file? Well, you can use proxy_store ( http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_store / there is also a configuration snippet there ); with that, nginx will save all the upstream files as-is, without any additional metadata. The drawback is that you have to manage the "cache" yourself (expiration / purging), as nginx won't do anything with the stored files.
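A minimal proxy_store arrangement along the lines Reinis describes might look like the sketch below; the root path and origin name are placeholders, and expiry/purging of the stored files is left entirely to you:

```nginx
location / {
    root        /data/mirror;           # serve a stored copy if present
    error_page  404 = @fetch;           # otherwise fetch from the origin
}

location @fetch {
    proxy_pass         http://origin.example.com;
    proxy_store        on;              # save the response body verbatim
    proxy_store_access user:rw group:rw all:r;
    root               /data/mirror;
}
```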
rr From mdounin at mdounin.ru Fri Sep 29 13:18:05 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 29 Sep 2017 16:18:05 +0300 Subject: Can NGINX cache metadata get updated automatically, if file is added through backdoor by another NGINX proxy-cache? In-Reply-To: <6428544b235ba16050f794b843b2e8f0.NginxMailingListEnglish@forum.nginx.org> References: <6428544b235ba16050f794b843b2e8f0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170929131804.GI19617@mdounin.ru> Hello! On Fri, Sep 29, 2017 at 05:00:22AM -0400, rnmx18 wrote: > I have a use-case, where NGINX (say NGINX-process-1) is set up as a reverse > proxy, with caching enabled (say in /mnt/disk2/pubRoot, with zone name > "cacheA"). However, I have another NGINX (say NGINX-Process-B) which also > runs in parallel, and caches its content in (/mnt/disk2/frontstore, with > zone name "cacheB"). Additionally, there is another application which > monitors this "frontstore", and copies its content to "pubRoot". > > So, effectively, any content that NGINX-B caches in frontstore gets > available in the cache-path which is configured for NGINX-A. However, > NGINX-A cannot get it as HIT when it receives a request, as its metadata > (zoneA) does not have the information, as it didn't cache it there in the > first place. > > Is there a mechanism by which NGINX-A cache lookup can get a HIT in such a > case? No. Don't do that. nginx assumes exclusive access to the cache directory, and changing cache files in the directory is very likely to produce highly incorrect results - including errors because of mismatch between in-memory metadata and content of cache files. And, because these errors are never expected to appear in proper environment, at least some of these errors are currently known to be handled incorrectly and may result in socket leaks[1]. 
[1] http://mailman.nginx.org/pipermail/nginx-ru/2017-September/060259.html -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Fri Sep 29 16:33:01 2017 From: nginx-forum at forum.nginx.org (rnmx18) Date: Fri, 29 Sep 2017 12:33:01 -0400 Subject: Any method for NGINX (as a web-server) to skip metadata and serve content from cached file? In-Reply-To: References: Message-ID: Hi Reinis, Thank you for that pointer to the proxy_store directive. I understand that this would be a useful option for static files. However, my application currently cannot handle aspects like expiry, revalidation, eviction, etc. So, I guess I will not be able to use the proxy_store directive for the type of content which I need NGINX to act as a proxy for. Regards Rajesh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276612,276618#msg-276618 From lucas at lucasrolff.com Fri Sep 29 16:38:58 2017 From: lucas at lucasrolff.com (Lucas Rolff) Date: Fri, 29 Sep 2017 16:38:58 +0000 Subject: Any method for NGINX (as a web-server) to skip metadata and serve content from cached file? In-Reply-To: References: Message-ID: Can I ask, what's the problem with having the metadata in the files? On 29/09/2017, 18.33, "nginx on behalf of rnmx18" wrote: Hi Reinis, Thank you for that pointer to the proxy_store directive. I understand that this would be a useful option for static files. However, my application currently cannot handle aspects like expiry, revalidation, eviction, etc. So, I guess I will not be able to use the proxy_store directive for the type of content which I need NGINX to act as a proxy for.
Regards Rajesh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276612,276618#msg-276618 From nginx-forum at forum.nginx.org Fri Sep 29 19:17:45 2017 From: nginx-forum at forum.nginx.org (redbaritone) Date: Fri, 29 Sep 2017 15:17:45 -0400 Subject: Reverse Proxy problems Message-ID: <5ad9e4aeda7aaff952e1c62b75240d72.NginxMailingListEnglish@forum.nginx.org> I am having some problems with my Nginx reverse proxy. I'm running a web application on port 8010 that accepts and serves two different web sites from that port. (As far as I know, there's no way to serve them on different ports with the server I'm using.) I have different host names for each which are pointed to the same IP address. I'll call them name1.domain.com:8010 and name2.domain.com:8010. Both of these successfully resolve to the appropriate web root within my web application, and the appropriate web site is rendered, IF I include the port number. I want to use Nginx as a reverse proxy however, so that I'll have one place to configure my SSL settings and redirect from old hostnames. I set up my proxy in nginx two different ways, and neither of them consistently resolves to the right website: 1: I set up one upstream server and accessed it through proxy_pass from both server definitions:

upstream my_server{
    server 127.0.0.1:8010;
}

server {
    listen 80;
    server_name name1.domain.com;
    location / {
        root /location_1
        proxy_pass http://my_server;
        ...
    }
}

server {
    listen 80;
    server_name name2.domain.com;
    location / {
        root /location_2
        proxy_pass http://my_server;
        ...
    }
}

Please note that I'm just trying to get the reverse proxy to work. Once I do that, I'll add SSL requirements, and all the necessary rewrites to make sure people are redirected to our secured interface.
The second way I tried this was to create a different upstream for each website, using the full DNS names for each, and then calling the appropriate upstream proxy from each server definition:

upstream name1_server{
    server name1.domain.com:8010;
}

upstream name2_server{
    server name2.domain.com:8010;
}

... (the same as above, except replacing my_server with name1/2_server at proxy_pass) Both ways gave the same results. After restarting my web application and nginx (just to make sure I start from a clean slate), both name1.domain.com and name2.domain.com resolve to the name1.domain.com:8010 website. However, if I go to name2.domain.com:8010, then both name1.domain.com and name2.domain.com will resolve to that website. Going to name1.domain.com:8010 then causes both portless addresses to resolve there, until I visit name2.domain.com:8010 directly again. Obviously, I don't understand the relationship between how nginx deals with upstream declarations and how that passes along to my web application. Any help would be appreciated. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276620,276620#msg-276620 From nginx-forum at forum.nginx.org Fri Sep 29 19:30:58 2017 From: nginx-forum at forum.nginx.org (rnmx18) Date: Fri, 29 Sep 2017 15:30:58 -0400 Subject: Any method for NGINX (as a web-server) to skip metadata and serve content from cached file? In-Reply-To: References: Message-ID: <3c962b65aee33d6ab44a2beefa4ffb42.NginxMailingListEnglish@forum.nginx.org> Hi Lucas, As long as the cached files (with the metadata at the beginning) reside in the directory specified with the proxy_cache_path directive, they are fine. The NGINX-proxy, which added them there in the first place, can serve the content correctly after skipping the right amount of metadata bytes. In my case, I have a background application which might copy the cached file to an alternate location.
For example, a file movies/welcome.mp4 may be originally cached by NGINX-proxy on /disk1/cache as /disk1/cache/movies/welcome.mp4 (for the moment, let us forget the md5-based cache path for simplicity). In my use-case, the file may either stay there itself, or sometimes my application may copy it to another location - say /disk2/cache/movies/welcome.mp4 or /disk3/cache/movies/welcome.mp4 - say for some kind of disk usage balancing. The application exposes /disk1/pubroot/cache/movies/welcome.mp4 as the published location. In this model, only the originally cached file at /disk1/cache can be served properly by the NGINX-proxy. The files in any of the other locations cannot be served by NGINX properly. It cannot serve them as a plain web server either, as the copied files contain the metadata. Even if I have other proxy-cache-paths defined for the alternate locations (/disk2/cache or /disk3/cache), the NGINX-proxy also cannot serve them, as the corresponding in-memory metadata will not have the entries for these files. The files in those paths would have been physically copied "under the hood" by another process, and not by NGINX. Thanks Rajesh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276612,276621#msg-276621 From lucas at lucasrolff.com Fri Sep 29 19:55:32 2017 From: lucas at lucasrolff.com (Lucas Rolff) Date: Fri, 29 Sep 2017 19:55:32 +0000 Subject: Any method for NGINX (as a web-server) to skip metadata and serve content from cached file? In-Reply-To: <3c962b65aee33d6ab44a2beefa4ffb42.NginxMailingListEnglish@forum.nginx.org> References: <3c962b65aee33d6ab44a2beefa4ffb42.NginxMailingListEnglish@forum.nginx.org> Message-ID: > In this model, only the originally cached file at /disk1/cache can be served properly by the NGINX-proxy. You can balance disk usage using the split_clients module in nginx, and use different proxy_cache paths (e.g.
/disk1/cache, /disk2/cache and so on) as described in https://www.nginx.com/blog/nginx-caching-guide/ (Splitting the cache across multiple hard drives). On 29/09/2017, 21.31, "nginx on behalf of rnmx18" wrote: Hi Lucas, As long as the cached files (with the metadata at the beginning) reside in the directory specified with the proxy_cache_path directive, they are fine. The NGINX-proxy, which added them there in the first place, can serve the content correctly after skipping the right amount of metadata bytes. In my case, I have a background application which might copy the cached file to an alternate location. For example, a file movies/welcome.mp4 may be originally cached by NGINX-proxy on /disk1/cache as /disk1/cache/movies/welcome.mp4 (for the moment, let us forget the md5-based cache path for simplicity). In my use-case, the file may either stay there itself, or sometimes my application may copy it to another location - say /disk2/cache/movies/welcome.mp4 or /disk3/cache/movies/welcome.mp4 - say for some kind of disk usage balancing. The application exposes /disk1/pubroot/cache/movies/welcome.mp4 as the published location. In this model, only the originally cached file at /disk1/cache can be served properly by the NGINX-proxy. The files in any of the other locations cannot be served by NGINX properly. It cannot serve them as a plain web server either, as the copied files contain the metadata. Even if I have other proxy-cache-paths defined for the alternate locations (/disk2/cache or /disk3/cache), the NGINX-proxy also cannot serve them, as the corresponding in-memory metadata will not have the entries for these files. The files in those paths would have been physically copied "under the hood" by another process, and not by NGINX.
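The approach from the caching guide Lucas links to can be sketched roughly as follows (zone names, paths and percentages are placeholders; proxy_cache accepts a variable since nginx 1.7.9):

```nginx
split_clients $request_uri $cache_zone {
    50%  "disk1";
    *    "disk2";
}

proxy_cache_path /disk1/cache keys_zone=disk1:10m;
proxy_cache_path /disk2/cache keys_zone=disk2:10m;

server {
    location / {
        proxy_cache $cache_zone;   # a given URI always maps to the same disk
        proxy_pass  http://origin;
    }
}
```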
Thanks Rajesh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276612,276621#msg-276621 From nginx-forum at forum.nginx.org Fri Sep 29 20:16:04 2017 From: nginx-forum at forum.nginx.org (rnmx18) Date: Fri, 29 Sep 2017 16:16:04 -0400 Subject: Any method for NGINX (as a web-server) to skip metadata and serve content from cached file? In-Reply-To: References: Message-ID: <28b0feb226771dd81d8d4a21a9fdc51c.NginxMailingListEnglish@forum.nginx.org> Hi Lucas, Thanks for that suggestion about the split_clients directive. I will consider it in my evaluation. Regards Rajesh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276612,276623#msg-276623 From nginx-forum at forum.nginx.org Fri Sep 29 20:27:54 2017 From: nginx-forum at forum.nginx.org (rnmx18) Date: Fri, 29 Sep 2017 16:27:54 -0400 Subject: Can the cacheloader process stay alive and keep rebuilding or updating the cache metadata? Message-ID: <5874591bb0198c0176c2913170d65a21.NginxMailingListEnglish@forum.nginx.org> Hi, As I understand it, during startup the cache loader process scans the files in the defined proxy-cache-path directories and builds up the in-memory metadata. Once the metadata is built up, the cache loader process exits. Is there any mechanism by which this cache loader process can be made to stay alive, so that it can periodically rebuild/update the in-memory metadata by monitoring files in the corresponding directory? Thanks Rajesh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276624,276624#msg-276624 From nginx-forum at forum.nginx.org Fri Sep 29 20:33:02 2017 From: nginx-forum at forum.nginx.org (rnmx18) Date: Fri, 29 Sep 2017 16:33:02 -0400 Subject: Can NGINX cache metadata get updated automatically, if file is added through backdoor by another NGINX proxy-cache?
In-Reply-To: <20170929131804.GI19617@mdounin.ru> References: <20170929131804.GI19617@mdounin.ru> Message-ID: Hi Maxim, Thanks for your inputs. I now understand that it is probably better not to externally interfere with the contents of the cache directory assigned to NGINX. Regards Rajesh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276611,276625#msg-276625 From lucas at lucasrolff.com Fri Sep 29 20:36:30 2017 From: lucas at lucasrolff.com (Lucas Rolff) Date: Fri, 29 Sep 2017 20:36:30 +0000 Subject: Can the cacheloader process stay alive and keep rebuilding or updating the cache metadata? In-Reply-To: <5874591bb0198c0176c2913170d65a21.NginxMailingListEnglish@forum.nginx.org> References: <5874591bb0198c0176c2913170d65a21.NginxMailingListEnglish@forum.nginx.org> Message-ID: <51A679AA-1A21-442C-8F30-3E447B80EE78@lucasrolff.com>
In-Reply-To: <51A679AA-1A21-442C-8F30-3E447B80EE78@lucasrolff.com>
References: <51A679AA-1A21-442C-8F30-3E447B80EE78@lucasrolff.com>
Message-ID: <1e6d74881d80dbcadcc228eb9c31f7b8.NginxMailingListEnglish@forum.nginx.org>

It would help in a use case where there are two NGINX processes, both working with the same cache directory.

NGINX-A runs with a proxy_cache_path of /disk1/cache with zone name "cacheA".

NGINX-B runs with the same proxy_cache_path of /disk1/cache with zone name "cacheB".

When NGINX-B adds content to the cache (say for URL test/a.html), the file gets added to the cache as /disk1/cache/test/a.html (again, ignoring the md5 hashing for simplicity).

I think it would be nice if a subsequent request for this URL to NGINX-A resulted in a hit, as the file is available on disk. However, today it does not result in a HIT, as NGINX-A has no in-memory metadata for this URL. So it would fetch the object from the origin, add it to the cache again, and update its in-memory metadata.

Failing that, a restart of NGINX-A would build up the cache metadata for the files found in the cache directory.

Thanks,
Rajesh

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276624,276627#msg-276627

From nginx-forum at forum.nginx.org  Fri Sep 29 21:05:01 2017
From: nginx-forum at forum.nginx.org (rnmx18)
Date: Fri, 29 Sep 2017 17:05:01 -0400
Subject: Can the cache loader process stay alive and keep rebuilding or updating the cache metadata?
In-Reply-To: <1e6d74881d80dbcadcc228eb9c31f7b8.NginxMailingListEnglish@forum.nginx.org>
References: <51A679AA-1A21-442C-8F30-3E447B80EE78@lucasrolff.com> <1e6d74881d80dbcadcc228eb9c31f7b8.NginxMailingListEnglish@forum.nginx.org>

I should probably clarify that the use case I described above would use the same cache configuration for both NGINX processes - the same proxy_cache_path and proxy_cache_key specifications. Using a different cache key would obviously create conflicting cached file paths.
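In configuration terms, the setup described above would look roughly like this (a hypothetical sketch - only the cache-path lines are shown, and the levels/inactive/max_size parameters are omitted):

```nginx
# Hypothetical fragment for instance NGINX-A
proxy_cache_path /disk1/cache keys_zone=cacheA:10m;

# Hypothetical fragment for instance NGINX-B, pointing at the same directory
proxy_cache_path /disk1/cache keys_zone=cacheB:10m;
```

Note that each keys_zone is a shared-memory segment private to its own master/worker set, so a file written to disk by NGINX-B stays invisible to cacheA until NGINX-A's cache loader rescans the directory - which only happens at startup.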
Thanks,
Rajesh

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276624,276628#msg-276628

From lucas at slcoding.com  Fri Sep 29 21:06:36 2017
From: lucas at slcoding.com (Lucas Rolff)
Date: Fri, 29 Sep 2017 23:06:36 +0200
Subject: Can the cache loader process stay alive and keep rebuilding or updating the cache metadata?
In-Reply-To: <1e6d74881d80dbcadcc228eb9c31f7b8.NginxMailingListEnglish@forum.nginx.org>
References: <51A679AA-1A21-442C-8F30-3E447B80EE78@lucasrolff.com> <1e6d74881d80dbcadcc228eb9c31f7b8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <59CEB5DC.5050502@slcoding.com>

> It would help in a use case where there are two NGINX processes, both
> working with the same cache directory.

Why would you want two nginx processes to use the same cache directory? Explain your situation, what your end goal is, etc.

If it's to minimize the number of origin requests, you can build multiple layers of cache (fast and slow storage if you want), use load-balancing mechanisms such as URI-based balancing to spread the cache across multiple servers, and maybe use some of the special flags for balancing, so that even if a machine goes down it wouldn't cause a full reshuffle of the cached data.

I'm sure that regardless of what your goal is, someone here will be able to suggest a (better) and already supported solution.

rnmx18 wrote:
> It would help in a use case where there are two NGINX processes, both
> working with the same cache directory.
>
> NGINX-A runs with a proxy_cache_path of /disk1/cache with zone name "cacheA".
>
> NGINX-B runs with the same proxy_cache_path of /disk1/cache with zone name
> "cacheB".
>
> When NGINX-B adds content to the cache (say for URL test/a.html), the file
> gets added to the cache as /disk1/cache/test/a.html (again, ignoring the md5
> hashing for simplicity).
>
> I think it would be nice if a subsequent request for this URL to NGINX-A
> would result in a hit, as the file is available on disk.
> However, today it does not result in a HIT, as NGINX-A has no in-memory
> metadata for this URL. So it would fetch the object from the origin, add it
> to the cache again, and update its in-memory metadata.
>
> Failing that, a restart of NGINX-A would build up the cache metadata for
> the files found in the cache directory.
>
> Thanks,
> Rajesh
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276624,276627#msg-276627

From ph.gras at worldonline.fr  Fri Sep 29 22:06:00 2017
From: ph.gras at worldonline.fr (Ph. Gras)
Date: Sat, 30 Sep 2017 00:06:00 +0200
Subject: Why should I have a 307 redirect?

Hi there,

can somebody tell me why I get a 307 redirect response code when I have turned my redirect-to-https directive off?

# curl -I https://www.avoirun.com
HTTP/1.1 200 OK
Server: nginx
Date: Fri, 29 Sep 2017 21:48:40 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
X-Pingback: https://www.avoirun.com/xmlrpc.php
Link: ; rel=shortlink

# curl -I http://www.avoirun.com
HTTP/1.1 307 Temporary Redirect
Server: nginx
Date: Fri, 29 Sep 2017 21:49:19 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
X-Pingback: http://www.avoirun.com/xmlrpc.php
Location: https://www.avoirun.com

# location = / {
#     return 301 https://$host$request_uri;
# }

location / {
    # First attempt to serve the request as a file, then
    # as a directory, then fall back to displaying a 404.
    #try_files $uri $uri/ =404;
    try_files $uri $uri/ /index.php?$args;
    #try_files $uri $uri/ /index.php;
    root /usr/share/wordpress;
    if ($http_user_agent ~ "MJ12bot|SemrushBot|Windows NT 5\.1\; rv:7\.0\.1") {
        return 403;
    }
}

Thanks for your help!

Ph. Gras

From r at roze.lv  Sat Sep 30 07:39:41 2017
From: r at roze.lv (Reinis Rozitis)
Date: Sat, 30 Sep 2017 10:39:41 +0300
Subject: Why should I have a 307 redirect?
Message-ID: <000d01d339bf$47695ae0$d63c10a0$@roze.lv>

> can somebody tell me why I get a 307 redirect response code when I have
> turned my redirect-to-https directive off?

> # curl -I http://www.avoirun.com
> HTTP/1.1 307 Temporary Redirect
> Server: nginx
..
> X-Pingback: http://www.avoirun.com/xmlrpc.php
> Location: https://www.avoirun.com

It's not done by nginx but by your application/WordPress (PHP).

rr

From yks0000 at gmail.com  Sat Sep 30 07:43:55 2017
From: yks0000 at gmail.com (Yogesh Sharma)
Date: Sat, 30 Sep 2017 07:43:55 +0000
Subject: Nginx CPU Issue : Remain at 100%
In-Reply-To: <10538001.t6D0DEl2ps@vbart-laptop>
References: <10538001.t6D0DEl2ps@vbart-laptop>

Thank you, Valentin. Will give it a try.

On Fri, 29 Sep 2017 at 5:20 PM, Valentin V. Bartenev wrote:

> On Friday, 29 Sep 2017 14:38:58 MSK Yogesh Sharma wrote:
> > Team,
> >
> > I am using nginx as a reverse proxy, and I see that once CPU usage goes
> > up for nginx it never comes down and remains there forever until we kill
> > that worker. We tried setting worker_processes to the number of CPUs we
> > have, but it did not help.
> >
> > Any suggestion in this regard will help.
> >
> > *Version:* nginx-1.10.1-1.el6
> [..]
>
> First of all, you should update nginx to an actual and supported version:
> 1.13.5 or 1.12.1.
>
> There's a big chance that your problem was already fixed.
>
> wbr, Valentin V. Bartenev

--
Yogesh Sharma
From nginx-forum at forum.nginx.org  Sat Sep 30 10:05:15 2017
From: nginx-forum at forum.nginx.org (Dejan Grofelnik Pelzel)
Date: Sat, 30 Sep 2017 06:05:15 -0400
Subject: limit_conn is dropping valid connections and causing memory leaks on nginx reload
Message-ID: <599c564bab9d7b70a8729e87c47a710b.NginxMailingListEnglish@forum.nginx.org>

Hello,

We are running nginx 1.13.5 with HTTP/2 in a proxy_pass plus proxy_cache configuration, with clients holding relatively long-lived open connections. Our system automatically reloads nginx for any new configuration, and we recently introduced limit_conn in some of the config files. After that, I started noticing a rapid drop in connections and outgoing traffic every time the system performed a configuration reload. Even stranger, on every reload the memory usage would go up by about 1-2 GB until ultimately everything crashed if the reloads were too frequent. The memory usage did go down after old workers were released, but that could take up to 30 minutes, while the configuration could get reloaded up to twice per minute.

We used the following configuration, as recommended by pretty much every example:

limit_conn_zone $binary_remote_addr zone=1234con:10m;
limit_conn 1234con 10;

I was able to verify the connection drop with a simple ab test; for example, I would run:

ab -c 100 -n 1000 -k https://127.0.0.1/file.bin

990 of the connections went through; however, 10 would still be active. Immediately after a reload, those would get dropped as well. Adding the -r option would help, but that doesn't fix our problem.

Finally, while trying to create a workaround, I configured the limit zone as:

limit_conn_zone "v$binary_remote_addr" zone=1234con:10m;

Suddenly everything magically started to work: the connections were not being dropped, the limit worked as expected, and even more surprisingly the memory usage was not going up anymore. I had been tearing my hair out almost all day yesterday trying to figure this out.
While I was very happy to see this resolved, I am now confused as to why nginx behaves this way. I think this is likely a bug, so I'm wondering if anyone could explain why it is happening, or has seen a similar problem.

Thank you!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276633,276633#msg-276633

From anoopalias01 at gmail.com  Sat Sep 30 10:13:57 2017
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Sat, 30 Sep 2017 15:43:57 +0530
Subject: limit_conn is dropping valid connections and causing memory leaks on nginx reload
In-Reply-To: <599c564bab9d7b70a8729e87c47a710b.NginxMailingListEnglish@forum.nginx.org>
References: <599c564bab9d7b70a8729e87c47a710b.NginxMailingListEnglish@forum.nginx.org>

What is the change (workaround) you made? I don't see a difference.

On Sat, Sep 30, 2017 at 3:35 PM, Dejan Grofelnik Pelzel <nginx-forum at forum.nginx.org> wrote:

> Hello,
>
> We are running nginx 1.13.5 with HTTP/2 in a proxy_pass plus proxy_cache
> configuration, with clients holding relatively long-lived open connections.
> Our system automatically reloads nginx for any new configuration, and we
> recently introduced limit_conn in some of the config files. After that, I
> started noticing a rapid drop in connections and outgoing traffic every
> time the system performed a configuration reload. Even stranger, on every
> reload the memory usage would go up by about 1-2 GB until ultimately
> everything crashed if the reloads were too frequent. The memory usage did
> go down after old workers were released, but that could take up to 30
> minutes, while the configuration could get reloaded up to twice per minute.
>
> We used the following configuration, as recommended by pretty much every
> example:
>
> limit_conn_zone $binary_remote_addr zone=1234con:10m;
> limit_conn 1234con 10;
>
> I was able to verify the connection drop with a simple ab test; for
> example, I would run:
>
> ab -c 100 -n 1000 -k https://127.0.0.1/file.bin
>
> 990 of the connections went through; however, 10 would still be active.
> Immediately after a reload, those would get dropped as well. Adding the -r
> option would help, but that doesn't fix our problem.
>
> Finally, while trying to create a workaround, I configured the limit zone
> as:
>
> limit_conn_zone "v$binary_remote_addr" zone=1234con:10m;
>
> Suddenly everything magically started to work: the connections were not
> being dropped, the limit worked as expected, and even more surprisingly
> the memory usage was not going up anymore. I had been tearing my hair out
> almost all day yesterday trying to figure this out. While I was very happy
> to see this resolved, I am now confused as to why nginx behaves this way.
>
> I think this is likely a bug, so I'm wondering if anyone could explain why
> it is happening, or has seen a similar problem.
>
> Thank you!
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276633,276633#msg-276633

--
Anoop P Alias
From lucas at lucasrolff.com  Sat Sep 30 10:15:18 2017
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Sat, 30 Sep 2017 10:15:18 +0000
Subject: limit_conn is dropping valid connections and causing memory leaks on nginx reload
References: <599c564bab9d7b70a8729e87c47a710b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <491EEA62-6ABA-449D-96B4-4C6DCCE00613@lucasrolff.com>

Anoop,

He added "v" and double quotes around $binary_remote_addr.

Best Regards,

From: nginx on behalf of Anoop Alias
Reply-To: "nginx at nginx.org"
Date: Saturday, 30 September 2017 at 12.14
To: Nginx
Subject: Re: limit_conn is dropping valid connections and causing memory leaks on nginx reload

What is the change (workaround) you made? I don't see a difference.

On Sat, Sep 30, 2017 at 3:35 PM, Dejan Grofelnik Pelzel <nginx-forum at forum.nginx.org> wrote:

Hello,

We are running nginx 1.13.5 with HTTP/2 in a proxy_pass plus proxy_cache configuration, with clients holding relatively long-lived open connections. Our system automatically reloads nginx for any new configuration, and we recently introduced limit_conn in some of the config files. After that, I started noticing a rapid drop in connections and outgoing traffic every time the system performed a configuration reload. Even stranger, on every reload the memory usage would go up by about 1-2 GB until ultimately everything crashed if the reloads were too frequent. The memory usage did go down after old workers were released, but that could take up to 30 minutes, while the configuration could get reloaded up to twice per minute.

We used the following configuration, as recommended by pretty much every example:

limit_conn_zone $binary_remote_addr zone=1234con:10m;
limit_conn 1234con 10;

I was able to verify the connection drop with a simple ab test; for example, I would run:

ab -c 100 -n 1000 -k https://127.0.0.1/file.bin

990 of the connections went through; however, 10 would still be active. Immediately after a reload, those would get dropped as well.
Adding the -r option would help, but that doesn't fix our problem.

Finally, while trying to create a workaround, I configured the limit zone as:

limit_conn_zone "v$binary_remote_addr" zone=1234con:10m;

Suddenly everything magically started to work: the connections were not being dropped, the limit worked as expected, and even more surprisingly the memory usage was not going up anymore. I had been tearing my hair out almost all day yesterday trying to figure this out. While I was very happy to see this resolved, I am now confused as to why nginx behaves this way.

I think this is likely a bug, so I'm wondering if anyone could explain why it is happening, or has seen a similar problem.

Thank you!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276633,276633#msg-276633

--
Anoop P Alias

From nginx-forum at forum.nginx.org  Sat Sep 30 12:47:37 2017
From: nginx-forum at forum.nginx.org (Dejan Grofelnik Pelzel)
Date: Sat, 30 Sep 2017 08:47:37 -0400
Subject: limit_conn is dropping valid connections and causing memory leaks on nginx reload
In-Reply-To: <599c564bab9d7b70a8729e87c47a710b.NginxMailingListEnglish@forum.nginx.org>
References: <599c564bab9d7b70a8729e87c47a710b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <8c31f80a248d0295b2f549cd05ea3287.NginxMailingListEnglish@forum.nginx.org>

Upon further investigation, I noticed that changing the zone key to "v$binary_remote_addr" actually breaks nginx: it prevents the configuration from being applied on reload at all (even though the reload itself appears to go through successfully) until the next restart, and after that it begins dropping the connections on reload again.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276633,276636#msg-276636

From francis at daoine.org  Sat Sep 30 12:50:18 2017
From: francis at daoine.org (Francis Daly)
Date: Sat, 30 Sep 2017 13:50:18 +0100
Subject: Reverse Proxy problems
In-Reply-To: <5ad9e4aeda7aaff952e1c62b75240d72.NginxMailingListEnglish@forum.nginx.org>
References: <5ad9e4aeda7aaff952e1c62b75240d72.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20170930125018.GX20907@daoine.org>

On Fri, Sep 29, 2017 at 03:17:45PM -0400, redbaritone wrote:

Hi there,

> I am having some problems with my Nginx reverse proxy. I'm running a web
> application on port 8010 that accepts and serves two different web sites
> from that port.

The most likely way that the web application does that is by selecting which site to serve based on the Host: header in the incoming request.

> name1.domain.com:8010 and name2.domain.com:8010.

> 1: I setup one upstream server and accessed it through proxy_pass from both
> server definitions:
>
> upstream my_server {
>     server 127.0.0.1:8010;
> }
>
> server {
>     server_name name1.domain.com;
>
>     location / {
>         proxy_pass http://my_server;

nginx will send a Host: header of "my_server".

> server {
>     server_name name2.domain.com;
>
>     location / {
>         proxy_pass http://my_server;

nginx will also send a Host: header of "my_server". Your web application will presumably handle those two requests in the same way.

> The second way I tried this was to create a different upstream for each
> website, using the full DNS names for each, and then calling the appropriate
> upstream proxy from each server definition:
>
> upstream name1_server {
>     server name1.domain.com:8010;
> }
>
> upstream name2_server {
>     server name2.domain.com:8010;
> }
>
> ...
> (the same as above, except replacing my_server with name1/2_server at
> proxy_pass)

nginx will send a Host: header of "name1_server" or "name2_server"; your web application *could* distinguish based on those, but it is probably configured to distinguish only based on the names name1.domain.com and name2.domain.com.

> Obviously, I don't understand the relationship between how nginx deals with
> upstream declarations and how that passes along to my web application. Any
> help would be appreciated.

Either use something like "proxy_pass http://name2.domain.com;", or use your current "proxy_pass" lines but add something like "proxy_set_header Host name2.domain.com;" -- either of those would have nginx sending a Host header of name2.domain.com, which is probably what you want.

	f
--
Francis Daly        francis at daoine.org
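The second suggestion could be sketched as the following fragment (hypothetical and untested; hostnames are taken from the thread, and $host is used instead of hard-coding each name, so one server block can cover both sites):

```nginx
upstream my_server {
    server 127.0.0.1:8010;
}

server {
    listen 80;
    server_name name1.domain.com name2.domain.com;

    location / {
        proxy_pass http://my_server;
        # Forward the name the client asked for instead of the
        # default "my_server", so the backend can pick the right site.
        proxy_set_header Host $host;
    }
}
```

With separate server blocks, an explicit "proxy_set_header Host name1.domain.com;" (and name2 respectively) would achieve the same effect.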