From nginx-forum at forum.nginx.org Mon Apr 2 07:47:12 2018 From: nginx-forum at forum.nginx.org (bcoz123) Date: Mon, 02 Apr 2018 03:47:12 -0400 Subject: Is grpc keepalive supported ? In-Reply-To: <20180331222521.GT77253@mdounin.ru> References: <20180331222521.GT77253@mdounin.ru> Message-ID: <66d8d0994692c41431c104e1556b4e22.NginxMailingListEnglish@forum.nginx.org> Thanks, Maxim, I have another question. If there are multiple grpc clients in the front, does nginx reuse the same connection to the backend grpc server? By my test, it seems not. In my opinion, http2 can support that, and it can save resources by avoiding the creation of a new TCP connection for each grpc request. Do you plan to support that in the future? Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279261,279269#msg-279269 From mdounin at mdounin.ru Mon Apr 2 12:49:36 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Apr 2018 15:49:36 +0300 Subject: Is grpc keepalive supported ? In-Reply-To: <66d8d0994692c41431c104e1556b4e22.NginxMailingListEnglish@forum.nginx.org> References: <20180331222521.GT77253@mdounin.ru> <66d8d0994692c41431c104e1556b4e22.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180402124936.GU77253@mdounin.ru> Hello! On Mon, Apr 02, 2018 at 03:47:12AM -0400, bcoz123 wrote: > If there are multiple grpc clients in the front, > does nginx reuse the same connection to the backend grpc server? > By my test, it seems not. > In my opinion, http2 can support that, > and it can save resources by avoiding the creation of a new TCP > connection for each grpc request. > Do you plan to support that in the future? Connections to backend grpc servers are not bound to particular clients, and if there are cached keepalive connections, they will be used for any client request. We don't try to multiplex several requests within a single backend connection though. There are no plans to support this. 
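[Editor's note: the behavior Maxim describes corresponds to a configuration along the lines of the sketch below; the upstream name and socket address are illustrative, not taken from this thread. Cached keepalive connections to the upstream are reused for requests from any client, but each cached connection carries one request at a time.]

```nginx
# Sketch: connection reuse (keepalive) to a gRPC backend, no multiplexing.
upstream grpc_backend {
    server 127.0.0.1:50051;   # example backend address
    keepalive 16;             # cache up to 16 idle connections per worker
}

server {
    listen 443 ssl http2;

    location / {
        grpc_pass grpc://grpc_backend;
    }
}
```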
-- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Tue Apr 3 10:51:17 2018 From: nginx-forum at forum.nginx.org (bcoz123) Date: Tue, 03 Apr 2018 06:51:17 -0400 Subject: Is grpc keepalive supported ? In-Reply-To: <20180402124936.GU77253@mdounin.ru> References: <20180402124936.GU77253@mdounin.ru> Message-ID: Thanks, Maxim. Have a nice day Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279261,279283#msg-279283 From mdounin at mdounin.ru Tue Apr 3 14:56:07 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 3 Apr 2018 17:56:07 +0300 Subject: nginx-1.13.11 Message-ID: <20180403145607.GD77253@mdounin.ru> Changes with nginx 1.13.11 03 Apr 2018 *) Feature: the "proxy_protocol" parameter of the "listen" directive now supports the PROXY protocol version 2. *) Bugfix: nginx could not be built with OpenSSL 1.1.1 statically on Linux. *) Bugfix: in the "http_404", "http_500", etc. parameters of the "proxy_next_upstream" directive. -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Apr 3 16:46:39 2018 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 3 Apr 2018 12:46:39 -0400 Subject: [nginx-announce] nginx-1.13.11 In-Reply-To: <20180403145614.GE77253@mdounin.ru> References: <20180403145614.GE77253@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.13.11 for Windows https://kevinworthington.com/nginxwin11311 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. 
Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Apr 3, 2018 at 10:56 AM, Maxim Dounin wrote: > [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamesbtobin at gmail.com Wed Apr 4 10:15:40 2018 From: jamesbtobin at gmail.com (James Tobin) Date: Wed, 4 Apr 2018 11:15:40 +0100 Subject: JOB | Permanent Web Developer (New York) Message-ID: Hello, I'm working with an employer that is looking to hire a permanent web developer (for their New York office) with fixed income or foreign exchange experience. Consequently I had hoped that some members of this mailing list may like to discuss further; off-list. I can be reached using "JamesBTobin (at) Gmail (dot) Com". Kind regards, James From John.Melom at spok.com Wed Apr 4 21:20:20 2018 From: John.Melom at spok.com (John Melom) Date: Wed, 4 Apr 2018 21:20:20 +0000 Subject: Nginx throttling issue? 
In-Reply-To: References: <20180327115506.GF77253@mdounin.ru> Message-ID: Hi Maxim, I've looked at the nstat data and found the following values for counters: > nstat -az | grep -I listen TcpExtListenOverflows 0 0.0 TcpExtListenDrops 0 0.0 TcpExtTCPFastOpenListenOverflow 0 0.0 nstat -az | grep -i retra TcpRetransSegs 12157 0.0 TcpExtTCPLostRetransmit 0 0.0 TcpExtTCPFastRetrans 270 0.0 TcpExtTCPForwardRetrans 11 0.0 TcpExtTCPSlowStartRetrans 0 0.0 TcpExtTCPRetransFail 0 0.0 TcpExtTCPSynRetrans 25 0.0 Assuming the above "Listen" counters provide data about the overflow issue you mention, then there are no overflows on my system. While retransmissions are happening, it doesn't seem they are related to listen queue overflows. Am I looking at the correct data items? Is my interpretation of the data correct? If so, do you have any other ideas I could investigate? Thanks, John -----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of John Melom Sent: Tuesday, March 27, 2018 8:52 AM To: nginx at nginx.org Subject: RE: Nginx throttling issue? Maxim, Thank you for your reply. I will look to see if "netstat -s" detects any listen queue overflows. John -----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Maxim Dounin Sent: Tuesday, March 27, 2018 6:55 AM To: nginx at nginx.org Subject: Re: Nginx throttling issue? Hello! On Mon, Mar 26, 2018 at 08:21:27PM +0000, John Melom wrote: > I am load testing our system using Jmeter as a load generator. > We execute a script consisting of an https request executing in a > loop. The loop does not contain a think time, since at this point I > am not trying to emulate a "real user". I want to get a quick look at > our system capacity. Load on our system is increased by increasing > the number of Jmeter threads executing our script. Each Jmeter thread > references different data. 
> > Our system is in AWS with an ELB fronting Nginx, which serves as a > reverse proxy for our Docker Swarm application cluster. > > At moderate loads, a subset of our https requests start experiencing > a 1 second delay in addition to their normal response time. The > delay is not due to resource contention. > System utilizations remain low. The response times cluster around 4 > values: 0 milliseconds, 50 milliseconds, 1 second, and 1.050 > seconds. Right now, I am most interested in understanding and > eliminating the 1 second delay that gives the clusters at 1 second and > 1.050 seconds. > > The attachment shows a response time scatterplot from one of our runs. > The x-axis is the number of seconds into the run, the y-axis is the > response time in milliseconds. The plotted data shows the response > time of requests at the time they occurred in the run. > > If I run the test bypassing the ELB and Nginx, this delay does not > occur. > If I bypass the ELB, but include Nginx in the request path, the delay > returns. > > This leads me to believe the 1 second delay is coming from Nginx. There are no magic 1 second delays in nginx - unless you've configured something explicitly. Most likely, the 1 second delay is coming from TCP retransmission timeout during connection establishment due to listen queue overflows. Check "netstat -s" to see if there are any listen queue overflows on your hosts. [...] -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx ________________________________ NOTE: This email message and any attachments are for the sole use of the intended recipient(s) and may contain confidential and/or privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. 
If you have received this e-mail in error, please contact the sender by replying to this email, and destroy all copies of the original message and any material included with this email. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From peter_booth at me.com Thu Apr 5 04:45:45 2018 From: peter_booth at me.com (Peter Booth) Date: Thu, 05 Apr 2018 00:45:45 -0400 Subject: Nginx throttling issue? In-Reply-To: References: <20180327115506.GF77253@mdounin.ru> Message-ID: <85F52145-2C63-45C9-A581-AE609843A8CB@me.com> John, I think that you need to understand what is happening on your host throughout the duration of the test. Specifically, what is happening with the TCP connections. If you run netstat, grep for tcp, and do this in a loop every five seconds or so, you'll see how many connections get created at peak. If the thing you are testing exists in production then you are lucky. You can do the same in production and see what it is that you need to replicate. You didn't mention whether you had persistent connections (HTTP keep-alive) configured. This is key to maximizing scalability. You did say that you were using SSL. If it were me I'd use a load generator that more closely resembles the behavior of real users on a website. Wrk2, Tsung, httperf, and Gatling are examples of some that do. Using jmeter with zero think time is a very common anti-pattern that doesn't behave anything like real users. I think of it as the lazy performance tester pattern. 
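[Editor's note: Peter's suggestion of sampling netstat in a loop can be sketched with the hypothetical helper below; it assumes a Linux host with netstat (net-tools) installed, and the function name is not from the thread.]

```shell
#!/bin/sh
# Summarize TCP connection states from netstat-style lines on stdin,
# printing "STATE count" pairs so you can watch connection counts peak.
tcp_state_summary() {
    awk '/^tcp/ { count[$6]++ } END { for (s in count) print s, count[s] }'
}

# During a load test, sample every five seconds, e.g.:
#   while sleep 5; do date; netstat -ant | tcp_state_summary; done
```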
Imagine a real web server under heavy load from human beings. You will see thousands of concurrent connections but fewer concurrent requests in flight. With the jmeter zero think time model you are either creating new connections or reusing them - so either you have a shitload of connections and your nginx process starts running out of file handles, or you are jamming requests down a single connection - neither of which resembles reality. If you are committed to using jmeter for some reason, then use more instances with real think times. Each instance's connection will have a different source port. Sent from my iPhone > On Apr 4, 2018, at 5:20 PM, John Melom wrote: > > Hi Maxim, > > [...] From m16+nginx at monksofcool.net Fri Apr 6 16:02:13 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Fri, 6 Apr 2018 18:02:13 +0200 Subject: Why are my CGI scripts not executed like PHP ? Message-ID: Hello list, I am fairly new to nginx and now have stumbled across an issue I can't solve. I have successfully configured nginx on Gentoo Linux to run PHP applications (e.g. 
phpBB and phpMyAdmin) with php-fpm. As far as I understand, php-fpm should also be able to execute "regular CGI" in the form of Shell-Scripts or Perl, as long as the files are executable and use shebang-notation to indicate what interpreter they want to be run with? In my test installation CGI scripts are never executed by php-fpm. File contents are simply piped to the web browser, and I can't figure out why. I searched the Net and mailing list archives, but did not find a solution, so I thought it best to ask here. Output of nginx -V, configuration dump and test.cgi are attached. Your help is appreciated. -Ralph nginx version: nginx/1.13.11 built with OpenSSL 1.0.2n 7 Dec 2017 TLS SNI support enabled configure arguments: --prefix=/usr --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log --pid-path=/run/nginx.pid --lock-path=/run/lock/nginx.lock --with-cc-opt=-I/usr/include --with-ld-opt=-L/usr/lib64 --http-log-path=/var/log/nginx/access_log --http-client-body-temp-path=/var/lib/nginx/tmp/client --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --with-compat --with-http_v2_module --with-pcre --with-pcre-jit --with-http_addition_module --with-http_dav_module --with-http_perl_module --with-http_realip_module --add-module=external_module/headers-more-nginx-module-0.33 --add-module=external_module/ngx-fancyindex-0.4.2 --add-module=external_module/ngx_http_auth_pam_module-1.5.1 --add-module=external_module/nginx-dav-ext-module-0.1.0 --add-module=external_module/echo-nginx-module-0.61 --add-module=external_module/nginx-auth-ldap-42d195d7a7575ebab1c369ad3fc5d78dc2c2669c --add-module=external_module/nginx-module-vts-0.1.15-gentoo --with-http_ssl_module --without-stream_access_module --without-stream_geo_module --without-stream_limit_conn_module --without-stream_map_module --without-stream_return_module 
--without-stream_split_clients_module --without-stream_upstream_hash_module --without-stream_upstream_least_conn_module --without-stream_upstream_zone_module --without-mail_pop3_module --with-mail --with-mail_ssl_module --user=nginx --group=nginx # configuration file /etc/nginx/nginx.conf: user nginx nginx; worker_processes 1; error_log /var/log/nginx/error_log info; events { worker_connections 1024; use epoll; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $bytes_sent ' '"$http_referer" "$http_user_agent" ' '"$gzip_ratio"'; client_header_timeout 10m; client_body_timeout 10m; send_timeout 10m; connection_pool_size 256; client_header_buffer_size 1k; large_client_header_buffers 4 2k; request_pool_size 4k; gzip off; output_buffers 1 32k; postpone_output 1460; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 75 20; ignore_invalid_headers on; index index.html; server { listen *:8080 default_server; access_log /var/log/nginx/access_log main; error_log /var/log/nginx/error_log info; server_name _; root /var/www/localhost/htdocs; # Alternative: temp redirect to HTTPS #return 302 https://$host$request_uri; } include local/*.conf; } # configuration file /etc/nginx/local/20-test.conf: server { listen *:8443 ssl default_server; server_name test.mydomain.tld; access_log /var/log/nginx/ssl_access_log main; error_log /var/log/nginx/ssl_error_log debug; ssl on; ssl_certificate /etc/ssl/mydomain/cert.pem; ssl_certificate_key /etc/ssl/mydomain/key.pem; root /var/www/localhost/test; index test.cgi; location ~ \.cgi$ { # Test for non-existent scripts or throw a 404 error try_files $uri =404; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_pass unix:/run/php7-fpm.sock; } } # configuration file /etc/nginx/mime.types: types { text/html html htm shtml; text/css css; text/xml xml; image/gif gif; image/jpeg jpeg jpg; 
application/javascript js; application/atom+xml atom; application/rss+xml rss; text/mathml mml; text/plain txt; text/vnd.sun.j2me.app-descriptor jad; text/vnd.wap.wml wml; text/x-component htc; image/png png; image/svg+xml svg svgz; image/tiff tif tiff; image/vnd.wap.wbmp wbmp; image/webp webp; image/x-icon ico; image/x-jng jng; image/x-ms-bmp bmp; application/font-woff woff; application/java-archive jar war ear; application/json json; application/mac-binhex40 hqx; application/msword doc; application/pdf pdf; application/postscript ps eps ai; application/rtf rtf; application/vnd.apple.mpegurl m3u8; application/vnd.google-earth.kml+xml kml; application/vnd.google-earth.kmz kmz; application/vnd.ms-excel xls; application/vnd.ms-fontobject eot; application/vnd.ms-powerpoint ppt; application/vnd.oasis.opendocument.graphics odg; application/vnd.oasis.opendocument.presentation odp; application/vnd.oasis.opendocument.spreadsheet ods; application/vnd.oasis.opendocument.text odt; application/vnd.openxmlformats-officedocument.presentationml.presentation pptx; application/vnd.openxmlformats-officedocument.spreadsheetml.sheet xlsx; application/vnd.openxmlformats-officedocument.wordprocessingml.document docx; application/vnd.wap.wmlc wmlc; application/x-7z-compressed 7z; application/x-cocoa cco; application/x-java-archive-diff jardiff; application/x-java-jnlp-file jnlp; application/x-makeself run; application/x-perl pl pm; application/x-pilot prc pdb; application/x-rar-compressed rar; application/x-redhat-package-manager rpm; application/x-sea sea; application/x-shockwave-flash swf; application/x-stuffit sit; application/x-tcl tcl tk; application/x-x509-ca-cert der pem crt; application/x-xpinstall xpi; application/xhtml+xml xhtml; application/xspf+xml xspf; application/zip zip; application/octet-stream bin exe dll; application/octet-stream deb; application/octet-stream dmg; application/octet-stream iso img; application/octet-stream msi msp msm; audio/midi mid midi kar; 
audio/mpeg mp3; audio/ogg ogg; audio/x-m4a m4a; audio/x-realaudio ra; video/3gpp 3gpp 3gp; video/mp2t ts; video/mp4 mp4; video/mpeg mpeg mpg; video/quicktime mov; video/webm webm; video/x-flv flv; video/x-m4v m4v; video/x-mng mng; video/x-ms-asf asx asf; video/x-ms-wmv wmv; video/x-msvideo avi; } # configuration file /etc/nginx/fastcgi_params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param REQUEST_SCHEME $scheme; fastcgi_param HTTPS $https if_not_empty; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; # httpoxy mitigation (https://httpoxy.org/ https://www.nginx.com/blog/?p=41962) fastcgi_param HTTP_PROXY ""; $ cat /var/www/localhost/test/test.cgi #!/bin/sh echo 'Hello world.' $ ls -l /var/www/localhost/test/test.cgi -rwxr-xr-x 1 root root 67 Apr 6 17:24 /var/www/localhost/test/test.cgi* From r1ch+nginx at teamliquid.net Fri Apr 6 17:04:20 2018 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Fri, 6 Apr 2018 19:04:20 +0200 Subject: Why are my CGI scripts not executed like PHP ? In-Reply-To: References: Message-ID: PHP-FPM is only for PHP. You'll want something like fcgiwrap for regular CGI files. 
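[Editor's note: Richard's fcgiwrap suggestion, applied to Ralph's location block, might look like the sketch below; the socket path is an example (it varies by distribution) and is not taken from this thread.]

```nginx
# Route *.cgi to fcgiwrap, which runs the script via its shebang
# interpreter, instead of php-fpm (which only executes PHP).
location ~ \.cgi$ {
    # Return 404 for non-existent scripts
    try_files $uri =404;

    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $request_filename;
    fastcgi_pass unix:/run/fcgiwrap.socket;   # example socket path
}
```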
See https://www.nginx.com/resources/wiki/start/topics/examples/fcgiwrap/ On Fri, Apr 6, 2018 at 6:02 PM, Ralph Seichter wrote: > Hello list, > > I am fairly new to nginx and now have stumbled across an issue I can't > solve. I have successfully configured nginx on Gentoo Linux to run PHP > applications (e.g. phpBB and phpMyAdmin) with php-fpm. > > As far as I understand, php-fpm should also be able to execute "regular > CGI" in the form of Shell-Scripts or Perl, as long as the files are > executable and use shebang-notation to indicate what interpreter they > want to be run with? > > In my test installation CGI scripts are never executed by php-fpm. File > contents are simply piped to the web browser, and I can't figure out > why. I searched the Net and mailing list archives, but did not find a > solution, so I thought it best to ask here. > > Output of nginx -V, configuration dump and test.cgi are attached. Your > help is appreciated. > > -Ralph > > > nginx version: nginx/1.13.11 > built with OpenSSL 1.0.2n 7 Dec 2017 > TLS SNI support enabled > configure arguments: --prefix=/usr --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error_log --pid-path=/run/nginx.pid > --lock-path=/run/lock/nginx.lock --with-cc-opt=-I/usr/include > --with-ld-opt=-L/usr/lib64 --http-log-path=/var/log/nginx/access_log > --http-client-body-temp-path=/var/lib/nginx/tmp/client > --http-proxy-temp-path=/var/lib/nginx/tmp/proxy > --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi > --http-scgi-temp-path=/var/lib/nginx/tmp/scgi > --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --with-compat > --with-http_v2_module --with-pcre --with-pcre-jit > --with-http_addition_module > --with-http_dav_module --with-http_perl_module --with-http_realip_module > --add-module=external_module/headers-more-nginx-module-0.33 > --add-module=external_module/ngx-fancyindex-0.4.2 > --add-module=external_module/ngx_http_auth_pam_module-1.5.1 > 
--add-module=external_module/nginx-dav-ext-module-0.1.0 > --add-module=external_module/echo-nginx-module-0.61 > --add-module=external_module/nginx-auth-ldap- > 42d195d7a7575ebab1c369ad3fc5d78dc2c2669c > --add-module=external_module/nginx-module-vts-0.1.15-gentoo > --with-http_ssl_module --without-stream_access_module > --without-stream_geo_module --without-stream_limit_conn_module > --without-stream_map_module --without-stream_return_module > --without-stream_split_clients_module --without-stream_upstream_ > hash_module > --without-stream_upstream_least_conn_module > --without-stream_upstream_zone_module --without-mail_pop3_module > --with-mail > --with-mail_ssl_module --user=nginx --group=nginx > > # configuration file /etc/nginx/nginx.conf: > > user nginx nginx; > worker_processes 1; > > error_log /var/log/nginx/error_log info; > > events { > worker_connections 1024; > use epoll; > } > > http { > include /etc/nginx/mime.types; > default_type application/octet-stream; > > log_format main > '$remote_addr - $remote_user [$time_local] ' > '"$request" $status $bytes_sent ' > '"$http_referer" "$http_user_agent" ' > '"$gzip_ratio"'; > > client_header_timeout 10m; > client_body_timeout 10m; > send_timeout 10m; > > connection_pool_size 256; > client_header_buffer_size 1k; > large_client_header_buffers 4 2k; > request_pool_size 4k; > > gzip off; > > output_buffers 1 32k; > postpone_output 1460; > > sendfile on; > tcp_nopush on; > tcp_nodelay on; > > keepalive_timeout 75 20; > > ignore_invalid_headers on; > > index index.html; > > server { > listen *:8080 default_server; > access_log /var/log/nginx/access_log main; > error_log /var/log/nginx/error_log info; > > server_name _; > root /var/www/localhost/htdocs; > > # Alternative: temp redirect to HTTPS > #return 302 https://$host$request_uri; > } > > include local/*.conf; > } > > # configuration file /etc/nginx/local/20-test.conf: > > server { > listen *:8443 ssl default_server; > server_name test.mydomain.tld; > access_log 
/var/log/nginx/ssl_access_log main; > error_log /var/log/nginx/ssl_error_log debug; > > ssl on; > ssl_certificate /etc/ssl/mydomain/cert.pem; > ssl_certificate_key /etc/ssl/mydomain/key.pem; > > root /var/www/localhost/test; > index test.cgi; > > location ~ \.cgi$ { > # Test for non-existent scripts or throw a 404 error > try_files $uri =404; > > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $request_filename; > fastcgi_pass unix:/run/php7-fpm.sock; > } > } > > # configuration file /etc/nginx/mime.types: > > types { > text/html html htm shtml; > text/css css; > text/xml xml; > image/gif gif; > image/jpeg jpeg jpg; > application/javascript js; > application/atom+xml atom; > application/rss+xml rss; > > text/mathml mml; > text/plain txt; > text/vnd.sun.j2me.app-descriptor jad; > text/vnd.wap.wml wml; > text/x-component htc; > > image/png png; > image/svg+xml svg svgz; > image/tiff tif tiff; > image/vnd.wap.wbmp wbmp; > image/webp webp; > image/x-icon ico; > image/x-jng jng; > image/x-ms-bmp bmp; > > application/font-woff woff; > application/java-archive jar war ear; > application/json json; > application/mac-binhex40 hqx; > application/msword doc; > application/pdf pdf; > application/postscript ps eps ai; > application/rtf rtf; > application/vnd.apple.mpegurl m3u8; > application/vnd.google-earth.kml+xml kml; > application/vnd.google-earth.kmz kmz; > application/vnd.ms-excel xls; > application/vnd.ms-fontobject eot; > application/vnd.ms-powerpoint ppt; > application/vnd.oasis.opendocument.graphics odg; > application/vnd.oasis.opendocument.presentation odp; > application/vnd.oasis.opendocument.spreadsheet ods; > application/vnd.oasis.opendocument.text odt; > > application/vnd.openxmlformats-officedocument.presentationml.presentation > pptx; > application/vnd.openxmlformats-officedocument.spreadsheetml.sheet > xlsx; > application/vnd.openxmlformats-officedocument. 
> wordprocessingml.document > docx; > application/vnd.wap.wmlc wmlc; > application/x-7z-compressed 7z; > application/x-cocoa cco; > application/x-java-archive-diff jardiff; > application/x-java-jnlp-file jnlp; > application/x-makeself run; > application/x-perl pl pm; > application/x-pilot prc pdb; > application/x-rar-compressed rar; > application/x-redhat-package-manager rpm; > application/x-sea sea; > application/x-shockwave-flash swf; > application/x-stuffit sit; > application/x-tcl tcl tk; > application/x-x509-ca-cert der pem crt; > application/x-xpinstall xpi; > application/xhtml+xml xhtml; > application/xspf+xml xspf; > application/zip zip; > > application/octet-stream bin exe dll; > application/octet-stream deb; > application/octet-stream dmg; > application/octet-stream iso img; > application/octet-stream msi msp msm; > > audio/midi mid midi kar; > audio/mpeg mp3; > audio/ogg ogg; > audio/x-m4a m4a; > audio/x-realaudio ra; > > video/3gpp 3gpp 3gp; > video/mp2t ts; > video/mp4 mp4; > video/mpeg mpeg mpg; > video/quicktime mov; > video/webm webm; > video/x-flv flv; > video/x-m4v m4v; > video/x-mng mng; > video/x-ms-asf asx asf; > video/x-ms-wmv wmv; > video/x-msvideo avi; > } > > # configuration file /etc/nginx/fastcgi_params: > > fastcgi_param QUERY_STRING $query_string; > fastcgi_param REQUEST_METHOD $request_method; > fastcgi_param CONTENT_TYPE $content_type; > fastcgi_param CONTENT_LENGTH $content_length; > > fastcgi_param SCRIPT_NAME $fastcgi_script_name; > fastcgi_param REQUEST_URI $request_uri; > fastcgi_param DOCUMENT_URI $document_uri; > fastcgi_param DOCUMENT_ROOT $document_root; > fastcgi_param SERVER_PROTOCOL $server_protocol; > fastcgi_param REQUEST_SCHEME $scheme; > fastcgi_param HTTPS $https if_not_empty; > > fastcgi_param GATEWAY_INTERFACE CGI/1.1; > fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; > > fastcgi_param REMOTE_ADDR $remote_addr; > fastcgi_param REMOTE_PORT $remote_port; > fastcgi_param SERVER_ADDR $server_addr; > fastcgi_param 
SERVER_PORT $server_port; > fastcgi_param SERVER_NAME $server_name; > > # PHP only, required if PHP was built with --enable-force-cgi-redirect > fastcgi_param REDIRECT_STATUS 200; > > # httpoxy mitigation (https://httpoxy.org/ > https://www.nginx.com/blog/?p=41962) > fastcgi_param HTTP_PROXY ""; > > > $ cat /var/www/localhost/test/test.cgi > #!/bin/sh > echo 'Hello world.' > > $ ls -l /var/www/localhost/test/test.cgi > -rwxr-xr-x 1 root root 67 Apr 6 17:24 /var/www/localhost/test/test.cgi* > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1ch+nginx at teamliquid.net Fri Apr 6 17:11:36 2018 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Fri, 6 Apr 2018 19:11:36 +0200 Subject: Nginx throttling issue? In-Reply-To: <85F52145-2C63-45C9-A581-AE609843A8CB@me.com> References: <20180327115506.GF77253@mdounin.ru> <85F52145-2C63-45C9-A581-AE609843A8CB@me.com> Message-ID: Even though it shouldn't be reaching your limits, limit_req does delay in 1 second increments, which sounds like it could be responsible for this. You should see error log entries if this happens (severity warning). Have you tried without the limit_req option? You can also use the nodelay option to avoid the delaying behavior. http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req On Thu, Apr 5, 2018 at 6:45 AM, Peter Booth wrote: > John, > > I think that you need to understand what is happening on your host > throughout the duration of the test. Specifically, what is happening with > the tcp connections. If you run netstat and grep for tcp, and do this in a > loop every say five seconds, then you'll see how many connections get > created at peak. > If the thing you are testing exists in production then you are lucky. You > can do the same in production and see what it is that you need to replicate.
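Peter's suggestion (polling netstat every few seconds during a test run to watch how many TCP connections exist in each state) can be sketched as a small shell loop; the sample count and interval below are illustrative, and a real run would poll for the whole duration of the test:

```shell
# Poll TCP connection counts by state; a real test run would use a
# larger sample count (or an infinite loop) and e.g. a 5-second interval.
samples=2
i=0
while [ "$i" -lt "$samples" ]; do
    # Count sockets per TCP state (ESTABLISHED, TIME_WAIT, SYN_SENT, ...)
    # and print the totals on one line per sample.
    netstat -an 2>/dev/null | awk '$1 ~ /^tcp/ { count[$6]++ }
        END { for (s in count) printf "%s=%d ", s, count[s]; print "" }'
    sleep 1
    i=$((i+1))
done
```

Watching the per-state counts over the run shows whether the load generator is holding thousands of established connections or churning through short-lived ones.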
> > You didn't mention whether you had persistent connections (http keep > alive) configured. This is key to maximizing scalability. You did say that > you were using SSL. If it were me I'd use a load generator that more > closely resembles the behavior of real users on a website. Wrk2, Tsung, > httperf, Gatling are examples of some that do. Using jmeter with zero think > time is a very common anti-pattern that doesn't behave anything like real > users. I think of it as the lazy performance tester pattern. > > Imagine a real web server under heavy load from human beings. You will see > thousands of concurrent connections but fewer concurrent requests in > flight. With the jmeter zero think time model you are either creating > new connections or reusing them - so either you have a shitload of > connections and your nginx process starts running out of file handles, or > you are jamming requests down a single connection - neither of which > resembles reality. > > If you are committed to using jmeter for some reason then use more > instances with real think times. Each instance's connection will have a > different source port. > > Sent from my iPhone > > > On Apr 4, 2018, at 5:20 PM, John Melom wrote: > > > > Hi Maxim, > > > > I've looked at the nstat data and found the following values for > counters: > > > >> nstat -az | grep -i listen > > TcpExtListenOverflows 0 0.0 > > TcpExtListenDrops 0 0.0 > > TcpExtTCPFastOpenListenOverflow 0 0.0 > > > > > > nstat -az | grep -i retra > > TcpRetransSegs 12157 0.0 > > TcpExtTCPLostRetransmit 0 0.0 > > TcpExtTCPFastRetrans 270 0.0 > > TcpExtTCPForwardRetrans 11 0.0 > > TcpExtTCPSlowStartRetrans 0 0.0 > > TcpExtTCPRetransFail 0 0.0 > > TcpExtTCPSynRetrans 25 0.0 > > > > Assuming the above "Listen" counters provide data about the overflow > issue you mention, then there are no overflows on my system. While > retransmissions are happening, it doesn't seem they are related to listen > queue overflows.
> > > > Am I looking at the correct data items? Is my interpretation of the > data correct? If so, do you have any other ideas I could investigate? > > > > Thanks, > > > > John > > > > -----Original Message----- > > From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of John Melom > > Sent: Tuesday, March 27, 2018 8:52 AM > > To: nginx at nginx.org > > Subject: RE: Nginx throttling issue? > > > > Maxim, > > > > Thank you for your reply. I will look to see if "netstat -s" detects > any listen queue overflows. > > > > John > > > > > > -----Original Message----- > > From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Maxim Dounin > > Sent: Tuesday, March 27, 2018 6:55 AM > > To: nginx at nginx.org > > Subject: Re: Nginx throttling issue? > > > > Hello! > > > >> On Mon, Mar 26, 2018 at 08:21:27PM +0000, John Melom wrote: > >> > >> I am load testing our system using Jmeter as a load generator. > >> We execute a script consisting of an https request executing in a > >> loop. The loop does not contain a think time, since at this point I > >> am not trying to emulate a "real user". I want to get a quick look at > >> our system capacity. Load on our system is increased by increasing > >> the number of Jmeter threads executing our script. Each Jmeter thread > >> references different data. > >> > >> Our system is in AWS with an ELB fronting Nginx, which serves as a > >> reverse proxy for our Docker Swarm application cluster. > >> > >> At moderate loads, a subset of our https requests start experiencing > >> a 1 second delay in addition to their normal response time. The > >> delay is not due to resource contention. > >> System utilizations remain low. The response times cluster around 4 > >> values: 0 milliseconds, 50 milliseconds, 1 second, and 1.050 > >> seconds. Right now, I am most interested in understanding and > >> eliminating the 1 second delay that gives the clusters at 1 second and > >> 1.050 seconds.
> >> > >> The attachment shows a response time scatterplot from one of our runs. > >> The x-axis is the number of seconds into the run, the y-axis is the > >> response time in milliseconds. The plotted data shows the response > >> time of requests at the time they occurred in the run. > >> > >> If I run the test bypassing the ELB and Nginx, this delay does not > >> occur. > >> If I bypass the ELB, but include Nginx in the request path, the delay > >> returns. > >> > >> This leads me to believe the 1 second delay is coming from Nginx. > > > > There are no magic 1 second delays in nginx - unless you've configured > something explicitly. > > > > Most likely, the 1 second delay is coming from TCP retransmission > timeout during connection establishment due to listen queue overflows. > Check "netstat -s" to see if there are any listen queue overflows on your > hosts. > > > > [...] > > > > -- > > Maxim Dounin > > http://mdounin.ru/ > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > ________________________________ > > NOTE: This email message and any attachments are for the sole use of the > intended recipient(s) and may contain confidential and/or privileged > information. Any unauthorized review, use, disclosure or distribution is > prohibited. If you have received this e-mail in error, please contact the > sender by replying to this email, and destroy all copies of the original > message and any material included with this email. > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > ________________________________ > > NOTE: This email message and any attachments are for the sole use of the > intended recipient(s) and may contain confidential and/or privileged > information. Any unauthorized review, use, disclosure or distribution is > prohibited. 
If you have received this e-mail in error, please contact the > sender by replying to this email, and destroy all copies of the original > message and any material included with this email. > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m16+nginx at monksofcool.net Fri Apr 6 17:26:56 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Fri, 6 Apr 2018 19:26:56 +0200 Subject: Why are my CGI scripts not executed like PHP ? In-Reply-To: References: Message-ID: <80ad6724-89d0-1dad-5779-9b229da66484@monksofcool.net> On 06.04.2018 19:04, Richard Stanway wrote: > PHP-FPM is only for PHP. You'll want something like fcgiwrap for > regular CGI files. Seriously? But http://php.net/manual/en/intro.fpm.php states: "FPM (FastCGI Process Manager) is an alternative PHP FastCGI implementation with some additional features (mostly) useful for heavy-loaded sites." I mistakenly assumed that the name FastCGI Process Manager implies this piece of software is meant for CGI in general and used for PHP more as a byproduct. Also, there are the nginx config file names fastcgi.conf and fastcgi_params. Sigh. Silly me... :-P Thanks for letting me know that I can stop wasting time with the wrong tool for the job. I'll investigate FCGI Wrap, like you suggested. -Ralph From giulio at loffreda.com.br Fri Apr 6 17:40:21 2018 From: giulio at loffreda.com.br (Giulio Loffreda) Date: Fri, 6 Apr 2018 14:40:21 -0300 Subject: Wordpress multisite + SSL Message-ID: <8f7b67cd-b5f0-4c38-bd48-3ca9273fa79c@Spark> Dears I have one wordpress multisite with subdomain being served by Nginx. We have the main domain, lets call domain.com. 
We use custom domains for customer sites, let's say customerone.com, customertwo.com... with corresponding subdomains on WP, as customerone.domain.com, customertwo.domain.com. Everything works fine with the configuration at the end of this email. However, now we want to secure some custom domains, for example https://customerone.com. For one secured domain, it works fine. I can use some plugin to force HTTPS on WP and insert the certificate on top of the nginx configuration. The problem is when I have more than one domain to secure. I tried to insert more than one ssl_certificate on top to secure the base domain (domain.com) and its subdomains. Doesn't work. Then I searched for some configuration to check the domain and load the right certificate, but couldn't find one. Can someone help us to configure our server to work with non-ssl + ssl and Wordpress multisite subdomains? Thank you map $http_host $blogid { default -999; } server { server_name domain.com *.domain.com ; root /var/www/html/portal; index index.php; access_log /var/log/nginx/domain.access.log combined; error_log /var/log/nginx/domain.error.log; location / { try_files $uri $uri/ /index.php?$args ; } #WPMU Files location ~ \.php$ { autoindex on; try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # With php5-fpm: #fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_pass unix:/run/php/php7.0-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; client_max_body_size 100M; proxy_connect_timeout 180; proxy_send_timeout 180; proxy_read_timeout 180; } location ~ ^/files/(.*)$ { try_files /wp-content/blogs.dir/$blogid/$uri /wp-includes/ms-files.php?file=$1 ; access_log off; log_not_found off; expires max; } #WPMU x-sendfile to avoid php readfile() location ^~ /blogs.dir { internal; alias /home/portal/wp-content/blogs.dir; access_log off; log_not_found off; expires max; } #add some rules for static content expiry-headers here } -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailinglist at unix-solution.de Fri Apr 6 17:50:13 2018 From: mailinglist at unix-solution.de (basti) Date: Fri, 6 Apr 2018 19:50:13 +0200 Subject: Wordpress multisite + SSL In-Reply-To: <8f7b67cd-b5f0-4c38-bd48-3ca9273fa79c@Spark> References: <8f7b67cd-b5f0-4c38-bd48-3ca9273fa79c@Spark> Message-ID: <5386299b-e0d6-922e-26b0-3e159d11fa3c@unix-solution.de> Hello, where have you defined your certificate? I can't see it. If you use one server directive for all your domains, all domains must be in this certificate (Subject Alt Names). On 06.04.2018 19:40, Giulio Loffreda wrote: > Dears > > > I have one wordpress multisite with subdomains being served by Nginx. > > > We have the main domain, let's call it domain.com. > > We use custom domains for customer sites, let's say customerone.com, > customertwo.com... with > corresponding subdomains on WP, as customerone.domain.com, > customertwo.domain.com. > > > Everything works fine with the configuration at the end of this email. > > > However, now we want to secure some custom domains, for example > https://customerone.com. > > > For one secured domain, it works fine. I can use some plugin to force > HTTPS on WP and insert the certificate on top of the nginx configuration. > > > The problem is when I have more than one domain to secure. > > > I tried to insert more than one ssl_certificate on top to secure the base > domain (domain.com) and its subdomains. Doesn't work.
> > Then i search for some configuration to check domain and load the right > certificate, couldn?t find. > > > Can someone help us to configure our server to work with non-ssl + ssl > and Wordpress multisite subdomain ? > > > Thank you > > > map $http_host $blogid { > > ? ? default ? ? ? -999; > > } > > > server { > > ? ? server_name domain.com *.domain.com > ; > > > ? ? root /var/www/html/portal; > > ? ? index index.php; > > > ? ? access_log /var/log/nginx/domain.access.log combined; > > ? ? error_log /var/log/nginx/domain.error.log; > > > ? ? location / { > > ? ? ? ? try_files $uri $uri/ /index.php?$args ; > > ? ? } > > > ? ? #WPMU Files > > ? ? ? ? location ~ \.php$ { > > ? ? ? ? ? ? ? ? autoindex on; > > ? ? ? ? ? ? ? ? try_files $uri =404; > > ? ? ? ? ? ? ? ? fastcgi_split_path_info ^(.+\.php)(/.+)$; > > ?? ? ? ? ? ? ? # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini > > > ? ? ? ? ? ? ? ? # With php5-fpm: > > ? ? ? ? ? ? ? ? #fastcgi_pass unix:/var/run/php5-fpm.sock; > > ? ? ? ? ? ? ? ? fastcgi_pass unix:/run/php/php7.0-fpm.sock; > > ? ? ? ? ? ? ? ? fastcgi_index index.php; > > ? ? ? ? ? ? ? ? include fastcgi_params; > > ? ? ? ? ? ? ? ? fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > > ? ? ? ? ? ? ? ? client_max_body_size ? ? ? 100M; > > ? ? ? ? ? ? ? ? proxy_connect_timeout? ? ? 180; > > ? ? ? ? ? ? ? ? proxy_send_timeout ? ? ? ? 180; > > ? ? ? ? ? ? ? ? proxy_read_timeout ? ? ? ? 180; > > ? ? ? ? } > > ? ? ? ? location ~ ^/files/(.*)$ { > > ? ? ? ? ? ? ? ? try_files /wp-content/blogs.dir/$blogid/$uri > /wp-includes/ms-files.php?file=$1 ; > > ? ? ? ? ? ? ? ? access_log off; log_not_found off;? ? ? expires max; > > ? ? ? ? } > > > ? ? #WPMU x-sendfile to avoid php readfile() > > ? ? location ^~ /blogs.dir { > > ? ? ? ? internal; > > ? ? ? ? alias /home/portal/wp-content/blogs.dir; > > ? ? ? ? access_log off; ? ? log_not_found off;? ? ? expires max; > > ? ? } > > > ? ? 
#add some rules for static content expiry-headers here > > } > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From giulio at loffreda.com.br Fri Apr 6 18:17:51 2018 From: giulio at loffreda.com.br (Giulio Loffreda) Date: Fri, 6 Apr 2018 15:17:51 -0300 Subject: Wordpress multisite + SSL In-Reply-To: <5386299b-e0d6-922e-26b0-3e159d11fa3c@unix-solution.de> References: <8f7b67cd-b5f0-4c38-bd48-3ca9273fa79c@Spark> <5386299b-e0d6-922e-26b0-3e159d11fa3c@unix-solution.de> Message-ID: <61d9560c-a30c-46b7-86c5-92493c33e0ae@Spark> Hi I created one separate file for a while (as we have just one customer under ssl) and placed this file on sites-enable. So it is being loaded at the top of the nginx configuration. Then I have another conf file to handle 443 requests. The aim is to have one certificate for each customer, as a customer may want or already have their own certificate. But you gave me a good idea to have a SAN certificate; I don't know if it will work for all situations though. Is my aim possible? Below is my complete configuration: ssl_certificate /customers/certificates/customerone.com.pem; ssl_certificate_key /customers/certificates/customerone.com.key; map $http_host $blogid { default -999; } server { server_name domain.com *.domain.com ; root /var/www/html/portal; index index.php; access_log /var/log/nginx/domain.access.log combined; error_log /var/log/nginx/domain.error.log; location / { try_files $uri $uri/ /index.php?$args ; } #WPMU Files location ~ \.php$ { autoindex on; try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # With php5-fpm: #fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_pass unix:/run/php/php7.0-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; client_max_body_size 100M; proxy_connect_timeout 180; proxy_send_timeout 180; proxy_read_timeout 180; } location ~ ^/files/(.*)$ { try_files /wp-content/blogs.dir/$blogid/$uri /wp-includes/ms-files.php?file=$1 ; access_log off; log_not_found off; expires max; } #WPMU x-sendfile to avoid php readfile() location ^~ /blogs.dir { internal; alias /home/portal/wp-content/blogs.dir; access_log off; log_not_found off; expires max; } #add some rules for static content expiry-headers here } server { listen 443; ssl on; port_in_redirect off; server_name domain.com *.domain.com ; root /var/www/html/portal; index index.php; access_log /var/log/nginx/domain.access.log combined; error_log /var/log/nginx/domain.error.log; location / { try_files $uri $uri/ /index.php?$args ; } #WPMU Files location ~ \.php$ { autoindex on; try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # With php5-fpm: #fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_pass unix:/run/php/php7.0-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; client_max_body_size 100M; proxy_connect_timeout 180; proxy_send_timeout 180; proxy_read_timeout 180; } location ~ ^/files/(.*)$ { try_files /wp-content/blogs.dir/$blogid/$uri /wp-includes/ms-files.php?file=$1 ; access_log off; log_not_found off; expires max; } #WPMU x-sendfile to avoid php readfile() location ^~ /blogs.dir { internal; alias /home/portal/wp-content/blogs.dir; access_log off; log_not_found off; expires max; } #add some rules for static content expiry-headers here add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"; add_header X-Frame-Options DENY; add_header X-Content-Type-Options nosniff; add_header X-XSS-Protection "1; mode=block"; add_header X-Robots-Tag none; } On 6 Apr 2018 at 14:50 -0300, basti wrote: > Hello, > where have you defined your certificate? I can't see it. > If you use one server directive for all your domains, all domains must be > in this certificate (Subject Alt Names). > > On 06.04.2018 19:40, Giulio Loffreda wrote: > > Dears > > > > > > I have one wordpress multisite with subdomains being served by Nginx. > > > > > > We have the main domain, let's call it domain.com. > > > > We use custom domains for customer sites, let's say customerone.com, > > customertwo.com... with > > corresponding subdomains on WP, as customerone.domain.com, > > customertwo.domain.com. > > > > > > Everything works fine with the configuration at the end of this email. > > > > > > However, now we want to secure some custom domains, for example > > https://customerone.com. > > > > > > For one secured domain, it works fine. I can use some plugin to force > > HTTPS on WP and insert the certificate on top of the nginx configuration. > > > > > > The problem is when I have more than one domain to secure. > > > > > > I tried to insert more than one ssl_certificate on top to secure the base > > domain (domain.com) and its subdomains. Doesn't work.
> > > > Then i search for some configuration to check domain and load the right > > certificate, couldn?t find. > > > > > > Can someone help us to configure our server to work with non-ssl + ssl > > and Wordpress multisite subdomain ? > > > > > > Thank you > > > > > > map $http_host $blogid { > > > > ? ? default ? ? ? -999; > > > > } > > > > > > server { > > > > ? ? server_name domain.com *.domain.com > > ; > > > > > > ? ? root /var/www/html/portal; > > > > ? ? index index.php; > > > > > > ? ? access_log /var/log/nginx/domain.access.log combined; > > > > ? ? error_log /var/log/nginx/domain.error.log; > > > > > > ? ? location / { > > > > ? ? ? ? try_files $uri $uri/ /index.php?$args ; > > > > ? ? } > > > > > > ? ? #WPMU Files > > > > ? ? ? ? location ~ \.php$ { > > > > ? ? ? ? ? ? ? ? autoindex on; > > > > ? ? ? ? ? ? ? ? try_files $uri =404; > > > > ? ? ? ? ? ? ? ? fastcgi_split_path_info ^(.+\.php)(/.+)$; > > > > ?? ? ? ? ? ? ? # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini > > > > > > ? ? ? ? ? ? ? ? # With php5-fpm: > > > > ? ? ? ? ? ? ? ? #fastcgi_pass unix:/var/run/php5-fpm.sock; > > > > ? ? ? ? ? ? ? ? fastcgi_pass unix:/run/php/php7.0-fpm.sock; > > > > ? ? ? ? ? ? ? ? fastcgi_index index.php; > > > > ? ? ? ? ? ? ? ? include fastcgi_params; > > > > ? ? ? ? ? ? ? ? fastcgi_param SCRIPT_FILENAME > > $document_root$fastcgi_script_name; > > > > ? ? ? ? ? ? ? ? client_max_body_size ? ? ? 100M; > > > > ? ? ? ? ? ? ? ? proxy_connect_timeout? ? ? 180; > > > > ? ? ? ? ? ? ? ? proxy_send_timeout ? ? ? ? 180; > > > > ? ? ? ? ? ? ? ? proxy_read_timeout ? ? ? ? 180; > > > > ? ? ? ? } > > > > ? ? ? ? location ~ ^/files/(.*)$ { > > > > ? ? ? ? ? ? ? ? try_files /wp-content/blogs.dir/$blogid/$uri > > /wp-includes/ms-files.php?file=$1 ; > > > > ? ? ? ? ? ? ? ? access_log off; log_not_found off;? ? ? expires max; > > > > ? ? ? ? } > > > > > > ? ? #WPMU x-sendfile to avoid php readfile() > > > > ? ? location ^~ /blogs.dir { > > > > ? ? ? ? internal; > > > > ? ? ? ? 
alias /home/portal/wp-content/blogs.dir; > > > > access_log off; log_not_found off; expires max; > > > > } > > > > > > #add some rules for static content expiry-headers here > > > > } > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailinglist at unix-solution.de Fri Apr 6 18:56:34 2018 From: mailinglist at unix-solution.de (basti) Date: Fri, 6 Apr 2018 20:56:34 +0200 Subject: Wordpress multisite + SSL In-Reply-To: <61d9560c-a30c-46b7-86c5-92493c33e0ae@Spark> References: <8f7b67cd-b5f0-4c38-bd48-3ca9273fa79c@Spark> <5386299b-e0d6-922e-26b0-3e159d11fa3c@unix-solution.de> <61d9560c-a30c-46b7-86c5-92493c33e0ae@Spark> Message-ID: <4bb0a051-041a-48e4-c46d-e44ba04f2d7f@unix-solution.de> On 06.04.2018 20:17, Giulio Loffreda wrote: > Hi > > I created one separate file for a while (as we have just one customer > under ssl) and placed this file on sites-enable. So it is being loaded > at the top of the nginx configuration. > Then I have another conf file to handle 443 requests. > > The aim is to have one certificate for each customer, as a customer may > want or already have their own certificate. Then you need different server blocks. The certificates are loaded at start, so you can't load them dynamically. In short: 1 server block -> certificate with n domains; n server blocks -> certificate with 1 domain. ssl_certificate* must be inside a server block. > But you gave me a good idea to have a SAN certificate; I don't know if > it will work for all situations though. > > Is my aim possible?
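In nginx terms, basti's "n server blocks, one certificate each" option would look roughly like the sketch below. This is an illustration only: the certificate paths are made up (following the naming used earlier in the thread), and `listen 443 ssl` is used in place of the older `ssl on;` form. Serving different certificates by name on one IP also relies on clients supporting SNI, which was near-universal by 2018.

```nginx
# One server block per secured customer domain, each with its own
# certificate (illustrative paths).
server {
    listen 443 ssl;
    server_name customerone.com;
    ssl_certificate     /customers/certificates/customerone.com.pem;
    ssl_certificate_key /customers/certificates/customerone.com.key;
    # ... same root/index/location configuration as the plain-HTTP block ...
}

server {
    listen 443 ssl;
    server_name customertwo.com;
    ssl_certificate     /customers/certificates/customertwo.com.pem;
    ssl_certificate_key /customers/certificates/customertwo.com.key;
    # ... same root/index/location configuration as the plain-HTTP block ...
}
```

nginx selects the server block, and therefore the certificate, by matching the SNI name the client sends during the TLS handshake.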
From giulio at loffreda.com.br Fri Apr 6 19:13:14 2018 From: giulio at loffreda.com.br (Giulio Loffreda) Date: Fri, 6 Apr 2018 16:13:14 -0300 Subject: Wordpress multisite + SSL In-Reply-To: <4bb0a051-041a-48e4-c46d-e44ba04f2d7f@unix-solution.de> References: <8f7b67cd-b5f0-4c38-bd48-3ca9273fa79c@Spark> <5386299b-e0d6-922e-26b0-3e159d11fa3c@unix-solution.de> <61d9560c-a30c-46b7-86c5-92493c33e0ae@Spark> <4bb0a051-041a-48e4-c46d-e44ba04f2d7f@unix-solution.de> Message-ID: Crystal clear. Your "in short" explanation was perfect. Thank you On 6 Apr 2018 at 15:56 -0300, basti wrote: > > > On 06.04.2018 20:17, Giulio Loffreda wrote: > > Hi > > > > I created one separate file for a while (as we have just one customer > > under ssl) and placed this file on sites-enable. So it is being loaded > > at the top of the nginx configuration. > > Then I have another conf file to handle 443 requests. > > > > The aim is to have one certificate for each customer, as a customer may > > want or already have their own certificate. > > Then you need different server blocks. The certificates are loaded at > start, so you can't load them dynamically. > > In short: > 1 server block -> certificate with n domains > n server blocks -> certificate with 1 domain > > ssl_certificate* must be inside a server block > > > But you gave me a good idea to have a SAN certificate; I don't know if > > it will work for all situations though. > > > > Is my aim possible? > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From m16+nginx at monksofcool.net Fri Apr 6 19:53:27 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Fri, 6 Apr 2018 21:53:27 +0200 Subject: Why are my CGI scripts not executed like PHP ?
In-Reply-To: References: Message-ID: <92a799ca-dba9-512c-2a0d-00cb15ed7c32@monksofcool.net> On 06.04.18 19:04, Richard Stanway wrote: > https://www.nginx.com/resources/wiki/start/topics/examples/fcgiwrap/ I altered my setup to use fcgiwrap. Since then, I keep getting "502 Bad Gateway" errors, with log entries like this: 2018/04/06 21:21:02 [error] 17838#0: *1 upstream prematurely closed FastCGI stdout while reading response header from upstream, client: 123.234.123.234, server: test.mydomain.tld, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/tmp/cgi.sock:", host: "test.mydomain.tld:8443" I use fcgiwrap 1.1.0 from 2013, which appears to be the latest available release according to https://github.com/gnosek/fcgiwrap. I tried both the Perl script at the location you linked and spawn-fcgi 1.6.4 as an alternative, but the 502 error pops up regardless. Permissions for the socket are as follows: $ ls -l /tmp/cgi.sock srwx------ 1 nginx nginx 0 Apr 6 21:48 /tmp/cgi.sock Interestingly I found this old message of Richard's: http://mailman.nginx.org/pipermail/nginx/2014-January/041963.html Unfortunately no amount of meddling with SCRIPT_FILENAME, including setting the absolute path to the CGI script, made any difference for me. I don't know how to debug this further. Development of fcgiwrap seems to have ended years ago and the project page is no longer connected. I'd be grateful for more ideas on how to solve this puzzle. -Ralph From francis at daoine.org Sat Apr 7 14:18:10 2018 From: francis at daoine.org (Francis Daly) Date: Sat, 7 Apr 2018 15:18:10 +0100 Subject: Why are my CGI scripts not executed like PHP ? In-Reply-To: <92a799ca-dba9-512c-2a0d-00cb15ed7c32@monksofcool.net> References: <92a799ca-dba9-512c-2a0d-00cb15ed7c32@monksofcool.net> Message-ID: <20180407141810.GD3158@daoine.org> On Fri, Apr 06, 2018 at 09:53:27PM +0200, Ralph Seichter wrote: Hi there, This mail is a bit long, but I try to cover the points raised in your previous mails too.
"CGI" is an interface between the executable (that you write or find; commonly referred to as a "CGI script", although it may not be a script) and the thing that executes it (typically, traditionally, a web server). The CGI script expects to be run in a particular environment, with particular environment variables set. It is expected to produce output in a particular format. Nginx does not "do" CGI. FastCGI is a separate protocol. It defines the communication between the client (typically, a web server) and the fastcgi server. What the server does next is up to it; all the client cares about is that the response is correctly formatted. Nginx does "do" FastCGI; it knows how to act as a client talking to a FastCGI server. One FastCGI server is php-fpm. It executes PHP scripts. Whether it provides a CGI-like environment and only executes PHP CGI scripts, or whether it does its own magic to execute any PHP script, is not something that the FastCGI client has to care about. One FastCGI server is fcgiwrap. It is intended as a generic wrapper around any CGI script. Fcgiwrap is intended to receive a FastCGI-protocol request, execute a particular CGI script using the correct interface (environment, input, output), accept the output, and return it, appropriately modified, to the FastCGI client. While nginx does speak the FastCGI protocol, and does include the "generic" parameters (key/value pairs, effectively) in the communication, it cannot know the full set of parameters that *this* FastCGI server expects, or the particular values that some parameters should have for *this* request. That's where the person configuring nginx comes in -- it is their responsibility to ensure that the nginx-side configuration is appropriate. I said that fcgiwrap "executes a particular CGI script". How does the FastCGI server know which script that is? That is entirely up to the FastCGI server to decide. Typically, it will use the value of the parameter SCRIPT_FILENAME that is given to it.
But maybe your one does something else. Only you can know, based on the documentation or implementation of your FastCGI server. What happens if the client sends more than one value for the parameter SCRIPT_FILENAME? Again, that is entirely up to the FastCGI server to decide. Perhaps it uses the first; perhaps it uses the last; perhaps it uses any of them randomly; perhaps it uses none. What should the client (in this case: nginx) do if it is configured to send more than one value for the parameter SCRIPT_FILENAME (or: for any parameter)? It could try to be clever, and only send the first value it is configured to send. Or only the last. Or only one of them, randomly. Or it could assume that the administrator knows what they are doing, and send whatever it is configured to send. Nginx does the latter. So, with all that out of the way: what is the problem that you are reporting? You have an executable CGI script, /tmp/script, with the contents == #!/bin/sh echo Content-Type: text/plain echo echo The script is running. echo The environment is: env == You want nginx to tell fcgiwrap to execute that script for all incoming requests: == server { listen 8008; location / { fastcgi_pass unix:/tmp/fcgi.sock; fastcgi_param SCRIPT_FILENAME /tmp/script; } } == For this to work, you must have already configured a FastCGI-wrapper to listen on /tmp/fcgi.sock and to use the parameter SCRIPT_FILENAME as the name of the program to execute. > I altered my setup to use fcgiwrap. 
Since then, I keep getting "502 Bad > Gateway" errors, with log entries like this: > > 2018/04/06 21:21:02 [error] 17838#0: *1 upstream prematurely closed > FastCGI stdout while reading response header from upstream, client: > 123.234.123.234, server: test.mydomain.tld, request: "GET / HTTP/1.1", > upstream: "fastcgi://unix:/tmp/cgi.sock:", host: "test.mydomain.tld:8443" Without /tmp/fcgi.sock being correctly available: curl -i http://127.0.0.1:8008/x?k=v returns "HTTP/1.1 502 Bad Gateway" and the nginx error log says what nginx saw the problem to be -- "no such file" or "permission denied" indicate that the socket is not listening correctly; "upstream prematurely closed" suggests that the problem is on the fcgiwrap side -- check its logs, or investigate further. Perhaps /tmp/script is not executable by the fcgiwrap user, or does not provide correct CGI output when run in this limited environment. Or perhaps something else on your system prevents this file in /tmp from being executed -- it's your system, only you know how it is configured and where the logs are that report failures. (Perhaps you have to move /tmp/fcgi.sock to somewhere else; perhaps you have to move /tmp/script to somewhere else.) So, turn on fcgiwrap, ensure that the declared socket is readable and writeable by the nginx user, and ensure that the declared script is executable. (In this case, I just do "env -i /usr/local/bin/fcgiwrap -s unix:/tmp/fcgi.sock"; but you do whatever your system wants in order to achieve the same thing.) Now: curl -i http://127.0.0.1:8008/x?k=v returns "HTTP/1.1 200 OK" with some useful content (in my case: 10 lines of output). It works, hurray. That content includes the HTTP_ variables that were the client request headers. It does not include things like QUERY_STRING and the like, which are common CGI variables. That is because nginx was not configured to send them to fcgiwrap, so fcgiwrap did not expose them to /tmp/script. 
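[Editor's note] The HTTP_ variables mentioned above follow the standard CGI convention (RFC 3875, section 4.1.18): each client request header is exposed as a parameter named HTTP_ plus the header name, uppercased, with dashes replaced by underscores. A small sketch of that mapping (illustrative only, not nginx source):

```python
def header_to_cgi_name(header: str) -> str:
    # "User-Agent" -> "HTTP_USER_AGENT", per the RFC 3875 protocol-specific
    # meta-variable naming convention
    return "HTTP_" + header.upper().replace("-", "_")

# what a request's headers look like once exposed to the CGI script
cgi_env = {header_to_cgi_name(h): v for h, v in
           [("Host", "127.0.0.1:8008"), ("User-Agent", "curl/7.58.0")]}
```

Variables like QUERY_STRING are different: they are not derived from headers, so the client (nginx) must be told to send them explicitly -- which is what the fastcgi_params include in the next step does.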
Maybe your "real" CGI script requires that some of those variables are set, and will fail if they are not. So change the nginx config to include "the usually sensible parameters" -- although only you know what is sensible in your particular case, so you may want to edit this to taste. In the nginx.conf, add one line so that you have

==
server {
    listen 8008;
    location / {
        fastcgi_pass unix:/tmp/fcgi.sock;
        fastcgi_param SCRIPT_FILENAME /tmp/script;
        include fastcgi_params;
    }
}
==

Now "curl -i http://127.0.0.1:8008/x?k=v" returns more output (in my case: 27 lines) including things like DOCUMENT_ROOT and REQUEST_URI and DOCUMENT_URI and all of the other things that you can see in the fastcgi_params file.

> I use fcgiwrap 1.1.0 from 2013, which appears to be the latest available > release according to https://github.com/gnosek/fcgiwrap . > I don't know how to debug this further. Development of fcgiwrap seems to > have ended years ago and the project page is no longer connected. I'd be > grateful for more ideas how to solve this puzzle.

In this test case, I am using "fcgiwrap version 1.0.1" from Grzegorz Nosek, because that one happens to be lying around on this system. It does not need much in the way of active development, since it works, and the interfaces it implements have not changed recently.

All the best, f -- Francis Daly francis at daoine.org

From m16+nginx at monksofcool.net Sat Apr 7 18:13:32 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Sat, 7 Apr 2018 20:13:32 +0200 Subject: Why are my CGI scripts not executed like PHP ? In-Reply-To: <20180407141810.GD3158@daoine.org> References: <92a799ca-dba9-512c-2a0d-00cb15ed7c32@monksofcool.net> <20180407141810.GD3158@daoine.org> Message-ID: <142edb13-34b5-1996-354d-47582d25f7c0@monksofcool.net> On 07.04.18 16:18, Francis Daly wrote: > This mail is a bit long, but I try to cover the points raised in your > previous mails too.

I appreciate you taking the time. Like I said, I am new to nginx.
Years of using Apache caused me to expect certain things to happen in certain ways, and even though I studied nginx documentation and already noted substantial differences, I'm glad for your thorough description. One sentence in particular got me thinking:

> Perhaps /tmp/script is not executable by the fcgiwrap user, or does > not provide correct CGI output when run in this limited environment.

Yesterday I had verified that the CGI test script was executable for all, ran it with "su nginx -c /path/to/test.cgi", and then basically forgot about the script, to focus all my attention on nginx, fcgiwrap, and the other tools in my box. Turns out that the CGI shell script I quickly typed in Vi lacks a small but significant detail. https://tools.ietf.org/html/rfc3875 section 6.2 states "The response comprises a message-header and a message-body, separated by a blank line", and unfortunately I omitted that blank line.

Seeing that, the error message I included in yesterday's email makes more sense to me: "Upstream prematurely closed FastCGI stdout while reading response header". With the blank line absent, all returned data was considered message-header, and when the stream was closed, no message-body had apparently been received. As soon as I added the missing blank line to my test.cgi, all worked smoothly. Here's the relevant section of my nginx configuration:

server {
    listen *:443 ssl;
    server_name test.mydomain.tld;
    # ...logging and basic SSL stuff here...
    root /var/www/localhost/test;
    index test.cgi;
    location ~ \.cgi$ {
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_pass unix:/run/fcgi-nginx-1;
    }
}

Doesn't look like much, and according to Git, that's actually what I used on my very first attempt with spawn-fcgi. I sure wish I had spotted the script problem earlier. Face, meet palm.
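[Editor's note] The failure mode described above is easy to reproduce in miniature: a CGI response is split into headers and body at the first blank line, so when the blank line is missing, everything is consumed as headers and the body never arrives. A simplified sketch of the RFC 3875 rule (not fcgiwrap's actual parser):

```python
def split_cgi_response(raw: bytes):
    """Split CGI output into (headers, body) at the first blank line,
    as required by RFC 3875 section 6.2."""
    for sep in (b"\r\n\r\n", b"\n\n"):
        head, found, body = raw.partition(sep)
        if found:
            return head, body
    # no blank line: the whole output is read as headers and the stream
    # ends before any body is seen -- hence the "upstream prematurely
    # closed ... while reading response header" symptom
    raise ValueError("no blank line between headers and body")

good = b"Content-Type: text/plain\n\nThe script is running.\n"
bad = b"Content-Type: text/plain\nThe script is running.\n"
```

Feeding `good` yields a header block and a body; feeding `bad` (the blank line omitted, as in the broken test.cgi) never finds the separator.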
-Ralph From b631093f-779b-4d67-9ffe-5f6d5b1d3f8a at protonmail.ch Sun Apr 8 13:09:26 2018 From: b631093f-779b-4d67-9ffe-5f6d5b1d3f8a at protonmail.ch (Bob Smith) Date: Sun, 08 Apr 2018 09:09:26 -0400 Subject: Upgradeing from stable to mainline via repo ? Message-ID: <1SWiGHKJfd-2ULwTLnJbSEJ0h_Z4-8ZvLpuvqpGepU8sokglCrFJzS48tn-CePIch68D_lj2d6KKpde3B-teCdTREag07P31hs-u-5xfuNM=@protonmail.ch> Hi, I've currently got stable installed via the NGINX Centos 7 repo. Is there a "supported", seamless way to "upgrade" from stable to mainline via the repo ? Or do I have to go the nuclear option via uninstall and re-install ? Thanks ! Bob -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Apr 9 08:45:46 2018 From: francis at daoine.org (Francis Daly) Date: Mon, 9 Apr 2018 09:45:46 +0100 Subject: Why are my CGI scripts not executed like PHP ? In-Reply-To: <142edb13-34b5-1996-354d-47582d25f7c0@monksofcool.net> References: <92a799ca-dba9-512c-2a0d-00cb15ed7c32@monksofcool.net> <20180407141810.GD3158@daoine.org> <142edb13-34b5-1996-354d-47582d25f7c0@monksofcool.net> Message-ID: <20180409084546.GA3938@daoine.org> On Sat, Apr 07, 2018 at 08:13:32PM +0200, Ralph Seichter wrote: > On 07.04.18 16:18, Francis Daly wrote: Hi there, > Turns out that the CGI shell script I quickly typed in Vi lacks a small > but significant detail. https://tools.ietf.org/html/rfc3875 section 6.2 > states "The response comprises a message-header and a message-body, > separated by a blank line", and unfortunately I omitted that blank line. Good that you found the problem, and got it all working. 
And thanks for sharing the solution with the list -- that will probably help the next person who has a similar problem, so that they now won't need to send a mail :-)

Cheers, f -- Francis Daly francis at daoine.org

From mdounin at mdounin.ru Mon Apr 9 13:30:29 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 9 Apr 2018 16:30:29 +0300 Subject: Nginx throttling issue? In-Reply-To: References: <20180327115506.GF77253@mdounin.ru> <85F52145-2C63-45C9-A581-AE609843A8CB@me.com> Message-ID: <20180409133029.GA77253@mdounin.ru> Hello!

On Fri, Apr 06, 2018 at 07:11:36PM +0200, Richard Stanway via nginx wrote:

> Even though it shouldn't be reaching your limits, limit_req does delay in > 1 second increments which sounds like it could be responsible for this. You

Delays as introduced by limit_req (again, only if explicitly configured) use millisecond granularity. In the particular case configured with rate=10000r/s and burst=600, the maximum possible delay would be 60ms (burst / rate = 600 / 10000 = 0.06 seconds).

-- Maxim Dounin http://mdounin.ru/

From xeioex at nginx.com Tue Apr 10 12:45:09 2018 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 10 Apr 2018 15:45:09 +0300 Subject: nginScript question In-Reply-To: <786faa9105dbfec1a87376b081cca5e7.NginxMailingListEnglish@forum.nginx.org> References: <98aad8c8-8885-58e5-649d-fa8796d7cf96@nginx.com> <786faa9105dbfec1a87376b081cca5e7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6563af4f-bd50-39ce-9835-d19182d09f18@nginx.com> On 13.07.2017 18:14, aledbf wrote: > Thanks!

Hi, I am glad to inform you that since njs-0.2.0 it is possible to create arbitrary http subrequests from js_content phase.
Here you can find the subrequest API description: http://hg.nginx.org/njs/rev/750f7c6f071c Here you can find some usage examples: http://hg.nginx.org/nginx-tests/rev/8e593b068fc0

From nginx-forum at forum.nginx.org Tue Apr 10 12:54:19 2018 From: nginx-forum at forum.nginx.org (Salikhov Dinislam) Date: Tue, 10 Apr 2018 08:54:19 -0400 Subject: More than 65K connections of a proxy on FreeBSD Message-ID: <971bd79eeab70fd1a67b87efe096f200.NginxMailingListEnglish@forum.nginx.org> Hello,

On Linux, NGINX can have more than 65K connections to backends per one local address of a proxy (set via proxy_bind), as Linux supports the IP_BIND_ADDRESS_NO_PORT socket option.

I wonder if it is possible to have more than 65K proxy connections on FreeBSD? And if yes, does NGINX support it?

Thanks in advance.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279368,279368#msg-279368

From mdounin at mdounin.ru Tue Apr 10 14:23:49 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Apr 2018 17:23:49 +0300 Subject: nginx-1.13.12 Message-ID: <20180410142349.GH77253@mdounin.ru> Changes with nginx 1.13.12                                       10 Apr 2018

    *) Bugfix: connections with gRPC backends might be closed unexpectedly
       when returning a large response.

-- Maxim Dounin http://nginx.org/

From kworthington at gmail.com Tue Apr 10 15:00:08 2018 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 10 Apr 2018 11:00:08 -0400 Subject: [nginx-announce] nginx-1.13.12 In-Reply-To: <20180410142353.GI77253@mdounin.ru> References: <20180410142353.GI77253@mdounin.ru> Message-ID: Hello Nginx users,

Now available: Nginx 1.13.12 for Windows https://kevinworthington.com/nginxwin11312 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org.
Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Apr 10, 2018 at 10:23 AM, Maxim Dounin wrote: > Changes with nginx 1.13.12 10 Apr > 2018 > > *) Bugfix: connections with gRPC backends might be closed unexpectedly > when returning a large response. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steven.hartland at multiplay.co.uk Tue Apr 10 20:50:44 2018 From: steven.hartland at multiplay.co.uk (Steven Hartland) Date: Tue, 10 Apr 2018 21:50:44 +0100 Subject: More than 65K connections of a proxy on FreeBSD In-Reply-To: <971bd79eeab70fd1a67b87efe096f200.NginxMailingListEnglish@forum.nginx.org> References: <971bd79eeab70fd1a67b87efe096f200.NginxMailingListEnglish@forum.nginx.org> Message-ID: This may well help: https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/ On 10/04/2018 13:54, Salikhov Dinislam wrote: > Hello, > > On Linux, NINGX can have more than 65K connections to backends per one local > address of a proxy (set via proxy_bind), as Linux support > IP_BIND_ADDRESS_NO_PORT socket option. > > I wonder if it is possible to have more than 65K proxy connections on > FreeBSD? And if yes, does NGINX support it? > > Thanks in advance. 
> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279368,279368#msg-279368 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From al-nginx at none.at Tue Apr 10 20:54:32 2018 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 10 Apr 2018 22:54:32 +0200 Subject: Why are my CGI scripts not executed like PHP ? In-Reply-To: <92a799ca-dba9-512c-2a0d-00cb15ed7c32@monksofcool.net> References: <92a799ca-dba9-512c-2a0d-00cb15ed7c32@monksofcool.net> Message-ID: <2af3e6aa-8208-da58-fece-34828997f920@none.at> Hi,

Am 06.04.2018 um 21:53 schrieb Ralph Seichter: > On 06.04.18 19:04, Richard Stanway wrote: > >> https://www.nginx.com/resources/wiki/start/topics/examples/fcgiwrap/ > > I altered my setup to use fcgiwrap. Since then, I keep getting "502 Bad > Gateway" errors, with log entries like this: > > 2018/04/06 21:21:02 [error] 17838#0: *1 upstream prematurely closed > FastCGI stdout while reading response header from upstream, client: > 123.234.123.234, server: test.mydomain.tld, request: "GET / HTTP/1.1", > upstream: "fastcgi://unix:/tmp/cgi.sock:", host: "test.mydomain.tld:8443" > > I use fcgiwrap 1.1.0 from 2013, which appears to be the latest available > release according to https://github.com/gnosek/fcgiwrap.

Even though you have found a working solution, you can take a look at uwsgi as a CGI daemon. https://uwsgi-docs.readthedocs.io/en/latest/CGI.html

It's a quite powerful and robust piece of software, and it's actively developed. The latest release is from 20180226.

Regards aleks

> I tried both > the Perl script at the location you linked and spawn-fcgi 1.6.4 as an > alternative, but the 502 error pops up regardless.
Permissions for the > socket are as follows: > > $ ls -l /tmp/cgi.sock > srwx------ 1 nginx nginx 0 Apr 6 21:48 /tmp/cgi.sock= > > Interestingly I found this old message of Richard's: > > http://mailman.nginx.org/pipermail/nginx/2014-January/041963.html > > Unfortunately no amount of meddling with SCRIPT_FILENAME, including > setting the absolute path to the CGI script, made any difference for me. > > I don't know how to debug this further. Development of fcgiwrap seems to > have ended years ago and the project page is no longer connected. I'd be > grateful for more ideas how to solve this puzzle. > > -Ralph > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx >

From al-nginx at none.at Tue Apr 10 20:59:23 2018 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 10 Apr 2018 22:59:23 +0200 Subject: Upgradeing from stable to mainline via repo ? In-Reply-To: <1SWiGHKJfd-2ULwTLnJbSEJ0h_Z4-8ZvLpuvqpGepU8sokglCrFJzS48tn-CePIch68D_lj2d6KKpde3B-teCdTREag07P31hs-u-5xfuNM=@protonmail.ch> References: <1SWiGHKJfd-2ULwTLnJbSEJ0h_Z4-8ZvLpuvqpGepU8sokglCrFJzS48tn-CePIch68D_lj2d6KKpde3B-teCdTREag07P31hs-u-5xfuNM=@protonmail.ch> Message-ID: <73484af5-7244-5a48-52c5-6b1b50802d04@none.at> Hi.

Am 08.04.2018 um 15:09 schrieb Bob Smith via nginx: > Hi, > > I've currently got stable installed via the NGINX Centos 7 repo. > > Is there a "supported", seamless way to "upgrade" from stable to > mainline via the repo? Or do I have to go the nuclear option via > uninstall and re-install?

Well, "supported" in open source means "try it" ;-) I would just replace the stable `nginx.repo` with the mainline one and run `yum update`.

> Thanks !
> > Bob

Regards, aleks

From jeff at p27.eu Wed Apr 11 04:19:57 2018 From: jeff at p27.eu (Jeff Abrahamson) Date: Wed, 11 Apr 2018 06:19:57 +0200 Subject: Monitoring http returns Message-ID: I want to monitor nginx better: http returns (e.g., how many 500's, how many 404's, how many 200's, etc.), as well as request rates, response times, etc. All the solutions I've found start with "set up something to watch and parse your logs, then ..."

Here's one of the better examples of that: https://www.scalyr.com/community/guides/how-to-monitor-nginx-the-essential-guide

Perhaps I'm wrong to find this curious. It seems somewhat heavy and inefficient to put this functionality into log watching, which means another service and being sensitive to an eventual change in log format.

Is this, indeed, the recommended solution?

And, for my better understanding, can anyone explain why this makes more sense than native nginx support of sending UDP packets to a monitor collector (in our case, telegraf)?

-- Jeff Abrahamson +33 6 24 40 01 57 +44 7920 594 255 http://p27.eu/jeff/

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From gfrankliu at gmail.com Wed Apr 11 04:50:52 2018 From: gfrankliu at gmail.com (Frank Liu) Date: Tue, 10 Apr 2018 21:50:52 -0700 Subject: Monitoring http returns In-Reply-To: References: Message-ID: This module can get you started: https://github.com/gfrankliu/nginx-http-reqstat

On Tue, Apr 10, 2018 at 9:19 PM, Jeff Abrahamson wrote: > I want to monitor nginx better: http returns (e.g., how many 500's, how > many 404's, how many 200's, etc.), as well as request rates, response > times, etc. All the solutions I've found start with "set up something to > watch and parse your logs, then ..." > > Here's one of the better examples of that: > > https://www.scalyr.com/community/guides/how-to- > monitor-nginx-the-essential-guide > > Perhaps I'm wrong to find this curious.
It seems somewhat heavy and > inefficient to put this functionality into log watching, which means > another service and being sensitive to an eventual change in log format. > > Is this, indeed, the recommended solution? > > And, for my better understanding, can anyone explain why this makes more > sense than native nginx support of sending UDP packets to a monitor > collector (in our case, telegraf)? > > -- > > Jeff Abrahamson > +33 6 24 40 01 57 > +44 7920 594 255 > http://p27.eu/jeff/ > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx >

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From Ajay_Sonawane at symantec.com Wed Apr 11 05:11:54 2018 From: Ajay_Sonawane at symantec.com (Ajay Sonawane) Date: Wed, 11 Apr 2018 05:11:54 +0000 Subject: Nginx as reverse proxy for https traffic Message-ID: I am trying to use Nginx as a reverse proxy in an environment where clients connect to my server (https://myserver:10443). I am trying to use Nginx as a reverse proxy so that clients will connect to the Nginx proxy and Nginx will forward all requests to the backend server. The communication is SSL communication on port 10443. I have installed and configured Nginx but am still not able to connect to the server through the proxy. The configuration is below. Not sure what I have done wrong. As of now, my backend is speaking to the proxy over https on port 10443, but eventually it will be http on port 10443.
http {
    server {
        listen 10443;
        ssl on;

        access_log /var/log/nginx/ssl-access.log;
        error_log /var/log/nginx/ssl-error.log;

        location / {
            #chunked_transfer_encoding on;
            proxy_buffering off;
            proxy_pass https://MYSERVER:10443;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            #proxy_redirect off;
            #proxy_ssl_session_reuse off;
        }

        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_session_timeout 10m;
        keepalive_timeout 60;
        ssl_session_cache builtin:1000 shared:SSL:10m;
        ssl_ciphers HIGH:!aNULL:!aNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
        ssl_prefer_server_ciphers on;
        ssl_certificate /etc/nginx/certs/endpoint/nginx.cer;
        ssl_certificate_key /etc/nginx/certs/endpoint/nginx_d.key;

        #ssl_client_certificate /etc/nginx/certs/endpoint/nginx.cer;
        #ssl_verify_client off;
        #ssl_verify_depth 2;
    }
}

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From peter_booth at me.com Wed Apr 11 05:17:14 2018 From: peter_booth at me.com (Peter Booth) Date: Wed, 11 Apr 2018 01:17:14 -0400 Subject: Monitoring http returns In-Reply-To: References: Message-ID: <36AB6576-85D7-42AD-A529-5054CA6BF4E6@me.com> Jeff,

There are some very good reasons for doing things in what sounds like a heavy, inefficient manner. The first point is that there are some big differences between application code/business logic and monitoring code: business logic, which is what your nginx instance is doing, is what makes you money. Maximizing uptime is critical. Monitoring code typically has a different release cycle; often it will be deployed in a tactical, reactive fashion. By decoupling the monitoring from the application logic you protect against the risk that your monitoring code breaks your application, which would be a Bad Thing. The converse point is that your monitoring software is most valuable when your application is failing, or is overloaded.
That's why it's a good thing if your monitoring code doesn't depend upon the health of your plant's infrastructure.

One example of a product that is in some ways comparable to nginx that did things the other way was the early versions of IBM's WebSphere application server. Version 2 persisted all configuration settings as EJBs. That meant that there was no way to view a WebSphere instance's configuration when the app server wasn't running. The product's designers were so hungry to drink their EJB Kool-Aid that they didn't stop to ask "Is this smart?" This is why, back in 1998, one could watch an IBM professional services consultant spend weeks installing a WebSphere instance, or you could download and install WebLogic server in 15 minutes yourself.

Tailing a log file doesn't sound sexy, but it's also pretty hard to mess up. I monitored a high-traffic email site with a very short Ruby script that would tail an nginx log, pushing messages ten at a time as UDP datagrams to an influxdb. The script would do its thing for 15 mins then die; cron ensured a new instance started every 15 minutes. It was more efficient than a shell script because it didn't start new processes in a pipeline.

I like the Scalyr guide, but I disagree with their advice on active monitoring. I think it's smarter to use real user requests to test whether servers are up. I have seen many high-profile sites that end up serving more synthetic requests than real customer-initiated requests.

> On 11 Apr 2018, at 12:19 AM, Jeff Abrahamson wrote: > > I want to monitor nginx better: http returns (e.g., how many 500's, how many 404's, how many 200's, etc.), as well as request rates, response times, etc. All the solutions I've found start with "set up something to watch and parse your logs, then ..." > > Here's one of the better examples of that: > > https://www.scalyr.com/community/guides/how-to-monitor-nginx-the-essential-guide > Perhaps I'm wrong to find this curious.
It seems somewhat heavy and inefficient to put this functionality into log watching, which means another service and being sensitive to an eventual change in log format. > > Is this, indeed, the recommended solution? > > And, for my better understanding, can anyone explain why this makes more sense than native nginx support of sending UDP packets to a monitor collector (in our case, telegraf)? > -- > > Jeff Abrahamson > +33 6 24 40 01 57 > +44 7920 594 255 > > http://p27.eu/jeff/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeff at p27.eu Wed Apr 11 06:04:05 2018 From: jeff at p27.eu (Jeff Abrahamson) Date: Wed, 11 Apr 2018 08:04:05 +0200 Subject: Monitoring http returns In-Reply-To: <36AB6576-85D7-42AD-A529-5054CA6BF4E6@me.com> References: <36AB6576-85D7-42AD-A529-5054CA6BF4E6@me.com> Message-ID: <20180411060405.hkxuqz764ffjuwwk@birdsong> On Wed, Apr 11, 2018 at 01:17:14AM -0400, Peter Booth wrote: > There are some very good reasons for doing things in what sounds > like a heavy inefficient manner. I suspected, thanks for the explanations. > The first point is that there are some big differences between > application code /business logic and monitoring code: > > [...] good summary, I agree with you. > tailing a log file doesnt sound sexy, but its also pretty hard to > mess it up. I monitored a high traffic email site with a very short > Ruby script that would tail an nginx log, pushing messages ten at a > time as UDP datagrams to an influxdb. The script would do its thing > for 15 mins then die. cron ensured a new instance started every 15 > minutes. It was more efficient than a shell script because it didn't > start new processes in a pipeline. It's hard to mess up as long as you're not interested in exactly-once. 
;-) The tail solution has the particularity that (1) it could miss things if the short gap between process death and process start sees more events than tail catches at startup, or if the log file rotates a few seconds into that 15-minute period, and (2) it could duplicate things in case of very few events in that period. Now, with telegraf/influx, duplicates aren't a concern, because influx keys on time, and our site is probably not getting so much traffic that a tail restart is a big deal, although log rotation could lead to gaps we don't like. Of course, this is why Logwatch was written...

> I like the Scalyr guide but I disagree with their advice on active > monitoring. I think it's smarter to use real user requests to test if > servers are up. I have seen many high profile sites that end up > serving more synthetic requests than real customer initiated > requests.

I'm not sure I understood what you mean by "active monitoring". I've understood "sending http queries to see if they are handled properly". In that context: I think both submitting queries (from outside one's own network) and passively watching stats on the service itself are essential. Passively watching stats gives me information on internal state, useful in itself but also when debugging problems. Active monitoring from a different network can alert me to problems that may not be specific to any one service, and may even be at the network level.
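[Editor's note] The tail-and-count approach discussed in this thread can be sketched in a few lines: extract the status code from each combined-format access log line and aggregate, with the UDP push to a collector such as telegraf/influx left out. The regex and sample lines are illustrative, not taken from any particular deployment:

```python
import re
from collections import Counter

# combined log format: ... "GET / HTTP/1.1" 200 612 "-" "curl/..."
# the status code is the 3-digit field right after the quoted request
STATUS_RE = re.compile(r'" (\d{3}) ')

def count_statuses(lines):
    """Tally HTTP status codes seen in an iterable of access-log lines."""
    counts = Counter()
    for line in lines:
        m = STATUS_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

sample = [
    '127.0.0.1 - - [11/Apr/2018:06:04:05 +0200] "GET / HTTP/1.1" 200 612 "-" "curl/7.58.0"',
    '127.0.0.1 - - [11/Apr/2018:06:04:06 +0200] "GET /missing HTTP/1.1" 404 169 "-" "curl/7.58.0"',
    '127.0.0.1 - - [11/Apr/2018:06:04:07 +0200] "POST /api HTTP/1.1" 500 0 "-" "curl/7.58.0"',
]
```

In a real tailer this function would be fed by a follow-the-file loop and the counts flushed periodically as UDP datagrams; it also shows the fragility Jeff mentions, since the regex silently breaks if the log format changes.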
> Here's one of the better examples of that: > > https://www.scalyr.com/community/guides/how-to-monitor-nginx-the-essential-guide > > Perhaps I'm wrong to find this curious. It seems somewhat heavy > and inefficient to put this functionality into log watching, > which means another service and being sensitive to an eventual > change in log format. > > Is this, indeed, the recommended solution? > > And, for my better understanding, can anyone explain why this > makes more sense than native nginx support of sending UDP > packets to a monitor collector (in our case, telegraf)? > > -- > > Jeff Abrahamson > +33 6 24 40 01 57 > +44 7920 594 255 > > http://p27.eu/jeff/

From al-nginx at none.at Wed Apr 11 07:30:30 2018 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 11 Apr 2018 09:30:30 +0200 Subject: Nginx as reverse proxy for https traffic In-Reply-To: References: Message-ID: <01f7868d-87f5-3d18-a919-ed3f96652a95@none.at> Am 11.04.2018 um 07:11 schrieb Ajay Sonawane: > I am trying to use Nginx as a reverse proxy in an environment where > clients connects to my server (https://myserver:10443 > ). I am trying to use Nginx as a reverse proxy > so that client will connect to Nginx proxy and Nginx will forward all > requests to backend server. The communication is ssl communication on > port 10443. I have installed and configured Nginx but still not able to > connect to server through proxy. The configuration is > > Not sure what I have done wrong. As of now, my backend is speaking to > proxy on https on port 10443, but eventually it will be http on port 10443.

What's in the global and http server error log? Which version of nginx do you use?

Best regards Aleks

> http
> {
>     server
>     {
>         listen 10443;
>         ssl on;
>
>         access_log /var/log/nginx/ssl-access.log;
>         error_log /var/log/nginx/ssl-error.log;
>
>         location /
>         {
>             #chunked_transfer_encoding on;
>             proxy_buffering off;
>             proxy_pass https://MYSERVER:10443;
>             proxy_set_header Host $host;
>             proxy_set_header X-Real-IP $remote_addr;
>             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>             proxy_set_header X-Forwarded-Proto $scheme;
>             #proxy_redirect off;
>             #proxy_ssl_session_reuse off;
>         }
>
>         ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
>         ssl_session_timeout 10m;
>         keepalive_timeout 60;
>         ssl_session_cache builtin:1000 shared:SSL:10m;
>         ssl_ciphers HIGH:!aNULL:!aNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
>         ssl_prefer_server_ciphers on;
>         ssl_certificate /etc/nginx/certs/endpoint/nginx.cer;
>         ssl_certificate_key /etc/nginx/certs/endpoint/nginx_d.key;
>
>         #ssl_client_certificate /etc/nginx/certs/endpoint/nginx.cer;
>         #ssl_verify_client off;
>         #ssl_verify_depth 2;
>     }
> }
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From Ajay_Sonawane at symantec.com Wed Apr 11 08:13:25 2018 From: Ajay_Sonawane at symantec.com (Ajay Sonawane) Date: Wed, 11 Apr 2018 08:13:25 +0000 Subject: [EXT] Re: Nginx as reverse proxy for https traffic In-Reply-To: <01f7868d-87f5-3d18-a919-ed3f96652a95@none.at> References: <01f7868d-87f5-3d18-a919-ed3f96652a95@none.at> Message-ID: Nginx version 1.13.7.

There are no logs in the error.log file. The access log shows "POST /HTTP /1.1 408 ..." entries. Nothing specific about whether the connection is established or not. I need some troubleshooting steps as well to know what exactly is happening. At the client side, the SSL handshake is completed but there are no logs after that.
-----Original Message----- From: Aleksandar Lazic [mailto:al-nginx at none.at] Sent: Wednesday, April 11, 2018 1:01 PM To: nginx at nginx.org; Ajay Sonawane Subject: [EXT] Re: Nginx as reverse proxy for https traffic

Am 11.04.2018 um 07:11 schrieb Ajay Sonawane: > I am trying to use Nginx as a reverse proxy in an environment where > clients connects to my server (https://myserver:10443 > ). I am trying to use Nginx as a reverse > proxy so that client will connect to Nginx proxy and Nginx will > forward all requests to backend server. The communication is ssl > communication on port 10443. I have installed and configured Nginx but > still not able to connect to server through proxy. The configuration > is > > Not sure what I have done wrong. As of now, my backend is speaking to > proxy on https on port 10443, but eventually it will be http on port 10443.

What's in the global and http server error log? Which version of nginx do you use?

Best regards Aleks

> http
> {
>     server
>     {
>         listen 10443;
>         ssl on;
>
>         access_log /var/log/nginx/ssl-access.log;
>         error_log /var/log/nginx/ssl-error.log;
>
>         location /
>         {
>             #chunked_transfer_encoding on;
>             proxy_buffering off;
>             proxy_pass https://MYSERVER:10443;
>             proxy_set_header Host $host;
>             proxy_set_header X-Real-IP $remote_addr;
>             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>             proxy_set_header X-Forwarded-Proto $scheme;
>             #proxy_redirect off;
>             #proxy_ssl_session_reuse off;
>         }
>
>         ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
>         ssl_session_timeout 10m;
>         keepalive_timeout 60;
>         ssl_session_cache builtin:1000 shared:SSL:10m;
>         ssl_ciphers HIGH:!aNULL:!aNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
>         ssl_prefer_server_ciphers on;
>         ssl_certificate /etc/nginx/certs/endpoint/nginx.cer;
>      ssl_certificate_key /etc/nginx/certs/endpoint/nginx_d.key;
>
>      #ssl_client_certificate /etc/nginx/certs/endpoint/nginx.cer;
>      #ssl_verify_client off;
>      #ssl_verify_depth 2;
>    }
> }
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org  Wed Apr 11 09:22:53 2018
From: nginx-forum at forum.nginx.org (Salikhov Dinislam)
Date: Wed, 11 Apr 2018 05:22:53 -0400
Subject: More than 65K connections of a proxy on FreeBSD
In-Reply-To: 
References: 
Message-ID: <23fcb5e0dd9be6c2ec15a137763f9afe.NginxMailingListEnglish@forum.nginx.org>

Unfortunately, the article says nothing about 65K+ connections _per_single_ local address.
The use of IP_BIND_ADDRESS_NO_PORT on Linux was mentioned in the comments, and there's nothing about FreeBSD.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279368,279394#msg-279394

From r at roze.lv  Wed Apr 11 10:13:19 2018
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 11 Apr 2018 13:13:19 +0300
Subject: More than 65K connections of a proxy on FreeBSD
In-Reply-To: <23fcb5e0dd9be6c2ec15a137763f9afe.NginxMailingListEnglish@forum.nginx.org>
References: <23fcb5e0dd9be6c2ec15a137763f9afe.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <000001d3d17d$b700da80$25028f80$@roze.lv>

> Unfortunately, the article says nothing about 65K+ connections _per_single_
> local address.
> The use of IP_BIND_ADDRESS_NO_PORT on Linux was mentioned in the comments,
> and there's nothing about FreeBSD.

Correct me if I'm wrong, but in the case of IP_BIND_ADDRESS_NO_PORT doesn't the unique 4-tuple (sourceip+sourceport+destip+destport) limit still remain?

You only defer/delegate the assignment of the ephemeral port to the kernel at connect() time rather than at bind() time (when the destination is not yet known), so in the case of a single source IP and a single backend/port the ~65k limit still exists.

rr

From nginx-forum at forum.nginx.org  Wed Apr 11 10:37:32 2018
From: nginx-forum at forum.nginx.org (Salikhov Dinislam)
Date: Wed, 11 Apr 2018 06:37:32 -0400
Subject: More than 65K connections of a proxy on FreeBSD
In-Reply-To: <000001d3d17d$b700da80$25028f80$@roze.lv>
References: <000001d3d17d$b700da80$25028f80$@roze.lv>
Message-ID: <79005d2a43352e2085fccc05d846a543.NginxMailingListEnglish@forum.nginx.org>

> Correct me if I'm wrong, but in the case of IP_BIND_ADDRESS_NO_PORT doesn't
> the unique 4-tuple (sourceip+sourceport+destip+destport) limit still remain?

Yes, it still remains.

> You only defer/delegate the assignment of the ephemeral port to the kernel
> at connect() time rather than at bind() time (when the destination is not
> yet known), so in the case of a single source IP and a single backend/port
> the ~65k limit still exists.

You are right for the case of a single source IP and a single backend-port pair. The thing is that in the case of a single source IP and multiple backend-port pairs, the overall number of connections is still limited to 65K: without IP_BIND_ADDRESS_NO_PORT the port is chosen at bind() time, before the destination is known, so each local port can be used only once. Linux's IP_BIND_ADDRESS_NO_PORT raises the limit to 65K connections per backend-port pair (the single source IP remains the same for all connections to all backends), and NGINX supports the feature. So I wonder if there's something like that on FreeBSD.
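Independently of what the FreeBSD kernel offers, one portable way to raise the overall ceiling is to give the proxy host several local source addresses and spread upstream connections across them with `proxy_bind` (which accepts variables since nginx 1.3.12); each additional address contributes its own ~65K ephemeral ports per backend. A sketch, where the 10.0.0.x addresses and the `backend` upstream name are placeholders:

```nginx
# Pick a source address pseudo-randomly per client connection.
# The addresses below are placeholders and must actually be
# configured on the host running nginx.
split_clients "$remote_addr$remote_port" $src_addr {
    50%   10.0.0.10;
    *     10.0.0.11;
}

server {
    listen 80;

    location / {
        proxy_bind $src_addr;
        proxy_pass http://backend;
    }
}
```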
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279368,279396#msg-279396

From al-nginx at none.at  Wed Apr 11 10:59:11 2018
From: al-nginx at none.at (Aleksandar Lazic)
Date: Wed, 11 Apr 2018 12:59:11 +0200
Subject: [EXT] Re: Nginx as reverse proxy for https traffic
In-Reply-To: 
References: <01f7868d-87f5-3d18-a919-ed3f96652a95@none.at>
Message-ID: <3fe7a8fa-47c2-dc15-c069-530f45065b4e@none.at>

On 11.04.2018 at 10:13, Ajay Sonawane wrote:
> Nginx version 1.13.7
>
> There are no logs in the error.log file. The access log shows
> "POST / HTTP/1.1" 408 ... entries; nothing specific about whether the
> connection is established or not. I need some troubleshooting steps as
> well to know what exactly is happening.

Can you please turn on debug logging?

https://nginx.org/en/docs/debugging_log.html

Depending on your installation, you will need to stop the normal nginx and start nginx-debug instead.

> On the client side, the SSL handshake is completed, but there are no
> logs after that.
>
> -----Original Message-----
> From: Aleksandar Lazic [mailto:al-nginx at none.at]
> Sent: Wednesday, April 11, 2018 1:01 PM
> To: nginx at nginx.org; Ajay Sonawane
> Subject: [EXT] Re: Nginx as reverse proxy for https traffic
>
> On 11.04.2018 at 07:11, Ajay Sonawane wrote:
>> I am trying to use Nginx as a reverse proxy in an environment where
>> clients connect to my server (https://myserver:10443). I am trying to
>> use Nginx as a reverse proxy so that the client will connect to the
>> Nginx proxy and Nginx will forward all requests to the backend server.
>> The communication is SSL communication on port 10443. I have installed
>> and configured Nginx but am still not able to connect to the server
>> through the proxy. The configuration is
>>
>> Not sure what I have done wrong. As of now, my backend is speaking to
>> the proxy over https on port 10443, but eventually it will be http on
>> port 10443.
>
> What's in the global and http server error log?
> Which version of nginx do you use?
>
> Best regards
> Aleks
>
>> http
>> {
>>    server
>>    {
>>      listen 10443;
>>      ssl on;
>>
>>      access_log /var/log/nginx/ssl-access.log;
>>      error_log /var/log/nginx/ssl-error.log;
>>
>>      location /
>>      {
>>        #chunked_transfer_encoding on;
>>        proxy_buffering off;
>>        proxy_pass https://MYSERVER:10443;
>>        proxy_set_header Host $host;
>>        proxy_set_header X-Real-IP $remote_addr;
>>        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>>        proxy_set_header X-Forwarded-Proto $scheme;
>>        #proxy_redirect off;
>>        #proxy_ssl_session_reuse off;
>>      }
>>
>>      ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
>>      ssl_session_timeout 10m;
>>      keepalive_timeout 60;
>>      ssl_session_cache builtin:1000 shared:SSL:10m;
>>      ssl_ciphers HIGH:!aNULL:!aNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
>>      ssl_prefer_server_ciphers on;
>>      ssl_certificate /etc/nginx/certs/endpoint/nginx.cer;
>>      ssl_certificate_key /etc/nginx/certs/endpoint/nginx_d.key;
>>
>>      #ssl_client_certificate /etc/nginx/certs/endpoint/nginx.cer;
>>      #ssl_verify_client off;
>>      #ssl_verify_depth 2;
>>    }
>> }

From gfrankliu at gmail.com  Wed Apr 11 17:12:52 2018
From: gfrankliu at gmail.com (Frank Liu)
Date: Wed, 11 Apr 2018 10:12:52 -0700
Subject: TLS 1.3
Message-ID: 

https://trac.nginx.org/nginx/roadmap says

- [in progress] TLS 1.3 support

Now that milestone 1.13 has only 6 days left, is this still in the plan or are we pushing it to 1.15?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Wed Apr 11 17:42:35 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 11 Apr 2018 20:42:35 +0300
Subject: TLS 1.3
In-Reply-To: 
References: 
Message-ID: <20180411174234.GL77253@mdounin.ru>

Hello!
On Wed, Apr 11, 2018 at 10:12:52AM -0700, Frank Liu wrote:

> https://trac.nginx.org/nginx/roadmap says
>
> - [in progress] TLS 1.3 support
>
> Now that milestone 1.13 has only 6 days left, is this still in the plan or
> are we pushing it to 1.15?

Basic TLS 1.3 support has been in nginx 1.13.x since 1.13.0. There are some remaining bits which we are planning to work on to further improve things; notably, there are problems with session reuse when using TLS 1.3 to upstream servers, and there is no support for 0-RTT mode, aka early data. These will be worked on in nginx 1.15.x.

--
Maxim Dounin
http://mdounin.ru/

From peter_booth at me.com  Thu Apr 12 00:59:00 2018
From: peter_booth at me.com (Peter Booth)
Date: Wed, 11 Apr 2018 20:59:00 -0400
Subject: Monitoring http returns
In-Reply-To: <20180411060405.hkxuqz764ffjuwwk@birdsong>
References: <36AB6576-85D7-42AD-A529-5054CA6BF4E6@me.com> <20180411060405.hkxuqz764ffjuwwk@birdsong>
Message-ID: <10A02E86-312B-44AD-A281-AF27C9828B54@me.com>

So under the covers, things are rarely as pretty as one hopes. In the example quoted, the influxdb instance was actually a pool of different pre-1.0 instances, each of which had different bugs or fixes. The log script actually pushed 15:30 worth of data to intentionally overlap.

The most surprising observation was that substantially more than 50% of the web traffic was from bots, scrapers, test tools and other nonhuman user agents (over 300 different signatures). If you accept as a given that sometimes there will be an overload situation where users will abandon carts, you then have to ask "how much cash are we leaving on the table because of these nonhuman requests (which included more than a dozen different flavors of active testing)?"

There's a human psychology element to this issue. People don't find it easy to think probabilistically, and accepting the inevitability of overload requires a certain amount of bravery that not all techies can muster.
It's easier to act like a Dilbert character and say "anything less than 100% uptime is unacceptable".

Regarding active testing: if we have a shopper who is connecting via FIOS from their home in Minnesota and experiencing acceptable performance, what more do we know from a Gomez, Pingdom or Keynote request that originated from a data center in Minnesota? At least one of these three was colocated on the same VLANs as a large CDN vendor. The good news that the test tools reported was invariably more positive than real customer experiences, hence the big surge in interest in RUM.

The challenge in a large web site is the vast number of parties who have a vested interest in the site being up, and each of them figured "request a page a minute is no big deal." But the aggregate picture was ugly. Bad site structure will cause Google and Bing and other search engines to scrape in a pathological manner.

Sent from my iPhone

> On Apr 11, 2018, at 2:04 AM, Jeff Abrahamson wrote:
>
>> On Wed, Apr 11, 2018 at 01:17:14AM -0400, Peter Booth wrote:
>> There are some very good reasons for doing things in what sounds
>> like a heavy inefficient manner.
>
> I suspected, thanks for the explanations.
>
>> The first point is that there are some big differences between
>> application code /business logic and monitoring code:
>>
>> [...]
>
> good summary, I agree with you.
>
>> tailing a log file doesnt sound sexy, but its also pretty hard to
>> mess it up. I monitored a high traffic email site with a very short
>> Ruby script that would tail an nginx log, pushing messages ten at a
>> time as UDP datagrams to an influxdb. The script would do its thing
>> for 15 mins then die. cron ensured a new instance started every 15
>> minutes. It was more efficient than a shell script because it didn't
>> start new processes in a pipeline.
>
> It's hard to mess up as long as you're not interested in
> exactly-once.
;-) > > The tail solution has the particularity that (1) it could miss things > if the short gap between process death and process start sees more > events than tail catches at startup or if the log file rotates a few > seconds into that 15 minute period, and (2) it could duplicate things > in case of very few events in that period. Now, with telegraf/influx, > duplicates aren't a concern, because influx keys on time, and our site > is probably not getting so much traffic that a tail restart is a big > deal, although log rotation could lead to gaps we don't like. > > Of course, this is why Logwatch was written... > > >> I like the scalar guide but I disagree with their advice on active >> monitoring I think its smarter to use real user requests to test if >> servers are up. i have seen many high profile sites that end up >> serving more synthetic requests than real customer initiated >> requests. > > I'm not sure I understood what you mean by "active monitoring". I've > understood "sending http queries to see if they are handled properly". > > In that context: I think both submitting queries (from outside one's > own network) and passively watching stats on the service itself are > essential. Passively watching stats gives me information on internal > state, useful in itself but also when debugging problems. Active > monitoring from a different network can alert me to problems that may > not be specific to any one service, maybe even are at the network > level. > > Of course, yes, active monitoring shouldn't be trying to DoS my > service. ;-) > > Jeff Abrahamson > https://www.p27.eu/jeff/ > > >> On 11 Apr 2018, at 12:19 AM, Jeff Abrahamson wrote: >> >> I want to monitor nginx better: http returns (e.g., how many >> 500's, how many 404's, how many 200's, etc.), as well as request >> rates, response times, etc. All the solutions I've found start >> with "set up something to watch and parse your logs, then ..." 
>> Here's one of the better examples of that:
>>
>> https://www.scalyr.com/community/guides/how-to-monitor-nginx-the-essential-guide
>>
>> Perhaps I'm wrong to find this curious. It seems somewhat heavy
>> and inefficient to put this functionality into log watching,
>> which means another service and being sensitive to an eventual
>> change in log format.
>>
>> Is this, indeed, the recommended solution?
>>
>> And, for my better understanding, can anyone explain why this
>> makes more sense than native nginx support of sending UDP
>> packets to a monitor collector (in our case, telegraf)?
>>
>> --
>>
>> Jeff Abrahamson
>> +33 6 24 40 01 57
>> +44 7920 594 255
>>
>> http://p27.eu/jeff/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From peter_booth at me.com  Thu Apr 12 01:03:47 2018
From: peter_booth at me.com (Peter Booth)
Date: Wed, 11 Apr 2018 21:03:47 -0400
Subject: Monitoring http returns
In-Reply-To: <20180411060405.hkxuqz764ffjuwwk@birdsong>
References: <36AB6576-85D7-42AD-A529-5054CA6BF4E6@me.com> <20180411060405.hkxuqz764ffjuwwk@birdsong>
Message-ID: <4DA16206-3CA1-4B5B-B633-EB9E5A28E93C@me.com>

Just to be clear, I'm not contrasting active synthetic testing with monitoring resource consumption. I think that the highest-value variable is $, or those variables that have the highest correlation to profit. The real customer experience is probably #2 after sales.

Monitoring things like active connections, cache hit ratios etc. is important to understand "what is normal?" It's easy for our mental model of how a site works to differ markedly from reality.

Sent from my iPhone

> On Apr 11, 2018, at 2:04 AM, Jeff Abrahamson wrote:
>
>> On Wed, Apr 11, 2018 at 01:17:14AM -0400, Peter Booth wrote:
>> There are some very good reasons for doing things in what sounds
>> like a heavy inefficient manner.
>
> I suspected, thanks for the explanations.
> > >> The first point is that there are some big differences between >> application code /business logic and monitoring code: >> >> [...] > > good summary, I agree with you. > > >> tailing a log file doesnt sound sexy, but its also pretty hard to >> mess it up. I monitored a high traffic email site with a very short >> Ruby script that would tail an nginx log, pushing messages ten at a >> time as UDP datagrams to an influxdb. The script would do its thing >> for 15 mins then die. cron ensured a new instance started every 15 >> minutes. It was more efficient than a shell script because it didn't >> start new processes in a pipeline. > > It's hard to mess up as long as you're not interested in > exactly-once. ;-) > > The tail solution has the particularity that (1) it could miss things > if the short gap between process death and process start sees more > events than tail catches at startup or if the log file rotates a few > seconds into that 15 minute period, and (2) it could duplicate things > in case of very few events in that period. Now, with telegraf/influx, > duplicates aren't a concern, because influx keys on time, and our site > is probably not getting so much traffic that a tail restart is a big > deal, although log rotation could lead to gaps we don't like. > > Of course, this is why Logwatch was written... > > >> I like the scalar guide but I disagree with their advice on active >> monitoring I think its smarter to use real user requests to test if >> servers are up. i have seen many high profile sites that end up >> serving more synthetic requests than real customer initiated >> requests. > > I'm not sure I understood what you mean by "active monitoring". I've > understood "sending http queries to see if they are handled properly". > > In that context: I think both submitting queries (from outside one's > own network) and passively watching stats on the service itself are > essential. 
Passively watching stats gives me information on internal > state, useful in itself but also when debugging problems. Active > monitoring from a different network can alert me to problems that may > not be specific to any one service, maybe even are at the network > level. > > Of course, yes, active monitoring shouldn't be trying to DoS my > service. ;-) > > Jeff Abrahamson > https://www.p27.eu/jeff/ > > >> On 11 Apr 2018, at 12:19 AM, Jeff Abrahamson wrote: >> >> I want to monitor nginx better: http returns (e.g., how many >> 500's, how many 404's, how many 200's, etc.), as well as request >> rates, response times, etc. All the solutions I've found start >> with "set up something to watch and parse your logs, then ..." >> >> Here's one of the better examples of that: >> >> https://www.scalyr.com/community/guides/how-to-monitor-nginx-the-essential-guide >> >> Perhaps I'm wrong to find this curious. It seems somewhat heavy >> and inefficient to put this functionality into log watching, >> which means another service and being sensitive to an eventual >> change in log format. >> >> Is this, indeed, the recommended solution? >> >> And, for my better understanding, can anyone explain why this >> makes more sense than native nginx support of sending UDP >> packets to a monitor collector (in our case, telegraf)? >> >> -- >> >> Jeff Abrahamson >> +33 6 24 40 01 57 >> +44 7920 594 255 >> >> http://p27.eu/jeff/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From adrian at staff.netvirtue.com.au Thu Apr 12 06:55:01 2018 From: adrian at staff.netvirtue.com.au (Adrian Acosta) Date: Thu, 12 Apr 2018 16:55:01 +1000 Subject: sub_filter (CSS,JS) Message-ID: Hi Everyone! We're using nginx as a reverse proxy for a staging system so customers can view the contents of their directories without a resolving domain. 
It's working really well, but the issue is that references to their domain aren't being filtered inside their CSS or JS files, which are also loading via the proxy URL.

Currently the configuration of each host is as follows:

server {
    server_name example-com-au.tempexample.com.au;

    location / {
        proxy_set_header Host real-domain.com;
        add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive";
        proxy_pass http://ip.ip.ip.ip/;
        sub_filter_types text/html text/css text/javascript;
        proxy_set_header Accept-Encoding "*";
        sub_filter "http://real-domain.com/" "http://example-com-au.tempexample.com.au/";
        sub_filter_once off;
    }
}

Does anybody know if we can further configure sub_filter so that the contents of JS and CSS files are also filtered when loaded via the proxy? It has been very hard to find relevant information; the docs aren't overly helpful, and even Stack Overflow hasn't yielded good results.

Thanks in advance!

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rentorbuy at yahoo.com  Thu Apr 12 07:25:27 2018
From: rentorbuy at yahoo.com (Vieri)
Date: Thu, 12 Apr 2018 07:25:27 +0000 (UTC)
Subject: push directive is forcing local stream to fail
References: <1689601253.1412848.1523517927320.ref@mail.yahoo.com>
Message-ID: <1689601253.1412848.1523517927320@mail.yahoo.com>

Hi,

I have RTMP clients streaming to my local nginx-rtmp server so others can view the stream via HTTP (HLS and DASH). I also push the RTMP stream to Youtube on a live channel.

Pushing the stream to Youtube should be optional in my case. In other words, if that fails for whatever reason, the client should still be able to publish to my local nginx server. Today, for some reason the Youtube service failed, and the client stopped publishing.

I have two questions regarding this scenario.

1) Is it possible to silently and automatically bypass the push to Youtube, and keep streaming "locally"?
2) How can I set up two "applications" in nginx.conf so that both do exactly the same thing, except that one streams (pushes) to Youtube while the other doesn't? The HLS and DASH settings (as well as paths) should be the same. Also, I currently cannot use ffmpeg to, e.g., stream-copy from one "application" context to another. Is there a way to do this without ffmpeg?

Here's part of my nginx.conf:

rtmp {
    server {
        [...]
        application live {
            live on;
            record all;
            [...]
            allow publish 10.215.145.120;
            allow publish 10.215.248.68;
            allow publish 10.215.248.54;
            allow publish 10.215.144.116;
            allow publish 127.0.0.1;
            deny publish all;
            allow play all;
            hls on;
            [...]
            dash on;
            [...]
            push rtmp://a.rtmp.youtube.com/live2/mystreamname;
            [...]
        }

Here's the error log:

2018/04/12 08:24:02 [info] 29518#0: *1 client connected '10.215.145.120'
2018/04/12 08:24:02 [info] 29518#0: *1 connect: app='live' args='' flashver='' swf_url='' tc_url='rtmp://10.215.144.91:1935/live' page_url='' acodecs=0 vcodecs=0 object_encoding=0, client: 10.215.145.120, server: 0.0.0.0:1935
2018/04/12 08:24:02 [info] 29518#0: *1 createStream, client: 10.215.145.120, server: 0.0.0.0:1935
2018/04/12 08:24:02 [info] 29518#0: *1 publish: name='SHH' args='' type=live silent=0, client: 10.215.145.120, server: 0.0.0.0:1935
2018/04/12 08:24:02 [info] 29518#0: *1 exec: starting unmanaged child '/opt/custom/scripts/run/scripts/streaming/nginx_notifier.sh', client: 10.215.145.120, server: 0.0.0.0:1935
2018/04/12 08:24:02 [info] 29518#0: *1 relay: create push name='SHH' app='' playpath='' url='a.rtmp.youtube.com/live2/streamname', client: 10.215.145.120, server: 0.0.0.0:1935
2018/04/12 08:24:02 [notice] 29518#0: signal 17 (SIGCHLD) received from 1186
2018/04/12 08:24:02 [notice] 29518#0: unknown process 1186 exited with code 0
2018/04/12 08:24:02 [info] 29518#0: epoll_wait() failed
(4: Interrupted system call) 2018/04/12 08:24:02 [info] 29518#0: *2 handshake: digest not found, client: a.rtmp.youtube.com/live2/streamname, server: ngx-relay 2018/04/12 08:24:11 [info] 29518#0: *1 deleteStream, client: 10.215.145.120, server: 0.0.0.0:1935 2018/04/12 08:24:11 [info] 29518#0: *1 exec: starting unmanaged child '/opt/custom/scripts/run/scripts/streaming/nginx_notifier.sh', client: 10.215.145.120, server: 0.0.0.0:1935 2018/04/12 08:24:11 [info] 29518#0: *1 exec: starting unmanaged child '/opt/custom/scripts/run/scripts/streaming/index_flv.sh', client: 10.215.145.120, server: 0.0.0.0:1935 2018/04/12 08:24:11 [info] 29518#0: *2 disconnect, client: a.rtmp.youtube.com/live2/streamname, server: ngx-relay 2018/04/12 08:24:11 [info] 29518#0: *2 deleteStream, client: a.rtmp.youtube.com/live2/streamname, server: ngx-relay 2018/04/12 08:24:11 [info] 29518#0: *1 disconnect, client: 10.215.145.120, server: 0.0.0.0:1935 2018/04/12 08:24:11 [info] 29518#0: *1 deleteStream, client: 10.215.145.120, server: 0.0.0.0:1935 2018/04/12 08:24:11 [notice] 29518#0: signal 17 (SIGCHLD) received from 1238 2018/04/12 08:24:11 [notice] 29518#0: unknown process 1238 exited with code 1 2018/04/12 08:24:11 [info] 29518#0: epoll_wait() failed (4: Interrupted system call) 2018/04/12 08:24:11 [notice] 29518#0: signal 17 (SIGCHLD) received from 1237 2018/04/12 08:24:11 [notice] 29518#0: unknown process 1237 exited with code 0 2018/04/12 08:24:11 [info] 29518#0: epoll_wait() failed (4: Interrupted system call) 2018/04/12 08:24:12 [info] 29518#0: *3 client connected '10.215.145.120' 2018/04/12 08:24:12 [info] 29518#0: *3 connect: app='live' args='' flashver='' swf_url='' tc_url='rtmp://10.215.144.91:1935/live' page_url='' acodecs=0 vcodecs=0 object_encoding=0, client: 10.215.145.120, server: 0.0.0.0:1935 2018/04/12 08:24:12 [info] 29518#0: *3 createStream, client: 10.215.145.120, server: 0.0.0.0:1935 2018/04/12 08:24:12 [info] 29518#0: *3 publish: name='SHH' args='' type=live silent=0, 
client: 10.215.145.120, server: 0.0.0.0:1935
2018/04/12 08:24:12 [info] 29518#0: *3 exec: starting unmanaged child '/opt/custom/scripts/run/scripts/streaming/nginx_notifier.sh', client: 10.215.145.120, server: 0.0.0.0:1935
2018/04/12 08:24:12 [info] 29518#0: *3 relay: create push name='SHH' app='' playpath='' url='a.rtmp.youtube.com/live2/streamname, client: 10.215.145.120, server: 0.0.0.0:1935
2018/04/12 08:24:12 [error] 29518#0: connect() to [2a00:1450:401f:7::c]:1935 failed (101: Network is unreachable)
2018/04/12 08:24:12 [error] 29518#0: *3 relay: push failed name='SHH' app='' playpath='' url='a.rtmp.youtube.com/live2/streamname', client: 10.215.145.120, server: 0.0.0.0:1935
2018/04/12 08:24:12 [notice] 29517#0: signal 17 (SIGCHLD) received from 29518
2018/04/12 08:24:12 [alert] 29517#0: worker process 29518 exited on signal 11
2018/04/12 08:24:12 [notice] 29517#0: start worker process 1284
2018/04/12 08:24:12 [notice] 29517#0: signal 29 (SIGIO) received

Thanks,
Vieri

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From r at roze.lv  Thu Apr 12 07:39:10 2018
From: r at roze.lv (Reinis Rozitis)
Date: Thu, 12 Apr 2018 10:39:10 +0300
Subject: sub_filter (CSS,JS)
In-Reply-To: 
References: 
Message-ID: <003c01d3d231$58701fd0$09505f70$@roze.lv>

> It's working really well, but the issue is that references to their
> domain aren't being filtered inside their CSS or JS files, which are
> also loading via the proxy URL.

Typically it means that the response from the upstream comes compressed (gzipped), and the sub_filter module doesn't handle that.

Try, instead of:

proxy_set_header Accept-Encoding "*";

setting it to empty:

proxy_set_header Accept-Encoding "";

rr

From nginx-forum at forum.nginx.org  Thu Apr 12 14:17:26 2018
From: nginx-forum at forum.nginx.org (Dineshkumar)
Date: Thu, 12 Apr 2018 10:17:26 -0400
Subject: unknown directive "js_include"
Message-ID: 

Hi All,

I'm getting the following error when using the nginScript module on an RHEL 5 server.
nginx: [emerg] unknown directive "js_include" in /etc/nginx/nginx.conf:13

I installed nginx using the following RPM:
https://nginx.org/packages/rhel/5/x86_64/RPMS/nginx-1.10.0-1.el5.ngx.x86_64.rpm

I installed the nginx-module-njs module using the following RPM:
https://nginx.org/packages/rhel/5/x86_64/RPMS/nginx-module-njs-1.10.0.0.0.20160414.1c50334fbea6-1.el5.ngx.x86_64.rpm

Please let me know if there are any alternatives to mitigate this issue.

Regards,
Dinesh

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279415,279415#msg-279415

From xeioex at nginx.com  Thu Apr 12 14:22:07 2018
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Thu, 12 Apr 2018 17:22:07 +0300
Subject: unknown directive "js_include"
In-Reply-To: 
References: 
Message-ID: <74ee67bd-5b19-7b94-eb43-2ca1aa2123f9@nginx.com>

On 12.04.2018 17:17, Dineshkumar wrote:
> Hi All,
>
> I'm getting the following error when using the nginScript module on an
> RHEL 5 server.
> nginx: [emerg] unknown directive "js_include" in /etc/nginx/nginx.conf:13
>
> I installed nginx using the following RPM:
> https://nginx.org/packages/rhel/5/x86_64/RPMS/nginx-1.10.0-1.el5.ngx.x86_64.rpm
>
> I installed the nginx-module-njs module using the following RPM:
> https://nginx.org/packages/rhel/5/x86_64/RPMS/nginx-module-njs-1.10.0.0.0.20160414.1c50334fbea6-1.el5.ngx.x86_64.rpm
>
> Please let me know if there are any alternatives to mitigate this issue.

Did you enable the module in the nginx config?
http://nginx.org/en/docs/ngx_core_module.html#load_module

> Regards,
> Dinesh
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279415,279415#msg-279415
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org  Thu Apr 12 14:28:02 2018
From: nginx-forum at forum.nginx.org (Dineshkumar)
Date: Thu, 12 Apr 2018 10:28:02 -0400
Subject: unknown directive "js_include"
In-Reply-To: <74ee67bd-5b19-7b94-eb43-2ca1aa2123f9@nginx.com>
References: <74ee67bd-5b19-7b94-eb43-2ca1aa2123f9@nginx.com>
Message-ID: 

Hi Dmitry,

The module has been loaded in the nginx.conf file using the following:

load_module modules/ngx_http_js_module.so;

and the compiled module files are available in the path as well.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279415,279417#msg-279417

From nginx-forum at forum.nginx.org  Thu Apr 12 14:33:36 2018
From: nginx-forum at forum.nginx.org (agile6v)
Date: Thu, 12 Apr 2018 10:33:36 -0400
Subject: Is the auto parameter of the worker_processes directive planned to support the Docker runtime?
Message-ID: 

Hi, Maxim Dounin

Currently, obtaining the number of CPU cores in Docker actually obtains the number of CPU cores of the host, so the number of processes started by "worker_processes auto" cannot match the CPU resources requested by the container itself.

For example, if the host has 24 CPU cores and the number of CPU cores allocated to the container is 4, nginx will still start 24 worker processes if the worker_processes auto directive is set, which is not what we expected.

Is there any plan to support this feature?
Best regards
agile6v

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279418,279418#msg-279418

From xeioex at nginx.com  Thu Apr 12 14:46:17 2018
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Thu, 12 Apr 2018 17:46:17 +0300
Subject: unknown directive "js_include"
In-Reply-To: 
References: <74ee67bd-5b19-7b94-eb43-2ca1aa2123f9@nginx.com>
Message-ID: <771b8c93-ddf2-3abf-062f-f1a69273ccb3@nginx.com>

On 12.04.2018 17:28, Dineshkumar wrote:
> Hi Dmitry,
>
> The module has been loaded in the nginx.conf file using the following:
> load_module modules/ngx_http_js_module.so;
>
> and the compiled module files are available in the path as well.

Please note that the RHEL 5 packages are outdated (js_include was not available back then). I would recommend using RHEL 6 or 7.

> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279415,279417#msg-279417
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From mdounin at mdounin.ru  Thu Apr 12 15:30:15 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 12 Apr 2018 18:30:15 +0300
Subject: Is the auto parameter of the worker_processes directive planned to support the Docker runtime?
In-Reply-To: 
References: 
Message-ID: <20180412153015.GP77253@mdounin.ru>

Hello!

On Thu, Apr 12, 2018 at 10:33:36AM -0400, agile6v wrote:

> Hi, Maxim Dounin
>
> Currently, obtaining the number of CPU cores in Docker actually obtains
> the number of CPU cores of the host, so the number of processes started
> by "worker_processes auto" cannot match the CPU resources requested by
> the container itself.
>
> For example, if the host has 24 CPU cores and the number of CPU cores
> allocated to the container is 4, nginx will still start 24 worker
> processes if the worker_processes auto directive is set, which is not
> what we expected.
>
> Is there any plan to support this feature?

See https://trac.nginx.org/nginx/ticket/1151.
If you have a good solution, consider sharing. -- Maxim Dounin http://mdounin.ru/ From vbart at nginx.com Thu Apr 12 18:34:57 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 12 Apr 2018 21:34:57 +0300 Subject: Unit 1.0 release Message-ID: <117200257.x7Pz00JNEh@vbart-workstation> Hello, I'm happy to congratulate you on International Day of Human Space Flight and glad to announce the release of NGINX Unit 1.0. Changes with Unit 1.0 12 Apr 2018 *) Change: configuration object moved into "/config/" path. *) Feature: basic access logging. *) Bugfix: 503 error occurred if Go application did not write response header or body. *) Bugfix: Ruby applications that use encoding conversions might not work. *) Bugfix: various stability issues. With this release Unit ends its beta period. If you wish to know more about the project and our plans, please read the announcement blog post: - https://www.nginx.com/blog/nginx-unit-1-0-released/ wbr, Valentin V. Bartenev From jombik at platon.org Fri Apr 13 00:40:17 2018 From: jombik at platon.org (Ondrej Jombik) Date: Fri, 13 Apr 2018 02:40:17 +0200 (CEST) Subject: Perl Inline C code inside nginx Perl module Message-ID: We have some proprietary code in C language, which we cannot convert into Perl. We would like to use this C code in an nginx Perl module. The code runs well under Perl's Inline C. However, when I try to run Inline C code in an nginx Perl module, it does not work. Not only does this code not work; in fact, no Inline C code works inside the nginx Perl environment. For example, look at this very simple Perl module inlinetest.pm: package inlinetest; use strict; use nginx; $ENV{'PATH'} = '/bin/:/usr/bin/'; use Inline Config => DIRECTORY => '/tmp/'; use Inline "C" => <<'...'; void test_fnc(int num) { fprintf(stderr, "%d\n", num); } ... 
sub handler { $request->send_http_header('text/html'); return OK; } 1; Related nginx configuration is pretty standard: perl_modules /etc/nginx/perl/; perl_require inlinetest.pm; server { listen 127.0.0.1:80; location /auth { perl inlinetest::handler; } } As you can see in my example, I am not even using or calling test_fnc() yet. But Perl code simply fails on startup with this error message: -- Unit nginx.service has begun starting up. nginx[20011]: nginx: [emerg] require_pv("inlinetest.pm") failed: "Running Mkbootstrap for inlinetest_0cff nginx[20011]: chmod 644 "inlinetest_0cff.bs" nginx[20011]: "/usr/bin/perl" "/usr/share/perl/5.24/ExtUtils/xsubpp" -typemap "/usr/share/perl/5.24/ExtUt nginx[20011]: x86_64-linux-gnu-gcc -c -I"/" -D_REENTRANT -D_GNU_SOURCE -DDEBIAN -fwrapv -fno-strict-alias nginx[20011]: x86_64-linux-gnu-gcc: error trying to exec 'cc1': execvp: No such file or directory nginx[20011]: Makefile:332: recipe for target 'inlinetest_0cff.o' failed nginx[20011]: make: *** [inlinetest_0cff.o] Error 1 nginx[20011]: A problem was encountered while attempting to compile and install your Inline nginx[20011]: C code. The command that failed was: nginx[20011]: "make > out.make 2>&1" with error code 2 nginx[20011]: The build directory was: nginx[20011]: /tmp/build/inlinetest_0cff nginx[20011]: To debug the problem, cd to the build directory, and inspect the output files. nginx[20011]: at /etc/nginx/perl/inlinetest.pm line 10. nginx[20011]: ...propagated at /usr/share/perl5/Inline/C.pm line 869. nginx[20011]: BEGIN failed--compilation aborted at /etc/nginx/perl/inlinetest.pm line 10. nginx[20011]: Compilation failed in require at (eval 1) line 1." nginx[20011]: nginx: configuration file /etc/nginx/nginx.conf test failed systemd[1]: nginx.service: Control process exited, code=exited status=1 systemd[1]: Failed to start A high performance web server and a reverse proxy server. 
-- Subject: Unit nginx.service has failed Does anyone have any idea why it fails to run under nginx Perl? -- Ondrej JOMBIK Platon Technologies s.r.o., Hlavna 3, Sala SK-92701 +421222111321 - info at platon.net - http://platon.net Read our latest blog: https://blog.platon.sk/icann-sknic-tld-problemy/ My current location: Bratislava, Slovakia My current timezone: +0100 GMT (CET) (updated automatically) From m16+nginx at monksofcool.net Fri Apr 13 10:14:55 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Fri, 13 Apr 2018 12:14:55 +0200 Subject: Trouble configuring PHP 7.1 module for Unit 1.0 on Gentoo Linux In-Reply-To: <117200257.x7Pz00JNEh@vbart-workstation> References: <117200257.x7Pz00JNEh@vbart-workstation> Message-ID: <1d3db1b7-964a-e6cb-1d02-7776e81cf182@monksofcool.net> Congratulations to the whole team for reaching the release 1.0 milestone! I'm trying to build Unit on Gentoo Linux, and while module configs for Python and Perl work as expected, I'm struggling with the PHP module: $ ./configure php --config=/usr/lib64/php7.1/bin/php-config configuring PHP module checking for PHP ... found + PHP SAPI: [cli fpm apache2handler] checking for PHP embed SAPI ... not found Here is the content of build/autoconf.err: configuring PHP module ... checking for PHP ... 
7.1.16 ---------------------------------------- checking for PHP embed SAPI /usr/lib/gcc/x86_64-pc-linux-gnu/6.4.0/../../../../x86_64-pc-linux-gnu/bin/ld: cannot find -lphp7 collect2: error: ld returned 1 exit status ---------- #include #include int main() { php_request_startup(); return 0; } ---------- cc -pipe -fPIC -fvisibility=hidden -O -W -Wall -Wextra -Wno-unused-parameter -Wwrite-strings -Wmissing-prototypes -Werror -g -I/usr/lib64/php7.1/include/php -I/usr/lib64/php7.1/include/php/main -I/usr/lib64/php7.1/include/php/TSRM -I/usr/lib64/php7.1/include/php/Zend -I/usr/lib64/php7.1/include/php/ext -I/usr/lib64/php7.1/include/php/ext/date/lib -o build/autotest build/autotest.c -lphp7 ---------- A search turned up https://github.com/nginx/unit/issues/47 but I am not sure if/how this applies to my issue and what to do next? -Ralph From igor at sysoev.ru Fri Apr 13 10:52:54 2018 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 13 Apr 2018 13:52:54 +0300 Subject: Trouble configuring PHP 7.1 module for Unit 1.0 on Gentoo Linux In-Reply-To: <1d3db1b7-964a-e6cb-1d02-7776e81cf182@monksofcool.net> References: <117200257.x7Pz00JNEh@vbart-workstation> <1d3db1b7-964a-e6cb-1d02-7776e81cf182@monksofcool.net> Message-ID: > On 13 Apr 2018, at 13:14, Ralph Seichter wrote: > > Congratulations to the whole team for reaching the release 1.0 milestone! > > I'm trying to build Unit on Gentoo Linux, and while module configs for > Python and Perl work as expected, I'm struggling with the PHP module: > > $ ./configure php --config=/usr/lib64/php7.1/bin/php-config > configuring PHP module > checking for PHP ... found > + PHP SAPI: [cli fpm apache2handler] > checking for PHP embed SAPI ... not found PHP package was built without embed SAPI support. Otherwise it shows something like this: + PHP SAPI: [cli fpm embed apache2handler] > Here is the content of build/autoconf.err: > > configuring PHP module ... > checking for PHP ... 
> 7.1.16 > ---------------------------------------- > checking for PHP embed SAPI > /usr/lib/gcc/x86_64-pc-linux-gnu/6.4.0/../../../../x86_64-pc-linux-gnu/bin/ld: cannot find -lphp7 > collect2: error: ld returned 1 exit status > ---------- > > #include #include > > int main() { > php_request_startup(); > return 0; > } > ---------- > cc -pipe -fPIC -fvisibility=hidden -O -W -Wall -Wextra -Wno-unused-parameter -Wwrite-strings -Wmissing-prototypes > -Werror -g -I/usr/lib64/php7.1/include/php -I/usr/lib64/php7.1/include/php/main -I/usr/lib64/php7.1/include/php/TSRM > -I/usr/lib64/php7.1/include/php/Zend -I/usr/lib64/php7.1/include/php/ext -I/usr/lib64/php7.1/include/php/ext/date/lib -o > build/autotest build/autotest.c -lphp7 > ---------- > > A search turned up https://github.com/nginx/unit/issues/47 but I am not > sure if/how this applies to my issue and what to do next? This is a different issue. -- Igor Sysoev http://nginx.com From m16+nginx at monksofcool.net Fri Apr 13 11:45:01 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Fri, 13 Apr 2018 13:45:01 +0200 Subject: Trouble configuring PHP 7.1 module for Unit 1.0 on Gentoo Linux In-Reply-To: References: <117200257.x7Pz00JNEh@vbart-workstation> <1d3db1b7-964a-e6cb-1d02-7776e81cf182@monksofcool.net> Message-ID: <4bf53e0d-b1ae-1f7f-037f-b9b3940da354@monksofcool.net> On 13.04.18 12:52, Igor Sysoev wrote: >> PHP package was built without embed SAPI support. >> Otherwise it shows something like this: >> + PHP SAPI: [cli fpm embed apache2handler] Thanks, Igor. I have rebuilt PHP 7.1 with the following USE flags: # /etc/portage/package.use/php dev-lang/php apache2 embed fpm curl gd intl ldap mysql mysqli \ pdo sockets xmlreader xmlwriter xslt zip Although 'embed' is now listed, I still see the error when attempting to configure the PHP module: $ ./configure php --config=/usr/lib64/php7.1/bin/php-config configuring PHP module checking for PHP ... 
found + PHP SAPI: [embed cli fpm apache2handler] checking for PHP embed SAPI ... not found -Ralph From igor at sysoev.ru Fri Apr 13 12:49:15 2018 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 13 Apr 2018 15:49:15 +0300 Subject: Trouble configuring PHP 7.1 module for Unit 1.0 on Gentoo Linux In-Reply-To: <4bf53e0d-b1ae-1f7f-037f-b9b3940da354@monksofcool.net> References: <117200257.x7Pz00JNEh@vbart-workstation> <1d3db1b7-964a-e6cb-1d02-7776e81cf182@monksofcool.net> <4bf53e0d-b1ae-1f7f-037f-b9b3940da354@monksofcool.net> Message-ID: <6439FDAF-DBD1-4951-8C78-2558E0A64732@sysoev.ru> > On 13 Apr 2018, at 14:45, Ralph Seichter wrote: > > On 13.04.18 12:52, Igor Sysoev wrote: > >> PHP package was built without embed SAPI support. >> Otherwise it shows something like this: >> + PHP SAPI: [cli fpm embed apache2handler] > > Thanks, Igor. I have rebuilt PHP 7.1 with the following USE flags: > > # /etc/portage/package.use/php > dev-lang/php apache2 embed fpm curl gd intl ldap mysql mysqli \ > pdo sockets xmlreader xmlwriter xslt zip > > Although 'embed' is now listed, I still see the error when attempting > to configure the PHP module: > > $ ./configure php --config=/usr/lib64/php7.1/bin/php-config > configuring PHP module > checking for PHP ... found > + PHP SAPI: [embed cli fpm apache2handler] > checking for PHP embed SAPI ... not found Could you show the last lines from build/autoconf.err relevant to PHP? -- Igor Sysoev http://nginx.com From mdounin at mdounin.ru Fri Apr 13 14:03:40 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 13 Apr 2018 17:03:40 +0300 Subject: Perl Inline C code inside nginx Perl module In-Reply-To: References: Message-ID: <20180413140340.GR77253@mdounin.ru> Hello! On Fri, Apr 13, 2018 at 02:40:17AM +0200, Ondrej Jombik wrote: > We have some proprietary code in C language, which we cannot convert > into Perl. We would like to use this C code in Perl nginx module. Code > runs well under Perl's Inline C. 
> > However when I try to run Inline C code in nginx Perl module, it does > not work. Not only this code does not work, in fact no Inline C code > work inside Perl nginx environment. > > For example, look at this very simple Perl module inlinetest.pm: > > package inlinetest; > use strict; > use nginx; > > $ENV{'PATH'} = '/bin/:/usr/bin/'; > use Inline Config => > DIRECTORY => '/tmp/'; > use Inline "C" => <<'...'; > void test_fnc(int num) > { > fprintf(stderr, "%d\n", num); > } [...] > As you can see in my example, I am not even using or calling test_fnc() > yet. But Perl code simply fails on startup with this error message: > > -- Unit nginx.service has begun starting up. > nginx[20011]: nginx: [emerg] require_pv("inlinetest.pm") failed: "Running Mkbootstrap for inlinetest_0cff > nginx[20011]: chmod 644 "inlinetest_0cff.bs" > nginx[20011]: "/usr/bin/perl" "/usr/share/perl/5.24/ExtUtils/xsubpp" -typemap "/usr/share/perl/5.24/ExtUt > nginx[20011]: x86_64-linux-gnu-gcc -c -I"/" -D_REENTRANT -D_GNU_SOURCE -DDEBIAN -fwrapv -fno-strict-alias > nginx[20011]: x86_64-linux-gnu-gcc: error trying to exec 'cc1': execvp: No such file or directory > nginx[20011]: Makefile:332: recipe for target 'inlinetest_0cff.o' failed The problem is that your PATH is empty when the relevant compilation happens. You try to set it in the code using "$ENV{'PATH'} = ...", but it doesn't work as this happens _after_ "use Inline...", because "use ..." operators are executed at compile time in Perl, much like BEGIN{}. A simple fix would be to set PATH in a BEGIN{} block at compile time: BEGIN { $ENV{'PATH'} = '/bin/:/usr/bin/'; } use Inline ... Alternatively, you can use the nginx "env" directive to set PATH or preserve it from the original environment, see http://nginx.org/r/env. 
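For the "env" route, note that the directive belongs in the main (top-level) context of nginx.conf, not inside http{}. A minimal sketch, with paths taken from the configuration earlier in this thread:

```nginx
# main context -- "env" is not allowed inside http{}
env PATH=/bin:/usr/bin;     # or just "env PATH;" to inherit the startup value

http {
    perl_modules /etc/nginx/perl/;
    perl_require inlinetest.pm;
}
```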
-- Maxim Dounin http://mdounin.ru/ From m16+nginx at monksofcool.net Fri Apr 13 14:23:37 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Fri, 13 Apr 2018 16:23:37 +0200 Subject: Trouble configuring PHP 7.1 module for Unit 1.0 on Gentoo Linux In-Reply-To: <6439FDAF-DBD1-4951-8C78-2558E0A64732@sysoev.ru> References: <117200257.x7Pz00JNEh@vbart-workstation> <1d3db1b7-964a-e6cb-1d02-7776e81cf182@monksofcool.net> <4bf53e0d-b1ae-1f7f-037f-b9b3940da354@monksofcool.net> <6439FDAF-DBD1-4951-8C78-2558E0A64732@sysoev.ru> Message-ID: On 13.04.2018 14:49, Igor Sysoev wrote: >> $ ./configure php --config=/usr/lib64/php7.1/bin/php-config >> configuring PHP module >> checking for PHP ... found >> + PHP SAPI: [embed cli fpm apache2handler] >> checking for PHP embed SAPI ... not found > > Could you show the last lines from build/autoconf.err relevant to PHP? Sure, here you go again. I only added word-wrapping: ---------- configuring PHP module ... checking for PHP ... 7.1.16 ---------------------------------------- checking for PHP embed SAPI /usr/lib/gcc/x86_64-pc-linux-gnu/6.4.0/../../../../x86_64-pc-linux-gnu/bin/ld: cannot find -lphp7 collect2: error: ld returned 1 exit status ---------- #include #include int main() { php_request_startup(); return 0; } ---------- cc -pipe -fPIC -fvisibility=hidden -O -W -Wall -Wextra -Wno-unused-parameter -Wwrite-strings -Wmissing-prototypes -Werror -g -I/usr/lib64/php7.1/include/php -I/usr/lib64/php7.1/include/php/main -I/usr/lib64/php7.1/include/php/TSRM -I/usr/lib64/php7.1/include/php/Zend -I/usr/lib64/php7.1/include/php/ext -I/usr/lib64/php7.1/include/php/ext/date/lib -o build/autotest build/autotest.c -lphp7 ---------- I've placed a full console log of the steps I've taken and the results displayed here: https://pastebin.com/ys2zWqnD (one week expiry). 
-Ralph From igor at sysoev.ru Fri Apr 13 14:40:17 2018 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 13 Apr 2018 17:40:17 +0300 Subject: Trouble configuring PHP 7.1 module for Unit 1.0 on Gentoo Linux In-Reply-To: References: <117200257.x7Pz00JNEh@vbart-workstation> <1d3db1b7-964a-e6cb-1d02-7776e81cf182@monksofcool.net> <4bf53e0d-b1ae-1f7f-037f-b9b3940da354@monksofcool.net> <6439FDAF-DBD1-4951-8C78-2558E0A64732@sysoev.ru> Message-ID: > On 13 Apr 2018, at 17:23, Ralph Seichter wrote: > > On 13.04.2018 14:49, Igor Sysoev wrote: > >>> $ ./configure php --config=/usr/lib64/php7.1/bin/php-config >>> configuring PHP module >>> checking for PHP ... found >>> + PHP SAPI: [embed cli fpm apache2handler] >>> checking for PHP embed SAPI ... not found >> >> Could you show the last lines from build/autoconf.err relevant to PHP? > > Sure, here you go again. I only added word-wrapping: > > ---------- > configuring PHP module ... > checking for PHP ... > 7.1.16 > ---------------------------------------- > checking for PHP embed SAPI > /usr/lib/gcc/x86_64-pc-linux-gnu/6.4.0/../../../../x86_64-pc-linux-gnu/bin/ld: > cannot find -lphp7 > collect2: error: ld returned 1 exit status > ---------- > > #include > #include > > int main() { > php_request_startup(); > return 0; > } > ---------- > > cc -pipe -fPIC -fvisibility=hidden -O -W -Wall -Wextra > -Wno-unused-parameter -Wwrite-strings -Wmissing-prototypes -Werror -g > -I/usr/lib64/php7.1/include/php -I/usr/lib64/php7.1/include/php/main > -I/usr/lib64/php7.1/include/php/TSRM > -I/usr/lib64/php7.1/include/php/Zend -I/usr/lib64/php7.1/include/php/ext > -I/usr/lib64/php7.1/include/php/ext/date/lib -o build/autotest > build/autotest.c -lphp7 > ---------- > > I've placed a full console log of the steps I've taken and the results > displayed here: https://pastebin.com/ys2zWqnD (one week expiry). 
On Gentoo you should also use --lib-path: ./configure php --config=/usr/lib64/php7.1/bin/php-config --lib-path=/usr/lib64/php7.1/lib64 -- Igor Sysoev http://nginx.com From m16+nginx at monksofcool.net Fri Apr 13 15:07:15 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Fri, 13 Apr 2018 17:07:15 +0200 Subject: [SOLVED] Re: Trouble configuring PHP 7.1 module for Unit 1.0 on Gentoo Linux In-Reply-To: References: <117200257.x7Pz00JNEh@vbart-workstation> <1d3db1b7-964a-e6cb-1d02-7776e81cf182@monksofcool.net> <4bf53e0d-b1ae-1f7f-037f-b9b3940da354@monksofcool.net> <6439FDAF-DBD1-4951-8C78-2558E0A64732@sysoev.ru> Message-ID: <02a571e6-7f11-05e1-fbc2-f52d01fedd5a@monksofcool.net> On 13.04.2018 16:40, Igor Sysoev wrote: > On Gentoo you should also use --lib-path Thank you, Igor! The following works on my Gentoo test server: ./configure php --config=/usr/lib64/php7.1/bin/php-config --lib-path=/usr/lib64/php7.1/lib64 I think it would be worth mentioning this particular detail in https://unit.nginx.org/installation/#configuring-sources . -Ralph From igor at sysoev.ru Fri Apr 13 15:12:11 2018 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 13 Apr 2018 18:12:11 +0300 Subject: [SOLVED] Re: Trouble configuring PHP 7.1 module for Unit 1.0 on Gentoo Linux In-Reply-To: <02a571e6-7f11-05e1-fbc2-f52d01fedd5a@monksofcool.net> References: <117200257.x7Pz00JNEh@vbart-workstation> <1d3db1b7-964a-e6cb-1d02-7776e81cf182@monksofcool.net> <4bf53e0d-b1ae-1f7f-037f-b9b3940da354@monksofcool.net> <6439FDAF-DBD1-4951-8C78-2558E0A64732@sysoev.ru> <02a571e6-7f11-05e1-fbc2-f52d01fedd5a@monksofcool.net> Message-ID: > On 13 Apr 2018, at 18:07, Ralph Seichter wrote: > > On 13.04.2018 16:40, Igor Sysoev wrote: > >> On Gentoo you should also use --lib-path > > Thank you, Igor! 
The following works on my Gentoo test server: > > ./configure php --config=/usr/lib64/php7.1/bin/php-config --lib-path=/usr/lib64/php7.1/lib64 > > I think it would be worth mentioning this particular detail in > https://unit.nginx.org/installation/#configuring-sources . Almost the same example is here: https://unit.nginx.org/installation/#configuring-php-modules -- Igor Sysoev http://nginx.com From motiee at embedsys.ir Fri Apr 13 16:11:39 2018 From: motiee at embedsys.ir (motiee at embedsys.ir) Date: Fri, 13 Apr 2018 20:41:39 +0430 Subject: How can I do cross Compile Nginx Web Server? Message-ID: <4fedc994168f4ab58621cc3e195d55cb@embedsys.ir> Hi, I need to cross-compile the Nginx web server to use on my target board. I tried, but ... hesam at hesam-MS-7392:~/temp/nginx-1.9.9$ ./configure --crossbuild=Linux::arm --with-cc="/home/hesam/g25/arm-2011.09/bin/arm-none-linux-gnueabi-gcc" --with-ld-opt="/usr/arm-linux-gnueabi/lib" --with-cc-opt="/usr/arm-linux-gnueabi/include"building for Linux::arm checking for C compiler ... found but is not working ./configure: error: C compiler /home/hesam/g25/arm-2011.09/bin/arm-none-linux-gnueabi-gcc is not found Please tell me how I can do it. Note: The "autoconf.err" file is attached. Best Regards Hesam.H.Motiee -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: autoconf.err Type: application/octet-stream Size: 147 bytes Desc: not available URL: From oscaretu at gmail.com Fri Apr 13 16:12:37 2018 From: oscaretu at gmail.com (oscaretu) Date: Fri, 13 Apr 2018 18:12:37 +0200 Subject: Monitoring http returns In-Reply-To: References: Message-ID: Perhaps this can be useful for you: https://github.com/Lax/nginx-http-accounting-module Kind regards, Oscar On Wed, Apr 11, 2018 at 6:19 AM, Jeff Abrahamson wrote: > I want to monitor nginx better: http returns (e.g., how many 500's, how > many 404's, how many 200's, etc.), as well as request rates, response > times, etc. All the solutions I've found start with "set up something to > watch and parse your logs, then ..." > > Here's one of the better examples of that: > > https://www.scalyr.com/community/guides/how-to- > monitor-nginx-the-essential-guide > > Perhaps I'm wrong to find this curious. It seems somewhat heavy and > inefficient to put this functionality into log watching, which means > another service and being sensitive to an eventual change in log format. > > Is this, indeed, the recommended solution? > > And, for my better understanding, can anyone explain why this makes more > sense than native nginx support of sending UDP packets to a monitor > collector (in our case, telegraf)? > > -- > > Jeff Abrahamson > +33 6 24 40 01 57 > +44 7920 594 255 > http://p27.eu/jeff/ > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Oscar Fernandez Sierra oscaretu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From m16+nginx at monksofcool.net Fri Apr 13 16:18:00 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Fri, 13 Apr 2018 18:18:00 +0200 Subject: Trouble configuring PHP 7.1 module for Unit 1.0 on Gentoo Linux In-Reply-To: References: <117200257.x7Pz00JNEh@vbart-workstation> <1d3db1b7-964a-e6cb-1d02-7776e81cf182@monksofcool.net> <4bf53e0d-b1ae-1f7f-037f-b9b3940da354@monksofcool.net> <6439FDAF-DBD1-4951-8C78-2558E0A64732@sysoev.ru> <02a571e6-7f11-05e1-fbc2-f52d01fedd5a@monksofcool.net> Message-ID: <4f66b72a-e929-61a1-f5e9-6ea2f46c7468@monksofcool.net> On 13.04.18 17:12, Igor Sysoev wrote: > > I think it would be worth mentioning this particular detail in > > https://unit.nginx.org/installation/#configuring-sources . > > Almost the same example is here: > https://unit.nginx.org/installation/#configuring-php-modules I should probably have been more specific. ;-) What I meant with "this particular detail" is that specifying --lib-path is apparently required with Unit version 1.0 on Gentoo Linux, even though lib-path is optional per se. -Ralph From mdounin at mdounin.ru Fri Apr 13 18:35:29 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 13 Apr 2018 21:35:29 +0300 Subject: How can I do cross Compile Nginx Web Server? In-Reply-To: <4fedc994168f4ab58621cc3e195d55cb@embedsys.ir> References: <4fedc994168f4ab58621cc3e195d55cb@embedsys.ir> Message-ID: <20180413183529.GS77253@mdounin.ru> Hello! On Fri, Apr 13, 2018 at 08:41:39PM +0430, motiee at embedsys.ir wrote: > Hi, > I need to do Cross Compile Nginx Web Server to use on my target board. > I tried , but ... > > hesam at hesam-MS-7392:~/temp/nginx-1.9.9$ ./configure > --crossbuild=Linux::arm > --with-cc="/home/hesam/g25/arm-2011.09/bin/arm-none-linux-gnueabi-gcc" > --with-ld-opt="/usr/arm-linux-gnueabi/lib" > --with-cc-opt="/usr/arm-linux-gnueabi/include"building for Linux::arm > checking for C compiler ... 
found but is not working > ./configure: error: C compiler > /home/hesam/g25/arm-2011.09/bin/arm-none-linux-gnueabi-gcc is not found > > > Please tell me how can I do it? > > Note: The "autoconf.err" file is attached. Cross-compilation is not supported by nginx (except cross-compilation for win32 using wine, and this what --crossbuild to be used for). You need a native toolchain to build nginx. If your board is not powerful enough, consider using a virtual machine. -- Maxim Dounin http://mdounin.ru/ From motiee at embedsys.ir Fri Apr 13 20:04:39 2018 From: motiee at embedsys.ir (motiee at embedsys.ir) Date: Sat, 14 Apr 2018 00:34:39 +0430 Subject: How can I do cross Compile Nginx Web Server? In-Reply-To: <20180413183529.GS77253@mdounin.ru> References: <4fedc994168f4ab58621cc3e195d55cb@embedsys.ir> <20180413183529.GS77253@mdounin.ru> Message-ID: Dear Maxim Thanks for attention. My target Board is http://armdevs.com/Product/CORE9G25-CON.html (target OS: Linux 3.6.9) I need Web server , what is your advice? would you please tell me more? (I am new in Linux world). I have Install Nginx on my Host system (abuntu 16.0.4 LTS ) with out any problem, but I need to a web server on my Target . Regards ---------------------------------------- From: "Maxim Dounin" Sent: Friday, April 13, 2018 11:05 PM To: nginx at nginx.org Subject: Re: How can I do cross Compile Nginx Web Server? Hello! On Fri, Apr 13, 2018 at 08:41:39PM +0430, motiee at embedsys.ir wrote: > Hi, > I need to do Cross Compile Nginx Web Server to use on my target board. > I tried , but ... > > hesam at hesam-MS-7392:~/temp/nginx-1.9.9$ ./configure > --crossbuild=Linux::arm > --with-cc="/home/hesam/g25/arm-2011.09/bin/arm-none-linux-gnueabi-gcc" > --with-ld-opt="/usr/arm-linux-gnueabi/lib" > --with-cc-opt="/usr/arm-linux-gnueabi/include"building for Linux::arm > checking for C compiler ... 
found but is not working > ./configure: error: C compiler > /home/hesam/g25/arm-2011.09/bin/arm-none-linux-gnueabi-gcc is not found > > > Please tell me how can I do it? > > Note: The "autoconf.err" file is attached. Cross-compilation is not supported by nginx (except cross-compilation for win32 using wine, and this is what --crossbuild is to be used for). You need a native toolchain to build nginx. If your board is not powerful enough, consider using a virtual machine. -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Apr 15 01:03:46 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 15 Apr 2018 04:03:46 +0300 Subject: How can I do cross Compile Nginx Web Server? In-Reply-To: References: <4fedc994168f4ab58621cc3e195d55cb@embedsys.ir> <20180413183529.GS77253@mdounin.ru> Message-ID: <20180415010346.GT77253@mdounin.ru> Hello! On Sat, Apr 14, 2018 at 12:34:39AM +0430, motiee at embedsys.ir wrote: > My target Board is http://armdevs.com/Product/CORE9G25-CON.html (target > OS: Linux 3.6.9) > I need Web server , what is your advice? > would you please tell me more? (I am new in Linux world). > I have Install Nginx on my Host system (abuntu 16.0.4 LTS ) with out any > problem, but I need to a web server on my Target . If you are new to *nix systems, the best approach might be to just install a pre-compiled package from your board OS vendor. Consult your board documentation for details. If you want to compile nginx from sources, you have to do it on the board itself (== native compilation). To do this, you have to install a compiler on the board itself, and then run "./configure && make" (again, on the board itself). 
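A quick sanity check before attempting the native build is to confirm that a working C compiler is actually present on the board, since nginx's ./configure probes for one first. A sketch (package names vary by distribution):

```shell
# Sketch only: check for a native C compiler on the board.
if command -v cc >/dev/null 2>&1 || command -v gcc >/dev/null 2>&1; then
    msg="compiler found"
else
    msg="no compiler: install gcc and make from your board's package feed"
fi
echo "$msg"
# With a toolchain in place, the build itself is then run on the board:
#   ./configure && make
```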
-- Maxim Dounin http://mdounin.ru/ From jombik at platon.org Sun Apr 15 17:01:42 2018 From: jombik at platon.org (Ondrej Jombik) Date: Sun, 15 Apr 2018 19:01:42 +0200 (CEST) Subject: Perl Inline C code inside nginx Perl module In-Reply-To: <20180413140340.GR77253@mdounin.ru> References: <20180413140340.GR77253@mdounin.ru> Message-ID: On Fri, 13 Apr 2018, Maxim Dounin wrote: >> As you can see in my example, I am not even using or calling test_fnc() >> yet. But Perl code simply fails on startup with this error message: >> >> -- Unit nginx.service has begun starting up. >> nginx[20011]: nginx: [emerg] require_pv("inlinetest.pm") failed: "Running Mkbootstrap for inlinetest_0cff >> nginx[20011]: chmod 644 "inlinetest_0cff.bs" >> nginx[20011]: "/usr/bin/perl" "/usr/share/perl/5.24/ExtUtils/xsubpp" -typemap "/usr/share/perl/5.24/ExtUt >> nginx[20011]: x86_64-linux-gnu-gcc -c -I"/" -D_REENTRANT -D_GNU_SOURCE -DDEBIAN -fwrapv -fno-strict-alias >> nginx[20011]: x86_64-linux-gnu-gcc: error trying to exec 'cc1': execvp: No such file or directory >> nginx[20011]: Makefile:332: recipe for target 'inlinetest_0cff.o' failed > > The problem is that your PATH is empty when relevant compilation > happens. You try to set it in the code using "$ENV{'PATH'} = > ...", but it doesn't work as this happens _after_ "use Inline...", > because "use ..." operators are executed at compile time in Perl, > much like BEGIN{}. > > A simple fix would be to set PATH in a BEGIN{} block at compile > time: > > BEGIN { $ENV{'PATH'} = '/bin/:/usr/bin/'; } > use Inline ... > > Alternativel, you can use nginx "env" directive to set PATH or > preserve it from original environment, see http://nginx.org/r/env. Thanks Maxim, it worked like a charm. Your help is much appreciated! I prefer using the BEGIN { $ENV{'PATH'} = ... } construction, because this will have scope for the Perl module only, while the nginx "env" directive would be nginx-wide. Is this assumption correct? 
Also, here is some unrelated additional info which may be useful for someone in the future: If Inline C code is in a Perl module and your C code is in the __DATA__/__C__ section, using the "Inline->init" statement is necessary. Also, the trailing "1;" is important and obviously must be placed before __DATA__. Example: package PACKAGENAME; use strict; use warnings; use base qw(Exporter); use Carp; use Exporter (); use Inline 'C' => 'DATA', name => 'PACKAGENAME'; Inline->init; use vars qw(@EXPORT @EXPORT_OK %EXPORT_TAGS $VERSION); $VERSION = do { [ q$Revision: 30336 $ =~ /(\d+)/g ]->[0]; }; @EXPORT = qw( example_function_for_export ); @EXPORT_OK = qw( example_function_for_export ); %EXPORT_TAGS = qw(); 1; __DATA__ __C__ /* here goes your C code */ void example_function_for_export(int num) { fprintf(stderr, "%d\n", num); } -- Ondrej JOMBIK Platon Technologies s.r.o., Hlavna 3, Sala SK-92701 +421222111321 - info at platon.net - http://platon.net Read our latest blog: https://blog.platon.sk/icann-sknic-tld-problemy/ My current location: Bratislava, Slovakia My current timezone: +0100 GMT (CET) (updated automatically) From gfrankliu at gmail.com Mon Apr 16 07:26:11 2018 From: gfrankliu at gmail.com (Frank Liu) Date: Mon, 16 Apr 2018 07:26:11 +0000 Subject: Virtual hosts sharing same port Message-ID: Can I use different listen parameters for virtual hosts using the same port? E.g., one vh has "listen 443 ssl;" and the other one has "listen 443 ssl h2;" -------------- next part -------------- An HTML attachment was scrubbed... URL: From lagged at gmail.com Mon Apr 16 08:16:13 2018 From: lagged at gmail.com (Andrei) Date: Mon, 16 Apr 2018 03:16:13 -0500 Subject: Exclude from cache by content-length Message-ID: Hello! I have an odd upstream application (out of my control) which sometimes responds with incomplete pages and a 200 status. This causes blank pages to appear in the cache. 
Is there a way to exclude from/bypass the cache if the Content-Length header from the upstream is lower than 5 KB, for example? Thanks everyone! -------------- next part -------------- An HTML attachment was scrubbed... URL: From sca at andreasschulze.de Mon Apr 16 09:19:09 2018 From: sca at andreasschulze.de (A. Schulze) Date: Mon, 16 Apr 2018 11:19:09 +0200 Subject: Virtual hosts sharing same port In-Reply-To: Message-ID: <20180416111909.Horde.AcnnK4B_QHogqkqt2vii8ta@andreasschulze.de> Frank Liu: > Can I use different listen parameters for virtual hosts using the same > port? Eg, one vh has "listen 443 ssl;" and the other one has "listen 443 > ssl h2;" no, that's impossible (I think...) https://nginx.org/r/listen ... The listen directive can have several additional parameters specific to socket-related system calls. These parameters can be specified in any listen directive, but only once for a given address:port pair. ... 
> > Andreas > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Apr 16 12:28:59 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Apr 2018 15:28:59 +0300 Subject: Perl Inline C code inside nginx Perl module In-Reply-To: References: <20180413140340.GR77253@mdounin.ru> Message-ID: <20180416122859.GU77253@mdounin.ru> Hello! On Sun, Apr 15, 2018 at 07:01:42PM +0200, Ondrej Jombik wrote: > On Fri, 13 Apr 2018, Maxim Dounin wrote: > > >> As you can see in my example, I am not even using or calling test_fnc() > >> yet. But Perl code simply fails on startup with this error message: > >> > >> -- Unit nginx.service has begun starting up. > >> nginx[20011]: nginx: [emerg] require_pv("inlinetest.pm") failed: "Running Mkbootstrap for inlinetest_0cff > >> nginx[20011]: chmod 644 "inlinetest_0cff.bs" > >> nginx[20011]: "/usr/bin/perl" "/usr/share/perl/5.24/ExtUtils/xsubpp" -typemap "/usr/share/perl/5.24/ExtUt > >> nginx[20011]: x86_64-linux-gnu-gcc -c -I"/" -D_REENTRANT -D_GNU_SOURCE -DDEBIAN -fwrapv -fno-strict-alias > >> nginx[20011]: x86_64-linux-gnu-gcc: error trying to exec 'cc1': execvp: No such file or directory > >> nginx[20011]: Makefile:332: recipe for target 'inlinetest_0cff.o' failed > > > > The problem is that your PATH is empty when relevant compilation > > happens. You try to set it in the code using "$ENV{'PATH'} = > > ...", but it doesn't work as this happens _after_ "use Inline...", > > because "use ..." operators are executed at compile time in Perl, > > much like BEGIN{}. > > > > A simple fix would be to set PATH in a BEGIN{} block at compile > > time: > > > > BEGIN { $ENV{'PATH'} = '/bin/:/usr/bin/'; } > > use Inline ... 
> > Alternatively, you can use the nginx "env" directive to set PATH or
> > preserve it from the original environment, see http://nginx.org/r/env.
>
> Thanks Maxim, it worked like a charm.
> Your help is very appreciated!
>
> I prefer using the BEGIN { $ENV{'PATH'} = ... } construction, because this
> will have scope for the Perl module only, while the nginx "env" directive
> would be nginx-wide. Is this assumption correct?

No. Changing the process environment will change the whole process
environment, regardless of how you do it. But doing this in the Perl code
might be the better option since it is closer to the code which actually
needs PATH set.

[...]

-- 
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Mon Apr 16 13:32:09 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 16 Apr 2018 16:32:09 +0300
Subject: Virtual hosts sharing same port
In-Reply-To: References: Message-ID: <20180416133209.GV77253@mdounin.ru>

Hello!

On Mon, Apr 16, 2018 at 07:26:11AM +0000, Frank Liu wrote:

> Can I use different listen parameters for virtual hosts using the same
> port? E.g., one virtual host has "listen 443 ssl;" and the other one has
> "listen 443 ssl h2;"

No. Options like "ssl" and "h2" can be repeated multiple times to make
configuring listening sockets clearer. But whether you set such an option
or not in a given server{} block, the listening socket in question will
have the option set as long as it is set in at least one "listen"
directive.

-- 
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Mon Apr 16 14:20:47 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 16 Apr 2018 17:20:47 +0300
Subject: Exclude from cache by content-length
In-Reply-To: References: Message-ID: <20180416142047.GW77253@mdounin.ru>

Hello!

On Mon, Apr 16, 2018 at 03:16:13AM -0500, Andrei wrote:

> I have an odd upstream application (out of my control) which sometimes
> responds with incomplete pages and a 200 status. This causes blank pages
> to appear in cache.
> Is there a way to exclude from/bypass cache if the Content-Length
> header from the upstream is lower than 5 KB, for example?
> Thanks everyone!

Try proxy_no_cache combined with a map on $upstream_http_content_length.
Something like this should work:

    map $upstream_http_content_length $nocache {
        "~^([0-9]{1,3}|[0-5][0-9]{3})$"  1;
    }

    proxy_no_cache $nocache;

See here for details:

http://nginx.org/r/proxy_no_cache
http://nginx.org/r/map
http://nginx.org/r/$upstream_http_

-- 
Maxim Dounin
http://mdounin.ru/

From peter_booth at me.com Mon Apr 16 15:04:16 2018
From: peter_booth at me.com (Peter Booth)
Date: Mon, 16 Apr 2018 11:04:16 -0400
Subject: Virtual hosts sharing same port
In-Reply-To: <20180416133209.GV77253@mdounin.ru>
References: <20180416133209.GV77253@mdounin.ru>
Message-ID: <93AF28D1-2356-4850-A031-310672C73791@me.com>

Does this imply that different behavior *could* be achieved by first
defining virtual IP addresses (additional private IPs defined at the OS)
which were bound to the same physical NIC, and then defining virtual hosts
that reference the different VIPs, in a similar fashion to how someone
might configure a hardware load balancer?

Sent from my iPhone

> On Apr 16, 2018, at 9:32 AM, Maxim Dounin wrote:
>
> Hello!
>
>> On Mon, Apr 16, 2018 at 07:26:11AM +0000, Frank Liu wrote:
>>
>> Can I use different listen parameters for virtual hosts using the same
>> port? E.g., one virtual host has "listen 443 ssl;" and the other one
>> has "listen 443 ssl h2;"
>
> No. Options like "ssl" and "h2" can be repeated multiple times to
> make configuring listening sockets clearer. But whether you set such
> an option or not in a given server{} block, the listening socket in
> question will have the option set as long as it is set in at least
> one "listen" directive.
>
> --
> Maxim Dounin
> http://mdounin.ru/

From gfrankliu at gmail.com Mon Apr 16 15:13:42 2018
From: gfrankliu at gmail.com (Frank Liu)
Date: Mon, 16 Apr 2018 08:13:42 -0700
Subject: Virtual hosts sharing same port
In-Reply-To: <93AF28D1-2356-4850-A031-310672C73791@me.com>
References: <20180416133209.GV77253@mdounin.ru> <93AF28D1-2356-4850-A031-310672C73791@me.com>
Message-ID: <61979AD1-917D-4F6E-8EC1-AC977CEE58B6@gmail.com>

Does that mean nginx will read and combine listen options from all virtual
hosts and use that to create the listening socket?

> On Apr 16, 2018, at 8:04 AM, Peter Booth wrote:
>
> Does this imply that different behavior *could* be achieved by first
> defining virtual IP addresses (additional private IPs defined at the OS)
> which were bound to the same physical NIC, and then defining virtual
> hosts that reference the different VIPs, in a similar fashion to how
> someone might configure a hardware load balancer?
>
> Sent from my iPhone
>
>> On Apr 16, 2018, at 9:32 AM, Maxim Dounin wrote:
>>
>> Hello!
>>
>>> On Mon, Apr 16, 2018 at 07:26:11AM +0000, Frank Liu wrote:
>>>
>>> Can I use different listen parameters for virtual hosts using the same
>>> port? E.g., one virtual host has "listen 443 ssl;" and the other one
>>> has "listen 443 ssl h2;"
>>
>> No. Options like "ssl" and "h2" can be repeated multiple times to
>> make configuring listening sockets clearer. But whether you set such
>> an option or not in a given server{} block, the listening socket in
>> question will have the option set as long as it is set in at least
>> one "listen" directive.
>> --
>> Maxim Dounin
>> http://mdounin.ru/

From mdounin at mdounin.ru Mon Apr 16 16:31:17 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 16 Apr 2018 19:31:17 +0300
Subject: Virtual hosts sharing same port
In-Reply-To: <93AF28D1-2356-4850-A031-310672C73791@me.com>
References: <20180416133209.GV77253@mdounin.ru> <93AF28D1-2356-4850-A031-310672C73791@me.com>
Message-ID: <20180416163117.GX77253@mdounin.ru>

Hello!

On Mon, Apr 16, 2018 at 11:04:16AM -0400, Peter Booth wrote:

> Does this imply that different behavior *could* be achieved by first
> defining virtual IP addresses (additional private IPs defined at the
> OS) which were bound to the same physical NIC, and then defining
> virtual hosts that reference the different VIPs, in a similar fashion
> to how someone might configure a hardware load balancer?

Yes, you can have different listening sockets configured with
different options, e.g.:

    server {
        listen address1:443 ssl http2;
        ...
    }

    server {
        listen address2:443 ssl;  # no http2 here
        ...
    }

Note though that you have to direct clients to these different IP
addresses, so using private IPs won't work. Rather, you have to
use different public IPs.

-- 
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Mon Apr 16 16:49:20 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 16 Apr 2018 19:49:20 +0300
Subject: Virtual hosts sharing same port
In-Reply-To: <61979AD1-917D-4F6E-8EC1-AC977CEE58B6@gmail.com>
References: <20180416133209.GV77253@mdounin.ru> <93AF28D1-2356-4850-A031-310672C73791@me.com> <61979AD1-917D-4F6E-8EC1-AC977CEE58B6@gmail.com>
Message-ID: <20180416164920.GY77253@mdounin.ru>

Hello!
On Mon, Apr 16, 2018 at 08:13:42AM -0700, Frank Liu wrote:

> Does that mean nginx will read and combine listen options from
> all virtual hosts and use that to create the listening socket?

Yes. You can configure something like this:

    server {
        listen 443 ssl;
        ...
    }

    server {
        listen 443;
        ...
    }

and both servers will use SSL. Moreover, currently you can do
something like this:

    server {
        listen 443 ssl;
        ...
    }

    server {
        listen 443 http2;
        ...
    }

and both servers will use SSL and HTTP/2.

(The latter is actually very confusing, and likely will result in
warnings / errors during configuration parsing in future versions.)

-- 
Maxim Dounin
http://mdounin.ru/

From gfrankliu at gmail.com Mon Apr 16 21:16:12 2018
From: gfrankliu at gmail.com (Frank Liu)
Date: Mon, 16 Apr 2018 14:16:12 -0700
Subject: Virtual hosts sharing same port
In-Reply-To: <20180416164920.GY77253@mdounin.ru>
References: <20180416133209.GV77253@mdounin.ru> <93AF28D1-2356-4850-A031-310672C73791@me.com> <61979AD1-917D-4F6E-8EC1-AC977CEE58B6@gmail.com> <20180416164920.GY77253@mdounin.ru>
Message-ID: 

Thanks Maxim! This is something interesting to know. We had an outage last
year when we had a bunch of virtual hosts, all with "listen a.b.c.d:443 ssl;",
and someone added a new virtual host with "listen a.b.c.d:443;", which
caused port 443 to stop doing SSL. Based on what you said, this should not
happen. I need to dig deeper into it.

Frank

On Mon, Apr 16, 2018 at 9:49 AM, Maxim Dounin wrote:

> Hello!
>
> On Mon, Apr 16, 2018 at 08:13:42AM -0700, Frank Liu wrote:
>
> > Does that mean nginx will read and combine listen options from
> > all virtual hosts and use that to create the listening socket?
>
> Yes. You can configure something like this:
>
>     server {
>         listen 443 ssl;
>         ...
>     }
>
>     server {
>         listen 443;
>         ...
>     }
>
> and both servers will use SSL. Moreover, currently you can do
> something like this:
>
>     server {
>         listen 443 ssl;
>         ...
>     }
>
>     server {
>         listen 443 http2;
>         ...
>     }
>
> and both servers will use SSL and HTTP/2.
> (The latter is actually very confusing, and likely will result in
> warnings / errors during configuration parsing in future versions.)
>
> --
> Maxim Dounin
> http://mdounin.ru/

From gfrankliu at gmail.com Mon Apr 16 23:23:04 2018
From: gfrankliu at gmail.com (Frank Liu)
Date: Mon, 16 Apr 2018 16:23:04 -0700
Subject: ssl_protocols per server and SNI
Message-ID: 

This topic has been discussed in the past, e.g. 3 years ago at
http://mailman.nginx.org/pipermail/nginx/2014-November/045738.html, and
nginx couldn't fix it due to OpenSSL.
Has anything changed since then, with newer versions of OpenSSL?

From gfrankliu at gmail.com Tue Apr 17 00:07:48 2018
From: gfrankliu at gmail.com (Frank Liu)
Date: Mon, 16 Apr 2018 17:07:48 -0700
Subject: ssl_protocols per server and SNI
In-Reply-To: References: Message-ID: 

Looks like OpenSSL 1.1.1 finally fixed this
(https://github.com/openssl/openssl/issues/4301) and added an early
callback (new in OpenSSL 1.1.1), which allows the application to switch
SSL_CTXes *before* TLS version negotiation.
Hopefully the nginx 1.15 milestone will be able to take advantage of this.

Thanks!
Frank

On Mon, Apr 16, 2018 at 4:23 PM, Frank Liu wrote:

> This topic has been discussed in the past, e.g. 3 years ago at
> http://mailman.nginx.org/pipermail/nginx/2014-November/045738.html, and
> nginx couldn't fix it due to OpenSSL.
> Has anything changed since then, with newer versions of OpenSSL?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From lagged at gmail.com Tue Apr 17 04:18:59 2018 From: lagged at gmail.com (Andrei) Date: Mon, 16 Apr 2018 23:18:59 -0500 Subject: Exclude from cache by content-length In-Reply-To: <20180416142047.GW77253@mdounin.ru> References: <20180416142047.GW77253@mdounin.ru> Message-ID: Thanks Maxim! On Mon, Apr 16, 2018 at 9:20 AM, Maxim Dounin wrote: > Hello! > > On Mon, Apr 16, 2018 at 03:16:13AM -0500, Andrei wrote: > > > I have an odd upstream application (out of my control) which sometimes > > responds with incomplete pages, and a 200 error.. This causes blank pages > > to appear in cache. Is there a way to exclude from/bypass cache if the > > content-length header from the upstream is lower than 5kb for example? > > Thanks everyone! > > Try proxy_no_cache combined with map on > $upstream_http_content_length. Something like this should work: > > map $upstream_http_content_length $nocache { > "~^[0-5][0-9]{,3}" 1; > } > > proxy_no_cache $nocache; > > See here for details: > > http://nginx.org/r/proxy_no_cache > http://nginx.org/r/map > http://nginx.org/r/$upstream_http_ > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Apr 17 13:02:15 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Apr 2018 16:02:15 +0300 Subject: ssl_protocols per server and SNI In-Reply-To: References: Message-ID: <20180417130215.GC77253@mdounin.ru> Hello! On Mon, Apr 16, 2018 at 05:07:48PM -0700, Frank Liu wrote: > Looks like OpenSSL 1.1.1 finally fixed this ( > https://github.com/openssl/openssl/issues/4301) and added early callback > (new in OpenSSL 1.1.1), which allows the application to switch SSL_CTXes > *before* TLS version negotiation. > Hopefully nginx 1.15 milestone will be able to take advantage of this. 
As per the issue referenced, OpenSSL folks simply closed the issue without
even trying to understand the problem. Another issue linked there
(https://github.com/openssl/openssl/issues/4302) seems to suggest that it
should be possible to use the clienthello callback as available in 1.1.1
to switch the protocols supported. This might work (not tested), though it
will certainly require much more work than using the servername callback
as we do now.

-- 
Maxim Dounin
http://mdounin.ru/

From randomdev4 at gmail.com Tue Apr 17 15:17:57 2018
From: randomdev4 at gmail.com (Tim Smith)
Date: Tue, 17 Apr 2018 16:17:57 +0100
Subject: NGINX only enabling TLS1.2 ?
Message-ID: 

Hi,

Is there any reason why SSLlabs would report only 1.2 as being available
despite the config showing otherwise?

nginx version: nginx/1.13.12

    listen 10.10.10.10:443 ssl http2;
    ssl on;
    ssl_certificate /etc/nginx/keys/blah.pem;
    ssl_certificate_key /etc/nginx/keys/blah.key;
    ssl_dhparam /etc/nginx/keys/blah.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_session_timeout 10m;
    ssl_session_tickets off;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;

From sca at andreasschulze.de Tue Apr 17 15:39:50 2018
From: sca at andreasschulze.de (A. Schulze)
Date: Tue, 17 Apr 2018 17:39:50 +0200
Subject: NGINX only enabling TLS1.2 ?
In-Reply-To: References: Message-ID: 

Am 17.04.2018 um 17:17 schrieb Tim Smith:
> ssl_ciphers
> 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';

TLS 1.1 and older don't know/support these ciphers, and your SSL library
doesn't support TLS 1.3.

Andreas

From mdounin at mdounin.ru Tue Apr 17 15:41:06 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 17 Apr 2018 18:41:06 +0300
Subject: nginx-1.14.0
Message-ID: <20180417154105.GE77253@mdounin.ru>

Changes with nginx 1.14.0                                        17 Apr 2018

    *) 1.14.x stable branch.

-- 
Maxim Dounin
http://nginx.org/

From kworthington at gmail.com Tue Apr 17 17:26:37 2018
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 17 Apr 2018 13:26:37 -0400
Subject: [nginx-announce] nginx-1.14.0
In-Reply-To: <20180417154111.GF77253@mdounin.ru>
References: <20180417154111.GF77253@mdounin.ru>
Message-ID: 

Hello Nginx users,

Now available: Nginx 1.14.0 for Windows
https://kevinworthington.com/nginxwin1140 (32-bit and 64-bit versions)

These versions are to support legacy users who are already using Cygwin
based builds of Nginx. Officially supported native Windows binaries are
at nginx.org.

Announcements are also available here:
Twitter http://twitter.com/kworthington
Google+ https://plus.google.com/+KevinWorthington/

Thank you,
Kevin
-- 
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
https://kevinworthington.com/
https://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/

On Tue, Apr 17, 2018 at 11:41 AM, Maxim Dounin wrote:

> Changes with nginx 1.14.0                                    17 Apr 2018
>
>     *) 1.14.x stable branch.
> > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Tue Apr 17 17:45:33 2018 From: r at roze.lv (Reinis Rozitis) Date: Tue, 17 Apr 2018 20:45:33 +0300 Subject: NGINX only enabling TLS1.2 ? In-Reply-To: References: Message-ID: <003201d3d673$e2b19bb0$a814d310$@roze.lv> > Is there any reason why SSLlabs would report only 1.2 as being available despite the config showing otherwise ? Also SSLLabs supports only tls 1.3 draft18 while for example OpenSSL 1.1.1pre4 is draft 28, so it won't show that the server supports tls1.3. rr From djczaski at gmail.com Tue Apr 17 18:30:54 2018 From: djczaski at gmail.com (djczaski at gmail.com) Date: Tue, 17 Apr 2018 14:30:54 -0400 Subject: nginScript question In-Reply-To: <6563af4f-bd50-39ce-9835-d19182d09f18@nginx.com> References: <98aad8c8-8885-58e5-649d-fa8796d7cf96@nginx.com> <786faa9105dbfec1a87376b081cca5e7.NginxMailingListEnglish@forum.nginx.org> <6563af4f-bd50-39ce-9835-d19182d09f18@nginx.com> Message-ID: <09CC4A85-7471-4F08-B832-0ED91D4834B5@gmail.com> Is there a roadmap for nginScript and any plans to make it a part of the official nginx release? > On Apr 10, 2018, at 8:45 AM, Dmitry Volyntsev wrote: > > > >> On 13.07.2017 18:14, aledbf wrote: >> Thanks! > > Hi, > > I am glad to inform you that since njs-0.2.0 it is possible to create arbitrary http subrequests from js_content phase. 
> > Here you can find the subrequest API description:
> > http://hg.nginx.org/njs/rev/750f7c6f071c
> >
> > Here you can find some usage examples:
> > http://hg.nginx.org/nginx-tests/rev/8e593b068fc0

From artem.povaluhin at gmail.com Tue Apr 17 19:33:44 2018
From: artem.povaluhin at gmail.com (Artem S. Povalyukhin)
Date: Tue, 17 Apr 2018 22:33:44 +0300
Subject: nginScript question
In-Reply-To: <09CC4A85-7471-4F08-B832-0ED91D4834B5@gmail.com>
References: <98aad8c8-8885-58e5-649d-fa8796d7cf96@nginx.com> <786faa9105dbfec1a87376b081cca5e7.NginxMailingListEnglish@forum.nginx.org> <6563af4f-bd50-39ce-9835-d19182d09f18@nginx.com> <09CC4A85-7471-4F08-B832-0ED91D4834B5@gmail.com>
Message-ID: <824d7eb5-268f-a96f-480a-86b4c2956ebc@gmail.com>

Hi!

On 04/17/2018 09:30 PM, djczaski at gmail.com wrote:
> Is there a roadmap for nginScript and any plans to make it a part of
> the official nginx release?

It is an official part:

$ apt show nginx-module-njs
Package: nginx-module-njs
Version: 1.13.12.0.2.0-1~xenial
Priority: optional
Section: httpd
Maintainer: Sergey Budnevitch
Installed-Size: 1,707 kB
Depends: libc6 (>= 2.14), libedit2 (>= 2.11-20080614), libpcre3, nginx (= 1.13.12-1~xenial)
Homepage: http://nginx.org/
Download-Size: 184 kB
APT-Manual-Installed: yes
APT-Sources: http://nginx.org/packages/mainline/ubuntu xenial/nginx amd64 Packages
Description: nginx nginScript dynamic modules
 nginScript dynamic modules for nginx

wbr, Artem

From nginx-forum at forum.nginx.org Tue Apr 17 22:13:13 2018
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Tue, 17 Apr 2018 18:13:13 -0400
Subject: Nginx not respecting locations execution ordering
Message-ID: 

So I have a location setup like this.
    location /media/files/ {
        add_header X-Location-Order First;
    }
    location ~* \.(ico|png|jpg|jpeg|gif|flv|mp4|avi|m4v|mov|divx|webm|ogg|mp3|mpeg|mpg|swf|css|js)$ {
        add_header X-Location-Order Second;
    }

When I access URL : domain_name_dot_com/media/files/image.jpg
The Header response is X-Location-Order: Second

I want it to use the first location for all URLs that match it, not the
regex location. Can anybody help?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279504,279504#msg-279504

From r at roze.lv Tue Apr 17 22:21:05 2018
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 18 Apr 2018 01:21:05 +0300
Subject: Nginx not respecting locations execution ordering
In-Reply-To: References: Message-ID: <000001d3d69a$60c0ebc0$2242c340$@roze.lv>

> When I access URL : domain_name_dot_com/media/files/image.jpg
>
> The Header response is X-Location-Order: Second
>
> I want it to use the first location for all URLs that match it, not the
> regex location. Can anybody help?

Change the first block to:

    location ^~ /media/files/ {
        add_header X-Location-Order First;
    }

"If the longest matching prefix location has the "^~" modifier then
regular expressions are not checked."

http://nginx.org/en/docs/http/ngx_http_core_module.html#location

rr

From nginx-forum at forum.nginx.org Tue Apr 17 22:35:23 2018
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Tue, 17 Apr 2018 18:35:23 -0400
Subject: Nginx not respecting locations execution ordering
In-Reply-To: <000001d3d69a$60c0ebc0$2242c340$@roze.lv>
References: <000001d3d69a$60c0ebc0$2242c340$@roze.lv>
Message-ID: <387afd9586f9f9db2bd35de723046261.NginxMailingListEnglish@forum.nginx.org>

Thank you for the help :)

A new dilemma has occurred from this.

I add a location like so.
    location ^~ /media/files/ {
        add_header X-Location-Order First;
    }
    location ~ \.mp4$ {
        add_header X-Location-MP4 Served-from-MP4-location;
    }
    location ~* \.(ico|png|jpg|jpeg|gif|flv|mp4|avi|m4v|mov|divx|webm|ogg|mp3|mpeg|mpg|swf|css|js)$ {
        add_header X-Location-Order Second;
    }

How can I make it so my MP4 location is not overridden by the
"^~ /media/files/" location?

I would like the responses to be like this.

URL : domain_name_dot_com/media/files/image.jpg
Header response is X-Location-Order: First

URL : domain_name_dot_com/media/files/video.mp4
Header response is X-Location-MP4: Served-from-MP4-location

URL : domain_name_dot_com/media/files/other.css
Header response is X-Location-Order: Second

How can I achieve that? Is it possible to have a location inside a
location?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279504,279506#msg-279506

From ekgermann at semperen.com Wed Apr 18 01:01:03 2018
From: ekgermann at semperen.com (Eric Germann)
Date: Tue, 17 Apr 2018 21:01:03 -0400
Subject: NGINX only enabling TLS1.2 ?
In-Reply-To: <003201d3d673$e2b19bb0$a814d310$@roze.lv>
References: <003201d3d673$e2b19bb0$a814d310$@roze.lv>
Message-ID: <7C8FF862-0526-43A9-8991-C68B89FCB6B5@semperen.com>

Piling on this, I built nginx-1.14.0 from source with openssl-1.1.1-pre5
compiled in. The macro in the header says it's at TLS 1.3 Draft 26.
Chrome 66 claims to support Draft 23 (via chrome://flags).
Neither Cloudflare nor Chrome report TLS 1.3.

Yet when I do this from the command line for testing (openssl s_client
host:443) I get:

New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Server public key is 384 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
SSL-Session:
    Protocol  : TLSv1.3
    Cipher    : TLS_AES_256_GCM_SHA384

ssl_ciphers are set to

TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:TLS13-AES-128-CCM-8-SHA256:TLS13-AES-128-CCM-SHA256:EECDH+CHACHA20:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:EDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128:HIGH:!RC4:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK;

My questions:

1. Do the drafts try to negotiate to a common draft?
2. The server is compiled statically against the same openssl source that
the openssl command is executed from. I'd think they would be able to
negotiate the first protocol listed.
3. Why does the protocol come up (even with the openssl command) as
TLS_AES_256_GCM_SHA384 and not the TLS13 variants? ChaCha20-Poly1305
works in TLS1.2 just fine.

Thoughts?

EKG

> On Apr 17, 2018, at 1:45 PM, Reinis Rozitis wrote:
>
>> Is there any reason why SSLlabs would report only 1.2 as being
>> available despite the config showing otherwise?
>
> Also SSLLabs supports only tls 1.3 draft18 while for example OpenSSL
> 1.1.1pre4 is draft 28, so it won't show that the server supports tls1.3.
> rr

From Ajay_Sonawane at symantec.com Wed Apr 18 06:15:51 2018
From: Ajay_Sonawane at symantec.com (Ajay Sonawane)
Date: Wed, 18 Apr 2018 06:15:51 +0000
Subject: Loadbalancer and failover issues
Message-ID: 

I am using nginx as a reverse proxy for my backend servers. My client is
able to communicate with the backend servers through the proxy (I have set
up the correct configuration). To set up a load balancer I have used the
upstream directive to define a cluster of backend servers with the default
round-robin method. My clients are now connecting to backend servers in
round-robin fashion. Now I want to see how the load balancer handles
failover. To test this, I deliberately stopped one server so all
connections would go to the second server. I have used the below
directives as well.

    proxy_next_upstream error timeout invalid_header http_500 http_502;
    proxy_connect_timeout 2;

The problem I am facing is that I get error 502 at the client side
intermittently. This is when one of the servers is down. I was expecting
that the client would get connected to the second server when the first
server is down. This works most of the time, but intermittently I get a
502 error.

Let me know if I am missing anything.

Ajay
-------------- next part --------------
An HTML attachment was scrubbed...
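The failover setup described above can be sketched roughly as follows (a minimal sketch: the upstream name, addresses, and the max_fails/fail_timeout values are illustrative placeholders, not taken from the actual configuration):

```nginx
# Hypothetical backend group; max_fails/fail_timeout shown with
# their defaults made explicit.
upstream backend {
    server 192.0.2.10:8080 max_fails=1 fail_timeout=10s;
    server 192.0.2.11:8080 max_fails=1 fail_timeout=10s;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        # Pass a request on to the next server on connection errors,
        # timeouts, invalid headers, and 500/502 responses.
        proxy_next_upstream error timeout invalid_header http_500 http_502;
        proxy_connect_timeout 2s;
    }
}
```

One thing worth knowing when testing failover with such a configuration: once a server has accumulated max_fails failures within fail_timeout, nginx marks it unavailable for the duration of fail_timeout, and if every server in the group is marked unavailable at the same moment, there is nowhere left to retry and the client receives a 502 ("no live upstreams"), which is a common cause of intermittent 502s during a failover test.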
URL: From igor at sysoev.ru Wed Apr 18 07:08:57 2018 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 18 Apr 2018 10:08:57 +0300 Subject: Nginx not respecting locations execution ordering In-Reply-To: <387afd9586f9f9db2bd35de723046261.NginxMailingListEnglish@forum.nginx.org> References: <000001d3d69a$60c0ebc0$2242c340$@roze.lv> <387afd9586f9f9db2bd35de723046261.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1B62F636-B097-46BA-83C4-348ACD45DFBC@sysoev.ru> > On 18 Apr 2018, at 01:35, c0nw0nk wrote: > > Thank you for the help :) > > A new dilemma has occurred from this. > > I add a location like so. > > location ^~/media/files/ { > add_header X-Location-Order First; > } > location ~ \.mp4$ { > add_header X-Location-MP4 Served-from-MP4-location; > } > location ~* > \.(ico|png|jpg|jpeg|gif|flv|mp4|avi|m4v|mov|divx|webm|ogg|mp3|mpeg|mpg|swf|css|js)$ > { > add_header X-Location-Order Second; > } > > > How can i make it so my MP4 location is not overridden by the > ^~/media/files/ location. > > I would like the responses to be like this. > > URL : domain_name_dot_com/media/files/image.jpg > Header response is X-Location-Order: First > > URL : domain_name_dot_com/media/files/video.mp4 > Header response is X-Location-MP4: Served-from-MP4-location > > URL : domain_name_dot_com/media/files/other.css > Header response is X-Location-Order: Second > > > How can I achieve that is it possible to have a location inside a location ? If you prefer execution of the listed order, the you should use regex locations only: location ~ ^/media/files/.+\.mp4$ { add_header X-Location-MP4 Served-from-MP4-location; } location ~ ^/media/files/ { add_header X-Location-Order First; } location ~* \.(ico|png|jpg|jpeg|gif|flv|mp4|avi|m4v|mov|divx|webm|ogg|mp3|mpeg|mpg|swf|css|js)$ { add_header X-Location-Order Second; } However this approach is not scaleable. 
Suppose you have hundreds of locations, then you have to find appropriate place where to add a new location and have to investigate the whole large configuration to understand how this tiny change will affect the configuration. The other approach is to use prefix locations and to isolate regex location inside prefix ones: location / { location ~* \.(ico|png|jpg|jpeg|gif|flv|mp4|avi|m4v|mov|divx|webm|ogg|mp3|mpeg|mpg|swf|css|js)$ { add_header X-Location-Order Second; } } location /media/files/ { add_header X-Location-Order First; location ~ \.mp4$ { add_header X-Location-MP4 Served-from-MP4-location; } } With this configuration you can sort the first level prefix locations in any order. -- Igor Sysoev http://nginx.com From r at roze.lv Wed Apr 18 07:30:55 2018 From: r at roze.lv (Reinis Rozitis) Date: Wed, 18 Apr 2018 10:30:55 +0300 Subject: NGINX only enabling TLS1.2 ? In-Reply-To: <7C8FF862-0526-43A9-8991-C68B89FCB6B5@semperen.com> References: <003201d3d673$e2b19bb0$a814d310$@roze.lv> <7C8FF862-0526-43A9-8991-C68B89FCB6B5@semperen.com> Message-ID: <000001d3d6e7$3018fdd0$904af970$@roze.lv> > 3. Why does the protocol come up (even with the openssl command) as TLS_AES_256_GCM_SHA384 and not the TLS13 variants? ChaCha20-Poly1305 works in TLS1.2 just fine. You can look at https://github.com/openssl/openssl/pull/5392 The default TLSv1.3 ciphersuites (and the way those are configured (https://github.com/openssl/openssl/commit/f865b08143b453962ad4afccd69e698d13c60f77) ) have been changed to: "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256" Maybe a developer can comment on this as it could be that nginx isn't fully compatible (and works just because the tlsv1.3 ciphers are always enabled) with the latest openssl pre/beta-release... 
rr From xeioex at nginx.com Wed Apr 18 15:25:48 2018 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 18 Apr 2018 18:25:48 +0300 Subject: nginScript question In-Reply-To: <09CC4A85-7471-4F08-B832-0ED91D4834B5@gmail.com> References: <98aad8c8-8885-58e5-649d-fa8796d7cf96@nginx.com> <786faa9105dbfec1a87376b081cca5e7.NginxMailingListEnglish@forum.nginx.org> <6563af4f-bd50-39ce-9835-d19182d09f18@nginx.com> <09CC4A85-7471-4F08-B832-0ED91D4834B5@gmail.com> Message-ID: <7a1a95e0-269a-8cc3-d1cd-4fd5631018f9@nginx.com> On 17.04.2018 21:30, djczaski at gmail.com wrote: > Is there a roadmap for nginScript There is. The short-term preliminary plan is: - stream integration refactoring to match the way it is done in http - access to shared memory storage - base64 encode From scotgram at scotgram.com Wed Apr 18 17:51:41 2018 From: scotgram at scotgram.com (ScotGram) Date: Wed, 18 Apr 2018 10:51:41 -0700 Subject: Domain name wildcard option in NGINX configuration Message-ID: Hello, We have created an Nginx configuration with multiple cnames and the same domain name, and it is working correctly, but we want to be able to wildcard the domain name so that during our system configuration we do not have to modify the nginx configuration for the different domains we are hosting. What we want is to replace the server { server_name cname1.domainname.com ...
} server { server_name cname2.domainname.com } -----> with server { server_name cname1.*.com } server { server_name cname2.*.com } The associated SSL keys would still be in the SSL directory, but we would also need to specify a * in the file name. Thanks ScotGram From nginx-forum at forum.nginx.org Wed Apr 18 21:00:21 2018 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 18 Apr 2018 17:00:21 -0400 Subject: Nginx not respecting locations execution ordering In-Reply-To: <1B62F636-B097-46BA-83C4-348ACD45DFBC@sysoev.ru> References: <1B62F636-B097-46BA-83C4-348ACD45DFBC@sysoev.ru> Message-ID: <1c18c858b5948a8dad98ad42368a7ecd.NginxMailingListEnglish@forum.nginx.org> Igor Sysoev Wrote: ------------------------------------------------------- > > On 18 Apr 2018, at 01:35, c0nw0nk > wrote: > > > > Thank you for the help :) > > > > A new dilemma has occurred from this. > > > > I add a location like so. > > > > location ^~/media/files/ { > > add_header X-Location-Order First; > > } > > location ~ \.mp4$ { > > add_header X-Location-MP4 Served-from-MP4-location; > > } > > location ~* > > > \.(ico|png|jpg|jpeg|gif|flv|mp4|avi|m4v|mov|divx|webm|ogg|mp3|mpeg|mpg > |swf|css|js)$ > > { > > add_header X-Location-Order Second; > > } > > > > > > How can i make it so my MP4 location is not overridden by the > > ^~/media/files/ location. > > > > I would like the responses to be like this. > > > > URL : domain_name_dot_com/media/files/image.jpg > > Header response is X-Location-Order: First > > > > URL : domain_name_dot_com/media/files/video.mp4 > > Header response is X-Location-MP4: Served-from-MP4-location > > > > URL : domain_name_dot_com/media/files/other.css > > Header response is X-Location-Order: Second > > > > > > How can I achieve that is it possible to have a location inside a > location ?
> > If you prefer execution of the listed order, the you should use regex > locations only: > > location ~ ^/media/files/.+\.mp4$ { > add_header X-Location-MP4 Served-from-MP4-location; > } > location ~ ^/media/files/ { > add_header X-Location-Order First; > } > location ~* > \.(ico|png|jpg|jpeg|gif|flv|mp4|avi|m4v|mov|divx|webm|ogg|mp3|mpeg|mpg > |swf|css|js)$ { > add_header X-Location-Order Second; > } > > However this approach is not scaleable. Suppose you have hundreds of > locations, then you have > to find appropriate place where to add a new location and have to > investigate the whole large > configuration to understand how this tiny change will affect the > configuration. > > The other approach is to use prefix locations and to isolate regex > location inside prefix ones: > > location / { > location ~* > \.(ico|png|jpg|jpeg|gif|flv|mp4|avi|m4v|mov|divx|webm|ogg|mp3|mpeg|mpg > |swf|css|js)$ { > add_header X-Location-Order Second; > } > } > > location /media/files/ { > add_header X-Location-Order First; > > location ~ \.mp4$ { > add_header X-Location-MP4 Served-from-MP4-location; > } > } > > With this configuration you can sort the first level prefix locations > in any order. > > > -- > Igor Sysoev > http://nginx.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thank you Igor for the information :) My soloution has become this. 
location ^~/media/files/ { add_header X-Location-Order First; location ~ \.mp4$ { add_header X-Location-MP4 Served-from-MP4-location; } location ~* \.(ico|png|jpg|jpeg|gif|flv|mp4|avi|m4v|mov|divx|webm|ogg|mp3|mpeg|mpg|swf|css|js)$ { add_header X-Location-Order Second; } } I really appreciate the help and info from everyone; now I know what to do and can reflect upon it in future :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279504,279528#msg-279528 From shanshan.lqs at gmail.com Thu Apr 19 01:48:55 2018 From: shanshan.lqs at gmail.com (qingshan luo) Date: Thu, 19 Apr 2018 09:48:55 +0800 Subject: When compiling NGINX from source fails when the --with-libatomic="path to libatomic_ops source dir" option is specified Message-ID: When compiling nginx with the --with-libatomic=/usr/local/src/libatomic_ops-7.6.4 option, it is reported that libatomic_ops.a does not exist. If the libatomic_ops version is greater than 7.4.0, the libatomic_ops.a file is located in ./src/.libs instead of ./src see: https://github.com/ivmai/libatomic_ops/issues/35 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Apr 19 12:53:16 2018 From: nginx-forum at forum.nginx.org (Xander2020) Date: Thu, 19 Apr 2018 08:53:16 -0400 Subject: Nginx Log File from Specific PathName in link Message-ID: <2a980d6407fa665a360e999f13dc5479.NginxMailingListEnglish@forum.nginx.org> Hello everyone, I'm new here, first post today, been using Nginx for a few years now. People are POSTing links like this to our server: POST https://domain.com/name/test/sub/aaa-bbb-ccc I want to log all the links that have the word "name" in a separate file called name.log. E.g.: from the below examples, the log file name.log should contain only the first and the last of the links.
POST https://domain.com/name/test/sub/aaa-bbb-ccc POST https://domain.com/name3/test/sub/aaa-bbb-ddd POST https://domain.com/name2/test/sub/aaa-bbb-zzz POST https://domain.com/name/test/sub/aaa-bbb-aaa Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279535,279535#msg-279535 From lists at lazygranch.com Thu Apr 19 19:45:34 2018 From: lists at lazygranch.com (Gary) Date: Thu, 19 Apr 2018 12:45:34 -0700 Subject: Nginx Log File from Specific PathName in link In-Reply-To: <2a980d6407fa665a360e999f13dc5479.NginxMailingListEnglish@forum.nginx.org> Message-ID: Why wouldn't you just grep the regular log file? ? Original Message ? From: nginx-forum at forum.nginx.org Sent: April 19, 2018 5:53 AM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Nginx Log File from Specific PathName in link Hello everyone , Im new here , first post today , been using Nginx for a few years now. People are POST ing links like this to our server : POST https://domain.com/name/test/sub/aaa-bbb-ccc I want to log in a different file all the links that are having the word : name in a file called name.log Eg : From the bellow exemples i should have in the log file name.log only the first and the last of the links. 
POST https://domain.com/name/test/sub/aaa-bbb-ccc POST https://domain.com/name3/test/sub/aaa-bbb-ddd POST https://domain.com/name2/test/sub/aaa-bbb-zzz POST https://domain.com/name/test/sub/aaa-bbb-aaa Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279535,279535#msg-279535 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Apr 20 14:59:57 2018 From: nginx-forum at forum.nginx.org (Xander2020) Date: Fri, 20 Apr 2018 10:59:57 -0400 Subject: Nginx Log File from Specific PathName in link In-Reply-To: <2a980d6407fa665a360e999f13dc5479.NginxMailingListEnglish@forum.nginx.org> References: <2a980d6407fa665a360e999f13dc5479.NginxMailingListEnglish@forum.nginx.org> Message-ID: I need it in a separate file so I can quickly use it somewhere else. Is there a way? Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279535,279543#msg-279543 From nginx-forum at forum.nginx.org Sun Apr 22 06:16:19 2018 From: nginx-forum at forum.nginx.org (tangyan) Date: Sun, 22 Apr 2018 02:16:19 -0400 Subject: Nginx Log File from Specific PathName in link In-Reply-To: <2a980d6407fa665a360e999f13dc5479.NginxMailingListEnglish@forum.nginx.org> References: <2a980d6407fa665a360e999f13dc5479.NginxMailingListEnglish@forum.nginx.org> Message-ID: <11000fddb2d26109228c48aef107e6cc.NginxMailingListEnglish@forum.nginx.org> access_log is valid in location context, so you can write nginx.conf like this: location /name/ { access_log name.log; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279535,279544#msg-279544 From nginx-forum at forum.nginx.org Sun Apr 22 07:37:26 2018 From: nginx-forum at forum.nginx.org (Xander2020) Date: Sun, 22 Apr 2018 03:37:26 -0400 Subject: Nginx Log File from Specific PathName in link In-Reply-To: <11000fddb2d26109228c48aef107e6cc.NginxMailingListEnglish@forum.nginx.org> References:
<2a980d6407fa665a360e999f13dc5479.NginxMailingListEnglish@forum.nginx.org> <11000fddb2d26109228c48aef107e6cc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <99c671bb8cabe9d2ddcc73756bc628ce.NginxMailingListEnglish@forum.nginx.org> Thanks Tangyan , Will try and reply back . Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279535,279545#msg-279545 From veerareddy at email.arizona.edu Sun Apr 22 17:19:20 2018 From: veerareddy at email.arizona.edu (Krishna Sai Veera Reddy) Date: Sun, 22 Apr 2018 10:19:20 -0700 Subject: Nginx - Rate limit when origin server response code is 401 Message-ID: Hello, I would like to limit access to my API endpoints when unauthorized requests (i.e. when origin server responds with 401 status code) are made but I wasn't able to find any information on how to go about this online. Is this possible using nginx? Please let me know. Regards, Krishna -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Apr 23 04:20:42 2018 From: nginx-forum at forum.nginx.org (Xander2020) Date: Mon, 23 Apr 2018 00:20:42 -0400 Subject: Nginx Log File from Specific PathName in link In-Reply-To: <11000fddb2d26109228c48aef107e6cc.NginxMailingListEnglish@forum.nginx.org> References: <2a980d6407fa665a360e999f13dc5479.NginxMailingListEnglish@forum.nginx.org> <11000fddb2d26109228c48aef107e6cc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <68176a08f46319de176008274a51757d.NginxMailingListEnglish@forum.nginx.org> Hello , It worked perfectly. Thank you Tangyan. U saved my day. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279535,279548#msg-279548 From nginx-forum at forum.nginx.org Mon Apr 23 08:43:26 2018 From: nginx-forum at forum.nginx.org (Joncheski) Date: Mon, 23 Apr 2018 04:43:26 -0400 Subject: Reverse proxy from NGINX to Keycloak with 2FA Message-ID: <4e57e3f20bd0e930cb3438a0d5e33a56.NginxMailingListEnglish@forum.nginx.org> Hello all, I have a problem with NGINX. 
In addition, I will provide you with a configuration file and a picture of the architecture schema ( https://ibb.co/jqvc8c ). I want to access Keycloak via nginx and log in to it. I use it as an Identity Management system where I have a login with a username and password plus a certificate check, that is, 2FA. My problem is that when I access it in the browser through NGINX, I do not get a popup to submit my user certificate; I then go to the second step and enter a username and password, but after that Keycloak tells me I'm missing a certificate. Something I've tried that worked: if I add proxy_ssl_certificate and proxy_ssl_certificate_key to the configuration file, nginx will pass the certificate on, but only for one user. For example, if proxy_ssl_certificate and proxy_ssl_certificate_key are the certificate and key of the user joncheski, then logging in to Keycloak as the user joncheski passes successfully. But if I want to log in as another user, it will not pass, because the certificate and the username do not match. I need your help: how can I set this up so it works for more users?
nginx.conf: user nginx; worker_processes auto; error_log /var/log/nginx/error.log; pid /run/nginx.pid; events { worker_connections 1024; } http { log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; server { listen 443 ssl http2 default_server; listen [::]:443 ssl http2 default_server; server_name nginx.poc.com; proxy_ssl_verify on; proxy_ssl_verify_depth 2; proxy_ssl_session_reuse on; proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2; proxy_ssl_trusted_certificate /etc/nginx/certs/ca/ROOT-CA.crt; ssl_prefer_server_ciphers on; ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS'; ssl_certificate /etc/nginx/certs/server/SERVER.crt; ssl_certificate_key /etc/nginx/certs/server/SERVER.key; ssl_trusted_certificate /etc/nginx/certs/ca/ROOT-CA.crt; #KEYCLOAK location '/auth' { proxy_pass https://keycloak.poc.com:8443/auth; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_http_version 1.1; } } } Best regards, Goce 
Joncheski Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279549,279549#msg-279549 From nginx-forum at forum.nginx.org Mon Apr 23 09:40:30 2018 From: nginx-forum at forum.nginx.org (BettyStacy) Date: Mon, 23 Apr 2018 05:40:30 -0400 Subject: Buy a Professional Essay from Best Essay Writing Service Online Message-ID: Now buying a best essay is easy one. Now one of the academic challenges is writing a professional essay. Essay writing is best way to improve student?s knowledge and writing skill. Students think essay writing is very tough, because of their lack of time or they are not confident in writing skill. Swift essays are the best essay writing service online. Simple procedure for apply your essay. Our service is fully customer friendly service. If we did not meet your requirements, you can ask us to revise your essay. We will promise you never resell the paper written for you. Our service is 100% genuine and we provide money back guarantee. Visit :https://swiftessays.com/ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279550,279550#msg-279550 From ryan at bbnx.net Mon Apr 23 11:16:13 2018 From: ryan at bbnx.net (Ryan A. Krenzischek) Date: Mon, 23 Apr 2018 07:16:13 -0400 Subject: CentOS 7 SRPMS Missing for 1.14.0 Message-ID: <14d9bd5bef46fa2b9e4952a2527530c3@bbnx.net> Just a heads up to whom ever manages the SRPMS for nginx, 1.14.0 SRPMS packages are missing from: http://nginx.org/packages/mainline/centos/7/SRPMS/ Thanks, Ryan From thresh at nginx.com Mon Apr 23 12:10:10 2018 From: thresh at nginx.com (Konstantin Pavlov) Date: Mon, 23 Apr 2018 15:10:10 +0300 Subject: CentOS 7 SRPMS Missing for 1.14.0 In-Reply-To: <14d9bd5bef46fa2b9e4952a2527530c3@bbnx.net> References: <14d9bd5bef46fa2b9e4952a2527530c3@bbnx.net> Message-ID: Hi Ryan, 23.04.2018 14:16, Ryan A. 
Krenzischek wrote: > > Just a heads up to whom ever manages the SRPMS for nginx, 1.14.0 SRPMS > packages are missing from: > http://nginx.org/packages/mainline/centos/7/SRPMS/ You can get the 1.14.0 SRPMS and binaries from http://nginx.org/packages/centos/7/SRPMS/ since it's a stable release (as opposed to mainline as used in your link). Have a good one, -- Konstantin Pavlov https://www.nginx.com/ From nginx-forum at forum.nginx.org Tue Apr 24 16:58:47 2018 From: nginx-forum at forum.nginx.org (agile6v) Date: Tue, 24 Apr 2018 12:58:47 -0400 Subject: Is the auto parameter of the worker_processes directive planned to support the Docker runtime? In-Reply-To: <20180412153015.GP77253@mdounin.ru> References: <20180412153015.GP77253@mdounin.ru> Message-ID: Hi, Maxim Dounin I submitted a patch that supports the auto parameter of the worker_processes directive to detect the container environment automatically. Refers to the JDK implementation: https://bugs.openjdk.java.net/browse/JDK-8146115 If you have time please review it. Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279418,279565#msg-279565 From nginx-forum at forum.nginx.org Tue Apr 24 17:06:48 2018 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Tue, 24 Apr 2018 13:06:48 -0400 Subject: FASTCGI_CACHE | How many keys (cached files) can a 100m zone store Message-ID: As it says on the Nginx docs for limit_req One megabyte zone can keep about 16 thousand 64-byte states or about 8 thousand 128-byte states. What can a 100m zone for the fastcgi_cache store ? depending on the length of the fastcgi_cache_key and how many variables that contains i am sure could affect it but be nice to have a example for better understanding of how many file paths are expected to be in the 100mb zone. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279567,279567#msg-279567 From mdounin at mdounin.ru Tue Apr 24 17:14:51 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Apr 2018 20:14:51 +0300 Subject: FASTCGI_CACHE | How many keys (cached files) can a 100m zone store In-Reply-To: References: Message-ID: <20180424171451.GN1606@mdounin.ru> Hello! On Tue, Apr 24, 2018 at 01:06:48PM -0400, c0nw0nk wrote: > As it says on the Nginx docs for limit_req > > One megabyte zone can keep about 16 thousand 64-byte states or about 8 > thousand 128-byte states. > > > What can a 100m zone for the fastcgi_cache store ? > > depending on the length of the fastcgi_cache_key and how many variables that > contains i am sure could affect it but be nice to have a example for better > understanding of how many file paths are expected to be in the 100mb zone. Quoting http://nginx.org/r/fastcgi_cache_path: : One megabyte zone can store about 8 thousand keys. It does not depend on the length of fastcgi_cache_key or anything else, as only md5 of the key is stored in memory. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Tue Apr 24 18:47:16 2018 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Tue, 24 Apr 2018 14:47:16 -0400 Subject: FASTCGI_CACHE | How many keys (cached files) can a 100m zone store In-Reply-To: <20180424171451.GN1606@mdounin.ru> References: <20180424171451.GN1606@mdounin.ru> Message-ID: <85c56f52887f22665297a05c805fa2bc.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Tue, Apr 24, 2018 at 01:06:48PM -0400, c0nw0nk wrote: > > > As it says on the Nginx docs for limit_req > > > > One megabyte zone can keep about 16 thousand 64-byte states or about > 8 > > thousand 128-byte states. > > > > > > What can a 100m zone for the fastcgi_cache store ? 
> > depending on the length of the fastcgi_cache_key and how many > variables that > > contains i am sure could affect it but be nice to have a example for > better > understanding of how many file paths are expected to be in the 100mb > zone. > > Quoting http://nginx.org/r/fastcgi_cache_path: > > : One megabyte zone can store about 8 thousand keys. > > It does not depend on the length of fastcgi_cache_key or anything > else, as only md5 of the key is stored in memory. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thank you Maxim, I did not realize it was the md5 sum that gets stored. Awesome :) You're the best. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279567,279570#msg-279570 From jack.terranova at gmail.com Wed Apr 25 18:48:24 2018 From: jack.terranova at gmail.com (Jack Terranova) Date: Wed, 25 Apr 2018 14:48:24 -0400 Subject: nginx customization best practices Message-ID: We have made some minor customizations to nginx to expose some of the stub module metrics, so we can configure them through the access log. We were able to follow the build-from-source instructions and made the needed changes in the stub module. We may make further changes, but were wondering what is the best practice for maintaining a customized nginx repo? Do we want to continue to download release tarballs, or should we fork the nginx GitHub repo and build/release from the fork? Thanks -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Thu Apr 26 03:15:57 2018 From: nginx-forum at forum.nginx.org (viet) Date: Wed, 25 Apr 2018 23:15:57 -0400 Subject: Nginx - Rate limit when origin server response code is 401 In-Reply-To: References: Message-ID: Hello Krishna, I have a similar question asked here https://serverfault.com/questions/907860/nginx-limit-request-based-on-response-status-code (no reply yet). I've tried several combinations, but haven't found any working solution yet. It seems that limiting is done in the "receiving" state of requests only, which means $status is always null. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279547,279582#msg-279582 From Ajay_Sonawane at symantec.com Thu Apr 26 10:28:00 2018 From: Ajay_Sonawane at symantec.com (Ajay Sonawane) Date: Thu, 26 Apr 2018 10:28:00 +0000 Subject: NGINX closes the upstream connection in 60 seconds Message-ID: I am using the Nginx proxy server as a reverse proxy and https load balancer. The client connects to the backend server through the reverse proxy in a load-balanced environment. I have set up the correct https configuration (with ssl certificates and all) so that my ssl communication is going through the proxy. In my case, the server gracefully disconnects the connection after 120 seconds (the IDLE TIMEOUT of my server). But before that, the nginx proxy itself closes after 60 seconds. This happens for every connect cycle. Because of this, my client doesn't get an ssl disconnect event and just receives a tcp socket close event. If I change the IDLE_TIMEOUT of my server to less than 60 seconds, everything works fine. I want to know if there is any timeout on the nginx server that I need to configure to keep the connection open for more than 60 seconds. Ajay -------------- next part -------------- An HTML attachment was scrubbed...
URL: From michael.friscia at yale.edu Thu Apr 26 12:27:39 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Thu, 26 Apr 2018 12:27:39 +0000 Subject: NGINX closes the upstream connection in 60 seconds In-Reply-To: References: Message-ID: <9ECBFEEF-1BAA-4BE5-B8A8-2FBBD2500A2C@yale.edu> I have the same problem and used this to extent it to 2 minutes proxy_read_timeout 120s; ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu From: nginx on behalf of Ajay Sonawane Reply-To: "nginx at nginx.org" Date: Thursday, April 26, 2018 at 8:25 AM To: "nginx at nginx.org" Subject: NGINX closes the upstream connection in 60 seconds I am using Nginx proxy server as a reverse proxy and https load balancer. Client connects to backend server through the reverse proxy in a load balanced environment. I have setup the correct https configuration (with ssl certificates and all) so that my ssl communication is going through proxy. In my case, server gracefully disconnect connection after 120 seconds (IDLE TIMEOUT of my server). But before that, nginx proxy itself closes after 60 seconds. This happens for every connect cycle. Due to which my client don't get ssl disconnect event and just receives tcp socket close event. If I change the IDLE_TIMEOUT of my server less than 60 seconds, everything works fine. Want to know if there is any timeout on nginx server that I need to configure to keep the connection open for more than 60 seconds. Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From liulantao at gmail.com Thu Apr 26 12:28:22 2018 From: liulantao at gmail.com (Liu Lantao) Date: Thu, 26 Apr 2018 20:28:22 +0800 Subject: Monitoring http returns In-Reply-To: References: Message-ID: Author here :) Thanks Oscar. 
With `accounting` module, the metrics such as status codes, rates, and response time are logged, you can let it write to a local file, or (by default) via syslog to forward them to remote host/app. Another way is use ELK stack, document here: https://translate.google.com/translate?u=http%3A%2F%2Fchenlinux.com%2F2014%2F02%2F19%2Fngx-accounting-to-logstash%2F 2018-04-14 0:12 GMT+08:00 oscaretu : > Perhaps this can be useful for you: https://github.com/Lax/nginx- > http-accounting-module > > Kind regards, > Oscar > > On Wed, Apr 11, 2018 at 6:19 AM, Jeff Abrahamson wrote: > >> I want to monitor nginx better: http returns (e.g., how many 500's, how >> many 404's, how many 200's, etc.), as well as request rates, response >> times, etc. All the solutions I've found start with "set up something to >> watch and parse your logs, then ..." >> >> Here's one of the better examples of that: >> >> https://www.scalyr.com/community/guides/how-to-monitor- >> nginx-the-essential-guide >> >> Perhaps I'm wrong to find this curious. It seems somewhat heavy and >> inefficient to put this functionality into log watching, which means >> another service and being sensitive to an eventual change in log format. >> >> Is this, indeed, the recommended solution? >> >> And, for my better understanding, can anyone explain why this makes more >> sense than native nginx support of sending UDP packets to a monitor >> collector (in our case, telegraf)? 
>> >> -- >> >> Jeff Abrahamson >> +33 6 24 40 01 57 >> +44 7920 594 255 >> http://p27.eu/jeff/ >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > Oscar Fernandez Sierra > oscaretu at gmail.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Liu Lantao EMAIL: liulantao ( at ) gmail ( dot ) com WEBSITE: http://blog.liulantao.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Thu Apr 26 16:45:55 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 26 Apr 2018 19:45:55 +0300 Subject: Unit 1.1 release Message-ID: <6078149.ajOLTDs9qK@vbart-workstation> Hello, I'm glad to announce a new release of NGINX Unit. This is mostly a bugfix release with stability and compatibility improvements. Changes with Unit 1.1 26 Apr 2018 *) Bugfix: Python applications that use the write() callable did not work. *) Bugfix: virtual environments created with Python 3.3 or above might not have worked. *) Bugfix: the request.Read() function in Go applications did not produce EOF when the whole body was read. *) Bugfix: a segmentation fault might have occurred while access log reopening. *) Bugfix: in parsing of IPv6 control socket addresses. *) Bugfix: loading of application modules was broken on OpenBSD. *) Bugfix: a segmentation fault might have occurred when there were two modules with the same type and version; the bug had appeared in 1.0. *) Bugfix: alerts "freed pointer points to non-freeble page" might have appeared in log on 32-bit platforms. A half of these issues were reported on GitHub by our users. Thank you all for helping us make Unit better. 
If you have encountered a problem with Unit or have any ideas for improvements, please feel free to share here: - Mailing list: http://mailman.nginx.org/mailman/listinfo/unit - GitHub: https://github.com/nginx/unit/issues wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Thu Apr 26 17:10:10 2018 From: nginx-forum at forum.nginx.org (veerareddy) Date: Thu, 26 Apr 2018 13:10:10 -0400 Subject: Nginx - Rate limit when origin server response code is 401 In-Reply-To: References: Message-ID: <7769b355eef5410fdbea824f7539f56c.NginxMailingListEnglish@forum.nginx.org> Hello Viet, I posted the same question on Digital Ocean forums and got a response suggesting to intercept errors from upstream and to rate limit based on the error using the `error_page` directive. I haven't tried it myself yet but it's worth a shot. Here's a link to the full post: https://www.digitalocean.com/community/questions/use-nginx-to-rate-limit-only-when-origin-server-response-code-is-401. Let me know if this works for you. Regards, Krishna Veera Reddy Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279547,279595#msg-279595 From nginx-forum at forum.nginx.org Thu Apr 26 20:56:57 2018 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Thu, 26 Apr 2018 16:56:57 -0400 Subject: Nginx fastcgi_cache_background_update Issue/Question Message-ID: http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_cache_background_update How can I switch between an On and a Off version of this function within a Nginx server { set $var 1; if ($var) { fastcgi_cache_background_update On; } Is there a way to do this even with Nginx + Lua i can't figure out a solution that will allow me to toggle / switch between a On and Off fastcgi background update state. 
Whatever ways or methods can be used, with Lua especially, I would be extremely grateful for help. Thanks everyone :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279603,279603#msg-279603 From mauro.tridici at cmcc.it Thu Apr 26 23:41:26 2018 From: mauro.tridici at cmcc.it (Mauro Tridici) Date: Fri, 27 Apr 2018 01:41:26 +0200 Subject: NGINX non-HTTP port forwarding from internet to private LAN preserving the client IP Message-ID: <91C3E99C-677E-4F5B-ACD5-38C285676007@cmcc.it> Dear Users, I really appreciate the NGINX tool and I've been using it for a while, but I'm not an expert user. So, I would like to ask you if I can use NGINX in order to set up port forwarding from an internet client to a server machine in my private LAN while preserving the client IP. IMPORTANT: the source and the destination ports are the same (11750) and they are not HTTP ports. At the moment, I'm doing it using firewalld port forwarding, but, on the server machine, I have no information about the client IP address. Is there a way to do it better using NGINX? If yes, is there a running example to refer to?
My NGINX version is the following one: nginx -V nginx version: nginx/1.10.1 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_perl_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-google_perftools_module --with-debug --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic' --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E' Thank you very much for your attention. Sorry if I said something wrong. Regards Mauro T. 
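Regarding the question above, a possible shape of an answer is the stream module, which the build shown was compiled with (as a dynamic module, so it must be loaded explicitly). The following is an untested sketch; the backend address is a placeholder, and passing the client IP along this way only works if the backend application can parse the PROXY protocol header:

```nginx
# Sketch only: raw TCP forwarding of port 11750 with the stream module.
# 192.168.1.10 is a placeholder for the backend host in the private LAN.
load_module modules/ngx_stream_module.so;   # stream was built as dynamic

stream {
    server {
        listen 11750;
        proxy_pass 192.168.1.10:11750;
        # Prepend a PROXY protocol header carrying the original client
        # address; the backend service must understand this header,
        # otherwise the stream will look corrupted to it.
        proxy_protocol on;
    }
}
```

If the backend cannot speak the PROXY protocol, a plain TCP proxy will always present nginx's own address as the source, and something like kernel-level IP transparency would be needed instead.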
From hitman at itglowz.com Fri Apr 27 06:42:45 2018 From: hitman at itglowz.com (Matthew VK3EVL) Date: Fri, 27 Apr 2018 16:42:45 +1000 Subject: nginx-plus and nginx-asg-sync Message-ID: <676356d8-f33a-d761-04fc-6848cc1b725f@itglowz.com> Hi all, New to nginx so still finding my way around. I am running nginx-plus on Amazon Linux through AWS. I have set up nginx-asg-sync and I find I have 2 problems. The main one is sync isn't working. The nginx logs are spewing out 2018/04/27 06:19:24 [error] 21780#21780: *492 missing "upstream" argument, client: 127.0.0.1, server: , request: "GET /upstream_conf HTTP/1.1", host: "127.0.0.1:8080" 2018/04/27 06:19:29 [error] 21781#21781: *494 missing "upstream" argument, client: 127.0.0.1, server: , request: "GET /upstream_conf HTTP/1.1", host: "127.0.0.1:8080" 2018/04/27 06:19:29 [error] 21781#21781: *495 missing "upstream" argument, client: 127.0.0.1, server: , request: "GET /upstream_conf HTTP/1.1", host: "127.0.0.1:8080" 2018/04/27 06:19:34 [error] 21781#21781: *497 missing "upstream" argument, client: 127.0.0.1, server: , request: "GET /upstream_conf HTTP/1.1", host: "127.0.0.1:8080" To me that says that nginx-asg-sync isn't passing any parameters. nginx config--------------------------------------------------------------------------- server { # Status page is enabled on port 8080 by default. listen 8080; # Status zone allows the status page to display statistics for the whole server block. # It should be enabled for every server block in other configuration files. status_zone status-page; # In case of nginx process listening on multiple IPs you can restrict status page # to single IP only # listen 10.2.3.4:8080; # HTTP basic Authentication is enabled by default. # You can add users with any htpasswd generator. # Command line and online tools are very easy to find. 
# You can also reuse your htpasswd file from Apache web server installation. #auth_basic on; #auth_basic_user_file /etc/nginx/users; # It is recommended to limit the use of status page to admin networks only # Uncomment and change the network accordingly. #allow 10.0.0.0/8; #deny all; # NGINX provides a sample HTML status page for easy dashboard view root /usr/share/nginx/html; location = /status.html { } # Standard HTTP features are fully supported with the status page. # An example below provides a redirect from "/" to "/status.html" location = / { return 301 /status.html; } # Main status location. HTTP features like authentication, access control, # header changes, logging are fully supported. location /status { status; status_format json; } location /upstream_conf { upstream_conf; } } stream { upstream mqtt_cluster { state /var/lib/nginx/state/mqtt_cluster.conf; } server { listen 1883; proxy_pass mqtt_cluster; status_zone mqtt_servers; } upstream coap_cluster { state /var/lib/nginx/state/coap_cluster.conf; } server { listen 5683 udp; proxy_bind 10.130.3.170:6000; proxy_pass coap_cluster; status_zone coap_servers; proxy_responses 1; } } aws.yaml------------------------------------------------------------------------------------- region: us-east-2 upstream_conf_endpoint: http://127.0.0.1:8080/upstream_conf status_endpoint: http://127.0.0.1:8080/status sync_interval_in_seconds: 5 upstreams: - name: mqtt_cluster autoscaling_group: xxxxx port: 1883 kind: stream - name: coap_cluster autoscaling_group: xxxxx 
port: 5683 kind: stream Is there anything that looks out of place there? Cheers Matthew From hitman at itglowz.com Fri Apr 27 06:49:19 2018 From: hitman at itglowz.com (Matthew VK3EVL) Date: Fri, 27 Apr 2018 16:49:19 +1000 Subject: nginx-plus and nginx-asg-sync In-Reply-To: <676356d8-f33a-d761-04fc-6848cc1b725f@itglowz.com> References: <676356d8-f33a-d761-04fc-6848cc1b725f@itglowz.com> Message-ID: Oops. The 2nd problem is nginx won't start when the conf files listed in "state" don't have data in them, and they won't get data in them until nginx starts. Currently I just put in a dummy entry to get me by. On 27/04/2018 16:42, Matthew VK3EVL wrote: > [...] From maxim at nginx.com Fri Apr 27 09:36:35 2018 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 27 Apr 2018 12:36:35 +0300 Subject: nginx-plus and nginx-asg-sync In-Reply-To: References: <676356d8-f33a-d761-04fc-6848cc1b725f@itglowz.com> Message-ID: <40303061-324d-4f32-1c88-62e1f4aff8b6@nginx.com> Hi Matthew, If you are an nginx-plus customer it makes sense to open a support ticket. Thanks, Maxim -- Maxim Konovalov From mohanaprakashme at yahoo.co.in Fri Apr 27 13:08:58 2018 From: mohanaprakashme at yahoo.co.in (mohan prakash) Date: Fri, 27 Apr 2018 13:08:58 +0000 (UTC) Subject: Error: Couldn't connect to server References: <93108399.1659282.1524834538326.ref@mail.yahoo.com> Message-ID: <93108399.1659282.1524834538326@mail.yahoo.com> Hi Team I am trying to execute ~1000 curl requests from my CentOS machine to my nginx server in ~5 sec. The same exercise continues every ~5 sec. I am using libcurl to make the HTTP request. 
During this process I see most of my requests fail with reason Failure Curl Error Code[ 7 ] Reason[ Couldn't connect to server ] Can someone suggest whether I am missing any configuration info in my nginx server. Below is my nginx server configuration user nginx; worker_processes auto; error_log /var/log/nginx/error.log; pid /run/nginx.pid; # Load dynamic modules. See /usr/share/nginx/README.dynamic. include /usr/share/nginx/modules/*.conf; worker_rlimit_nofile 262144; events { worker_connections 16384; } http { log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; # Load modular configuration files from the /etc/nginx/conf.d directory. # See http://nginx.org/en/docs/ngx_core_module.html#include # for more information. include /etc/nginx/conf.d/*.conf; limit_conn_zone $binary_remote_addr zone=perip:10m; limit_conn_zone $server_name zone=perserver:10m; server { limit_conn perip 2000; limit_conn perserver 20000; listen *:8080 backlog=16384; } } Regards Mohanaprakash T -------------- next part -------------- An HTML attachment was scrubbed... URL: From hitman at itglowz.com Fri Apr 27 13:26:44 2018 From: hitman at itglowz.com (Matthew VK3EVL) Date: Fri, 27 Apr 2018 23:26:44 +1000 Subject: nginx-plus and nginx-asg-sync In-Reply-To: <40303061-324d-4f32-1c88-62e1f4aff8b6@nginx.com> References: <676356d8-f33a-d761-04fc-6848cc1b725f@itglowz.com> <40303061-324d-4f32-1c88-62e1f4aff8b6@nginx.com> Message-ID: I eventually worked out how. 
Being a purchase through the AWS Marketplace, I had to dig a little and logged a case. In case anyone else is playing along, I was missing "zone mqtt_cluster 64k;" from my upstream config. I was trying without the 64k. On 27/04/2018 19:36, Maxim Konovalov wrote: > Hi Matthew, > > If you are nginx-plus customer it makes sense to open a support ticket. > > Thanks, > > Maxim > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Fri Apr 27 13:28:11 2018 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 27 Apr 2018 16:28:11 +0300 Subject: nginx-plus and nginx-asg-sync In-Reply-To: References: <676356d8-f33a-d761-04fc-6848cc1b725f@itglowz.com> <40303061-324d-4f32-1c88-62e1f4aff8b6@nginx.com> Message-ID: <04c572b6-8339-9729-f487-9be9027a7925@nginx.com> Good to hear it was resolved. On 27/04/2018 16:26, Matthew VK3EVL wrote: > [...] -- Maxim Konovalov From liulantao at gmail.com Fri Apr 27 13:36:39 2018 From: liulantao at gmail.com (Liu Lantao) Date: Fri, 27 Apr 2018 13:36:39 +0000 Subject: Error: Couldn't connect to server In-Reply-To: <93108399.1659282.1524834538326@mail.yahoo.com> References: <93108399.1659282.1524834538326.ref@mail.yahoo.com> <93108399.1659282.1524834538326@mail.yahoo.com> Message-ID: It seems like your client has reached the limit of max open files. From the shell where you start your client program, run "ulimit -a" to check the settings. You can also check the files open by your client in /proc/<pid>/fd/. 
Increasing that value is simple: you can change it temporarily or save it to a config file; there are tons of documents online about how to change it. On Fri, Apr 27, 2018 at 9:09 PM mohan prakash via nginx wrote: > [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Apr 27 14:25:15 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Apr 2018 17:25:15 +0300 Subject: Nginx fastcgi_cache_background_update Issue/Question In-Reply-To: References: Message-ID: <20180427142514.GC32137@mdounin.ru> Hello! On Thu, Apr 26, 2018 at 04:56:57PM -0400, c0nw0nk wrote: > http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_cache_background_update > > How can I switch between an On and a Off version of this function within a > Nginx server { > > set $var 1; > > if ($var) { > fastcgi_cache_background_update On; > } > > Is there a way to do this even with Nginx + Lua i can't figure out a > solution that will allow me to toggle / switch between a On and Off fastcgi > background update state. > > What ever way or methods can be used with Lua especially I would be > extremely grateful for help with thanks everyone :) Try using different locations for requests where you need background update to be on and off. 
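A sketch of that two-locations idea, for illustration only: the toggle variable, cache zone, and socket path below are hypothetical, not taken from the thread.

```nginx
# Hypothetical names throughout ($want_bg, app_cache, the socket path).
# The directive is fixed per location; requests that should use
# background updates are re-routed to the internal /bg/ location.
location / {
    if ($want_bg) {
        rewrite ^ /bg$uri last;
    }
    fastcgi_pass unix:/run/php-fpm.sock;
    fastcgi_cache app_cache;
    fastcgi_cache_background_update off;
}

location /bg/ {
    internal;
    rewrite ^/bg(.*)$ $1 break;      # strip the routing prefix again
    fastcgi_pass unix:/run/php-fpm.sock;
    fastcgi_cache app_cache;
    fastcgi_cache_background_update on;
}
```

The point of the design is that `fastcgi_cache_background_update` cannot be set per-request, but location selection can be, so the toggle moves into the routing step.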
-- Maxim Dounin http://mdounin.ru/ From mohanaprakashme at yahoo.co.in Fri Apr 27 15:32:52 2018 From: mohanaprakashme at yahoo.co.in (mohan prakash) Date: Fri, 27 Apr 2018 15:32:52 +0000 (UTC) Subject: Error: Couldn't connect to server In-Reply-To: References: <93108399.1659282.1524834538326.ref@mail.yahoo.com> <93108399.1659282.1524834538326@mail.yahoo.com> Message-ID: <1386183153.1684548.1524843172038@mail.yahoo.com> Hi Liu Client side I have increased the file descriptor value to 10000, but still the same issue. Also increased the FD on the server side; the same issue continues. Followed the link below to increase the FD limit: Linux Increase The Maximum Number Of Open Files / File Descriptors (FD) - nixCraft Regards Mohanaprakash T On Friday 27 April 2018, 7:06:51 PM IST, Liu Lantao wrote: [...] _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From peter_booth at me.com Fri Apr 27 16:00:33 2018 From: peter_booth at me.com (Peter Booth) Date: Fri, 27 Apr 2018 12:00:33 -0400 Subject: Error: Couldn't connect to server In-Reply-To: <1386183153.1684548.1524843172038@mail.yahoo.com> References: <93108399.1659282.1524834538326.ref@mail.yahoo.com> <93108399.1659282.1524834538326@mail.yahoo.com> <1386183153.1684548.1524843172038@mail.yahoo.com> Message-ID: <9DBC1F15-3C6B-4B46-9621-7CF3FE8E28A6@me.com> I'm guessing that you have a script that keeps executing curl. What you can do is use curl -K ./fileWithListOfUrls.txt and the one curl process will visit each url in turn, reusing the socket (aka HTTP keep-alive). That said, curl isn't a great workload simulator and, in the long run, you can get better results from something like wrk2 > On 27 Apr 2018, at 11:32 AM, mohan prakash via nginx wrote: > [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.friscia at yale.edu Fri Apr 27 18:04:48 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Fri, 27 Apr 2018 18:04:48 +0000 Subject: duplicate MIME type "text/html" Message-ID: <3994E761-A8DE-43A8-A9DF-320CF5ADEB3B@yale.edu> Just curious, I have a config file that has this sub_filter_types: sub_filter_types text/html application/json application/javascript text/javascript; But in the error logs I have this repeating quite a bit: duplicate MIME type "text/html" in /etc/nginx/conf.d/main-settings.conf Does this mean that I don't have to specify text/html and my setting is just redundant? I've used grep to locate any other occurrence and there are not any. Thanks, -mike ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu -------------- next part -------------- An HTML attachment was scrubbed... 
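For what it's worth, text/html is the default value of sub_filter_types (responses of that type are always processed by the sub module), so listing it again is what produces the notice. The directive needs to name only the additional types:

```nginx
# text/html is in sub_filter_types by default, so it is omitted here;
# this keeps the same behavior and silences the
# 'duplicate MIME type "text/html"' notice.
sub_filter_types application/json application/javascript text/javascript;
```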
URL: From scotgram at scotgram.com Fri Apr 27 18:06:03 2018 From: scotgram at scotgram.com (ScotGram) Date: Fri, 27 Apr 2018 11:06:03 -0700 Subject: Fwd: duplicate MIME type "text/html" In-Reply-To: <3994E761-A8DE-43A8-A9DF-320CF5ADEB3B@yale.edu> References: <3994E761-A8DE-43A8-A9DF-320CF5ADEB3B@yale.edu> Message-ID: <1bb3cdd1-9be4-d5fd-a0c5-1b78eaf8d9fb@scotgram.com> An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From scotgram at scotgram.com Fri Apr 27 18:16:03 2018 From: scotgram at scotgram.com (ScotGram) Date: Fri, 27 Apr 2018 11:16:03 -0700 Subject: Fwd: Fwd: duplicate MIME type "text/html" In-Reply-To: <1bb3cdd1-9be4-d5fd-a0c5-1b78eaf8d9fb@scotgram.com> References: <1bb3cdd1-9be4-d5fd-a0c5-1b78eaf8d9fb@scotgram.com> Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Sat Apr 28 11:15:15 2018 From: nginx-forum at forum.nginx.org (elc) Date: Sat, 28 Apr 2018 07:15:15 -0400 Subject: Slice purge cache problem. Message-ID: <2e6335c3812736cd8da158b66819d8d8.NginxMailingListEnglish@forum.nginx.org> Hi all. Are there any known solutions to purge the cache with slice enabled? (purge all is not an option :) ) Without slice, I can purge items by hash. With slice, I delete only 1 of the many slices. 
or maybe Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279625,279625#msg-279625 From mohanaprakashme at yahoo.co.in Sat Apr 28 11:32:13 2018 From: mohanaprakashme at yahoo.co.in (mohan prakash) Date: Sat, 28 Apr 2018 11:32:13 +0000 (UTC) Subject: Error: Couldn't connect to server In-Reply-To: <9DBC1F15-3C6B-4B46-9621-7CF3FE8E28A6@me.com> References: <93108399.1659282.1524834538326.ref@mail.yahoo.com> <93108399.1659282.1524834538326@mail.yahoo.com> <1386183153.1684548.1524843172038@mail.yahoo.com> <9DBC1F15-3C6B-4B46-9621-7CF3FE8E28A6@me.com> Message-ID: <251743658.1894547.1524915133257@mail.yahoo.com> Hi Peter Thanks for your reply. I am not using a script; I am creating a streamer project where I am using libcurl to download the content from the nginx server. Since the content I am downloading is HLS, I am downloading every ~5 sec. During the stress test I am seeing the "couldn't connect to server" error for HTTP requests. With one or two services I don't see this problem. Regards Mohanaprakash T On Friday, 27 April, 2018, 9:30:46 PM IST, Peter Booth wrote: [...] _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscaretu at gmail.com Sat Apr 28 12:27:21 2018 From: oscaretu at gmail.com (oscaretu) Date: Sat, 28 Apr 2018 14:27:21 +0200 Subject: Error: Couldn't connect to server In-Reply-To: <251743658.1894547.1524915133257@mail.yahoo.com> References: <93108399.1659282.1524834538326.ref@mail.yahoo.com> <93108399.1659282.1524834538326@mail.yahoo.com> <1386183153.1684548.1524843172038@mail.yahoo.com> <9DBC1F15-3C6B-4B46-9621-7CF3FE8E28A6@me.com> <251743658.1894547.1524915133257@mail.yahoo.com> Message-ID: Hello, Mohan. Have you tried to make simultaneous requests against the server from another computer, using curl from the command line? If the requests work from the second computer, there is no problem in the server, and it will be in the client. Perhaps you are looking for a problem in nginx that doesn't exist. Or perhaps check the TIME_WAIT state of the sockets, so they can be reused more quickly. This can give you a clue: https://www.thecodingforums.com/threads/how-to-reuse-tcp-listening-socket-immediately-after-it-was-connectedat-least-once.685380/ I suggest using "sysdig" to monitor the server or client while you are doing the requests, so you'll be able to watch what is happening on your computers. Kind regards, Oscar On Sat, Apr 28, 2018 at 1:32 PM, mohan prakash via nginx wrote: > Hi Peter > > Thanks for your reply. 
> I am not using a script; I am building a streamer project where I am
> using libcurl to download the content from the nginx server. Since the
> content I am downloading is HLS, I download every ~5 sec.
>
> During the stress test I see the "couldn't connect to server" error for
> HTTP requests. With one or two services I don't see this problem.
>
> Regards
> Mohanaprakash T
>
> On Friday, 27 April, 2018, 9:30:46 PM IST, Peter Booth wrote:
>
> I'm guessing that you have a script that keeps executing curl. What you
> can do is use curl -K ./fileWithListOfUrls.txt and the one curl process
> will visit each url in turn, reusing the socket (aka HTTP keep-alive).
>
> That said, curl isn't a great workload simulator and, in the long run,
> you can get better results from something like wrk2.
>
> On 27 Apr 2018, at 11:32 AM, mohan prakash via nginx wrote:
>
> Hi Liu
>
> On the client side I have increased the file descriptor limit to 10000,
> but still the same issue. I also increased the FD limit on the server
> side; the issue continues.
>
> I followed this link to increase the FD limit:
>
> Linux Increase The Maximum Number Of Open Files / File Descriptors (FD) -
> nixCraft
>
> Regards
> Mohanaprakash T
>
> On Friday 27 April 2018, 7:06:51 PM IST, Liu Lantao wrote:
>
> It seems like your client has reached the limit of max open files.
>
> From the shell where you start your client program, run "ulimit -a" to
> check the settings. You can also check the files opened by your client
> in /proc//fd/.
>
> Increasing that value is simple; you can change it temporarily or save
> it to a config file. There are tons of documents online about how to
> change it.
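As an aside on Liu's advice above, the two checks can be done with a couple of one-liners. This is a minimal sketch; the target value 65535 is an arbitrary example, not taken from the thread:

```shell
#!/bin/sh
# Inspect the per-process open-file limit and current usage for this shell.
soft_limit=$(ulimit -n)                 # soft limit on open file descriptors
open_now=$(ls /proc/self/fd | wc -l)    # descriptors currently open (Linux)
echo "soft limit: ${soft_limit}, open now: ${open_now}"

# Raise the soft limit for this shell session only (a temporary change);
# persisting it usually means editing /etc/security/limits.conf or the
# service's systemd unit.
ulimit -n 65535 2>/dev/null || echo "raising the limit may need root or a higher hard limit"
```

Note that the limit applies per process, so it must be raised in the shell (or init system) that actually starts the client.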
> On Fri, Apr 27, 2018 at 9:09 PM mohan prakash via nginx wrote:
>
> Hi Team
>
> I am trying to execute ~1000 curl requests from my CentOS machine to my
> nginx server in ~5 sec. The same exercise continues every ~5 sec.
>
> I am using libcurl to make the HTTP requests.
>
> During this process I see that most of my requests fail with the reason
>
> Failure Curl Error Code[ 7 ] Reason[ Couldn't connect to server ]
>
> Can someone suggest whether I am missing any configuration in my nginx
> server? Below is my nginx server configuration
>
> user nginx;
> worker_processes auto;
> error_log /var/log/nginx/error.log;
> pid /run/nginx.pid;
>
> # Load dynamic modules. See /usr/share/nginx/README.dynamic.
> include /usr/share/nginx/modules/*.conf;
>
> worker_rlimit_nofile 262144;
>
> events {
>     worker_connections 16384;
> }
>
> http {
>     log_format main '$remote_addr - $remote_user [$time_local] "$request" '
>                     '$status $body_bytes_sent "$http_referer" '
>                     '"$http_user_agent" "$http_x_forwarded_for"';
>
>     access_log /var/log/nginx/access.log main;
>
>     sendfile on;
>     tcp_nopush on;
>     tcp_nodelay on;
>     keepalive_timeout 65;
>     types_hash_max_size 2048;
>
>     include /etc/nginx/mime.types;
>     default_type application/octet-stream;
>
>     # Load modular configuration files from the /etc/nginx/conf.d directory.
>     # See http://nginx.org/en/docs/ngx_core_module.html#include
>     # for more information.
>     include /etc/nginx/conf.d/*.conf;
>
>     limit_conn_zone $binary_remote_addr zone=perip:10m;
>     limit_conn_zone $server_name zone=perserver:10m;
>
>     server {
>         limit_conn perip 2000;
>         limit_conn perserver 20000;
>         listen *:8080 backlog=16384;
>     }
> }
>
> Regards
> Mohanaprakash T

-- 
Oscar Fernandez Sierra
oscaretu at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From scotgram at scotgram.com Sat Apr 28 16:15:32 2018
From: scotgram at scotgram.com (ScotGram)
Date: Sat, 28 Apr 2018 09:15:32 -0700
Subject: STOP EMAILING
Message-ID: 

STOP EMAILING

From danny at trisect.uk Sat Apr 28 16:17:31 2018
From: danny at trisect.uk (Danny Horne)
Date: Sat, 28 Apr 2018 17:17:31 +0100
Subject: STOP EMAILING
In-Reply-To: 
References: 
Message-ID: <132d4241-77c4-b2b1-a7c8-491792147e61@trisect.uk>

On 28/04/18 17:15, ScotGram wrote:
> STOP EMAILING

If you want to unsubscribe, just follow the link in this email

From pchychi at gmail.com Sat Apr 28 22:18:18 2018
From: pchychi at gmail.com (Payam Chychi)
Date: Sat, 28 Apr 2018 22:18:18 +0000
Subject: STOP EMAILING
In-Reply-To: <132d4241-77c4-b2b1-a7c8-491792147e61@trisect.uk>
References: <132d4241-77c4-b2b1-a7c8-491792147e61@trisect.uk>
Message-ID: 

Lol

On Sat, Apr 28, 2018 at 9:17 AM Danny Horne via nginx wrote:
> On 28/04/18 17:15, ScotGram wrote:
> > STOP EMAILING
>
> If you want to unsubscribe, just follow the link in this email

-- 
Payam Tarverdyan Chychi
Network Security Specialist / Network Engineer
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From francis at daoine.org Sun Apr 29 08:40:51 2018
From: francis at daoine.org (Francis Daly)
Date: Sun, 29 Apr 2018 09:40:51 +0100
Subject: duplicate MIME type "text/html"
In-Reply-To: <3994E761-A8DE-43A8-A9DF-320CF5ADEB3B@yale.edu>
References: <3994E761-A8DE-43A8-A9DF-320CF5ADEB3B@yale.edu>
Message-ID: <20180429084051.GA19311@daoine.org>

On Fri, Apr 27, 2018 at 06:04:48PM +0000, Friscia, Michael wrote:

Hi there,

> sub_filter_types text/html application/json application/javascript text/javascript;
>
> But in the error logs I have this repeating quite a bit:
> duplicate MIME type "text/html" in /etc/nginx/conf.d/main-settings.conf
>
> Does this mean that I don't have to specify text/html and my setting is just redundant?

Yes.

http://nginx.org/r/sub_filter_types

f
-- 
Francis Daly        francis at daoine.org

From francis at daoine.org Sun Apr 29 09:08:52 2018
From: francis at daoine.org (Francis Daly)
Date: Sun, 29 Apr 2018 10:08:52 +0100
Subject: NGINX non-HTTP port forwarding from internet to private LAN preserving the client IP
In-Reply-To: <91C3E99C-677E-4F5B-ACD5-38C285676007@cmcc.it>
References: <91C3E99C-677E-4F5B-ACD5-38C285676007@cmcc.it>
Message-ID: <20180429090852.GB19311@daoine.org>

On Fri, Apr 27, 2018 at 01:41:26AM +0200, Mauro Tridici wrote:

Hi there,

> So, I would like to ask you if I can use NGINX in order to start a port
> forwarding from an internet client to a server machine in my private LAN,
> preserving the client IP.

In general, what you want cannot be done (I believe).
There are some specific cases where it can be made to work. Maybe your
case is, or can be made, one of those.

One case is where the upstream service can be told to expect the
"proxy protocol". The client connects to nginx; nginx is configured
with a suitable "proxy_protocol on" directive, and writes some extra
information at the start of the tcp connection to the upstream service;
that service reads that information and knows the original client address.

Another case is where the upstream server will always send all IP traffic
addressed to the original clients through the port-forwarding server,
and where the network between the port-forwarding server and the upstream
server is happy for spoofed source addresses on IP packets to pass. In
that case, the port-forwarding server can be clever with the packets
that it forwards, and can be clever with the response packets from the
upstream server. Nginx is not the right tool to be the port-forwarding
service in that case; something within your operating system's IP stack
should be investigated instead.

Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From mauro.tridici at cmcc.it Sun Apr 29 09:26:32 2018
From: mauro.tridici at cmcc.it (Mauro Tridici)
Date: Sun, 29 Apr 2018 09:26:32 +0000
Subject: NGINX non-HTTP port forwarding from internet to private LAN preserving the client IP
In-Reply-To: <20180429090852.GB19311@daoine.org>
References: <91C3E99C-677E-4F5B-ACD5-38C285676007@cmcc.it> <20180429090852.GB19311@daoine.org>
Message-ID: 

Dear Francis,

thank you very much for your detailed explanation. I will investigate in
order to find the right way (and tool) to reach my goal, keeping your
words in mind.

Have a great day.
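For readers following along, the proxy-protocol case Francis describes can be sketched roughly as below in the stream module. This is a hedged sketch, not from the thread: the listen port and backend address are made-up placeholders, and the backend must itself be configured to expect the PROXY header, or it will reject the connection:

```nginx
# Hypothetical sketch: forward TCP port 2222 to a backend, prepending a
# PROXY protocol header so the backend can learn the real client address.
stream {
    server {
        listen 2222;
        proxy_pass 10.0.0.10:2222;   # placeholder backend address
        proxy_protocol on;           # write the PROXY header on connect
    }
}
```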
Regards,
Mauro

On Sun 29 Apr 2018 at 11:09, Francis Daly wrote:

> On Fri, Apr 27, 2018 at 01:41:26AM +0200, Mauro Tridici wrote:
>
> Hi there,
>
> > So, I would like to ask you if I can use NGINX in order to start a port
> > forwarding from an internet client to a server machine in my private
> > LAN, preserving the client IP.
>
> In general, what you want cannot be done (I believe).
>
> There are some specific cases where it can be made to work. Maybe your
> case is, or can be made, one of those.
>
> One case is where the upstream service can be told to expect the
> "proxy protocol". The client connects to nginx; nginx is configured
> with a suitable "proxy_protocol on" directive, and writes some extra
> information at the start of the tcp connection to the upstream service;
> that service reads that information and knows the original client address.
>
> Another case is where the upstream server will always send all IP traffic
> addressed to the original clients through the port-forwarding server,
> and where the network between the port-forwarding server and the upstream
> server is happy for spoofed source addresses on IP packets to pass. In
> that case, the port-forwarding server can be clever with the packets
> that it forwards, and can be clever with the response packets from the
> upstream server. Nginx is not the right tool to be the port-forwarding
> service in that case; something within your operating system's IP stack
> should be investigated instead.
>
> Good luck with it,
>
> f
> -- 
> Francis Daly        francis at daoine.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From michael.friscia at yale.edu Sun Apr 29 12:03:18 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Sun, 29 Apr 2018 12:03:18 +0000
Subject: duplicate MIME type "text/html"
In-Reply-To: <20180429084051.GA19311@daoine.org>
References: <3994E761-A8DE-43A8-A9DF-320CF5ADEB3B@yale.edu>, <20180429084051.GA19311@daoine.org>
Message-ID: 

Thank you, I must have read that 5 times and totally missed "in addition
to" each time!

___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu

________________________________
From: nginx on behalf of Francis Daly
Sent: Sunday, April 29, 2018 4:40 AM
To: nginx at nginx.org
Subject: Re: duplicate MIME type "text/html"

On Fri, Apr 27, 2018 at 06:04:48PM +0000, Friscia, Michael wrote:

Hi there,

> sub_filter_types text/html application/json application/javascript text/javascript;
>
> But in the error logs I have this repeating quite a bit:
> duplicate MIME type "text/html" in /etc/nginx/conf.d/main-settings.conf
>
> Does this mean that I don't have to specify text/html and my setting is just redundant?

Yes.

http://nginx.org/r/sub_filter_types

f
-- 
Francis Daly        francis at daoine.org
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From m16+nginx at monksofcool.net Sun Apr 29 14:10:08 2018
From: m16+nginx at monksofcool.net (Ralph Seichter)
Date: Sun, 29 Apr 2018 16:10:08 +0200
Subject: Flask app with virtual Python environment in Unit 1.1 ?
Message-ID: <64c5cc07-94c0-1392-9f99-7f80758c3ece@monksofcool.net>

Hello,

I have built a Flask application with a Python 3.6 virtual environment
which I would like to run using NGINX Unit 1.1 instead of the usual
"source venv/bin/activate; flask run". When I try to apply the following
configuration

{
    "listeners": {
        "*:5080": { "application": "myapp" }
    },
    "applications": {
        "myapp": {
            "type": "python",
            "processes": 1,
            "module": "wsgi",
            "user": "nginx",
            "group": "nginx",
            "path": "/var/www/myapp"
        }
    }
}

my log file shows

[info] 21422#21422 "myapp" application started
[alert] 21422#21422 Python failed to import module "wsgi"
[notice] 20803#20803 process 21422 exited with code 1
[warn] 20812#20812 failed to start application "myapp"
[alert] 20812#20812 failed to apply new conf

Here's my minimal wsgi.py:

# /var/www/myapp/wsgi.py
import mypackage
if __name__ == "__main__":
    mypackage.run()

The Flask application object is defined in mypackage.__init__.py:

app = Flask(__name__)

NGINX Unit does not know about the virtual Python environment at this
time, and I don't know how I can set the required library paths. Can
somebody please point me in the right direction?

-Ralph

From vbart at nginx.com Sun Apr 29 15:06:51 2018
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Sun, 29 Apr 2018 18:06:51 +0300
Subject: Flask app with virtual Python environment in Unit 1.1 ?
In-Reply-To: <64c5cc07-94c0-1392-9f99-7f80758c3ece@monksofcool.net>
References: <64c5cc07-94c0-1392-9f99-7f80758c3ece@monksofcool.net>
Message-ID: <1612484.hKBuOICmmL@vbart-laptop>

On Sunday, 29 April 2018 17:10:08 MSK Ralph Seichter wrote:
[..]
> Here's my minimal wsgi.py:
>
> # /var/www/myapp/wsgi.py
> import mypackage
> if __name__ == "__main__":
>     mypackage.run()
>
> The Flask application object is defined in mypackage.__init__.py:
>
> app = Flask(__name__)
>
> NGINX Unit does not know about the virtual Python environment at this
> time, and I don't know how I can set the required library paths. Can
> somebody please point me in the right direction?
[..]

You can set the path to a Python virtual environment using the "home"
parameter of the application object:

"myapp": {
    "type": "python",
    "module": "wsgi",
    "user": "nginx",
    "group": "nginx",
    "path": "/var/www/myapp",
    "home": "/path/to/your/venv/directory"
}

Please also note that your application callable needs to be named
"application" (not "app"). That is easily achieved by adding

application = app

to your wsgi.py.

wbr, Valentin V. Bartenev

From m16+nginx at monksofcool.net Sun Apr 29 16:02:47 2018
From: m16+nginx at monksofcool.net (Ralph Seichter)
Date: Sun, 29 Apr 2018 18:02:47 +0200
Subject: Flask app with virtual Python environment in Unit 1.1 ?
In-Reply-To: <1612484.hKBuOICmmL@vbart-laptop>
References: <64c5cc07-94c0-1392-9f99-7f80758c3ece@monksofcool.net> <1612484.hKBuOICmmL@vbart-laptop>
Message-ID: 

On 29.04.18 17:06, Valentin V. Bartenev wrote:

> You can set the path to a Python virtual environment using the "home"
> parameter of the application object.

Ah, that was the missing piece, thank you.

> Please also note that your application callable needs to be named
> "application" (not "app").

Alright, I changed my wsgi.py to this:

from mypackage import app as application
if __name__ == "__main__":
    application.run()

My application can now be called via NGINX -> NGINX Unit -> App, which
is exactly what I wanted. It also requires certain environment variables
to be set, and I am now wondering how to pass these on?
I found the enhancement request https://github.com/nginx/unit/issues/12
but since this feature does not seem to be implemented yet, what is the
recommended method to pass env variables to Unit workers?

-Ralph

From vbart at nginx.com Sun Apr 29 21:03:00 2018
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Mon, 30 Apr 2018 00:03:00 +0300
Subject: Flask app with virtual Python environment in Unit 1.1 ?
In-Reply-To: 
References: <64c5cc07-94c0-1392-9f99-7f80758c3ece@monksofcool.net> <1612484.hKBuOICmmL@vbart-laptop>
Message-ID: <3452428.xtBDV0VjCC@vbart-laptop>

On Sunday, 29 April 2018 19:02:47 MSK Ralph Seichter wrote:
[..]
> My application can now be called via NGINX -> NGINX Unit -> App, which
> is exactly what I wanted. It also requires certain environment variables
> to be set, and I am now wondering how to pass these on? I found the
> enhancement request https://github.com/nginx/unit/issues/12 but since
> this feature does not seem to be implemented yet, what is the
> recommended method to pass env variables to Unit workers?
[..]

Unfortunately, the only way right now is to set them for the main
process (when unitd is executed) or in the application code. Also, you
can pass custom data from nginx using headers.

Setting environment variables through the API is planned for the next
release in June.

wbr, Valentin V. Bartenev

From aclion at yepmail.net Sun Apr 29 21:48:34 2018
From: aclion at yepmail.net (aclion at yepmail.net)
Date: Sun, 29 Apr 2018 14:48:34 -0700
Subject: nginx + php-fpm ERROR 'FastCGI sent in stderr: "Primary script unknown"' for 2nd app (WP) in a subdir. Main site is OK.
Message-ID: <1525038514.2688716.1354879680.6F050918@webmail.messagingengine.com>

Hi,

I'm trying to set up WordPress in a subdir on an Nginx+PHPFPM setup.

I'm running

nginx/1.14.0
PHP 7.2.4-dev (fpm-fcgi)
wordpress/4.9.5

The skeleton I have so far is

tree -L 3 .
.
├── includes
│   └── front.inc
├── public
│   ├── css
│   │   ├── global.css
│   │   └── min
│   └── index.php
└── wp
    ├── composer.json
    ├── composer.lock
    ├── public
    │   ├── blog
    │   ├── content
    │   ├── index.php
    │   └── wp-config.php
    ├── README.md
    └── vendor
        ├── autoload.php
        ├── composer
        └── johnpbloch

WP was populated into the tree using Composer.

My Nginx web config includes

server {
    root /src/www/test/public;
    index index.php;

    rewrite_log on;

    access_log /var/log/nginx/test.example.com.access.log main;
    error_log /var/log/nginx/test.example.com.error.log error;

    ssl on;
    ssl_verify_client off;
    include includes/ssl_protocol.inc;
    ssl_trusted_certificate "ssldir/myCA.crt.pem";
    ssl_certificate "ssldir/test.example.com.crt.pem";
    ssl_certificate_key "ssldir/test.example.com.key.pem";

    location ~* /(\.|~$) { deny all; }
    location ~* (settings.php|schema|htpasswd|password|config) { deny all; }
    location ~* .(inc|rb|json)$ { deny all; }

    location ^~ /blog {
        alias /src/www/test/wp/public/blog;
        index index.php;
        try_files $uri $uri/ /index.php$is_args$args;
        # try_files $uri $uri/ /index.php?$args =404;

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+?\.php)(/.*)$;
            fastcgi_param HTTP_PROXY "";
            fastcgi_pass phpfpm;
            fastcgi_index index.php;
            include includes/fastcgi/fastcgi_params;
        }
    }

    location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        if (!-f $document_root$fastcgi_script_name) {
            return 404;
        }
        fastcgi_param HTTP_PROXY "";
        fastcgi_pass phpfpm;
        fastcgi_index index.php;
        include includes/fastcgi/fastcgi_params;
    }

    ...
}

and

grep -i script includes/fastcgi/fastcgi_params
fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
fastcgi_param  SCRIPT_NAME      $fastcgi_script_name;

With that config, visiting the TOP level of my site

https://test.example.com/

works like it should.
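One hedged aside on the /blog block above, offered as a common gotcha rather than a confirmed diagnosis: inside a `location` that uses `alias`, `$document_root$fastcgi_script_name` does not always resolve to the aliased file, which is a frequent cause of "Primary script unknown". A commonly suggested alternative is `$request_filename`, which already follows the alias mapping (PHP-FPM uses the last SCRIPT_FILENAME value it receives when the included fastcgi_params file also sets one):

```nginx
# Sketch of the PHP handler inside the aliased /blog location.
location ~ \.php$ {
    fastcgi_pass phpfpm;
    fastcgi_index index.php;
    include includes/fastcgi/fastcgi_params;
    # $request_filename = alias/root + URI, so it points at the real file
    # even inside an aliased location.
    fastcgi_param SCRIPT_FILENAME $request_filename;
}
```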
But visiting the WP app in the /blog subdir alias

https://test.example.com/blog/

shows these errors in the logs

==> /var/log/nginx/test.example.com.error.log <==
2018/04/29 13:10:16 [error] 6374#6374: *4 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 172.30.11.7, server: test.example.com, request: "GET /blog/ HTTP/2.0", upstream: "fastcgi://unix:/var/run/php-fpm.sock:", host: "test.example.com"

==> /var/log/nginx/test.example.com.access.log <==
172.30.11.7 test.example.com - [29/Apr/2018:13:10:16 -0700] GET /blog/ HTTP/2.0 "404" 20 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0" "-"

I _think_ that the problem might be that I have to have a different
SCRIPT_FILENAME for the WP part of this. Not sure IF that's the problem,
or what to change it TO.

Any help?

Thanks,

AC

From aclion at yepmail.net Sun Apr 29 22:12:34 2018
From: aclion at yepmail.net (aclion at yepmail.net)
Date: Sun, 29 Apr 2018 15:12:34 -0700
Subject: nginx + php-fpm ERROR 'FastCGI sent in stderr: "Primary script unknown"' for 2nd app (WP) in a subdir. Main site is OK.
In-Reply-To: <1525038514.2688716.1354879680.6F050918@webmail.messagingengine.com>
References: <1525038514.2688716.1354879680.6F050918@webmail.messagingengine.com>
Message-ID: <1525039954.2695077.1354881656.7AC167EF@webmail.messagingengine.com>

This config seems to get me further, or maybe just different

location / {
    index index.php;
    try_files $uri $uri/ /index.php?q=$uri&$args;
}

location ^~ /blog {
    alias /srv/www/test/wp/public/blog;
    index index.php;
    try_files $uri $uri/ /blog/index.php?q=$uri&$args;
}

location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    if (!-f $document_root$fastcgi_script_name) {
        return 404;
    }
    fastcgi_param HTTP_PROXY "";
    fastcgi_pass phpfpm;
    fastcgi_index index.php;
    include includes/fastcgi/fastcgi_params;
}

With that, when I visit https://test.example.com/blog/ I see this
_source_ file in the browser

Hi Nginx Team,

I am unable to use the bcrypt function on CentOS 7.4 with nginx version
1.12.2. Any idea what could be the reason? This is working fine with MD5.

nginx -v
nginx version: nginx/1.12.2

CentOS Linux release 7.4.1708 (Core)

nginx version: nginx/1.12.2
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC)
built with OpenSSL 1.0.2k-fips 26 Jan 2017
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_auth_request_module --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module
--with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_perl_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-google_perftools_module --with-debug --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic' --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E'

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279645,279645#msg-279645

From m16+nginx at monksofcool.net Mon Apr 30 09:06:45 2018
From: m16+nginx at monksofcool.net (Ralph Seichter)
Date: Mon, 30 Apr 2018 11:06:45 +0200
Subject: Flask app with virtual Python environment in Unit 1.1 ?
In-Reply-To: <3452428.xtBDV0VjCC@vbart-laptop>
References: <64c5cc07-94c0-1392-9f99-7f80758c3ece@monksofcool.net> <1612484.hKBuOICmmL@vbart-laptop> <3452428.xtBDV0VjCC@vbart-laptop>
Message-ID: <5c8f399c-47ff-1435-fa6a-d8d31c892ae6@monksofcool.net>

On 29.04.18 23:03, Valentin V. Bartenev wrote:

> Unfortunately, the only way right now is to set them for the main
> process (when unitd is executed) or in the application code.

Ok. I've now written an openrc-run init script that uses '-e NAME=VALUE'
arguments for start-stop-daemon. This currently works for me, because I
don't run applications with conflicting environment variables. Yet.

> Setting environment variables through the API is planned for the next
> release in June.

That's good news, I'm looking forward to it.
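As a small illustration of the "set it in application code" workaround mentioned above: the app can read each variable with an explicit default, so it still starts when the daemon was launched without it. The variable name MYAPP_SECRET is a made-up example, not from the thread:

```python
import os

def setting(name: str, default: str) -> str:
    """Return an environment variable's value, falling back to a default.

    Useful while Unit cannot yet inject per-application environment
    variables: the application keeps working when the variable is absent.
    """
    return os.environ.get(name, default)

# Example usage with a hypothetical variable name:
secret = setting("MYAPP_SECRET", "dev-only-placeholder")
```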
I already like Unit a lot, and I also appreciate you guys being so quick
to respond and so helpful on this mailing list.

-Ralph

From aclion at yepmail.net Mon Apr 30 20:33:16 2018
From: aclion at yepmail.net (aclion at yepmail.net)
Date: Mon, 30 Apr 2018 13:33:16 -0700
Subject: nginx + php-fpm ERROR 'FastCGI sent in stderr: "Primary script unknown"' for 2nd app (WP) in a subdir. Main site is OK.
In-Reply-To: <1525039954.2695077.1354881656.7AC167EF@webmail.messagingengine.com>
References: <1525038514.2688716.1354879680.6F050918@webmail.messagingengine.com> <1525039954.2695077.1354881656.7AC167EF@webmail.messagingengine.com>
Message-ID: <1525120396.1384063.1356093624.149DF8E7@webmail.messagingengine.com>

Got this sorted!

AC

From aclion at yepmail.net Mon Apr 30 20:35:08 2018
From: aclion at yepmail.net (aclion at yepmail.net)
Date: Mon, 30 Apr 2018 13:35:08 -0700
Subject: Installed WP in a subdir, can access wp-admin, but URLs are getting re-written without the subdir PATH. Where do I set this?
Message-ID: <1525120508.1384346.1356095120.3FCAFEFF@webmail.messagingengine.com>

I've installed WP in a site subdir.

I can now access wp-admin pages at the expected URL.

Sort of. I can get there manually, but my site config keeps redirecting,
stripping URLs of the subdir path to WordPress.

My config now has

wp-config.php
References: <1525120508.1384346.1356095120.3FCAFEFF@webmail.messagingengine.com>
Message-ID: 

Hello there!

> I've installed WP in a site subdir.
>
> I can now access wp-admin pages at the expected URL.
>
> Sort of. I can get there manually, but my site config keeps redirecting,
> stripping URLs of the subdir path to WordPress.
>
> My config now has
>
> wp-config.php
> > My config now has > > wp-config.php > define('WP_HOME', 'https://test.example.com/blog'); > define('WP_SITEURL','https://test.example.com/blog'); According to the documentation: https://codex.wordpress.org/Editing_wp-config.php#WP_SITEURL you should define the Home and Website URL so : define( 'WP_SITEURL', 'http://test.example.com/srv/www/test/wp/public' ); define( 'WP_HOME', 'http://test.example.com/blog' ); > > > and for Nginx server { blah-blah; root /srv/www/test/wp/public; index index.php; > ??????????? => > location ^~ /blog { > root /srv/www/test/wp/public; > > index index.php; # Not necessary, but don't forget index in the SERVER BLOCK > } > } > ????????? > location / { > root /srv/www/test/public; > > index index.php; > try_files $uri $uri/ /index.php?$args; > > location ~ \.php { > try_files $uri =404; > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > fastcgi_pass phpfpm; > fastcgi_index index.php; > } > } > location ~ \.php$ { include /etc/nginx/snippets/fastcgi-php.conf; fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; } You can visit this page too: https://codex.wordpress.org/Nginx Regards, Ph. Gras From francis at daoine.org Mon Apr 30 22:35:48 2018 From: francis at daoine.org (Francis Daly) Date: Mon, 30 Apr 2018 23:35:48 +0100 Subject: Reverse proxy from NGINX to Keycloak with 2FA In-Reply-To: <4e57e3f20bd0e930cb3438a0d5e33a56.NginxMailingListEnglish@forum.nginx.org> References: <4e57e3f20bd0e930cb3438a0d5e33a56.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180430223548.GC19311@daoine.org> On Mon, Apr 23, 2018 at 04:43:26AM -0400, Joncheski wrote: Hi there, > I have a problem with NGINX. In addition, I will provide you with a > configuration file and a picture of the architecture schema ( > https://ibb.co/jqvc8c ). > > I want to access Keycloak via nginx and log in to it. 
> I use it as an Identity Management system where I have a login with a
> username and password plus a certificate check, that is, 2FA. My problem
> is that when I access it in the browser through NGINX, I do not get a
> popup to submit my user certificate, but I can go to the second step and
> enter a username and password; after that, Keycloak tells me I'm missing
> a certificate.

As I understand it, Keycloak receives a user/pass combination, and wants
to receive an SSL certificate, and wants to know that the client knows
the private key that matches the certificate.

There are two ways that Keycloak (or anything) can know that the client
knows the matching private key:

* the client can talk SSL directly to Keycloak
* something that Keycloak trusts can tell it that the client knows the
  matching private key

If you can configure Keycloak to believe nginx when nginx says that the
client knows the private key to *this* certificate, then you can use
nginx's ssl_verify_client directive with the optional_no_ca argument.
(http://nginx.org/r/ssl_verify_client)

If you cannot configure Keycloak to believe that, then you will probably
have to change your design so that the client "does" SSL directly with
Keycloak - perhaps by removing nginx from the loop, or perhaps by using
nginx as a tcp port forwarder ("stream"). That would have other effects
on the overall architecture.

f
-- 
Francis Daly        francis at daoine.org
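[Editorial note] To make the ssl_verify_client approach concrete, here is a hedged sketch of the relevant directives. The server name, certificate paths, upstream name, and forwarded header names are placeholders invented for illustration, not values from this thread; the backend must be configured to trust and interpret these headers:

```nginx
server {
    listen 443 ssl;
    server_name auth.example.com;                    # placeholder

    ssl_certificate     /etc/nginx/ssl/server.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # Request a client certificate but accept it without checking it
    # against a CA; the verification result and the certificate itself
    # are forwarded so the backend can make its own decision.
    ssl_verify_client optional_no_ca;

    location / {
        proxy_pass http://keycloak_backend;          # placeholder upstream
        proxy_set_header X-SSL-Client-Verify $ssl_client_verify;
        proxy_set_header X-SSL-Client-Cert   $ssl_client_escaped_cert;
    }
}
```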