From nginx-forum at forum.nginx.org Fri Jan 1 00:32:20 2016
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Thu, 31 Dec 2015 19:32:20 -0500 (EST)
Subject: nginx as the number 2 in 2016?
Message-ID:

Happy new year, y'all! Let's see if we can surpass Apache and make nginx number 2 in the world this year!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263816,263816#msg-263816

From lenaigst at maelenn.org Fri Jan 1 16:27:02 2016
From: lenaigst at maelenn.org (Thierry)
Date: Fri, 1 Jan 2016 18:27:02 +0200
Subject: config reverse proxy nginx vers apache2
Message-ID: <1936325121.20160101182702@maelenn.org>

Hello everyone, and happy new year.

I have an Ubuntu/Apache2 + SSL web server with 3 vhosts (site1.domain.org, site2.domain.org, site3.domain.org). I would like to put an nginx reverse proxy in front of it, on the same machine. My firewall/router does NAT: all requests on 443 are directed to port 443 of the Apache server. So far my tests have been unsuccessful.

- I am not touching the Apache server.
- With nginx I made the following config:

server {
    listen 443;
    server_name site1.domain.org site2.domain.org site3.domain.org;

    location / {
        proxy_pass http://localhost:81;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

The trouble is that nginx is listening on 443 as well as the Apache server (??).

Thanks for your help :)

--
Regards,
Thierry                          e-mail : lenaigst at maelenn.org

From nginx-forum at forum.nginx.org Fri Jan 1 22:09:52 2016
From: nginx-forum at forum.nginx.org (Tyrdl2)
Date: Fri, 1 Jan 2016 17:09:52 -0500 (EST)
Subject: (110: Connection timed out
Message-ID:

Hey,

This is my first post on this forum. I do not speak English well, but I will try to write clearly. I have a problem with nginx + php-fpm and a FastCGI connection timeout.
My error log:

2016/01/01 22:58:58 [error] 2367#0: *13 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.10.12.101, server: xxx.org.pl, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.org.pl", referrer: "http://xxx.org.pl/"

and my nginx .conf:

server {
    listen 80;
    server_name xxx.pvp.org.pl;

    access_log /var/log/nginx/access-forum.log;
    error_log /var/log/nginx/error-forum.log;
    #server_tokens off;

    root /home/produkcja/forum;
    index index.html index.php;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~* \.php$ {
        include fastcgi_params;
        fastcgi_index index.php;
        fastcgi_read_timeout 300;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263819,263819#msg-263819

From nginx-forum at forum.nginx.org Fri Jan 1 22:18:44 2016
From: nginx-forum at forum.nginx.org (Tyrdl2)
Date: Fri, 1 Jan 2016 17:18:44 -0500 (EST)
Subject: (110: Connection timed out
In-Reply-To:
References:
Message-ID: <3e5b128d69100e941696e248f5c35966.NginxMailingListEnglish@forum.nginx.org>

PS: I use nginx 1.8.0, Debian 8, php-fpm, fcgiwrap, spawn-fcgi.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263819,263820#msg-263820

From lenaigst at maelenn.org Fri Jan 1 22:51:11 2016
From: lenaigst at maelenn.org (Thierry)
Date: Sat, 2 Jan 2016 00:51:11 +0200
Subject: config reverse proxy nginx vers apache2
In-Reply-To: <1936325121.20160101182702@maelenn.org>
References: <1936325121.20160101182702@maelenn.org>
Message-ID: <569658229.20160102005111@maelenn.org>

A small change.

nginx config:

server {
    listen 445;
    server_name site1.domain.org site2.domain.org site3.domain.org;

    access_log /var/log/nginx/access-proxy.log;

    location / {
        proxy_pass https://192.168.1.81/;
        include /etc/nginx/proxy.conf;
    }
}

apache2 config: installed the rpaf module, then for each vhost:

RPAFenable On
RPAFsethostname On
RPAFproxy_ips 127.0.0.1 ::1

At the NAT level, I redirect requests on 443 to 445 (so to the nginx server). I have a log entry from the nginx server (a test with my phone):

[02/Jan/2016:00:40:14 +0200] "\x16\x03\x01\x00\xD0\x01\x00\x00\xCC\x03\x03Ew\xF1Q(\x8F\xA5\xB3!\x0B\x84\xE6\xE1\xCD\x9A\x0E\x12\x1C8\xC6\xEE\x8D';z\xC3\x9C(\x22F\x18\xCE\x00\x00\x22\xCC\x14\xCC\x13\xCC\x15\xC0+\xC0/\x00\x9E\xC0" 400 166 "-" "-"

It looks like an SSL error.

Thanks

> Hello everyone, and happy new year.
> I have an Ubuntu/Apache2 + SSL web server with 3 vhosts
> (site1.domain.org site2.domain.org site3.domain.org). I would like
> to put an nginx reverse proxy in front of it, on the same machine.
> My firewall/router does NAT: all requests on 443 are directed to
> port 443 of the Apache server.
> So far my tests have been unsuccessful.
> - I am not touching the Apache server.
> - With nginx I made the following config:
> server {
>     listen 443;
>     server_name site1.domain.org site2.domain.org site3.domain.org;
>     location / {
>         proxy_pass http://localhost:81;
>         proxy_set_header Host $host;
>         proxy_set_header X-Real-IP $remote_addr;
>         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>         proxy_set_header X-Forwarded-Proto $scheme;
>     }
> }
> The trouble is that nginx is listening on 443 as well as the Apache server (??)
> Thanks for your help :)

From bertrand.caplet at chunkz.net Sat Jan 2 00:47:34 2016
From: bertrand.caplet at chunkz.net (Bertrand Caplet)
Date: Sat, 2 Jan 2016 01:47:34 +0100
Subject: config reverse proxy nginx vers apache2
In-Reply-To: <569658229.20160102005111@maelenn.org>
References: <1936325121.20160101182702@maelenn.org> <569658229.20160102005111@maelenn.org>
Message-ID: <56871E26.4040508@chunkz.net>

> A small change.
>
> nginx config:
>
> server {
>     listen 445;
>     server_name site1.domain.org site2.domain.org site3.domain.org;
>
>     access_log /var/log/nginx/access-proxy.log;
>
>     location / {
>         proxy_pass https://192.168.1.81/;
>         include /etc/nginx/proxy.conf;
>     }
> }
>
> apache2 config: installed the rpaf module, then for each vhost:
>
> RPAFenable On
> RPAFsethostname On
> RPAFproxy_ips 127.0.0.1 ::1
>
> At the NAT level, I redirect requests on 443 to 445 (so to the nginx server).
> I have a log entry from the nginx server (a test with my phone):
> [02/Jan/2016:00:40:14 +0200] "\x16\x03\x01\x00\xD0..." 400 166 "-" "-"
>
> It looks like an SSL error.
>
> Thanks

_French aside:_ Hello Thierry, this is an international mailing list, so we should speak English only. Thanks in advance. The problem is that you forgot to add the ssl parameter after your port number, which gives us: listen 445 ssl;

_Back to English:_ You forgot to activate the ssl option after your port.
The right configuration would be like this:

listen 445 ssl;

Regards,

--
CHUNKZ.NET - script kiddie and computer technician
Bertrand Caplet, Flers (FR)
Feel free to send encrypted/signed messages
Key ID: 6E494EB9
GPG FP: CB1B 664A 9165 98F8 459F 0807 CA35 B76F 6E49 4EB9

From thomas at glanzmann.de Sat Jan 2 07:37:36 2016
From: thomas at glanzmann.de (Thomas Glanzmann)
Date: Sat, 2 Jan 2016 08:37:36 +0100
Subject: Debian Jessie, Nginx, PHP, UWSGI quick start
Message-ID: <20160102073736.GA18350@glanzmann.de>

Hello,
I had to host a potentially insecure PHP web application, so I thought about writing a small C program that creates a network, filesystem, pid, uts, and ipc namespace and runs php-fpm inside it. The PHP web application needed access to a MySQL database, mail server and FTP server on localhost. I stumbled across uwsgi and thought: less programming for me. But it took me several hours to get it running, so I am writing up this quickstart guide for others who face the same problem. I would also like feedback from others who are doing the same. Maybe you can add this quickstart guide to the uwsgi website after reviewing it.

- First it is necessary to set up a debootstrap of the distribution you want to use. For me that was Debian Jessie amd64. Add the php5-mysql package and any other php5 extensions you might need. I also searched for any user-writable directories, deleted them and symlinked them to /tmp:

/usr/sbin/debootstrap --arch amd64 jessie /distro/jessie
cd /distro/jessie
chroot .
mount -t proc none /proc
apt-get --no-install-recommends install php5-mysql
find . -type d -perm -o+w -exec ls -adl {} \;
rm ./var/lib/php5/sessions
ln -s /tmp ./var/lib/php5/sessions
...
exit

- Then I installed the uwsgi emperor and the PHP plugin on the host:

apt-get install uwsgi-emperor uwsgi-plugin-php

- Next is the configuration. I first tried to create a root from the existing system without using debootstrap, which always failed at the chroot step for me, so at one point I decided to go with debootstrap. I faced one other challenge: I needed to be able to reach the socket of uwsgi-plugin-php and have communication with MySQL, Postfix, and the FTP server. As soon as I enabled network namespaces, that communication was gone. The options are: don't use network namespaces (not an option for me), use Unix sockets (not an option for FTP and mail), or use veth, macvlan or the uwsgi TunTap Router (I did not want to route the network communication through userland). So I used veth. Note that the new network namespace does not get a default route. For the network configuration I used a /30, but I probably should have used pointopoint. One problem I faced is reaching the socket of uwsgi after enabling network namespaces. Is there a way to let the emperor listen and forward the communication channel to the vassals using two file descriptors, so that I don't need a shared filesystem for the Unix socket or an IP address to communicate from the host to the vassal?
(infra) [~] cat /etc/uwsgi-emperor/emperor.ini
[uwsgi]
emperor = /etc/uwsgi-emperor/vassals
emperor-use-clone = ipc,uts,pid,net
master = true
exec-as-emperor = ip link del veth0
exec-as-emperor = ip link add veth0 type veth peer name veth1
exec-as-emperor = ifconfig veth0 10.12.13.1 netmask 255.255.255.252 up
exec-as-emperor = ip link set veth1 netns $UWSGI_VASSAL_PID

(infra) [~] cat /etc/uwsgi-emperor/vassals/shell.ini
[uwsgi]
socket = 10.12.13.2:12345
uid = www-data
gid = 33
unshare = fs
hook-post-jail = mount:none /distro/jessie /ns bind,ro
pivot_root = /ns /ns/.old
hook-as-root = mount:proc none /proc nodev hidepid=2
hook-as-root = mount:tmpfs none /tmp
hook-as-root = mount:none /.old/srv/www/shell /srv/www/shell bind
hook-as-root = mount:none /.old/dev/pts /dev/pts bind
hook-as-root = umount:/.old rec,detach
wait-for-interface = veth1
exec-as-root = hostname vassal001
exec-as-root = ifconfig lo up
exec-as-root = ifconfig veth1 10.12.13.2 netmask 255.255.255.252 up
plugin = 0:php
php-allowed-ext = .php
php-docroot = /srv/www/shell
php-index = index.php
php-set = extension=mysql.so
processes = 10
cheaper = 2

(infra) [~] cat /etc/nginx/nginx.conf
...
server {
    listen 1.2.3.4:443;
    server_name shell.glanzmann.de; # the host is gone . :-)
    root /srv/www/shell;
    disable_symlinks on;
    autoindex off;
    index index.php;

    location ~ \.php$ {
        include /etc/nginx/uwsgi_params;
        uwsgi_pass 10.12.13.2:12345;
    }
}
...
Here are some useful sources that I used during the process:

https://github.com/gdamjan/uwsgi-php-in-a-namespace
http://lists.unbit.it/pipermail/uwsgi/2013-September/006405.html
http://superuser.com/questions/868602/owncloud-in-subdirectoy-on-arch-nginx-uwsgi
https://github.com/unbit/uwsgi-docs/blob/master/Changelog-1.9.15.rst
https://github.com/unbit/uwsgi-docs/blob/master/articles/MassiveHostingWithEmperorAndNamespaces.rst

I appreciate any tips to improve the above configuration, corrections to any mistakes I might have made, and also pointers to another tool that can do the same thing but is easier to configure than uwsgi. I like uwsgi, but it took me several hours yesterday to get it running, and I found the documentation very confusing.

Cheers,
Thomas

From lenaigst at maelenn.org Sat Jan 2 07:52:03 2016
From: lenaigst at maelenn.org (Thierry)
Date: Sat, 2 Jan 2016 09:52:03 +0200
Subject: config reverse proxy nginx vers apache2
In-Reply-To: <56871E26.4040508@chunkz.net>
References: <1936325121.20160101182702@maelenn.org> <569658229.20160102005111@maelenn.org> <56871E26.4040508@chunkz.net>
Message-ID: <2810149052.20160102095203@maelenn.org>

Hi there,

Sorry for my French... So, no luck: it is still not working even if I add ssl after the port number. This time I do not have any logs to show you. All the vhosts on my Apache server are listening on port 443. My nginx server is listening on port 445. On my router, all https connections are forwarded to port 445.

Thx for your help.
>> A small change.
>>
>> nginx config:
>>
>> server {
>>     listen 445;
>>     server_name site1.domain.org site2.domain.org site3.domain.org;
>>
>>     access_log /var/log/nginx/access-proxy.log;
>>
>>     location / {
>>         proxy_pass https://192.168.1.81/;
>>         include /etc/nginx/proxy.conf;
>>     }
>> }
>>
>> apache2 config: installed the rpaf module, then for each vhost:
>>
>> RPAFenable On
>> RPAFsethostname On
>> RPAFproxy_ips 127.0.0.1 ::1
>>
>> At the NAT level, I redirect requests on 443 to 445 (so to the nginx server).
>> I have a log entry from the nginx server (a test with my phone):
>> [02/Jan/2016:00:40:14 +0200] "\x16\x03\x01\x00\xD0..." 400 166 "-" "-"
>>
>> It looks like an SSL error.
>>
>> Thanks
>>
> _French aside:_ Hello Thierry, this is an international mailing list,
> so we should speak English only. Thanks in advance. The problem is
> that you forgot to add the ssl parameter after your port number,
> which gives us:
> listen 445 ssl;
> _Back to English:_ You forgot to activate the ssl option after your
> port. The right configuration would be like this:
> listen 445 ssl;
> Regards,

From lenaigst at maelenn.org Sat Jan 2 09:57:17 2016
From: lenaigst at maelenn.org (Thierry)
Date: Sat, 2 Jan 2016 11:57:17 +0200
Subject: config reverse proxy nginx to apache2
Message-ID: <1858627475.20160102115717@maelenn.org>

Hello,

I have made some modifications on my nginx reverse proxy server.
I have added these lines:

listen 445;
server_name *.domain.org;
ssl on;
ssl_certificate /etc/ssl/certs/file.crt; (same as apache)
ssl_certificate_key /etc/ssl/private/file.key; (same as apache)
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

I have access to my web server from outside, but I do not understand how the ssl certificate is managed. Why do I need to add those certificates on nginx? This is already handled by my Apache server through its vhosts.

How do I deal with having three vhosts, where two have the same ssl certificate but the third one is using a different one?

Thx

--
Regards,
Thierry                          e-mail : lenaigst at maelenn.org

From anoopalias01 at gmail.com Sat Jan 2 10:27:53 2016
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Sat, 2 Jan 2016 15:57:53 +0530
Subject: config reverse proxy nginx to apache2
In-Reply-To: <1858627475.20160102115717@maelenn.org>
References: <1858627475.20160102115717@maelenn.org>
Message-ID:

Is this Apache behind nginx, or nginx behind Apache? Whichever is the case, the rule is that the frontend (the server terminating 443) needs to have the cert configured, because web browsers need to talk to it over ssl. So, in short, if nginx is the frontend it must have the SSL certificate even though Apache (the proxy backend) also has ssl on it.

Each of your vhosts needs its own ssl entries. If two vhosts use the same cert, the only advantage is that you can use the same filenames.

On Sat, Jan 2, 2016 at 3:27 PM, Thierry wrote:
> Hello,
>
> I have made some modifications on my nginx reverse proxy server.
>
> I have added these lines:
>
> listen 445;
> server_name *.domain.org;
> ssl on;
> ssl_certificate /etc/ssl/certs/file.crt; (same as apache)
> ssl_certificate_key /etc/ssl/private/file.key; (same as apache)
> ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
>
> I have access to my web server from outside, but I do not understand
> how the ssl certificate is managed.
>
> Why do I need to add those certificates on nginx? This is already
> handled by my Apache server through its vhosts.
>
> How do I deal with having three vhosts, where two have the same ssl
> certificate but the third one is using a different one?
>
> Thx
>
> --
> Regards,
> Thierry                          e-mail : lenaigst at maelenn.org
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

--
*Anoop P Alias*

From lenaigst at maelenn.org Sat Jan 2 12:27:47 2016
From: lenaigst at maelenn.org (Thierry)
Date: Sat, 2 Jan 2016 14:27:47 +0200
Subject: config reverse proxy nginx to apache2
In-Reply-To:
References: <1858627475.20160102115717@maelenn.org>
Message-ID: <79866093.20160102142747@maelenn.org>

Hi,

The nginx is the frontend. My main Apache web server has 3 vhosts. vhost1 and vhost2 have the same ssl certificate, while vhost3 has a different one. I have added the ssl_certificate and ssl_certificate_key on my nginx reverse proxy (same as my Apache config), but how do I deal with the third ssl certificate, which is different from the first two? Is it possible to split them?

Thx

> Is this Apache behind nginx, or nginx behind Apache? Whichever is the
> case, the rule is that the frontend (the server terminating 443)
> needs to have the cert configured, because web browsers need to talk
> to it over ssl. So, in short, if nginx is the frontend it must have
> the SSL certificate even though Apache (the proxy backend) also has
> ssl on it.
> Each of your vhosts needs its own ssl entries. If two vhosts use the
> same cert, the only advantage is that you can use the same filenames.

From rainer at ultra-secure.de Sun Jan 3 02:00:58 2016
From: rainer at ultra-secure.de (Rainer Duffner)
Date: Sun, 3 Jan 2016 03:00:58 +0100
Subject: Debian Jessie, Nginx, PHP, UWSGI quick start
In-Reply-To: <20160102073736.GA18350@glanzmann.de>
References: <20160102073736.GA18350@glanzmann.de>
Message-ID: <6B67BC9B-1ABA-4EB2-9A1F-27B35A821589@ultra-secure.de>

> Am 02.01.2016 um 08:37 schrieb Thomas Glanzmann :
>
> Hello,
> I had to host a potentially insecure PHP web application, so I thought
> about writing a small C program that creates a network, filesystem,
> pid, uts, and ipc namespace and runs php-fpm inside it.

Excuse me if I'm blunt, but: can't you just use php-fpm's chroot feature? What's the advantage of using uwsgi for PHP, apart from some tricks you really cannot do with php-fpm (running multiple PHPs at the same time)? I've chrooted all my php-fpm instances (under FreeBSD) and it works very well. Of course, you can't talk to any socket directly anymore, but that's a minor issue IMO.
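[Editor's note: the chroot feature Rainer mentions is a per-pool php-fpm directive. A minimal sketch of such a pool follows; the pool name, paths, and process-manager numbers are illustrative assumptions, not taken from this thread.]

```ini
; hypothetical pool definition, e.g. /etc/php5/fpm/pool.d/jailed.conf
[jailed]
user = www-data
group = www-data
; php-fpm chroots the workers here; script paths seen by PHP
; become relative to this directory
chroot = /srv/jail/app
; the listening socket is created by the master before chrooting,
; so it can live outside the jail
listen = /var/run/php5-fpm-jailed.sock
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
```

As Rainer notes, code inside the jail can still open outbound TCP connections, but Unix sockets of other services on the host are no longer reachable from within the chroot.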
From jimssupp at rushpost.com Mon Jan 4 00:18:07 2016
From: jimssupp at rushpost.com (jimssupp at rushpost.com)
Date: Sun, 03 Jan 2016 16:18:07 -0800
Subject: Nginx configuration problem when splitting across front & backends, when using php ?
Message-ID: <1451866687.1442857.481867410.4D4D74B8@webmail.messagingengine.com>

I installed an Nginx server this weekend. I got the Symfony PHP framework, and a test app, running on it using php-fpm and a simple all-in-one Nginx config. Now I'm trying to figure out splitting the front and back ends using an Nginx proxy.

I set up a frontend Nginx config:

server {
    server_name test.lan;
    listen 10.0.0.1:443 ssl http2;
    root /dev/null;

    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log info;

    autoindex off;
    rewrite_log off;

    ssl on;
    ssl_verify_client off;
    ssl_certificate "ssl/test.crt";
    ssl_trusted_certificate "ssl/test.crt";
    ssl_certificate_key "ssl/test.key";
    include includes/ssl.conf;

    location / {
        proxy_pass http://127.0.0.1:10000;
    }
}

and the backend proxy listener for it:

server {
    server_name test.lan;
    listen 127.0.0.1:10000 default;
    root /srv/www/test/symfony/web/;

    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log info;

    ssl off;
    gzip off;
    rewrite_log on;

    location / {
        try_files $uri /app.php$is_args$args;
        fastcgi_intercept_errors on;
    }

    location ~ ^/(app_dev|config)\.php(/|$) {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
    }

    location ~ ^/app\.php(/|$) {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        internal;
    }
}

When I start Nginx I get no errors.
When I visit https://test.lan/ I see:

    Welcome to Symfony 3.0.1
    Your application is ready to start working on it at: /srv/www/test/symfony/

So that's the same as with the all-in-one Nginx config. But with this split config I'm trying out, when I visit https://test.lan/app.php I get "404 Not Found", and when I visit https://test.lan/app_dev.php I get "File access denied. cref: app_dev.php for more info."

Since I got this working right in my all-in-one config, but not in my split config, I guess the problem's in my config and not in Symfony or php-fpm. How come I can get "/" working but not "/app.php" or "/app_dev.php"?

From chencw1982 at gmail.com Mon Jan 4 03:00:35 2016
From: chencw1982 at gmail.com (Chuanwen Chen)
Date: Mon, 4 Jan 2016 11:00:35 +0800
Subject: [ANNOUNCE] Tengine-2.1.2 released
Message-ID:

Hi folks,

We are very excited to announce that Tengine-2.1.2 (stable version) has been released. You can either check out the source code from GitHub:

https://github.com/alibaba/tengine (tag: tengine-2.1.2_f)

or download the tarball directly:

http://tengine.taobao.org/download/tengine-2.1.2.tar.gz

This release supports HTTP/2 and SPDY v3.1 simultaneously, and it will select the appropriate protocol for SSL connections. The full changelog is as follows:

*) Feature: ngx_http_reqstat_module now traces requests if they are redirected internally by 'rewrite' or 'error_page'. (cfsego)
*) Feature: ported HTTP/2 from nginx v1.9.7, with support for SPDY fallback. (PeterDaveHello, cfsego)
*) Feature: added the ngx_debug_pool module to check nginx memory. (chobits)
*) Feature: ported $upstream_cookie from nginx.
*) Bugfix: fixed merging of duplicate peers for ngx_http_dyups_module. (FqqCS, taoyuanyuan)
*) Bugfix: lua-upstream-nginx-module could not compile successfully. (cfsego)
*) Bugfix: fixed ngx_http_concat_module, which had no effect on javascript. (IYism)

See our website for more details: http://tengine.taobao.org

Have fun!
From nginx-forum at forum.nginx.org Mon Jan 4 13:34:15 2016
From: nginx-forum at forum.nginx.org (Keyur)
Date: Mon, 4 Jan 2016 08:34:15 -0500 (EST)
Subject: Debugging 504 Gateway Timeout and its actual cause and solution
Message-ID:

Hi,

We are running the following stack on our web server: Varnish + Nginx + FastCGI (php-fpm) on RHEL 6.6. It's a dynamic website with different result sets every time, and it has around 2 million URLs indexed with Google.

* It's running on nginx/1.5.12 and PHP 5.3.3 (will be upgraded to the latest nginx and PHP soon)
* Nginx connects to php-fpm running locally on the same server on port 9000

We are getting 504 Gateway Timeout intermittently on some pages, which we are unable to resolve. The URLs that give 504 work fine after some time. We learn about the 504s from our logs, and we haven't been able to replicate the problem, as it randomly happens on any URL and works again after some time. I have had a couple of discussions with the developer, but according to him the underlying PHP script hardly does anything and should not take this long (120 seconds), yet it is still giving 504 Gateway Timeout.

We need to establish where exactly the issue occurs:

* Is it a problem with Nginx?
* Is it a problem with php-fpm?
* Is it a problem with the underlying PHP scripts?
* Is it possible that nginx is not able to connect to php-fpm?
* Would it resolve if we used a Unix socket instead of a TCP/IP connection?
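[Editor's note: for reference, the Unix-socket variant from the last bullet would look roughly as below. This is a sketch only; the socket path is an assumption and must match php-fpm's `listen` setting.]

```nginx
# hypothetical: switch the PHP location from TCP to a Unix socket
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php-fpm.sock;  # instead of 127.0.0.1:9000
}
```

A Unix socket avoids the TCP SYN-retry behavior discussed below, but a saturated php-fpm pool will still time out either way.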
The URL times out after 120 seconds with a 504. Below is the error seen:

2016/01/04 17:29:20 [error] 1070#0: *196333149 upstream timed out (110: Connection timed out) while connecting to upstream, client: 66.249.74.95, server: x.x.x.x, request: "GET /Some/url HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "example.com"

Earlier, with a fastcgi_connect_timeout of 150 seconds, it used to give a 502 status code after 63 seconds with the default net.ipv4.tcp_syn_retries = 5 on RHEL 6.6; after we set net.ipv4.tcp_syn_retries = 6, it started giving a 502 after 127 seconds. Once I set fastcgi_connect_timeout = 120, it started giving a 504 status code. I understand that such a high fastcgi_connect_timeout is not good.

We need to find out why exactly we are getting 504s (I know it's a timeout, but the cause is unknown) and get to the root cause to fix it permanently. How do I confirm where exactly the issue is? If it's poorly written code, then I need to inform the developer that the 504 is happening due to an issue in the PHP code and not due to nginx or php-fpm; and if it's due to nginx or php-fpm, then I need to fix that.

Thanks in advance!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263842,263842#msg-263842

From nginx-forum at forum.nginx.org Mon Jan 4 13:53:13 2016
From: nginx-forum at forum.nginx.org (Keyur)
Date: Mon, 4 Jan 2016 08:53:13 -0500 (EST)
Subject: Debugging 504 Gateway Timeout and its actual cause and solution
In-Reply-To:
References:
Message-ID: <6f75498811aec2aa1e254364eeccf691.NginxMailingListEnglish@forum.nginx.org>

Here are some of the timeouts already defined.

In the server-wide nginx.conf:

keepalive_timeout 5;
send_timeout 150;

In the specific vhost.conf:

proxy_send_timeout 100;
proxy_read_timeout 100;
proxy_connect_timeout 100;
fastcgi_connect_timeout 120;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;

Different values are used for the timeouts so I could figure out which timeout was triggered.
Below are some of the settings from sysctl.conf:

net.ipv4.ip_local_port_range = 1024 65500
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_syn_retries = 6
net.core.netdev_max_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 2000000
net.core.somaxconn = 4096
net.ipv4.tcp_no_metrics_save = 1
vm.max_map_count = 256000

Thanks!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263842,263843#msg-263843

From r1ch+nginx at teamliquid.net Mon Jan 4 14:10:40 2016
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Mon, 4 Jan 2016 15:10:40 +0100
Subject: Debugging 504 Gateway Timeout and its actual cause and solution
In-Reply-To: <6f75498811aec2aa1e254364eeccf691.NginxMailingListEnglish@forum.nginx.org>
References: <6f75498811aec2aa1e254364eeccf691.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Have you checked the php-fpm logs? It seems like your backend is overloaded and not accepting connections fast enough.

From nginx-forum at forum.nginx.org Mon Jan 4 14:34:45 2016
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Mon, 4 Jan 2016 09:34:45 -0500 (EST)
Subject: Debugging 504 Gateway Timeout and its actual cause and solution
In-Reply-To:
References:
Message-ID: <5864f0143800f2b6b3755088b72e9c72.NginxMailingListEnglish@forum.nginx.org>

Keyur Wrote:
-------------------------------------------------------
> If its poorly written code then I need to inform the developer that
> 504 is happening due to issue in php code and not due to nginx or
> php-fpm and if its due to Nginx or Php-fpm then need to fix that.

How many PHP master/slave processes are running generally, and how many when you get the 5xx errors? It is best to define and use a pool (upstream) for php.
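[Editor's note: the "pool (upstream)" itpp2012 suggests is a named `upstream` block for the FastCGI backend; a sketch follows. The pool name and the commented-out second backend are hypothetical, not from this thread.]

```nginx
# hypothetical named pool for the php-fpm backend
upstream php_pool {
    server 127.0.0.1:9000;
    # additional php-fpm listeners could be added here so that a
    # stuck worker pool does not stall everything, e.g.:
    # server 127.0.0.1:9001 backup;
}
```

The vhost's PHP location would then use `fastcgi_pass php_pool;` instead of the hard-coded address, which also makes the backend easy to swap or extend later.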
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263842,263845#msg-263845

From peterchencs at gmail.com Tue Jan 5 05:30:32 2016
From: peterchencs at gmail.com (Peter Chen)
Date: Tue, 5 Jan 2016 00:30:32 -0500
Subject: Nginx Vulnerability on FreeBSD
Message-ID:

Hi,

I am doing a security research experiment on FreeBSD: I am trying to test the nginx vulnerability CVE-2013-2028 on FreeBSD 10.1 x86-64, with nginx 1.3.9/1.4.0 (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-2028). However, most exploit samples succeed on Linux, but not on FreeBSD.

The basic idea of the exploit is to send a packet with a very large chunk size, overflowing the victim process's stack. After many nginx crashes, the attacker can find enough gadgets to launch a return-oriented programming attack. However, it is hard to make the nginx worker process crash (via the overwritten return address) on FreeBSD, and a process crash is the first step of the whole exploit. I ran the experiment on both local and remote (LAN) machines.

This exploit requires:

-----------------------------------------------------------------
This also includes an IP fragmentation router to make the attack possible on WANs. Nginx does a non-blocking read on a 4096 byte buffer, and typical MTUs are 1500, so IP fragmentation is needed to deliver a large TCP segment that will result in a single read of over 4096 bytes.
------------------------------------------------------------------

Any comments/suggestions on this, just to make the victim process crash?

Here are two exploit code examples, which run against a Linux target but fail to make the nginx worker process crash on FreeBSD:

http://www.scs.stanford.edu/brop/
http://www.scs.stanford.edu/brop/nginx-1.4.0-exp.tgz
https://www.exploit-db.com/docs/27074.pdf
http://seclists.org/fulldisclosure/2013/Jul/att-90/ngxunlock_pl.bin

Thanks very much for your time!!

Best,
Peter
From nginx-forum at forum.nginx.org Wed Jan 6 05:55:47 2016
From: nginx-forum at forum.nginx.org (austevo)
Date: Wed, 6 Jan 2016 00:55:47 -0500 (EST)
Subject: How to setup Nginx as REALLY static-cache reverse proxy
In-Reply-To:
References:
Message-ID: <7d380e24605282ff9ae330a04833f211.NginxMailingListEnglish@forum.nginx.org>

I'm having the same issue with the cache being browser-dependent. I've tried setting up a crawl job using wget --recursive with Firefox and Chrome headers, but that doesn't seem to trigger server-side caching either.

If I browse the site using Firefox, then caching works for Firefox, and Firefox only. If I browse the site using Chrome, then caching works for Chrome, and Chrome only. I'd like the cache to be browser-agnostic.

Any ideas?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,255256,263868#msg-263868

From rpaprocki at fearnothingproductions.net Wed Jan 6 06:07:44 2016
From: rpaprocki at fearnothingproductions.net (Robert Paprocki)
Date: Tue, 5 Jan 2016 22:07:44 -0800
Subject: How to setup Nginx as REALLY static-cache reverse proxy
In-Reply-To: <7d380e24605282ff9ae330a04833f211.NginxMailingListEnglish@forum.nginx.org>
References: <7d380e24605282ff9ae330a04833f211.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Can you show us your config, debug logs, or any info that would help troubleshoot the issue? See https://www.nginx.com/resources/wiki/start/topics/tutorials/debugging/ for help on setting up debug logging.

On Tue, Jan 5, 2016 at 9:55 PM, austevo wrote:
> I'm having the same issue with cache being browser dependent. I've tried
> setting up a crawl job using wget --recursive with Firefox and Chrome
> headers, but that doesn't seem to trigger server-side caching either.
>
> If I browse the site using Firefox, then caching works for Firefox, and
> Firefox only.
>
> If I browse the site using Chrome, then caching works for Chrome, and
> Chrome only.
>
> I'd like cache to be browser agnostic.
>
> Any ideas?
> > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,255256,263868#msg-263868 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jan 6 07:56:43 2016 From: nginx-forum at forum.nginx.org (Keyur) Date: Wed, 6 Jan 2016 02:56:43 -0500 (EST) Subject: Debugging 504 Gateway Timeout and its actual cause and solution In-Reply-To: References: Message-ID: <9303eeffa333a7c5067841e8c9a2cba7.NginxMailingListEnglish@forum.nginx.org> Thanks Richard & itpp2015 for your response. Further update : There are 2 cases : 1. 504 @ 120 seconds coming with below mentioned error : 2016/01/05 03:50:54 [error] 1070#0: *201650845 upstream timed out (110: Connection timed out) while connecting to upstream, client: 66.249.74.99, server: x.x.x.x, request: "GET /some/url HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "example.com" 2. 504 @ 300 seconds coming with below mentioned error : 2016/01/05 00:51:43 [error] 1067#0: *200656359 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 115.112.161.9, server: 192.168.12.101, request: "GET /some/url HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "example.com" * No errors found in php-fpm logs. * Number of php-fpm processes were also normal. Backend doesn't look overloaded as other requests were served out fine at the same time. * Only one php-fpm pool is being used. One php-fpm master (parent) process and other slave (child) processes are usually at normal range only when 5xx are observed. There is no significant growth in number of php-fpm processes and even if grows then server has enough capacity to fork new ones and serve the request. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263842,263870#msg-263870 From r at roze.lv Wed Jan 6 12:49:33 2016 From: r at roze.lv (Reinis Rozitis) Date: Wed, 6 Jan 2016 14:49:33 +0200 Subject: How to setup Nginx as REALLY static-cache reverse proxy In-Reply-To: <7d380e24605282ff9ae330a04833f211.NginxMailingListEnglish@forum.nginx.org> References: <7d380e24605282ff9ae330a04833f211.NginxMailingListEnglish@forum.nginx.org> Message-ID: <549332A9F8D44FB6A4E33DE99A93E35A@NeiRoze> > I'm having the same issue with cache being browser dependent. I've tried > setting up a crawl job using wget --recursive with Firefox and Chrome > headers, but that doesn't seem to trigger server-side caching either. > > > I'd like cache to be browser agnostic. > > Any ideas? It is hard to identify problems without getting a look at the configuration but for the sake of general ideas: For one particular case of static file caching we use nginx's proxy_store feature ( http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_store ) - the files are literally stored as is (in the same tree structure/naming) on the cache server. The configuration is way simple: proxy_ignore_client_abort on; # optional upstream imgstore { server 10.10.10.10; } server { root /path; error_page 404 = @store; location @store { internal; proxy_pass http://imgstore; proxy_store on; } } Obviously there are some drawbacks (or advantages in some cases) in caching this way - the expire headers (or any other headers for that matter) from the backend (or the client) have no effect on the cache server, the cache management (purging items / used space) is fully your responsibility, 404 (or any other non-200) responses are not cached (which might or might not be a problem).
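One more detail worth noting for anyone copying this pattern (the paths below are only placeholders): proxy_store writes the response to a temporary file first and then moves it into place, so the temporary directory should live on the same filesystem as the stored tree, otherwise nginx has to copy the file instead of renaming it. Something like:

    location @store {
        internal;
        proxy_pass http://imgstore;
        proxy_store on;
        proxy_store_access user:rw group:rw all:r;  # user:rw is the default
        proxy_temp_path /path/tmp;  # same filesystem as "root /path"
    }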
rr From nginx-forum at forum.nginx.org Wed Jan 6 14:11:54 2016 From: nginx-forum at forum.nginx.org (tammini) Date: Wed, 6 Jan 2016 09:11:54 -0500 (EST) Subject: logging Message-ID: <1c0109ebb1fa50380e87c697cd21bb55.NginxMailingListEnglish@forum.nginx.org> Is it possible to log websocket requests in the nginx access log? Or is this logging restricted only to HTTP requests? tammini Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263878,263878#msg-263878 From amir.kaivani at gmail.com Wed Jan 6 23:52:20 2016 From: amir.kaivani at gmail.com (Amir Kaivani) Date: Wed, 6 Jan 2016 17:52:20 -0600 Subject: Reverse proxy, proxy_pass Message-ID: Hi there, Here is a part of my nginx config file: location /test/ { proxy_pass http://localhost:1024/; } If I have it as above the GET /test/xxxx request will be sent to port 1024 as /xxxx and it also decodes the URI. I know that if I remove / from the end of proxy_pass then it'll send the URI without decoding special characters (such as %2F). But in this case it sends /test/xxxx to port 1024. So, my question is how I can get nginx to remove /test/ from the URI but NOT decode special characters (such as %2F)? Thank you for your help. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Jan 7 22:30:45 2016 From: nginx-forum at forum.nginx.org (djeyewater) Date: Thu, 7 Jan 2016 17:30:45 -0500 (EST) Subject: Limit reqs per user / bot Message-ID: <3cc8e7b7c9d8bd7383cf525c17b6226c.NginxMailingListEnglish@forum.nginx.org> I want to limit requests to 1 per second for each user, counting a bot that makes requests from multiple IPs as a single user. Does this make sense: map $http_user_agent $single_user { default $binary_remote_addr; ~PaperLiBot 1; } limit_req_zone $single_user zone=one:10m rate=1r/s; ...
limit_req zone=one burst=2; Thanks Dave Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263891,263891#msg-263891 From nginx-forum at forum.nginx.org Fri Jan 8 01:03:02 2016 From: nginx-forum at forum.nginx.org (austevo) Date: Thu, 7 Jan 2016 20:03:02 -0500 (EST) Subject: How to setup Nginx as REALLY static-cache reverse proxy In-Reply-To: <549332A9F8D44FB6A4E33DE99A93E35A@NeiRoze> References: <549332A9F8D44FB6A4E33DE99A93E35A@NeiRoze> Message-ID: Thanks for the responses guys. I've tried proxy_store on one config, but now I'm just receiving time-outs when I block the origin server. No stale cache on error at all. Here are two separate configs I'm using. The first one is as described earlier, with caching and stale cache errors working, although cache is browser dependent. proxy_cache_path /etc/nginx/cache/abc123.org levels=1:2 keys_zone=abc123:64m inactive=10d max_size=1000m; server { listen 443 ssl; server_name abc123.org; access_log /var/log/nginx/abc123.org.access.log; error_log /var/log/nginx/abc123.org.error.log; include ssl.conf; #moved from location proxy_cache_key abc123$request_uri; location / { proxy_cache abc123; #proxy_cache_key abc123$request_uri; add_header X-Proxy-Cache $upstream_cache_status; proxy_pass https://abc123.org; proxy_cache_valid 200 720m; proxy_cache_valid 301 304 302 720m; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504 http_404; expires max; add_header Cache-control "public"; proxy_connect_timeout 3s; proxy_read_timeout 3s; proxy_send_timeout 3s; proxy_cache_revalidate on; proxy_cache_min_uses 1; } } server { listen 80; server_name abc123.org; return 301 https://$server_name$request_uri; } Config with proxy_store proxy_cache_path /etc/nginx/cache/zxf123.org levels=1:2 keys_zone=zxf123:64m inactive=10d max_size=1000m; server { listen 443 ssl; server_name zxf123.org; access_log /var/log/nginx/zxf123.org.access.log; error_log /var/log/nginx/zxf123.org.error.log; include ssl.conf; 
#moved from location proxy_cache_key $host$request_uri; location / { #proxy_cache zxf123; proxy_store on; #proxy_cache_key $host$request_uri; add_header X-Proxy-Cache $upstream_cache_status; proxy_pass https://zxf123.org; proxy_cache_valid 200 720m; proxy_cache_valid 301 304 302 720m; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504 http_404; expires max; add_header Cache-control "public"; proxy_connect_timeout 3s; proxy_read_timeout 3s; proxy_send_timeout 3s; proxy_cache_revalidate on; proxy_cache_min_uses 1; } } server { listen 80; server_name zxf123.org; return 301 https://zxf123.org$request_uri; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,255256,263893#msg-263893 From nginx-forum at forum.nginx.org Fri Jan 8 06:39:19 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Fri, 8 Jan 2016 01:39:19 -0500 (EST) Subject: Limit reqs per user / bot In-Reply-To: <3cc8e7b7c9d8bd7383cf525c17b6226c.NginxMailingListEnglish@forum.nginx.org> References: <3cc8e7b7c9d8bd7383cf525c17b6226c.NginxMailingListEnglish@forum.nginx.org> Message-ID: No because one user (web browser) can easily open 20 or more simultaneous connections to get a better web response. A bot might be less prone to do the same but most connect at about 5 simultaneous connections. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263891,263894#msg-263894 From nginx-forum at forum.nginx.org Fri Jan 8 08:33:56 2016 From: nginx-forum at forum.nginx.org (atsushi2550) Date: Fri, 8 Jan 2016 03:33:56 -0500 (EST) Subject: Client Authentication Problem when access from android phone Message-ID: Hi there, I'm trying to set up a reverse proxy server with client authentication. --- Environment --- My CA is 2 tier. Root CA - intermediate CA - Client Certificate. --- Problem Description --- When I accessed the proxy server from a laptop PC, only the correct client certificate was suggested, and authentication succeeded.
But when I accessed the proxy server from an Android phone, ALL installed client certificates were suggested, and if I chose the *wrong client certificate, authentication still succeeded. *wrong client certificate : a certificate whose Root CA is the same but whose intermediate CA is different. My nginx configuration is as follows. ------------------------ ssl on; ssl_certificate cert/servercert; ssl_certificate_key cert/serverkey; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES"; ssl_prefer_server_ciphers on; ssl_verify_client on; ssl_verify_depth 2; ssl_client_certificate cert/intermediate.cert; ssl_trusted_certificate cert/intermediate_and_root.cert; --- END Best Regards, atsushi Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263895,263895#msg-263895 From mdounin at mdounin.ru Fri Jan 8 16:34:23 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 8 Jan 2016 19:34:23 +0300 Subject: Debugging 504 Gateway Timeout and its actual cause and solution In-Reply-To: <9303eeffa333a7c5067841e8c9a2cba7.NginxMailingListEnglish@forum.nginx.org> References: <9303eeffa333a7c5067841e8c9a2cba7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160108163423.GI74233@mdounin.ru> Hello! On Wed, Jan 06, 2016 at 02:56:43AM -0500, Keyur wrote: > Thanks Richard & itpp2015 for your response. > > Further update : > > There are 2 cases : > > 1. 504 @ 120 seconds coming with below mentioned error : > > 2016/01/05 03:50:54 [error] 1070#0: *201650845 upstream timed out (110: > Connection timed out) while connecting to upstream, client: 66.249.74.99, > server: x.x.x.x, request: "GET /some/url HTTP/1.1", upstream: > "fastcgi://127.0.0.1:9000", host: "example.com" This means that nginx failed to connect to your backend server in time. This can happen in two basic cases: - network problems (unlikely for localhost though); e.g., this can happen if you have a stateful firewall configured between nginx and the backend, and there aren't enough states.
- backend is overloaded and doesn't accept connections fast enough; The latter is more likely, and usually happens when using Linux. Try watching your backend listen socket queue (something like "ss -nlt" should work on Linux) and/or try switching on net.ipv4.tcp_abort_on_overflow sysctl to see if it's the case. > 2. 504 @ 300 seconds coming with below mentioned error : > > 2016/01/05 00:51:43 [error] 1067#0: *200656359 upstream timed out (110: > Connection timed out) while reading response header from upstream, client: > 115.112.161.9, server: 192.168.12.101, request: "GET /some/url HTTP/1.1", > upstream: "fastcgi://127.0.0.1:9000", host: "example.com" The message suggests the backend failed to respond in time to a particular request. Depending on the request this may be either some generic problem (i.e., the backend is overloaded) or a problem with handling of the particular request. Try debugging what happens on the backend. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Jan 8 17:05:18 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 8 Jan 2016 20:05:18 +0300 Subject: Client Authentication Problem when access from android phone In-Reply-To: References: Message-ID: <20160108170518.GJ74233@mdounin.ru> Hello! On Fri, Jan 08, 2016 at 03:33:56AM -0500, atsushi2550 wrote: > Hi there, > > I'm trying to set up reverse proxy server with client authentication. > > --- Environment --- > My CA is 2 tier. > Root CA - intermediate CA - Client Certificate. > > --- Problem Discripton --- > When I accessed proxy server from laptop pc, > only the correct client certificate was suggested, > and authenticate successfully. > > But when I accessed proxy server from android phone, > ALL installed client certificate was suggested, > and if I choose *wrong client certificate authenticate successfully. 
> > *wrong client certificate : certificate that Root CA is same but > intermediate CA is different, It's not possible to limit client authentication to only allow certs issued by an intermediate CA. All certificates which can be verified up to the trusted root CA are allowed. If you need to additionally limit access to only allow certain certs, you can do so based on variables provided by the SSL module, see here: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#variables Something like if ($ssl_client_i_dn != "...") { return 403; } should be appropriate in your case. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Jan 8 17:27:07 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 8 Jan 2016 20:27:07 +0300 Subject: logging In-Reply-To: <1c0109ebb1fa50380e87c697cd21bb55.NginxMailingListEnglish@forum.nginx.org> References: <1c0109ebb1fa50380e87c697cd21bb55.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160108172707.GK74233@mdounin.ru> Hello! On Wed, Jan 06, 2016 at 09:11:54AM -0500, tammini wrote: > Is it possible to log websocket requests in nginx access log ? Or is this > logging restricted only to http requests ? WebSocket requests are no different from other HTTP requests - they just establish a WebSocket connection using the Upgrade HTTP mechanism. They are logged to access logs much like other HTTP requests once the WebSocket connection is closed. Note though that it's not possible to log what happens inside a WebSocket connection. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Jan 8 18:02:46 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 8 Jan 2016 21:02:46 +0300 Subject: Reverse proxy, proxy_pass In-Reply-To: References: Message-ID: <20160108180246.GM74233@mdounin.ru> Hello! 
On Wed, Jan 06, 2016 at 05:52:20PM -0600, Amir Kaivani wrote: > Hi there, > > Here is a part of my nginx config file: > > location /test/ { > proxy_pass http://localhost:1024/; > } > > If I have it as above the GET /test/xxxx request will be sent to port 1024 > as /xxxx and it also decodes the URI. > > I know that if I remove / from the end of proxy_pass then it'll send the > URI without decoding special characters (such as %2F). But in this case it > sends /test/xxxx to port 1024. > > So, my question is how I can get nginx to remove /test/ from the URI but > does NOT decode special characters (such as %2F)? How do you expect nginx to remove "/test/" without decoding special characters? E.g., what should happen if the request is to "/test%2F", "/t%65st/", or "/test//"? As long as you have an answer, you can try constructing the appropriate URI for the upstream request yourself by using proxy_pass with variables. When proxy_pass is used with variables, nginx won't try to do anything with the URI specified and will pass it as is. E.g., assuming all requests start with "/test/" and there are no escaping problems: location /test/ { set $changed_uri "/"; if ($request_uri ~ "^/test(/.*)") { set $changed_uri $1; } proxy_pass http://localhost:1024$changed_uri; } Note though, that such an approach is likely to cause problems unless used with care. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Fri Jan 8 19:52:27 2016 From: nginx-forum at forum.nginx.org (djeyewater) Date: Fri, 8 Jan 2016 14:52:27 -0500 (EST) Subject: Limit reqs per user / bot In-Reply-To: References: <3cc8e7b7c9d8bd7383cf525c17b6226c.NginxMailingListEnglish@forum.nginx.org> Message-ID: itpp2012 Wrote: ------------------------------------------------------- > No because one user (web browser) can easily open 20 or more > simultaneous connections to get a better web response. > A bot might be less prone to do the same but most connect at about 5 > simultaneous connections.
The limit_req will only be used for requests to dynamic pages, so there should only be one connection per user at a time. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263891,263907#msg-263907 From reallfqq-nginx at yahoo.fr Fri Jan 8 20:03:50 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 8 Jan 2016 21:03:50 +0100 Subject: Limit reqs per user / bot In-Reply-To: References: <3cc8e7b7c9d8bd7383cf525c17b6226c.NginxMailingListEnglish@forum.nginx.org> Message-ID: You should use limit_conn in conjunction with limit_req . They are supplementing each other. --- *B. R.* On Fri, Jan 8, 2016 at 8:52 PM, djeyewater wrote: > itpp2012 Wrote: > ------------------------------------------------------- > > No because one user (web browser) can easily open 20 or more > > simultaneous connections to get a better web response. > > A bot might be less prone to do the same but most connect at about 5 > > simultaneous connections. > > > The limit_req will only be used for requests to dynamic pages, so there > should only be one connection per user at a time. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,263891,263907#msg-263907 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Jan 9 13:44:11 2016 From: nginx-forum at forum.nginx.org (djeyewater) Date: Sat, 9 Jan 2016 08:44:11 -0500 (EST) Subject: Limit reqs per user / bot In-Reply-To: References: Message-ID: B.R. Wrote: ------------------------------------------------------- > You should use limit_conn > onn> > in conjunction with limit_req > q>. > They are supplementing each other. > --- > *B. 
R.* > > On Fri, Jan 8, 2016 at 8:52 PM, djeyewater > > wrote: > > > itpp2012 Wrote: > > ------------------------------------------------------- > > > No because one user (web browser) can easily open 20 or more > > > simultaneous connections to get a better web response. > > > A bot might be less prone to do the same but most connect at about > 5 > > > simultaneous connections. > > > > > > The limit_req will only be used for requests to dynamic pages, so > there > > should only be one connection per user at a time. But using my example config, which only allows 1 request per second per user, then wouldn't limit_conn be superfluous? You can't have more than one connection for a single request, surely? I'll paste the config again here as it got missed off the previous quote: map $http_user_agent $single_user { default $binary_remote_addr; ~PaperLiBot 1; } limit_req_zone $single_user zone=one:10m rate=1r/s; ... limit_req zone=one burst=2; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263891,263913#msg-263913 From l at ymx.ch Sun Jan 10 13:39:34 2016 From: l at ymx.ch (Lukas) Date: Sun, 10 Jan 2016 14:39:34 +0100 Subject: nginx/1.9.9 with modsecurity/2.9.0 crashes with segfault and worker process exited on signal 11 Message-ID: <20160110133934.GA65438@lpr.ch> Dear all Fascinated by nginx, I attempted to integrate it with modsecurity. Unfortunately, whenever modsecurity is enabled, nginx reports a segfault in sysmessages. Searching the web did not reveal any solution, i.e. I switched off SecAudit* and even started modsecurity without rules -- it continued crashing. Thank you for any hint on solving this issue. Please find below the information related to my setup, including some logs.
wbr, Lukas == My current setup: Platform: Linux/4.3.3 running on Debian/wheezy nginx: self-compiled from sources according to https://blog.stickleback.dk/nginx-modsec-on-ubuntu-14-04-lts/ modsecurity: installed and configured according to https://www.howtoforge.com/tutorial/install-nginx-with-mod_security-on-ubuntu-15-04/ Relevant Logs: $ /usr/local/nginx/sbin/nginx -V nginx version: nginx/1.9.9 built by gcc 4.7.2 (Debian 4.7.2-5) built with OpenSSL 1.0.1e 11 Feb 2013 TLS SNI support enabled configure arguments: --user=www-data --group=www-data --with-pcre-jit --with-ipv6 --with-http_ssl_module --add-module=../modsecurity-2.9.0/nginx/modsecurity --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log $ tail error.log 2016/01/10 13:13:34 [notice] 10256#0: ModSecurity: LIBXML compiled version="2.8.0" 2016/01/10 13:13:34 [notice] 10256#0: ModSecurity: Status engine is currently disabled, enable it by set SecStatusEngine to On. 2016/01/10 13:13:35 [notice] 10260#0: ModSecurity for nginx (STABLE)/2.9.0 (http://www.modsecurity.org/) configured. 2016/01/10 13:13:35 [notice] 10260#0: ModSecurity: APR compiled version="1.4.6"; loaded version="1.4.6" 2016/01/10 13:13:35 [notice] 10260#0: ModSecurity: PCRE compiled version="8.30 "; loaded version="8.30 2012-02-04" 2016/01/10 13:13:35 [notice] 10260#0: ModSecurity: LIBXML compiled version="2.8.0" 2016/01/10 13:13:35 [notice] 10260#0: ModSecurity: Status engine is currently disabled, enable it by set SecStatusEngine to On. 
2016/01/10 13:13:38 [alert] 10261#0: worker process 10267 exited on signal 11 2016/01/10 13:13:38 [alert] 10261#0: worker process 10264 exited on signal 11 2016/01/10 13:13:38 [alert] 10261#0: worker process 10265 exited on signal 11 $ dmesg [605432.202671] nginx[10267]: segfault at 70 ip 08093ba1 sp bfc9a7c0 error 4 in nginx[8048000+123000] [605432.385414] nginx[10264]: segfault at 70 ip 08093ba1 sp bfc9a7c0 error 4 in nginx[8048000+123000] [605432.409089] nginx[10265]: segfault at 70 ip 08093ba1 sp bfc9a7c0 error 4 in nginx[8048000+123000] -- Lukas Ruf | Ad Personam Consecom | Ad Laborem From rainer at ultra-secure.de Sun Jan 10 13:46:22 2016 From: rainer at ultra-secure.de (Rainer Duffner) Date: Sun, 10 Jan 2016 14:46:22 +0100 Subject: nginx/1.9.9 with modsecurity/2.9.0 crashes with segfault and worker process exited on signal 11 In-Reply-To: <20160110133934.GA65438@lpr.ch> References: <20160110133934.GA65438@lpr.ch> Message-ID: <88222F62-D2FB-4276-A83F-8BE8493AD870@ultra-secure.de> > Am 10.01.2016 um 14:39 schrieb Lukas : > > Dear all > > Fascinated by nginx, I attempted to integrate it with modsecurity. > > Unfortunately, ever when modsecurity is enabled, nginx reports a > sefault in sysmessages. > > Searching the web did not reveal any solution, i.e. I switched off > SecAudit* and even started modsecurity without rules -- it continued > crashing. > > Thank you for any hint on solving this issue. > > Please find next information related to my setup including some logs. By chance, I tried to get this to work just yesterday and also got only SIGSEGV from it. (nginx 1.8, FreeBSD 10.1-amd64, ap22-mod_security-2.9.0, all from my own repository) I found this: https://github.com/SpiderLabs/ModSecurity/issues/839 So, you need to set proxy_force_ranges on; in the location you want to protect with mod_security. It didn't segfault any more after this - but I haven't had time to check how well it actually works.
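In case it helps anyone searching the archives later, here is an untested sketch of where that directive goes, alongside the ModSecurity connector directives (the config path and backend address are just examples):

    location / {
        ModSecurityEnabled on;
        ModSecurityConfig /etc/nginx/modsecurity.conf;
        proxy_force_ranges on;
        proxy_pass http://127.0.0.1:8080;
    }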
Rainer From felipe at zimmerle.org Sun Jan 10 13:49:10 2016 From: felipe at zimmerle.org (Felipe Zimmerle) Date: Sun, 10 Jan 2016 13:49:10 +0000 Subject: nginx/1.9.9 with modsecurity/2.9.0 crashes with segfault and worker process exited on signal 11 In-Reply-To: <20160110133934.GA65438@lpr.ch> References: <20160110133934.GA65438@lpr.ch> Message-ID: Hi Lukas, You may want to use the ModSecurity's nginx_refactoring branch instead of the master branch. Here is the link to the branch: https://github.com/SpiderLabs/ModSecurity/tree/nginx_refactoring Br., Felipe Zimmerle Lead dev for ModSecurity On Sun, Jan 10, 2016 at 10:39 AM Lukas wrote: > Dear all > > Fascinated by nginx, I attempted to integrate it with modsecurity. > > Unfortunately, ever when modsecurity is enabled, nginx reports a > sefault in sysmessages. > > Searching the web did not reveal any solution, i.e. I switched off > SecAudit* and even started modsecurity without rules -- it continued > crashing. > > Thank you for any hint on solving this issue. > > Please find next information related to my setup including some logs. 
> > wbr, Lukas > > == > > My current setup: > > Platform: Linux/4.3.3 running on Debian/wheezy > > nginx: self-compiled from sources according to > https://blog.stickleback.dk/nginx-modsec-on-ubuntu-14-04-lts/ > > modsecurity: installed and configured according to > > https://www.howtoforge.com/tutorial/install-nginx-with-mod_security-on-ubuntu-15-04/ > > Relevant Logs: > > $ /usr/local/nginx/sbin/nginx -V > nginx version: nginx/1.9.9 > built by gcc 4.7.2 (Debian 4.7.2-5) > built with OpenSSL 1.0.1e 11 Feb 2013 > TLS SNI support enabled > configure arguments: --user=www-data --group=www-data --with-pcre-jit > --with-ipv6 --with-http_ssl_module > --add-module=../modsecurity-2.9.0/nginx/modsecurity > --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log > > $ tail error.log > 2016/01/10 13:13:34 [notice] 10256#0: ModSecurity: LIBXML compiled > version="2.8.0" > 2016/01/10 13:13:34 [notice] 10256#0: ModSecurity: Status engine is > currently disabled, enable it by set SecStatusEngine to On. > 2016/01/10 13:13:35 [notice] 10260#0: ModSecurity for nginx > (STABLE)/2.9.0 (http://www.modsecurity.org/) configured. > 2016/01/10 13:13:35 [notice] 10260#0: ModSecurity: APR compiled > version="1.4.6"; loaded version="1.4.6" > 2016/01/10 13:13:35 [notice] 10260#0: ModSecurity: PCRE compiled > version="8.30 "; loaded version="8.30 2012-02-04" > 2016/01/10 13:13:35 [notice] 10260#0: ModSecurity: LIBXML compiled > version="2.8.0" > 2016/01/10 13:13:35 [notice] 10260#0: ModSecurity: Status engine is > currently disabled, enable it by set SecStatusEngine to On. 
> 2016/01/10 13:13:38 [alert] 10261#0: worker process 10267 exited on signal > 11 > 2016/01/10 13:13:38 [alert] 10261#0: worker process 10264 exited on signal > 11 > 2016/01/10 13:13:38 [alert] 10261#0: worker process 10265 exited on signal > 11 > > $ dmesg > [605432.202671] nginx[10267]: segfault at 70 ip 08093ba1 sp bfc9a7c0 error > 4 in nginx[8048000+123000] > [605432.385414] nginx[10264]: segfault at 70 ip 08093ba1 sp bfc9a7c0 error > 4 in nginx[8048000+123000] > [605432.409089] nginx[10265]: segfault at 70 ip 08093ba1 sp bfc9a7c0 error > 4 in nginx[8048000+123000] > > -- > Lukas Ruf | Ad Personam > Consecom | Ad Laborem > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From l at ymx.ch Sun Jan 10 14:03:46 2016 From: l at ymx.ch (Lukas) Date: Sun, 10 Jan 2016 15:03:46 +0100 Subject: nginx/1.9.9 with modsecurity/2.9.0 crashes with segfault and worker process exited on signal 11 In-Reply-To: <88222F62-D2FB-4276-A83F-8BE8493AD870@ultra-secure.de> References: <20160110133934.GA65438@lpr.ch> <88222F62-D2FB-4276-A83F-8BE8493AD870@ultra-secure.de> Message-ID: <20160110140346.GA1474@lpr.ch> > Rainer Duffner [2016-01-10 14:46]: > > > > Am 10.01.2016 um 14:39 schrieb Lukas : > > > > Unfortunately, ever when modsecurity is enabled, nginx reports a > > sefault in sysmessages. > > > > Searching the web did not reveal any solution, i.e. I switched off > > SecAudit* and even started modsecurity without rules -- it continued > > crashing. > > > > > By chance, I tried to get this to work just yesterday and also got only SIGSEGV from it. > (nginx 1.8, FreeBSD 10.1-amd64, ap22-mod_security-2.9.0, all from my own repository) > > > I found this: > > https://github.com/SpiderLabs/ModSecurity/issues/839 > > So, you need to set > > proxy_force_ranges on; > > in the location you want to protect with mod_security. 
> > It didn't segfault any more after this - but I haven't had time to check how well it actually works. > Thanks for your hint, Rainer. Now it has not crashed anymore..... wbr Lukas From l at ymx.ch Sun Jan 10 14:05:31 2016 From: l at ymx.ch (Lukas) Date: Sun, 10 Jan 2016 15:05:31 +0100 Subject: nginx/1.9.9 with modsecurity/2.9.0 crashes with segfault and worker process exited on signal 11 In-Reply-To: References: <20160110133934.GA65438@lpr.ch> Message-ID: <20160110140531.GB1474@lpr.ch> Hi Felipe > Felipe Zimmerle [2016-01-10 14:49]: > > You may want to use the ModSecurity's nginx_refactoring branch instead of > the master branch. Here is the link to the branch: > > https://github.com/SpiderLabs/ModSecurity/tree/nginx_refactoring > Thanks for your hint. I found that recommendation. Since I also read that it would not be fully compatible with OWASP/CRS, I have not given it a try. What is the situation regarding OWASP/CRS? wbr Lukas From reallfqq-nginx at yahoo.fr Sun Jan 10 14:32:29 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 10 Jan 2016 15:32:29 +0100 Subject: Limit reqs per user / bot In-Reply-To: References: Message-ID: As was said before, a client might create multiple connections. nginx works per request on a connection. Several requests in parallel from different TCP connections (for the HTTP module) are not the same as several successive requests on the same connection. Limiting the number of requests applies to every connection in parallel, so the total requests rate per client is nbConn * nbReq / timeUnit. limit_conn and limit_req work together in this formula. Do not assume things that are not said. I personally did exactly that on numerous occasions. :o) --- *B. R.* On Sat, Jan 9, 2016 at 2:44 PM, djeyewater wrote: > B.R. Wrote: > ------------------------------------------------------- > > You should use limit_conn > > in conjunction with limit_req. > > They are supplementing each other. > > --- > > *B.
R.* > > > > On Fri, Jan 8, 2016 at 8:52 PM, djeyewater > > > > wrote: > > > > > itpp2012 Wrote: > > > ------------------------------------------------------- > > > > No because one user (web browser) can easily open 20 or more > > > > simultaneous connections to get a better web response. > > > > A bot might be less prone to do the same but most connect at about > > 5 > > > > simultaneous connections. > > > > > > > > > The limit_req will only be used for requests to dynamic pages, so > > there > > > should only be one connection per user at a time. > > But using my example config, which only allows 1 request per second per > user, then wouldn't limit_conn be superfluous? You can't have more than one > connection for a single request, surely? > I'll paste the config again here as it got missed off the previous quote: > > map $http_user_agent $single_user { > default $binary_remote_addr; > ~PaperLiBot 1; > } > > limit_req_zone $single_user zone=one:10m rate=1r/s; > > ... > > limit_req zone=one burst=2; > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,263891,263913#msg-263913 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Jan 11 05:51:06 2016 From: nginx-forum at forum.nginx.org (Keyur) Date: Mon, 11 Jan 2016 00:51:06 -0500 Subject: Debugging 504 Gateway Timeout and its actual cause and solution In-Reply-To: <20160108163423.GI74233@mdounin.ru> References: <20160108163423.GI74233@mdounin.ru> Message-ID: <697fa159000d4e1cd2196f0c8559755a.NginxMailingListEnglish@forum.nginx.org> Thanks Maxim for the details. Will monitor / debug as per your suggestion as let you know. 
Regards, Keyur Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263842,263927#msg-263927 From felipe at zimmerle.org Mon Jan 11 16:12:00 2016 From: felipe at zimmerle.org (Felipe Zimmerle) Date: Mon, 11 Jan 2016 16:12:00 +0000 Subject: nginx/1.9.9 with modsecurity/2.9.0 crashes with segfault and worker process exited on signal 11 In-Reply-To: <20160110140531.GB1474@lpr.ch> References: <20160110133934.GA65438@lpr.ch> <20160110140531.GB1474@lpr.ch> Message-ID: Hi Lukas, On Sun, Jan 10, 2016 at 11:05 AM Lukas wrote: > I found that recommendation. Since I also read that it would not be > fully compatible with OWASP/CRS I have not given it a try. > > What is the situation regarding OWASP/CRS? > Currently there are three different versions of ModSecurity for nginx: - Version 2.9.0: That is the latest released version; I think that is the one you are using. - nginx_refactoring: That version contains some fixes on top of v2.9.0, but those fixes may lead to instabilities depending on your configuration. - ModSecurity-connector: That is something that is still under development and we have some work to do, to be exact: https://github.com/SpiderLabs/ModSecurity/labels/libmodsec%20-%20missing%20documentation https://github.com/SpiderLabs/ModSecurity/labels/libmodsec%20-%20missing%20features https://github.com/SpiderLabs/ModSecurity/labels/libmodsec%20-%20missing%20operators https://github.com/SpiderLabs/ModSecurity/labels/libmodsec%20-%20missing%20transformation https://github.com/SpiderLabs/ModSecurity/labels/libmodsec%20-%20missing%20variables Only use the ModSecurity-connector if you understand the ModSecurity rules well, along with the consequences of the missing pieces. Further information about libModSecurity can be found here: http://blog.zimmerle.org/2016/01/an-overview-of-upcoming-libmodsecurity.html or: https://www.trustwave.com/Resources/SpiderLabs-Blog/An-Overview-of-the-Upcoming-libModSecurity/ Br., Felipe.
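[Editorial note: for readers following this thread, a minimal sketch of how the 2.9.x standalone module is typically switched on inside nginx, assuming nginx was already built with the module compiled in; the file paths are hypothetical, and directive names are as documented for the ModSecurity 2.x nginx module:]

```nginx
# Sketch only: assumes nginx was configured with
# --add-module pointing at the standalone ModSecurity 2.9.x module.
server {
    listen 80;

    location / {
        ModSecurityEnabled on;
        ModSecurityConfig  /etc/nginx/modsecurity.conf;  # hypothetical path
        proxy_pass http://127.0.0.1:8080;                # hypothetical backend
    }
}
```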
-------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Jan 11 19:13:49 2016 From: nginx-forum at forum.nginx.org (piyushmalhotra) Date: Mon, 11 Jan 2016 14:13:49 -0500 Subject: Intermittent SSL Handshake Errors In-Reply-To: <20150715160927.GC93501@mdounin.ru> References: <20150715160927.GC93501@mdounin.ru> Message-ID: <0d7f93139d9e46fda9cf847c22f27a1f.NginxMailingListEnglish@forum.nginx.org> I am facing the same issue on my Debian 7 server. I downgraded to the 1.0.1e-2+deb7u12 version of libssl1.0.0 and restarted nginx, but the issue is still occurring for me. I can still see the same logs. I also tried following these instructions (installed the deb packages made by these instructions) but even these didn't help: apt-get update ; apt-get source libssl1.0.0 > cd openssl-1.0.1e > dquilt pop Support-TLS_FALLBACK_SCSV > dquilt delete Support-TLS_FALLBACK_SCSV > dpkg-source --commit > dpkg-buildpackage Can anybody suggest what I can do to fix this? After trying both methods, I restarted nginx by using sudo service nginx restart But I can still see the same error logs. How can I be sure that nginx is picking up the updated libssl and openssl binaries? Is there any way to check if the SSL library which NGINX is using has support for TLS_FALLBACK_SCSV? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,256373,263933#msg-263933 From nginx-forum at forum.nginx.org Tue Jan 12 00:59:11 2016 From: nginx-forum at forum.nginx.org (atsushi2550) Date: Mon, 11 Jan 2016 19:59:11 -0500 Subject: Client Authentication Problem when access from android phone In-Reply-To: <20160108170518.GJ74233@mdounin.ru> References: <20160108170518.GJ74233@mdounin.ru> Message-ID: <55bede92980b39c44291dadf3ce604a0.NginxMailingListEnglish@forum.nginx.org> Dear Maxim Dounin Hello! Thank you for the quick response. I understand your answer.
Add if ($ssl_client_i_dn != "...") { return 403; } and I can limit access to certificates issued by that intermediate CA. Regards, Atsushi Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263895,263936#msg-263936 From nginx-forum at forum.nginx.org Tue Jan 12 04:21:35 2016 From: nginx-forum at forum.nginx.org (tammini) Date: Mon, 11 Jan 2016 23:21:35 -0500 Subject: logging In-Reply-To: <1c0109ebb1fa50380e87c697cd21bb55.NginxMailingListEnglish@forum.nginx.org> References: <1c0109ebb1fa50380e87c697cd21bb55.NginxMailingListEnglish@forum.nginx.org> Message-ID: Does that mean once a websocket connection is opened successfully, any subsequent requests sent on that connection cannot be logged? Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263878,263938#msg-263938 From vikrant.thakur at gmail.com Tue Jan 12 05:49:38 2016 From: vikrant.thakur at gmail.com (vikrant singh) Date: Mon, 11 Jan 2016 21:49:38 -0800 Subject: logging In-Reply-To: References: <1c0109ebb1fa50380e87c697cd21bb55.NginxMailingListEnglish@forum.nginx.org> Message-ID: The only time you see a log entry for a WebSocket connection is when it gets disconnected. So you will not see a log entry when it connects or transfers data over it. On Mon, Jan 11, 2016 at 8:21 PM, tammini wrote: > Does that mean once a websocket connection is opened successfully, any > subsequent requests sent on that connection cannot be logged? > > Thanks. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,263878,263938#msg-263938 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Tue Jan 12 10:55:51 2016 From: nginx-forum at forum.nginx.org (footplus) Date: Tue, 12 Jan 2016 05:55:51 -0500 Subject: Nginx / LRO on vmxnet3 / missing ACKs Message-ID: Hello, I'm currently investigating an issue with Linux (3.13.0), nginx (1.6.2), vmxnet3 (1.2.0.0-k-NAPI), IPv6 connections and large receive offload (LRO) enabled. The workflow we are investigating is a POST of a small file (jpg) towards a php5-fpm pool. From a network (tcpdump) point of view, it seems that when LRO is disabled on the vmxnet3 interface, all TCP packets are ack'ed correctly after reception. However, when LRO is enabled, only the request part of the POST is acked at TCP level before the client retransmits the packets. I couldn't reproduce the issue reliably yet on another server with a simpler config, or with another daemon. The issue only occurs over IPv6. We use sendfile/tcp_nopush/tcp_nodelay. I know there's not enough detail here to solve the issue, but can someone tell me if there's any specific code path or specific configuration values that could trigger this kind of behavior on nginx's side?
The CURL we use to test - nothing special: curl -6 -F foo=abcd -F bar=dcba -F file=@/tmp/jpeg-home.jpg 'http://server/v1/images' -lv --trace-time > /dev/null LRO enabled: 12:01:15.094022 IP6 2001:db8::1.49870 > 2001:db8::2.http: Flags [S], seq 3856909919, win 65535, options [mss 1400,nop,wscale 5,nop,nop,TS val 963369552 ecr 0,sackOK,eol], length 0 12:01:15.094043 IP6 2001:db8::2.http > 2001:db8::1.49870: Flags [S.], seq 1346488220, ack 3856909920, win 28560, options [mss 1440,sackOK,TS val 228121950 ecr 963369552,nop,wscale 7], length 0 12:01:15.100478 IP6 2001:db8::1.49870 > 2001:db8::2.http: Flags [.], ack 1, win 4120, options [nop,nop,TS val 963369558 ecr 228121950], length 0 12:01:15.101827 IP6 2001:db8::1.49870 > 2001:db8::2.http: Flags [P.], seq 1:238, ack 1, win 4120, options [nop,nop,TS val 963369558 ecr 228121950], length 237: HTTP: POST /v1/images HTTP/1.1 12:01:15.101837 IP6 2001:db8::2.http > 2001:db8::1.49870: Flags [.], ack 238, win 232, options [nop,nop,TS val 228121952 ecr 963369558], length 0 12:01:15.101873 IP6 2001:db8::2.http > 2001:db8::1.49870: Flags [P.], seq 1:26, ack 238, win 232, options [nop,nop,TS val 228121952 ecr 963369558], length 25: HTTP: HTTP/1.1 100 Continue 12:01:15.109132 IP6 2001:db8::1.49870 > 2001:db8::2.http: Flags [.], ack 26, win 4119, options [nop,nop,TS val 963369566 ecr 228121952], length 0 12:01:15.109846 IP6 2001:db8::1.49870 > 2001:db8::2.http: Flags [P.], seq 238:580, ack 26, win 4119, options [nop,nop,TS val 963369566 ecr 228121952], length 342: HTTP 12:01:15.114752 IP6 2001:db8::1.49870 > 2001:db8::2.http: Flags [.], seq 580:3356, ack 26, win 4119, options [nop,nop,TS val 963369566 ecr 228121952], length 2776: HTTP 12:01:15.114762 IP6 2001:db8::1.49870 > 2001:db8::2.http: Flags [.], seq 3356:4744, ack 26, win 4119, options [nop,nop,TS val 963369566 ecr 228121952], length 1388: HTTP 12:01:15.147172 IP6 2001:db8::2.http > 2001:db8::1.49870: Flags [.], ack 580, win 240, options [nop,nop,TS val 228121964 ecr 
963369566], length 0 [problem starts here - seq up to 4474 was received, but only 580 are acked] 12:01:15.160117 IP6 2001:db8::1.49870 > 2001:db8::2.http: Flags [.], seq 4744:6132, ack 26, win 4119, options [nop,nop,TS val 963369611 ecr 228121952], length 1388: HTTP 12:01:15.160138 IP6 2001:db8::2.http > 2001:db8::1.49870: Flags [.], ack 580, win 263, options [nop,nop,TS val 228121967 ecr 963369566,nop,nop,sack 1 {4744:6132}], length 0 12:01:15.421491 IP6 2001:db8::1.49870 > 2001:db8::2.http: Flags [.], seq 580:1968, ack 26, win 4119, options [nop,nop,TS val 963369870 ecr 228121967], length 1388: HTTP 12:01:15.421523 IP6 2001:db8::2.http > 2001:db8::1.49870: Flags [.], ack 1968, win 285, options [nop,nop,TS val 228122032 ecr 963369870,nop,nop,sack 1 {4744:6132}], length 0 12:01:15.435450 IP6 2001:db8::1.49870 > 2001:db8::2.http: Flags [.], seq 1968:4744, ack 26, win 4119, options [nop,nop,TS val 963369884 ecr 228122032], length 2776: HTTP 12:01:15.739853 IP6 2001:db8::1.49870 > 2001:db8::2.http: Flags [.], seq 1968:3356, ack 26, win 4119, options [nop,nop,TS val 963370186 ecr 228122032], length 1388: HTTP 12:01:15.739879 IP6 2001:db8::2.http > 2001:db8::1.49870: Flags [.], ack 3356, win 307, options [nop,nop,TS val 228122112 ecr 963370186,nop,nop,sack 1 {4744:6132}], length 0 12:01:15.751112 IP6 2001:db8::1.49870 > 2001:db8::2.http: Flags [.], seq 3356:4744, ack 26, win 4119, options [nop,nop,TS val 963370197 ecr 228122112], length 1388: HTTP 12:01:15.751131 IP6 2001:db8::2.http > 2001:db8::1.49870: Flags [.], ack 6132, win 330, options [nop,nop,TS val 228122114 ecr 963370197], length 0 12:01:15.751136 IP6 2001:db8::1.49870 > 2001:db8::2.http: Flags [.], seq 6132:7520, ack 26, win 4119, options [nop,nop,TS val 963370197 ecr 228122112], length 1388: HTTP 12:01:15.751141 IP6 2001:db8::2.http > 2001:db8::1.49870: Flags [.], ack 7520, win 352, options [nop,nop,TS val 228122115 ecr 963370197], length 0 [...] 
LRO disabled: 15:21:23.262654 IP6 2001:db8::1.51774 > 2001:db8::2.http: Flags [S], seq 1600883588, win 65535, options [mss 1400,nop,wscale 5,nop,nop,TS val 975315949 ecr 0,sackOK,eol], length 0 15:21:23.262680 IP6 2001:db8::2.http > 2001:db8::1.51774: Flags [S.], seq 1261254534, ack 1600883589, win 28560, options [mss 1440,sackOK,TS val 231123992 ecr 975315949,nop,wscale 7], length 0 15:21:23.269919 IP6 2001:db8::1.51774 > 2001:db8::2.http: Flags [.], ack 1, win 4120, options [nop,nop,TS val 975315956 ecr 231123992], length 0 15:21:23.273546 IP6 2001:db8::1.51774 > 2001:db8::2.http: Flags [P.], seq 1:238, ack 1, win 4120, options [nop,nop,TS val 975315956 ecr 231123992], length 237: HTTP: POST /v1/images HTTP/1.1 15:21:23.273563 IP6 2001:db8::2.http > 2001:db8::1.51774: Flags [.], ack 238, win 232, options [nop,nop,TS val 231123995 ecr 975315956], length 0 15:21:23.273586 IP6 2001:db8::2.http > 2001:db8::1.51774: Flags [P.], seq 1:26, ack 238, win 232, options [nop,nop,TS val 231123995 ecr 975315956], length 25: HTTP: HTTP/1.1 100 Continue 15:21:23.279832 IP6 2001:db8::1.51774 > 2001:db8::2.http: Flags [.], ack 26, win 4119, options [nop,nop,TS val 975315965 ecr 231123995], length 0 15:21:23.281329 IP6 2001:db8::1.51774 > 2001:db8::2.http: Flags [P.], seq 238:580, ack 26, win 4119, options [nop,nop,TS val 975315965 ecr 231123995], length 342: HTTP 15:21:23.285367 IP6 2001:db8::1.51774 > 2001:db8::2.http: Flags [.], seq 580:1968, ack 26, win 4119, options [nop,nop,TS val 975315965 ecr 231123995], length 1388: HTTP 15:21:23.285379 IP6 2001:db8::2.http > 2001:db8::1.51774: Flags [.], ack 1968, win 263, options [nop,nop,TS val 231123998 ecr 975315965], length 0 15:21:23.285440 IP6 2001:db8::1.51774 > 2001:db8::2.http: Flags [.], seq 1968:3356, ack 26, win 4119, options [nop,nop,TS val 975315965 ecr 231123995], length 1388: HTTP 15:21:23.285463 IP6 2001:db8::1.51774 > 2001:db8::2.http: Flags [.], seq 3356:4744, ack 26, win 4119, options [nop,nop,TS val 975315965 ecr 
231123995], length 1388: HTTP 15:21:23.285469 IP6 2001:db8::2.http > 2001:db8::1.51774: Flags [.], ack 4744, win 307, options [nop,nop,TS val 231123998 ecr 975315965], length 0 15:21:23.297518 IP6 2001:db8::1.51774 > 2001:db8::2.http: Flags [.], seq 4744:6132, ack 26, win 4119, options [nop,nop,TS val 975315977 ecr 231123998], length 1388: HTTP 15:21:23.297690 IP6 2001:db8::1.51774 > 2001:db8::2.http: Flags [.], seq 6132:7520, ack 26, win 4119, options [nop,nop,TS val 975315977 ecr 231123998], length 1388: HTTP 15:21:23.297701 IP6 2001:db8::2.http > 2001:db8::1.51774: Flags [.], ack 7520, win 352, options [nop,nop,TS val 231124001 ecr 975315977], length 0 15:21:23.297827 IP6 2001:db8::1.51774 > 2001:db8::2.http: Flags [.], seq 7520:8908, ack 26, win 4119, options [nop,nop,TS val 975315977 ecr 231123998], length 1388: HTTP 15:21:23.299739 IP6 2001:db8::1.51774 > 2001:db8::2.http: Flags [.], seq 8908:10296, ack 26, win 4119, options [nop,nop,TS val 975315977 ecr 231123998], length 1388: HTTP 15:21:23.299747 IP6 2001:db8::2.http > 2001:db8::1.51774: Flags [.], ack 10296, win 397, options [nop,nop,TS val 231124002 ecr 975315977], length 0 Thanks, Best regards, Aur?lien Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263944,263944#msg-263944 From luky-37 at hotmail.com Tue Jan 12 11:07:25 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 12 Jan 2016 12:07:25 +0100 Subject: Nginx / LRO on vmxnet3 / missing ACKs In-Reply-To: References: Message-ID: > Hello, > > I'm currently investigating an issue with Linux (3.13.0), nginx (1.6.2), > vmxnet3 (1.2.0.0-k-NAPI), IPv6 connections and large receive offload (LRO) > enabled. The workflow we are investigating is a POST of a small file (jpg) > towards a php5-fpm pool. > > From a network (tcpdump) point a view, it seems that when LRO is disabled on > the vmxnet3 interface, all tcp packets are ack'ed correctly after reception. 
> However, when LRO is enabled, only the request part of the POST is acked at > tcp-level before the client retransmits the packets. This is clearly not a userspace issue. It's either a kernel or a hypervisor issue. I would start by using a supported and up-to-date kernel, because 3.13.0 is neither. From nginx-forum at forum.nginx.org Tue Jan 12 12:42:19 2016 From: nginx-forum at forum.nginx.org (footplus) Date: Tue, 12 Jan 2016 07:42:19 -0500 Subject: Nginx / LRO on vmxnet3 / missing ACKs In-Reply-To: References: Message-ID: <0c02275be5610e80c3752f5a6e95d805.NginxMailingListEnglish@forum.nginx.org> Thanks for this confirmation. These were my next steps anyway. I will update this post if I have any definite indication that something is amiss in userspace. Best regards, Aurélien Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263944,263949#msg-263949 From mdounin at mdounin.ru Tue Jan 12 12:55:12 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Jan 2016 15:55:12 +0300 Subject: logging In-Reply-To: References: <1c0109ebb1fa50380e87c697cd21bb55.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160112125512.GV74233@mdounin.ru> Hello! On Mon, Jan 11, 2016 at 11:21:35PM -0500, tammini wrote: > Does that mean once a websocket connection is opened successfully, any > subsequent requests sent on that connection cannot be logged? No HTTP requests can be sent into a connection which was upgraded to a WebSocket connection. It's a one-way operation. Anything inside a WebSocket connection happens more or less as the body of the HTTP request (and the response). (Note well that there are no requests inside WebSocket connections. WebSockets use frames to ensure data delivery, and don't specify anything else. And, as previously said, nginx doesn't try to parse and/or log anything inside WebSocket connections.)
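[Editorial note: to make the above concrete, here is a typical WebSocket pass-through configuration (the backend address is hypothetical). Only the initial Upgrade request ever appears in the access log, written once when the connection finally closes; individual frames exchanged afterwards are never logged:]

```nginx
# Standard WebSocket proxying: the Upgrade handshake is the only
# HTTP request nginx ever sees on this connection.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;

    location /ws/ {
        proxy_pass http://127.0.0.1:8010;   # hypothetical backend
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 3600s;           # keep idle sockets open longer
    }
}
```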
-- Maxim Dounin http://nginx.org/ From xiaokai.wang at live.com Wed Jan 13 06:57:44 2016 From: xiaokai.wang at live.com (Xiaokai Wang) Date: Wed, 13 Jan 2016 06:57:44 +0000 Subject: Writing a module-syncing upstreams from conf-servers Message-ID: Hi Nginxers, I'm writing a new module that syncs upstreams from conf-servers (consul, etcd, ...). While nginx is serving traffic, the module makes requests to the conf-server to pull the upstream backend list whenever its value has changed, and updates the upstream peers so that new backend servers start working. The reason I wrote the module: last Spring Festival, the company needed to extend the backend resources for the festival, which required reloading a new nginx configuration, and we found that some requests took much more time during the reload. We suspected the reload was the cause, and in the end I think the comparison between reloading and using the module proved it. Other reference: https://www.nginx.com/blog/dynamic-reconfiguration-with-nginx-plus/ . On the other hand, the module I am writing is quite different from the other modules I could learn from, and all my knowledge about nginx comes from looking at how other modules are written, so I'm wondering and hoping someone could comment on how I designed the module and raise any issues if I did anything problematic, wrong, weird or even stupid. For convenience, the module is here: https://github.com/weibocom/nginx-upsync-module As everyone knows, every worker process has its own conf, so the module has every worker process pull upstreams independently and update its own peers. I think that's easy and more reliable. Each set of peers is an array, so every update needs a new array and must delete the old peers; but when deleting the old peers you need to check whether any in-flight requests are still using them. It's a little complicated. What do you think?
work-process flow: startup -> pull from conf-servers -> pull succeeds -> try to update -> work; startup -> pull from conf-servers -> pull fails, parse dump file -> try to update -> work; I am not sure I made that clear, but I really want to get your advice. Any comments, suggestions, warnings are welcome. ----- Regards, Xiaokai -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Jan 15 23:36:54 2016 From: nginx-forum at forum.nginx.org (flechamobile) Date: Fri, 15 Jan 2016 18:36:54 -0500 Subject: Intermittent SSL Handshake Errors In-Reply-To: References: Message-ID: Yeah I removed the double blocks and it solved the problem... The 'possible bug' though is that the problem seems completely random: instead of giving an error all the time, sometimes it works and sometimes it doesn't... Just refreshing the site a few times made it work... So it looks like Nginx just randomly picks the cert. Also, SNI is enabled - I checked. B.R. Wrote: ------------------------------------------------------- > Out of thin air, I suspect it is a certificate problem. > You seem to have configured *the same* certificate (and private key) > for > those 2 domains. Since certificates are generally tied to a single > domain, > that could explain errors. > > Another idea: have you checked nginx has been built with SNI support > and > your client also supports it? Problems with SNI would mean the default > server certificate (since you did not define a default server for your > IP > address/port pair, nginx would pick up the first block containing a > 'listen' directive configured for it) would be presented whatever > domain > you are trying to access, ending up with certificate/domain mismatch. > See http://nginx.org/en/docs/http/configuring_https_servers.html. > --- > *B.
R.* > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,256373,263987#msg-263987 From nginx-forum at forum.nginx.org Fri Jan 15 23:41:34 2016 From: nginx-forum at forum.nginx.org (flechamobile) Date: Fri, 15 Jan 2016 18:41:34 -0500 Subject: Intermittent SSL Handshake Errors In-Reply-To: References: Message-ID: <9cb5a30f43761d8343e263458878c2da.NginxMailingListEnglish@forum.nginx.org> I do want to add that the cert I was using was a subdomain wildcard and the blocks were for different subdomains, so that should not have been a problem with the cert... Maybe it's an access issue to the cert? (nginx can't access it multiple times at the same moment or something) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,256373,263988#msg-263988 From info at jonkri.com Sun Jan 17 00:23:57 2016 From: info at jonkri.com (Jon Kristensen) Date: Sun, 17 Jan 2016 01:23:57 +0100 Subject: Rewriting a "multipart/form-data" request to a "binary" "x-www-form-urlencoded" request Message-ID: Hi, everyone! I would like to know if there's a simple way to have Nginx rewrite a "multipart/form-data" request to an "x-www-form-urlencoded" request with only a (binary) file as the content. That is, I want the rewritten request to look like it was produced with the curl --data-binary option. The reason I want to do this is because I'm working with a backend service that only supports this file encoding. Thanks in advance! All the best, Jon -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Jan 18 13:54:42 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Mon, 18 Jan 2016 08:54:42 -0500 Subject: Windows Nginx Network Sharing (Mapped Hard Drive) Issue Message-ID: So I have two machines in different locations. Web server is C:/ Storage Server is Z:/ (This one is the external mapped hard drive) Now when you upload files to nginx, especially large files (in my tests, 2 GB).
Nginx then pushes the file from the temp location to the Z drive. The issue with this is that it locks up and stops serving traffic until it finishes pushing the file onto the storage server. So if I was to try and get Nginx to respond to any connection or request, it will sit in the connecting state until the file has been pushed onto the external machine. Picture of the network connection below. It shows the internal network connection "Local area connection 2" using 12% bandwidth pushing the file to the mapped hard drive. And throughout the entire time it is doing this, "Local area connection" (the one that would serve internet traffic) stops until the file is on the mapped hard drive. I have also tried the following setting. "client_body_temp_path Z:/nginx/temp/client_body_temp;" But it has the exact same result. The nginx config is completely default other than obviously the "client_max_body_size 2G;". The nginx builds I use are http://nginx-win.ecsds.eu/ Screenshot to be helpful: http://demo.ovh.eu/en/c79e88864393ae948b552f623bbbcb11/ Any help would be much appreciated. I get the feeling this is a completely unique issue, but maybe if anyone else is running tests or using a similar setup they can let me know if they have the same problem. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263995,263995#msg-263995 From mdounin at mdounin.ru Mon Jan 18 14:14:46 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Jan 2016 17:14:46 +0300 Subject: Windows Nginx Network Sharing (Mapped Hard Drive) Issue In-Reply-To: References: Message-ID: <20160118141446.GX74233@mdounin.ru> Hello! On Mon, Jan 18, 2016 at 08:54:42AM -0500, c0nw0nk wrote: > So i have two machines in different locations. > > Web server is C:/ > > Storage Server is Z:/ (This one is the external mapped hard drive) > > Now when you upload files to nginx especialy large files in my tests 2GB.
> Nginx then pushes the file from the temp location to the Z drive, The issue > with this is it locks up and stops serving traffic until it finishes pushing > the file onto the storage server. So if i was to try and get Nginx to > respond to any connection or request it will sit in the connecting state > until the file has been pushed onto the external machine. When nginx has to move a file from a temporary location to a permanent one, this operation will be costly when moving between filesystems. Moreover, during this operation the nginx worker will be blocked and won't be able to process any other events. At least two places in the documentation explicitly warn about such setups, dav_methods and proxy_cache_path. E.g., the dav_methods directive description says (http://nginx.org/r/dav_methods): : A file uploaded with the PUT method is first written to a : temporary file, and then the file is renamed. Starting from : version 0.8.9, temporary files and the persistent store can be put : on different file systems. However, be aware that in this case a : file is copied across two file systems instead of the cheap : renaming operation. It is thus recommended that for any given : location both saved files and a directory holding temporary files, : set by the client_body_temp_path directive, are put on the same : file system. Additionally, in your particular setup you are using a network filesystem to save files. This is expected to cause even bigger problems than a usual copy, as network filesystems usually have much higher latency than local disks. Using network filesystems with nginx is generally not recommended. That is, the problem you are seeing is expected with the configuration you have. Reconsider the configuration you are using.
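[Editorial note: the documentation's recommendation can be sketched as follows (paths are hypothetical). Note this only turns the final move into a cheap rename when both paths live on a local filesystem; with a network drive, writing the temporary file itself would still block the worker:]

```nginx
# Keep the upload temp directory and the final store on the SAME
# local filesystem, so the move after upload is a rename, not a copy.
location /uploads/ {
    root                  /data/www;
    dav_methods           PUT;
    client_body_temp_path /data/tmp;   # same filesystem as /data/www
    client_max_body_size  2G;
}
```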
-- Maxim Dounin http://nginx.org/ From lists.md at gmail.com Mon Jan 18 18:33:50 2016 From: lists.md at gmail.com (Marcelo MD) Date: Mon, 18 Jan 2016 16:33:50 -0200 Subject: High number connections-writing stuck In-Reply-To: <3722539.PurOuhtCdi@vbart-laptop> References: <3722539.PurOuhtCdi@vbart-laptop> Message-ID: Hi, Just getting back to you. Sorry about the delay. This behaviour was caused by a custom patch in a module. The patch was ending the requests without finalizing them, leaking connections. Eventually Nginx just exploded =) Nothing like production workload to stress things out. Thanks a lot. On Sun, Dec 20, 2015 at 11:04 AM, Valentin V. Bartenev wrote: > On Friday 18 December 2015 15:55:47 Marcelo MD wrote: > > Hi, > > > > Recently we added a 'thread_pool' directive to our main configuration. A > > few hours later we saw a huge increase in the connections_writing stat as > > reported by stub_status module. This number reached +- 3800 and is stuck > > there since. The server in question is operating normally, but this is > very > > strange. > > > > Any hints on what this could be? > > > > > > Some info: > > > > - Here is a graph of the stats reported, for a server with thread_pool > and > > another without: http://imgur.com/a/lF2EL > > > > - I don`t have older data anymore, but the jump from <100 to +- 3800 > > connections_writing happened in two sharp jumps. The first one following > a > > reload; > > > > - The machines' hardware and software are identical except for the > > thread_pool directive in their nginx.conf. They live in two different > data > > centers; > > > > - Both machines are performing normally. Nothing unusual in CPU or RAM > > usage. Nginx performance is about the same. > > > > - Reloading Nginx with 'nginx -s reload' does nothing. Restarting the > > process brings connections_writing down. > [..] > > As I understand from your message everything works well. 
You should also > check the error_log, if it doesn't have anything suspicious then there is > nothing to worry about. > > The increased number of writing connections can be explained by increased > concurrency. Now nginx processing cycle doesn't block on disk and can > accept more connections at the same time. All the connections that were > waiting in listen socket before are waiting now in thread pool. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Marcelo Mallmann Dias -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue Jan 19 08:47:54 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 19 Jan 2016 09:47:54 +0100 Subject: High number connections-writing stuck In-Reply-To: References: <3722539.PurOuhtCdi@vbart-laptop> Message-ID: Testing in production? Well you are not the only one... far from that. There are other means to create 'realistic' traffic on benches, provided you think those tests through well, though. --- *B. R.* On Mon, Jan 18, 2016 at 7:33 PM, Marcelo MD wrote: > Hi, > > Just getting back to you. Sorry about the delay.
>> > >> > Any hints on what this could be? >> > >> > >> > Some info: >> > >> > - Here is a graph of the stats reported, for a server with thread_pool >> and >> > another without: http://imgur.com/a/lF2EL >> > >> > - I don`t have older data anymore, but the jump from <100 to +- 3800 >> > connections_writing happened in two sharp jumps. The first one >> following a >> > reload; >> > >> > - The machines' hardware and software are identical except for the >> > thread_pool directive in their nginx.conf. They live in two different >> data >> > centers; >> > >> > - Both machines are performing normally. Nothing unusual in CPU or RAM >> > usage. Nginx performance is about the same. >> > >> > - Reloading Nginx with 'nginx -s reload' does nothing. Restarting the >> > process brings connections_writing down. >> [..] >> >> As I understand from your message everything works well. You should also >> check the error_log, if it doesn't have anything suspicious then there is >> nothing to worry about. >> >> The increased number of writing connections can be explained by increased >> concurrency. Now nginx processing cycle doesn't block on disk and can >> accept more connections at the same time. All the connections that were >> waiting in listen socket before are waiting now in thread pool. >> >> wbr, Valentin V. Bartenev >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > Marcelo Mallmann Dias > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nmilas at noa.gr Tue Jan 19 09:21:46 2016 From: nmilas at noa.gr (Nikolaos Milas) Date: Tue, 19 Jan 2016 11:21:46 +0200 Subject: php not working from aliased subdir Message-ID: <569E002A.5090909@noa.gr> Hello, I have been adding (to my nginx config) directories outside of the default "root" tree like: location ~ /newlocation(.*) { alias /var/websites/externaldir$1; } This works OK. However, I find that the above config does not process php files. I tried adding (before the above): location ~ /newlocation/(.*)\.php$ { alias /var/websites/externaldir$1.php; fastcgi_cache off; try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_intercept_errors on; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_read_timeout 240; fastcgi_pass unix:/tmp/php-fpm.sock; fastcgi_index index.php; } This config however always leads to "404 Not Found" errors for php files. What am I doing wrong? Please correct me. Thanks! Nick From lists.md at gmail.com Tue Jan 19 13:29:21 2016 From: lists.md at gmail.com (Marcelo MD) Date: Tue, 19 Jan 2016 11:29:21 -0200 Subject: High number connections-writing stuck In-Reply-To: References: <3722539.PurOuhtCdi@vbart-laptop> Message-ID: =) That module passed all our tests, both in dev and staging environments. We were deploying in batches and got it on the first one. We also rolled back before any incidents. Not perfect but pretty safe, I think. Maybe we need to be more 'creative' with our testing. On Tue, Jan 19, 2016 at 6:47 AM, B.R. wrote: > Testingin production? > Well you are not the only one... far from that. > > There are other means to create 'realistic' traffic on benches, provided > you are thinking those tests well, though. > --- > *B. R.* > > On Mon, Jan 18, 2016 at 7:33 PM, Marcelo MD wrote: > >> Hi, >> >> Just getting back to you. Sorry about the delay.
>> >> This behaviour was caused by a custom patch in a module. The patch was >> ending the requests without finalizing them, leaking connections. >> >> Eventually Nginx just exploded =) Nothing like production workload to >> stress things out. >> >> Thanks a lot. >> >> >> >> On Sun, Dec 20, 2015 at 11:04 AM, Valentin V. Bartenev >> wrote: >> >>> On Friday 18 December 2015 15:55:47 Marcelo MD wrote: >>> > Hi, >>> > >>> > Recently we added a 'thread_pool' directive to our main configuration. >>> A >>> > few hours later we saw a huge increase in the connections_writing stat >>> as >>> > reported by stub_status module. This number reached +- 3800 and is >>> stuck >>> > there since. The server in question is operating normally, but this is >>> very >>> > strange. >>> > >>> > Any hints on what this could be? >>> > >>> > >>> > Some info: >>> > >>> > - Here is a graph of the stats reported, for a server with thread_pool >>> and >>> > another without: http://imgur.com/a/lF2EL >>> > >>> > - I don`t have older data anymore, but the jump from <100 to +- 3800 >>> > connections_writing happened in two sharp jumps. The first one >>> following a >>> > reload; >>> > >>> > - The machines' hardware and software are identical except for the >>> > thread_pool directive in their nginx.conf. They live in two different >>> data >>> > centers; >>> > >>> > - Both machines are performing normally. Nothing unusual in CPU or RAM >>> > usage. Nginx performance is about the same. >>> > >>> > - Reloading Nginx with 'nginx -s reload' does nothing. Restarting the >>> > process brings connections_writing down. >>> [..] >>> >>> As I understand from your message everything works well. You should also >>> check the error_log, if it doesn't have anything suspicious then there is >>> nothing to worry about. >>> >>> The increased number of writing connections can be explained by increased >>> concurrency. 
Now nginx processing cycle doesn't block on disk and can >>> accept more connections at the same time. All the connections that were >>> waiting in listen socket before are waiting now in thread pool. >>> >>> wbr, Valentin V. Bartenev >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> >> -- >> Marcelo Mallmann Dias >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Marcelo Mallmann Dias -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Jan 19 20:58:14 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 19 Jan 2016 20:58:14 +0000 Subject: php not working from aliased subdir In-Reply-To: <569E002A.5090909@noa.gr> References: <569E002A.5090909@noa.gr> Message-ID: <20160119205814.GT19381@daoine.org> On Tue, Jan 19, 2016 at 11:21:46AM +0200, Nikolaos Milas wrote: Hi there, > location ~ /newlocation/(.*)\.php$ { > > alias /var/websites/externaldir$1.php; > > fastcgi_cache off; > > try_files $uri =404; > include /etc/nginx/fastcgi_params; > This config however always leads to "404 Not Found" errors for php files. > > What am I doing wrong? add "debug_connection 127.0.0.12;" to your events{} section, then do "curl -v http://127.0.0.12/newlocation/X.php", and then read the error log that was generated. 
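Spelled out, those two steps amount to the sketch below (127.0.0.12 and X.php are the placeholder values from this message, and debug-level logging also requires an nginx built with --with-debug):

```nginx
# nginx.conf: log at debug level only for requests from this one client address
events {
    debug_connection 127.0.0.12;
}
```

followed on the shell by `curl -v http://127.0.0.12/newlocation/X.php`, after which the generated error log contains the trace.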
You should see, among other lines, something like http request line: "GET /newlocation/X.php HTTP/1.1" http uri: "/newlocation/X.php" test location: ~ "/newlocation/(.*)\.php$" using configuration "/newlocation/(.*)\.php$" try files phase: 11 trying to use file: "/newlocation/X.php" "/var/websites/externaldirX.php/newlocation/X.php" http special response: 404, "/newlocation/X.php?" Two things there: "alias" and "try_files" is not a good combination; and your config loses the "/" that should be between "externaldir" and "X.php". If you temporarily remove the try_files line that is blocking you, and repeat the "curl" command, then you will see what is sent to the fastcgi server. Usually, the most interesting parameter is SCRIPT_FILENAME. What value do you see for that? Is it exactly the filename that you want your fastcgi server to process? If not, what file do you want your fastcgi server to process? After that, you can decide whether you want to use a try_files directive in this location{} block, and if so, what exactly you want there. (I tested this using a nginx/1.9.1, because that's what I had handy. Different versions may show different output.) Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Jan 19 22:01:45 2016 From: nginx-forum at forum.nginx.org (Fry-kun) Date: Tue, 19 Jan 2016 17:01:45 -0500 Subject: limit_req off? Message-ID: <8f975d5ae232de92c8bba3599e822c35.NginxMailingListEnglish@forum.nginx.org> I have a location with limit_req and would like to set up an inner path without the limit but with other settings same, e.g. location / { limit_req ...; proxy_pass ...; location /private/ { limit_req off; } } How come "limit_req off" is not an available option? Any other easy way of achieving this? 
Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263998,263998#msg-263998 From l at ymx.ch Tue Jan 19 23:10:37 2016 From: l at ymx.ch (Lukas) Date: Wed, 20 Jan 2016 00:10:37 +0100 Subject: nginx/1.9.9 with modsecurity/2.9.0 crashes with segfault and worker process exited on signal 11 In-Reply-To: References: <20160110133934.GA65438@lpr.ch> <20160110140531.GB1474@lpr.ch> Message-ID: <20160119231037.GA2280@lpr.ch> Hi Felipe > Felipe Zimmerle [2016-01-11 17:12]: > > On Sun, Jan 10, 2016 at 11:05 AM Lukas wrote: > > > I found that recommendation. Since I also read that it would not be > > fully compatible with OWASP/CRS I have not given it a try. > > > > What is the situation regrading OWASP/CRS? > > > > Currently there are three different versions of ModSecurity for nginx: > > - Version 2.9.0: That is the last released version, I think that is the one > that you are using. > - nginx_refactoring: That version contains some fixes on the top of v2.9.0, > but those fixes may lead to instabilities depending on your configuration. > - ModSecurity-connector: That is something that still under development and > we have some work to do, to be exactly: > > https://github.com/SpiderLabs/ModSecurity/labels/libmodsec%20-%20missing%20documentation > https://github.com/SpiderLabs/ModSecurity/labels/libmodsec%20-%20missing%20features > https://github.com/SpiderLabs/ModSecurity/labels/libmodsec%20-%20missing%20operators > https://github.com/SpiderLabs/ModSecurity/labels/libmodsec%20-%20missing%20transformation > https://github.com/SpiderLabs/ModSecurity/labels/libmodsec%20-%20missing%20variables > > Only use the ModSecurity-connector if you understands well the ModSecurity > rules and the consequences of the missing pieces. 
> > Further information about libModSecurity can be found here: > http://blog.zimmerle.org/2016/01/an-overview-of-upcoming-libmodsecurity.html > or: > https://www.trustwave.com/Resources/SpiderLabs-Blog/An-Overview-of-the-Upcoming-libModSecurity/ > Thanks for pointing this out. What worries me a "little bit" is that nginx started crashing with an Out-of-Memory Exception when ModSecurity 2.9.0 with OWASP/CRS was activated. Have others experienced similar problems? Isn't there at least a run-time control in nginx that kills subprocesses like ModSecurity as soon as they start overconsuming resources/execution time? Thanks. wbr Lukas From rpaprocki at fearnothingproductions.net Tue Jan 19 23:14:14 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Tue, 19 Jan 2016 15:14:14 -0800 Subject: nginx/1.9.9 with modsecurity/2.9.0 crashes with segfault and worker process exited on signal 11 In-Reply-To: <20160119231037.GA2280@lpr.ch> References: <20160110133934.GA65438@lpr.ch> <20160110140531.GB1474@lpr.ch> <20160119231037.GA2280@lpr.ch> Message-ID: ModSecurity isn't a sub-process, it's compiled into the nginx binary and runs as part of the worker process(es). Nginx doesn't have a concept of spawning children in the manner you're referencing, so there's nothing to be monitored wrt. resource consumption. Any resource monitoring would be done by the kernel, and the target would be nginx itself. If you're running into an OOM condition with the nginx worker process, it sounds like a leak within one of the modules (possible, but not definitely, ModSecurity, if it only happens when you load the OWASP CRS). On Tue, Jan 19, 2016 at 3:10 PM, Lukas wrote: > Hi Felipe > > > Felipe Zimmerle [2016-01-11 17:12]: > > > > On Sun, Jan 10, 2016 at 11:05 AM Lukas wrote: > > > > > I found that recommendation. Since I also read that it would not be > > > fully compatible with OWASP/CRS I have not given it a try. > > > > > > What is the situation regrading OWASP/CRS? 
> > > > > > > Currently there are three different versions of ModSecurity for nginx: > > > > - Version 2.9.0: That is the last released version, I think that is the > one > > that you are using. > > - nginx_refactoring: That version contains some fixes on the top of > v2.9.0, > > but those fixes may lead to instabilities depending on your > configuration. > > - ModSecurity-connector: That is something that still under development > and > > we have some work to do, to be exactly: > > > > > https://github.com/SpiderLabs/ModSecurity/labels/libmodsec%20-%20missing%20documentation > > > https://github.com/SpiderLabs/ModSecurity/labels/libmodsec%20-%20missing%20features > > > https://github.com/SpiderLabs/ModSecurity/labels/libmodsec%20-%20missing%20operators > > > https://github.com/SpiderLabs/ModSecurity/labels/libmodsec%20-%20missing%20transformation > > > https://github.com/SpiderLabs/ModSecurity/labels/libmodsec%20-%20missing%20variables > > > > Only use the ModSecurity-connector if you understands well the > ModSecurity > > rules and the consequences of the missing pieces. > > > > Further information about libModSecurity can be found here: > > > http://blog.zimmerle.org/2016/01/an-overview-of-upcoming-libmodsecurity.html > > or: > > > https://www.trustwave.com/Resources/SpiderLabs-Blog/An-Overview-of-the-Upcoming-libModSecurity/ > > > > Thanks for pointing this out. > > What worries me a "little bit" is that nginx started crashing with an > Out-of-Memory Exception when ModSecurity 2.9.0 with OWASP/CRS was > activated. > > Have others experienced similar problems? > > Isn't there at least a run-time control in nginx that kills > subprocesses like ModSecurity as soon as they start overconsuming > resources/execution time? > > Thanks. > > wbr > Lukas > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From l at ymx.ch Tue Jan 19 23:27:27 2016 From: l at ymx.ch (Lukas) Date: Wed, 20 Jan 2016 00:27:27 +0100 Subject: nginx/1.9.9 with modsecurity/2.9.0 crashes with segfault and worker process exited on signal 11 In-Reply-To: References: <20160110133934.GA65438@lpr.ch> <20160110140531.GB1474@lpr.ch> <20160119231037.GA2280@lpr.ch> Message-ID: <20160119232727.GA3470@lpr.ch> Hi Robert > Robert Paprocki [2016-01-20 00:14]: > > ModSecurity isn't a sub-process, it's compiled into the nginx binary and > runs as part of the worker process(es). Nginx doesn't have a concept of > spawning children in the manner you're referencing, so there's nothing to > be monitored wrt. resource consumption. Any resource monitoring would be > done by the kernel, and the target would be nginx itself. > Thanks for clarifying. > If you're running into an OOM condition with the nginx worker process, it > sounds like a leak within one of the modules (possible, but not definitely, > ModSecurity, if it only happens when you load the OWASP CRS). > I have not had the time to test different variants yet. The proxy-setup, however, works perfectly fine with "ModSecurityEnabled off;" but crashes otherwise. 
My current config:

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name foobar;

    ssl on;
    ssl_certificate crt.stack.pem;
    ssl_certificate_key key.pem;
    ssl_session_timeout 5m;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location / {
        ModSecurityEnabled off;
        ModSecurityConfig modsecurity/modsecurity_crs_10_setup.conf;
        proxy_force_ranges on;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://ip.ad.dr.ess:80;
        proxy_redirect http://ip.ad.dr.ess:80 https://$host$request_uri;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        client_body_temp_path /var/cache/nginx/client_body_temp;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_temp_path /var/cache/nginx/proxy_temp;
    }

    location ~ /\.ht {
        deny all;
    }

    access_log /var/log/nginx/access.log upstreamlog;
    error_log /var/log/nginx/error.log debug;
}

Thanks. wbr. Lukas From reallfqq-nginx at yahoo.fr Wed Jan 20 02:51:54 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 20 Jan 2016 03:51:54 +0100 Subject: limit_req off? In-Reply-To: <8f975d5ae232de92c8bba3599e822c35.NginxMailingListEnglish@forum.nginx.org> References: <8f975d5ae232de92c8bba3599e822c35.NginxMailingListEnglish@forum.nginx.org> Message-ID: You might wish to set up locations like that:

location /outer-path {
    limit_req ...
}

location /outer-path/inner-path {
    ...
}

--- *B. R.* On Tue, Jan 19, 2016 at 11:01 PM, Fry-kun wrote: > I have a location with limit_req and would like to set up an inner path > without the limit but with other settings same, e.g.
> > location / { > limit_req ...; > proxy_pass ...; > location /private/ { > limit_req off; > } > } > > How come "limit_req off" is not an available option? > Any other easy way of achieving this? > > Thanks > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,263998,263998#msg-263998 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adrien.saladin at gmail.com Wed Jan 20 10:58:12 2016 From: adrien.saladin at gmail.com (Adrien Saladin) Date: Wed, 20 Jan 2016 11:58:12 +0100 Subject: log rate limits violations without enforcing them Message-ID: Hi list, Before applying new rate limit values, I would like to check if these rules will have an impact on legitimate clients. Is there a way to either set these rules in a "log only" mode, or a tool to analyse existing logs and see if the rule would have been triggered ? Thanks, Adrien From parakrama1282 at gmail.com Wed Jan 20 12:55:35 2016 From: parakrama1282 at gmail.com (Dhanushka Parakrama) Date: Wed, 20 Jan 2016 18:25:35 +0530 Subject: nginx mod security module issue Message-ID: Hi All I have complied the* nginx 1.9.9* with ModSecurity support . 
and configured the nginx.conf as follows.

Nginx Version: 1.9.9
ModSecurity Version: 2.9

#nginx -V
nginx version: nginx/1.9.9
built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04)
built with OpenSSL 1.0.1f 6 Jan 2014
TLS SNI support enabled
configure arguments: --user=www-data --group=www-data --with-pcre-jit --with-debug --with-ipv6 --with-http_ssl_module --add-module=/opt/modsecurity-2.9.0/nginx/modsecurity

nginx.conf
=========
user www-data;
worker_processes 1;
error_log /var/log/nginx/error.log;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    sendfile on;
    keepalive_timeout 65;
    gzip on;

    server {
        listen 80;
        server_name example.org;

        location / {
            ModSecurityEnabled on;
            ModSecurityConfig modsecurity.conf;
            root html;
            index index.html index.htm;
        }

        location /sonar {
            ModSecurityEnabled on;
            ModSecurityConfig modsecurity.conf;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://192.168.8.52:443/sonar;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

-> When I access http://example.org I can access the web page, and ModSecurity logged the access data to the audit log.

-> Then, when I access http://example.org/sonar, I get the following errors in my logfile:

2016/01/20 18:17:28 [alert] 3520#0: worker process 3549 exited on signal 11 (core dumped)
2016/01/20 18:17:29 [alert] 3520#0: worker process 3551 exited on signal 11 (core dumped)
2016/01/20 18:17:29 [alert] 3520#0: worker process 3553 exited on signal 11 (core dumped)
2016/01/20 18:17:29 [alert] 3520#0: worker process 3555 exited on signal 11 (core dumped)

Can you please advise how to overcome this issue? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jnelson at archive.org Wed Jan 20 23:48:50 2016 From: jnelson at archive.org (Jim Nelson) Date: Wed, 20 Jan 2016 15:48:50 -0800 Subject: Unencoded Location: header when redirecting Message-ID: We?re seeing the following behavior in nginx 1.4.6: * User navigates to location w/ spaces in the URL ("http://example.org/When%20Harry%20Met%20Sally"). The location points to a directory on the filesystem with spaces in its name ("/items/When Harry Met Sally?). * nginx returns ?301 Moved Permanently? with the Location: URL unencoded and a trailing slash added: Location: http://example.org/When Harry Met Sally/ * Some software (i.e. PHP) will automatically follow the redirect, but because it expects an encoded Location: header, it sends exactly what was returned from the server. (Note that curl, wget, and others will fixup unencoded Location: headers, but that?s not what HTTP spec requires.) * nginx will normally process URLs with spaces in them, but because of its request parsing algorithm, it fails w/ a ?400 Bad Request? when it sees the uppercase ?H? in ?Harry? in the URL (https://trac.nginx.org/nginx/ticket/196?cversion=0&cnum_hist=2). In other words, this is the transaction chain: C: GET http://example.org/When%20Harry%20Met%20Sally HTTP/1.1 S: HTTP/1.1 301 Moved Permanently S: Location: http://example.org/When Harry Met Sally/ C: GET http://example.org/When Harry Met Sally/ HTTP/1.1 S: 400 Bad Request I believe the 301 originates from within the nginx code itself (ngx_http_static_module.c:147-193?) and not from our rewrite rules. As I read the HTTP spec, Location: must be encoded. ? Jim From highclass99 at gmail.com Thu Jan 21 12:41:31 2016 From: highclass99 at gmail.com (highclass99) Date: Thu, 21 Jan 2016 21:41:31 +0900 Subject: I'm not sure how "hash $cachekey" works... Message-ID: I have a question about "hash $cachekey" consistent; Would the following Config 1, Config 2, Config 3 be exactly the same of different? i.e. 
if I swap the configurations would the cache keys stay consistent? The reason I am asking is I have a large configuration similar to Config 2 and am not sure it would be "safe" to clean up the order and merge duplicates as weighted config values.

Thank you.

Would the following be different or the same?

Config 1:
upstream ImageCluster {
    server 10.1.1.1;
    server 10.1.1.1;
    server 10.1.1.2;
    server 10.1.1.2;
    server 10.1.1.3;
    server 10.1.1.3;

    hash $cachekey consistent;
}

Config 2:
upstream ImageCluster {
    server 10.1.1.2;
    server 10.1.1.2;
    server 10.1.1.1;
    server 10.1.1.1;
    server 10.1.1.3;
    server 10.1.1.3;

    hash $cachekey consistent;
}

Config 3:
upstream ImageCluster {
    server 10.1.1.1 weight=2;
    server 10.1.1.2 weight=2;
    server 10.1.1.3 weight=2;

    hash $cachekey consistent;
}

From arut at nginx.com Thu Jan 21 16:05:41 2016 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 21 Jan 2016 19:05:41 +0300 Subject: I'm not sure how "hash $cachekey" works... In-Reply-To: References: Message-ID: Hi,

> On 21 Jan 2016, at 15:41, highclass99 wrote:
>
> I have a question about "hash $cachekey" consistent;
> Would the following Config 1, Config 2, Config 3 be exactly the same or different?
> i.e. if I swap the configurations would the cache keys stay consistent?
> The reason I am asking is I have a large configuration similar to Config 2 and am not sure it would be "safe" to clean up the order and merge duplicates as weighted config values.
>
> Thank you.
>
> Would the following be different or the same?
> Config 1:
> upstream ImageCluster {
>     server 10.1.1.1;
>     server 10.1.1.1;
>     server 10.1.1.2;
>     server 10.1.1.2;
>     server 10.1.1.3;
>     server 10.1.1.3;
>
>     hash $cachekey consistent;
> }
>
> Config 2:
> upstream ImageCluster {
>     server 10.1.1.2;
>     server 10.1.1.2;
>     server 10.1.1.1;
>     server 10.1.1.1;
>     server 10.1.1.3;
>     server 10.1.1.3;
>
>     hash $cachekey consistent;
> }

These two are the same.
With consistent hash balancer the order of servers does not matter. > Config 3: > upstream ImageCluster { > server 10.1.1.1 weight=2; > server 10.1.1.2 weight=2; > server 10.1.1.3 weight=2; > > hash $cachekey consistent; > } This one is different. With Config1 and Config2 you have 160 hash points per each server (duplicates are removed). With Config3 you have 320 points per each server. Statistically all configurations are similar. -- Roman Arutyunyan From eliezer at ngtech.co.il Thu Jan 21 18:16:36 2016 From: eliezer at ngtech.co.il (Eliezer Croitoru) Date: Thu, 21 Jan 2016 20:16:36 +0200 Subject: What modules are using the query term "token" for access control? Message-ID: <56A12084.4000207@ngtech.co.il> I have seen that couple media sites are using the "token" query term for access control to some media content and I was wondering what module can do that? For examples the request: http://example.com/media/111111.mp4?token=xyz_very_long_token allows access to only this 111111.mp4 specific file and not to 111112.mp4 . I can write a web application that does the same thing but was wondering what might be used if there are couple web servers and couple content servers. In this specific case the token should be either shared between the servers or that the token has some encrypted data in it. Are there any modules that implements this function? Thanks, Eliezer From me at myconan.net Thu Jan 21 18:38:03 2016 From: me at myconan.net (nanaya) Date: Fri, 22 Jan 2016 03:38:03 +0900 Subject: What modules are using the query term "token" for access control? In-Reply-To: <56A12084.4000207@ngtech.co.il> References: <56A12084.4000207@ngtech.co.il> Message-ID: <1453401483.2438630.498852394.34537928@webmail.messagingengine.com> Hi, On Fri, Jan 22, 2016, at 03:16, Eliezer Croitoru wrote: > I have seen that couple media sites are using the "token" query term for > access control to some media content and I was wondering what module can > do that? 
> For examples the request: > http://example.com/media/111111.mp4?token=xyz_very_long_token > > allows access to only this 111111.mp4 specific file and not to 111112.mp4 > . > > I can write a web application that does the same thing but was wondering > what might be used if there are couple web servers and couple content > servers. In this specific case the token should be either shared between > the servers or that the token has some encrypted data in it. > Are there any modules that implements this function? > Something like this? http://nginx.org/en/docs/http/ngx_http_secure_link_module.html From eliezer at ngtech.co.il Thu Jan 21 22:54:11 2016 From: eliezer at ngtech.co.il (Eliezer Croitoru) Date: Fri, 22 Jan 2016 00:54:11 +0200 Subject: What modules are using the query term "token" for access control? In-Reply-To: <1453401483.2438630.498852394.34537928@webmail.messagingengine.com> References: <56A12084.4000207@ngtech.co.il> <1453401483.2438630.498852394.34537928@webmail.messagingengine.com> Message-ID: <56A16193.7070802@ngtech.co.il> On 21/01/2016 20:38, nanaya wrote: > Something like this? > http://nginx.org/en/docs/http/ngx_http_secure_link_module.html No. The idea is that a client have the full url to the resource but it will be restricted using a token. The token can be either stored in a DB such as memcached\redis or another option. If such a thing doesn't exist I would probably write something up (not for nginx) Thanks, Eliezer From highclass99 at gmail.com Fri Jan 22 00:48:49 2016 From: highclass99 at gmail.com (highclass99) Date: Fri, 22 Jan 2016 09:48:49 +0900 Subject: I'm not sure how "hash $cachekey" works... In-Reply-To: References: Message-ID: Thank you for your reply. 
You replied: > With Config1 and Config2 you have 160 hash points per each server (duplicates are removed) Does that mean I could just shorten the whole config to upstream ImageCluster { server 10.1.1.1; server 10.1.1.2; server 10.1.1.3; hash $cachekey consistent; } and it would be exactly the same and keep the hashes exactly the same as before? If I wanted to "weight" the servers to all initially weight=4 and then slowly optimize the weightings. Would it be a good idea to slowly add the weightings as weight=2 -restart and wait-> weight=3 -restart and wait-> weight=4 (the servers have really high load, and sudden changes in hash can cause problems) or would it be okay to just change at once like upstream ImageCluster { server 10.1.1.1 weight=4; server 10.1.1.2 weight=4; server 10.1.1.3 weight=4; hash $cachekey consistent; } Again, thanks for your reply. On Fri, Jan 22, 2016 at 1:05 AM, Roman Arutyunyan wrote: > Hi, > > > On 21 Jan 2016, at 15:41, highclass99 wrote: > > > > I have a question about "hash $cachekey" consistent; > > Would the following Config 1, Config 2, Config 3 be exactly the same of > different? > > i.e. if I swap the configurations would the cache keys stay consistent? > > The reason I am asking is I have a large configuration similar to Config > 2 and am not sure it would be "safe" to clean up the order and merge > duplicates as weighted config values. > > > > Thank you. > > > > Would the following be different of the same? > > Config 1: > > upstream ImageCluster { > > server 10.1.1.1; > > server 10.1.1.1; > > server 10.1.1.2; > > server 10.1.1.2; > > server 10.1.1.3; > > server 10.1.1.3; > > > > hash $cachekey consistent; > > } > > > > Config 2: > > upstream ImageCluster { > > server 10.1.1.2; > > server 10.1.1.2; > > server 10.1.1.1; > > server 10.1.1.1; > > server 10.1.1.3; > > server 10.1.1.3; > > > > hash $cachekey consistent; > > } > > These two are the same. With consistent hash balancer the order of servers > does not matter. 
> > > Config 3: > > upstream ImageCluster { > > server 10.1.1.1 weight=2; > > server 10.1.1.2 weight=2; > > server 10.1.1.3 weight=2; > > > > hash $cachekey consistent; > > } > > This one is different. > > With Config1 and Config2 you have 160 hash points per each server > (duplicates are removed). > With Config3 you have 320 points per each server. > > Statistically all configurations are similar. > > -- > Roman Arutyunyan > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From highclass99 at gmail.com Fri Jan 22 00:58:23 2016 From: highclass99 at gmail.com (highclass99) Date: Fri, 22 Jan 2016 09:58:23 +0900 Subject: What is the recommended way to swap hash consistent servers (with least sudden hash change) Message-ID: Hello, If I have the following config upstream ImageCluster { server 10.1.1.1; server 10.1.1.2; server 10.1.1.3; hash $cachekey consistent; } and wish to swap 10.1.1.3 to 10.1.1.5 would it be better to delete 10.1.1.3, restart and wait for the hash keys to update, add 10.1.1.5 and then restart, or just delete and swap 10.1.1.3 with 10.1.1.5 all at once? Also do you think it's a good strategy to add weightings so I can slowly decrease the weightings value and restart nginx serveral times until the weighting becomes 0, so that it will slowly change the keys? Thank you. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From agentzh at gmail.com Fri Jan 22 05:28:49 2016 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 21 Jan 2016 21:28:49 -0800 Subject: [ANN] OpenResty 1.9.7.2 released Message-ID: Hi folks I am happy to announce the new formal release, 1.9.7.2, of the OpenResty web platform based on NGINX and Lua: https://openresty.org/#Download Both the (portable) source code distribution and the Win32 binary distribution are provided on this Download page. This version is an important milestone in OpenResty's release history and we have the following 3 big features included in the bundle: 1. ngx_lua's new directives ssl_certificate_by_lua* directives that allow dynamic control of the downstream SSL/TLS handshake of NGINX with Lua. Nonblocking I/O operations on the Lua land like cosockets and "light threads" are also available in this context. This feature has been powering CloudFlare's SSL gateway for more than a year now. Thanks CloudFlare for supporting this work. See the official documentation of these features for more details: https://github.com/openresty/lua-nginx-module#ssl_certificate_by_lua_block https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/ssl.md#readme https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/ocsp.md#readme 2. ngx_lua's new ngx.semaphore API contributed by Kugou Inc., which allows very efficient "light thread" control flow synchronizations across the request and context boundaries (though still limited to a single NGINX worker process). This feature also plays an important role in their engineers' C2000K experiment atop OpenResty. See the corresponding documentation for more details: https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/semaphore.md#readme 3. ngx_lua's new balancer_by_lua* directives that allow highly dynamic upstream peer selections and retries in pure Lua, making dynamic custom load balancers written in Lua possible. 
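A minimal use of this feature, following the documented ngx.balancer interface, looks roughly like the sketch below (the peer address here is a placeholder; a real balancer would pick it dynamically, e.g. from shared state):

```nginx
upstream backend {
    server 0.0.0.1;   # placeholder; the Lua code below selects the real peer

    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        -- choose the upstream peer for this request in pure Lua
        local ok, err = balancer.set_current_peer("127.0.0.1", 8080)
        if not ok then
            ngx.log(ngx.ERR, "failed to set the current peer: ", err)
            return ngx.exit(500)
        end
    }
}
```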
Such mechanism is based upon NGINX core's upstream facility, which can work with existing nginx upstream C modules like ngx_proxy and ngx_fastcgi, out of the box. Thanks CloudFlare for supporting this work. See the corresponding documentation for more details: https://github.com/openresty/lua-nginx-module#balancer_by_lua_block https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/balancer.md#readme Special thanks go to all our contributors and users. Changes since the last (formal) release, 1.9.7.1: * feature: applied the ssl_cert_cb_yield patch to the bundled version of the NGINX core to allow yielding in OpenSSL's SSL_CTX_set_cert_cb() callbacks (needed by the ngx_lua module's ssl_certificate_by_lua* directives, for example). * bugfix: the "./configure" options "--with-dtrace-probes" and "--with-stream" did not work together and led to compilation failures. * upgraded the ngx_lua module to 0.10.0. * feature: better SSL/TLS handshake control. * implemented the ssl_certificate_by_lua_block and ssl_certifcate_by_lua_file directives for controlling the NGINX downstream SSL handshake dynamically with Lua. thanks Piotr Sikora, Zi Lin, yejingx, and others for the help. * added an optional "send_status_req" argument to stream-typed cosockets' sslhandshake() method to send OCSP status request. * feature: implemented the balancer_by_lua_block and balancer_by_lua_file directives to allow NGINX load balancers written in Lua. thanks Shuxin Yang, Dejiang Zhu, Brandon Beveridge, and others for the help. * feature: added pure C API for the ngx.semaphore Lua module implemented in lua-resty-core. this ngx.semaphore API provides efficient synchronization among "light threads" across request/context boundaries. thanks Weixie Cui and Dejiang Zhu from Kugou Inc. for contributing this feature. also thanks Kugou Inc. for supporting this work. * doc: made clear the ngx.ctx scoping issues. thanks Robert Paprocki for asking. * doc: typo fix for the contexts of ngx.worker.id. 
      thanks RocFang for the patch.

* upgraded the lua-resty-core library to 0.1.4.

    * feature: added new Lua modules ngx.ssl and ngx.ocsp. these two
      modules provide a Lua API mostly useful in the context of the
      ngx_lua module's ssl_certificate_by_lua*. thanks Piotr Sikora,
      Zi Lin, yejingx, Aapo Talvensaari, and others for the help.

    * feature: implemented the ngx.balancer Lua module to support dynamic
      nginx upstream balancers written in Lua. the ngx.balancer module is
      expected to be used in the ngx_lua module's balancer_by_lua*
      context. thanks Shuxin Yang, Aapo Talvensaari, and Guanlan Dai for
      the help.

    * feature: added a new Lua module, ngx.semaphore. this ngx.semaphore
      API provides efficient synchronization among "light threads" across
      request/context boundaries. thanks Weixie Cui and Dejiang Zhu from
      Kugou Inc. for contributing this feature. also thanks Kugou Inc.
      for supporting this work.

* upgraded LuaJIT to v2.1-20160108:
  https://github.com/openresty/luajit2/tags

    * imported Mike Pall's latest changes:

        * FFI: properly unsink non-standard cdata allocations.

        * ARM: added external frame unwinding. thanks to Nick Zavaritsky.

        * MIPS soft-float support. contributed by Djordje Kovacevic and
          Stefan Pejic from RT-RK.com. sponsored by Cisco Systems, Inc.

        * added soft-float support to the interpreter.

        * added soft-float FFI support.

The HTML version of the change log with lots of helpful hyper-links can
be browsed here:

    https://openresty.org/#ChangeLog1009007

OpenResty (aka ngx_openresty) is a full-fledged web platform that bundles
the standard Nginx core, Lua/LuaJIT, lots of 3rd-party Nginx modules and
Lua libraries, as well as most of their external dependencies. See
OpenResty's homepage for details:

    https://openresty.org/

We have run extensive testing on our Amazon EC2 test cluster and ensured
that all the components (including the Nginx core) play well together.
The latest test report can always be found here:

    https://qa.openresty.org

Enjoy!
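To give a concrete feel for the balancer_by_lua* feature announced in this
release, here is a minimal configuration sketch based on the linked
ngx.balancer documentation. Note that the upstream name, address, and port
below are illustrative assumptions, not part of the announcement; in a
real deployment the peer would come from a dynamic source (a shared dict,
an earlier Redis/database lookup, etc.):

```nginx
# Hypothetical sketch of a dynamic balancer with balancer_by_lua_block.
upstream dynamic_backend {
    # Placeholder address, never actually used: the Lua code below
    # selects the real peer on every request.
    server 0.0.0.1;

    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        -- Pick a peer dynamically; host/port here are made-up examples.
        local ok, err = balancer.set_current_peer("127.0.0.1", 8080)
        if not ok then
            ngx.log(ngx.ERR, "failed to set the current peer: ", err)
            return ngx.exit(ngx.ERROR)
        end
    }
}

server {
    listen 8000;

    location / {
        proxy_pass http://dynamic_backend;
    }
}
```

This is a sketch under the stated assumptions, not a drop-in config; see
the ngx.balancer README linked above for the full API (retries, timeouts,
and so on).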
-agentzh

From anoopalias01 at gmail.com  Fri Jan 22 09:08:13 2016
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Fri, 22 Jan 2016 14:38:13 +0530
Subject: directio sendfile aio
Message-ID: 

From an nginx book I read that setting

###########
http {

sendfile on;
sendfile_max_chunk 512k;
aio threads=default;
directio 4m;
############

is good, as it uses (if I understand correctly)

sendfile for files smaller than 4m and directio for files larger than 4m.

But the above config is causing issues: static files like CSS and images
are not being served. I am not sure what exactly the issue is, but
commenting out directio fixes it, or commenting out sendfile fixes it.

Adding them both creates a mess.

The question is: is the above combination valid, and if yes, what might
be causing the issue?

-- 
*Anoop P Alias*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From reallfqq-nginx at yahoo.fr  Fri Jan 22 09:53:02 2016
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Fri, 22 Jan 2016 10:53:02 +0100
Subject: directio sendfile aio
In-Reply-To: 
References: 
Message-ID: 

You did not provide any configuration snippet for the thread pool you are
using, and you are only showing partial configuration bits.
I hope the unshown part is configured properly...
---
*B. R.*

On Fri, Jan 22, 2016 at 10:08 AM, Anoop Alias wrote:

> From an nginx book I read that setting
>
> ###########
> http {
>
> sendfile on;
> sendfile_max_chunk 512k;
> aio threads=default;
> directio 4m;
> ############
>
> is good, as it uses (if I understand correctly)
>
> sendfile for files smaller than 4m and directio for files larger than 4m.
>
> But the above config is causing issues: static files like CSS and images
> are not being served. I am not sure what exactly the issue is, but
> commenting out directio fixes it, or commenting out sendfile fixes it.
>
> Adding them both creates a mess.
>
> The question is: is the above combination valid, and if yes, what might
> be causing the issue?
>
>
> -- 
> *Anoop P Alias*
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vbart at nginx.com  Fri Jan 22 10:05:59 2016
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Fri, 22 Jan 2016 13:05:59 +0300
Subject: directio sendfile aio
In-Reply-To: 
References: 
Message-ID: <3455492.6Dye9OLk5Q@vbart-workstation>

On Friday 22 January 2016 14:38:13 Anoop Alias wrote:
> From an nginx book I read that setting
>

What's the name of the book?

> ###########
> http {
>
> sendfile on;
> sendfile_max_chunk 512k;
> aio threads=default;
> directio 4m;
> ############
>
> is good, as it uses (if I understand correctly)

In some specific use case scenarios these settings can be good.

>
> sendfile for files smaller than 4m and directio for files larger than 4m.
>
> But the above config is causing issues: static files like CSS and images
> are not being served. I am not sure what exactly the issue is, but
> commenting out directio fixes it, or commenting out sendfile fixes it.
>
> Adding them both creates a mess.
>
> The question is: is the above combination valid, and if yes, what might
> be causing the issue?
>

Could you provide the full configuration and a debug log
(see http://nginx.org/en/docs/debugging_log.html)?

I'm unable to reproduce any issues on a simple configuration
example with the settings above.

  wbr, Valentin V. Bartenev

From anoopalias01 at gmail.com  Fri Jan 22 11:00:38 2016
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Fri, 22 Jan 2016 16:30:38 +0530
Subject: directio sendfile aio
In-Reply-To: <3455492.6Dye9OLk5Q@vbart-workstation>
References: <3455492.6Dye9OLk5Q@vbart-workstation>
Message-ID: 

I think the weird issue I mentioned had something to do with ngx_pagespeed
using a memcached backend while memcached was not running. It is working
fine with memcached now running.
Somehow the sendfile and directio settings were affecting that. As I
mentioned, the issue went away after enabling either sendfile or directio
alone while memcached was not running (I think pagespeed falls back to a
file-based cache if memcached is not running).

Right now with

sendfile on;
sendfile_max_chunk 512k;
aio threads=iopool;
directio 4m;

and memcached running, I don't see any issues.

If memcached is not running (it is used by pagespeed), the above settings
produce weird errors that go away if directio and sendfile are used in a
mutually exclusive fashion.

#########
The book is NGINX High Performance by Rahul Sharma.

You can check the exact section on page #53, available in Google Books as
a sample.
#########

So, is the setting

sendfile on;
sendfile_max_chunk 512k;
aio threads=iopool; #thread_pool iopool is defined in the main context
directio 4m;

good?

On Fri, Jan 22, 2016 at 3:35 PM, Valentin V. Bartenev wrote:

> On Friday 22 January 2016 14:38:13 Anoop Alias wrote:
> > From an nginx book I read that setting
> >
>
> What's the name of the book?
>
> > ###########
> > http {
> >
> > sendfile on;
> > sendfile_max_chunk 512k;
> > aio threads=default;
> > directio 4m;
> > ############
> >
> > is good, as it uses (if I understand correctly)
>
> In some specific use case scenarios these settings can be good.
>
> >
> > sendfile for files smaller than 4m and directio for files larger than 4m.
> >
> > But the above config is causing issues: static files like CSS and images
> > are not being served. I am not sure what exactly the issue is, but
> > commenting out directio fixes it, or commenting out sendfile fixes it.
> >
> > Adding them both creates a mess.
> >
> > The question is: is the above combination valid, and if yes, what might
> > be causing the issue?
> >
>
> Could you provide the full configuration and a debug log
> (see http://nginx.org/en/docs/debugging_log.html)?
>
> I'm unable to reproduce any issues on a simple configuration
> example with the settings above.
> > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Fri Jan 22 11:27:45 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 22 Jan 2016 14:27:45 +0300 Subject: directio sendfile aio In-Reply-To: References: <3455492.6Dye9OLk5Q@vbart-workstation> Message-ID: <1515547.xnXauH0CFs@vbart-workstation> On Friday 22 January 2016 16:30:38 Anoop Alias wrote: > I think the weird issue I mentioned had something to do with ngx_pagespeed > with a memcached backed and memcached was not running . It is working fine > with memcached now running . > > Somehow the sendfile and directio setting was affecting that. as I > mentioned the issue fixed with enabling either sendfile and directio on > with memcached not running (I think pagespeed falls back to a file based > cache if memcached is not running) . > > Right now with > > sendfile on; > sendfile_max_chunk 512k; > aio threads=iopool; > directio 4m; > > and memcached running ;I dont see any issues . > > if memcached is not running (used by pagespeed) and the above setting > produce weird errors that goes away if directio and sendfile is used in a > mutually exclusive fashion. > > > > ######### > the book is NGINX High Performance > By Rahul Sharma > > You can check the exact section in page #53 available in google books as a > sample. > ######### > > So the setting > > sendfile on; > sendfile_max_chunk 512k; > aio threads=iopool; #thread_pool iopool is defined in the main context > directio 4m; > > > is good ? > [..] I wouldn't recommend these settings for everyone. Actually there are no settings that work for everyone and that's the reason why these directives exist and tunable. Direct IO is questionable if you aren't serving gigabytes of movies. 
You don't need direct IO and aio if your working data set is less than the amount of RAM. "sendfile_max_chunk 512k" with "directio 4m" looks surplus. Actually the default settings usually work well. So in general it's not a good idea to tune anything unless you're experiencing problems and fully understand what you're going to change and why. Also you need some metrics and/or benchmarks to test your settings, blind tuning can make the result worse. wbr, Valentin V. Bartenev From anoopalias01 at gmail.com Fri Jan 22 11:55:42 2016 From: anoopalias01 at gmail.com (Anoop Alias) Date: Fri, 22 Jan 2016 17:25:42 +0530 Subject: directio sendfile aio In-Reply-To: <1515547.xnXauH0CFs@vbart-workstation> References: <3455492.6Dye9OLk5Q@vbart-workstation> <1515547.xnXauH0CFs@vbart-workstation> Message-ID: My use case is mixed mass hosting environment where some vhost may be serving large files and some may be serving small files and where adding something like location /video with directio enabled is not practical as I being the webhost may not be knowing if the vhost user is serving a video etc . In such cases ..do you recommend using something like sendfile on; sendfile_max_chunk 512k; aio threads=default; directio 100m; in the http context . The logic being file served of size 100m or less use sendfile and anything larger than 100m ( in which case it may have a high chance of being a multimedia file) is served via directio . Part of these setting are derived from what I understood is good from https://www.nginx.com/blog/thread-pools-boost-performance-9x/ -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Fri Jan 22 12:14:05 2016 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Fri, 22 Jan 2016 15:14:05 +0300 Subject: directio sendfile aio In-Reply-To: References: <1515547.xnXauH0CFs@vbart-workstation> Message-ID: <2458277.rZU7xq3jDZ@vbart-workstation> On Friday 22 January 2016 17:25:42 Anoop Alias wrote: > My use case is mixed mass hosting environment where some vhost may be > serving large files and some may be serving small files and where adding > something like location /video with directio enabled is not practical as I > being the webhost may not be knowing if the vhost user is serving a video > etc . > > In such cases ..do you recommend using something like > > sendfile on; > sendfile_max_chunk 512k; > aio threads=default; > directio 100m; > [..] Something like this can work better, since it reduces usage of Direct IO only for reading really quite big files. But a possible side effect of this setting will be slowdown of serving such files. wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Wed Jan 20 19:14:16 2016 From: nginx-forum at forum.nginx.org (Fry-kun) Date: Wed, 20 Jan 2016 14:14:16 -0500 Subject: limit_req off? In-Reply-To: References: Message-ID: Sure, except there are a few dozen identical config lines in these locations... If "limit_req off" was an option, that would be the only difference. B.R. Wrote: ------------------------------------------------------- > You might wish to set up locations like that: > > location /outer-path { > limit_req ... > } > > location /outer-path/inner-path { > ... > } > --- > *B. R.* > > On Tue, Jan 19, 2016 at 11:01 PM, Fry-kun > > wrote: > > > I have a location with limit_req and would like to set up an inner > path > > without the limit but with other settings same, e.g. > > > > location / { > > limit_req ...; > > proxy_pass ...; > > location /private/ { > > limit_req off; > > } > > } > > > > How come "limit_req off" is not an available option? > > Any other easy way of achieving this? 
> >
> > Thanks
> >
> > Posted at Nginx Forum:
> > https://forum.nginx.org/read.php?2,263998,263998#msg-263998
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263998,264025#msg-264025

From nginx-forum at forum.nginx.org  Thu Jan 21 07:19:03 2016
From: nginx-forum at forum.nginx.org (rgrraj)
Date: Thu, 21 Jan 2016 02:19:03 -0500
Subject: Absolute rather than relative times in expires directives
In-Reply-To: <20151118134510.GK3351@daoine.org>
References: <20151118134510.GK3351@daoine.org>
Message-ID: 

Hi Francis

Thanks for that. It works fine in 1.9.6, but on 1.9.2 a similar
configuration throws an "expires" directive invalid value error in the
sites-enabled file.

Any thoughts?

Thanks
Govind

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,115406,264031#msg-264031

From nginx-forum at forum.nginx.org  Wed Jan 20 19:20:07 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Wed, 20 Jan 2016 14:20:07 -0500
Subject: Windows Nginx Network Sharing (Mapped Hard Drive) Issue
In-Reply-To: <20160118141446.GX74233@mdounin.ru>
References: <20160118141446.GX74233@mdounin.ru>
Message-ID: <85fa4013c7cdd601377ba3128daf40aa.NginxMailingListEnglish@forum.nginx.org>

Maxim Dounin Wrote:
-------------------------------------------------------
> Hello!
>
> On Mon, Jan 18, 2016 at 08:54:42AM -0500, c0nw0nk wrote:
>
> > So i have two machines in different locations.
> >
> > Web server is C:/
> >
> > Storage Server is Z:/ (This one is the external mapped hard drive)
> >
> > Now when you upload files to nginx, especially large files, in my tests
> 2GB.
> > Nginx then pushes the file from the temp location to the Z drive, > The issue > > with this is it locks up and stops serving traffic until it finishes > pushing > > the file onto the storage server. So if i was to try and get Nginx > to > > respond to any connection or request it will sit in the connecting > state > > until the file has been pushed onto the external machine. > > When nginx has to move a file from a temporary location to a > permanent one, this operation will be costly when moving between > filesystems. Moreover, during this operation nginx worker will be > blocked and won't be able to process any other events. > > At least two places in the documentation explicitly warn about > such setups, dav_methods and proxy_cache_path. E.g., the > dav_methods directive description say > (http://nginx.org/r/dav_methods): > > : A file uploaded with the PUT method is first written to a > : temporary file, and then the file is renamed. Starting from > : version 0.8.9, temporary files and the persistent store can be put > : on different file systems. However, be aware that in this case a > : file is copied across two file systems instead of the cheap > : renaming operation. It is thus recommended that for any given > : location both saved files and a directory holding temporary files, > : set by the client_body_temp_path directive, are put on the same > : file system. > > Additionally, in your paricular setup you are using network > filesystem to save files. This is expected to cause even bigger > problems than usual copy, as network filesystems usually have much > higher latency than local disks. Using network filesystems with > nginx is generally not recommended. > > That is, the problem you are seeing is expected with the > configuration you have. Reconsider the configuration you are > using. 
> > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thanks for the reply and information. I also noticed that on the Z:/ machine even though that is just moving the uploaded file from the temp location to a permanent one it, It also locks up while doing this not for the lengths of time as the other machine the length that server stops serving request while pushing temp files is maybe a second at max depending on how large the file is. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263995,264027#msg-264027 From nginx-forum at forum.nginx.org Fri Jan 22 18:00:31 2016 From: nginx-forum at forum.nginx.org (blason) Date: Fri, 22 Jan 2016 13:00:31 -0500 Subject: Pages rewrite Message-ID: Hi Guys, I need a help on below topic and I wanted to achieve URL Rewrite like this We want to redirect our domain pages from source to destination one Source : Original Page www.xxxx.com/index.php?id=news Destination : www.xxxxx.com/news.html Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264068,264068#msg-264068 From l at ymx.ch Sat Jan 23 00:44:58 2016 From: l at ymx.ch (Lukas) Date: Sat, 23 Jan 2016 01:44:58 +0100 Subject: nginx/1.9.9 with modsecurity/2.9.0 crashes with segfault and worker process exited on signal 11 In-Reply-To: <20160110133934.GA65438@lpr.ch> References: <20160110133934.GA65438@lpr.ch> Message-ID: <20160123004458.GA62219@lpr.ch> Dear all > Lukas [2016-01-10 14:39]: > > Fascinated by nginx, I attempted to integrate it with modsecurity. > > Unfortunately, ever when modsecurity is enabled, nginx reports a > sefault in sysmessages. 
> I tried debugging the issue a bit further (from a user perspective) with common web-page and CalDAV with the following results: * nginx with modsecurity switched off works perfectly as a proxy nginx * nginx with modsecurity switched on with one owasp rule-set (modsecurity_crs_20_protocol_violations.conf) works for common web-pages with multi-media content (quick test without any errors reported) * nginx with modsecurity switched on with one owasp rule-set (modsecurity_crs_20_protocol_violations.conf) does not work for CalDAV. error.log: 2016/01/23 01:19:07 [emerg] 4844#0: *7 posix_memalign(16, 4096) failed (12: Cannot allocate memory) while logging request * nginx with modsecurity switched on without any ruleset does not work for CalDAV -- same error * nginx with modsecurity switched off without any ruleset does work for CalDAV perfectly. With modsecurity switched on, an Out-of-Memory exception took place always reporting: [876715.533926] nginx invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0 [876715.533930] nginx cpuset=/ mems_allowed=0 [876715.533936] CPU: 0 PID: 4844 Comm: nginx Not tainted 4.3.3-consecom-ag #1 [876715.533937] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS debian/1.7.5-1-0-g506b58d-dirty-20140812_231322-gandalf 04/01/2014 [876715.533939] f5a53ed0 d52542a6 f5a6b7c0 d5110792 d55a6db0 f5a6bab4 000280da 00000000 [876715.533943] 00000000 ffffffff 0d3f1361 00031d5e f4929cb8 00200282 f4929cb8 f4929cb0 [876715.533946] d50babb7 00200206 d525956e 00000002 00000002 f5020840 f5020bc4 d55a5702 [876715.533949] Call Trace: [876715.533955] [] ? dump_stack+0x3e/0x58 [876715.533959] [] ? dump_header.isra.8+0x65/0x1be [876715.533963] [] ? delayacct_end+0x47/0xa0 [876715.533967] [] ? ___ratelimit+0x7e/0xe0 [876715.533970] [] ? oom_kill_process+0x1d9/0x380 [876715.533973] [] ? security_capable_noaudit+0x3a/0x60 [876715.533977] [] ? has_ns_capability_noaudit+0xb/0x20 [876715.533979] [] ? oom_badness+0x96/0x100 [876715.533981] [] ? 
out_of_memory+0x252/0x320 [876715.533984] [] ? __alloc_pages_nodemask+0x77e/0x7a0 [876715.533989] [] ? handle_mm_fault+0xd54/0xf50 [876715.533990] [] ? vma_merge+0x1bf/0x280 [876715.533992] [] ? do_brk+0x1ca/0x2b0 [876715.533995] [] ? __do_page_fault+0x137/0x3a0 [876715.533998] [] ? vmalloc_sync_all+0x130/0x130 [876715.534001] [] ? error_code+0x5a/0x60 [876715.534003] [] ? vmalloc_sync_all+0x130/0x130 [876715.534004] Mem-Info: [876715.534008] active_anon:543864 inactive_anon:208884 isolated_anon:0 [876715.534008] active_file:54 inactive_file:77 isolated_file:0 [876715.534008] unevictable:0 dirty:1 writeback:0 unstable:0 [876715.534008] slab_reclaimable:326 slab_unreclaimable:997 [876715.534008] mapped:88 shmem:4 pagetables:957 bounce:0 [876715.534008] free:21502 free_pcp:289 free_cma:0 [876715.534014] DMA free:12152kB min:64kB low:80kB high:96kB active_anon:1676kB inactive_anon:1928kB active_file:8kB inactive_file:4kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15992kB managed:15916kB mlocked:0kB dirty:0kB writeback:0kB mapped:8kB shmem:0kB slab_reclaimable:16kB slab_unreclaimable:76kB kernel_stack:8kB pagetables:20kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:120 all_unreclaimable? yes [876715.534016] lowmem_reserve[]: 0 839 3023 3023 [876715.534021] Normal free:73380kB min:3528kB low:4408kB high:5292kB active_anon:386788kB inactive_anon:386844kB active_file:208kB inactive_file:276kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:892920kB managed:859928kB mlocked:0kB dirty:4kB writeback:0kB mapped:324kB shmem:0kB slab_reclaimable:1288kB slab_unreclaimable:3912kB kernel_stack:672kB pagetables:3808kB unstable:0kB bounce:0kB free_pcp:564kB local_pcp:564kB free_cma:0kB writeback_tmp:0kB pages_scanned:115004 all_unreclaimable? 
yes [876715.534022] lowmem_reserve[]: 0 0 17471 17471 [876715.534027] HighMem free:476kB min:512kB low:2808kB high:5104kB active_anon:1786992kB inactive_anon:446764kB active_file:0kB inactive_file:28kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:2236296kB managed:2236296kB mlocked:0kB dirty:0kB writeback:0kB mapped:20kB shmem:16kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:592kB local_pcp:592kB free_cma:0kB writeback_tmp:0kB pages_scanned:7836 all_unreclaimable? yes [876715.534028] lowmem_reserve[]: 0 0 0 0 [876715.534030] DMA: 4*4kB (E) 7*8kB (UE) 5*16kB (UEM) 3*32kB (U) 2*64kB (EM) 2*128kB (EM) 3*256kB (UEM) 1*512kB (E) 2*1024kB (UE) 2*2048kB (UE) 1*4096kB (M) = 12152kB [876715.534039] Normal: 149*4kB (UEM) 108*8kB (UEM) 63*16kB (UE) 32*32kB (UEM) 10*64kB (UE) 11*128kB (UEM) 5*256kB (UE) 2*512kB (EM) 2*1024kB (UM) 3*2048kB (UEM) 14*4096kB (M) = 73380kB [876715.534047] HighMem: 1*4kB (U) 1*8kB (U) 1*16kB (M) 2*32kB (UM) 0*64kB 1*128kB (M) 1*256kB (M) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 476kB [876715.534054] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=4096kB Thanks for any hints Lukas -- Lukas Ruf | Ad Personam Consecom | Ad Laborem From rpaprocki at fearnothingproductions.net Sat Jan 23 02:49:44 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Fri, 22 Jan 2016 18:49:44 -0800 Subject: nginx/1.9.9 with modsecurity/2.9.0 crashes with segfault and worker process exited on signal 11 In-Reply-To: <20160123004458.GA62219@lpr.ch> References: <20160110133934.GA65438@lpr.ch> <20160123004458.GA62219@lpr.ch> Message-ID: The modsec devel team is working hard on the new libmodsecurity. You may just be better off waiting for them to put the finishing touches on that project. Nginx + modsec 2.9 likely will get no dev attention moving forward, given that the whole system is being revamped now. 
Sent from my iPhone > On Jan 22, 2016, at 16:44, Lukas wrote: > > Dear all > >> Lukas [2016-01-10 14:39]: >> >> Fascinated by nginx, I attempted to integrate it with modsecurity. >> >> Unfortunately, ever when modsecurity is enabled, nginx reports a >> sefault in sysmessages. > > I tried debugging the issue a bit further (from a user perspective) > with common web-page and CalDAV with the following results: > > * nginx with modsecurity switched off works perfectly as a proxy nginx > * nginx with modsecurity switched on with one owasp rule-set > (modsecurity_crs_20_protocol_violations.conf) works for common > web-pages with multi-media content (quick test without any errors > reported) > * nginx with modsecurity switched on with one owasp rule-set > (modsecurity_crs_20_protocol_violations.conf) does not work for > CalDAV. > error.log: 2016/01/23 01:19:07 [emerg] 4844#0: *7 posix_memalign(16, > 4096) failed (12: Cannot allocate memory) while logging request > * nginx with modsecurity switched on without any ruleset > does not work for CalDAV -- same error > * nginx with modsecurity switched off without any ruleset > does work for CalDAV perfectly. > > With modsecurity switched on, an Out-of-Memory exception took place > always reporting: > > [876715.533926] nginx invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0 > [876715.533930] nginx cpuset=/ mems_allowed=0 > [876715.533936] CPU: 0 PID: 4844 Comm: nginx Not tainted 4.3.3-consecom-ag #1 > [876715.533937] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS debian/1.7.5-1-0-g506b58d-dirty-20140812_231322-gandalf 04/01/2014 > [876715.533939] f5a53ed0 d52542a6 f5a6b7c0 d5110792 d55a6db0 f5a6bab4 000280da 00000000 > [876715.533943] 00000000 ffffffff 0d3f1361 00031d5e f4929cb8 00200282 f4929cb8 f4929cb0 > [876715.533946] d50babb7 00200206 d525956e 00000002 00000002 f5020840 f5020bc4 d55a5702 > [876715.533949] Call Trace: > [876715.533955] [] ? dump_stack+0x3e/0x58 > [876715.533959] [] ? 
dump_header.isra.8+0x65/0x1be > [876715.533963] [] ? delayacct_end+0x47/0xa0 > [876715.533967] [] ? ___ratelimit+0x7e/0xe0 > [876715.533970] [] ? oom_kill_process+0x1d9/0x380 > [876715.533973] [] ? security_capable_noaudit+0x3a/0x60 > [876715.533977] [] ? has_ns_capability_noaudit+0xb/0x20 > [876715.533979] [] ? oom_badness+0x96/0x100 > [876715.533981] [] ? out_of_memory+0x252/0x320 > [876715.533984] [] ? __alloc_pages_nodemask+0x77e/0x7a0 > [876715.533989] [] ? handle_mm_fault+0xd54/0xf50 > [876715.533990] [] ? vma_merge+0x1bf/0x280 > [876715.533992] [] ? do_brk+0x1ca/0x2b0 > [876715.533995] [] ? __do_page_fault+0x137/0x3a0 > [876715.533998] [] ? vmalloc_sync_all+0x130/0x130 > [876715.534001] [] ? error_code+0x5a/0x60 > [876715.534003] [] ? vmalloc_sync_all+0x130/0x130 > [876715.534004] Mem-Info: > [876715.534008] active_anon:543864 inactive_anon:208884 isolated_anon:0 > [876715.534008] active_file:54 inactive_file:77 isolated_file:0 > [876715.534008] unevictable:0 dirty:1 writeback:0 unstable:0 > [876715.534008] slab_reclaimable:326 slab_unreclaimable:997 > [876715.534008] mapped:88 shmem:4 pagetables:957 bounce:0 > [876715.534008] free:21502 free_pcp:289 free_cma:0 > [876715.534014] DMA free:12152kB min:64kB low:80kB high:96kB active_anon:1676kB inactive_anon:1928kB active_file:8kB inactive_file:4kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15992kB managed:15916kB mlocked:0kB dirty:0kB writeback:0kB mapped:8kB shmem:0kB slab_reclaimable:16kB slab_unreclaimable:76kB kernel_stack:8kB pagetables:20kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:120 all_unreclaimable? 
yes > [876715.534016] lowmem_reserve[]: 0 839 3023 3023 > [876715.534021] Normal free:73380kB min:3528kB low:4408kB high:5292kB active_anon:386788kB inactive_anon:386844kB active_file:208kB inactive_file:276kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:892920kB managed:859928kB mlocked:0kB dirty:4kB writeback:0kB mapped:324kB shmem:0kB slab_reclaimable:1288kB slab_unreclaimable:3912kB kernel_stack:672kB pagetables:3808kB unstable:0kB bounce:0kB free_pcp:564kB local_pcp:564kB free_cma:0kB writeback_tmp:0kB pages_scanned:115004 all_unreclaimable? yes > [876715.534022] lowmem_reserve[]: 0 0 17471 17471 > [876715.534027] HighMem free:476kB min:512kB low:2808kB high:5104kB active_anon:1786992kB inactive_anon:446764kB active_file:0kB inactive_file:28kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:2236296kB managed:2236296kB mlocked:0kB dirty:0kB writeback:0kB mapped:20kB shmem:16kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:592kB local_pcp:592kB free_cma:0kB writeback_tmp:0kB pages_scanned:7836 all_unreclaimable? 
yes > [876715.534028] lowmem_reserve[]: 0 0 0 0 > [876715.534030] DMA: 4*4kB (E) 7*8kB (UE) 5*16kB (UEM) 3*32kB (U) 2*64kB (EM) 2*128kB (EM) 3*256kB (UEM) 1*512kB (E) 2*1024kB (UE) 2*2048kB (UE) 1*4096kB (M) = 12152kB > [876715.534039] Normal: 149*4kB (UEM) 108*8kB (UEM) 63*16kB (UE) 32*32kB (UEM) 10*64kB (UE) 11*128kB (UEM) 5*256kB (UE) 2*512kB (EM) 2*1024kB (UM) 3*2048kB (UEM) 14*4096kB (M) = 73380kB > [876715.534047] HighMem: 1*4kB (U) 1*8kB (U) 1*16kB (M) 2*32kB (UM) 0*64kB 1*128kB (M) 1*256kB (M) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 476kB > [876715.534054] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=4096kB > > Thanks for any hints > > Lukas > > > -- > Lukas Ruf | Ad Personam > Consecom | Ad Laborem > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Sat Jan 23 11:14:51 2016 From: francis at daoine.org (Francis Daly) Date: Sat, 23 Jan 2016 11:14:51 +0000 Subject: Absolute rather than relative times in expires directives In-Reply-To: References: <20151118134510.GK3351@daoine.org> Message-ID: <20160123111451.GA19381@daoine.org> On Thu, Jan 21, 2016 at 02:19:03AM -0500, rgrraj wrote: Hi there, > It works fine in 1.9.6. But on 1.9.2 with similar configruation its throwing > "expires" directive invalid value in sites-enabled file. > Any thoughts ? "Use 1.9.6" would seem to be the easy option. Failing that: what is the small configuration that does not do what you want in 1.9.2? Copy-paste enough so that someone else can reproduce the problem that you are reporting. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Jan 23 11:23:26 2016 From: francis at daoine.org (Francis Daly) Date: Sat, 23 Jan 2016 11:23:26 +0000 Subject: limit_req off? 
In-Reply-To: References: Message-ID: <20160123112326.GB19381@daoine.org> On Wed, Jan 20, 2016 at 02:14:16PM -0500, Fry-kun wrote: Hi there, > Sure, except there are a few dozen identical config lines in these > locations... If "limit_req off" was an option, that would be the only > difference. The usual reason why an option is not there, is that no-one has written it yet. (Sometimes, the idea of the option has been explicitly rejected. I do not know the history of this one.) However, one possible immediate workaround, untested by me, might be to define another limit_req_zone with a key variable that you do not set anywhere. (Call it "zone=off", for example.) Then in your nested location, use limit_req zone=off; and that should stop the outer value from being inherited. Good luck with it, f -- Francis Daly francis at daoine.org From highclass99 at gmail.com Sat Jan 23 14:42:46 2016 From: highclass99 at gmail.com (highclass99) Date: Sat, 23 Jan 2016 23:42:46 +0900 Subject: Is ngx_http_perl_module stable enough to use in high traffic production environment? Message-ID: I use perl a lot, and I noticed http://nginx.org/en/docs/http/ngx_http_perl_module.html for several years has been documented as "The module is experimental, caveat emptor applies." So I have been somewhat avoiding testing its use. Does anyone know if this is suitable to use in high traffic production environments? Would it often leak memory even if the perl code was deleting/undefining all variable and no circular references? Also, how does this work with yum rpm. If I update the perl on the system using yum update, will the nginx perl also update or will I have to recompile nginx? Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Sat Jan 23 22:09:07 2016 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sat, 23 Jan 2016 14:09:07 -0800 Subject: Is ngx_http_perl_module stable enough to use in high traffic production environment? 
In-Reply-To: References: Message-ID: Hello! On Sat, Jan 23, 2016 at 6:42 AM, highclass99 wrote: > I use perl a lot, > and I noticed > http://nginx.org/en/docs/http/ngx_http_perl_module.html > for several years has been documented as > "The module is experimental, caveat emptor applies." > So I have been somewhat avoiding testing its use. > > Does anyone know if this is suitable to use in high traffic production > environments? We used to use this perl module in production about 3 years ago for relatively heavy production traffic (we have way more traffic today) but it was slow, unscalable, and blocking on I/O. We switched to the ngx_http_lua_module since then and it has been much faster and guarantees 100% nonblocking network I/O. Disclaimer: I am the maintainer of the ngx_http_lua_module. See https://github.com/openresty/lua-nginx-module#readme We still use Perl for many offline work like automated testing (based on the CPAN module Test::Nginx [1]) and WAF's Lua code generation (the modsecurity rules to Lua translator is written in Perl). I've also been working on the Lemplate compiler [2] in Perl that compiles Perl's TT2 templates down to standalone Lua code runnable atop the ngx_http_lua_module. Hope it helps. Best regards, -agentzh [1] https://metacpan.org/pod/Test::Nginx [2] https://metacpan.org/pod/Lemplate From eliezer at ngtech.co.il Sat Jan 23 22:20:44 2016 From: eliezer at ngtech.co.il (Eliezer Croitoru) Date: Sun, 24 Jan 2016 00:20:44 +0200 Subject: What modules are using the query term "token" for access control? In-Reply-To: <1453401483.2438630.498852394.34537928@webmail.messagingengine.com> References: <56A12084.4000207@ngtech.co.il> <1453401483.2438630.498852394.34537928@webmail.messagingengine.com> Message-ID: <56A3FCBC.6060503@ngtech.co.il> I wanted to mention a nice video about tokens in apis. 
https://www.youtube.com/watch?v=xgkNe6R4Un0 Eliezer From nginx-forum at forum.nginx.org Sun Jan 24 07:21:01 2016 From: nginx-forum at forum.nginx.org (rgrraj) Date: Sun, 24 Jan 2016 02:21:01 -0500 Subject: Absolute rather than relative times in expires directives In-Reply-To: <20160123111451.GA19381@daoine.org> References: <20160123111451.GA19381@daoine.org> Message-ID: <1eeca39c43178a638417590bfd28a0e6.NginxMailingListEnglish@forum.nginx.org> Hi Francis The same works like a charm in 1.9.6 but not in 1.9.2. The error log shows the following, "[emerg] 6151#0: "expires" directive invalid value in" respective sites enabled file. And our configuration is just as follows, ## in nginx.conf map $time_iso8601 $expiresc { default "3h"; ~T22 "@00h00"; ~T23 "@00h00"; } ##in sites enabled file, location /path/to/files/ { expires $expiresc; } doesn't allow nginx to restart and fails with the above error in the nginx error log. Any thoughts? Thanks Govind Posted at Nginx Forum: https://forum.nginx.org/read.php?2,115406,264082#msg-264082 From reallfqq-nginx at yahoo.fr Sun Jan 24 13:43:59 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 24 Jan 2016 14:43:59 +0100 Subject: Absolute rather than relative times in expires directives In-Reply-To: <1eeca39c43178a638417590bfd28a0e6.NginxMailingListEnglish@forum.nginx.org> References: <20160123111451.GA19381@daoine.org> <1eeca39c43178a638417590bfd28a0e6.NginxMailingListEnglish@forum.nginx.org> Message-ID: http://nginx.org/en/CHANGES: Changes with nginx 1.9.7 17 Nov 2015 ... *) Bugfix: the "expires" directive might not work when using variables. Using the latest version of the branch might solve problems... --- *B. R.* On Sun, Jan 24, 2016 at 8:21 AM, rgrraj wrote: > Hi Francis > > The same works like a charm in 1.9.6 but not in 1.9.2. The error log shows > the follow, > > "[emerg] 6151#0: "expires" directive invalid value in" respective sites > enabled file.
> > And our configuration is just as follows, > ## in nginx.conf > map $time_iso8601 $expiresc { > default "3h"; > ~T22 "@00h00"; > ~T23 "@00h00"; > } > > ##in sites enabled file, > location /path/to/files/ { > expires $expiresc; > } > > > doesnt allows to restart nginx and fails with the above error in nginx > error > log. > > Any thoughts ?? > > Thanks > Govind > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,115406,264082#msg-264082 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Jan 24 23:27:57 2016 From: nginx-forum at forum.nginx.org (smsmaddy1981) Date: Sun, 24 Jan 2016 18:27:57 -0500 Subject: Restart Nginx than root user? Message-ID: <9e8c7af35dbf733c7258f55e5ee92414.NginxMailingListEnglish@forum.nginx.org> Hi Team, Is the root user a must for Nginx restarts? Every restart prompts for the root user rather than other users, and the PIDs of processes running under project users are not getting killed. I tried the solution of adding an entry to the sudoers file with the path of an executable script. No luck. Following are the changes made /etc/sudoers %gvp ALL=NOPASSWD: /var/gvp/Nginx/bin/restartNginx #To get rid of Nginx restart with root user and to attain with gvp user /var/gvp/Nginx/bin/restartNginx #! /bin/bash /bin/kill -HUP `cat /var/gvp/Nginx/nginx-1.8.0/logs/nginx.pid` Please review if the above instructions are correct. And, suggest how project users can be used to restart Nginx...to avoid manual intervention and the constant dependency on the root user. Regards, Maddy Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264087,264087#msg-264087 From steve at greengecko.co.nz Mon Jan 25 01:40:56 2016 From: steve at greengecko.co.nz (steve) Date: Mon, 25 Jan 2016 14:40:56 +1300 Subject: Restart Nginx than root user?
In-Reply-To: <9e8c7af35dbf733c7258f55e5ee92414.NginxMailingListEnglish@forum.nginx.org> References: <9e8c7af35dbf733c7258f55e5ee92414.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56A57D28.2080304@greengecko.co.nz> HI, On 01/25/2016 12:27 PM, smsmaddy1981 wrote: > Hi Team, > root user is must for NGinx restart? > Everytime of restart, prompts for root user than other users. PID's are not > getting killed...for the running process with project users. > > I tried solution of adding entry to Sudoers file with an path of executable > script. No luck. Following are the changes made > > /ectc/sudoers > %gvp ALL=NOPASSWD: /var/gvp/Nginx/bin/restartNginx #To get rid of Nginx > restart with root user and to attain with gvp user > > /var/gvp/Nginx/bin/restartNginx > #! /bin/bash > /bin/kill -HUP `cat /var/gvp/Nginx/nginx-1.8.0/logs/nginx.pid` > > Please review, if above instructions are correct. > > And, suggest how project users can be used to restart NGinx...to avoid > manual intervention and dependency of root user always. > > > Regards, > Maddy > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264087,264087#msg-264087 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx You only need root privileges if you are trying to open a port with a number lower than 1024 ( might be 1000, been a while! ). You could set the setuid privileges on the start script, and run it that way... however, most linux shells will forbid that for security reasons. Your sudo solution should work fine, and really is the best one. Here's an entry on one of my servers to allow user alchemy to manage apache ( I know, they're in the stone age! ) on debian wheezy... 
I just provide them access to the standard System V init scripts via the service command # cat /etc/sudoers.d/alchemy alchemy ALL = NOPASSWD: /usr/sbin/service apache2 * Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at forum.nginx.org Mon Jan 25 04:02:48 2016 From: nginx-forum at forum.nginx.org (guitao_w) Date: Sun, 24 Jan 2016 23:02:48 -0500 Subject: [core] the sa_family of accept socket is equal 0 Message-ID: nginx version: 1.63 OS version: hp-ux B.11.23 Client connect server with telnet, by GDB, I find: ngx_event_accept.c, function ngx_event_accept, line 279, the value of 'c->addr_text.len' is zero, but the server socket's value is ok who can get this problem? thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264089,264089#msg-264089 From nginx-forum at forum.nginx.org Mon Jan 25 06:52:25 2016 From: nginx-forum at forum.nginx.org (linsonj) Date: Mon, 25 Jan 2016 01:52:25 -0500 Subject: Cache some API requests in Nginx Message-ID: <7bb866687ec34f10d437d637fe675ea9.NginxMailingListEnglish@forum.nginx.org> I'm seeking advice from experts here. We have the following scenario. We have a Java application running on Tomcat 7, which acts as the API server. User interface files (static HTML and CSS) are served by nginx. Nginx is acting as a reverse proxy here. All API requests are passed to the API server and the rest are served by nginx directly. What we want is to implement a cache mechanism here. That means we want to enable caching for everything, with a few exceptions. We want to exclude some API requests from being cached.
Our configuration is as shown below server { listen 443 ssl; server_name ~^(?<subdomain>.+)\.ourdomain\.com$; add_header X-Frame-Options "SAMEORIGIN"; add_header X-XSS-Protection "1; mode=block"; if ($request_method !~ ^(GET|HEAD|POST)$ ) { return 405; } open_file_cache max=1000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; location / { root /var/www/html/userUI; location ~* \.(?:css|js)$ { expires 1M; access_log off; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; } location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ { expires 1M; access_log off; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; } } location /server { proxy_pass http://upstream/server; proxy_set_header Host $subdomain.ourdomain.com; proxy_connect_timeout 600; proxy_send_timeout 600; proxy_read_timeout 600; send_timeout 600; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_temp_path /var/nginx/proxy_temp; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Proto https; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header Host $host; proxy_redirect off; proxy_cache sd6; add_header X-Proxy-Cache $upstream_cache_status; proxy_cache_bypass $http_cache_control; } ssl on; ssl_certificate /etc/nginx/ssl/ourdomain.com.bundle.crt; ssl_certificate_key /etc/nginx/ssl/ourdomain.com.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; #ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4; ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256
EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA HIGH !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS"; ssl_dhparam /etc/nginx/ssl/dhparams.pem; ssl_session_cache builtin:1000 shared:SSL:10m; ssl_prefer_server_ciphers on; ssl_session_timeout 24h; keepalive_timeout 300; As above, we use cache only for static files located in /var/www/html/userUI We want to implement as such in location /server. This our api server. Means nginx passes api request to tomcat7 ( upstream ) server. We want to enable cache for specific API requests only but need to disable cache for rest of all requests. We want to do the following Exclude all json requests from cache and but need to enable cache for few. Request url will be something like as shown below Request URL:https://ourdomain.com/server/user/api/v7/userProfileImage/get?loginName=user1&_=1453442399073 What this url does is to get the Profile image. We want to enable cache for this specific url. So condition we would like to use is , if request url contains "/userProfileImage/get" we want to set cache and all other requests shouldn't cache. All our api request goes through https://ourdomain.com/server/user/api/v7/ ....... 
To achieve this we changed the settings to following location /server { set $no_cache 0; if ($request_uri ~* "/server/user/api/v7/userProfileImage/get*") { set $no_cache 1; } proxy_pass http://upstream/server; proxy_set_header Host $subdomain.ourdomain.com; proxy_connect_timeout 600; proxy_send_timeout 600; proxy_read_timeout 600; send_timeout 600; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_temp_path /var/nginx/proxy_temp; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Proto https; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header Host $host; proxy_redirect off; proxy_cache sd6; add_header X-Proxy-Cache $upstream_cache_status; proxy_no_cache $no_cache; proxy_cache_bypass $no_cache; } Below are the results of http responses General : Request URL:https://ourdomain.com/server/common/...oginName=user1 Request Method:GET Status Code:200 OK Remote Address:131.212.98.12:443 Response Headers : Cache-Control:no-cache, no-store, must-revalidate Connection:keep-alive Content-Type:image/png;charset=UTF-8 Date:Fri, 22 Jan 2016 07:36:56 GMT Expires:Thu, 01 Jan 1970 00:00:00 GMT Pragma:no-cache Server:nginx Transfer-Encoding:chunked X-Proxy-Cache:MISS It would be great if someone could provide a solution. 
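A note for readers of the configuration above: as posted, the `if` block sets `$no_cache` to 1 for exactly the profile-image URL the poster wants cached, so `proxy_no_cache`/`proxy_cache_bypass` disable caching for that URL and leave it enabled for everything else, which is the inverse of the stated goal. A minimal sketch of the intended logic (cache only that endpoint and bypass the rest), using a `map` instead of `if`; the zone name `sd6` and the URL come from the post, while the 10-minute validity is an assumption:

```nginx
# Bypass the cache by default; whitelist only the profile-image API.
map $request_uri $skip_cache {
    default                                      1;
    ~^/server/user/api/v7/userProfileImage/get   0;
}

server {
    location /server {
        proxy_pass http://upstream/server;

        proxy_cache        sd6;
        proxy_cache_valid  200 10m;      # assumed TTL for cached responses
        proxy_no_cache     $skip_cache;  # don't store when the flag is 1
        proxy_cache_bypass $skip_cache;  # don't serve from cache when the flag is 1

        # The response shown below carries "Cache-Control: no-cache, no-store"
        # and "Expires: 0"; those headers alone stop nginx from caching, so
        # ignoring them may be needed before the whitelisted URL can ever HIT.
        proxy_ignore_headers Cache-Control Expires;
    }
}
```

Also worth noting: the default `proxy_cache_key` includes the query string, and the `_=1453442399073` cache-busting parameter in the example request makes every URL unique, which by itself guarantees a MISS; removing that parameter from the key (or from the client request) would be needed for repeat hits.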
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264091,264091#msg-264091 From nginx-forum at forum.nginx.org Mon Jan 25 06:52:54 2016 From: nginx-forum at forum.nginx.org (guitao_w) Date: Mon, 25 Jan 2016 01:52:54 -0500 Subject: [core] the sa_family of accept socket is equal 0 Message-ID: <25cb7da4f1fc0ae5d18a2516d34091ea.NginxMailingListEnglish@forum.nginx.org> nginx version: 1.63 OS version: hp-ux B.11.23 Client connect server with telnet, by GDB, I find: ngx_event_accept.c, function ngx_event_accept, line 279, the value of 'c->addr_text.len' is zero, but the server socket's value is ok who can get this problem? thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264090,264090#msg-264090 From nginx-forum at forum.nginx.org Mon Jan 25 08:10:09 2016 From: nginx-forum at forum.nginx.org (ex-para) Date: Mon, 25 Jan 2016 03:10:09 -0500 Subject: How to configure a one page site Message-ID: <136ec9bdfcf528544c8435269e8a6a07.NginxMailingListEnglish@forum.nginx.org> I would like to know how to configure a one-page site in which I delete the "Welcome to nginx" page and add my site details. I know how to edit the site, etc., as I have configured a site to work this way before, but I have forgotten how. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264093,264093#msg-264093 From nginx-forum at forum.nginx.org Mon Jan 25 10:33:01 2016 From: nginx-forum at forum.nginx.org (mrglobule) Date: Mon, 25 Jan 2016 05:33:01 -0500 Subject: [FR] https don't works with nginx on debian 8 Message-ID: <4e02acb4722f78a8ef834c8c4102e8cb.NginxMailingListEnglish@forum.nginx.org> Hello everybody, first, sorry for my poor English. Second, I'm a newbie with nginx. I am writing to ask for some help with https and nginx. I have a web site that works fine over http, but when I try to access it through https, my browser tries to download a file and never opens the web site.
You can try to see the result on: http://wa.accary.net ===> OK https://wa.accary.net ===> KO here is my sites-enabled conf: [code]server { listen 80; server_name wa.accary.net; root /var/www/rainloop; index index.php; charset utf-8; location ^~ /data { deny all; } location / { try_files $uri $uri/ index.php; } location ~* \.php$ { include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } server { listen 443 ssl; server_name wa.accary.net; root /var/www/rainloop; index index.php; charset utf-8; ssl on; ssl_certificate /etc/ssl/nginx/accary.net.crt-unified; ssl_certificate_key /etc/ssl/nginx/accary.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers "EECDH+AESGCM:AES128+EECDH:AES256+EECDH"; ssl_prefer_server_ciphers on; ssl_ecdh_curve secp384r1; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_stapling on; ssl_stapling_verify on; resolver 8.8.4.4 8.8.8.8 valid=300s; resolver_timeout 10s; add_header X-Frame-Options "DENY"; add_header X-Content-Type-Options "nosniff"; } [/code] Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264095,264095#msg-264095 From sven at elite12.de Mon Jan 25 10:44:01 2016 From: sven at elite12.de (Sven Kirschbaum) Date: Mon, 25 Jan 2016 11:44:01 +0100 Subject: [FR] https don't works with nginx on debian 8 In-Reply-To: <4e02acb4722f78a8ef834c8c4102e8cb.NginxMailingListEnglish@forum.nginx.org> References: <4e02acb4722f78a8ef834c8c4102e8cb.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, you're missing the PHP block in the SSL server block. Just copy all your location directives into the other server and you'll be fine. Mit Freundlichen Grüßen Sven Kirschbaum 2016-01-25 11:33 GMT+01:00 mrglobule : > Hello every body, > first sorry for my poor english. > Second i'm newby with nginx > I wrote you to give some help about https and nginx.
> > I have a site web works fine in http, but when i try to acces to this > website throw https, my browser try to download a file and never open the > web site. > > You can try to see the result on: > http://wa.accary.net ===> OK > https://wa.accary.net ===> KO > > here is my sites-enabel conf: > > [code]server { > listen 80; > server_name wa.accary.net; > root /var/www/rainloop; > index index.php; > charset utf-8; > > location ^~ /data { > deny all; > } > > location / { > try_files $uri $uri/ index.php; > } > > location ~* \.php$ { > include /etc/nginx/fastcgi_params; > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > } > } > > > server { > listen 443 ssl; > server_name wa.accary.net; > root /var/www/rainloop; > index index.php; > charset utf-8; > > ssl on; > ssl_certificate /etc/ssl/nginx/accary.net.crt-unified; > ssl_certificate_key /etc/ssl/nginx/accary.key; > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers "EECDH+AESGCM:AES128+EECDH:AES256+EECDH"; > > ssl_prefer_server_ciphers on; > ssl_ecdh_curve secp384r1; > > ssl_session_cache shared:SSL:10m; > ssl_session_timeout 10m; > > ssl_stapling on; > ssl_stapling_verify on; > > resolver 8.8.4.4 8.8.8.8 valid=300s; > resolver_timeout 10s; > > add_header X-Frame-Options "DENY"; > add_header X-Content-Type-Options "nosniff"; > } > [/code] > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,264095,264095#msg-264095 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Jan 25 10:56:18 2016 From: nginx-forum at forum.nginx.org (mrglobule) Date: Mon, 25 Jan 2016 05:56:18 -0500 Subject: [FR] https don't works with nginx on debian 8 In-Reply-To: References: Message-ID: Thanks a lot, all it's working now. 
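For readers hitting the same symptom (the browser offers a download instead of rendering the page over HTTPS): the fix Sven describes is to duplicate the `location` blocks, in particular the PHP handler, into the `listen 443 ssl` server block, since `location` directives are never inherited between `server` blocks. A sketch of the corrected HTTPS server, reusing the paths from the original post (the `try_files` fallback is written here as `/index.php`; the bare `index.php` in the original is a relative URI and usually a mistake):

```nginx
server {
    listen 443 ssl;
    server_name wa.accary.net;
    root /var/www/rainloop;
    index index.php;
    charset utf-8;

    ssl_certificate     /etc/ssl/nginx/accary.net.crt-unified;
    ssl_certificate_key /etc/ssl/nginx/accary.key;

    location ^~ /data {
        deny all;
    }

    location / {
        try_files $uri $uri/ /index.php;
    }

    # Without this handler, .php requests fall through to static file
    # serving and the browser downloads the raw PHP source.
    location ~* \.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```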
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264095,264097#msg-264097 From mdounin at mdounin.ru Mon Jan 25 15:04:26 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 Jan 2016 18:04:26 +0300 Subject: Absolute rather than relative times in expires directives In-Reply-To: <1eeca39c43178a638417590bfd28a0e6.NginxMailingListEnglish@forum.nginx.org> References: <20160123111451.GA19381@daoine.org> <1eeca39c43178a638417590bfd28a0e6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160125150425.GD9449@mdounin.ru> Hello! On Sun, Jan 24, 2016 at 02:21:01AM -0500, rgrraj wrote: > Hi Francis > > The same works like a charm in 1.9.6 but not in 1.9.2. The error log shows > the follow, > > "[emerg] 6151#0: "expires" directive invalid value in" respective sites > enabled file. > > And our configuration is just as follows, > ## in nginx.conf > map $time_iso8601 $expiresc { > default "3h"; > ~T22 "@00h00"; > ~T23 "@00h00"; > } > > ##in sites enabled file, > location /path/to/files/ { > expires $expiresc; > } > > > doesnt allows to restart nginx and fails with the above error in nginx error > log. The message suggests that you are using not 1.9.2, but an older version without variables support in the expires directive. Variables support in the expires directive was introduced in nginx 1.7.9, see http://nginx.org/r/expires. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Jan 25 15:44:26 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 Jan 2016 18:44:26 +0300 Subject: [core] the sa_family of accept socket is equal 0 In-Reply-To: <25cb7da4f1fc0ae5d18a2516d34091ea.NginxMailingListEnglish@forum.nginx.org> References: <25cb7da4f1fc0ae5d18a2516d34091ea.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160125154426.GF9449@mdounin.ru> Hello! 
On Mon, Jan 25, 2016 at 01:52:54AM -0500, guitao_w wrote: > nginx version: 1.63 > OS version: hp-ux B.11.23 > > Client connect server with telnet, by GDB, I find: > > ngx_event_accept.c, function ngx_event_accept, line 279, the value of > 'c->addr_text.len' is zero, > but the server socket's value is ok > > who can get this problem? Unfortunately, HP-UX B.11.23 isn't something easy to find for testing. So I wouldn't expect this can be debugged by anyone except you. My best guess is that the problem is caused by multiple versions of socket functions on HP-UX, see this commit for some details (it was done after some basic testing on HP-UX B.11.31): http://hg.nginx.org/nginx/rev/489839d07b38 Though given that 11.23 is quite old it is possible that the situation is different there. -- Maxim Dounin http://nginx.org/ From yoel07 at gmail.com Mon Jan 25 17:32:30 2016 From: yoel07 at gmail.com (=?UTF-8?Q?Yoel_Jim=C3=A9nez_Del_Valle?=) Date: Mon, 25 Jan 2016 12:32:30 -0500 Subject: PHP path_info problem Message-ID: I have a web app in PHP that relies on path_info to process a request, but I always get a 404 when I do http://localhost/folder/app/app.php/controller/method; nginx always responds 404. Any ideas how to solve this and gain access to the request? I can access http://localhost/folder/app/app.php, but after the last "p" in the php extension anything else shows up as 404. -- Yoel Jimenez Del Valle Ingeniero Informático -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Jan 26 07:02:52 2016 From: nginx-forum at forum.nginx.org (blason) Date: Tue, 26 Jan 2016 02:02:52 -0500 Subject: Pages rewrite In-Reply-To: References: Message-ID: <92a9762f1e891e5eb525729b982d8bc4.NginxMailingListEnglish@forum.nginx.org> Hi Team, Any update? I am still failing to achieve the same.
How do I rewrite the URLs Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264068,264120#msg-264120 From h1994st at gmail.com Tue Jan 26 07:14:04 2016 From: h1994st at gmail.com (Shengtuo Hu) Date: Tue, 26 Jan 2016 15:14:04 +0800 Subject: HTTP/2 max header size Message-ID: Hi, I was using "nghttp" command line tool to test NGINX. I enabled "continuation" option in "nghttp" command line tool, which filled the HEADERS frame with a very large header field/value. Then I got an GOAWAY frame with error code of "ENHANCE_YOUR_CALM(0x0b)". Then I checked the debug log file of the server, and found "client exceeded http2_max_header_size limit while processing HTTP/2 connection". May I know the consideration about this limitation? For a client, it may not be able to know the precise value of "http2_max_header_size". Once a client gets this error, how can it recover from it or request the resource successfully? Thanks! Shengtuo Hu -------------- next part -------------- An HTML attachment was scrubbed... URL: From h1994st at gmail.com Tue Jan 26 08:00:08 2016 From: h1994st at gmail.com (Shengtuo Hu) Date: Tue, 26 Jan 2016 16:00:08 +0800 Subject: Error occurs when both padding and continuation are enabled Message-ID: Hi, Another error I met recently. It occurred when both padding and continuation are enabled. A normal HEADERS frame was divided as follows: HEADERS ===> HEADERS(PADDED_FLAG) + CONTINUATION + CONTINUATION After sending these frames, I got an error. In the debug log file, I found "client sent inappropriate frame while CONTINUATION was expected while processing HTTP/2 connection". Then I read the source code (v 1.9.9), and located the function "ngx_http_v2_handle_continuation" (ngx_http_v2.c, line 1749). It seems NGINX does not skip the "padding part", but tries to read "type" field in the next CONTINUATION frame directly. However, when testing padding and continuation frame separately, NGINX can handle both cases well. 
I don't know whether I did something wrong or this is a bug in NGINX. I also sent the same frames to other servers (nghttp, h2o, GWS), and got the responses successfully. Thanks! Shengtuo Hu -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Jan 26 11:47:35 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 26 Jan 2016 14:47:35 +0300 Subject: HTTP/2 max header size In-Reply-To: References: Message-ID: <1843523.7fPZo31UlL@vbart-workstation> On Tuesday 26 January 2016 15:14:04 Shengtuo Hu wrote: > Hi, > > I was using "nghttp" command line tool to test NGINX. I enabled > "continuation" option in "nghttp" command line tool, which filled the > HEADERS frame with a very large header field/value. Then I got an GOAWAY > frame with error code of "ENHANCE_YOUR_CALM(0x0b)". Then I checked the > debug log file of the server, and found "client exceeded > http2_max_header_size limit while processing HTTP/2 connection". > > May I know the consideration about this limitation? For a client, it may > not be able to know the precise value of "http2_max_header_size". Decompression and processing of headers require proportional amount of memory. The consideration is simple: to limit the amount of memory that can be eaten by a client and prevent DoS attack on the server. > Once a client gets this error, how can it recover from it or request > the resource successfully? > [..] If a client gets this error then either it is doing something wrong by sending incorrect requests, or the server is configured incorrectly. wbr, Valentin V. Bartenev From vbart at nginx.com Tue Jan 26 11:54:56 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 26 Jan 2016 14:54:56 +0300 Subject: Error occurs when both padding and continuation are enabled In-Reply-To: References: Message-ID: <5749540.g02AWSFeFM@vbart-workstation> On Tuesday 26 January 2016 16:00:08 Shengtuo Hu wrote: > Hi, > > Another error I met recently. 
It occurred when both padding and > continuation are enabled. > > A normal HEADERS frame was divided as follows: > HEADERS ===> HEADERS(PADDED_FLAG) + CONTINUATION + CONTINUATION > > After sending these frames, I got an error. In the debug log file, I found > "client sent inappropriate frame while CONTINUATION was expected while > processing HTTP/2 connection". Then I read the source code (v 1.9.9), and > located the function "ngx_http_v2_handle_continuation" (ngx_http_v2.c, line > 1749). It seems NGINX does not skip the "padding part", but tries to read > "type" field in the next CONTINUATION frame directly. > [..] Yes, you're right. It isn't able to skip padding between HEADERS and CONTINUATION frames. wbr, Valentin V. Bartenev From rainer at ultra-secure.de Tue Jan 26 12:09:59 2016 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Tue, 26 Jan 2016 13:09:59 +0100 Subject: How to check which directive actually delivers the files? Message-ID: Hi, I've setup nginx + php-fpm for a typo3. It looks like this: server { listen 80; server_name the_server; access_log /home/the_server/logs/nginx_access_log mycustom; error_log /home/the_server/logs/nginx_error_log; root /home/the_server/FTPROOT/htdocs ; index index.php; location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } # Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac). location ~ /\. 
{ deny all; access_log off; log_not_found off; } location ~ [^/]\.php(/|$) { include /usr/local/etc/nginx/fastcgi-includes.conf; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; if (!-f $document_root$fastcgi_script_name) { return 404; } fastcgi_pass unix:/var/run/fastcgi/the_server.sock; fastcgi_index index.php; include fastcgi_params; } client_max_body_size 100M; location ~ /\.(js|css)$ { expires 604800s; } if (!-e $request_filename){ rewrite ^/(.+)\.(\d+)\.(php|js|css|png|jpg|gif|gzip)$ /$1.$3 last; } location ~* ^/fileadmin/(.*/)?_recycler_/ { deny all; } location ~* ^/fileadmin/templates/.*(\.txt|\.ts)$ { deny all; } location ~* ^/typo3conf/ext/[^/]+/Resources/Private/ { deny all; } location ~* ^/(typo3/|fileadmin/|typo3conf/|typo3temp/|uploads/|favicon\.ico) { } location / { if ($query_string ~ ".+") { return 405; } if ($http_cookie ~ 'nc_staticfilecache|be_typo_user|fe_typo_user' ) { return 405; } # pass POST requests to PHP if ($request_method !~ ^(GET|HEAD)$ ) { return 405; } if ($http_pragma = 'no-cache') { return 405; } if ($http_cache_control = 'no-cache') { return 405; } error_page 405 = @nocache; try_files /typo3temp/tx_ncstaticfilecache/${scheme}/$host${request_uri}index.html @nocache; } location @nocache { try_files $uri $uri/ /index.php$is_args$args; } } However, I'm not sure if nginx actually delivers the static file. The reason I'm not so sure is that I have varnish+nginx in front of this (on a different host) and varnish reports a "MISS" for what should be static deliveries from nginx. I activated access-logging in the php-fpm pool and it looks like it's actually not working as intended. So, how can I see what it's actually trying to do? From cata.vasile at nxp.com Tue Jan 26 15:15:40 2016 From: cata.vasile at nxp.com (Catalin Vasile) Date: Tue, 26 Jan 2016 15:15:40 +0000 Subject: multiple ssl_engine instances Message-ID: Does nginx support multiple ssl engines? 
I would like to use the cryptodev engine for encryption and the engine for Intel RNG. Cata From mdounin at mdounin.ru Tue Jan 26 16:31:28 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Jan 2016 19:31:28 +0300 Subject: nginx-1.9.10 Message-ID: <20160126163128.GO9449@mdounin.ru> Changes with nginx 1.9.10 26 Jan 2016 *) Security: invalid pointer dereference might occur during DNS server response processing if the "resolver" directive was used, allowing an attacker who is able to forge UDP packets from the DNS server to cause segmentation fault in a worker process (CVE-2016-0742). *) Security: use-after-free condition might occur during CNAME response processing if the "resolver" directive was used, allowing an attacker who is able to trigger name resolution to cause segmentation fault in a worker process, or might have potential other impact (CVE-2016-0746). *) Security: CNAME resolution was insufficiently limited if the "resolver" directive was used, allowing an attacker who is able to trigger arbitrary name resolution to cause excessive resource consumption in worker processes (CVE-2016-0747). *) Feature: the "auto" parameter of the "worker_cpu_affinity" directive. *) Bugfix: the "proxy_protocol" parameter of the "listen" directive did not work with IPv6 listen sockets. *) Bugfix: connections to upstream servers might be cached incorrectly when using the "keepalive" directive. *) Bugfix: proxying used the HTTP method of the original request after an "X-Accel-Redirect" redirection.
--
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Tue Jan 26 16:31:48 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 26 Jan 2016 19:31:48 +0300
Subject: nginx-1.8.1
Message-ID: <20160126163147.GS9449@mdounin.ru>

Changes with nginx 1.8.1                                         26 Jan 2016

    *) Security: invalid pointer dereference might occur during DNS server
       response processing if the "resolver" directive was used, allowing
       an attacker who is able to forge UDP packets from the DNS server to
       cause segmentation fault in a worker process (CVE-2016-0742).

    *) Security: use-after-free condition might occur during CNAME response
       processing if the "resolver" directive was used, allowing an attacker
       who is able to trigger name resolution to cause segmentation fault in
       a worker process, or might have potential other impact
       (CVE-2016-0746).

    *) Security: CNAME resolution was insufficiently limited if the
       "resolver" directive was used, allowing an attacker who is able to
       trigger arbitrary name resolution to cause excessive resource
       consumption in worker processes (CVE-2016-0747).

    *) Bugfix: the "proxy_protocol" parameter of the "listen" directive did
       not work if not specified in the first "listen" directive for a
       listen socket.

    *) Bugfix: nginx might fail to start on some old Linux variants; the
       bug had appeared in 1.7.11.

    *) Bugfix: a segmentation fault might occur in a worker process if the
       "try_files" and "alias" directives were used inside a location given
       by a regular expression; the bug had appeared in 1.7.1.

    *) Bugfix: the "try_files" directive inside a nested location given by
       a regular expression worked incorrectly if the "alias" directive was
       used in the outer location.

    *) Bugfix: "header already sent" alerts might appear in logs when using
       cache; the bug had appeared in 1.7.5.

    *) Bugfix: a segmentation fault might occur in a worker process if
       different ssl_session_cache settings were used in different virtual
       servers.
*) Bugfix: the "expires" directive might not work when using variables. *) Bugfix: if nginx was built with the ngx_http_spdy_module it was possible to use the SPDY protocol even if the "spdy" parameter of the "listen" directive was not specified. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Jan 26 16:32:12 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Jan 2016 19:32:12 +0300 Subject: nginx security advisory (CVE-2016-0742, CVE-2016-0746, CVE-2016-0747) Message-ID: <20160126163212.GW9449@mdounin.ru> Hello! Several problems in nginx resolver were identified, which might allow an attacker to cause worker process crash, or might have potential other impact: - Invalid pointer dereference might occur during DNS server response processing, allowing an attacker who is able to forge UDP packets from the DNS server to cause worker process crash (CVE-2016-0742). - Use-after-free condition might occur during CNAME response processing. This problem allows an attacker who is able to trigger name resolution to cause worker process crash, or might have potential other impact (CVE-2016-0746). - CNAME resolution was insufficiently limited, allowing an attacker who is able to trigger arbitrary name resolution to cause excessive resource consumption in worker processes (CVE-2016-0747). The problems affect nginx 0.6.18 - 1.9.9 if the "resolver" directive is used in a configuration file. The problems are fixed in nginx 1.9.10, 1.8.1. -- Maxim Dounin http://nginx.org/ From krishna.pmv at gmail.com Tue Jan 26 16:33:16 2016 From: krishna.pmv at gmail.com (Krishna PMV) Date: Tue, 26 Jan 2016 22:03:16 +0530 Subject: pcre_exec() failed: -10 Message-ID: Hello, I've a rule below in nginx config to 404 if the argument of search query is a non printable ascii character. Nginx is compiled with pcre and pcre seem** (refer 4.d below) to have utf8 support and unicode support enabled but nginx fails with "pcre_exec() failed: -10" error when it encounters the regex. 
Any pointers, please? Please refer below for more details.

TIA,
Krishna!

1. from nginx config:

if ($args ~ "(*UTF8)^.*[^\x21-\x7E].*$") {
    return 404;
}

2. from access log:

54.169.206.188 [26/Jan/2016:20:29:10 /search?userQuery=Taurm\xE9+Taurme+Corporate+Casual+Slip+On+Shoe+()

3. from error log:

2016/01/26 20:29:10 [alert] 27700#0: *43933484 on "userQuery=Taurm+Taurme+Corporate+Casual+Slip+On+Shoe+()" using "(*UTF8)^.*[^\x00-\x7F].*$", client: 54.169.206.188, server: search.paytm.com, request: "GET /search?userQuery=Taurm+Taurme+Corporate+Casual+Slip+On+Shoe+() HTTP/1.1", host: "search.paytm.com"

4. nginx and pcre

a) nginx -V
nginx version: openresty/1.7.2.1

b) ldd /usr/sbin/nginx|grep pcre
libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007fbdb64a7000)

c) dpkg -l|grep pcre
ii libpcre3:amd64 1:8.31-2ubuntu2.1 amd64 Perl 5 Compatible Regular Expression Library - runtime files

d) Since ubuntu no longer ships with pcretest, there was no easy way to see if utf8 support is enabled, but from ubuntu package build logs for our version, I can see that is how it is built. Also, I can confirm the same from config.log when building the debian package locally on my machine: https://launchpadlibrarian.net/212590020/buildlog_ubuntu-trusty-amd64.pcre3_1%3A8.31-2ubuntu2.1_BUILDING.txt.gz

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Tue Jan 26 16:55:00 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 26 Jan 2016 19:55:00 +0300
Subject: pcre_exec() failed: -10
In-Reply-To:
References:
Message-ID: <20160126165459.GA9449@mdounin.ru>

Hello!

On Tue, Jan 26, 2016 at 10:03:16PM +0530, Krishna PMV wrote:

> Hello,
>
> I've a rule below in nginx config to 404 if the argument of search query is
> a non printable ascii character.
Nginx is compiled with pcre and pcre > seem** (refer 4.d below) to have utf8 support and unicode support enabled > but nginx fails with "pcre_exec() failed: -10" error when it encounters the > regex. > > Any pointers, please? Please refer below for more details. > > TIA, > > Krishna! > > *1. from nginx config* > > if ($args ~ "(*UTF8)^.*[^\x21-\x7E].*$") { > > return 404; > > } > > *..* > > > *2. from access log:* > > 54.169.206.188 [26/Jan/2016:20:29:10 > /search?userQuery=Taurm\xE9+Taurme+Corporate+Casual+Slip+On+Shoe+() > > *3. from error log:* > > 2016/01/26 20:29:10 [alert] 27700#0: *43933484 on > "userQuery=Taurm+Taurme+Corporate+Casual+Slip+On+Shoe+()" using > "(*UTF8)^.*[^\x00-\x7F].*$", client: 54.169.206.188, server: > search.paytm.com, request: "GET > /search?userQuery=Taurm+Taurme+Corporate+Casual+Slip+On+Shoe+() HTTP/1.1", > host: "search.paytm.com" The error is returned by PCRE due to an invalid UTF-8 string being checked as UTF-8 one. Note "\xE9" in the access log entry. Some additional information can be found in the pcreunicode manpage. -- Maxim Dounin http://nginx.org/ From francis at daoine.org Tue Jan 26 22:31:25 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 26 Jan 2016 22:31:25 +0000 Subject: PHP path_info problem In-Reply-To: References: Message-ID: <20160126223125.GC19381@daoine.org> On Mon, Jan 25, 2016 at 12:32:30PM -0500, Yoel Jim?nez Del Valle wrote: Hi there, > I have a web app in php that relays on path_info to processs a request but > i always get a 404 when do > http://localhost/folder/app/app.php/controller/method > nginx always respond 404 any ideas how to solve this and gain access to the > request Which of your location{} blocks did you tell nginx to use to process the request for /folder/app/app.php/controller/method ? Which of your location{} blocks do you want nginx to use to process that request? 
http://nginx.org/r/location > i can acces to > http://localhost/folder/app/app.php but after last p in php extension > anything else show up 404 If you have "location ~ php$", that would match the second request there, but not the first. Perhaps you want "location ~ php" or "location /folder/app/app.php" or something else instead? Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Jan 26 22:40:24 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 26 Jan 2016 22:40:24 +0000 Subject: Pages rewrite In-Reply-To: References: Message-ID: <20160126224024.GD19381@daoine.org> On Fri, Jan 22, 2016 at 01:00:31PM -0500, blason wrote: Hi there, > I need a help on below topic and I wanted to achieve URL Rewrite like this > > We want to redirect our domain pages from source to destination one > > Source : Original Page > www.xxxx.com/index.php?id=news > > Destination : > www.xxxxx.com/news.html Option 1 - do it in php. Write an index.php that will issue suitable 301 redirects for whatever arguments it gets. Option 2 - do it in nginx.conf. In your "location = /index.php" block, use the appropriate logic. If you know you will always get exactly one "id" parameter that will always map to the obvious new url, something like return 301 /$arg_id.html; (untested) would probably work. If you have different logic -- what should happen with a request for /index.php?id=news&key=value, or for /index.php?id1=news, or for /index.php?id=news&id=help, or for /index.php -- then when you describe your intention, it may become obvious how to implement it. If it is straightforward, then http://nginx.org/r/map and examples may help; if it is not, you may find it simpler to work in a different language such as php. (Note: in the above I have assumed that the source and destination hostnames are the same. If they really are not, and the number of x's is intentionally different, then you would need to include the full http:// url in the return directive.) 
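If the map route appeals, a minimal untested sketch — the map block goes at http{} level, and any id-to-page pairs beyond "news" are placeholders you would fill in yourself:

```nginx
# http{} level: map known ?id= values to their new urls
map $arg_id $new_page {
    default "";
    news    /news.html;
}

# server{} level: redirect only when the id is one we know about
location = /index.php {
    if ($new_page) {
        return 301 $new_page;
    }
    # requests with no matching id fall through to normal handling
}
```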
Good luck with it, f -- Francis Daly francis at daoine.org From kworthington at gmail.com Wed Jan 27 14:18:18 2016 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 27 Jan 2016 09:18:18 -0500 Subject: [nginx-announce] nginx-1.9.10 In-Reply-To: <20160126163134.GP9449@mdounin.ru> References: <20160126163134.GP9449@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.9.10 for Windows https://kevinworthington.com/nginxwin1910 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Jan 26, 2016 at 11:31 AM, Maxim Dounin wrote: > Changes with nginx 1.9.10 26 Jan > 2016 > > *) Security: invalid pointer dereference might occur during DNS server > response processing if the "resolver" directive was used, allowing > an > attacker who is able to forge UDP packets from the DNS server to > cause segmentation fault in a worker process (CVE-2016-0742). > > *) Security: use-after-free condition might occur during CNAME response > processing if the "resolver" directive was used, allowing an > attacker > who is able to trigger name resolution to cause segmentation fault > in > a worker process, or might have potential other impact > (CVE-2016-0746). > > *) Security: CNAME resolution was insufficiently limited if the > "resolver" directive was used, allowing an attacker who is able to > trigger arbitrary name resolution to cause excessive resource > consumption in worker processes (CVE-2016-0747). > > *) Feature: the "auto" parameter of the "worker_cpu_affinity" > directive. 
> > *) Bugfix: the "proxy_protocol" parameter of the "listen" directive did > not work with IPv6 listen sockets. > > *) Bugfix: connections to upstream servers might be cached incorrectly > when using the "keepalive" directive. > > *) Bugfix: proxying used the HTTP method of the original request after > an "X-Accel-Redirect" redirection. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Wed Jan 27 14:21:53 2016 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 27 Jan 2016 09:21:53 -0500 Subject: [nginx-announce] nginx-1.8.1 In-Reply-To: <20160126163151.GT9449@mdounin.ru> References: <20160126163151.GT9449@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.8.1 for Windows https://kevinworthington.com/nginxwin181 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Jan 26, 2016 at 11:31 AM, Maxim Dounin wrote: > Changes with nginx 1.8.1 26 Jan > 2016 > > *) Security: invalid pointer dereference might occur during DNS server > response processing if the "resolver" directive was used, allowing > an > attacker who is able to forge UDP packets from the DNS server to > cause segmentation fault in a worker process (CVE-2016-0742). 
> > *) Security: use-after-free condition might occur during CNAME response > processing if the "resolver" directive was used, allowing an > attacker > who is able to trigger name resolution to cause segmentation fault > in > a worker process, or might have potential other impact > (CVE-2016-0746). > > *) Security: CNAME resolution was insufficiently limited if the > "resolver" directive was used, allowing an attacker who is able to > trigger arbitrary name resolution to cause excessive resource > consumption in worker processes (CVE-2016-0747). > > *) Bugfix: the "proxy_protocol" parameter of the "listen" directive did > not work if not specified in the first "listen" directive for a > listen socket. > > *) Bugfix: nginx might fail to start on some old Linux variants; the > bug > had appeared in 1.7.11. > > *) Bugfix: a segmentation fault might occur in a worker process if the > "try_files" and "alias" directives were used inside a location given > by a regular expression; the bug had appeared in 1.7.1. > > *) Bugfix: the "try_files" directive inside a nested location given by > a > regular expression worked incorrectly if the "alias" directive was > used in the outer location. > > *) Bugfix: "header already sent" alerts might appear in logs when using > cache; the bug had appeared in 1.7.5. > > *) Bugfix: a segmentation fault might occur in a worker process if > different ssl_session_cache settings were used in different virtual > servers. > > *) Bugfix: the "expires" directive might not work when using variables. > > *) Bugfix: if nginx was built with the ngx_http_spdy_module it was > possible to use the SPDY protocol even if the "spdy" parameter of > the > "listen" directive was not specified. 
>
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx-announce mailing list
> nginx-announce at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-announce
>

From nginx-forum at forum.nginx.org Wed Jan 27 04:44:10 2016
From: nginx-forum at forum.nginx.org (George)
Date: Tue, 26 Jan 2016 23:44:10 -0500
Subject: nginx-1.9.10
In-Reply-To: <20160126163128.GO9449@mdounin.ru>
References: <20160126163128.GO9449@mdounin.ru>
Message-ID:

Thanks, updated to 1.9.10 fine with ngx_brotli + ngx_pagespeed 1.10 branch :)

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264158,264187#msg-264187

From nginx-forum at forum.nginx.org Tue Jan 26 16:54:15 2016
From: nginx-forum at forum.nginx.org (Ortal)
Date: Tue, 26 Jan 2016 11:54:15 -0500
Subject: Best performance test tool
Message-ID: <4d55c7a08e696eb077654ad028b95d5d.NginxMailingListEnglish@forum.nginx.org>

Hello,

I created an NGINX module and I am trying to benchmark it. I would like to check the performance of POST requests (different files...). I tried ab, wrk and locust, running each of the tools both on the same server as NGINX and on different servers. In all of my tests NGINX did not pass 30% CPU while the tools got to over 100%.

My question is: which tool can I use to test NGINX with POST requests in the best way?

Thanks,
Ortal

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264161,264161#msg-264161

From infos at opendoc.net Wed Jan 27 17:10:24 2016
From: infos at opendoc.net (Alexandre)
Date: Wed, 27 Jan 2016 18:10:24 +0100
Subject: load balancer on nginx : how to monitoring backend ?
Message-ID: <56A8FA00.8050604@opendoc.net>

Hello everyone,

I use nginx 1.8.0 on debian (official nginx package with nginx repo). I created a reverse proxy SSL cluster with load balancing. Everything works fine.
--- upstream myapp { server srv1; server srv2; } --- --- location ~ ^/myapp { rewrite ^/(.*) /$1 break; proxy_pass http://myapp; } --- However I wish to monitor the status of the backend. How can I do ? Thank you. compiler option --- built by gcc 4.9.2 (Debian 4.9.2-10) built with OpenSSL 1.0.1k 8 Jan 2015 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-mail --with-mail_ssl_module --with-file-aio --with-http_spdy_module --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,--as-needed' --with-ipv6 --- From agentzh at gmail.com Wed Jan 27 19:40:01 2016 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 27 Jan 2016 11:40:01 -0800 Subject: load balancer on nginx : how to monitoring backend ? In-Reply-To: <56A8FA00.8050604@opendoc.net> References: <56A8FA00.8050604@opendoc.net> Message-ID: Hello! On Wed, Jan 27, 2016 at 9:10 AM, Alexandre wrote: > > However I wish to monitor the status of the backend. How can I do ? 
> You may find the lua-resty-upstream-healthcheck library helpful: https://github.com/openresty/lua-resty-upstream-healthcheck But it's much easier to install via the OpenResty bundle though: https://openresty.org/ Regards, -agentzh From infos at opendoc.net Wed Jan 27 21:37:35 2016 From: infos at opendoc.net (Alexandre) Date: Wed, 27 Jan 2016 22:37:35 +0100 Subject: load balancer on nginx : how to monitoring backend ? In-Reply-To: References: <56A8FA00.8050604@opendoc.net> Message-ID: <56A9389F.7090807@opendoc.net> Hello, thank you I'll test. It is not possible to test the nginx upstream directly ? Thank you Alexandre On 27/01/16 20:40, Yichun Zhang (agentzh) wrote: > Hello! > > On Wed, Jan 27, 2016 at 9:10 AM, Alexandre wrote: >> >> However I wish to monitor the status of the backend. How can I do ? >> > > You may find the lua-resty-upstream-healthcheck library helpful: > > https://github.com/openresty/lua-resty-upstream-healthcheck > > But it's much easier to install via the OpenResty bundle though: > > https://openresty.org/ > > Regards, > -agentzh > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From infos at opendoc.net Wed Jan 27 22:32:32 2016 From: infos at opendoc.net (Alexandre) Date: Wed, 27 Jan 2016 23:32:32 +0100 Subject: load balancer on nginx : how to monitoring backend ? In-Reply-To: <56A9389F.7090807@opendoc.net> References: <56A8FA00.8050604@opendoc.net> <56A9389F.7090807@opendoc.net> Message-ID: <56A94580.4010002@opendoc.net> Monitoring of backend seems to be possible with NGINX PLUS. Can you confirm ? http://nginx.org/en/docs/http/ngx_http_upstream_module.html https://www.nginx.com/products/ Thank you Alexande. On 27/01/16 22:37, Alexandre wrote: > Hello, thank you I'll test. > > It is not possible to test the nginx upstream directly ? > > Thank you > > Alexandre > > > On 27/01/16 20:40, Yichun Zhang (agentzh) wrote: >> Hello! 
>> >> On Wed, Jan 27, 2016 at 9:10 AM, Alexandre wrote: >>> >>> However I wish to monitor the status of the backend. How can I do ? >>> >> >> You may find the lua-resty-upstream-healthcheck library helpful: >> >> https://github.com/openresty/lua-resty-upstream-healthcheck >> >> But it's much easier to install via the OpenResty bundle though: >> >> https://openresty.org/ >> >> Regards, >> -agentzh >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Thu Jan 28 04:16:18 2016 From: nginx-forum at forum.nginx.org (tdavis) Date: Wed, 27 Jan 2016 23:16:18 -0500 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: <628cace4e5c7dd8fe6f300b9cd055cce.NginxMailingListEnglish@forum.nginx.org> References: <20150714173557.GX93501@mdounin.ru> <628cace4e5c7dd8fe6f300b9cd055cce.NginxMailingListEnglish@forum.nginx.org> Message-ID: <161033d5b126cb68d8df4e8f2d193174.NginxMailingListEnglish@forum.nginx.org> Any update on this issue? Is there a fix I can apply on the AWS side? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,237386,264195#msg-264195 From nginx-forum at forum.nginx.org Thu Jan 28 06:59:05 2016 From: nginx-forum at forum.nginx.org (dgobaud) Date: Thu, 28 Jan 2016 01:59:05 -0500 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: <161033d5b126cb68d8df4e8f2d193174.NginxMailingListEnglish@forum.nginx.org> References: <20150714173557.GX93501@mdounin.ru> <628cace4e5c7dd8fe6f300b9cd055cce.NginxMailingListEnglish@forum.nginx.org> <161033d5b126cb68d8df4e8f2d193174.NginxMailingListEnglish@forum.nginx.org> Message-ID: Yes the answer is on the elastic load balancer you must use protocol TCP or SSL - not HTTP or HTTPS. The HTTP/HTTPS listeners keep the connections open for reuse........ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,237386,264196#msg-264196 From maxim at nginx.com Thu Jan 28 07:27:40 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 28 Jan 2016 10:27:40 +0300 Subject: load balancer on nginx : how to monitoring backend ? In-Reply-To: <56A94580.4010002@opendoc.net> References: <56A8FA00.8050604@opendoc.net> <56A9389F.7090807@opendoc.net> <56A94580.4010002@opendoc.net> Message-ID: <56A9C2EC.8010007@nginx.com> On 1/28/16 1:32 AM, Alexandre wrote: > Monitoring of backend seems to be possible with NGINX PLUS. Can you > confirm ? > > http://nginx.org/en/docs/http/ngx_http_upstream_module.html > https://www.nginx.com/products/ > Yes, that's right. > Thank you > > Alexande. > > On 27/01/16 22:37, Alexandre wrote: >> Hello, thank you I'll test. >> >> It is not possible to test the nginx upstream directly ? >> >> Thank you >> >> Alexandre >> >> >> On 27/01/16 20:40, Yichun Zhang (agentzh) wrote: >>> Hello! >>> >>> On Wed, Jan 27, 2016 at 9:10 AM, Alexandre wrote: >>>> >>>> However I wish to monitor the status of the backend. How can I do ? 
>>>> >>> >>> You may find the lua-resty-upstream-healthcheck library helpful: >>> >>> https://github.com/openresty/lua-resty-upstream-healthcheck >>> >>> But it's much easier to install via the OpenResty bundle though: >>> >>> https://openresty.org/ >>> >>> Regards, >>> -agentzh >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Maxim Konovalov From infos at opendoc.net Thu Jan 28 07:36:17 2016 From: infos at opendoc.net (Alexandre) Date: Thu, 28 Jan 2016 08:36:17 +0100 Subject: load balancer on nginx : how to monitoring backend ? In-Reply-To: <56A9C2EC.8010007@nginx.com> References: <56A8FA00.8050604@opendoc.net> <56A9389F.7090807@opendoc.net> <56A94580.4010002@opendoc.net> <56A9C2EC.8010007@nginx.com> Message-ID: <56A9C4F1.8020604@opendoc.net> Hello, On 28/01/16 08:27, Maxim Konovalov wrote: > On 1/28/16 1:32 AM, Alexandre wrote: >> Monitoring of backend seems to be possible with NGINX PLUS. Can you >> confirm ? >> >> http://nginx.org/en/docs/http/ngx_http_upstream_module.html >> https://www.nginx.com/products/ >> > Yes, that's right. OK, I can not use a load balancer in production without backend monitoring. I will use LVS. Thank you. Alexandre. > >> Thank you >> >> Alexande. >> >> On 27/01/16 22:37, Alexandre wrote: >>> Hello, thank you I'll test. >>> >>> It is not possible to test the nginx upstream directly ? >>> >>> Thank you >>> >>> Alexandre >>> >>> >>> On 27/01/16 20:40, Yichun Zhang (agentzh) wrote: >>>> Hello! >>>> >>>> On Wed, Jan 27, 2016 at 9:10 AM, Alexandre wrote: >>>>> >>>>> However I wish to monitor the status of the backend. How can I do ? 
>>>>> >>>> >>>> You may find the lua-resty-upstream-healthcheck library helpful: >>>> >>>> https://github.com/openresty/lua-resty-upstream-healthcheck >>>> >>>> But it's much easier to install via the OpenResty bundle though: >>>> >>>> https://openresty.org/ >>>> >>>> Regards, >>>> -agentzh >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > From sca at andreasschulze.de Thu Jan 28 09:45:04 2016 From: sca at andreasschulze.de (A. Schulze) Date: Thu, 28 Jan 2016 10:45:04 +0100 Subject: echo-nginx-module and HTTP2 Message-ID: <20160128104504.Horde.gmBxk8Ku529Cc0xauggGzQ_@andreasschulze.de> Hello, The echo module (https://github.com/openresty/echo-nginx-module / v0.58) produce segfaults while accessing the following location: # echo back the client request location /echoback { echo_duplicate 1 $echo_client_request_headers; echo "\r"; echo_read_request_body; echo_request_body; } that happen only if http2 is enabled (usually at https servers). ... 
worker process 20658 exited on signal 11 Andreas From rainer at ultra-secure.de Thu Jan 28 14:12:16 2016 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Thu, 28 Jan 2016 15:12:16 +0100 Subject: Question about rewrite directive Message-ID: Hi, a customer has this in his .htaccess file (among other things): RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.+)\.(\d+)\.(php|js|css|png|jpg|gif|gzip)$ $1.$3 [L] This is to enable versioning of various files, so you can have long "Expires" on them and still update them as needed while retaining the old ones, if needed. I want to deliver static files directly from nginx, so I created this: location ~* ^(.+)\.(\d+)\.(js|css|png|jpg|gif|gzip)$ { rewrite ^(.+)\.(\d+)\.(js|css|png|jpg|gif|gzip)$ $1.$3 ; expires 1h; } This works in most cases, except for files which already have a version number of some sort. Namely: coda-slider.1.1.1.1452703531.js and two others from the jquery framework. What's wrong with my nginx rewrite? Because in apache, the rewrite rule works as intended. nginx 1.8.0 on FreeBSD 10-amd64. Regards Rainer From agentzh at gmail.com Thu Jan 28 19:00:45 2016 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 28 Jan 2016 11:00:45 -0800 Subject: echo-nginx-module and HTTP2 In-Reply-To: <20160128104504.Horde.gmBxk8Ku529Cc0xauggGzQ_@andreasschulze.de> References: <20160128104504.Horde.gmBxk8Ku529Cc0xauggGzQ_@andreasschulze.de> Message-ID: Hello! On Thu, Jan 28, 2016 at 1:45 AM, A. Schulze wrote: > The echo module (https://github.com/openresty/echo-nginx-module / v0.58) > produce segfaults while accessing the following location: > > # echo back the client request > location /echoback { > echo_duplicate 1 $echo_client_request_headers; > echo "\r"; > > echo_read_request_body; > > echo_request_body; > } > > that happen only if http2 is enabled (usually at https servers). 
Yeah, the ngx_echo module does not support the HTTP/2 mode yet (as the maintainer, I've never tested it anyway). Patches welcome and volunteers welcome :)

Best regards,
-agentzh

From yoel07 at gmail.com Thu Jan 28 20:43:13 2016
From: yoel07 at gmail.com (yoel07 at gmail.com)
Date: Thu, 28 Jan 2016 20:43:13 +0000 (UTC)
Subject: PHP path_info problem
In-Reply-To: <20160126223125.GC19381@daoine.org>
References: <20160126223125.GC19381@daoine.org>
Message-ID: <4BA724E6BC9366A5.1-9d3a085a-deb3-4009-ae9c-72ac49d2747b@mail.outlook.com>

Sent from Outlook Mobile

From: Francis Daly
Sent: Tuesday, January 26, 5:31 PM
Subject: Re: PHP path_info problem
To: nginx at nginx.org

On Mon, Jan 25, 2016 at 12:32:30PM -0500, Yoel Jiménez Del Valle wrote:

Hi there,

> I have a web app in php that relays on path_info to processs a request but
> i always get a 404 when do
> http://localhost/folder/app/app.php/controller/method
> nginx always respond 404 any ideas how to solve this and gain access to the
> request

Which of your location{} blocks did you tell nginx to use to process
the request for /folder/app/app.php/controller/method ?

For instance, I want to access /final/app/app.PHP/controller/action/ and I have this location in nginx.conf:

Location /final/app{
try_files $uri /app.php$is_args$args;
}

but I still get the same 404. I'm using wt-nmp, trying to move from apache.

Which of your location{} blocks do you want nginx to use to process
that request?

http://nginx.org/r/location

> i can acces to
> http://localhost/folder/app/app.php but after last p in php extension
> anything else show up 404

If you have "location ~ php$", that would match the second request
there, but not the first.

Perhaps you want "location ~ php" or "location /folder/app/app.php" or
something else instead?
Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Thu Jan 28 20:53:12 2016 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 28 Jan 2016 12:53:12 -0800 Subject: [ANN] OpenResty 1.9.7.3 released Message-ID: Hi folks OpenResty 1.9.7.3 is now released with the latest security fixes from the mainline NGINX core (CVE-2016-0742, CVE-2016-0746, CVE-2016-0747). https://openresty.org/#Download Both the (portable) source code distribution and the Win32 binary distribution are provided on this Download page. Please see the following mail for more details: http://mailman.nginx.org/pipermail/nginx/2016-January/049700.html Changes since the last (formal) release, 1.9.7.2 are as follows: * bugfix: backported the security fixes in NGINX core's DNS resolver for CVE-2016-0742, CVE-2016-0746, and CVE-2016-0747. See for more details. * change: renamed the source distribution name from "ngx_openresty" to just "openresty". Best regards, -agentzh From l at ymx.ch Thu Jan 28 21:04:22 2016 From: l at ymx.ch (Lukas) Date: Thu, 28 Jan 2016 22:04:22 +0100 Subject: Question about rewrite directive In-Reply-To: References: Message-ID: <20160128210422.GA41423@lpr.ch> > rainer at ultra-secure.de [2016-01-28 15:12]: > > Hi, > > > a customer has this in his .htaccess file (among other things): > > RewriteCond %{REQUEST_FILENAME} !-f > RewriteCond %{REQUEST_FILENAME} !-d > RewriteRule ^(.+)\.(\d+)\.(php|js|css|png|jpg|gif|gzip)$ $1.$3 [L] > > This is to enable versioning of various files, so you can have long > "Expires" on them and still update them as needed while retaining > the old ones, if needed. 
> > I want to deliver static files directly from nginx, so I created this: > Not exactly sure about the notation in nginx but for regexp what about: > location ~* ^(.+)\.(\d+)\.(js|css|png|jpg|gif|gzip)$ { location ~* ^(.+)\.([\d\.]+)\.(js|css|png|jpg|gif|gzip)$ { wbr Lukas -- Lukas Ruf | Ad Personam Consecom | Ad Laborem From al-nginx at none.at Thu Jan 28 21:26:20 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Thu, 28 Jan 2016 22:26:20 +0100 Subject: load balancer on nginx : how to monitoring backend ? In-Reply-To: <56A9C4F1.8020604@opendoc.net> References: <56A8FA00.8050604@opendoc.net> <56A9389F.7090807@opendoc.net> <56A94580.4010002@opendoc.net> <56A9C2EC.8010007@nginx.com> <56A9C4F1.8020604@opendoc.net> Message-ID: <73a86454c064a54749a203a43fce6679@none.at> Hi. On 28-01-2016 08:36, Alexandre wrote: > Hello, > > On 28/01/16 08:27, Maxim Konovalov wrote: >> On 1/28/16 1:32 AM, Alexandre wrote: >>> Monitoring of backend seems to be possible with NGINX PLUS. Can you >>> confirm ? >>> >>> http://nginx.org/en/docs/http/ngx_http_upstream_module.html >>> https://www.nginx.com/products/ >>> >> Yes, that's right. > > OK, I can not use a load balancer in production without backend > monitoring. I will use LVS. Just out of curiosity: why can't you buy N+ for production? BR Aleks From francis at daoine.org Thu Jan 28 22:48:13 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 28 Jan 2016 22:48:13 +0000 Subject: Question about rewrite directive In-Reply-To: References: Message-ID: <20160128224813.GF19381@daoine.org> On Thu, Jan 28, 2016 at 03:12:16PM +0100, rainer at ultra-secure.de wrote: Hi there, > RewriteCond %{REQUEST_FILENAME} !-f > RewriteCond %{REQUEST_FILENAME} !-d > RewriteRule ^(.+)\.(\d+)\.(php|js|css|png|jpg|gif|gzip)$ $1.$3 [L] > location ~* ^(.+)\.(\d+)\.(js|css|png|jpg|gif|gzip)$ { > rewrite ^(.+)\.(\d+)\.(js|css|png|jpg|gif|gzip)$ $1.$3 ; > expires 1h; > } > What's wrong with my nginx rewrite?
> Because in apache, the rewrite rule works as intended. I see two main differences there: Your apache RewriteRule has [L] on the end. Your nginx rewrite does not. Possibly you want "break" -- http://nginx.org/r/rewrite Your apache RewriteRule is protected by RewriteCond. Your nginx rewrite is not. Possibly something involving try_files or error_page and a named location for fallback could achieve the same effect. f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Jan 28 22:57:37 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 28 Jan 2016 22:57:37 +0000 Subject: PHP path_info problem In-Reply-To: <4BA724E6BC9366A5.1-9d3a085a-deb3-4009-ae9c-72ac49d2747b@mail.outlook.com> References: <20160126223125.GC19381@daoine.org> <4BA724E6BC9366A5.1-9d3a085a-deb3-4009-ae9c-72ac49d2747b@mail.outlook.com> Message-ID: <20160128225737.GG19381@daoine.org> On Thu, Jan 28, 2016 at 08:43:13PM +0000, yoel07 at gmail.com wrote: Hi there, > For instance, I want to access /final/app/app.PHP/controller/action/ and I have this location in nginx.conf: > > location /final/app { > > try_files $uri /app.php$is_args$args; > > } but I still get the same 404. I'm using wt-nmp, trying to move from Apache. So the request /final/app/app.PHP/controller/action/ will serve the file $document_root/final/app/app.PHP/controller/action/ if it exists, or it will do an internal rewrite to the url /app.php. Which location{} in your config will handle that (sub)request? The page at https://www.nginx.com/resources/wiki/start/topics/examples/phpfcgi/ comes up when I search for "php nginx path_info". It has an example configuration that may be worth examining.
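[Editor's note] The phpfcgi wiki example referred to above boils down to a location of roughly the following shape. This is a sketch, not the poster's actual config: the socket path is a placeholder, and cgi.fix_pathinfo = 0 is assumed in php.ini.

```nginx
# Sketch only: match both /final/app/app.php and
# /final/app/app.php/controller/action/ in one location.
location ~ [^/]\.php(/|$) {
    # Split "/final/app/app.php/controller/action/" into
    #   $fastcgi_script_name = /final/app/app.php
    #   $fastcgi_path_info   = /controller/action/
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;

    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO       $fastcgi_path_info;

    fastcgi_pass unix:/var/run/php5-fpm.sock;  # placeholder socket path
}
```

A regex location like this matches requests with anything after ".php", which is exactly the case a plain "location ~ php$" misses.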
Good luck with it, f -- Francis Daly francis at daoine.org From l at ymx.ch Thu Jan 28 23:06:29 2016 From: l at ymx.ch (Lukas) Date: Fri, 29 Jan 2016 00:06:29 +0100 Subject: Question about rewrite directive In-Reply-To: <20160128210422.GA41423@lpr.ch> References: <20160128210422.GA41423@lpr.ch> Message-ID: <20160128230629.GA44620@lpr.ch> > Lukas [2016-01-28 22:04]: > > > rainer at ultra-secure.de [2016-01-28 15:12]: > > > > a customer has this in his .htaccess file (among other things): > > > > RewriteCond %{REQUEST_FILENAME} !-f > > RewriteCond %{REQUEST_FILENAME} !-d > > RewriteRule ^(.+)\.(\d+)\.(php|js|css|png|jpg|gif|gzip)$ $1.$3 [L] > > > > This is to enable versioning of various files, so you can have long > > "Expires" on them and still update them as needed while retaining > > the old ones, if needed. > > > > I want to deliver static files directly from nginx, so I created this: > > > > Not exactly sure about the notation in nginx but for regexp what > about: > > location ~* ^(.+)\.(\d+)\.(js|css|png|jpg|gif|gzip)$ { Btw. a) ".+" matches a series of any character b) "\d+" matches a series of any digit > coda-slider.1.1.1.1452703531.js would then be returned as coda-slider.1.1.1.js since the first series of digit-dot is matched by a). If your customer has a file to be delivered that is named for example linux-4.2.1.gzip your regular expression would return linux-4.2.gzip since it strips off just the last digits-dot pair. 
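[Editor's note] Pulling this thread's suggestions together — "break" in place of Apache's [L] flag, "try_files" in place of the RewriteCond file tests, and keeping "\d+" so only the final digit group is stripped — one possible sketch (untested; extensions and expiry taken from the thread):

```nginx
# Serve the file if it exists on disk; otherwise fall through to the
# named location that strips the cache-busting version number.
location ~* ^.+\.\d+\.(js|css|png|jpg|gif|gzip)$ {
    expires 1h;
    try_files $uri @unversion;
}

location @unversion {
    # Greedy (.+) makes \d+ match only the LAST digit group, so
    # coda-slider.1.1.1.1452703531.js -> coda-slider.1.1.1.js.
    # Note Lukas's caveat: linux-4.2.1.gzip would become linux-4.2.gzip.
    rewrite ^(.+)\.\d+\.(js|css|png|jpg|gif|gzip)$ $1.$2 break;
}
```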
wbr Lukas -- Lukas Ruf | Ad Personam Consecom | Ad Laborem From rainer at ultra-secure.de Thu Jan 28 23:14:53 2016 From: rainer at ultra-secure.de (Rainer Duffner) Date: Fri, 29 Jan 2016 00:14:53 +0100 Subject: Question about rewrite directive In-Reply-To: <20160128230629.GA44620@lpr.ch> References: <20160128210422.GA41423@lpr.ch> <20160128230629.GA44620@lpr.ch> Message-ID: > On 29.01.2016 at 00:06, Lukas wrote: > >> Lukas [2016-01-28 22:04]: >> >>> rainer at ultra-secure.de [2016-01-28 15:12]: >>> >>> a customer has this in his .htaccess file (among other things): >>> >>> RewriteCond %{REQUEST_FILENAME} !-f >>> RewriteCond %{REQUEST_FILENAME} !-d >>> RewriteRule ^(.+)\.(\d+)\.(php|js|css|png|jpg|gif|gzip)$ $1.$3 [L] >>> >>> This is to enable versioning of various files, so you can have long >>> "Expires" on them and still update them as needed while retaining >>> the old ones, if needed. >>> >>> I want to deliver static files directly from nginx, so I created this: >>> >> >> Not exactly sure about the notation in nginx but for regexp what >> about: >>> location ~* ^(.+)\.(\d+)\.(js|css|png|jpg|gif|gzip)$ { > > Btw. > > a) ".+" matches a series of any character > b) "\d+" matches a series of any digit > >> coda-slider.1.1.1.1452703531.js > > would then be returned as > > coda-slider.1.1.1.js > > since the first series of digit-dot is matched by a). Which is correct. > > If your customer has a file to be delivered that is named for example > > linux-4.2.1.gzip > > your regular expression would return > > linux-4.2.gzip > > since it strips off just the last digits-dot pair. OK, that would be sub-optimal ;-) On request, the customer switched off the cache-breaking, so that problem has been solved. As for the regex itself, I checked it in regex101.com and it did match the files. The customer has elected not to use TYPO3's static file-cache and serve every page from TYPO3's page-cache inside the database.
To get some sort of sanity, I want to micro-cache all successful GET requests for a minute. Thanks for your ideas. Rainer From nginx-forum at forum.nginx.org Fri Jan 29 00:13:11 2016 From: nginx-forum at forum.nginx.org (jeeeff) Date: Thu, 28 Jan 2016 19:13:11 -0500 Subject: proxy_cache_lock allow multiple requests to remote server in some cases Message-ID: <6129e33610b874bc698e0d6ad22ce105.NginxMailingListEnglish@forum.nginx.org> My understanding of proxy_cache_lock is that only one request should be passed to the proxied server for a given uri, even if many requests from the same uri/key are hitting nginx while it is being refreshed. When the cache folder specified in the proxy_cache_path is empty, it works well and behave like I described above. However, if the element in the cache already exists, but is expired (according to proxy_cache_valid configuration), all concurrent requests will hit the proxied server and the resource will be downloaded multiple times. Here is my config: proxy_cache_path /usr/share/nginx/cache levels=1:2 keys_zone=CACHE:10m max_size=2g inactive=1440m; server { listen 80 default_server; listen [::]:80 default_server ipv6only=on; root /usr/share/nginx/html; index index.html index.htm; server_name localhost; location /videos { proxy_cache CACHE; proxy_cache_valid 200 15s; proxy_cache_revalidate on; proxy_cache_lock on; proxy_cache_lock_timeout 30s; proxy_cache_lock_age 30s; proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_503 http_504; proxy_pass http://origin_server/videos; } } Basically what I want to do is to be able to take advantage of the "proxy_cache_revalidate on" to force a If-Modified-Since request, but only one request should go and fetch the new element from the proxied server, even if multiple requests are coming in for the same uri/key and the cache is expired. 
To be more specific, in my case, the resources downloaded are videos between 1MB to 10MB in size, so they take some time to download and saving bandwidth is important, and only one request should be done, not multiple (to the proxied server). Using "proxy_cache_use_stale updating" is also not an option since I want all requests that are coming simultaneously to wait and use the new resource when there is a new one returned from the proxied server. Is there something I am doing wrong, or is this the expected behavior? Is there a way to do what I am trying to do with nginx? I am using nginx 1.8.1 on Ubuntu Server 14.04 x64. Regards, Jeeeff Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264203,264203#msg-264203 From dewanggaba at xtremenitro.org Fri Jan 29 02:28:23 2016 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Fri, 29 Jan 2016 09:28:23 +0700 Subject: load balancer on nginx : how to monitoring backend ? In-Reply-To: <56A9C4F1.8020604@opendoc.net> References: <56A8FA00.8050604@opendoc.net> <56A9389F.7090807@opendoc.net> <56A94580.4010002@opendoc.net> <56A9C2EC.8010007@nginx.com> <56A9C4F1.8020604@opendoc.net> Message-ID: <56AACE47.8060902@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hello! If you are in development, you can try this module https://github.com/yaoweibin/nginx_upstream_check_module. But, not recommended if use on production, better use n+. On 01/28/2016 02:36 PM, Alexandre wrote: > Hello, > > On 28/01/16 08:27, Maxim Konovalov wrote: >> On 1/28/16 1:32 AM, Alexandre wrote: >>> Monitoring of backend seems to be possible with NGINX PLUS. Can >>> you confirm ? >>> >>> http://nginx.org/en/docs/http/ngx_http_upstream_module.html >>> https://www.nginx.com/products/ >>> >> Yes, that's right. > > OK, I can not use a load balancer in production without backend > monitoring. I will use LVS. > > Thank you. > > Alexandre. > >> >>> Thank you >>> >>> Alexande. 
>>> >>> On 27/01/16 22:37, Alexandre wrote: >>>> Hello, thank you I'll test. >>>> >>>> It is not possible to test the nginx upstream directly ? >>>> >>>> Thank you >>>> >>>> Alexandre >>>> >>>> >>>> On 27/01/16 20:40, Yichun Zhang (agentzh) wrote: >>>>> Hello! >>>>> >>>>> On Wed, Jan 27, 2016 at 9:10 AM, Alexandre wrote: >>>>>> >>>>>> However I wish to monitor the status of the backend. How >>>>>> can I do ? >>>>>> >>>>> >>>>> You may find the lua-resty-upstream-healthcheck library >>>>> helpful: >>>>> >>>>> https://github.com/openresty/lua-resty-upstream-healthcheck >>>>> >>>>> >>>>> But it's much easier to install via the OpenResty bundle though: >>>>> >>>>> https://openresty.org/ >>>>> >>>>> Regards, -agentzh >>>>> >>>>> _______________________________________________ nginx >>>>> mailing list nginx at nginx.org >>>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>>> >>>> >>>> _______________________________________________ nginx mailing >>>> list nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> _______________________________________________ nginx mailing >>> list nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > > _______________________________________________ nginx mailing list > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQIcBAEBCAAGBQJWqs5DAAoJEOV/0iCgKM1wIAcP/inWjW0CGSQ1R0zDkwi+j5Ac m3LVzIS86/Ft86BuJXcam3SC5Cf+1bCYM0rUsYx5xsTZMjnJhOOkLaPFSGJ4jwth E3Lu5qDckCttOdmndmWXPUWHdTLASJXjntLEQgY969h48ucJRoCmA3ZB9DsPql7e nJS4Rkcu9DV4bSiCB6gFMNUksI9em/G6N/P2kXkEl33I9wEi79O7y5sT16dXiGm0 vTp5qozO2UtGtO6lfx3s0xYxUpsSiLYtYlfAY6335cuh1yFQqhMyNr2EgphtlY+1 cMMQnlM8Sfut0YltRcV3YSq1d8je4QEXSK0mC8sSXJUE58V/C6otu/O1I/h1BnQO g0TKFDTQk1rN1/z3b9jirlObwdNbWmIq7hI9Nuwi3WilDB57iEoD+pmC7R2obKDS ItrkkAKWWTsDyQOH2BMCK4nqW+whHC7D3lFCrKuzsITJDrFJf6fjzlXRzniwH/1L VJgKaIlnX7M54zJwITZcDcYGmLuC/MRHNU9HK3QYDgwdPdgSccDQwlhBnxI7tNtS 
gSaEVpf9Af/8QIPPyLngvl6uGskvfQdXiWHmRb0SyTwZcJEDrutf6q//hwT59Mn4 ZFLS7htPV8wCKvgxlFV7DtxRcbm9HBDRdt1ScrbTKQbLAxxQhpOVCV1emhFjfgem 020FGHFbMMmngyi50b1p =XA04 -----END PGP SIGNATURE----- From zxcvbn4038 at gmail.com Fri Jan 29 04:54:25 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Thu, 28 Jan 2016 23:54:25 -0500 Subject: load balancer on nginx : how to monitoring backend ? In-Reply-To: <56AACE47.8060902@xtremenitro.org> References: <56A8FA00.8050604@opendoc.net> <56A9389F.7090807@opendoc.net> <56A94580.4010002@opendoc.net> <56A9C2EC.8010007@nginx.com> <56A9C4F1.8020604@opendoc.net> <56AACE47.8060902@xtremenitro.org> Message-ID: Does anyone know if the author still maintains nginx_upstream_check_module? I see only a handful of commits in the past year and they all look like contributed changes. On Thu, Jan 28, 2016 at 9:28 PM, Dewangga Bachrul Alam < dewanggaba at xtremenitro.org> wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA256 > > Hello! > > If you are in development, you can try this module > https://github.com/yaoweibin/nginx_upstream_check_module. But, not > recommended if use on production, better use n+. > > On 01/28/2016 02:36 PM, Alexandre wrote: > > Hello, > > > > On 28/01/16 08:27, Maxim Konovalov wrote: > >> On 1/28/16 1:32 AM, Alexandre wrote: > >>> Monitoring of backend seems to be possible with NGINX PLUS. Can > >>> you confirm ? > >>> > >>> http://nginx.org/en/docs/http/ngx_http_upstream_module.html > >>> https://www.nginx.com/products/ > >>> > >> Yes, that's right. > > > > OK, I can not use a load balancer in production without backend > > monitoring. I will use LVS. > > > > Thank you. > > > > Alexandre. > > > >> > >>> Thank you > >>> > >>> Alexande. > >>> > >>> On 27/01/16 22:37, Alexandre wrote: > >>>> Hello, thank you I'll test. > >>>> > >>>> It is not possible to test the nginx upstream directly ? > >>>> > >>>> Thank you > >>>> > >>>> Alexandre > >>>> > >>>> > >>>> On 27/01/16 20:40, Yichun Zhang (agentzh) wrote: > >>>>> Hello! 
> >>>>> > >>>>> On Wed, Jan 27, 2016 at 9:10 AM, Alexandre wrote: > >>>>>> > >>>>>> However I wish to monitor the status of the backend. How > >>>>>> can I do ? > >>>>>> > >>>>> > >>>>> You may find the lua-resty-upstream-healthcheck library > >>>>> helpful: > >>>>> > >>>>> https://github.com/openresty/lua-resty-upstream-healthcheck > >>>>> > >>>>> > >>>>> > But it's much easier to install via the OpenResty bundle though: > >>>>> > >>>>> https://openresty.org/ > >>>>> > >>>>> Regards, -agentzh > >>>>> > >>>>> _______________________________________________ nginx > >>>>> mailing list nginx at nginx.org > >>>>> http://mailman.nginx.org/mailman/listinfo/nginx > >>>>> > >>>> > >>>> _______________________________________________ nginx mailing > >>>> list nginx at nginx.org > >>>> http://mailman.nginx.org/mailman/listinfo/nginx > >>> > >>> _______________________________________________ nginx mailing > >>> list nginx at nginx.org > >>> http://mailman.nginx.org/mailman/listinfo/nginx > >>> > >> > >> > > > > _______________________________________________ nginx mailing list > > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2 > > iQIcBAEBCAAGBQJWqs5DAAoJEOV/0iCgKM1wIAcP/inWjW0CGSQ1R0zDkwi+j5Ac > m3LVzIS86/Ft86BuJXcam3SC5Cf+1bCYM0rUsYx5xsTZMjnJhOOkLaPFSGJ4jwth > E3Lu5qDckCttOdmndmWXPUWHdTLASJXjntLEQgY969h48ucJRoCmA3ZB9DsPql7e > nJS4Rkcu9DV4bSiCB6gFMNUksI9em/G6N/P2kXkEl33I9wEi79O7y5sT16dXiGm0 > vTp5qozO2UtGtO6lfx3s0xYxUpsSiLYtYlfAY6335cuh1yFQqhMyNr2EgphtlY+1 > cMMQnlM8Sfut0YltRcV3YSq1d8je4QEXSK0mC8sSXJUE58V/C6otu/O1I/h1BnQO > g0TKFDTQk1rN1/z3b9jirlObwdNbWmIq7hI9Nuwi3WilDB57iEoD+pmC7R2obKDS > ItrkkAKWWTsDyQOH2BMCK4nqW+whHC7D3lFCrKuzsITJDrFJf6fjzlXRzniwH/1L > VJgKaIlnX7M54zJwITZcDcYGmLuC/MRHNU9HK3QYDgwdPdgSccDQwlhBnxI7tNtS > gSaEVpf9Af/8QIPPyLngvl6uGskvfQdXiWHmRb0SyTwZcJEDrutf6q//hwT59Mn4 > ZFLS7htPV8wCKvgxlFV7DtxRcbm9HBDRdt1ScrbTKQbLAxxQhpOVCV1emhFjfgem > 020FGHFbMMmngyi50b1p > =XA04 > -----END PGP 
SIGNATURE----- > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From infos at opendoc.net Fri Jan 29 06:27:33 2016 From: infos at opendoc.net (Alexandre) Date: Fri, 29 Jan 2016 07:27:33 +0100 Subject: load balancer on nginx : how to monitoring backend ? In-Reply-To: <73a86454c064a54749a203a43fce6679@none.at> References: <56A8FA00.8050604@opendoc.net> <56A9389F.7090807@opendoc.net> <56A94580.4010002@opendoc.net> <56A9C2EC.8010007@nginx.com> <56A9C4F1.8020604@opendoc.net> <73a86454c064a54749a203a43fce6679@none.at> Message-ID: <56AB0655.40905@opendoc.net> Hello On 28/01/16 22:26, Aleksandar Lazic wrote: > Hi. > > Am 28-01-2016 08:36, schrieb Alexandre: >> Hello, >> >> On 28/01/16 08:27, Maxim Konovalov wrote: >>> On 1/28/16 1:32 AM, Alexandre wrote: >>>> Monitoring of backend seems to be possible with NGINX PLUS. Can you >>>> confirm ? >>>> >>>> http://nginx.org/en/docs/http/ngx_http_upstream_module.html >>>> https://www.nginx.com/products/ >>>> >>> Yes, that's right. >> >> OK, I can not use a load balancer in production without backend >> monitoring. I will use LVS. > > just for my curiosity why can't buying n+, for production!? First I do not know the price of nginx plus. Moreover I do not think it's a good idea to use a reverse proxy TLS and load balancing on the same machine when there is high web traffic. I preferred to separate processes. Nginx for reverse proxy TLS and LVS + Varnish for load balancing and cache. > > BR Aleks Regards, Alexandre. From sca at andreasschulze.de Fri Jan 29 07:19:53 2016 From: sca at andreasschulze.de (A. 
Schulze) Date: Fri, 29 Jan 2016 08:19:53 +0100 Subject: echo-nginx-module and HTTP2 In-Reply-To: References: <20160128104504.Horde.gmBxk8Ku529Cc0xauggGzQ_@andreasschulze.de> Message-ID: <20160129081953.Horde.jBSLaoLM_b7TSuovLq5A04h@andreasschulze.de> Yichun Zhang (agentzh): > Yeah, the ngx_echo module does not support the HTTP/2 mode yet (as the > maintainer, I've never tested it anyway). Patches welcome and > volunteers welcome :) Thanks. I can't help with patches, but I would do some beta testing. Just to ask: disabling http2 for a single location is not possible, is it? Andreas From nginx-forum at forum.nginx.org Fri Jan 29 11:40:19 2016 From: nginx-forum at forum.nginx.org (atulhost) Date: Fri, 29 Jan 2016 06:40:19 -0500 Subject: How to check which directive actually delivers the files? In-Reply-To: References: Message-ID: <1d0697d63254e0613d19bbcdc62e72a0.NginxMailingListEnglish@forum.nginx.org> To serve static files from nginx, use the config below inside the server block: location ~* \.(js|css|png|jpg|jpeg|gif|ico|eot|otf|ttf|woff)$ { access_log off; log_not_found off; expires 30d; } To cover more static file types, add further extensions to the list, separated by the "|" symbol (without quotes). Source: http://atulhost.com/install-wordpress-on-nginx-with-fastcgi-cache-in-ubuntu Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264129,264206#msg-264206 From mdounin at mdounin.ru Fri Jan 29 13:47:21 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 29 Jan 2016 16:47:21 +0300 Subject: proxy_cache_lock allow multiple requests to remote server in some cases In-Reply-To: <6129e33610b874bc698e0d6ad22ce105.NginxMailingListEnglish@forum.nginx.org> References: <6129e33610b874bc698e0d6ad22ce105.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160129134721.GJ98618@mdounin.ru> Hello!
On Thu, Jan 28, 2016 at 07:13:11PM -0500, jeeeff wrote: > My understanding of proxy_cache_lock is that only one request should be > passed to the proxied server for a given uri, even if many requests from the > same uri/key are hitting nginx while it is being refreshed. > > When the cache folder specified in the proxy_cache_path is empty, it works > well and behave like I described above. > However, if the element in the cache already exists, but is expired > (according to proxy_cache_valid configuration), all concurrent requests will > hit the proxied server and the resource will be downloaded multiple times. That's expected, see http://nginx.org/r/proxy_cache_lock: : When enabled, only one request at a time will be allowed to : populate a new cache element identified according to the : proxy_cache_key directive by passing a request to a proxied : server. Note "a new cache element". It does not work while updating cache elements, this is not implemented. [...] > Using "proxy_cache_use_stale updating" is also not an option since I want > all requests that are coming simultaneously to wait and use the new resource > when there is a new one returned from the proxied server. > > Is there something I am doing wrong, or is this the expected behavior? Is > there a way to do what I am trying to do with nginx? See above, this is expected behaviour - as of now proxy_cache_lock does nothing while updating cache elements. Using "proxy_cache_use_stale updating" is recommended if you need to reduce concurrency while updating cache elements. If "proxy_cache_use_stale updating" doesn't work for you, you may try extending "proxy_cache_lock" to also cover updating. Current implementation doesn't try to do this to reduce complexity. 
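[Editor's note] Maxim's recommendation can be sketched as a small change to the config posted earlier in the thread — adding "updating" to proxy_cache_use_stale (directives copied from the original post; the origin name is the poster's):

```nginx
location /videos {
    proxy_cache            CACHE;
    proxy_cache_valid      200 15s;
    proxy_cache_revalidate on;
    proxy_cache_lock       on;   # still limits concurrency for NEW entries

    # "updating" is the addition: while one request refreshes an expired
    # entry, all other requests are served the stale copy instead of
    # being passed to the origin.
    proxy_cache_use_stale  updating error timeout invalid_header
                           http_500 http_502 http_503 http_504;

    proxy_pass http://origin_server/videos;
}
```

The trade-off is the one the original poster wanted to avoid: concurrent clients get the stale object during the refresh rather than waiting for the new one.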
-- Maxim Dounin http://nginx.org/ From mbukowski.lists at gmail.com Fri Jan 29 17:58:02 2016 From: mbukowski.lists at gmail.com (Markus Bukowski) Date: Fri, 29 Jan 2016 18:58:02 +0100 Subject: File download results in "110: Connection timed out" on nginx; Download continues until Linux Kernel 'tcp_fin_timeout' reached Message-ID: Hello everyone, when trying to download a static file using curl with a limit rate of 4000 (to simulate slow consumers) the download aborts after a certain time. A workaround to make that work is to increase the send_timeout to 3600. Nevertheless I would like to understand the behaviour and hope that you can help: - File download starts with via curl with --limit-rate 4000. - After exactly 60s connection on nginx host goes from ESTABLISHED to FIN_WAIT_1 (verified with 'ss'). Having nginx on debug gives the following: 2016/01/29 17:32:19 [debug] 9037#0: *1 http run request: "/8m?" 2016/01/29 17:32:19 [debug] 9037#0: *1 http writer handler: "/8m?" 2016/01/29 17:32:19 [debug] 9037#0: *1 http output filter "/8m?" 2016/01/29 17:32:19 [debug] 9037#0: *1 http copy filter: "/8m?" 2016/01/29 17:32:19 [debug] 9037#0: *1 image filter 2016/01/29 17:32:19 [debug] 9037#0: *1 xslt filter body 2016/01/29 17:32:19 [debug] 9037#0: *1 http postpone filter "/8m?" 0000000000000000 2016/01/29 17:32:19 [debug] 9037#0: *1 write old buf t:0 f:1 0000000000000000, pos 0000000000000000, size: 0 file: 624692, size: 7763916 2016/01/29 17:32:19 [debug] 9037#0: *1 http write filter: l:1 f:0 s:7763916 2016/01/29 17:32:19 [debug] 9037#0: *1 http write filter limit 0 2016/01/29 17:32:19 [debug] 9037#0: *1 sendfile: @624692 7763916 2016/01/29 17:32:19 [debug] 9037#0: *1 sendfile: 777600, @624692 777600:7763916 2016/01/29 17:32:19 [debug] 9037#0: *1 http write filter 00000000024CA578 2016/01/29 17:32:19 [debug] 9037#0: *1 http copy filter: -2 "/8m?" 2016/01/29 17:32:19 [debug] 9037#0: *1 http writer output filter: -2, "/8m?" 
2016/01/29 17:32:19 [debug] 9037#0: *1 event timer: 12, old: 1454088799035, new: 1454088799119 2016/01/29 17:33:06 [debug] 9037#0: *1 post event 00000000024FFEF8 2016/01/29 17:33:06 [debug] 9037#0: *1 delete posted event 00000000024FFEF8 2016/01/29 17:33:06 [debug] 9037#0: *1 http run request: "/8m?" 2016/01/29 17:33:06 [debug] 9037#0: *1 http writer handler: "/8m?" 2016/01/29 17:33:06 [debug] 9037#0: *1 http output filter "/8m?" 2016/01/29 17:33:06 [debug] 9037#0: *1 http copy filter: "/8m?" 2016/01/29 17:33:06 [debug] 9037#0: *1 image filter 2016/01/29 17:33:06 [debug] 9037#0: *1 xslt filter body 2016/01/29 17:33:06 [debug] 9037#0: *1 http postpone filter "/8m?" 0000000000000000 2016/01/29 17:33:06 [debug] 9037#0: *1 write old buf t:0 f:1 0000000000000000, pos 0000000000000000, size: 0 file: 1402292, size: 6986316 2016/01/29 17:33:06 [debug] 9037#0: *1 http write filter: l:1 f:0 s:6986316 2016/01/29 17:33:06 [debug] 9037#0: *1 http write filter limit 0 2016/01/29 17:33:06 [debug] 9037#0: *1 sendfile: @1402292 6986316 2016/01/29 17:33:06 [debug] 9037#0: *1 sendfile: 518400, @1402292 518400:6986316 2016/01/29 17:33:06 [debug] 9037#0: *1 http write filter 00000000024CA578 2016/01/29 17:33:06 [debug] 9037#0: *1 http copy filter: -2 "/8m?" 2016/01/29 17:33:06 [debug] 9037#0: *1 http writer output filter: -2, "/8m?" 2016/01/29 17:33:06 [debug] 9037#0: *1 event timer del: 12: 1454088799035 2016/01/29 17:33:06 [debug] 9037#0: *1 event timer add: 12: 60000:1454088846743 2016/01/29 17:34:06 [debug] 9037#0: *1 event timer del: 12: 1454088846743 2016/01/29 17:34:06 [debug] 9037#0: *1 http run request: "/8m?" 2016/01/29 17:34:06 [debug] 9037#0: *1 http writer handler: "/8m?" 2016/01/29 17:34:06 [info] 9037#0: *1 client timed out (110: Connection timed out) while sending response to client, client: 123.123.123.123, server: localhost, request: "GET /8m HTTP/1.1", host: "xxx.xxx.xxx" 2016/01/29 17:34:06 [debug] 9037#0: *1 http finalize request: 408, "/8m?" 
a:1, c:1 2016/01/29 17:34:06 [debug] 9037#0: *1 http terminate request count:1 2016/01/29 17:34:06 [debug] 9037#0: *1 http terminate cleanup count:1 blk:0 2016/01/29 17:34:06 [debug] 9037#0: *1 http posted request: "/8m?" 2016/01/29 17:34:06 [debug] 9037#0: *1 http terminate handler count:1 2016/01/29 17:34:06 [debug] 9037#0: *1 http request count:1 blk:0 2016/01/29 17:34:06 [debug] 9037#0: *1 http close request 2016/01/29 17:34:06 [debug] 9037#0: *1 http log handler 2016/01/29 17:34:06 [debug] 9037#0: *1 run cleanup: 00000000024D4AD0 2016/01/29 17:34:06 [debug] 9037#0: *1 file cleanup: fd:13 2016/01/29 17:34:06 [debug] 9037#0: *1 free: 00000000024D3B40, unused: 0 2016/01/29 17:34:06 [debug] 9037#0: *1 free: 00000000024CA250, unused: 3145 2016/01/29 17:34:06 [debug] 9037#0: *1 close http connection: 12 2016/01/29 17:34:06 [debug] 9037#0: *1 reusable connection: 0 2016/01/29 17:34:06 [debug] 9037#0: *1 free: 00000000024D3730 2016/01/29 17:34:06 [debug] 9037#0: *1 free: 00000000024C5780, unused: 0 2016/01/29 17:34:06 [debug] 9037#0: *1 free: 00000000024EA910, unused: 128 - Taking a look at the tcpdump file taken on the client side during the download *NO* FIN has been received. - Client keeps download and nginx keeps providing data. The tcp connection on client side remains in ESTABLISHED state. - After a while: the download still works, the connection on NGINX host goes from FIN_WAIT_1 to FIN_WAIT_2. After that client side goes to CLOSE_WAIT. - Waiting a few minutes and the connection is closed (it seems to me that this matches with tcp_fin_timeout = 180) and the download is stopped with an error only on client side. nginx doesn't print anything in the logs. *Note*: This only produced the problem when accessing HTTP via the internet (in a local network everything worked fine). Thanks a lot for your help in advance! Markus --- To reproduce the problem one can do the following (on a linux system): - Create a file about 8m big e.g. 
dd if=/dev/random of=/tmp/abc bs=1024 count=0 seek=$[1024*8] and make it available for download via - curl -# --limit-rate 4000 --verbose -O --trace-ascii trace.out --trace-time http://PUBLIC-IP/8m - monitor connections with watch -n 2 " ss '( dport = :http or sport = :http )' " as well as the error log --- # nginx -V nginx version: nginx/1.4.6 (Ubuntu) built by gcc 4.8.2 (Ubuntu 4.8.2-16ubuntu6) TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module # uname -a Linux 7e355bbce7a7 4.2.0-17-generic #21-Ubuntu SMP Fri Oct 23 19:56:16 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux -------------- next part -------------- A non-text attachment was scrubbed... Name: default Type: application/octet-stream Size: 2593 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: nginx.conf Type: application/octet-stream Size: 1607 bytes Desc: not available URL: From agentzh at gmail.com Fri Jan 29 20:34:37 2016 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Fri, 29 Jan 2016 12:34:37 -0800 Subject: echo-nginx-module and HTTP2 In-Reply-To: <20160129081953.Horde.jBSLaoLM_b7TSuovLq5A04h@andreasschulze.de> References: <20160128104504.Horde.gmBxk8Ku529Cc0xauggGzQ_@andreasschulze.de> <20160129081953.Horde.jBSLaoLM_b7TSuovLq5A04h@andreasschulze.de> Message-ID: Hello! On Thu, Jan 28, 2016 at 11:19 PM, A. Schulze wrote: > I could not support with patches but would do some beta testing. > Thanks. > Just to have ask: > disabling http2 for a location is not possible, isn't it? > Nope. Regards, -agentzh From kurt at x64architecture.com Sat Jan 30 04:40:44 2016 From: kurt at x64architecture.com (Kurt Cancemi) Date: Fri, 29 Jan 2016 23:40:44 -0500 Subject: echo-nginx-module and HTTP2 In-Reply-To: References: <20160128104504.Horde.gmBxk8Ku529Cc0xauggGzQ_@andreasschulze.de> <20160129081953.Horde.jBSLaoLM_b7TSuovLq5A04h@andreasschulze.de> Message-ID: Hello, I was doing some debugging and though I haven't found a fix. The problem is in the ngx_http_echo_client_request_headers_variable() function c->buffer is NULL when http v2 is used for some reason (internal to nginx). -- Kurt Cancemi https://www.x64architecture.com -- -- Kurt Cancemi https://www.x64architecture.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From tolga.ceylan at gmail.com Sat Jan 30 05:23:37 2016 From: tolga.ceylan at gmail.com (Tolga Ceylan) Date: Fri, 29 Jan 2016 21:23:37 -0800 Subject: File download results in "110: Connection timed out" on nginx; Download continues until Linux Kernel 'tcp_fin_timeout' reached In-Reply-To: References: Message-ID: This looks normal. Definition of 'send_timeout' is in: http://nginx.org/en/docs/http/ngx_http_core_module.html#send_timeout "Sets a timeout for transmitting a response to the client. 
The timeout is set only between two successive write operations, not for the transmission of the whole response. If the client does not receive anything within this time, the connection is closed." The tricky part here is the "write operations" phrase. In your config (and from your logs), I see that you are using "sendfile", which is a single write operation for static files from nginx's perspective. At 4000 bytes/sec, it'll take about 34 minutes for the 8 MB download to complete, so the send timeout fires. Cheers, Tolga From nginx-forum at forum.nginx.org Sat Jan 30 13:29:03 2016 From: nginx-forum at forum.nginx.org (mbukowski) Date: Sat, 30 Jan 2016 08:29:03 -0500 Subject: File download results in "110: Connection timed out" on nginx; Download continues until Linux Kernel 'tcp_fin_timeout' reached In-Reply-To: References: Message-ID: <05cd71fc27c360b949f8c89acd1651d2.NginxMailingListEnglish@forum.nginx.org> Thanks a lot for the clarification. I did read the docs, but I hadn't connected the part about sendfile meaning no successive writes. I thought that because of "not for the transmission of the whole response" it didn't apply in this case. I tried to fix that by adding 'sendfile_max_chunk 32k', but this didn't help.
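[Editor's note] Based on Tolga's explanation, the two workarounds discussed in this thread can be sketched as follows (values are illustrative, not recommendations):

```nginx
# Workaround sketch for very slow clients.
# Either give the single sendfile() "write" enough time...
send_timeout 3600;   # an 8 MB file at ~4 KB/s needs roughly 35 minutes

# ...or disable sendfile so the response becomes many successive
# write() calls, each of which resets the send_timeout timer:
# sendfile off;
```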
When doing that, it does seem odd to me that so many timers are added and deleted at the start of the transfer: 2016/01/30 12:54:11 [debug] 9259#0: *1 event timer del: 12: 1454158511891 2016/01/30 12:54:11 [debug] 9259#0: *1 event timer add: 12: 1:1454158451892 2016/01/30 12:54:11 [debug] 9259#0: *1 event timer del: 12: 1454158451892 2016/01/30 12:54:11 [debug] 9259#0: *1 event timer add: 12: 1:1454158451894 2016/01/30 12:54:11 [debug] 9259#0: *1 event timer del: 12: 1454158451894 2016/01/30 12:54:11 [debug] 9259#0: *1 event timer add: 12: 120000:1454158571894 2016/01/30 12:54:11 [debug] 9259#0: *1 event timer del: 12: 1454158571894 2016/01/30 12:54:11 [debug] 9259#0: *1 event timer add: 12: 1:1454158451922 2016/01/30 12:54:11 [debug] 9259#0: *1 event timer del: 12: 1454158451922 2016/01/30 12:54:11 [debug] 9259#0: *1 event timer add: 12: 1:1454158451923 2016/01/30 12:54:11 [debug] 9259#0: *1 event timer del: 12: 1454158451923 2016/01/30 12:54:11 [debug] 9259#0: *1 event timer add: 12: 1:1454158451924 2016/01/30 12:54:11 [debug] 9259#0: *1 event timer del: 12: 1454158451924 ... (many more follow) So for the moment, the only way I see to fix this in my case is to set 'sendfile off' and also increase 'send_timeout' to a more reasonable value. Again, thanks a lot for the clarification! Cheers, Markus Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264209,264211#msg-264211 From tomuxiong at gmail.com Sat Jan 30 16:31:14 2016 From: tomuxiong at gmail.com (Thomas Nyberg) Date: Sat, 30 Jan 2016 17:31:14 +0100 Subject: upstream max_fails/fail_timeout logic? Message-ID: <56ACE552.7060208@gmail.com> Hello, I've set up an http proxy to a couple of other servers and am using max_fails and fail_timeout in addition to a proxy_read_timeout to force failover in case of a read timeout. It seems to work fine, but I have a few questions. 1) I'm not totally understanding the logic.
I can tell that if the server times out max_fails times, it must sit out for the rest of the fail_timeout period, and then it seems to start working again at the end of that time. But it also seems like it then only needs to fail once (i.e. not a full set of max_fails) to be removed from consideration again. Yet if it goes a long time without failing, it seems it needs to fail max_fails times again. How does this logic work exactly? 2) Is the fact that an upstream server is taken down (in this temporary fashion) logged somewhere? I.e. some file where it just says "server hit max fails" or something? 3) Extending 2), is there any way to "hook" into that server failure? I.e. if the server fails, is there a way with nginx to execute some sort of program (either internal or external)? Thanks for any help! I've been reading the documentation, but I get lost at times, so if it's written there and I'm just being an idiot, please tell me to RTFM (with a link if possible please :) ). Also, I forgot to mention I'm using the community version on Linux Mint: $ nginx -v nginx version: nginx/1.4.6 (Ubuntu) $ uname -a Linux mint-carbon 3.16.0-38-generic #52~14.04.1-Ubuntu SMP Fri May 8 09:43:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux Cheers, Thomas From shahzaib.cb at gmail.com Sun Jan 31 07:09:15 2016 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sun, 31 Jan 2016 12:09:15 +0500 Subject: Nginx Slow download over 1Gbps load !! Message-ID: Hi, We've recently shifted to FreeBSD-10 due to its robust asynchronous performance for big storage based on .mp4 files.
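For reference, the max_fails/fail_timeout behaviour asked about in the previous message can be sketched with a minimal upstream block (the addresses and timeout values here are illustrative, not from the original post):

```nginx
upstream backend {
    # fail_timeout plays two roles: it is the window in which
    # max_fails failures must accumulate, and the time the server
    # is then considered unavailable before being retried.
    server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
}

server {
    location / {
        proxy_pass http://backend;
        # a read timeout counts as a failed attempt
        proxy_read_timeout 10s;
    }
}
```

With this sketch, three read timeouts against 10.0.0.1 within 30 seconds mark it unavailable for the next 30 seconds, after which it is tried again.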
Here are the server specs: 2 x Intel Xeon X5690 96GB DDR3 Memory 12 x 3TB SATA Raid-10 (HBA LSI-9211) ZFS FileSystem with 18TB usable space 2 x 1Gbps LACP (2Gbps Throughput) Things are working quite well, no high I/O due to the big RAM cache and AIO performance, but once the network port goes over 1Gbps, performance begins to lag: download speed gets stuck around 60-100Kbps on a 4Mbps connection (using wget), whereas it is quite efficient when the port is under 800Mbps (450kbps on 4Mbps). We first thought it could be a network or LACP issue, but it doesn't look like it is. We also checked whether requests are queued, using the following command, but the queue was '0': [root at cw005 ~/scripts]# netstat -Lan Current listen queue sizes (qlen/incqlen/maxqlen) Proto Listen Local Address tcp4 0/0/6000 *.80 tcp4 0/0/6000 *.443 tcp4 0/0/10 127.0.0.1.25 tcp4 0/0/128 *.1880 tcp6 0/0/128 *.1880 tcp4 0/0/5 *.5666 tcp6 0/0/5 *.5666 tcp4 0/0/128 *.199 unix 0/0/6000 /var/run/www.socket unix 0/0/4 /var/run/devd.pipe unix 0/0/4 /var/run/devd.seqpacket.pipe Here is the mbuf cluster output: 119747/550133/669880/6127378 mbuf clusters in use (current/cache/total/max) 661065/1410183/2071248/6063689 4k (page size) jumbo clusters in use (current/cache/total/max) We also checked the disk busy rate using gstat, which was quite stable as well. So it looks like either the sysctl values need tweaking or the Nginx configuration is not optimized.
Here is the sysctl.conf : kern.ipc.somaxconn=6000 # set to at least 16MB for 10GE hosts kern.ipc.maxsockbuf=16777216 # socket buffers net.inet.tcp.recvspace=4194304 net.inet.tcp.sendspace=4197152 net.inet.tcp.sendbuf_max=16777216 net.inet.tcp.recvbuf_max=16777216 net.inet.tcp.sendbuf_auto=1 net.inet.tcp.recvbuf_auto=1 net.inet.tcp.sendbuf_inc=16384 net.inet.tcp.recvbuf_inc=524288 # security security.bsd.see_other_uids=0 security.bsd.see_other_gids=0 # drop UDP packets destined for closed sockets net.inet.udp.blackhole=1 # drop TCP packets destined for closed sockets net.inet.tcp.blackhole=2 # ipfw net.inet.ip.fw.verbose_limit=3 # maximum incoming and outgoing IPv4 network queue sizes net.inet.ip.intr_queue_maxlen=2048 net.route.netisr_maxqlen=2048 net.inet.icmp.icmplim=2048 net.inet.tcp.fast_finwait2_recycle=1 kern.random.sys.harvest.ethernet=0 net.inet.ip.portrange.randomized=0 net.link.lagg.0.use_flowid=0 Here is the bootloader.conf : zpool_cache_load="YES" zpool_cache_type="/boot/zfs/zpool.cache" zpool_cache_name="/boot/zfs/zpool.cache" aio_load="YES" zfs_load="YES" ipmi_load="YES" Here is the nginx.conf : user www www; worker_processes 48; worker_rlimit_nofile 900000; #2 filehandlers for each connection error_log /var/log/nginx-error.log error; #pid logs/nginx.pid; events { worker_connections 10240; multi_accept on; } http { include mime.types; default_type application/octet-stream; add_header X-Content-Type-Options nosniff; add_header X-XSS-Protection "1; mode=block"; client_max_body_size 4096M; client_body_buffer_size 800M; output_buffers 1 512k; sendfile_max_chunk 128k; fastcgi_connect_timeout 30; fastcgi_send_timeout 30; fastcgi_read_timeout 30; proxy_read_timeout 30; fastcgi_buffer_size 64k; fastcgi_buffers 16 64k; fastcgi_temp_file_write_size 256k; server_tokens off; #Conceals nginx version access_log off; sendfile off; tcp_nodelay on; aio on; client_header_timeout 30s; client_body_timeout 30s; send_timeout 30s; keepalive_timeout 15s; ssl_session_cache
shared:SSL:10m; ssl_session_timeout 10m; gzip off; gzip_vary on; gzip_disable "MSIE [1-6]\."; gzip_proxied any; gzip_http_version 1.0; gzip_min_length 1280; gzip_comp_level 6; gzip_buffers 16 8k; gzip_types text/plain text/xml text/css application/x-javascript image/png image/x-icon image/gif image/jpeg image/jpg application/xml application/xml+rss text/javascript application/atom+xml; include /usr/local/etc/nginx/vhosts/*.conf; } Here is the vhost : server { listen 80 sndbuf=16k; server_name cw005.files.com cw005.domain.com www.cw005.files.com www.cw005.domain.com cw005.domain.net www.cw005.domain.net; location / { root /files; index index.html index.htm index.php; autoindex off; } location ~ \.(jpg)$ { sendfile on; tcp_nopush on; aio off; root /files; try_files $uri /thumbs.php; expires 1y; } location ~* \.(js|css|png|gif|ico)$ { root /files; expires 1y; log_not_found off; } location ~ \.(flv)$ { flv; root /files; expires 7d; include hotlink.inc; } location ~ \.(mp4)$ { mp4; mp4_buffer_size 4M; mp4_max_buffer_size 20M; expires 1y; add_header Cache-Control "public"; root /files; include hotlink.inc; } # pass the PHP scripts to FastCGI server listening on unix:/var/run/www.socket location ~ \.php$ { root /files; fastcgi_pass unix:/var/run/www.socket; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_read_timeout 10000; } location ~ /\.ht { deny all; } } ==================================================== Please, I need guidance with this problem; I am sure some value needs tweaking. Thanks in advance !! -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Sun Jan 31 17:33:19 2016 From: r at roze.lv (Reinis Rozitis) Date: Sun, 31 Jan 2016 19:33:19 +0200 Subject: Nginx Slow download over 1Gbps load !! In-Reply-To: References: Message-ID: <5DF42DA59DCB4FA79C814C912A698F52@MezhRoze> This is a bit out of scope of nginx but ..
> could be network issue or LACP issue but doesn't looks like it is How did you determine this? Can you generate more than 1 Gbps (without nginx)? > 12 x 3TB SATA Raid-10 (HBA LSI-9211) > ZFS FileSystem with 18TB usable space > Please i need guidance to handle with this problem, i am sure that some > value needs to tweak. What's the output of zpool iostat (and the overall zpool/zfs configuration)? Also, do you have ZFS on top of hardware raid? In general, just 12 SATA disks won't have a lot of IOps (especially random read) unless it all hits the ZFS ARC (which can/should be monitored), even more so if there is a hardware raid underneath (in your place I would flash the HBA with IT firmware so you get plain JBODs managed by ZFS). rr From pchychi at gmail.com Sun Jan 31 18:04:23 2016 From: pchychi at gmail.com (Payam Chychi) Date: Sun, 31 Jan 2016 10:04:23 -0800 Subject: Nginx Slow download over 1Gbps load !! In-Reply-To: <5DF42DA59DCB4FA79C814C912A698F52@MezhRoze> References: <5DF42DA59DCB4FA79C814C912A698F52@MezhRoze> Message-ID: Hi, Forget the application layer being the problem until you have successfully replicated the problem in several different setups. Are you monitoring both links' utilization levels? It really sounds like a network layer problem or something with your IP stack. Can you replicate using ftp, scp? How is your switch configured? How are the links negotiated? Make sure both sides of both links are full duplex 1gig. Look for CRC or input errors on the interface side. How many packets are you pushing? Make sure the switch isn't activating unicast limiting. Lots of things to check... It would help if you could tell us what tests you've done to determine it's nginx. Thanks -- Payam Chychi Network Engineer / Security Specialist On Sunday, January 31, 2016 at 9:33 AM, Reinis Rozitis wrote: This is a bit out of scope of nginx but .. > > could be network issue or LACP issue but doesn't looks like it is > > How did you determine this?
> Can you generate more than 1 Gbps (without nginx)? _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Sun Jan 31 18:18:24 2016 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sun, 31 Jan 2016 23:18:24 +0500 Subject: Nginx Slow download over 1Gbps load !! In-Reply-To: References: <5DF42DA59DCB4FA79C814C912A698F52@MezhRoze> Message-ID: Hi, Thanks a lot for the response. Now I suspect the issue is on the network layer, as I can see lots of retransmitted packets in the netstat -s output. Here is the server's status: http://prntscr.com/9xa6z2 Here is a thread with the same issue: http://serverfault.com/questions/218101/freebsd-8-1-unstable-network-connection This is what he said in the thread: "I ran into a problem with Cisco Switches forcing Negotiation of network speeds. This caused intermittent errors and retransmissions. The result was file transfers being really slow. May not be the case, but you can turn off speed negotiation with miitools (if I recall correctly, been a long time)." >>Can you replicate using ftp, scp? Yes, we recently tried downloading a file over FTP and encountered the same slow transfer rate.
>>What's the output of zpool iostat (and the overal zpool/zfs configuration)? Also do you have ZFS on top of hardware raid? In general just 12 SATA disks won't have a lot of IOps (especially random read) unless it all hits ZFS Arc (can/should be monitored), even more if there is a hardware raid underneath (in your place would flash the HBA with IT firmware so you get plain jbods managed by ZFS). zpool iostat is quite stable so far. We're using an HBA LSI-9211, so it's not a hardware RAID controller; FreeBSD recommends using an HBA in order to directly access all drives for scrubbing and data-integrity purposes. Do you recommend hardware RAID? Here is a screenshot of the ARC status: http://prntscr.com/9xaf9p >>How is your switch configured? How are the links negotiated, make sure both sides of both links are full duplex 1gig. Look for crc or input errors on the interface side. On my side, I can see that both interfaces are full duplex. Regarding CRC/input errors, is there a command I can use to check that on FreeBSD? Regards. Shahzaib On Sun, Jan 31, 2016 at 11:04 PM, Payam Chychi wrote:
_______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Sun Jan 31 18:48:56 2016 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sun, 31 Jan 2016 23:48:56 +0500 Subject: Nginx Slow download over 1Gbps load !! In-Reply-To: References: <5DF42DA59DCB4FA79C814C912A698F52@MezhRoze> Message-ID: The server is using ports 18 and 19, and those ports are configured with speed 1000: LH26876_SW2#sh run int g 0/18 ! interface GigabitEthernet 0/18 description LH28765_3 no ip address speed 1000 ! port-channel-protocol LACP port-channel 3 mode active no shutdown LH26876_SW2#sh run int g 0/19 ! interface GigabitEthernet 0/19 description LH28765_3 no ip address speed 1000 ! port-channel-protocol LACP port-channel 3 mode active no shutdown LH26876_SW2# ------------------------------ Is that alright? Regards.
Shahzaib On Sun, Jan 31, 2016 at 11:18 PM, shahzaib shahzaib wrote:
-------------- next part -------------- An HTML attachment was scrubbed... URL: From rpaprocki at fearnothingproductions.net Sun Jan 31 18:57:32 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Sun, 31 Jan 2016 10:57:32 -0800 Subject: Nginx Slow download over 1Gbps load !! In-Reply-To: References: <5DF42DA59DCB4FA79C814C912A698F52@MezhRoze> Message-ID: Sounds like at this point this discussion needs to be moved off the nginx mailing list. > On Jan 31, 2016, at 10:48, shahzaib shahzaib wrote:
_______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From rainer at ultra-secure.de Sun Jan 31 18:59:30 2016 From: rainer at ultra-secure.de (Rainer Duffner) Date: Sun, 31 Jan 2016 19:59:30 +0100 Subject: Nginx Slow download over 1Gbps load !! In-Reply-To: References: <5DF42DA59DCB4FA79C814C912A698F52@MezhRoze> Message-ID: <5C774E24-675D-4FAB-8F3E-FA73772B3FE1@ultra-secure.de> > On 31.01.2016 at 19:48, shahzaib shahzaib wrote: > > The server is using ports 18 and 19 and those port are configured with speed 1000 > > interface GigabitEthernet 0/18 > no ip address > speed 1000 Can you set that to 'auto' of some sort? I know very little about switches - but we usually have problems when one side is set to auto-negotiation and the other isn't. Most of the time, the switch being set to some fixed bandwidth is a legacy of maybe a decade ago, when switches were crap (and some NICs were, too). Then, can you check with something like iperf or so, from two different hosts at the same time, whether you can get past 1 GBit/s? LACP will only do load-balancing with different addresses. So, if you test from one IP, you will only ever get 1 GBit/s. You could also play with some of the settings described on calomel.org for tuning tcp/ip. As others have pointed out, it won't hurt moving this to freebsd-stable at freebsd.org. Rainer -------------- next part -------------- An HTML attachment was scrubbed...
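Rainer's point about per-flow load balancing also explains the 1 Gbps ceiling on a single download. As an illustration only (the interface names igb0/igb1 and the address are made up, not taken from the server in question), a FreeBSD lagg could look like this:

```sh
# Illustrative FreeBSD lagg sketch -- interface names and address are
# hypothetical. With laggproto lacp, each flow is hashed to one member
# port, so a single TCP download tops out at one link's speed (~1 Gbps)
# even with two members; aggregate traffic across many flows can still
# use both links.
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport igb0 laggport igb1 \
    192.0.2.10/24 up
# Alternative: laggproto loadbalance needs no switch-side LACP, but has
# the same per-flow limitation.
```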
URL: From r at roze.lv Sun Jan 31 20:56:38 2016 From: r at roze.lv (Reinis Rozitis) Date: Sun, 31 Jan 2016 22:56:38 +0200 Subject: Nginx Slow download over 1Gbps load !! In-Reply-To: References: <5DF42DA59DCB4FA79C814C912A698F52@MezhRoze> Message-ID: > Yes, we recently tried downloading file over FTP and encountered the same > slow transfer rate. Then it's not really an nginx issue; it seems you have just hit the server's (current) network limit. > zpool iostat is quite stable yet. We're using HBA LSI-9211 , so its not > hardware controller as FreeBSD recommends to use HBA in order to directly > access all drives for scrubbing and data-integrity purposes. > Do you recommend Hardware-Raid ? Well no, exactly the opposite - ZFS works best (better) with bare disks than with hardware raid in between. It's just that these HBAs usually come with IR (internal raid) firmware - also, you mentioned Raid 10 in your setup, while in the ZFS world that term isn't often used (rather, mirrored pool or raidz(1,2,3..)). As to how to solve the bandwidth issue - from personal experience I have had more success (less trouble, less expense, etc.) with interface bonding on the server itself (for Linux, with balance-alb, which doesn't require any specific switch features or port configuration) rather than relying on the switches. The drawback is that a single stream/download can't go beyond one physical interface's speed limit (for 1GbE = ~100MB/s), but the total bandwidth cap is roughly multiplied by the number of devices bonded together. Not a BSD guy, but I guess lagg in loadbalance mode would do the same ( https://www.freebsd.org/cgi/man.cgi?lagg ). rr From pchychi at gmail.com Sun Jan 31 22:35:58 2016 From: pchychi at gmail.com (Payam Chychi) Date: Sun, 31 Jan 2016 14:35:58 -0800 Subject: Nginx Slow download over 1Gbps load !! In-Reply-To: References: <5DF42DA59DCB4FA79C814C912A698F52@MezhRoze> Message-ID: <7FF1DA7CFE384BB987C39CBFA21E0A94@gmail.com> Yep, I'm certain this is not an nginx problem, as others have also pointed out.
Two ways of solving an interface limitation problem: 1. Change your load-balancing algorithm to per-packet load balancing. This will split the traffic much more evenly over multiple interfaces; however, I would not recommend this, as you run the risk of dropped/out-of-state packets and errors... There are only a few conditions under which this would work with a high level of success. 2. 10gig interfaces. You can pick up a Cisco 3750X on Amazon for a few hundred $ these days and add a 10gig card to it. I'd say to check your sysctl and ulimit settings, but it's clear that the issue only appears when pushing over 1gig/sec. Simple test: use two different sources and test your download at the same time. If you can dictate the source IP addresses, use an even and an odd last octet to aid in a better-balanced return traffic path. Monitor both switch port interfaces and you should see the total traffic > 1gig/sec without problems... A more controlled test would be to set up each interface with its own IP and force reply traffic to stay on each NIC. Feel free to drop me an email off-list if you need any more help. -- Payam Chychi Solution Architect On Sunday, January 31, 2016 at 12:56 PM, Reinis Rozitis wrote:
-------------- next part -------------- An HTML attachment was scrubbed... URL: