From nginx-forum at nginx.us Sun Jun 1 09:02:15 2014 From: nginx-forum at nginx.us (SupaIrish) Date: Sun, 01 Jun 2014 05:02:15 -0400 Subject: Whitelisting Req/Conn Limiting In-Reply-To: References: Message-ID: <21c1113f46de2975dc93480bc684fe9a.NginxMailingListEnglish@forum.nginx.org> Thanks for the reply 1) Your explanation clarified my misunderstanding, much appreciated. 2) Your suggestion would make a lot of sense. But after reading your response, I realized I wrote the check wrong in my example. I'm trying to whitelist an inbound request from a specific server, not one that Nginx is serving. Instead of $hostname I should be using $remote_addr and the IP of the remote server I'm attempting to whitelist from the throttling. i.e. if ( $remote_addr = XX.XXX.XXX.XXX ) { set $whitelist 1; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250510,250530#msg-250530 From nginx-forum at nginx.us Sun Jun 1 10:30:49 2014 From: nginx-forum at nginx.us (omercz) Date: Sun, 01 Jun 2014 06:30:49 -0400 Subject: Email Reverse Proxy issue Message-ID: <4efec97591f89c67ee99a2741d671b74.NginxMailingListEnglish@forum.nginx.org> I am using Nginx as an Email reverse proxy. The email client sends a request to the nginx, the nginx fetch the WHOLE email(message) from exchange server, and only then manipulates it and sends it back to client. Email Client<---->Nginx<----> Office 365 Everything is working great, besides the following problem, the Email client has a timeout of 30 seconds, but sometimes it can take the Nginx to download the whole email more the 30sec (if it has a big attachment). My question is as follow: In the meantime can the nginx send 'something' to the client to keep him aware that something is being downloaded ? Right now I have an ugly patch that sends on the open socket 'blanks' to the client every 20 seconds, until the nginx can send him the whole file. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250534,250534#msg-250534 From reallfqq-nginx at yahoo.fr Sun Jun 1 12:09:34 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 1 Jun 2014 14:09:34 +0200 Subject: Whitelisting Req/Conn Limiting In-Reply-To: <21c1113f46de2975dc93480bc684fe9a.NginxMailingListEnglish@forum.nginx.org> References: <21c1113f46de2975dc93480bc684fe9a.NginxMailingListEnglish@forum.nginx.org> Message-ID: I am glad my explanations were clear enough, though I doubt I said more than the docs which, again, seem pretty clear to me. You could make suggestions on how to improve the docs by quoting what you think would gain to be rephrased better. Based on my previous advice, you already know that using 'if' is avoidable and recommended. What do you think you could change to match your new needs? At which level does the check should be done? How to use variables to make only targeted environments affected by you contional statement? You could search archives of the ML on the forum you are using since map configuration are a recurring question, and your use case has been addressed multiple times in the past. To increase your chances to gain help, please provide: - details on the process you followed - relevant bits of the configuration you achieved which bother you - intel about searches/thinking you made - precise questions/wondering you have --- *B. R.* On Sun, Jun 1, 2014 at 11:02 AM, SupaIrish wrote: > Thanks for the reply > > 1) Your explanation clarified my misunderstanding, much appreciated. > > 2) Your suggestion would make a lot of sense. 
But after reading your > response, I realized I wrote the check wrong in my example. I'm trying to > whitelist an inbound request from a specific server, not one that Nginx is > serving. Instead of $hostname I should be using $remote_addr and the IP of > the remote server I'm attempting to whitelist from the throttling. i.e. > > if ( $remote_addr = XX.XXX.XXX.XXX ) { > set $whitelist 1; > } > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,250510,250530#msg-250530 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From philipp.kraus at tu-clausthal.de Sun Jun 1 17:37:51 2014 From: philipp.kraus at tu-clausthal.de (Philipp Kraus) Date: Sun, 1 Jun 2014 19:37:51 +0200 Subject: location for php except Message-ID: <579A3506-81F3-40F5-AE62-E29F12BC51CE@tu-clausthal.de> Hello, I'm using nginx with Gitlab, so in the Gitlab some PHP projects are hosted and on other directory there exists some PHP scripts. For the PHP files I use: location ~ \.php$ { fastcgi_split_path_info ^(.+?\.php)(/.*)$; if (!-f $document_root$fastcgi_script_name) { return 404; } fastcgi_pass 127.0.0.1:9000; include /etc/nginx/fastcgi_params; try_files $uri $uri/ =404; } and for my Gitlab I'm using: location /gitlab { alias /home/gitlab/gitlab/public; try_files $uri @gitlab; } location @gitlab { proxy_read_timeout 300; proxy_connect_timeout 300; proxy_redirect off; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_pass http://gitlab; } The problem at the moment is, that if a PHP file is hosted in a Gitlab repository the location ~\.php$ is matched first. So how can I change the locations, that all PHP files, which URL has got the part /gitlab it should be passed to Gitlab and otherwise it should be passed to the fastcgi call e.g. myserver/gitlab/user/repository.git/master/test.php should be handled by Gitlab and myserver/somedirectory/test.php should be handled by the fastcgi call. Thanks a lot Phil From nginx-forum at nginx.us Sun Jun 1 17:48:09 2014 From: nginx-forum at nginx.us (allang) Date: Sun, 01 Jun 2014 13:48:09 -0400 Subject: Invalid ports added in redirects on AWS EC2 nginx Message-ID: On AWS, I'm trying to migrate a PHP Symfony app running on nginx. I want to be able to test the app by directly talking to the EC2 server and via an Elastic Load Balancer (ELB -the public route in). I've setup the ELB to decrypt all the SSL traffic and pass this on to my EC2 server via port 80, as well as pass port 80 directly onto my EC2 server via port 80. Initially this caused infinite redirects in my app but I researched and then fixed this by adding fastcgi_param HTTPS $https; with some custom logic that looks at $http_x_forwarded_proto to figure out when its actually via SSL. There remains one issue I can't solve. When a user logs into the Symfony app, if they come via the ELB, the form POST eventually returns a redirect back to https://elb.mysite.com:80/dashboard instead of https://elb.mysite.com/dashboard which gives the user an error of "SSL connection error". I've tried setting fastcgi_param SERVER_PORT $fastcgi_port; to force it away from 80 and I've also added the port_in_redirect off directive but both make no difference. The only way I've found to fix this is to alter the ELB 443 listener to pass traffic via https. 
The EC2 server has a self certified SSL certificate configured. But this means the EC2 server is wasting capacity performing this unnecessary 2nd decryption. Any help very much appreciated. Maybe there is a separate way within nginx of telling POST requests to not apply port numbers? Nginx vhost config: server { port_in_redirect off; listen 80; listen 443 ssl; ssl_certificate /etc/nginx/ssl/mysite.com/self-ssl.crt; ssl_certificate_key /etc/nginx/ssl/mysite.com/self-ssl.key; # Determine if HTTPS being used either locally or via ELB set $fastcgi_https off; set $fastcgi_port 80; if ( $http_x_forwarded_proto = 'https' ) { # ELB is using https set $fastcgi_https on; # set $fastcgi_port 443; } if ( $https = 'on' ) { # Local connection is using https set $fastcgi_https on; # set $fastcgi_port 443; } server_name *.mysite.com my-mysite-com-1234.eu-west-1.elb.amazonaws.com; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log error; rewrite ^/app\.php/?(.*)$ /$1 permanent; location / { port_in_redirect off; root /var/www/vhosts/mysite.com/web; index app.php index.php index.html index.html; try_files $uri @rewriteapp; } location ~* \.(jpg|jpeg|gif|png)$ { root /var/www/vhosts/mysite.com/web; access_log off; log_not_found off; expires 30d; } location ~* \.(css|js)$ { root /var/www/vhosts/mysite.com/web; access_log off; log_not_found off; expires 2h; } location @rewriteapp { rewrite ^(.*)$ /app.php/$1 last; } location ~ ^/(app|app_dev|config)\.php(/|$) { port_in_redirect off; fastcgi_pass 127.0.0.1:9000; fastcgi_split_path_info ^(.+\.php)(/.*)$; fastcgi_param HTTPS $fastcgi_https; # fastcgi_param SERVER_PORT $fastcgi_port; #fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/vhosts/mysite.com/web$fastcgi_script_name; include fastcgi_params; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250545,250545#msg-250545 From francis at daoine.org Sun Jun 1 18:16:13 2014 From: francis at daoine.org (Francis Daly) Date: Sun, 1 Jun 2014 19:16:13 +0100 Subject: location for php except In-Reply-To: <579A3506-81F3-40F5-AE62-E29F12BC51CE@tu-clausthal.de> References: <579A3506-81F3-40F5-AE62-E29F12BC51CE@tu-clausthal.de> Message-ID: <20140601181613.GA16942@daoine.org> On Sun, Jun 01, 2014 at 07:37:51PM +0200, Philipp Kraus wrote: Hi there, > I'm using nginx with Gitlab, so in the Gitlab some PHP projects are hosted and on other directory there exists some PHP scripts. > > For the PHP files I use: > > location ~ \.php$ { > and for my Gitlab I'm using: > > location /gitlab > The problem at the moment is, that if a PHP file is hosted in a Gitlab repository the location ~\.php$ is matched first. Strictly, it's that this location is the one that best matches the request. ("first" doesn't really apply.) See http://nginx.org/r/location for details. > So how can I change the locations, that all PHP files, which URL has got the part /gitlab it should be passed to Gitlab and > otherwise it should be passed to the fastcgi call e.g. If by "has got the part" you mean "starts with exactly the string", then pay particular attention to the "^~" modifier, (If not, then pay attention to the order of regexes.) 
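A minimal sketch of what that "^~" advice looks like in practice, assembled from the location blocks already posted earlier in this thread; the paths and the "gitlab" upstream name are Philipp's and may need adjusting:

    location ^~ /gitlab {
        # "^~" makes this prefix match final: regex locations such as
        # "location ~ \.php$" are not consulted for URIs starting with /gitlab
        alias /home/gitlab/gitlab/public;
        try_files $uri @gitlab;
    }

    location @gitlab {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://gitlab;
    }

    # PHP files outside /gitlab are still handled by the FastCGI location
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        include /etc/nginx/fastcgi_params;
    }

With "^~" it is the modifier, not the position of the block in the file, that stops the regex locations from being checked for /gitlab requests.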
f -- Francis Daly francis at daoine.org From philipp.kraus at tu-clausthal.de Sun Jun 1 19:00:13 2014 From: philipp.kraus at tu-clausthal.de (Philipp Kraus) Date: Sun, 1 Jun 2014 21:00:13 +0200 Subject: location for php except In-Reply-To: <20140601181613.GA16942@daoine.org> References: <579A3506-81F3-40F5-AE62-E29F12BC51CE@tu-clausthal.de> <20140601181613.GA16942@daoine.org> Message-ID: Am 01.06.2014 um 20:16 schrieb Francis Daly : > Strictly, it's that this location is the one that best matches the > request. ("first" doesn't really apply.) > > See http://nginx.org/r/location for details. okay, I have try to swap both location, imho I have translate "best location" with "first-come-first-serv order", so the swap does not create any effect. > >> So how can I change the locations, that all PHP files, which URL has got the part /gitlab it should be passed to Gitlab and >> otherwise it should be passed to the fastcgi call e.g. > > If by "has got the part" you mean "starts with exactly the string", > then pay particular attention to the "^~" modifier, I have added the ^~ to my locations and it seems to be working Thanks Phil From nginx-forum at nginx.us Mon Jun 2 03:47:33 2014 From: nginx-forum at nginx.us (TECK) Date: Sun, 01 Jun 2014 23:47:33 -0400 Subject: Nginx 1.7.0: location @php In-Reply-To: <20140531112745.GZ16942@daoine.org> References: <20140531112745.GZ16942@daoine.org> Message-ID: <07d939cb9c0fee924eeceae42fdbb282.NginxMailingListEnglish@forum.nginx.org> Francis, We are going in circles without reaching a solution. I think what I asked is very clear and simple: How do I avoid repeating a segment of configuration code assigned to @php into various locations: location @php { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass fastcgi; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name; include fastcgi.conf; } The above configuration will never change, regardless in what location is used: location ^~ /alpha { auth_basic "Restricted Access"; auth_basic_user_file htpasswd; try_files $uri $uri/ /alpha/index.php?$uri&$args; location ~ \.php$ { try_files @php =404; } } location ^~ /beta { try_files $uri $uri/ /beta/index.php?$uri&$args; location ~ \.php$ { try_files @php =404; } } If I replace the @php contents into /beta location, everything works. location ^~ /beta { try_files $uri $uri/ /beta/index.php?$uri&$args; location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass fastcgi; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name; include fastcgi.conf; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250342,250548#msg-250548 From artemrts at ukr.net Mon Jun 2 06:24:01 2014 From: artemrts at ukr.net (wishmaster) Date: Mon, 02 Jun 2014 09:24:01 +0300 Subject: Nginx 1.7.0: location @php In-Reply-To: <07d939cb9c0fee924eeceae42fdbb282.NginxMailingListEnglish@forum.nginx.org> References: <20140531112745.GZ16942@daoine.org> <07d939cb9c0fee924eeceae42fdbb282.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1401690101.458972078.6ulmzud3@frv34.fwdcdn.com> I have the same problem in my php-application. Admin folder is protected with auth_basic and the rest folders - without auth. I have not found any solution except code duplication for php location. --- Original message --- From: "TECK" Date: 2 June 2014, 06:47:47 > Francis, > > We are going in circles without reaching a solution. 
I think what I asked is > very clear and simple: > How do I avoid repeating a segment of configuration code assigned to @php > into various locations: > location @php { > try_files $uri =404; > fastcgi_split_path_info ^(.+\.php)(/.+)$; > fastcgi_pass fastcgi; > fastcgi_param PATH_INFO $fastcgi_path_info; > fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name; > include fastcgi.conf; > } > > The above configuration will never change, regardless in what location is > used: > location ^~ /alpha { > auth_basic "Restricted Access"; > auth_basic_user_file htpasswd; > try_files $uri $uri/ /alpha/index.php?$uri From no-reply at alllangin.com Mon Jun 2 06:31:22 2014 From: no-reply at alllangin.com (support) Date: Mon, 02 Jun 2014 10:31:22 +0400 Subject: Nginx 1.7.0: location @php In-Reply-To: <1401690101.458972078.6ulmzud3@frv34.fwdcdn.com> References: <20140531112745.GZ16942@daoine.org> <07d939cb9c0fee924eeceae42fdbb282.NginxMailingListEnglish@forum.nginx.org> <1401690101.458972078.6ulmzud3@frv34.fwdcdn.com> Message-ID: <538C1A3A.70306@alllangin.com> yes. update and test 02.06.2014 10:24, wishmaster ?????: > I have the same problem in my php-application. Admin folder is protected with auth_basic and the rest folders - without auth. I have not found any solution except code duplication for php location. > > > --- Original message --- > From: "TECK" > Date: 2 June 2014, 06:47:47 > > > >> Francis, >> >> We are going in circles without reaching a solution. I think what I asked is >> very clear and simple: >> How do I avoid repeating a segment of configuration code assigned to @php >> into various locations: >> location @php { >> try_files $uri =404; >> fastcgi_split_path_info ^(.+\.php)(/.+)$; >> fastcgi_pass fastcgi; >> fastcgi_param PATH_INFO $fastcgi_path_info; >> fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name; >> include fastcgi.conf; >> } >> >> The above configuration will never change, regardless in what location is >> used: >> location ^~ /alpha { >> auth_basic "Restricted Access"; >> auth_basic_user_file htpasswd; >> try_files $uri $uri/ /alpha/index.php?$uri > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From shahzaib.cb at gmail.com Mon Jun 2 09:04:45 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 2 Jun 2014 14:04:45 +0500 Subject: Connection reset by peer error !! 
Message-ID: Hello, We're using nginx as reverse proxy in front of apache and following error occurs most of the time : 2014/06/02 01:00:28 [error] 3288#0: *6492138 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 141.0.10.83, server: domain.com, request: "GET /rss/recent HTTP/1.1", upstream: "http://127.0.0.1:7172/rss/recent", host: "domain.com" 2014/06/02 01:00:29 [error] 3288#0: *6492143 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 141.0.8.54, server: domain.com, request: "GET /rss/recent HTTP/1.1", upstream: "http://127.0.0.1:7172/rss/recent", host: "domain.com" 2014/06/02 01:00:31 [error] 3285#0: *6492195 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 119.160.119.47, server: domain.com, request: "GET /video/2437706/18-xxx-sex-hot HTTP/1.1", upstream: " http://127.0.0.1:7172/video/2437706/18-xxx-sex-hot", host: "domain.com", referrer: "http://www.google.com.pk/search?hl=en&ie=ISO-8859-1&q=xxxsex" Following is the nginx.conf : user tune; # no need for more workers in the proxy mode worker_processes 16; error_log /var/log/nginx/error.log error; error_log /var/log/nginx/error.log crit; #error_log /var/log/nginx/error.log; worker_rlimit_nofile 409600; events { worker_connections 102400; # increase for busier servers use epoll; # you should use epoll here for Linux kernels 2.6.x } http { server_name_in_redirect off; server_names_hash_max_size 10240; proxy_read_timeout 900; fastcgi_read_timeout 900; server_names_hash_bucket_size 1024; include mime.types; default_type application/octet-stream; server_tokens off; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 5; gzip on; gzip_vary on; gzip_disable "MSIE [1-6]\."; gzip_proxied any; gzip_http_version 1.0; gzip_min_length 1000; gzip_comp_level 6; gzip_buffers 16 8k; # You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU gzip_types text/plain text/xml text/css application/x-javascript application/javascript application/xml application/xml+rss text/javascript application/atom+xml; ignore_invalid_headers on; client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; reset_timedout_connection on; connection_pool_size 256; client_header_buffer_size 256k; large_client_header_buffers 4 256k; client_max_body_size 2000M; client_body_buffer_size 128k; request_pool_size 32k; output_buffers 4 32k; postpone_output 1460; client_body_in_file_only on; log_format bytes_log "$msec $bytes_sent ."; include "/etc/nginx/conf.d/virtual.conf"; include "/etc/nginx/conf.d/beta.conf"; include "/etc/nginx/conf.d/admin.conf"; #include "/etc/nginx/conf.d/admin-ssl.conf"; } Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Jun 2 09:59:46 2014 From: nginx-forum at nginx.us (Ventzy) Date: Mon, 02 Jun 2014 05:59:46 -0400 Subject: Wildcard proxy_cache_purge doesn't work In-Reply-To: References: Message-ID: <2f381646d888a78612e32c366981fe87.NginxMailingListEnglish@forum.nginx.org> I see. Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250519,250558#msg-250558 From reallfqq-nginx at yahoo.fr Mon Jun 2 10:06:24 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 2 Jun 2014 12:06:24 +0200 Subject: Connection reset by peer error !! In-Reply-To: References: Message-ID: '18-xxx-sex-hot' :oD Pro. --- *B. 
R.* On Mon, Jun 2, 2014 at 11:04 AM, shahzaib shahzaib wrote: > Hello, > > We're using nginx as reverse proxy in front of apache and > following error occurs most of the time : > > 2014/06/02 01:00:28 [error] 3288#0: *6492138 recv() failed (104: > Connection reset by peer) while reading response header from upstream, > client: 141.0.10.83, server: domain.com, request: "GET /rss/recent > HTTP/1.1", upstream: "http://127.0.0.1:7172/rss/recent", host: "domain.com > " > 2014/06/02 01:00:29 [error] 3288#0: *6492143 recv() failed (104: > Connection reset by peer) while reading response header from upstream, > client: 141.0.8.54, server: domain.com, request: "GET /rss/recent > HTTP/1.1", upstream: "http://127.0.0.1:7172/rss/recent", host: "domain.com > " > 2014/06/02 01:00:31 [error] 3285#0: *6492195 recv() failed (104: > Connection reset by peer) while reading response header from upstream, > client: 119.160.119.47, server: domain.com, request: "GET > /video/2437706/18-xxx-sex-hot HTTP/1.1", upstream: " > http://127.0.0.1:7172/video/2437706/18-xxx-sex-hot", host: "domain.com", > referrer: "http://www.google.com.pk/search?hl=en&ie=ISO-8859-1&q=xxxsex" > > Following is the nginx.conf : > > user tune; > # no need for more workers in the proxy mode > worker_processes 16; > error_log /var/log/nginx/error.log error; > error_log /var/log/nginx/error.log crit; > #error_log /var/log/nginx/error.log; > worker_rlimit_nofile 409600; > events { > worker_connections 102400; # increase for busier servers > use epoll; # you should use epoll here for Linux kernels 2.6.x > } > http { > server_name_in_redirect off; > server_names_hash_max_size 10240; > proxy_read_timeout 900; > fastcgi_read_timeout 900; > server_names_hash_bucket_size 1024; > include mime.types; > default_type application/octet-stream; > server_tokens off; > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 5; > gzip on; > gzip_vary on; > gzip_disable "MSIE [1-6]\."; > gzip_proxied any; > gzip_http_version 1.0; > gzip_min_length 1000; > gzip_comp_level 6; > gzip_buffers 16 8k; > # You can remove image/png image/x-icon image/gif image/jpeg if you have > slow CPU > gzip_types text/plain text/xml text/css application/x-javascript > application/javascript application/xml application/xml+rss text/javascript > application/atom+xml; > ignore_invalid_headers on; > client_header_timeout 3m; > client_body_timeout 3m; > send_timeout 3m; > reset_timedout_connection on; > connection_pool_size 256; > client_header_buffer_size 256k; > large_client_header_buffers 4 256k; > client_max_body_size 2000M; > client_body_buffer_size 128k; > request_pool_size 32k; > output_buffers 4 32k; > postpone_output 1460; > client_body_in_file_only on; > log_format bytes_log "$msec $bytes_sent ."; > include "/etc/nginx/conf.d/virtual.conf"; > include "/etc/nginx/conf.d/beta.conf"; > include "/etc/nginx/conf.d/admin.conf"; > #include "/etc/nginx/conf.d/admin-ssl.conf"; > > } > > Regards. > Shahzaib > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Mon Jun 2 10:13:36 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 2 Jun 2014 15:13:36 +0500 Subject: Connection reset by peer error !! In-Reply-To: References: Message-ID: Lol, well that was some URL link on our website :-D. 
Could you please help me instead of enjoying the xxx words :p On Mon, Jun 2, 2014 at 3:06 PM, B.R. wrote: > '18-xxx-sex-hot' :oD > > Pro. > --- > *B. R.* > > > On Mon, Jun 2, 2014 at 11:04 AM, shahzaib shahzaib > wrote: > >> Hello, >> >> We're using nginx as reverse proxy in front of apache and >> following error occurs most of the time : >> >> 2014/06/02 01:00:28 [error] 3288#0: *6492138 recv() failed (104: >> Connection reset by peer) while reading response header from upstream, >> client: 141.0.10.83, server: domain.com, request: "GET /rss/recent >> HTTP/1.1", upstream: "http://127.0.0.1:7172/rss/recent", host: " >> domain.com" >> 2014/06/02 01:00:29 [error] 3288#0: *6492143 recv() failed (104: >> Connection reset by peer) while reading response header from upstream, >> client: 141.0.8.54, server: domain.com, request: "GET /rss/recent >> HTTP/1.1", upstream: "http://127.0.0.1:7172/rss/recent", host: " >> domain.com" >> 2014/06/02 01:00:31 [error] 3285#0: *6492195 recv() failed (104: >> Connection reset by peer) while reading response header from upstream, >> client: 119.160.119.47, server: domain.com, request: "GET >> /video/2437706/18-xxx-sex-hot HTTP/1.1", upstream: " >> http://127.0.0.1:7172/video/2437706/18-xxx-sex-hot", host: "domain.com", >> referrer: "http://www.google.com.pk/search?hl=en&ie=ISO-8859-1&q=xxxsex" >> >> Following is the nginx.conf : >> >> user tune; >> # no need for more workers in the proxy mode >> worker_processes 16; >> error_log /var/log/nginx/error.log error; >> error_log /var/log/nginx/error.log crit; >> #error_log /var/log/nginx/error.log; >> worker_rlimit_nofile 409600; >> events { >> worker_connections 102400; # increase for busier servers >> use epoll; # you should use epoll here for Linux kernels 2.6.x >> } >> http { >> server_name_in_redirect off; >> server_names_hash_max_size 10240; >> proxy_read_timeout 900; >> fastcgi_read_timeout 900; >> server_names_hash_bucket_size 1024; >> include mime.types; >> default_type application/octet-stream; >> server_tokens off; >> sendfile on; >> tcp_nopush on; >> tcp_nodelay on; >> keepalive_timeout 5; >> gzip on; >> gzip_vary on; >> gzip_disable "MSIE [1-6]\."; >> gzip_proxied any; >> gzip_http_version 1.0; >> gzip_min_length 1000; >> gzip_comp_level 6; >> gzip_buffers 16 8k; >> # You can remove image/png image/x-icon image/gif image/jpeg if you have >> slow CPU >> gzip_types text/plain text/xml text/css application/x-javascript >> application/javascript application/xml application/xml+rss text/javascript >> application/atom+xml; >> ignore_invalid_headers on; >> client_header_timeout 3m; >> client_body_timeout 3m; >> send_timeout 3m; >> reset_timedout_connection on; >> connection_pool_size 256; >> client_header_buffer_size 256k; >> large_client_header_buffers 4 256k; >> client_max_body_size 2000M; >> client_body_buffer_size 128k; >> request_pool_size 32k; >> output_buffers 4 32k; >> postpone_output 1460; >> client_body_in_file_only on; >> log_format bytes_log "$msec $bytes_sent ."; >> include "/etc/nginx/conf.d/virtual.conf"; >> include "/etc/nginx/conf.d/beta.conf"; >> include "/etc/nginx/conf.d/admin.conf"; >> #include "/etc/nginx/conf.d/admin-ssl.conf"; >> >> } >> >> Regards. 
>> Shahzaib >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Mon Jun 2 10:15:21 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 2 Jun 2014 15:15:21 +0500 Subject: Connection reset by peer error !! In-Reply-To: References: Message-ID: The website content is similar to youtube, random stuff. :-) Don't get it wrong. It's not a porn website :) On Mon, Jun 2, 2014 at 3:13 PM, shahzaib shahzaib wrote: > Lol, well that was some URL link on our website :-D. Could you please help > me instead of enjoying the xxx words :p > > > On Mon, Jun 2, 2014 at 3:06 PM, B.R. wrote: > >> '18-xxx-sex-hot' :oD >> >> Pro. >> --- >> *B. R.* >> >> >> On Mon, Jun 2, 2014 at 11:04 AM, shahzaib shahzaib > > wrote: >> >>> Hello, >>> >>> We're using nginx as reverse proxy in front of apache and >>> following error occurs most of the time : >>> >>> 2014/06/02 01:00:28 [error] 3288#0: *6492138 recv() failed (104: >>> Connection reset by peer) while reading response header from upstream, >>> client: 141.0.10.83, server: domain.com, request: "GET /rss/recent >>> HTTP/1.1", upstream: "http://127.0.0.1:7172/rss/recent", host: " >>> domain.com" >>> 2014/06/02 01:00:29 [error] 3288#0: *6492143 recv() failed (104: >>> Connection reset by peer) while reading response header from upstream, >>> client: 141.0.8.54, server: domain.com, request: "GET /rss/recent >>> HTTP/1.1", upstream: "http://127.0.0.1:7172/rss/recent", host: " >>> domain.com" >>> 2014/06/02 01:00:31 [error] 3285#0: *6492195 recv() failed (104: >>> Connection reset by peer) while reading response header from upstream, >>> client: 119.160.119.47, server: domain.com, request: "GET >>> /video/2437706/18-xxx-sex-hot HTTP/1.1", upstream: " >>> http://127.0.0.1:7172/video/2437706/18-xxx-sex-hot", host: "domain.com", >>> referrer: "http://www.google.com.pk/search?hl=en&ie=ISO-8859-1&q=xxxsex" >>> >>> Following is the nginx.conf : >>> >>> user tune; >>> # no need for more workers in the proxy mode >>> worker_processes 16; >>> error_log /var/log/nginx/error.log error; >>> error_log /var/log/nginx/error.log crit; >>> #error_log /var/log/nginx/error.log; >>> worker_rlimit_nofile 409600; >>> events { >>> worker_connections 102400; # increase for busier servers >>> use epoll; # you should use epoll here for Linux kernels 2.6.x >>> } >>> http { >>> server_name_in_redirect off; >>> server_names_hash_max_size 10240; >>> proxy_read_timeout 900; >>> fastcgi_read_timeout 900; >>> server_names_hash_bucket_size 1024; >>> include mime.types; >>> default_type application/octet-stream; >>> server_tokens off; >>> sendfile on; >>> tcp_nopush on; >>> tcp_nodelay on; >>> keepalive_timeout 5; >>> gzip on; >>> gzip_vary on; >>> gzip_disable "MSIE [1-6]\."; >>> gzip_proxied any; >>> gzip_http_version 1.0; >>> gzip_min_length 1000; >>> gzip_comp_level 6; >>> gzip_buffers 16 8k; >>> # You can remove image/png image/x-icon image/gif image/jpeg if you have >>> slow CPU >>> gzip_types text/plain text/xml text/css application/x-javascript >>> application/javascript application/xml application/xml+rss text/javascript >>> application/atom+xml; >>> ignore_invalid_headers on; >>> client_header_timeout 3m; >>> client_body_timeout 3m; 
>>> send_timeout 3m; >>> reset_timedout_connection on; >>> connection_pool_size 256; >>> client_header_buffer_size 256k; >>> large_client_header_buffers 4 256k; >>> client_max_body_size 2000M; >>> client_body_buffer_size 128k; >>> request_pool_size 32k; >>> output_buffers 4 32k; >>> postpone_output 1460; >>> client_body_in_file_only on; >>> log_format bytes_log "$msec $bytes_sent ."; >>> include "/etc/nginx/conf.d/virtual.conf"; >>> include "/etc/nginx/conf.d/beta.conf"; >>> include "/etc/nginx/conf.d/admin.conf"; >>> #include "/etc/nginx/conf.d/admin-ssl.conf"; >>> >>> } >>> >>> Regards. >>> Shahzaib >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From es12b1001 at iith.ac.in Mon Jun 2 10:45:09 2014 From: es12b1001 at iith.ac.in (Adarsh Pugalia) Date: Mon, 2 Jun 2014 16:15:09 +0530 Subject: How do i free memory when my master process ends? Message-ID: I am allocating memory using malloc for some of my variables in my module. I know i can call an exit master process, but how do i access my variables in that function. I tried finding some examples, but didnt find any. Can anyone guide me through this. -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Mon Jun 2 11:51:43 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 2 Jun 2014 12:51:43 +0100 Subject: Nginx 1.7.0: location @php In-Reply-To: <07d939cb9c0fee924eeceae42fdbb282.NginxMailingListEnglish@forum.nginx.org> References: <20140531112745.GZ16942@daoine.org> <07d939cb9c0fee924eeceae42fdbb282.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 2 June 2014 04:47, TECK wrote: > Francis, > > We are going in circles without reaching a solution Fortunately, this being a *public* *mailing* *list*, and Francis (along with almost every other subscriber) giving his time, experience and opinions for free, you are definitely no worse off than when you started. Actually, however, you're demonstrably better off, as Francis has both attempted to help you go through the kind of troubleshooting process that will serve you well if you apply it (Message-ID 20140525132809.GN16942 at daoine.org) and ... > I think what I asked is > very clear and simple: > How do I avoid repeating a segment of configuration code assigned to @php > into various locations: ... he has given you the answer to this question - that you clearly thought you began by asking, but didn't. In Message-ID 20140531112745.GZ16942 at daoine.org. HTH. From mdounin at mdounin.ru Mon Jun 2 12:26:13 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Jun 2014 16:26:13 +0400 Subject: Email Reverse Proxy issue In-Reply-To: <4efec97591f89c67ee99a2741d671b74.NginxMailingListEnglish@forum.nginx.org> References: <4efec97591f89c67ee99a2741d671b74.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140602122613.GA1849@mdounin.ru> Hello! On Sun, Jun 01, 2014 at 06:30:49AM -0400, omercz wrote: > I am using Nginx as an Email reverse proxy. > The email client sends a request to the nginx, the nginx fetch the WHOLE > email(message) from exchange server, and only then manipulates it and sends > it back to client. This is not how nginx mail proxy works. 
> Email Client<---->Nginx<----> Office 365 > > Everything is working great, besides the following problem, the Email client > has a timeout of 30 seconds, > but sometimes it can take the Nginx to download the whole email more the > 30sec (if it has a big attachment). > > My question is as follow: > In the meantime can the nginx send 'something' to the client to keep him > aware that something is being downloaded ? > Right now I have an ugly patch that sends on the open socket 'blanks' to the > client every 20 seconds, until the nginx can send him the whole file. After an authentication, nginx just establises opaque pipe between a client and a server. Therefore, if the server sends anything, it will be immediately passed to the client. -- Maxim Dounin http://nginx.org/ From shahzaib.cb at gmail.com Mon Jun 2 13:17:40 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 2 Jun 2014 18:17:40 +0500 Subject: Connection reset by peer error !! In-Reply-To: References: Message-ID: kept on getting those timeouts. recv() failed (104: Connection reset by peer) while reading response header from upstream On Mon, Jun 2, 2014 at 3:15 PM, shahzaib shahzaib wrote: > The website content is similar to youtube, random stuff. :-) Don't get it > wrong. It's not a porn website :) > > > On Mon, Jun 2, 2014 at 3:13 PM, shahzaib shahzaib > wrote: > >> Lol, well that was some URL link on our website :-D. Could you please >> help me instead of enjoying the xxx words :p >> >> >> On Mon, Jun 2, 2014 at 3:06 PM, B.R. wrote: >> >>> '18-xxx-sex-hot' :oD >>> >>> Pro. >>> --- >>> *B. R.* >>> >>> >>> On Mon, Jun 2, 2014 at 11:04 AM, shahzaib shahzaib < >>> shahzaib.cb at gmail.com> wrote: >>> >>>> Hello, >>>> >>>> We're using nginx as reverse proxy in front of apache and >>>> following error occurs most of the time : >>>> >>>> 2014/06/02 01:00:28 [error] 3288#0: *6492138 recv() failed (104: >>>> Connection reset by peer) while reading response header from upstream, >>>> client: 141.0.10.83, server: domain.com, request: "GET /rss/recent >>>> HTTP/1.1", upstream: "http://127.0.0.1:7172/rss/recent", host: " >>>> domain.com" >>>> 2014/06/02 01:00:29 [error] 3288#0: *6492143 recv() failed (104: >>>> Connection reset by peer) while reading response header from upstream, >>>> client: 141.0.8.54, server: domain.com, request: "GET /rss/recent >>>> HTTP/1.1", upstream: "http://127.0.0.1:7172/rss/recent", host: " >>>> domain.com" >>>> 2014/06/02 01:00:31 [error] 3285#0: *6492195 recv() failed (104: >>>> Connection reset by peer) while reading response header from upstream, >>>> client: 119.160.119.47, server: domain.com, request: "GET >>>> /video/2437706/18-xxx-sex-hot HTTP/1.1", upstream: " >>>> http://127.0.0.1:7172/video/2437706/18-xxx-sex-hot", host: "domain.com", >>>> referrer: "http://www.google.com.pk/search?hl=en&ie=ISO-8859-1&q=xxxsex >>>> " >>>> >>>> Following is the nginx.conf : >>>> >>>> user tune; >>>> # no need for more workers in the proxy mode >>>> worker_processes 16; >>>> error_log /var/log/nginx/error.log error; >>>> error_log /var/log/nginx/error.log crit; >>>> #error_log /var/log/nginx/error.log; >>>> worker_rlimit_nofile 409600; >>>> events { >>>> worker_connections 102400; # increase for busier servers >>>> use epoll; # you should use epoll here for Linux kernels 2.6.x >>>> } >>>> http { >>>> server_name_in_redirect off; >>>> server_names_hash_max_size 10240; >>>> proxy_read_timeout 900; >>>> fastcgi_read_timeout 900; >>>> server_names_hash_bucket_size 1024; >>>> include mime.types; >>>> 
default_type application/octet-stream; >>>> server_tokens off; >>>> sendfile on; >>>> tcp_nopush on; >>>> tcp_nodelay on; >>>> keepalive_timeout 5; >>>> gzip on; >>>> gzip_vary on; >>>> gzip_disable "MSIE [1-6]\."; >>>> gzip_proxied any; >>>> gzip_http_version 1.0; >>>> gzip_min_length 1000; >>>> gzip_comp_level 6; >>>> gzip_buffers 16 8k; >>>> # You can remove image/png image/x-icon image/gif image/jpeg if you >>>> have slow CPU >>>> gzip_types text/plain text/xml text/css application/x-javascript >>>> application/javascript application/xml application/xml+rss text/javascript >>>> application/atom+xml; >>>> ignore_invalid_headers on; >>>> client_header_timeout 3m; >>>> client_body_timeout 3m; >>>> send_timeout 3m; >>>> reset_timedout_connection on; >>>> connection_pool_size 256; >>>> client_header_buffer_size 256k; >>>> large_client_header_buffers 4 256k; >>>> client_max_body_size 2000M; >>>> client_body_buffer_size 128k; >>>> request_pool_size 32k; >>>> output_buffers 4 32k; >>>> postpone_output 1460; >>>> client_body_in_file_only on; >>>> log_format bytes_log "$msec $bytes_sent ."; >>>> include "/etc/nginx/conf.d/virtual.conf"; >>>> include "/etc/nginx/conf.d/beta.conf"; >>>> include "/etc/nginx/conf.d/admin.conf"; >>>> #include "/etc/nginx/conf.d/admin-ssl.conf"; >>>> >>>> } >>>> >>>> Regards. >>>> Shahzaib >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Jun 2 13:51:42 2014 From: nginx-forum at nginx.us (crespin) Date: Mon, 02 Jun 2014 09:51:42 -0400 Subject: Proposal minor patch on ngx_http_upstream.c Message-ID: <9f395830d9963bbb331d8106a78fc773.NginxMailingListEnglish@forum.nginx.org> Hello, errno is only set on error, so if |recv()| is a success, |err| will have a random value. Only debug message are impacted. Can you check if it is ok? Comments are welcome. Feel free to change the patch. Regards, yves --- nginx-1.6.0/src/http/ngx_http_upstream.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/nginx-1.6.0/src/http/ngx_http_upstream.c b/nginx-1.6.0/src/http/ngx index 040bda1..f60acb3 100644 --- a/nginx-1.6.0/src/http/ngx_http_upstream.c +++ b/nginx-1.6.0/src/http/ngx_http_upstream.c @@ -1128,7 +1128,7 @@ ngx_http_upstream_check_broken_connection(ngx_http_request_t *r, n = recv(c->fd, buf, 1, MSG_PEEK); - err = ngx_socket_errno; + err = n == 1 ? ngx_socket_errno : 0; ngx_log_debug1(NGX_LOG_DEBUG_HTTP, ev->log, err, "http upstream recv(): %d", n); @@ -1158,9 +1158,6 @@ ngx_http_upstream_check_broken_connection(ngx_http_request_t *r, } ev->error = 1; - - } else { /* n == 0 */ - err = 0; } ev->eof = 1; -- 1.7.10.4 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250570,250570#msg-250570 From vishal.mestri at cloverinfotech.com Mon Jun 2 13:48:48 2014 From: vishal.mestri at cloverinfotech.com (Vishal Mestri) Date: Mon, 02 Jun 2014 19:18:48 +0530 (IST) Subject: Issue nginx - ajax In-Reply-To: <0db8aa0b-11d7-405c-ab3e-b0b01dba03f6@mail.cloverinfotech.com> Message-ID: <144f3e89-bd37-4ab4-9193-00308f241e3c@mail.cloverinfotech.com> Hi All and B.R. We tested on Apache as well and we faced same issue. 
Further, we disabled Antivirus on our client machine.(where we are accessing browser). Post that, we did following reverse proxy configuration on Nginx. When we receive request on 443 port, it will be sent to 80 port. When we receive request on 6401 port , it will be sent to 6400 port. This configuration without SSL worked on IE 10, Chrome and firefox as well. Further, as soon as , I turned on SSL configuration on port 443 and 6401 for Nginx, we faced old issue. i.e. site worked on Chrome, but not on firefox and IE. Further, we got following error on error log for firefox only. *29 SSL_read() failed (SSL: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:SSL alert number 48) while waiting for request, client: 203.115.123.90, server: 0.0.0.0:6401 We used below commands to generate ssl certificates:- openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privateKey.key openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt Thanks & Regards, Vishal Mestri ----- Original Message ----- From: "Vishal Mestri" To: nginx at nginx.org Sent: Tuesday, May 27, 2014 10:05:54 AM Subject: Re: Issue nginx - ajax Thanks B. R. for your immediate reply. Configuration file which we are using is attached along with the email. We want following functions:- On our server there are two services running. One on 80 port and another one on 6400 port. We want to use Nginx as product which can help us to SSL enable both services and these services does not have capabilities to be SSL Enabled. Thus, we want to Nginx to listen on two SSL Port 443 and 6401. When Nginx receives request on port 443 , it will reverse proxy that request to service running on 80 port. And when Nginx receives request on port 6401 , it will reverse proxy that request to service running on 6400 port. Service which is running on 80 port is apache tomcat, where as service running on port 6400 is proprietary product which is called using AJAX. This configuration is working very well on chrome, but we are facing issue on Internet explorer 8 onwards. Please let us know if shared configuration is correct or not. Thank you very much for reply. We have started looking in apache , but still it is in RnD phase. Thanks & Regards, Vishal Mestri ----- Original Message ----- From: "B.R." To: "Nginx ML" Sent: Monday, May 26, 2014 7:25:47 PM Subject: Re: Issue nginx - ajax If you wanted more help, you could provide some of the following: - Your configuration and what you expect it to do - The step you took to check the communication between nginx and your backend (the dumps of tcpdump) with details of what you expected Logs only show applied configuration (which might be faulty) do. However, if your decision is already made, then good luck with Apache or squid ;o) --- B. R. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From trapni at gmail.com Mon Jun 2 14:00:28 2014 From: trapni at gmail.com (Christian Parpart) Date: Mon, 2 Jun 2014 16:00:28 +0200 Subject: $rquest_time for only the time it spent getting data from the upstream Message-ID: Hey all, we used $request_time in the past to measure how long it took to serve certain pages, this was never the problem, because there was something in front of nginx, thus, the client read/write operatings didn't influence the $request_time as much as a real client would. However, now, that we'd like to skip that extra level of indirection, we can't actually measure the time it took to actually process the request, as $request_time now kind of doubles because internet clients are slow usually. So is there a way to log the time the request handler required to "just handle the request internally", without the reads and writes from/to the client? Many thanks in advance, Christian Parpart. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Mon Jun 2 14:18:57 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 2 Jun 2014 18:18:57 +0400 Subject: Proposal minor patch on ngx_http_upstream.c In-Reply-To: <9f395830d9963bbb331d8106a78fc773.NginxMailingListEnglish@forum.nginx.org> References: <9f395830d9963bbb331d8106a78fc773.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140602141857.GB25209@lo0.su> On Mon, Jun 02, 2014 at 09:51:42AM -0400, crespin wrote: > Hello, > > errno is only set on error, so if |recv()| is a success, |err| will have a > random value. > Only debug message are impacted. > Can you check if it is ok? > Comments are welcome. > Feel free to change the patch. > > Regards, > > yves > > > --- > nginx-1.6.0/src/http/ngx_http_upstream.c | 5 +---- > 1 file changed, 1 insertion(+), 4 deletions(-) > > diff --git a/nginx-1.6.0/src/http/ngx_http_upstream.c > b/nginx-1.6.0/src/http/ngx > index 040bda1..f60acb3 100644 > --- a/nginx-1.6.0/src/http/ngx_http_upstream.c > +++ b/nginx-1.6.0/src/http/ngx_http_upstream.c > @@ -1128,7 +1128,7 @@ > ngx_http_upstream_check_broken_connection(ngx_http_request_t *r, > > n = recv(c->fd, buf, 1, MSG_PEEK); > > - err = ngx_socket_errno; > + err = n == 1 ? ngx_socket_errno : 0; (n == -1) > > ngx_log_debug1(NGX_LOG_DEBUG_HTTP, ev->log, err, > "http upstream recv(): %d", n); > @@ -1158,9 +1158,6 @@ > ngx_http_upstream_check_broken_connection(ngx_http_request_t *r, > } > > ev->error = 1; > - > - } else { /* n == 0 */ > - err = 0; > } > > ev->eof = 1; > > -- > 1.7.10.4 > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250570,250570#msg-250570 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Ruslan Ermilov From mdounin at mdounin.ru Mon Jun 2 14:48:28 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Jun 2014 18:48:28 +0400 Subject: $rquest_time for only the time it spent getting data from the upstream In-Reply-To: References: Message-ID: <20140602144828.GD1849@mdounin.ru> Hello! On Mon, Jun 02, 2014 at 04:00:28PM +0200, Christian Parpart wrote: > Hey all, > > we used $request_time in the past to measure how long it took to serve > certain pages, this was never the problem, because there was something in > front of nginx, thus, the client read/write operatings didn't influence the > $request_time as much as a real client would. 
> > However, now, that we'd like to skip that extra level of indirection, we > can't actually measure the time it took to actually process the request, as > $request_time now kind of doubles because internet clients are slow usually. > > So is there a way to log the time the request handler required to "just > handle the request internally", without the reads and writes from/to the > client? Try looking at the $upstream_response_time variable, see here for details: http://nginx.org/r/$upstream_response_time -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Jun 2 14:52:01 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Jun 2014 18:52:01 +0400 Subject: Connection reset by peer error !! In-Reply-To: References: Message-ID: <20140602145201.GE1849@mdounin.ru> Hello! On Mon, Jun 02, 2014 at 02:04:45PM +0500, shahzaib shahzaib wrote: > Hello, > > We're using nginx as reverse proxy in front of apache and > following error occurs most of the time : > > 2014/06/02 01:00:28 [error] 3288#0: *6492138 recv() failed (104: Connection > reset by peer) while reading response header from upstream, client: > 141.0.10.83, server: domain.com, request: "GET /rss/recent HTTP/1.1", > upstream: "http://127.0.0.1:7172/rss/recent", host: "domain.com" [...] Your upstream server isn't working properly, and resets connections. There is more or less nothing to do on nginx side, try looking into the upstream server instead. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Jun 2 15:32:02 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Jun 2014 19:32:02 +0400 Subject: Invalid ports added in redirects on AWS EC2 nginx In-Reply-To: References: Message-ID: <20140602153202.GF1849@mdounin.ru> Hello! On Sun, Jun 01, 2014 at 01:48:09PM -0400, allang wrote: > On AWS, I'm trying to migrate a PHP Symfony app running on nginx. I want to > be able to test the app by directly talking to the EC2 server and via an > Elastic Load Balancer (ELB -the public route in). > > I've setup the ELB to decrypt all the SSL traffic and pass this on to my EC2 > server via port 80, as well as pass port 80 directly onto my EC2 server via > port 80. > > Initially this caused infinite redirects in my app but I researched and then > fixed this by adding > > fastcgi_param HTTPS $https; > with some custom logic that looks at $http_x_forwarded_proto to figure out > when its actually via SSL. > > There remains one issue I can't solve. When a user logs into the Symfony > app, if they come via the ELB, the form POST eventually returns a redirect > back to https://elb.mysite.com:80/dashboard instead of > https://elb.mysite.com/dashboard which gives the user an error of "SSL > connection error". > > I've tried setting > > fastcgi_param SERVER_PORT $fastcgi_port; > to force it away from 80 and I've also added the [...] > fastcgi_param HTTPS $fastcgi_https; > # fastcgi_param SERVER_PORT $fastcgi_port; > #fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > /var/www/vhosts/mysite.com/web$fastcgi_script_name; > include fastcgi_params; Make sure you've commented out SERVER_PORT from the fastcgi_params file. 
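A minimal sketch of that change, assuming the stock fastcgi_params file shipped with nginx; once the line is commented out, the application no longer receives the backend-side port 80 and so stops appending ":80" to the redirect URLs it builds:

    # /etc/nginx/fastcgi_params
    #fastcgi_param  SERVER_PORT        $server_port;

The vhost shown earlier can then keep its existing HTTPS/$fastcgi_https handling unchanged.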
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Jun 2 15:37:31 2014 From: nginx-forum at nginx.us (crespin) Date: Mon, 02 Jun 2014 11:37:31 -0400 Subject: Proposal minor patch on ngx_http_upstream.c In-Reply-To: <20140602141857.GB25209@lo0.su> References: <20140602141857.GB25209@lo0.su> Message-ID: <05ef057b3763a12ffcc15d73c83ded2c.NginxMailingListEnglish@forum.nginx.org> Hello Ruslan, Thanks for your remark. Is it necessary to send the correct patch ? Regards, yves Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250570,250579#msg-250579 From mdounin at mdounin.ru Mon Jun 2 15:56:08 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Jun 2014 19:56:08 +0400 Subject: How do i free memory when my master process ends? In-Reply-To: References: Message-ID: <20140602155608.GH1849@mdounin.ru> Hello! On Mon, Jun 02, 2014 at 04:15:09PM +0530, Adarsh Pugalia wrote: > I am allocating memory using malloc for some of my variables in my module. > I know i can call an exit master process, but how do i access my variables > in that function. I tried finding some examples, but didnt find any. Can > anyone guide me through this. Resource allocation and deallocations are usually done with pools in nginx. If you want to do some allocations which aren't from pool (e.g., an opened file, or a memory allocated directly with malloc() for some reason), it's usually good idea to add a pool cleanup to do required deallocation when coresponding pool will be destroyed. Try looking into the code for ngx_pool_cleanup_add(), there are lots of examples. -- Maxim Dounin http://nginx.org/ From nhadie at gmail.com Mon Jun 2 16:08:17 2014 From: nhadie at gmail.com (ron ramos) Date: Tue, 3 Jun 2014 00:08:17 +0800 Subject: Invalid ports added in redirects on AWS EC2 nginx In-Reply-To: References: Message-ID: how about binding it to another port like 8080. so elb will receive request as https port 443 and send it to ec2 instance via http port 8080. will that help? regards, nhadie On 2 Jun 2014 01:48, "allang" wrote: > On AWS, I'm trying to migrate a PHP Symfony app running on nginx. I want to > be able to test the app by directly talking to the EC2 server and via an > Elastic Load Balancer (ELB -the public route in). > > I've setup the ELB to decrypt all the SSL traffic and pass this on to my > EC2 > server via port 80, as well as pass port 80 directly onto my EC2 server via > port 80. > > Initially this caused infinite redirects in my app but I researched and > then > fixed this by adding > > fastcgi_param HTTPS $https; > with some custom logic that looks at $http_x_forwarded_proto to figure out > when its actually via SSL. > > There remains one issue I can't solve. When a user logs into the Symfony > app, if they come via the ELB, the form POST eventually returns a redirect > back to https://elb.mysite.com:80/dashboard instead of > https://elb.mysite.com/dashboard which gives the user an error of "SSL > connection error". > > I've tried setting > > fastcgi_param SERVER_PORT $fastcgi_port; > to force it away from 80 and I've also added the > > port_in_redirect off > directive but both make no difference. > > The only way I've found to fix this is to alter the ELB 443 listener to > pass > traffic via https. The EC2 server has a self certified SSL certificate > configured. But this means the EC2 server is wasting capacity performing > this unnecessary 2nd decryption. > > Any help very much appreciated. 
Maybe there is a separate way within nginx > of telling POST requests to not apply port numbers? > > Nginx vhost config: > server { > port_in_redirect off; > > listen 80; > listen 443 ssl; > > ssl_certificate /etc/nginx/ssl/mysite.com/self-ssl.crt; > ssl_certificate_key /etc/nginx/ssl/mysite.com/self-ssl.key; > > # Determine if HTTPS being used either locally or via ELB > set $fastcgi_https off; > set $fastcgi_port 80; > if ( $http_x_forwarded_proto = 'https' ) { > # ELB is using https > set $fastcgi_https on; > # set $fastcgi_port 443; > } > if ( $https = 'on' ) { > # Local connection is using https > set $fastcgi_https on; > # set $fastcgi_port 443; > } > > server_name *.mysite.com > my-mysite-com-1234.eu-west-1.elb.amazonaws.com; > > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log error; > > rewrite ^/app\.php/?(.*)$ /$1 permanent; > > location / { > port_in_redirect off; > root /var/www/vhosts/mysite.com/web; > index app.php index.php index.html index.html; > try_files $uri @rewriteapp; > } > > location ~* \.(jpg|jpeg|gif|png)$ { > root /var/www/vhosts/mysite.com/web; > access_log off; > log_not_found off; > expires 30d; > } > > location ~* \.(css|js)$ { > root /var/www/vhosts/mysite.com/web; > access_log off; > log_not_found off; > expires 2h; > } > > location @rewriteapp { > rewrite ^(.*)$ /app.php/$1 last; > } > > location ~ ^/(app|app_dev|config)\.php(/|$) { > port_in_redirect off; > fastcgi_pass 127.0.0.1:9000; > fastcgi_split_path_info ^(.+\.php)(/.*)$; > fastcgi_param HTTPS $fastcgi_https; > # fastcgi_param SERVER_PORT $fastcgi_port; > #fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > /var/www/vhosts/mysite.com/web$fastcgi_script_name; > include fastcgi_params; > } > } > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,250545,250545#msg-250545 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Mon Jun 2 16:29:18 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 2 Jun 2014 21:29:18 +0500 Subject: Connection reset by peer error !! In-Reply-To: <20140602145201.GE1849@mdounin.ru> References: <20140602145201.GE1849@mdounin.ru> Message-ID: thanks @Maxim, our upstream server is apache and there are not any suspicious error_logs in apache. We're getting one reset error every 3 hours. Is there any clue to find the root cause of apache problem ? I know its nginx forum, sorry for off-topic. On Mon, Jun 2, 2014 at 7:52 PM, Maxim Dounin wrote: > Hello! > > On Mon, Jun 02, 2014 at 02:04:45PM +0500, shahzaib shahzaib wrote: > > > Hello, > > > > We're using nginx as reverse proxy in front of apache and > > following error occurs most of the time : > > > > 2014/06/02 01:00:28 [error] 3288#0: *6492138 recv() failed (104: > Connection > > reset by peer) while reading response header from upstream, client: > > 141.0.10.83, server: domain.com, request: "GET /rss/recent HTTP/1.1", > > upstream: "http://127.0.0.1:7172/rss/recent", host: "domain.com" > > [...] > > Your upstream server isn't working properly, and resets > connections. There is more or less nothing to do on nginx side, > try looking into the upstream server instead. 
> > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Jun 2 16:44:31 2014 From: nginx-forum at nginx.us (allang) Date: Mon, 02 Jun 2014 12:44:31 -0400 Subject: Invalid ports added in redirects on AWS EC2 nginx In-Reply-To: <20140602153202.GF1849@mdounin.ru> References: <20140602153202.GF1849@mdounin.ru> Message-ID: <05a4be0e26d23ca4692bee9088c81749.NginxMailingListEnglish@forum.nginx.org> That's fixed it ! Thanks Maxim. You are a lifesaver. Much appreciated. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250545,250586#msg-250586 From nginx-forum at nginx.us Mon Jun 2 18:09:53 2014 From: nginx-forum at nginx.us (Satake) Date: Mon, 02 Jun 2014 14:09:53 -0400 Subject: Browser showing headers and Gziped code inside body. Message-ID: I'm currently using nginx/1.4.7 and for some reason a client is complaining that the page is showing with errors in both his browsers (Firefox and IE). The error in question is the following: https://www.dropbox.com/s/wnpjyyq01j7l5qg/Problemas%20ao%20inserir%20post.jpg Does anyone have any suggestion or idea of why this is happening? Regards. $ nginx -V nginx version: nginx/1.4.7 built by gcc 4.7.2 (Debian 4.7.2-5) TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt=-Wl,-z,relro --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-file-aio --with-http_spdy_module --with-http_addition_module --with-http_dav_module --with-http_flv_module --with-http_geoip_module --with-http_gzip_static_module --with-http_gunzip_module --with-http_image_filter_module --with-http_mp4_module --with-http_perl_module --with-http_random_index_module --with-http_secure_link_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/headers-more-nginx-module --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-auth-ldap --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-auth-pam --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-cache-purge --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-dav-ext-module --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-development-kit --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-echo --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/ngx-fancyindex --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-push-stream-module --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-lua --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-upload-progress 
--add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-upstream-fair --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-syslog --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/ngx_http_pinba_module --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/ngx_http_substitutions_filter_module --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/ngx_pagespeed --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-x-rid-header --with-ld-opt=-lossp-uuid Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250588,250588#msg-250588 From qwrules at gmail.com Mon Jun 2 18:16:22 2014 From: qwrules at gmail.com (KC) Date: Mon, 02 Jun 2014 20:16:22 +0200 Subject: Can't get nginx to work with Adminer nor phpMyadmin Message-ID: <538CBF76.5090208@gmail.com> I have been trying to set up Nginx with Adminer (and phpMyadmin), but I can't seem to get it to work. Here are my config files: /etc/nginx/nginx.conf: http://pastebin.com/uNTAwuTp /etc/nginx/sites-available/adminer: http://pastebin.com/cpwK1Bz2 But still, if I go to http://server.ip/adminer I only get a "404 Not Found" error. What am I doing wrong? From reallfqq-nginx at yahoo.fr Mon Jun 2 18:37:04 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 2 Jun 2014 20:37:04 +0200 Subject: Browser showing headers and Gziped code inside body. In-Reply-To: References: Message-ID: What you provided clearly shows that nginx has been compiled with 3rd-party modules. Standard debugging procedures suggest: 1) Remove all 3rd-party modules to address issues in core nginx only 2) If that is not enough, check your logs to see if anything appears in them regarding that client's connection(s) 3) If that is not enough, read the official nginx docs for instructions on how to activate the debug log If 2) or 3) are relevant (i.e. the problem with core nginx still appears to your client), please provide the collected logs/error messages here. --- *B. R.* On Mon, Jun 2, 2014 at 8:09 PM, Satake wrote: > I'm currently using nginx/1.4.7 and for some reason a client is complaining > that the page is showing with errors in both his browsers (Firefox and IE). > > The error in question is the following: > > > https://www.dropbox.com/s/wnpjyyq01j7l5qg/Problemas%20ao%20inserir%20post.jpg > > Does anyone have any suggestion or idea of why this is happening? > > Regards.
> > $ nginx -V > nginx version: nginx/1.4.7 > built by gcc 4.7.2 (Debian 4.7.2-5) > TLS SNI support enabled > configure arguments: --with-cc-opt='-g -O2 -fstack-protector > --param=ssp-buffer-size=4 -Wformat -Werror=format-security > -D_FORTIFY_SOURCE=2' --with-ld-opt=-Wl,-z,relro --prefix=/usr/share/nginx > --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log > --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock > --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body > --http-fastcgi-temp-path=/var/lib/nginx/fastcgi > --http-proxy-temp-path=/var/lib/nginx/proxy > --http-scgi-temp-path=/var/lib/nginx/scgi > --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit > --with-ipv6 --with-http_ssl_module --with-http_stub_status_module > --with-http_realip_module --with-file-aio --with-http_spdy_module > --with-http_addition_module --with-http_dav_module --with-http_flv_module > --with-http_geoip_module --with-http_gzip_static_module > --with-http_gunzip_module --with-http_image_filter_module > --with-http_mp4_module --with-http_perl_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_spdy_module --with-http_sub_module --with-http_xslt_module > --with-mail --with-mail_ssl_module > > --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/headers-more-nginx-module > > --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-auth-ldap > > --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-auth-pam > > --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-cache-purge > > --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-dav-ext-module > > --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-development-kit > --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-echo > > --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/ngx-fancyindex > > --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-push-stream-module > --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-lua > > --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-upload-progress > > --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-upstream-fair > --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-syslog > > --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/ngx_http_pinba_module > > --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/ngx_http_substitutions_filter_module > --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/ngx_pagespeed > > --add-module=/usr/src/nginx/source/nginx-1.4.7/debian/modules/nginx-x-rid-header > --with-ld-opt=-lossp-uuid > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,250588,250588#msg-250588 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Mon Jun 2 18:43:01 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 2 Jun 2014 20:43:01 +0200 Subject: Can't get nginx to work with Adminer nor phpMyadmin In-Reply-To: <538CBF76.5090208@gmail.com> References: <538CBF76.5090208@gmail.com> Message-ID: The right link for you nginx.conf is: http://pastebin.com/uNTAwuTp You could at least had that properly done... 
Copy-pasting nginx configuration from somewhere else without understanding it has little chances of success. What have you tried to configure? What do you expect from it (which directive does what)? Please copy the *relevant* parts of your configuration in your messages here so further reference will be understandable. --- *B. R.* On Mon, Jun 2, 2014 at 8:16 PM, KC wrote: > I have been trying to set up Nginx with Adminer (and phpMyadmin), but I > can't seem to get it to work. > > Here are my config files: > > /etc/nginx/nginx.conf: > http://pastebin.com/uNTAwuTp* > * > /etc/nginx/sites-available/adminer: > http://pastebin.com/cpwK1Bz2 > > > But still if I go to http://server.ip/adminer > > I only get "404 Not Found" error. > > What am I doing wrong? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwrules at gmail.com Mon Jun 2 20:12:03 2014 From: qwrules at gmail.com (KC) Date: Mon, 02 Jun 2014 22:12:03 +0200 Subject: Can't get nginx to work with Adminer nor phpMyadmin In-Reply-To: References: <538CBF76.5090208@gmail.com> Message-ID: <538CDA93.2010002@gmail.com> Sorry about the link. Don't know how it happened. So, the portions in question of nginx.conf are: ----------------------------------------------------------------- server { server_name phpmyadmin.; root /usr/share/webapps/phpMyAdmin; index index.php; location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; fastcgi_index index.php; include fastcgi.conf; } } server { server_name adminer.; root /usr/share/webapps/adminer; index index.php; location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; fastcgi_index index.php; include fastcgi.conf; } } ----------------------------------------------------------------- And the /etc/nginx/sites-available/adminer file is: ----------------------------------------------------------------- server { listen 80; server_name adminer; index adminer-3.3.3.php; set $root_path '/usr/share/webapps/adminer'; root $root_path; try_files $uri $uri/ @rewrite; location @rewrite { rewrite ^/(.*)$ /index.php?_url=/$1; } location ~ \.php { fastcgi_pass unix:/run/php5-fpm.sock; fastcgi_index /adminer-3.3.3.php; include /etc/nginx/fastcgi_params; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } location ~* ^/(css|img|js|flv|swf|download)/(.+)$ { root $root_path; } location ~ /\.ht { deny all; } } ----------------------------------------------------------------- I don't know if this file is necessary. The howtos for making Adminer/phpMyadmin work with Nginx are very limited to non-existent. I want to be able to type http://ipaddress/phpmyadmin (or adminer) and see its respective interface. But as it is now, I only get 404 error. On 02/06/14 20:43, B.R. wrote: > The right link for you nginx.conf is: > http://pastebin.com/uNTAwuTp > You could at least had that properly done... > > Copy-pasting nginx configuration from somewhere else without > understanding it has little chances of success. > > What have you tried to configure? > What do you expect from it (which directive does what)? > > Please copy the /*relevant*/ parts of your configuration in your > messages here so further reference will be understandable. 
> --- > *B. R.* > > > On Mon, Jun 2, 2014 at 8:16 PM, KC > wrote: > > I have been trying to set up Nginx with Adminer (and phpMyadmin), > but I can't seem to get it to work. > > Here are my config files: > > /etc/nginx/nginx.conf: > http://pastebin.com/uNTAwuTp* > * > /etc/nginx/sites-available/__adminer: > http://pastebin.com/cpwK1Bz2 > > > But still if I go to http://server.ip/adminer > > I only get "404 Not Found" error. > > What am I doing wrong? > > _________________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/__mailman/listinfo/nginx > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From francis at daoine.org Mon Jun 2 23:12:48 2014 From: francis at daoine.org (Francis Daly) Date: Tue, 3 Jun 2014 00:12:48 +0100 Subject: Can't get nginx to work with Adminer nor phpMyadmin In-Reply-To: <538CDA93.2010002@gmail.com> References: <538CBF76.5090208@gmail.com> <538CDA93.2010002@gmail.com> Message-ID: <20140602231248.GB16942@daoine.org> On Mon, Jun 02, 2014 at 10:12:03PM +0200, KC wrote: Hi there, > I want to be able to type http://ipaddress/phpmyadmin (or adminer) > and see its respective interface. But as it is now, I only get 404 > error. If you want http://ipaddress/phpmyadmin and http://ipaddress/adminer to both work, you will probably want a single server{} block with all of the configuration. http://nginx.org/en/docs/http/request_processing.html for the reason. After you've done that, configure each application independently, probably by wrapping the normal "it's a php script" configuration within location blocks like location ^~ /adminer/ After that, when you test, you can read the nginx logs to see exactly why the 404 was produced. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Jun 3 06:54:19 2014 From: nginx-forum at nginx.us (nginxsantos) Date: Tue, 03 Jun 2014 02:54:19 -0400 Subject: Thread support Message-ID: Hi, I am trying to run Nginx as a multi threaded application. Looking at the code it seems the initial code to support multi threaded was there. May be it got broken (as the error message says) or it was not developed to the end. Can anyone who is aware of the history of the Nginx comment about the thread support of Nginx. I am trying to understand what was the reason behind not supporting this as a multi threaded application. Please let me where it was stopped or where it was broken ? Thank you. Regards, Santos Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250598,250598#msg-250598 From vl at nginx.com Tue Jun 3 07:01:49 2014 From: vl at nginx.com (Vladimir Homutov) Date: Tue, 3 Jun 2014 11:01:49 +0400 Subject: Thread support In-Reply-To: References: Message-ID: <20140603070149.GA20304@vlpc> On Tue, Jun 03, 2014 at 02:54:19AM -0400, nginxsantos wrote: > Hi, > > I am trying to run Nginx as a multi threaded application. Looking at the > code it seems the initial code to support multi threaded was there. May be > it got broken (as the error message says) or it was not developed to the > end. > See this: http://mailman.nginx.org/pipermail/nginx/2007-September/001796.html http://mailman.nginx.org/pipermail/nginx/2008-April/004559.html > Can anyone who is aware of the history of the Nginx comment about the thread > support of Nginx. I am trying to understand what was the reason behind not > supporting this as a multi threaded application. 
> > Please let me where it was stopped or where it was broken ? From contact at jpluscplusm.com Tue Jun 3 09:48:28 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 3 Jun 2014 10:48:28 +0100 Subject: Browser showing headers and Gziped code inside body. In-Reply-To: References: Message-ID: On 2 June 2014 19:09, Satake wrote: > I'm currently using nginx/1.4.7 and for some reason a client is complaining > that the page is showing with errors in both his browsers (Firefox and IE). > > The error in question is the following: > > https://www.dropbox.com/s/wnpjyyq01j7l5qg/Problemas%20ao%20inserir%20post.jpg > > Does anyone have any suggestion or idea of why this is happening? I /was/ about to suggest you post the output of a "curl -v -o /dev/null" of that page, as I thought you probably had the Content-Type header set to something incorrect. But then I noticed that your screenshot actually shows a sub-request getting inserted halfway through the page, after the header. I suggest that whatever entity is combining the page header and the sub-request body is causing your problem, by not dealing with its input correctly. From sarah at nginx.com Tue Jun 3 09:54:39 2014 From: sarah at nginx.com (sarah.novotny) Date: Tue, 3 Jun 2014 02:54:39 -0700 Subject: Wiki updates -- was: Can anyone tell me how to delete spam pages on the wiki? In-Reply-To: <3bee8aed88d9df1dfb76e51832b2e2f4.NginxMailingListEnglish@forum.nginx.org> References: <3bee8aed88d9df1dfb76e51832b2e2f4.NginxMailingListEnglish@forum.nginx.org> Message-ID: I'd like to offer a MONSTER thank you to talkingnews and tsbolzonello for helping battle the wiki spam. We've had no net new spam in more than a week and all the old spam has been vanquished. Next up is going through the existing content to audit and update the remaining pages. If any of you are interested in helping and have trouble editing, please send me an email. talkingnews and tsbolzonello, please send me your physical mail addresses. I'd like to ship you some treats. Sarah From: talkingnews nginx-forum at nginx.us Reply: nginx at nginx.org nginx at nginx.org Date: May 28, 2014 at 11:29:17 AM To: nginx at nginx.org nginx at nginx.org Subject: Can anyone tell me how to delete spam pages on the wiki? Click this: http://wiki.nginx.org/index.php?title=Special:Search&limit=500&offset=0&redirs=1&profile=default&search=a+ See the problem?! I stopped counting at 500 pages. It gets kinda tedious searching for info to end up wading through wedding dress and SEO spam and finding out "how to finger you woman". I can't seem to do anything except "blank" the pages - my wiki username is talkingnews. If someone can give me "the power", I promise not to be malicious, but when I get a spare 10 minutes from time to time I'll go through and delete 100 or so spams. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250445,250445#msg-250445 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -- sarah.novotny -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jun 3 10:24:06 2014 From: nginx-forum at nginx.us (talkingnews) Date: Tue, 03 Jun 2014 06:24:06 -0400 Subject: Wiki updates -- was: Can anyone tell me how to delete spam pages on the wiki?
In-Reply-To: References: Message-ID: <527de04cfa2f5ced94f7d2f6a0decec2.NginxMailingListEnglish@forum.nginx.org> sarahnovotny Wrote: > I?d like to offer a MONSTER thank you to talkingnews and tsbolzonello > for helping battle the wiki spam.? The least I can do in return for this great nginx server! email sent :) > Next up is going through the existing content to audit and update the > remaining pages. ?If any of you are interested in helping and have > trouble editing, please send me and email. Thought I'd reply on-list to see what others think here: Regarding those other pages, what about pages like this? http://wiki.nginx.org/Bugs - I know it says "old/obsolete", but as it appears second in Google search results, perhaps a redirect to http://trac.nginx.org/nginx/ might make things easier? Also, the Chinese pages seem to mix between http://wiki.nginx.org/ChsFullExample and http://wiki.nginx.org/CoreModuleChs I see that in this example: http://wiki.nginx.org/index.php?title=HttpEmptyGifModuleChs&action=history that there's a move to moving the Chs from the start to the end of the URL. However, nginx is truly international and growing fast - is there any extension, configuration or module for the Wiki which would allow pages to be tagged or grouped by language, so someone could include or exclude a language in search results? I think that would be helpful to have the same page names but under language directories, a bit like php do, eg: http://www.php.net/manual/en/language.basic-syntax.php http://www.php.net/manual/pt_BR/language.basic-syntax.php Also, I know that you can see the "last updated" date by clicking the little calendar on the top right, but I was thinking that it might be useful to have a "last updated" date clearly on the main page, perhaps at the head or foot. Also, how about an "applies to" section? That way, I could tick "applies to 1.7+" and just get pages containing tips and configs for this version. Just throwing some thoughts out there - anyone else got any thoughts on that? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250601,250602#msg-250602 From vishal.mestri at cloverinfotech.com Tue Jun 3 10:58:54 2014 From: vishal.mestri at cloverinfotech.com (Vishal Mestri) Date: Tue, 03 Jun 2014 16:28:54 +0530 (IST) Subject: Issue nginx - ajax In-Reply-To: <144f3e89-bd37-4ab4-9193-00308f241e3c@mail.cloverinfotech.com> Message-ID: <2688e699-8ae7-42fa-81fd-7b47b41262ae@mail.cloverinfotech.com> Hi B.R. and all, Really thank you for your support till now. We were able to resolve issue on IE as well as on Firefox. we did following settings:- 1) IE We added my website to secure site list. Post that, I imported certificate to "Trusted Root Certiication Authorities". 2) I used "security exception option" by adding same certificate twice by accessing two different ports 6401 and 443. B. R.> Thank you for all your inputs. Thanks & Regards, Vishal Mestri ----- Original Message ----- From: "Vishal Mestri" To: nginx at nginx.org Sent: Monday, June 2, 2014 7:18:48 PM Subject: Re: Issue nginx - ajax Hi All and B.R. We tested on Apache as well and we faced same issue. Further, we disabled Antivirus on our client machine.(where we are accessing browser). Post that, we did following reverse proxy configuration on Nginx. When we receive request on 443 port, it will be sent to 80 port. When we receive request on 6401 port , it will be sent to 6400 port. This configuration without SSL worked on IE 10, Chrome and firefox as well. 
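For reference, the relevant part of that test configuration looks roughly like the sketch below (the server_name and the proxy_set_header lines are placeholders/assumptions on my side, not our exact attached configuration file):

server {
    listen 443;                           # plain HTTP for this first test, no "ssl" yet
    server_name example.com;              # placeholder host name

    location / {
        proxy_pass http://127.0.0.1:80;   # service running on port 80 (Tomcat)
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 6401;                          # plain HTTP for this first test, no "ssl" yet
    server_name example.com;              # placeholder host name

    location / {
        proxy_pass http://127.0.0.1:6400; # AJAX service running on port 6400
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}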
Further, as soon as I turned on the SSL configuration on ports 443 and 6401 for Nginx, we faced the old issue, i.e. the site worked on Chrome, but not on Firefox and IE. Further, we got the following error in the error log for Firefox only. *29 SSL_read() failed (SSL: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:SSL alert number 48) while waiting for request, client: 203.115.123.90, server: 0.0.0.0:6401 We used the below commands to generate the SSL certificates: openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privateKey.key openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt Thanks & Regards, Vishal Mestri ----- Original Message ----- From: "Vishal Mestri" To: nginx at nginx.org Sent: Tuesday, May 27, 2014 10:05:54 AM Subject: Re: Issue nginx - ajax Thanks B. R. for your immediate reply. The configuration file which we are using is attached along with the email. We want the following functions: On our server there are two services running. One on port 80 and another one on port 6400. We want to use Nginx as a product which can help us to SSL-enable both services, as these services do not have the capability to be SSL enabled. Thus, we want Nginx to listen on two SSL ports, 443 and 6401. When Nginx receives a request on port 443, it will reverse proxy that request to the service running on port 80. And when Nginx receives a request on port 6401, it will reverse proxy that request to the service running on port 6400. The service running on port 80 is Apache Tomcat, whereas the service running on port 6400 is a proprietary product which is called using AJAX. This configuration is working very well on Chrome, but we are facing an issue on Internet Explorer 8 onwards. Please let us know if the shared configuration is correct or not. Thank you very much for the reply. We have started looking into Apache, but it is still in the RnD phase. Thanks & Regards, Vishal Mestri ----- Original Message ----- From: "B.R." To: "Nginx ML" Sent: Monday, May 26, 2014 7:25:47 PM Subject: Re: Issue nginx - ajax If you wanted more help, you could provide some of the following: - Your configuration and what you expect it to do - The steps you took to check the communication between nginx and your backend (the dumps of tcpdump) with details of what you expected Logs only show what the applied configuration (which might be faulty) does. However, if your decision is already made, then good luck with Apache or squid ;o) --- B. R. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Tue Jun 3 12:05:29 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 3 Jun 2014 13:05:29 +0100 Subject: Issue nginx - ajax In-Reply-To: <2688e699-8ae7-42fa-81fd-7b47b41262ae@mail.cloverinfotech.com> References: <144f3e89-bd37-4ab4-9193-00308f241e3c@mail.cloverinfotech.com> <2688e699-8ae7-42fa-81fd-7b47b41262ae@mail.cloverinfotech.com> Message-ID: On 3 June 2014 11:58, Vishal Mestri wrote: > Hi B.R. and all, > > > Really thank you for your support till now. > > > We were able to resolve issue on IE as well as on Firefox. > > > we did following settings:- > > > 1) IE > > We added my website to secure site list. > > Post that, I imported certificate to "Trusted Root Certiication > Authorities".
And that's a solution for /all/ clients who'll be accessing this site? You're going to make them install your *site* cert as a root CA? I think you may have made a mistake here. At the very least, you're doing the wrong thing. > 2) I used "security exception option" by adding same certificate twice by > accessing two different ports 6401 and 443. Ditto. I'd keep working on this, if it were me. YMMV. From jayadev at ymail.com Tue Jun 3 16:42:41 2014 From: jayadev at ymail.com (Jayadev C) Date: Tue, 3 Jun 2014 09:42:41 -0700 (PDT) Subject: Getting rewritten and encoded/escaped url in nginx module Message-ID: <1401813761.57691.YahooMailNeo@web163503.mail.gq1.yahoo.com> Hi, I am writing a nginx proxy module and want to grab the url which is urlencoded (as the client sends it) and also after rewrite rules are applied.? My typical url looks like : path1/path2/path3/urlencoded(key)?args, after rewriting the url I would love to have is something like : newpath1/newpath2/newpath3/../urlencoded(key)?args. Currently , r->uri? is decoded rewritten uri, r->unparsed_uri is encoded but not rewritten. I read on the forum that nginx decodes the url for rewrite, is there a handy internal function I can use encode the rewritten url back. A simple use of ngx_escape_uri(r->uri) with different parameters doesn't do what I want out of the box. Thanks, Jayadev -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue Jun 3 17:09:08 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 3 Jun 2014 19:09:08 +0200 Subject: Getting rewritten and encoded/escaped url in nginx module In-Reply-To: <1401813761.57691.YahooMailNeo@web163503.mail.gq1.yahoo.com> References: <1401813761.57691.YahooMailNeo@web163503.mail.gq1.yahoo.com> Message-ID: I am not providing a direct answer but could not you use some standard modules to do that? Such as using (examples): - rewrite associated with a map, loaded from a separate configuration file reloaded after changes - the perl module to invoke external perl scripts doing that for you, maybe in conjunction with the ssi module ? --- *B. R.* On Tue, Jun 3, 2014 at 6:42 PM, Jayadev C wrote: > > Hi, > > I am writing a nginx proxy module and want to grab the url which is > urlencoded (as the client sends it) and also after rewrite rules are > applied. My typical url looks like : > path1/path2/path3/urlencoded(key)?args , after rewriting the url I would > love to have is something like : > newpath1/newpath2/newpath3/../urlencoded(key)?args. > > Currently , r->uri is decoded rewritten uri, r->unparsed_uri is encoded > but not rewritten. > > I read on the forum that nginx decodes the url for rewrite, is there a > handy internal function I can use encode the rewritten url back. A simple > use of ngx_escape_uri(r->uri) with different parameters doesn't do what I > want out of the box. > > Thanks, > Jayadev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jun 3 17:17:28 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 3 Jun 2014 21:17:28 +0400 Subject: Getting rewritten and encoded/escaped url in nginx module In-Reply-To: <1401813761.57691.YahooMailNeo@web163503.mail.gq1.yahoo.com> References: <1401813761.57691.YahooMailNeo@web163503.mail.gq1.yahoo.com> Message-ID: <20140603171728.GU1849@mdounin.ru> Hello! 
On Tue, Jun 03, 2014 at 09:42:41AM -0700, Jayadev C wrote: > > > Hi, > > > I am writing a nginx proxy module and want to grab the url which > is urlencoded (as the client sends it) and also after rewrite > rules are applied. My typical url looks like : > path1/path2/path3/urlencoded(key)?args, after rewriting the url > I would love to have is something like : > > newpath1/newpath2/newpath3/../urlencoded(key)?args. > > Currently , r->uri is decoded rewritten uri, r->unparsed_uri is > encoded but not rewritten. > > I read on the forum that nginx decodes the url for rewrite, is > there a handy internal function I can use encode the rewritten > url back. A simple use of ngx_escape_uri(r->uri) with different > parameters doesn't do what I want out of the box. The ngx_escape_uri(r->uri) _is_ the internal function you should use if you need an encoded URI. If in doubt, try looking, e.g., into the ngx_http_proxy_create_request() function to see how it's done there. -- Maxim Dounin http://nginx.org/ From jayadev at ymail.com Tue Jun 3 21:08:36 2014 From: jayadev at ymail.com (Jayadev C) Date: Tue, 3 Jun 2014 14:08:36 -0700 (PDT) Subject: Getting rewritten and encoded/escaped url in nginx module In-Reply-To: <20140603171728.GU1849@mdounin.ru> References: <1401813761.57691.YahooMailNeo@web163503.mail.gq1.yahoo.com> <20140603171728.GU1849@mdounin.ru> Message-ID: <1401829716.69686.YahooMailNeo@web163501.mail.gq1.yahoo.com> The problem is that ngx_http_proxy doesn't do that either; once the rewrite is applied the url remains decoded (or I am not reading the code correctly). In fact escaping the uri back is a bit tricky, since you have to split the rewritten url into segments and apply encoding again (which would require looking at the original url to know the correct number of '/'s). Has anybody attempted doing something like this? Or is there an easier way? Jayadev On Tuesday, June 3, 2014 10:17 AM, Maxim Dounin wrote: Hello! On Tue, Jun 03, 2014 at 09:42:41AM -0700, Jayadev C wrote: > > > Hi, > > > I am writing a nginx proxy module and want to grab the url which > is urlencoded (as the client sends it) and also after rewrite > rules are applied. My typical url looks like : > path1/path2/path3/urlencoded(key)?args, after rewriting the url > I would love to have is something like : > > newpath1/newpath2/newpath3/../urlencoded(key)?args. > > Currently , r->uri is decoded rewritten uri, r->unparsed_uri is > encoded but not rewritten. > > I read on the forum that nginx decodes the url for rewrite, is > there a handy internal function I can use encode the rewritten > url back. A simple use of ngx_escape_uri(r->uri) with different > parameters doesn't do what I want out of the box. The ngx_escape_uri(r->uri) _is_ the internal function you should use if you need an encoded URI. If in doubt, try looking, e.g., into the ngx_http_proxy_create_request() function to see how it's done there. -- Maxim Dounin http://nginx.org/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From vishal.mestri at cloverinfotech.com Wed Jun 4 04:48:42 2014 From: vishal.mestri at cloverinfotech.com (Vishal Mestri) Date: Wed, 04 Jun 2014 10:18:42 +0530 (IST) Subject: Issue nginx - ajax In-Reply-To: Message-ID: Hi Jonathan Matthews, Thank you for your valuable comments. I understand what you would like to suggest, but we are using a self-signed certificate just for the trial demo.
Once UAT is done, we would be using actual certificate, where I guess we will not face any issue. Thanks & Regards, Vishal Mestri ----- Original Message ----- From: "Jonathan Matthews" To: nginx at nginx.org Sent: Tuesday, June 3, 2014 5:35:29 PM Subject: Re: Issue nginx - ajax On 3 June 2014 11:58, Vishal Mestri wrote: > Hi B.R. and all, > > > Really thank you for your support till now. > > > We were able to resolve issue on IE as well as on Firefox. > > > we did following settings:- > > > 1) IE > > We added my website to secure site list. > > Post that, I imported certificate to "Trusted Root Certiication > Authorities". And that's a solution for /all/ clients who'll be accessing this site? You're going to make them install your *site* cert as a root CA? I think you may have made a mistake here. At the very least, you're doing the wrong thing. > 2) I used "security exception option" by adding same certificate twice by > accessing two different ports 6401 and 443. Ditto. I'd keep working on this, if it were me. YMMV. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Wed Jun 4 09:11:20 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 4 Jun 2014 10:11:20 +0100 Subject: Issue nginx - ajax In-Reply-To: References: Message-ID: On 4 June 2014 05:48, Vishal Mestri wrote: > Hi Jonathan Matthews, > > Thank you for your valuable comments. > I understand , what you would like to suggest, but we are using self-signed > certificate just for trial demo. > > Once UAT is done, we would be using actual certificate, where I guess we > will not face any issue. The 2 problems I have seen you state in this thread are that 1) your error logs are full of "upstream prematurely closed connection while reading upstream", and 2) this problem doesn't go away when you use HTTP instead of HTTPS. I am unsure why you think the provenance of your SSL certificate has *anything* to do with these two error conditions. I suggest to you that it /really/ doesn't; and that you haven't found the root cause of your problem. Good luck with it, J From nginx-forum at nginx.us Wed Jun 4 11:15:17 2014 From: nginx-forum at nginx.us (chrismcv) Date: Wed, 04 Jun 2014 07:15:17 -0400 Subject: Can I use ssl_verify_client without a prompt Message-ID: Hi, I'm using ssl_client_certificate, I want to set "proxy_set_header X-Cert-DN $ssl_client_s_dn;" when I proxy. I've ssl_verify_client set to optional - this all works great, but in the browser, the user gets a cert prompt. Is there a way I can avoid the cert prompt - so the header will get set if a cert is supplied, but otherwise it won't prompt the user? (So curl -e cert.crt https://mysite should get the cert header, but a browser shouldn't know about it). Cheers, Chris Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250627,250627#msg-250627 From nginx-forum at nginx.us Wed Jun 4 13:17:56 2014 From: nginx-forum at nginx.us (Godinho) Date: Wed, 04 Jun 2014 09:17:56 -0400 Subject: nginx Segmentation fault Message-ID: <3ea3501ac4ed08cc75d36d64fe933870.NginxMailingListEnglish@forum.nginx.org> I needed to use modsecurity so I compiled nginx and modsecurity. 
Modsecurity was compiled with options: ./configure --enable-standalone-module nginx with: ./configure --add-module=../modsecurity-2.8.0/nginx/modsecurity/ When I try to test my configuration I have this: [root at nginx1 nginx]# /usr/local/nginx/sbin/nginx -t Segmentation fault from message logs: Jun 4 13:57:43 nginx1 kernel: nginx[12229]: segfault at 410 ip 00007f9088569a32 sp 00007fffc90ccf58 error 4 in libc-2.12.so[7f90884e0000+18b000] Jun 4 13:57:44 nginx1 kernel: nginx[12230]: segfault at 410 ip 00007f6626c93a32 sp 00007fff4fdd7cf8 error 4 in libc-2.12.so[7f6626c0a000+18b000] Update: it seems it is a modsecurity problem; compiled without it (./configure --with-http_ssl_module) it works just fine... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250629,250629#msg-250629 From nginx-forum at nginx.us Wed Jun 4 13:24:49 2014 From: nginx-forum at nginx.us (yarek) Date: Wed, 04 Jun 2014 09:24:49 -0400 Subject: How to install Nginx from source and avoid the OpenSSL Bug ? Message-ID: Hi, I am new to nginx and Linux. I tried to install nginx from source (nginx + rtmp module) following many tutorials on my Debian, and all give me the same error: ./configure --add-module=../nginx-rtmp-module-master make -f objs/Makefile make[1]: Entering directory '/root/nginx/nginx-1.4.3' cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I/root/nginx/nginx-rtmp-module/ -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs \ -o objs/src/event/ngx_event_openssl.o \ src/event/ngx_event_openssl.c src/event/ngx_event_openssl.c: In function ngx_ssl_create: src/event/ngx_event_openssl.c:189:5: error: 'SSL_OP_MSIE_SSLV2_RSA_PADDING' undeclared (first use in this function) src/event/ngx_event_openssl.c:189:5: note: each undeclared identifier is reported only once for each function it appears in make[1]: *** [objs/src/event/ngx_event_openssl.o] Error 1 make[1]: Leaving directory '/root/nginx/nginx-1.4.3' make: *** [build] Error 2 It seems the error comes from: Planned removal of SSL_OP_MSIE_SSLV2_RSA_PADDING breaks dependent software if you are using OpenSSL 1.0.2 or higher. Any idea on how I can fix that? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250630,250630#msg-250630 From kurt at x64architecture.com Wed Jun 4 13:27:42 2014 From: kurt at x64architecture.com (Kurt Cancemi) Date: Wed, 4 Jun 2014 09:27:42 -0400 Subject: nginx Segmentation fault In-Reply-To: References: <3ea3501ac4ed08cc75d36d64fe933870.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, this is unrelated to nginx and has to do with mod_security. There is an alternative, if it suits your needs, called naxsi. Regards, Kurt Cancemi http://www.getwnmp.org On Jun 4, 2014 9:18 AM, "Godinho" wrote: I needed to use modsecurity so I compiled nginx and modsecurity. Modsecurity was compiled with options: ./configure --enable-standalone-module nginx with: ./configure --add-module=../modsecurity-2.8.0/nginx/modsecurity/ When I try to test my configuration I have this: [root at nginx1 nginx]# /usr/local/nginx/sbin/nginx -t Segmentation fault from message logs: Jun 4 13:57:43 nginx1 kernel: nginx[12229]: segfault at 410 ip 00007f9088569a32 sp 00007fffc90ccf58 error 4 in libc-2.12.so[7f90884e0000+18b000] Jun 4 13:57:44 nginx1 kernel: nginx[12230]: segfault at 410 ip 00007f6626c93a32 sp 00007fff4fdd7cf8 error 4 in libc-2.12.so[7f6626c0a000+18b000] Update: it seems it is a modsecurity problem; compiled without it (./configure --with-http_ssl_module) it works just fine...
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250629,250629#msg-250629 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Wed Jun 4 13:33:04 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 4 Jun 2014 15:33:04 +0200 Subject: How to install Nginx from source and avoid the OpenSSL Bug ? In-Reply-To: References: Message-ID: Hi, > How to install Nginx from source and avoid the OpenSSL Bug ? What openssl bug are you talking about? Debian contains all important fixes afaik. > It seems error comes from : > Planned removal of SSL_OP_MSIE_SSLV2_RSA_PADDING breaks dependent software > if you are using OpenSSL 1.0.2 or higher. > > Any idea on how do I fix that ? It was already fixed 9 months ago: http://hg.nginx.org/nginx/rev/a73678f5f96f Use a recent nginx tarball. Regards, Lukas From reallfqq-nginx at yahoo.fr Wed Jun 4 13:50:55 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 4 Jun 2014 15:50:55 +0200 Subject: How to install Nginx from source and avoid the OpenSSL Bug ? In-Reply-To: References: Message-ID: On Wed, Jun 4, 2014 at 3:33 PM, Lukas Tribus wrote: > > How to install Nginx from source and avoid the OpenSSL Bug ? > > What openssl bug are you talking about? Debian contains all > important fixes afaik. > I think 'yarek' tries to build nginx with a 3rd-party program. I'd suggest using either the latest stable (v1.6.0) or mainline (v1.7.1) source. v1.4.3 is pretty old now and is deprecated. Btw, nginx links the OpenSSL library dynamically, so the bug has never lain inside nginx. It depends on the version of OpenSSL which has been used to compile nginx (since using a version other than the one used for compilation at run time might fail/introduce problems). > It seems error comes from : > > Planned removal of SSL_OP_MSIE_SSLV2_RSA_PADDING breaks dependent > software > > if you are using OpenSSL 1.0.2 or higher. > > > > Any idea on how do I fix that ? > > It was already fixed 9 months ago: > http://hg.nginx.org/nginx/rev/a73678f5f96f > > Use a recent nginx tarball. > 'yarek', you could have compared the error message triggered by the source you were using with the current ngx_event_openssl.c source file. You would have seen that the deprecation of the constant you triggered is handled by a check for its existence. Lukas has been kind enough to provide you with the exact commit introducing this change. To sum up: - use a recent/supported source - use an unaffected version of OpenSSL when compiling your nginx binary. All major distros (including Debian) have fixed their repositories with corrected versions for a long time now. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Wed Jun 4 14:52:19 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 4 Jun 2014 16:52:19 +0200 Subject: How to install Nginx from source and avoid the OpenSSL Bug ? In-Reply-To: References: , , Message-ID: Hi, >> How to install Nginx from source and avoid the OpenSSL Bug ? > > What openssl bug are you talking about? Debian contains all > important fixes afaik. > > I think 'yarek' tries to build nginx with a 3rd-party program. Just installing libssl-dev from the debian repository would have been enough then. Also using: aptitude build-dep nginx is a more convenient way to install all source dependencies.
Building openssl on his own without understanding system paths is dangerous and will probably break his system. Lukas From rpaprocki at fearnothingproductions.net Wed Jun 4 15:00:36 2014 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Wed, 04 Jun 2014 08:00:36 -0700 Subject: nginx Segmentation fault In-Reply-To: References: <3ea3501ac4ed08cc75d36d64fe933870.NginxMailingListEnglish@forum.nginx.org> Message-ID: <538F3494.5090804@fearnothingproductions.net> Can you post a full core dump? Did you verify the mod_security tarball you downloaded? Can detail the steps taken to build that module? What version of nginx are you trying to build? On 6/4/2014 06:27, Kurt Cancemi wrote: > > Hello, this is unrelated to nginx and has to do with mod_security. > There is an alternative if it suits your needs called naxsi. > > Regards, > Kurt Cancemi > http://www.getwnmp.org > > On Jun 4, 2014 9:18 AM, "Godinho" > wrote: > > I needed to use modsecurity so I compiled nginx and modsecurity. > > Modsecurity was compiled with options: ./configure > --enable-standalone-module > > nginx with: ./configure > --add-module=../modsecurity-2.8.0/nginx/modsecurity/ > > When I try to test my configuration I have this: > > [root at nginx1 nginx]# /usr/local/nginx/sbin/nginx -t > Segmentation fault > > from message logs: > Jun 4 13:57:43 nginx1 kernel: nginx[12229]: segfault at 410 ip > 00007f9088569a32 sp 00007fffc90ccf58 error 4 in > libc-2.12.so [7f90884e0000+18b000] > Jun 4 13:57:44 nginx1 kernel: nginx[12230]: segfault at 410 ip > 00007f6626c93a32 sp 00007fff4fdd7cf8 error 4 in > libc-2.12.so [7f6626c0a000+18b000] > > Update: it seams it a modsecurity problem, copiled without it > (./configure > --with-http_ssl_module) work just fine... > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,250629,250629#msg-250629 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Jun 4 15:04:59 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 4 Jun 2014 19:04:59 +0400 Subject: Getting rewritten and encoded/escaped url in nginx module In-Reply-To: <1401829716.69686.YahooMailNeo@web163501.mail.gq1.yahoo.com> References: <1401813761.57691.YahooMailNeo@web163503.mail.gq1.yahoo.com> <20140603171728.GU1849@mdounin.ru> <1401829716.69686.YahooMailNeo@web163501.mail.gq1.yahoo.com> Message-ID: <20140604150459.GA1849@mdounin.ru> Hello! On Tue, Jun 03, 2014 at 02:08:36PM -0700, Jayadev C wrote: > The problem is ngx_http_proxy doesn't do that either, once > rewrite is applied the url remains decoded (or I am not reading > the code correctly). The proxy does ngx_escape_uri() if URI was rewritten. It has to, as unencoded URI can't be used in a HTTP request. Take a look at the ngx_http_proxy_create_request() function as previously suggested. >?In fact escaping uri back is a bit tricky > since you have to split the rewritten url to segments and apply > encoding again (which would require looking at the original url > to know the correct number of '/'s). As long as there were encoded slashes, the encoding of these slashes will be lost if URI was rewritten. This is the expected behaviour. 
-- Maxim Dounin http://nginx.org/ From kyprizel at gmail.com Wed Jun 4 15:05:27 2014 From: kyprizel at gmail.com (kyprizel) Date: Wed, 4 Jun 2014 19:05:27 +0400 Subject: nginx Segmentation fault In-Reply-To: <538F3494.5090804@fearnothingproductions.net> References: <3ea3501ac4ed08cc75d36d64fe933870.NginxMailingListEnglish@forum.nginx.org> <538F3494.5090804@fearnothingproductions.net> Message-ID: I think this bug was fixed in nginx_refactoring tree. On Wed, Jun 4, 2014 at 7:00 PM, Robert Paprocki < rpaprocki at fearnothingproductions.net> wrote: > Can you post a full core dump? Did you verify the mod_security tarball > you downloaded? Can detail the steps taken to build that module? What > version of nginx are you trying to build? > > On 6/4/2014 06:27, Kurt Cancemi wrote: > > Hello, this is unrelated to nginx and has to do with mod_security. There > is an alternative if it suits your needs called naxsi. > > Regards, > Kurt Cancemi > http://www.getwnmp.org > On Jun 4, 2014 9:18 AM, "Godinho" wrote: > > I needed to use modsecurity so I compiled nginx and modsecurity. > > Modsecurity was compiled with options: ./configure > --enable-standalone-module > > nginx with: ./configure > --add-module=../modsecurity-2.8.0/nginx/modsecurity/ > > When I try to test my configuration I have this: > > [root at nginx1 nginx]# /usr/local/nginx/sbin/nginx -t > Segmentation fault > > from message logs: > Jun 4 13:57:43 nginx1 kernel: nginx[12229]: segfault at 410 ip > 00007f9088569a32 sp 00007fffc90ccf58 error 4 in > libc-2.12.so[7f90884e0000+18b000] > Jun 4 13:57:44 nginx1 kernel: nginx[12230]: segfault at 410 ip > 00007f6626c93a32 sp 00007fff4fdd7cf8 error 4 in > libc-2.12.so[7f6626c0a000+18b000] > > Update: it seams it a modsecurity problem, copiled without it (./configure > --with-http_ssl_module) work just fine... > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,250629,250629#msg-250629 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From defan at nginx.com Wed Jun 4 15:12:43 2014 From: defan at nginx.com (Andrei Belov) Date: Wed, 4 Jun 2014 19:12:43 +0400 Subject: nginx Segmentation fault In-Reply-To: <3ea3501ac4ed08cc75d36d64fe933870.NginxMailingListEnglish@forum.nginx.org> References: <3ea3501ac4ed08cc75d36d64fe933870.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5D405364-E117-43D8-B119-71FDFC08E3AA@nginx.com> Hi, there is a lot of open issues with ModSecurity and nginx: https://github.com/SpiderLabs/ModSecurity/issues?labels=Platform+-+Nginx&state=open Some of them have been already fixed in nginx_refactoring branch: https://github.com/SpiderLabs/ModSecurity/tree/nginx_refactoring I spent some time experimenting with this branch locally, and added a number of improvements to the code: https://github.com/defanator/ModSecurity/tree/nginx_refactoring A number of patches were already imported into SpiderLabs/nginx_refactoring repository, others are under review and testing. Please feel free to try - any feedback will be greatly appreciated! Cheers, ? 
Andrei Belov http://nginx.com/ On 04 Jun 2014, at 17:17, Godinho wrote: > I needed to use modsecurity so I compiled nginx and modsecurity. > > Modsecurity was compiled with options: ./configure > --enable-standalone-module > > nginx with: ./configure > --add-module=../modsecurity-2.8.0/nginx/modsecurity/ > > When I try to test my configuration I have this: > > [root at nginx1 nginx]# /usr/local/nginx/sbin/nginx -t > Segmentation fault > > from message logs: > Jun 4 13:57:43 nginx1 kernel: nginx[12229]: segfault at 410 ip > 00007f9088569a32 sp 00007fffc90ccf58 error 4 in > libc-2.12.so[7f90884e0000+18b000] > Jun 4 13:57:44 nginx1 kernel: nginx[12230]: segfault at 410 ip > 00007f6626c93a32 sp 00007fff4fdd7cf8 error 4 in > libc-2.12.so[7f6626c0a000+18b000] > > Update: it seams it a modsecurity problem, copiled without it (./configure > --with-http_ssl_module) work just fine... > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250629,250629#msg-250629 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From kyprizel at gmail.com Wed Jun 4 15:58:22 2014 From: kyprizel at gmail.com (kyprizel) Date: Wed, 4 Jun 2014 19:58:22 +0400 Subject: nginx Segmentation fault In-Reply-To: <5D405364-E117-43D8-B119-71FDFC08E3AA@nginx.com> References: <3ea3501ac4ed08cc75d36d64fe933870.NginxMailingListEnglish@forum.nginx.org> <5D405364-E117-43D8-B119-71FDFC08E3AA@nginx.com> Message-ID: Andrei, have you checked issue 630? https://github.com/SpiderLabs/ModSecurity/issues/630 On Wed, Jun 4, 2014 at 7:12 PM, Andrei Belov wrote: > Hi, > > there is a lot of open issues with ModSecurity and nginx: > > https://github.com/SpiderLabs/ModSecurity/issues?labels=Platform+-+Nginx&state=open > > Some of them have been already fixed in nginx_refactoring branch: > https://github.com/SpiderLabs/ModSecurity/tree/nginx_refactoring > > I spent some time experimenting with this branch locally, > and added a number of improvements to the code: > https://github.com/defanator/ModSecurity/tree/nginx_refactoring > > A number of patches were already imported into SpiderLabs/nginx_refactoring > repository, others are under review and testing. > > Please feel free to try - any feedback will be greatly appreciated! > > Cheers, > > ? > Andrei Belov > http://nginx.com/ > > > On 04 Jun 2014, at 17:17, Godinho wrote: > > > I needed to use modsecurity so I compiled nginx and modsecurity. > > > > Modsecurity was compiled with options: ./configure > > --enable-standalone-module > > > > nginx with: ./configure > > --add-module=../modsecurity-2.8.0/nginx/modsecurity/ > > > > When I try to test my configuration I have this: > > > > [root at nginx1 nginx]# /usr/local/nginx/sbin/nginx -t > > Segmentation fault > > > > from message logs: > > Jun 4 13:57:43 nginx1 kernel: nginx[12229]: segfault at 410 ip > > 00007f9088569a32 sp 00007fffc90ccf58 error 4 in > > libc-2.12.so[7f90884e0000+18b000] > > Jun 4 13:57:44 nginx1 kernel: nginx[12230]: segfault at 410 ip > > 00007f6626c93a32 sp 00007fff4fdd7cf8 error 4 in > > libc-2.12.so[7f6626c0a000+18b000] > > > > Update: it seams it a modsecurity problem, copiled without it > (./configure > > --with-http_ssl_module) work just fine... 
> > > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,250629,250629#msg-250629 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From defan at nginx.com Wed Jun 4 16:10:00 2014 From: defan at nginx.com (Andrei Belov) Date: Wed, 4 Jun 2014 20:10:00 +0400 Subject: nginx Segmentation fault In-Reply-To: References: <3ea3501ac4ed08cc75d36d64fe933870.NginxMailingListEnglish@forum.nginx.org> <5D405364-E117-43D8-B119-71FDFC08E3AA@nginx.com> Message-ID: <028E5567-F32C-4A1D-BDA2-07CEE4949EDC@nginx.com> Not yet. Quick look makes me think that "client_body_in_file_only on;" might help. -- defan > On 04 ???? 2014 ?., at 19:58, kyprizel wrote: > > Andrei, have you checked issue 630? > > https://github.com/SpiderLabs/ModSecurity/issues/630 > > >> On Wed, Jun 4, 2014 at 7:12 PM, Andrei Belov wrote: >> Hi, >> >> there is a lot of open issues with ModSecurity and nginx: >> https://github.com/SpiderLabs/ModSecurity/issues?labels=Platform+-+Nginx&state=open >> >> Some of them have been already fixed in nginx_refactoring branch: >> https://github.com/SpiderLabs/ModSecurity/tree/nginx_refactoring >> >> I spent some time experimenting with this branch locally, >> and added a number of improvements to the code: >> https://github.com/defanator/ModSecurity/tree/nginx_refactoring >> >> A number of patches were already imported into SpiderLabs/nginx_refactoring >> repository, others are under review and testing. >> >> Please feel free to try - any feedback will be greatly appreciated! >> >> Cheers, >> >> ? >> Andrei Belov >> http://nginx.com/ >> >> >> On 04 Jun 2014, at 17:17, Godinho wrote: >> >> > I needed to use modsecurity so I compiled nginx and modsecurity. >> > >> > Modsecurity was compiled with options: ./configure >> > --enable-standalone-module >> > >> > nginx with: ./configure >> > --add-module=../modsecurity-2.8.0/nginx/modsecurity/ >> > >> > When I try to test my configuration I have this: >> > >> > [root at nginx1 nginx]# /usr/local/nginx/sbin/nginx -t >> > Segmentation fault >> > >> > from message logs: >> > Jun 4 13:57:43 nginx1 kernel: nginx[12229]: segfault at 410 ip >> > 00007f9088569a32 sp 00007fffc90ccf58 error 4 in >> > libc-2.12.so[7f90884e0000+18b000] >> > Jun 4 13:57:44 nginx1 kernel: nginx[12230]: segfault at 410 ip >> > 00007f6626c93a32 sp 00007fff4fdd7cf8 error 4 in >> > libc-2.12.so[7f6626c0a000+18b000] >> > >> > Update: it seams it a modsecurity problem, copiled without it (./configure >> > --with-http_ssl_module) work just fine... >> > >> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250629,250629#msg-250629 >> > >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> > >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amikamsnir at gmail.com Wed Jun 4 16:15:40 2014 From: amikamsnir at gmail.com (Amikam Snir) Date: Wed, 4 Jun 2014 19:15:40 +0300 Subject: sub-request to external resource Message-ID: Hi all, How can I make sub-request to external resource (without returning it to the user)? The following commands are used only for internal resources: ngx.location.capture ngx.exec any idea? :-) Thanks in advance, -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Wed Jun 4 18:50:08 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 4 Jun 2014 23:50:08 +0500 Subject: Caching servers in Local ISPs !! In-Reply-To: References: <2d390b3d32431d04dd610eefbd2f19c0.NginxMailingListEnglish@forum.nginx.org> Message-ID: @itpp, i just used your method try_files and it worked flawlessly :). Following is the testing config : server { listen 80; server_name domain.com; root /var/www/html/files; location / { root /var/www/html/files; try_files $uri @getfrom_origin; } location @getfrom_origin { proxy_pass http://127.0.0.1:8080; } } if proxy_pass worked for localhost, i hope it will also work for remote host to forward request if the file doesn't exist on local caching server. :-) Would you suggest me to add some more configs for tweaking on nginx ? Btw, proxy_pass should only be for mp4 and jpeg, cause the caching is only for video files. Should i use rsync or lsync for mirroring the files between Origin and caching server ? Suggestion will be highly appreciated. Regards. Shahzaib On Wed, May 28, 2014 at 4:19 PM, shahzaib shahzaib wrote: > Right. I'll proceed with my research and get back to you with better > approach . :) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jun 4 19:19:43 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 04 Jun 2014 15:19:43 -0400 Subject: Caching servers in Local ISPs !! In-Reply-To: References: Message-ID: shahzaib1232 Wrote: ------------------------------------------------------- > @itpp, i just used your method try_files and it worked flawlessly :). > Following is the testing config : > > server { > listen 80; > server_name domain.com; > root /var/www/html/files; > > location / { location ~* (\.mp3|\.avi|\.mp4)$ { > Should i use rsync or lsync for mirroring the files between Origin and > caching server ? Whatever works for you, I'd prefer rsync since that's easier to schedule for off-peek hours. Also sync to a temp folder and move after completion or nginx will attempt to send partial files. see also http://wanproxy.org/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249997,250645#msg-250645 From shahzaib.cb at gmail.com Wed Jun 4 19:23:41 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 5 Jun 2014 00:23:41 +0500 Subject: Caching servers in Local ISPs !! In-Reply-To: References: Message-ID: >>Also sync to a temp folder and move after completion or nginx will attempt to send partial files. Oh right. Thanks for quick help and suggestion :). I'll look into wanproxy now. On Thu, Jun 5, 2014 at 12:19 AM, itpp2012 wrote: > shahzaib1232 Wrote: > ------------------------------------------------------- > > @itpp, i just used your method try_files and it worked flawlessly :). 
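(Back to the earlier sub-request question: ngx.location.capture() and ngx.exec() only reach locations of the same server, so the usual trick is to capture an internal location that proxies out to the external resource. A sketch, where the hostname and the /_external/ prefix are just examples:

    location /_external/ {
        internal;
        proxy_pass http://backend.example.com/;   # /_external/foo  ->  http://backend.example.com/foo
    }

    -- in Lua, nothing of this is returned to the user:
    local res = ngx.location.capture("/_external/foo")
    -- res.status, res.header and res.body are then available for further processing

A resolver directive may be needed if the upstream hostname cannot be resolved at startup.)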
> > Following is the testing config : > > > > server { > > listen 80; > > server_name domain.com; > > root /var/www/html/files; > > > > location / { > > location ~* (\.mp3|\.avi|\.mp4)$ { > > > Should i use rsync or lsync for mirroring the files between Origin and > > caching server ? > > Whatever works for you, I'd prefer rsync since that's easier to schedule > for > off-peek hours. > Also sync to a temp folder and move after completion or nginx will attempt > to send partial files. > see also http://wanproxy.org/ > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249997,250645#msg-250645 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jun 4 19:29:47 2014 From: nginx-forum at nginx.us (Spacedust) Date: Wed, 04 Jun 2014 15:29:47 -0400 Subject: [alert] 12928#0: worker process 3958 exited on signal 11 Message-ID: <7fb4a3505cf38f408a87f525bc3b25da.NginxMailingListEnglish@forum.nginx.org> I use nginx 1.7.1 as frontend proxy to Apache 2.2.27 + php-fpm 5.5.13. When I add new domain, then just reload nginx it throwing errors like "connection reset" etc. Error log is full of something like this: tail -f /var/log/nginx/error.log 2014/06/04 11:36:23 [alert] 12928#0: worker process 3958 exited on signal 11 2014/06/04 11:36:23 [alert] 12928#0: worker process 3959 exited on signal 11 2014/06/04 11:36:23 [alert] 12928#0: worker process 3992 exited on signal 11 2014/06/04 11:36:23 [alert] 12928#0: worker process 3982 exited on signal 11 2014/06/04 11:36:24 [alert] 12928#0: worker process 3993 exited on signal 11 2014/06/04 11:36:24 [alert] 12928#0: worker process 3995 exited on signal 11 2014/06/04 11:36:25 [alert] 12928#0: worker process 4001 exited on signal 11 2014/06/04 11:36:25 [alert] 12928#0: worker process 3994 exited on signal 11 2014/06/04 11:36:25 [alert] 12928#0: worker process 4002 exited on signal 11 2014/06/04 11:36:26 [alert] 12928#0: worker process 4021 exited on signal 11 Then I have to restart nginx, and sometimes even killall -9 nginx then restart, because then it shows something like this: 2014/06/04 21:16:30 [emerg] 11491#0: bind() to 0.0.0.0:443 failed (98: Address already in use) 2014/06/04 21:16:30 [emerg] 11491#0: bind() to [::]:443 failed (98: Address already in use) 2014/06/04 21:16:35 [emerg] 11754#0: bind() to 0.0.0.0:80 failed (98: Address already in use) 2014/06/04 21:16:35 [emerg] 11754#0: bind() to [::]:80 failed (98: Address already in use) 2014/06/04 21:16:35 [emerg] 11754#0: bind() to 0.0.0.0:443 failed (98: Address already in use) 2014/06/04 21:16:35 [emerg] 11754#0: bind() to [::]:443 failed (98: Address already in use) 2014/06/04 21:16:35 [emerg] 11754#0: bind() to 0.0.0.0:80 failed (98: Address already in use) 2014/06/04 21:16:35 [emerg] 11754#0: bind() to [::]:80 failed (98: Address already in use) 2014/06/04 21:16:35 [emerg] 11754#0: bind() to 0.0.0.0:443 failed (98: Address already in use) 2014/06/04 21:16:35 [emerg] 11754#0: bind() to [::]:443 failed (98: Address already in use) How to resolve this ? 
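(As a first check when bind() reports the address already in use, it helps to see which process is still holding the sockets; standard Linux tooling, for example:

    netstat -lntp | egrep ':80 |:443 '
    # or
    ss -lntp | egrep ':80 |:443 '

Once the stale workers are gone a normal start binds cleanly again; the signal 11 exits themselves are a separate problem.)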
My config is as follows: user nginx; worker_processes 8; worker_rlimit_nofile 400000; pid /var/run/nginx.pid; events { worker_connections 8192; use epoll; } http { add_header Cache-Control public; server_names_hash_max_size 4096; server_names_hash_bucket_size 2048; types_hash_bucket_size 64; types_hash_max_size 2048; client_header_buffer_size 2k; client_header_timeout 180s; client_body_timeout 180s; send_timeout 180s; client_max_body_size 64M; client_body_buffer_size 128k; sendfile on; tcp_nopush on; tcp_nodelay on; server_tokens off; include '/etc/nginx/conf.d/*.conf'; } include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; #access_log /var/log/nginx/access.log main; error_log /var/log/nginx/error.log alert; ### MR - must be using nginx-special (including ngx_http_log_request_speed) ## just enough remove # below for enable; only request > 5000 miliseconds write to error.log #log_request_speed_filter on; #log_request_speed_filter_timeout 5000; gzip on; gzip_static on; gzip_min_length 1024; gzip_comp_level 4; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; keepalive_timeout 180; limit_conn_zone $binary_remote_addr zone=addr:10m; limit_rate_after 1000m; limit_rate 12500k; proxy_cache_path /dev/shm/nginx-proxy levels=1:2 keys_zone=pcache:8m max_size=1000m inactive=600m; proxy_temp_path /dev/shm/nginx 1 2; fastcgi_cache_path /dev/shm/nginx-fastcgi levels=1:2 keys_zone=fcache:8m max_size=1000m inactive=600m; fastcgi_temp_path /dev/shm/nginx 1 2; include /home/nginx/conf/defaults/*.conf; include /home/nginx/conf/domains/*.conf; Example domain config: ### begin - web of 'vmax24.pl' - do not remove/modify this line ## webmail for 'vmax24.pl' server { #disable_symlinks if_not_owner; listen 0.0.0.0:80; listen [::]:80; server_name webmail.vmax24.pl; index index.php index.html index.shtml index.htm default.htm Default.aspx Default.asp index.pl; set $rootdir '/home/kloxo/httpd/webmail/onlinemail'; root $rootdir; include '/home/nginx/conf/globals/custom.proxy.conf'; } ## web for 'vmax24.pl' server { #disable_symlinks if_not_owner; listen 0.0.0.0:80; listen [::]:80; server_name vmax24.pl www.vmax24.pl; index index.php index.html index.shtml index.htm default.htm Default.aspx Default.asp index.pl; set $domain 'vmax24.pl'; set $rootdir '/home/eremi21/vmax24.pl'; root $rootdir; set $user 'eremi21'; set $fpmport '57907'; include '/home/nginx/conf/globals/custom.proxy.conf'; include '/home/nginx/conf/globals/generic.conf'; } ## webmail for 'vmax24.pl' server { #disable_symlinks if_not_owner; listen 0.0.0.0:443; listen [::]:443; ssl on; ssl_certificate /home/kloxo/httpd/ssl/eth0___localhost.pem; ssl_certificate_key /home/kloxo/httpd/ssl/eth0___localhost.key; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; server_name webmail.vmax24.pl; index index.php index.html index.shtml index.htm default.htm Default.aspx Default.asp index.pl; set $rootdir '/home/kloxo/httpd/webmail/onlinemail'; root $rootdir; include '/home/nginx/conf/globals/custom.proxy.conf'; } ## web for 'vmax24.pl' server { #disable_symlinks if_not_owner; listen 0.0.0.0:443; listen [::]:443; ssl on; ssl_certificate /home/kloxo/httpd/ssl/eth0___localhost.pem; ssl_certificate_key /home/kloxo/httpd/ssl/eth0___localhost.key; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; 
ssl_ciphers HIGH:!aNULL:!MD5; server_name vmax24.pl www.vmax24.pl; index index.php index.html index.shtml index.htm default.htm Default.aspx Default.asp index.pl; set $domain 'vmax24.pl'; set $rootdir '/home/eremi21/vmax24.pl'; root $rootdir; set $user 'eremi21'; set $fpmport '57907'; include '/home/nginx/conf/globals/custom.proxy.conf'; include '/home/nginx/conf/globals/generic.conf'; } ### end - web of 'vmax24.pl' - do not remove/modify this line Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250647,250647#msg-250647 From nginx-forum at nginx.us Wed Jun 4 19:37:32 2014 From: nginx-forum at nginx.us (Spacedust) Date: Wed, 04 Jun 2014 15:37:32 -0400 Subject: [alert] 12928#0: worker process 3958 exited on signal 11 In-Reply-To: <7fb4a3505cf38f408a87f525bc3b25da.NginxMailingListEnglish@forum.nginx.org> References: <7fb4a3505cf38f408a87f525bc3b25da.NginxMailingListEnglish@forum.nginx.org> Message-ID: <639b3e8d930c193fcdb4d8fd37d7c11e.NginxMailingListEnglish@forum.nginx.org> Also dmesg is full of this: nginx[9184]: segfault at 27 ip 0000000000447606 sp 00007fff9972a8b0 error 4 in nginx[400000+b6000] nginx[9599]: segfault at 19 ip 0000000000447606 sp 00007fff9972b7f0 error 4 in nginx[400000+b6000] nginx[9579]: segfault at 117 ip 0000000000447606 sp 00007fff9972b7f0 error 4 in nginx[400000+b6000] nginx[9600]: segfault at 19 ip 0000000000447606 sp 00007fff9972b7f0 error 4 in nginx[400000+b6000] nginx[9639]: segfault at ea78 ip 0000000000447606 sp 00007fff9972b7f0 error 4 in nginx[400000+b6000] nginx[9659]: segfault at 19 ip 0000000000447606 sp 00007fff9972a8b0 error 4 in nginx[400000+b6000] nginx[9601]: segfault at 19 ip 0000000000447606 sp 00007fff9972b7f0 error 4 in nginx[400000+b6000] nginx[9662]: segfault at 19 ip 0000000000447606 sp 00007fff9972b7f0 error 4 in nginx[400000+b6000] nginx[9663]: segfault at 19 ip 0000000000447606 sp 00007fff9972b7f0 error 4 in nginx[400000+b6000] nginx[9661]: segfault at 19 ip 0000000000447606 sp 00007fff9972b7f0 error 4 in nginx[400000+b6000] Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250647,250648#msg-250648 From piotr at cloudflare.com Wed Jun 4 19:47:51 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Wed, 4 Jun 2014 12:47:51 -0700 Subject: [alert] 12928#0: worker process 3958 exited on signal 11 In-Reply-To: <639b3e8d930c193fcdb4d8fd37d7c11e.NginxMailingListEnglish@forum.nginx.org> References: <7fb4a3505cf38f408a87f525bc3b25da.NginxMailingListEnglish@forum.nginx.org> <639b3e8d930c193fcdb4d8fd37d7c11e.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hey, assuming this is your whole config, could you either uncomment the access_log directive or add "access_log off"? 
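For instance, either of these in the http block should do for a quick test, "main" being the log_format from the posted config:

    access_log  /var/log/nginx/access.log  main;
    # or, to take logging out of the picture entirely:
    access_log  off;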
I'm pretty sure you're hitting the bug introduced in 1.7.0: http://mailman.nginx.org/pipermail/nginx-devel/2014-June/005430.html Best regards, Piotr Sikora From vl at nginx.com Wed Jun 4 19:48:05 2014 From: vl at nginx.com (Homutov Vladimir) Date: Wed, 04 Jun 2014 23:48:05 +0400 Subject: [alert] 12928#0: worker process 3958 exited on signal 11 In-Reply-To: <639b3e8d930c193fcdb4d8fd37d7c11e.NginxMailingListEnglish@forum.nginx.org> References: <7fb4a3505cf38f408a87f525bc3b25da.NginxMailingListEnglish@forum.nginx.org> <639b3e8d930c193fcdb4d8fd37d7c11e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <538F77F5.6070508@nginx.com> On 04.06.2014 23:37, Spacedust wrote: > Also dmesg is full of this: > > nginx[9184]: segfault at 27 ip 0000000000447606 sp 00007fff9972a8b0 error 4 > in nginx[400000+b6000] > nginx[9599]: segfault at 19 ip 0000000000447606 sp 00007fff9972b7f0 error 4 > in nginx[400000+b6000] > nginx[9579]: segfault at 117 ip 0000000000447606 sp 00007fff9972b7f0 error 4 > in nginx[400000+b6000] > nginx[9600]: segfault at 19 ip 0000000000447606 sp 00007fff9972b7f0 error 4 > in nginx[400000+b6000] > nginx[9639]: segfault at ea78 ip 0000000000447606 sp 00007fff9972b7f0 error > 4 in nginx[400000+b6000] > nginx[9659]: segfault at 19 ip 0000000000447606 sp 00007fff9972a8b0 error 4 > in nginx[400000+b6000] > nginx[9601]: segfault at 19 ip 0000000000447606 sp 00007fff9972b7f0 error 4 > in nginx[400000+b6000] > nginx[9662]: segfault at 19 ip 0000000000447606 sp 00007fff9972b7f0 error 4 > in nginx[400000+b6000] > nginx[9663]: segfault at 19 ip 0000000000447606 sp 00007fff9972b7f0 error 4 > in nginx[400000+b6000] > nginx[9661]: segfault at 19 ip 0000000000447606 sp 00007fff9972b7f0 error 4 > in nginx[400000+b6000] > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250647,250648#msg-250648 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > http://wiki.nginx.org/Debugging When asking for help with debugging please provide: nginx -V output full config debug log backtrace (if nginx exits on signal) From nginx-forum at nginx.us Wed Jun 4 19:53:28 2014 From: nginx-forum at nginx.us (Spacedust) Date: Wed, 04 Jun 2014 15:53:28 -0400 Subject: [alert] 12928#0: worker process 3958 exited on signal 11 In-Reply-To: References: Message-ID: <1ce031932f075681c1660077bcf62bc4.NginxMailingListEnglish@forum.nginx.org> Piotr Sikora Wrote: ------------------------------------------------------- > Hey, > assuming this is your whole config, could you either uncomment the > access_log directive or add "access_log off"? I'm pretty sure you're > hitting the bug introduced in 1.7.0: > http://mailman.nginx.org/pipermail/nginx-devel/2014-June/005430.html Yes. It started since 1.7.0 - all previous version were working fine with the same config ;) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250647,250651#msg-250651 From nginx-forum at nginx.us Thu Jun 5 03:36:24 2014 From: nginx-forum at nginx.us (TECK) Date: Wed, 04 Jun 2014 23:36:24 -0400 Subject: Nginx 1.7.0: location @php In-Reply-To: <538C1A3A.70306@alllangin.com> References: <538C1A3A.70306@alllangin.com> Message-ID: <4850f866b7b07c766893dfc40bdf7a60.NginxMailingListEnglish@forum.nginx.org> support Wrote: ------------------------------------------------------- > yes. > > update and test > > 02.06.2014 10:24, wishmaster ?????: > > I have the same problem in my php-application. 
Admin folder is > protected with auth_basic and the rest folders - without auth. I have > not found any solution except code duplication for php location. Can you please explain what to update and test? Is unclear. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250342,250654#msg-250654 From nginx-forum at nginx.us Thu Jun 5 03:41:56 2014 From: nginx-forum at nginx.us (TECK) Date: Wed, 04 Jun 2014 23:41:56 -0400 Subject: Nginx 1.7.0: location @php In-Reply-To: References: Message-ID: <6157c3147221a3d27a0a1078fb07b83b.NginxMailingListEnglish@forum.nginx.org> Jonathan Matthews Wrote: ------------------------------------------------------- > Fortunately, this being a *public* *mailing* *list*, and Francis > (along with almost every other subscriber) giving his time, experience > and opinions for free, you are definitely no worse off than when you > started. > > Actually, however, you're demonstrably better off, as Francis has both > attempted to help you go through the kind of troubleshooting process > that will serve you well if you apply it (Message-ID > 20140525132809.GN16942 at daoine.org) and ... > > > I think what I asked is > > very clear and simple: > > How do I avoid repeating a segment of configuration code assigned to > @php > > into various locations: > > .... he has given you the answer to this question - that you clearly > thought you began by asking, but didn't. In Message-ID > 20140531112745.GZ16942 at daoine.org. > > HTH. I'm sorry, I did not understood nothing. Can you provide an example of how to avoid repeating the php configuration through @php location? IMO the logic presented is clear: define a @php location and call it in other locations with try_files, like explained into Nginx documentation. In my case the end result is unexpected, the php file contents will be downloaded, instead of being executed. This works: http://pastie.org/private/5gkl1pti1co0onzhgb8jpg This does not: http://pastie.org/private/iuz58naozk6xo92ukqmwsg Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250342,250655#msg-250655 From nginx-forum at nginx.us Thu Jun 5 07:15:26 2014 From: nginx-forum at nginx.us (justink101) Date: Thu, 05 Jun 2014 03:15:26 -0400 Subject: Redirect to custom landing pages based on http referer Message-ID: <1612962ec04c5e4d96acccf8e471cb2f.NginxMailingListEnglish@forum.nginx.org> Is it possible using nginx to essentially look at the http referer header, and if its set to a specific value, and the page is index.html or /, redirect to a custom landing page. For example: # Psuedo code if($page = "index.html" and $http_referer ~* (www\.)?amazon.com.*) { rewrite ^ "our-amazon-landing-page.html" permanent; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250656,250656#msg-250656 From francis at daoine.org Thu Jun 5 07:35:47 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 5 Jun 2014 08:35:47 +0100 Subject: Redirect to custom landing pages based on http referer In-Reply-To: <1612962ec04c5e4d96acccf8e471cb2f.NginxMailingListEnglish@forum.nginx.org> References: <1612962ec04c5e4d96acccf8e471cb2f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140605073547.GC16942@daoine.org> On Thu, Jun 05, 2014 at 03:15:26AM -0400, justink101 wrote: Hi there, > Is it possible using nginx to essentially look at the http referer header, > and if its set to a specific value, and the page is index.html or /, > redirect to a custom landing page. Yes. 
With a few other assumptions: location = /index.html { if ($http_referer ~* (www.)?amazon.com) { return 301 /our-amazon-landing-page.html; } } That doesn't scale much beyond one match; if you need that, use a "map". f -- Francis Daly francis at daoine.org From r at roze.lv Thu Jun 5 08:31:49 2014 From: r at roze.lv (Reinis Rozitis) Date: Thu, 5 Jun 2014 11:31:49 +0300 Subject: Nginx 1.7.0: location @php In-Reply-To: <6157c3147221a3d27a0a1078fb07b83b.NginxMailingListEnglish@forum.nginx.org> References: <6157c3147221a3d27a0a1078fb07b83b.NginxMailingListEnglish@forum.nginx.org> Message-ID: > I'm sorry, I did not understood nothing. Can you provide an example of how > to avoid repeating the php configuration through @php location? As someone said in earlier mails you can always use include and put the repeating parts in seperate files. For example put this into php.conf: location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass fastcgi; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name; include fastcgi.conf; } And then your main config will look like: location ^~ /alpha { auth_basic "Restricted Access"; auth_basic_user_file htpasswd; try_files $uri $uri/ /alpha/index.php?$uri&$args; include php.conf; } location ^~ /beta { try_files $uri $uri/ /beta/index.php?$uri&$args; include php.conf; } rr From qwrules at gmail.com Thu Jun 5 08:45:19 2014 From: qwrules at gmail.com (KC) Date: Thu, 05 Jun 2014 10:45:19 +0200 Subject: Can't get nginx to work with Adminer nor phpMyadmin In-Reply-To: <20140602231248.GB16942@daoine.org> References: <538CBF76.5090208@gmail.com> <538CDA93.2010002@gmail.com> <20140602231248.GB16942@daoine.org> Message-ID: <53902E1F.9050002@gmail.com> Hello >> I want to be able to type http://ipaddress/phpmyadmin (or adminer) >> and see its respective interface. But as it is now, I only get 404 >> error. > > If you want http://ipaddress/phpmyadmin and http://ipaddress/adminer to > both work, you will probably want a single server{} block with all of > the configuration. > I must say I do not understand those config files. Say, I only want to make adminer work, and we have a segment that goes like that server { listen 80; server_name example.org www.example.org; ... } Where am I supposed to put the subfolder name here? Why are there those "example.org" domain names, if I am not even using a domain at the moment? All I want is to http://192.168.0.2 to point to the index file in root html folder and http://192.168.0.2/adminer to point to a html/adminer And reading http://nginx.org/en/docs/http/request_processing.html is confusing for that reason - I see no references to subfolders, just some domains. I suppose that the "location" bit is taking care of subfolders, but if this is so, then... that would mean there is nothing to be set in "server" blocks. I just don't get it. Cheers From nginx-forum at nginx.us Thu Jun 5 13:36:09 2014 From: nginx-forum at nginx.us (dirknel) Date: Thu, 05 Jun 2014 09:36:09 -0400 Subject: Google Analytics event generation by using Nginx logs Message-ID: <32155ce935473d4d2aff17227f119c36.NginxMailingListEnglish@forum.nginx.org> Hi This is my first posting here, so HI to everyone :) I would like to generate Google Analytics events for documents/videos served directly by Nginx. Currently the user of our system can either get the docs by going through our website or by going directly to the url. 
In the first instance the javascript on the webpage logs the Google Analytics event but I do not know how to do create an event when the user uses the static link to the document/video (that basically bypasses the website). A lot of information should be in the Nginx access logs and I would like to use that to generate a Google Analytics event. Has anyone tried to do this? Any suggestions would be helpful. Kind regards, Dirk Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250668,250668#msg-250668 From nginx-forum at nginx.us Thu Jun 5 15:10:16 2014 From: nginx-forum at nginx.us (MaxDudu) Date: Thu, 05 Jun 2014 11:10:16 -0400 Subject: DNS resolution of backends Message-ID: Hi guys We run a reverse proxy to Amazon S3 service. Sometime Amazon change their IPs and some of them may become unresponsive and render reservse proxy unusuable. Is there options to force nginx to re-resolve IPs of backends lets say each 5 mins ? Thanks ! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250669,250669#msg-250669 From r at roze.lv Thu Jun 5 15:20:35 2014 From: r at roze.lv (Reinis Rozitis) Date: Thu, 5 Jun 2014 18:20:35 +0300 Subject: DNS resolution of backends In-Reply-To: References: Message-ID: <8DF54348BD7D416A8A47FFB3F417EB49@MasterPC> > We run a reverse proxy to Amazon S3 service. Sometime Amazon change their > IPs and some of them may become unresponsive and render reservse proxy > unusuable. Is there options to force nginx to re-resolve IPs of backends > lets say each 5 mins ? Give the upstream{} block the hostnames of the instances and add an resolver ( http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver ). Optionally add valid=5m; if the TTL of the domain is not optimal. p.s. for the nginx to be more responsive you can also tweak the connect timeouts like proxy_connect_timeout ( http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_connect_timeout ) as the default is quite high - 60 seconds. rr From namxam at gmail.com Thu Jun 5 16:00:12 2014 From: namxam at gmail.com (Maximilian Schulz) Date: Thu, 5 Jun 2014 18:00:12 +0200 Subject: Dynamic config options from ENV Message-ID: Hi everybody, is it possible to set a nginx config variable from an ENV variable? I tried several thing, but none of them worked. The most promising was specifying "env MY_VAR;" at the top of the nginx.conf and then using its value via "my_option $ENV{"MY_VAR"};". But it didn't work. I always got an error about the line not being terminated. 1. Is it possible to set a config option via ENV varaibles 2. If so, does it have any performance implications? (The ENV wont change during run time) Thank you very much for your help. Max -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Thu Jun 5 16:21:29 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 5 Jun 2014 17:21:29 +0100 Subject: Dynamic config options from ENV In-Reply-To: References: Message-ID: On 5 June 2014 17:00, Maximilian Schulz wrote: > Hi everybody, > > is it possible to set a nginx config variable from an ENV variable? I tried > several thing, but none of them worked. The most promising was specifying > "env MY_VAR;" at the top of the nginx.conf and then using its value via > "my_option $ENV{"MY_VAR"};". But it didn't work. I always got an error about > the line not being terminated. > > 1. Is it possible to set a config option via ENV varaibles > 2. If so, does it have any performance implications? 
(The ENV wont change > during run time) You can't do this nicely with nginx. Your best option is to pre-process the config file each time you reload/etc, interpolating the envvars so that nginx itself sees static values. From nginx-forum at nginx.us Thu Jun 5 16:23:10 2014 From: nginx-forum at nginx.us (erankor2) Date: Thu, 05 Jun 2014 12:23:10 -0400 Subject: Asynchronous file processing Message-ID: Hi all, I'm working on a native nginx module in which I want to read an input file and perform some manipulations on its data. Since the files I'm reading are big and accessed over NFS, I want to use asynchronous I/O for reading them, and I want to implement it as a pipeline of chunks, i.e. read a chunk, process the chunk, add the chunk to the response chain. My questions are as follows: 1. Is there any sample / documentation / basic guidelines you can give me on how to progressively output data back to the client ? 2. Does nginx support generating responses with chunked-encoding (I may not be able to determine the response size without processing the whole file, and would prefer to avoid keeping the entire processed buffer in memory at once) ? 3. Can you send me some guidelines on how to perform asynchronous file read (my understanding is that I need to use ngx_file_aio_read and if I get NGX_AGAIN, set the handler/data members of file->aio to my completion callback) ? Thank you in advance, Eran Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250673,250673#msg-250673 From namxam at gmail.com Thu Jun 5 16:50:10 2014 From: namxam at gmail.com (Maximilian Schulz) Date: Thu, 5 Jun 2014 18:50:10 +0200 Subject: Dynamic config options from ENV In-Reply-To: References: Message-ID: Thank you Jonathan, I was afraid that this is the only option? really sad :( I am currently experimenting with docker and dynamic setups which work for all environments? too bad that we need to use a custom script do handle the problem at hand. But thank you very much. Max On Thu, Jun 5, 2014 at 6:21 PM, Jonathan Matthews wrote: > On 5 June 2014 17:00, Maximilian Schulz wrote: > > Hi everybody, > > > > is it possible to set a nginx config variable from an ENV variable? I > tried > > several thing, but none of them worked. The most promising was specifying > > "env MY_VAR;" at the top of the nginx.conf and then using its value via > > "my_option $ENV{"MY_VAR"};". But it didn't work. I always got an error > about > > the line not being terminated. > > > > 1. Is it possible to set a config option via ENV varaibles > > 2. If so, does it have any performance implications? (The ENV wont change > > during run time) > > You can't do this nicely with nginx. Your best option is to > pre-process the config file each time you reload/etc, interpolating > the envvars so that nginx itself sees static values. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
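(A concrete way to do that pre-processing, in case it helps anyone: keep a template with placeholders and render it when the container starts. envsubst ships with gettext; the variable and file names below are only examples:

    # nginx.conf.template contains e.g.:  proxy_pass http://${UPSTREAM_HOST}:${UPSTREAM_PORT};
    envsubst '${UPSTREAM_HOST} ${UPSTREAM_PORT}' \
        < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf
    nginx -g 'daemon off;'

Listing the variables explicitly keeps envsubst from touching nginx's own $variables in the template.)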
URL: From nginx-forum at nginx.us Thu Jun 5 18:33:16 2014 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 05 Jun 2014 14:33:16 -0400 Subject: [ANN] Windows nginx 1.7.2.2 RedKnight Message-ID: <36198a8c351e2300590a6b8183f30202.NginxMailingListEnglish@forum.nginx.org> 20:13 5-6-2014 nginx 1.7.2.2 RedKnight Based on nginx 1.7.2 (5-6-2014) with; + Openssl-1.0.1h (CVE-2014-0224, CVE-2014-0221, CVE-2014-0195, CVE-2014-0198, CVE-2010-5298, CVE-2014-3470) + New nginx Windows icon + Source changes back ported + Source changes add-on's back ported + Changes for nginx_basic: Source changes back ported * Scheduled release: no (openssl fixes) * Additional specifications: see 'Feature list' Builds can be found here: http://nginx-win.ecsds.eu/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250675,250675#msg-250675 From r at roze.lv Thu Jun 5 18:34:29 2014 From: r at roze.lv (Reinis Rozitis) Date: Thu, 5 Jun 2014 21:34:29 +0300 Subject: Dynamic config options from ENV In-Reply-To: References: Message-ID: <9A18A07BD4C14C57A3FE1B4C65CC116D@NeiRoze> > I am currently experimenting with docker and dynamic setups which work for > all environments? too bad that we need to use a custom script do handle > the problem at hand. Well probably not that bad if you can use something allready made: https://index.docker.io/u/shepmaster/nginx-template-image/ rr From nginx-forum at nginx.us Thu Jun 5 18:40:03 2014 From: nginx-forum at nginx.us (MaxDudu) Date: Thu, 05 Jun 2014 14:40:03 -0400 Subject: DNS resolution of backends In-Reply-To: <8DF54348BD7D416A8A47FFB3F417EB49@MasterPC> References: <8DF54348BD7D416A8A47FFB3F417EB49@MasterPC> Message-ID: <13c771ae230693a534eade5cd30fecd5.NginxMailingListEnglish@forum.nginx.org> Sounds exactly like what I'm looking for thank you Reinis ! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250669,250677#msg-250677 From kyprizel at gmail.com Thu Jun 5 21:04:43 2014 From: kyprizel at gmail.com (kyprizel) Date: Fri, 6 Jun 2014 01:04:43 +0400 Subject: nginx Segmentation fault In-Reply-To: <028E5567-F32C-4A1D-BDA2-07CEE4949EDC@nginx.com> References: <3ea3501ac4ed08cc75d36d64fe933870.NginxMailingListEnglish@forum.nginx.org> <5D405364-E117-43D8-B119-71FDFC08E3AA@nginx.com> <028E5567-F32C-4A1D-BDA2-07CEE4949EDC@nginx.com> Message-ID: No, it does not help. The problem somewhere in body reading/processing. On Wed, Jun 4, 2014 at 8:10 PM, Andrei Belov wrote: > Not yet. > > Quick look makes me think that "client_body_in_file_only on;" might help. > > -- defan > > On 04 ???? 2014 ?., at 19:58, kyprizel wrote: > > Andrei, have you checked issue 630? > > https://github.com/SpiderLabs/ModSecurity/issues/630 > > > On Wed, Jun 4, 2014 at 7:12 PM, Andrei Belov wrote: > >> Hi, >> >> there is a lot of open issues with ModSecurity and nginx: >> >> https://github.com/SpiderLabs/ModSecurity/issues?labels=Platform+-+Nginx&state=open >> >> Some of them have been already fixed in nginx_refactoring branch: >> https://github.com/SpiderLabs/ModSecurity/tree/nginx_refactoring >> >> I spent some time experimenting with this branch locally, >> and added a number of improvements to the code: >> https://github.com/defanator/ModSecurity/tree/nginx_refactoring >> >> A number of patches were already imported into >> SpiderLabs/nginx_refactoring >> repository, others are under review and testing. >> >> Please feel free to try - any feedback will be greatly appreciated! >> >> Cheers, >> >> ? 
>> Andrei Belov >> http://nginx.com/ >> >> >> On 04 Jun 2014, at 17:17, Godinho wrote: >> >> > I needed to use modsecurity so I compiled nginx and modsecurity. >> > >> > Modsecurity was compiled with options: ./configure >> > --enable-standalone-module >> > >> > nginx with: ./configure >> > --add-module=../modsecurity-2.8.0/nginx/modsecurity/ >> > >> > When I try to test my configuration I have this: >> > >> > [root at nginx1 nginx]# /usr/local/nginx/sbin/nginx -t >> > Segmentation fault >> > >> > from message logs: >> > Jun 4 13:57:43 nginx1 kernel: nginx[12229]: segfault at 410 ip >> > 00007f9088569a32 sp 00007fffc90ccf58 error 4 in >> > libc-2.12.so[7f90884e0000+18b000] >> > Jun 4 13:57:44 nginx1 kernel: nginx[12230]: segfault at 410 ip >> > 00007f6626c93a32 sp 00007fff4fdd7cf8 error 4 in >> > libc-2.12.so[7f6626c0a000+18b000] >> > >> > Update: it seams it a modsecurity problem, copiled without it >> (./configure >> > --with-http_ssl_module) work just fine... >> > >> > Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,250629,250629#msg-250629 >> > >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> > >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From black.fledermaus at arcor.de Fri Jun 6 07:48:52 2014 From: black.fledermaus at arcor.de (basti) Date: Fri, 06 Jun 2014 09:48:52 +0200 Subject: Deny all + Custom Error page Message-ID: <53917264.4000503@arcor.de> Hello, I try to block wildcard sub domains as follows: # block wildcard server { server_name ~^(.*)\.example\.com$ ; root /usr/share/nginx/www; error_page 403 /index.html; allow 127.0.0.1; deny all; access_log off; log_not_found off; } I always get the default "403 Forbidden" site of nginx. When "deny all" is removed it work as expected. Can anybody explain? And does anybody know a workaround? Best Regards; Basti From black.fledermaus at arcor.de Fri Jun 6 08:57:32 2014 From: black.fledermaus at arcor.de (basti) Date: Fri, 06 Jun 2014 10:57:32 +0200 Subject: Deny all + Custom Error page In-Reply-To: <53917264.4000503@arcor.de> References: <53917264.4000503@arcor.de> Message-ID: <5391827C.6060707@arcor.de> Here is my solution: server { server_name ~^(.*)\.example\.com$ ; return 200; deny all; access_log off; log_not_found off; } Am 06.06.2014 09:48, schrieb basti: > Hello, > > I try to block wildcard sub domains as follows: > > > # block wildcard > server { > server_name ~^(.*)\.example\.com$ ; > root /usr/share/nginx/www; > error_page 403 /index.html; > allow 127.0.0.1; > deny all; > access_log off; > log_not_found off; > } > > I always get the default "403 Forbidden" site of nginx. > When "deny all" is removed it work as expected. > > Can anybody explain? > And does anybody know a workaround? 
> > Best Regards; > Basti > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Fri Jun 6 09:16:43 2014 From: nginx-forum at nginx.us (kirimedia) Date: Fri, 06 Jun 2014 05:16:43 -0400 Subject: the http output chain is empty bug (nginx lua module) Message-ID: <0762076a534bfb820547e9aca335c9b8.NginxMailingListEnglish@forum.nginx.org> nginx >= 1.5.7 nginx-lua-module >= 0.9.8 (possibly older version) CentOS 6 # ./sbin/nginx -V nginx version: nginx/1.5.7 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) configure arguments: --add-module=../lua-nginx-module Config file user nginx; worker_processes 1; error_log /var/log/nginx/nginx-error.log error; events { use epoll; } http { gzip on; server { listen 80; root /usr/local/nginx/html; location = /empty/ { empty_gif; } location = /include/ { content_by_lua ' ngx.location.capture("/empty/") ngx.location.capture("/empty/") '; } location = /ssi.html { ssi on; } } } Content of /usr/local/nginx/html/ssi.html HEADER Or request http://localhost/ssi.html (with header Accept-Encoding: gzip) become blank response (without headers and without body) And in log [alert] 20457#0: *1 the http output chain is empty, client: 127.0.0.1, server: , request: "GET /ssi.html HTTP/1.1", subrequest: "/include/" How I can fix it? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250689,250689#msg-250689 From contact at jpluscplusm.com Fri Jun 6 09:24:10 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 6 Jun 2014 10:24:10 +0100 Subject: Deny all + Custom Error page In-Reply-To: <53917264.4000503@arcor.de> References: <53917264.4000503@arcor.de> Message-ID: On 6 Jun 2014 08:49, "basti" wrote: > > Hello, > > I try to block wildcard sub domains as follows: > > > # block wildcard > server { > server_name ~^(.*)\.example\.com$ ; > root /usr/share/nginx/www; > error_page 403 /index.html; > allow 127.0.0.1; > deny all; > access_log off; > log_not_found off; > } I'm sure there's a precedence rule that'll explain this but I don't have it to hand. However, have you considered merely telling that server{} to listen only on 127.0.0.1? You may also wish to look at the server_name documentation for the shorthand of "*.foo.com" instead of the regex you're using. Finally, if your aim is just to deny requests for hosts you haven't explicitly configured elsewhere in nginx's config file, I find the following to be a useful catchall. Use it alongside well-defined server_names in other server blocks. server { listen 80 default_server; server_name _; location / { return 404; } } HTH, J -------------- next part -------------- An HTML attachment was scrubbed... 
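(On the original question of why the custom page never shows up: the error_page redirect to /index.html is an internal redirect that runs through the access checks again, is denied again, and nginx then falls back to its built-in 403 page. Letting the error page location through breaks that loop; a sketch based on the posted config:

    server {
        server_name  ~^(.*)\.example\.com$ ;
        root         /usr/share/nginx/www;
        error_page   403 /index.html;
        allow        127.0.0.1;
        deny         all;

        location = /index.html {
            allow all;   # so the internal redirect for the 403 page is not denied as well
        }
    }

The "return 200;" workaround posted earlier also stops the default page, but it answers before the access rules ever run, so the deny all after it never takes effect.)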
URL: From nginx-forum at nginx.us Fri Jun 6 10:13:15 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 06 Jun 2014 06:13:15 -0400 Subject: the http output chain is empty bug (nginx lua module) In-Reply-To: <0762076a534bfb820547e9aca335c9b8.NginxMailingListEnglish@forum.nginx.org> References: <0762076a534bfb820547e9aca335c9b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7751874f37ebd4b6f5ef700cd0e2259a.NginxMailingListEnglish@forum.nginx.org> See http://trac.nginx.org/nginx/ticket/132 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250689,250692#msg-250692 From mdounin at mdounin.ru Fri Jun 6 10:37:26 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 6 Jun 2014 14:37:26 +0400 Subject: the http output chain is empty bug (nginx lua module) In-Reply-To: <7751874f37ebd4b6f5ef700cd0e2259a.NginxMailingListEnglish@forum.nginx.org> References: <0762076a534bfb820547e9aca335c9b8.NginxMailingListEnglish@forum.nginx.org> <7751874f37ebd4b6f5ef700cd0e2259a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140606103725.GF1849@mdounin.ru> Hello! On Fri, Jun 06, 2014 at 06:13:15AM -0400, itpp2012 wrote: > See http://trac.nginx.org/nginx/ticket/132 Unlikely it's related. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Jun 6 11:41:18 2014 From: nginx-forum at nginx.us (nginxsantos) Date: Fri, 06 Jun 2014 07:41:18 -0400 Subject: nginx channel Message-ID: <57fba2620e2097b231c5869e780250b9.NginxMailingListEnglish@forum.nginx.org> Would be great where channels are used, I am talking about ngx_channel? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250694,250694#msg-250694 From contact at jpluscplusm.com Fri Jun 6 12:03:37 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 6 Jun 2014 13:03:37 +0100 Subject: nginx channel In-Reply-To: <57fba2620e2097b231c5869e780250b9.NginxMailingListEnglish@forum.nginx.org> References: <57fba2620e2097b231c5869e780250b9.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 6 June 2014 12:41, nginxsantos wrote: > Would be great where channels are used, I am talking about ngx_channel? Please rearrange your words into a comprehensible sentence and/or question. Thank you. From nginx-forum at nginx.us Fri Jun 6 12:14:25 2014 From: nginx-forum at nginx.us (nginxsantos) Date: Fri, 06 Jun 2014 08:14:25 -0400 Subject: nginx channel In-Reply-To: References: Message-ID: <85df0db4459b62cce6972c2d7117d8e4.NginxMailingListEnglish@forum.nginx.org> I was not clear about the usage of ngx_channel? When each worker process is started, the function "ngx_pass_open_channel" is called. It is not clear to me where we do use the channels, I mean for what ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250694,250697#msg-250697 From mdounin at mdounin.ru Fri Jun 6 12:20:43 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 6 Jun 2014 16:20:43 +0400 Subject: nginx channel In-Reply-To: <85df0db4459b62cce6972c2d7117d8e4.NginxMailingListEnglish@forum.nginx.org> References: <85df0db4459b62cce6972c2d7117d8e4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140606122043.GI1849@mdounin.ru> Hello! On Fri, Jun 06, 2014 at 08:14:25AM -0400, nginxsantos wrote: > I was not clear about the usage of ngx_channel? When each worker process is > started, the function "ngx_pass_open_channel" is called. It is not clear to > me where we do use the channels, I mean for what ? Channels are used to pass control messages from master to workers. 
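(Seen from the outside, these control messages are what sits behind the usual signals and the -s switch, roughly:

    nginx -s reload    # master parses the new config, then tells the old workers to quit gracefully
    nginx -s reopen    # master asks every worker to reopen its log files
    nginx -s quit      # graceful shutdown of the master and all workers

)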
In particular, this is how master asks workers to shutdown, reopen logs and so on. It is planned that this infrastructure will also allow workers to pass various notifications from one process to others, though it's not something currently available. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Jun 6 12:33:24 2014 From: nginx-forum at nginx.us (nginxsantos) Date: Fri, 06 Jun 2014 08:33:24 -0400 Subject: nginx channel In-Reply-To: <20140606122043.GI1849@mdounin.ru> References: <20140606122043.GI1849@mdounin.ru> Message-ID: <271c4059807ad695606e9d296ec1e59c.NginxMailingListEnglish@forum.nginx.org> Thanks Maxim. It helps... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250694,250700#msg-250700 From mdounin at mdounin.ru Fri Jun 6 12:40:24 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 6 Jun 2014 16:40:24 +0400 Subject: DNS resolution of backends In-Reply-To: <8DF54348BD7D416A8A47FFB3F417EB49@MasterPC> References: <8DF54348BD7D416A8A47FFB3F417EB49@MasterPC> Message-ID: <20140606124024.GJ1849@mdounin.ru> Hello! On Thu, Jun 05, 2014 at 06:20:35PM +0300, Reinis Rozitis wrote: > >We run a reverse proxy to Amazon S3 service. Sometime Amazon change their > >IPs and some of them may become unresponsive and render reservse proxy > >unusuable. Is there options to force nginx to re-resolve IPs of backends > >lets say each 5 mins ? > > Give the upstream{} block the hostnames of the instances and add an resolver > ( http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver ). This won't make any difference. Names of servers in upstream{} blocks are resolved during configuration parsing, and won't be re-resolved till next configuration parsing. To ensure periodic hostname resolution, one have to use a hostname (not an upstream block) in proxy_pass, and variables in the proxy_pass directive. This way, nginx won't know a hostname in advance, and will have to use resolver to resolve it, resulting in a periodic hostname resolution. (Alternatively, a special "resolve" flag for servers in upstream{} blocks was recently introduced of the commercial subscription, see http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server. But this requires commercial subscription.) -- Maxim Dounin http://nginx.org/ From shahzaib.cb at gmail.com Fri Jun 6 13:36:27 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Fri, 6 Jun 2014 18:36:27 +0500 Subject: Caching servers in Local ISPs !! In-Reply-To: References: Message-ID: @itpp I am currenlty proceeding with proxy_cache method just because i had to done this in emergency mode due to boss pressure :-|. I have a quick question, can i make nginx to cache files for specific clients ? Like, if our caching servers are deployed by only single ISP named "ptcl". So if ip from ptcl client is browsing video, only his requested file should be cached not for any other client, does nginx support that ?? I know its kind of funny, but i've to complete this task :( On Thu, Jun 5, 2014 at 12:23 AM, shahzaib shahzaib wrote: > >>Also sync to a temp folder and move after completion or nginx will > attempt > to send partial files. > > Oh right. Thanks for quick help and suggestion :). I'll look into wanproxy > now. > > > > > On Thu, Jun 5, 2014 at 12:19 AM, itpp2012 wrote: > >> shahzaib1232 Wrote: >> ------------------------------------------------------- >> > @itpp, i just used your method try_files and it worked flawlessly :). 
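(To make the re-resolution point above concrete, the variable form looks roughly like this; the resolver address and the bucket name are examples only:

    resolver 8.8.8.8 valid=300s;

    location / {
        set $s3_origin "mybucket.s3.amazonaws.com";
        proxy_set_header Host $s3_origin;
        proxy_pass http://$s3_origin;
    }

Because proxy_pass contains a variable, nginx has to resolve the name at request time through the configured resolver, so the valid= value controls how long an address is reused instead of the address being fixed at configuration time.)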
>> > Following is the testing config : >> > >> > server { >> > listen 80; >> > server_name domain.com; >> > root /var/www/html/files; >> > >> > location / { >> >> location ~* (\.mp3|\.avi|\.mp4)$ { >> >> > Should i use rsync or lsync for mirroring the files between Origin and >> > caching server ? >> >> Whatever works for you, I'd prefer rsync since that's easier to schedule >> for >> off-peek hours. >> Also sync to a temp folder and move after completion or nginx will attempt >> to send partial files. >> see also http://wanproxy.org/ >> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,249997,250645#msg-250645 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roy at xdam.com Fri Jun 6 15:13:10 2014 From: roy at xdam.com (Roy Phillips) Date: Fri, 06 Jun 2014 11:13:10 -0400 Subject: NGINX Error when uploading images Message-ID: Hi all, We have a handful of users in the UK that are getting the error (screenshot attached) We narrowed it down to the ISP/Router ?Virgin BT? ISP or Home Hub 2.0. If they use another ISP or even tether to an iPhone it works. They also upgraded the Home Hub 2.0 to Home Hub 5.0 and it works. Do you know of a work around for this error by any chance? Thank you, Roy Phillips XDAM Support 239-791-9995 roy at xdam.com http://www.xdam.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Unknown[8].png Type: image/png Size: 13519 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Unknown.png Type: image/png Size: 48758 bytes Desc: not available URL: From nginx-forum at nginx.us Fri Jun 6 15:26:07 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 06 Jun 2014 11:26:07 -0400 Subject: Caching servers in Local ISPs !! In-Reply-To: References: Message-ID: shahzaib1232 Wrote: ------------------------------------------------------- > @itpp I am currenlty proceeding with proxy_cache method just because i > had > to done this in emergency mode due to boss pressure :-|. I have a > quick > question, can i make nginx to cache files for specific clients ? > > Like, if our caching servers are deployed by only single ISP named > "ptcl". > So if ip from ptcl client is browsing video, only his requested file > should > be cached not for any other client, does nginx support that ?? You could do this based on some IP ranges or via https://github.com/flant/nginx-http-rdns See http://serverfault.com/questions/380642/nginx-how-to-redirect-users-with-certain-ip-to-special-page and http://www.cyberciti.biz/faq/nginx-redirect-backend-traffic-based-upon-client-ip-address/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249997,250707#msg-250707 From mdounin at mdounin.ru Fri Jun 6 15:45:03 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 6 Jun 2014 19:45:03 +0400 Subject: NGINX Error when uploading images In-Reply-To: References: Message-ID: <20140606154503.GK1849@mdounin.ru> Hello! On Fri, Jun 06, 2014 at 11:13:10AM -0400, Roy Phillips wrote: > Hi all, > > We have a handful of users in the UK that are getting the error (screenshot > attached) We narrowed it down to the ISP/Router ?Virgin BT? ISP or Home Hub > 2.0. > > If they use another ISP or even tether to an iPhone it works. 
> > They also upgraded the Home Hub 2.0 to Home Hub 5.0 and it works. > > Do you know of a work around for this error by any chance? Unless you are using nginx and did something strange in the configuration, most relevant link I can think of is: http://nginx.org/en/docs/welcome_nginx_facebook.html Wikipedia article on the BT Home Hub suggests it has (or at least had) major problems with security, and this may be the reason: https://en.wikipedia.org/wiki/BT_Home_Hub#Criticism http://www.gnucitizen.org/blog/bt-home-flub-pwnin-the-bt-home-hub/ -- Maxim Dounin http://nginx.org/ From roy at xdam.com Fri Jun 6 15:56:06 2014 From: roy at xdam.com (Roy Phillips) Date: Fri, 06 Jun 2014 11:56:06 -0400 Subject: NGINX Error when uploading images In-Reply-To: <20140606154503.GK1849@mdounin.ru> References: <20140606154503.GK1849@mdounin.ru> Message-ID: Thanks, Our application doesn?t use NGINX or our hosting provider in the USA. When the user drags and drops images in the webpage app that?s when they get the error containing NGINX. We suspect their ISP or Router uses NGINX on the backend but can?t confirm with level 1 support over there. Is that possible? Why would NGINX come up otherwise? Thank you, Roy Phillips XDAM Support 239-791-9995 roy at xdam.com http://www.xdam.com On 6/6/14, 11:45 AM, "Maxim Dounin" wrote: >Hello! > >On Fri, Jun 06, 2014 at 11:13:10AM -0400, Roy Phillips wrote: > >> Hi all, >> >> We have a handful of users in the UK that are getting the error >>(screenshot >> attached) We narrowed it down to the ISP/Router ?Virgin BT? ISP or Home >>Hub >> 2.0. >> >> If they use another ISP or even tether to an iPhone it works. >> >> They also upgraded the Home Hub 2.0 to Home Hub 5.0 and it works. >> >> Do you know of a work around for this error by any chance? > >Unless you are using nginx and did something strange in the >configuration, most relevant link I can think of is: > >http://nginx.org/en/docs/welcome_nginx_facebook.html > >Wikipedia article on the BT Home Hub suggests it has (or at least >had) major problems with security, and this may be the reason: > >https://en.wikipedia.org/wiki/BT_Home_Hub#Criticism >http://www.gnucitizen.org/blog/bt-home-flub-pwnin-the-bt-home-hub/ > >-- >Maxim Dounin >http://nginx.org/ > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx From shahzaib.cb at gmail.com Fri Jun 6 15:56:36 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Fri, 6 Jun 2014 20:56:36 +0500 Subject: Caching servers in Local ISPs !! In-Reply-To: References: Message-ID: Thanks a lot itpp. :) I'll look into it and get back to you. Thanks again for quick solution :) On Fri, Jun 6, 2014 at 8:26 PM, itpp2012 wrote: > shahzaib1232 Wrote: > ------------------------------------------------------- > > @itpp I am currenlty proceeding with proxy_cache method just because i > > had > > to done this in emergency mode due to boss pressure :-|. I have a > > quick > > question, can i make nginx to cache files for specific clients ? > > > > Like, if our caching servers are deployed by only single ISP named > > "ptcl". > > So if ip from ptcl client is browsing video, only his requested file > > should > > be cached not for any other client, does nginx support that ?? 
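(For the per-ISP caching question quoted above: the geo module can flag which clients are allowed to use the cache, and proxy_no_cache / proxy_cache_bypass accept that flag. A sketch, with the cache zone name, the upstream name and the PTCL prefixes as placeholders:

    geo $skip_cache {
        default         1;
        203.0.113.0/24  0;   # PTCL ranges go here
    }

    location ~* \.(mp4|jpe?g)$ {
        proxy_cache         videocache;    # keys_zone defined via proxy_cache_path elsewhere
        proxy_cache_bypass  $skip_cache;   # non-PTCL clients never read from the cache
        proxy_no_cache      $skip_cache;   # ...and never write to it either
        proxy_pass          http://origin_server;
    }

Every client still gets the file; only whether it is cached differs per source address.)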
> > You could do this based on some IP ranges or via > https://github.com/flant/nginx-http-rdns > > See > > http://serverfault.com/questions/380642/nginx-how-to-redirect-users-with-certain-ip-to-special-page > and > > http://www.cyberciti.biz/faq/nginx-redirect-backend-traffic-based-upon-client-ip-address/ > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249997,250707#msg-250707 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jun 6 15:57:22 2014 From: nginx-forum at nginx.us (mfaridi) Date: Fri, 06 Jun 2014 11:57:22 -0400 Subject: arch linux , swf dir and forbidden error Message-ID: <83bd4fa8143cb6a8fab8f0ebf6409392.NginxMailingListEnglish@forum.nginx.org> I use arch linux and install nginx from arch repo , every thing is OK, I want use nginx for use play flash game and swf game , i download many swf files and make swf directory in /usr/share/nginx/html and put all swf files in swf directory and after that I set 755 for swf directory and set 644 for all swf files , but when I type in browser like firefox type http://127.0.0.1/swf I see this error 403 Forbidden and I can not play sw game but when I type http://127.0.0.1/swf/pacman.swf every thing is good and I can flash game what is problem ? I set 777 ,for folder and all swf but I see that problem again and I see 403 Forbidden Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250709,250709#msg-250709 From nginx-forum at nginx.us Fri Jun 6 21:16:23 2014 From: nginx-forum at nginx.us (MaxDudu) Date: Fri, 06 Jun 2014 17:16:23 -0400 Subject: DNS resolution of backends In-Reply-To: <20140606124024.GJ1849@mdounin.ru> References: <20140606124024.GJ1849@mdounin.ru> Message-ID: <714f2763c40d8e0cbd12c21ccade4bb5.NginxMailingListEnglish@forum.nginx.org> Ok thank for clarifications Max Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250669,250713#msg-250713 From francis at daoine.org Fri Jun 6 23:15:17 2014 From: francis at daoine.org (Francis Daly) Date: Sat, 7 Jun 2014 00:15:17 +0100 Subject: arch linux , swf dir and forbidden error In-Reply-To: <83bd4fa8143cb6a8fab8f0ebf6409392.NginxMailingListEnglish@forum.nginx.org> References: <83bd4fa8143cb6a8fab8f0ebf6409392.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140606231517.GD16942@daoine.org> On Fri, Jun 06, 2014 at 11:57:22AM -0400, mfaridi wrote: Hi there, > http://127.0.0.1/swf > I see this error > 403 Forbidden > and I can not play sw game > but when I type > http://127.0.0.1/swf/pacman.swf > every thing is good and I can flash game > what is problem ? What does error_log say? Most likely there is no index.html file, or autoindex (http://nginx.org/r/autoindex) is not on, is my guess. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sat Jun 7 01:07:57 2014 From: nginx-forum at nginx.us (ericmachine) Date: Fri, 06 Jun 2014 21:07:57 -0400 Subject: website with login button, redirect to intranet? Message-ID: <1aac1b8eb732bce84b8a4ea3c1640cb4.NginxMailingListEnglish@forum.nginx.org> Hi everyone, I would like to check whether this is possible with nginx (on ubuntu 12.04 LTS 64 bits). I have a website www.mywebpage.com.my this is just another website. There is a login button. When someone click on this login button, it would redirect them to https://erp.mywebpage.com.my. 
However, there are 2 scenarios will happen:- - if you are connected to our secure VPN (via OpenVPN), this redirection would be successful. - if you are not connected to the secure VPN (means the user doesn't have any access), then it will show "you are not authorised to view this page". This message should appear within www.mywebpage.com.my. FYI www.mywebpage.com.my - hosted on a public facing VPS (outside office) erp.mywebpage.com.my - hosted at my office server (once connected to the secure VPN, it is as if accessing via intranet) Is this possible and any suggestions to make this work? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250715,250715#msg-250715 From lordnynex at gmail.com Sat Jun 7 04:52:14 2014 From: lordnynex at gmail.com (Lord Nynex) Date: Fri, 6 Jun 2014 21:52:14 -0700 Subject: website with login button, redirect to intranet? In-Reply-To: <1aac1b8eb732bce84b8a4ea3c1640cb4.NginxMailingListEnglish@forum.nginx.org> References: <1aac1b8eb732bce84b8a4ea3c1640cb4.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, Assuming your VPN subnet is 10.10.1.0/24, In your server{} block on erp.mywebpage.com.my you will want to put the following. allow 10.10.1.0/24; deny all; error_page 403 = @403; location @403 { echo "You are not authorized to view this page" } On Fri, Jun 6, 2014 at 6:07 PM, ericmachine wrote: > Hi everyone, > > I would like to check whether this is possible with nginx (on ubuntu 12.04 > LTS 64 bits). > > I have a website > > www.mywebpage.com.my > > this is just another website. > > There is a login button. When someone click on this login button, it would > redirect them to https://erp.mywebpage.com.my. However, there are 2 > scenarios will happen:- > - if you are connected to our secure VPN (via OpenVPN), this redirection > would be successful. > - if you are not connected to the secure VPN (means the user doesn't have > any access), then it will show "you are not authorised to view this page". > This message should appear within www.mywebpage.com.my. > > FYI > > www.mywebpage.com.my - hosted on a public facing VPS (outside office) > > erp.mywebpage.com.my - hosted at my office server (once connected to the > secure VPN, it is as if accessing via intranet) > > Is this possible and any suggestions to make this work? > > Thanks. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,250715,250715#msg-250715 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luciano at vespaperitivo.it Sat Jun 7 08:50:05 2014 From: luciano at vespaperitivo.it (Luciano Mannucci) Date: Sat, 7 Jun 2014 10:50:05 +0200 Subject: Switching to nginx: php and rewrite rules from apache Message-ID: Hello list, I'm moving a bunch of sites from Apache to nginx 1.5.13 :). Everything went fine for the static ones and for the sites under wp or joomla. I have one site in php developped by the customer that seemed ok till I discovered that it has a subdirectory with its own .htacess file. Trying to add the rewriting rules makes the php "dead", only in the cited subdirectory. 
Here is my config: server { listen 212.45.144.216:80 default_server; server_name new.assirm.it test.assirm.it; access_log /dati/log/http/assirm/access.log; error_log /dati/log/http/assirm/error_new.log; rewrite_log on; #charset koi8-r; location ~ \.php$ { root /dati/httpd/web_assirm/sito_nginx; fastcgi_pass 127.0.0.1:9004; fastcgi_index index.php; include /etc/nginx/fastcgi.conf; } location / { root /dati/httpd/web_assirm/sito_nginx; index index.html index.htm index.php home.html welcome.html; # try_files $uri $uri/ /index.php?$args; rewrite ^/ultime-news.php$ /archivio-news.php?last=1 ; rewrite ^/(.*)_news(.*).htm$ /news.php?id=$2 ; rewrite ^/(.*)_ev(.*).htm$ /evento.php?id=$2 ; rewrite ^/(.*)_att(.*).htm$ /attivita.php?id=$2 ; rewrite ^/(.*)_k(.*).htm$ /pagina.php?k=$2 ; rewrite ^/(.*)_sk(.*).htm$ /stampa-contenuto.php?k=$2 ; rewrite ^/(.*)_sn(.*).htm$ /stampa-news.php?id=$2 ; rewrite ^/(.*)_a(.*).htm$ /associato.php?id=$2&$args ; rewrite ^/(.*)_p(.*).htm$ /mypost.php?id=$2&$args ; rewrite ^/ricerca-(.*).htm$ /risultati.php?s=$1&goo=1 ; rewrite ^/privacy.php$ /pagina.php?k=privacy ; } location ^~ /en/ { rewrite ^/en/ultime-news.php$ /en/archivio-news.php?last=1 break; rewrite ^/en/(.*)_news(.*).htm$ /en/news.php?id=$2 break; rewrite ^/en/(.*)_ev(.*).htm$ /en/evento.php?id=$2 break; rewrite ^/en/(.*)_att(.*).htm$ /en/attivita.php?id=$2 break; rewrite ^/en/(.*)_k(.*).htm$ /en/pagina.php?k=$2 break; rewrite ^/en/(.*)_sk(.*).htm$ /en/stampa-contenuto.php?k=$2 break; rewrite ^/en/(.*)_sn(.*).htm$ /en/stampa-news.php?id=$2 break; rewrite ^/en/(.*)_a(.*).htm$ /en/associato.php?id=$2&$args break; rewrite ^/en/(.*)_p(.*).htm$ /en/mypost.php?id=$2&$args break; rewrite ^/en/(.*)_wit(.*).htm$ /en/wit.php?c=$2&$args break; rewrite ^/en/ricerca-(.*).htm$ /en/risultati.php?s=$1&goo=1 break; rewrite ^/en/privacy.php$ /en/pagina.php?k=privacy break; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } What did I have wrong? Many thanks to all, Luciano. -- /"\ /Via A. Salaino, 7 - 20144 Milano (Italy) \ / ASCII RIBBON CAMPAIGN / PHONE : +39 2 485781 FAX: +39 2 48578250 X AGAINST HTML MAIL / E-MAIL: posthamster at sublink.sublink.ORG / \ AND POSTINGS / WWW: http://www.lesassaie.IT/ From francis at daoine.org Sat Jun 7 09:08:54 2014 From: francis at daoine.org (Francis Daly) Date: Sat, 7 Jun 2014 10:08:54 +0100 Subject: Switching to nginx: php and rewrite rules from apache In-Reply-To: <20140607085016.E94C83FA26A@mail.nginx.com> References: <20140607085016.E94C83FA26A@mail.nginx.com> Message-ID: <20140607090854.GE16942@daoine.org> On Sat, Jun 07, 2014 at 10:50:05AM +0200, Luciano Mannucci wrote: Hi there, > I have one site in php developped by the customer that seemed > ok till I discovered that it has a subdirectory with its own .htacess > file. Trying to add the rewriting rules makes the php "dead", only in > the cited subdirectory. You use "location" (http://nginx.org/r/location) and "rewrite" (http://nginx.org/r/rewrite) with the "break" flag. Based on the documentation, I guess that you will want to include within your "/en/" location, some configuration which tells nginx how you want php requests to be handled. > What did I have wrong? You left out some useful information: * what request do you make? * what response do you get? * what response do you want? 
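A minimal sketch of that idea (a nested php location inside the /en/ block), reusing only the fastcgi settings already shown in the posted config, so a sketch rather than a tested fix:

    location ^~ /en/ {
        root /dati/httpd/web_assirm/sito_nginx;
        # the existing /en/ rewrites would stay here
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9004;
            include      /etc/nginx/fastcgi.conf;
        }
    }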
f -- Francis Daly francis at daoine.org From luciano at vespaperitivo.it Sat Jun 7 10:08:05 2014 From: luciano at vespaperitivo.it (Luciano Mannucci) Date: Sat, 7 Jun 2014 12:08:05 +0200 Subject: Switching to nginx: php and rewrite rules from apache In-Reply-To: <20140607090854.GE16942@daoine.org> References: <20140607085016.E94C83FA26A@mail.nginx.com> <20140607090854.GE16942@daoine.org> Message-ID: On Sat, 7 Jun 2014 10:08:54 +0100 Francis Daly wrote: > On Sat, Jun 07, 2014 at 10:50:05AM +0200, Luciano Mannucci wrote: > > Hi there, > > What did I have wrong? > > You left out some useful information: > > * what request do you make? new.assirm.it/en/ > * what response do you get? The index.php page source. > * what response do you want? The php interpreted result. Many thanks again, Luciano. -- /"\ /Via A. Salaino, 7 - 20144 Milano (Italy) \ / ASCII RIBBON CAMPAIGN / PHONE : +39 2 485781 FAX: +39 2 48578250 X AGAINST HTML MAIL / E-MAIL: posthamster at sublink.sublink.ORG / \ AND POSTINGS / WWW: http://www.lesassaie.IT/ From francis at daoine.org Sat Jun 7 10:39:55 2014 From: francis at daoine.org (Francis Daly) Date: Sat, 7 Jun 2014 11:39:55 +0100 Subject: Switching to nginx: php and rewrite rules from apache In-Reply-To: <20140607100815.9D0C73FA281@mail.nginx.com> References: <20140607085016.E94C83FA26A@mail.nginx.com> <20140607090854.GE16942@daoine.org> <20140607100815.9D0C73FA281@mail.nginx.com> Message-ID: <20140607103955.GF16942@daoine.org> On Sat, Jun 07, 2014 at 12:08:05PM +0200, Luciano Mannucci wrote: > On Sat, 7 Jun 2014 10:08:54 +0100 > Francis Daly wrote: Hi there, > > * what request do you make? > new.assirm.it/en/ > > > * what response do you get? > The index.php page source. > > > * what response do you want? > The php interpreted result. So, within your location ^~ /en/ { add something like location ~ \.php$ { fastcgi_pass 127.0.0.1:9004; } so that nginx knows to handle this request the way you want it to. (If you have no other configuration, the nginx default is "serve the file from the file system".) f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sat Jun 7 14:54:21 2014 From: nginx-forum at nginx.us (ericmachine) Date: Sat, 07 Jun 2014 10:54:21 -0400 Subject: website with login button, redirect to intranet? In-Reply-To: References: Message-ID: Thanks :) I will give this a try :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250715,250722#msg-250722 From nginx-forum at nginx.us Sat Jun 7 20:59:29 2014 From: nginx-forum at nginx.us (terryr) Date: Sat, 07 Jun 2014 16:59:29 -0400 Subject: Wiki updates -- was: Can anyone tell me how to delete spam pages on the wiki? In-Reply-To: <527de04cfa2f5ced94f7d2f6a0decec2.NginxMailingListEnglish@forum.nginx.org> References: <527de04cfa2f5ced94f7d2f6a0decec2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <17e12a037781f460aebed074ef9bf02d.NginxMailingListEnglish@forum.nginx.org> I'd like to also add my thanks. Great work. I think a very visible last updated date and the "applies to" section would be great. When I first started using nginx and the wiki, I found one thing a bit confusing. Clicking on a link on http://wiki.nginx.org/Modules directs you to the corresponding page on nginx.org/en/docs. While it's great to be directed to those docs, I expected to be taken to a wiki page on the core module. I would suggest some kind of notice that these links will take you to the nginx.org/en/docs site. The obsoleted pages are difficult. 
A link like this http://wiki.nginx.org/CoreModule#error_log in a google search bypasses the notice at the top so whoever's reading has no idea that the page has been obsoleted. I'm not registerd on the wiki so I don't know what editing options are availble to offer any suggestions for dealing with this. Terry Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250601,250725#msg-250725 From nginx-forum at nginx.us Sat Jun 7 21:05:46 2014 From: nginx-forum at nginx.us (mfaridi) Date: Sat, 07 Jun 2014 17:05:46 -0400 Subject: arch linux , swf dir and forbidden error In-Reply-To: <20140606231517.GD16942@daoine.org> References: <20140606231517.GD16942@daoine.org> Message-ID: <82fc0cb61f046f525a5e80f28e01dad8.NginxMailingListEnglish@forum.nginx.org> Francis Daly Wrote: ------------------------------------------------------- > On Fri, Jun 06, 2014 at 11:57:22AM -0400, mfaridi wrote: > > Hi there, > > > http://127.0.0.1/swf > > I see this error > > 403 Forbidden > > and I can not play sw game > > but when I type > > http://127.0.0.1/swf/pacman.swf > > every thing is good and I can flash game > > what is problem ? > > What does error_log say? > > Most likely there is no index.html file, or autoindex > (http://nginx.org/r/autoindex) is not on, is my guess. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx when I type tail -f access.log I see these 127.0.0.1 - - [08/Jun/2014:02:39:03 +0430] "GET /swf/ HTTP/1.1" 403 168 "-" "Mozilla/5.0 (X11; Linux i686; rv:29.0) Gecko/20100101 Firefox/29.0" 127.0.0.1 - - [08/Jun/2014:02:39:04 +0430] "GET /favicon.ico HTTP/1.1" 404 168 "-" "Mozilla/5.0 (X11; Linux i686; rv:29.0) Gecko/20100101 Firefox/29.0" 127.0.0.1 - - [08/Jun/2014:02:40:00 +0430] "GET /swf/ HTTP/1.1" 403 168 "-" "Mozilla/5.0 (X11; Linux i686; rv:29.0) Gecko/20100101 Firefox/29.0" 127.0.0.1 - - [08/Jun/2014:02:40:27 +0430] "GET /swf/ HTTP/1.1" 403 168 "-" "Mozilla/5.0 (X11; Linux i686; rv:29.0) Gecko/20100101 Firefox/29.0" 127.0.0.1 - - [08/Jun/2014:02:41:24 +0430] "GET /swf/ HTTP/1.1" 403 168 "-" "Mozilla/5.0 (X11; Linux i686; rv:29.0) Gecko/20100101 Firefox/29.0" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250709,250726#msg-250726 From francis at daoine.org Sat Jun 7 22:17:28 2014 From: francis at daoine.org (Francis Daly) Date: Sat, 7 Jun 2014 23:17:28 +0100 Subject: arch linux , swf dir and forbidden error In-Reply-To: <82fc0cb61f046f525a5e80f28e01dad8.NginxMailingListEnglish@forum.nginx.org> References: <20140606231517.GD16942@daoine.org> <82fc0cb61f046f525a5e80f28e01dad8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140607221728.GG16942@daoine.org> On Sat, Jun 07, 2014 at 05:05:46PM -0400, mfaridi wrote: > Francis Daly Wrote: > ------------------------------------------------------- > > On Fri, Jun 06, 2014 at 11:57:22AM -0400, mfaridi wrote: Hi there, > > What does error_log say? > > > > Most likely there is no index.html file, or autoindex > > (http://nginx.org/r/autoindex) is not on, is my guess. > 127.0.0.1 - - [08/Jun/2014:02:39:03 +0430] "GET /swf/ HTTP/1.1" 403 168 "-" > "Mozilla/5.0 (X11; Linux i686; rv:29.0) Gecko/20100101 Firefox/29.0" So the request you make is "/swf/". The response you get is http 403. What response do you want to get? error_log might say why you get the 403 response. 
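If the desired response for /swf/ is a directory listing, a minimal sketch along the lines of the autoindex guess above, assuming the stock /usr/share/nginx/html root from the original post, would be:

    location /swf/ {
        # generate an automatic listing when the directory has no index file
        autoindex on;
    }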
f -- Francis Daly francis at daoine.org From agentzh at gmail.com Sat Jun 7 23:17:50 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sat, 7 Jun 2014 16:17:50 -0700 Subject: [ANN] OpenResty 1.7.0.1 released Message-ID: Hi folks! I am happy to announce the new formal release, 1.7.0.1, of the OpenResty bundle: http://openresty.org/#Download Special thanks go to all our contributors for making this happen! Below is the complete change log for this release, as compared to the last formal release, 1.5.12.1: * upgraded the Nginx core to 1.7.0. * see the changes here: * feature: bundled new Lua library, the lua-resty-lrucache library, which is also enabled by default. see https://github.com/openresty/lua-resty-lrucache#readme for more details. thanks Shuxin Yang for the help. * upgraded LuaJIT to v2.1-20140607: https://github.com/openresty/luajit2/tags * imported Mike Pall's latest bug fixes and features: * Fix frame traversal while searching for error function. * Fix FOLD rule for STRREF of SNEW. * FFI: Fix recording of indexing a struct pointer ctype object itself. * FFI: Another fix for cdata equality comparisons. * Fix FOLD rule for "string.sub(s, ...) == k". * x86: Fix code generation for unused result of "math.random()". * x64: Workaround for MSVC build issue. * PPC: Fix red zone overflow in machine code generation. * Fix compatibility issues with Illumos. Thanks to Theo Schlossnagle. * Add PS Vita port. Thanks to Anton Stenmark. * disabled trace stitching by default for now since it may trigger random lua stack corruptions when using with ngx_lua. * feature: jit.dump: output Lua source location after every BC. * feature: added internal memory-buffer-based trace entry/exit/start-recording event logging, mainly for debugging bugs in the JIT compiler. it requires "-DLUA_USE_TRACE_LOGS" when building. * feature: save "g->jit_base" to "g->saved_jit_base" before "lj_err_throw" clears "g->jit_base" which makes it impossible to get Lua backtrace in such states. * upgraded the lua-resty-core library to 0.0.7. * feature: implemented ngx.req.set_header() (partial: table-typed values not yet supported) and ngx.req.clear_header() with FFI in the resty.core.request module. * feature: implemented shdict:flush_all() with FFI in the resty.core.shdict. * feature: implemented ngx.req.set_method() with FFI in resty.core.request. * feature: implemented ngx.req.get_method() with FFI in resty.core.request. * feature: implemented ngx.time() with FFI in resty.core.time. * feature: implemented ngx.req.start_time with FFI in rest.core.request. * feature: implemented ngx.now() with FFI in resty.core.time. * upgraded the ngx_lua module to 0.9.8. * bugfix: the ngx.ctx table might be released prematurely when ngx.exit() was used to generate the response header. thanks Monkey Zhang for the report. now we always release ngx.ctx in our request pool cleanup handler. * bugfix: we did not call our coroutine cleanup handlers right after our coroutine completes (either successfully or unsuccessfully) otherwise segmentation fault might happen when the Lua VM throws out unexpected exceptions like "attempt to yield across C-call boundary". thanks Lipin Dmitriy for the report. * bugfix: nginx does not guarentee the parent pointer of the rbtree root is meaningful, which could lead to inifinite loops when the ngx_lua module tried to abort pending timers prematurely (upon worker exit). thanks pengqi for the report. * bugfix: ngx.req.set_method(): we incorrectly modified "r->method" when the method ID was wrong. 
* bugfix: rewrite_by_lua* and access_by_lua* will now terminate the current request if the response header has already been sent (via calls like ngx.say and ngx.send_headers) at that point. thanks yaronli and Sophos for the report. * bugfix: issues in the error handling for pure C API functions for shared dict. thanks Xiaochen Wang. * feature: now we save the original pattern string pointer value into our "ngx_http_lua_regex_t" C struct, to help runtime regex profiling and debugging. * feature: allow use of 3rd-party pcre bindings in init_by_lua*. thanks ikokostya for the feature request. * feature: added pure C API functions to support the new FFI-based Lua API implemented in the lua-resty-core library. * feature: make use of the new shm API in nginx 1.5.13+ to suppress the "no memory" error logging when the shared dictionaries run out of memory. * feature: added C macro "NGX_LUA_ABORT_AT_PANIC" to allow generating a core dump when the Lua VM panics. * upgraded the ngx_srcache module to 0.27. * bugfix: we used to skip all the output header and body filters run before our filters (which unfortunately bypassed the standard ngx_http_not_modified_filter_module, for example). thanks Lloyd Zhou for the report. * feature: added new config directive srcache_store_ranges for storing 206 Partial Content responses generated by the standard ngx_http_range_filter_module. * bugfix: updated the dtrace patch because systemtap 2.5 no longer accepts the "-xnolib" option in its dtrace utility. * removed our bundled version of "ngx_http_auth_request_module" because recent versions of the nginx core already have it. thanks LazyZhu for the report. * bugfix: applied our patch for the nginx core to fix the long standing memory fragmentation issue for blocks larger than the page size in the nginx slab allocator: thanks Shuxin Yang for the help. The HTML version of the change log with lots of helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1007000 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Enjoy! -agentzh From al-nginx at none.at Sat Jun 7 23:34:36 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Sun, 08 Jun 2014 01:34:36 +0200 Subject: ANN: HTTP 1.1 is now RFC 7230 to RFC 7235 Message-ID: <9f4fe7c46b19b7a17b52074280e7dbfa@none.at> Dear Readers. Today I haven seen the following post on the curl-lib mailing list which I want to share. http://curl.haxx.se/mail/lib-2014-06/0073.html ##### Zite Finally, RFC 2616 is deprecated and HTTP 1.1 is now offically instead done in the whole range of new RFCs in the subject. This revision work has taken almost seven years to complete. ##### In the new rfcs are always a section Changes from RFC 2616 and in the last one 7235 is it Changes from RFCs 2616 and 2617 Best regards Aleks From nginx-forum at nginx.us Sun Jun 8 17:50:18 2014 From: nginx-forum at nginx.us (nrahl) Date: Sun, 08 Jun 2014 13:50:18 -0400 Subject: Default host is being ignored? Message-ID: I created a default conf file: server { listen 80 default; listen 443 default; server_name ""; return 444; } and linked it in sites-enabled. 
The other server is declared with: server { listen 80; listen 443 ssl; server_name .mydomain.com; Accessing the https://xx.xx.xx.xx (the IP address) uses the mydomain.com host instead of using the default, which should reject the request. The browser is sending the host as "xx.xx.xx.xx". This should not match the mydomain.com file. So why is it it not using the right server? I've also tried leaving server name blank and adding "ssl" after 443 in the listen directive, but it still ignores the default server and uses the domain.com one instead. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250732,250732#msg-250732 From shahzaib.cb at gmail.com Mon Jun 9 06:45:22 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 9 Jun 2014 11:45:22 +0500 Subject: http_geo_module invalid option during compile !! Message-ID: Does nginx Geo module work on nginx ? I am getting the following error during compiling nginx-1.4.7 with it : ./configure --with-http_mp4_module --with-http_flv_module --with-http_geoip_module --with-http_geo_module --sbin-path=/usr/local/sbin --with-debug ./configure: error: invalid option "--with-http_geo_module" -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Jun 9 07:30:21 2014 From: francis at daoine.org (Francis Daly) Date: Mon, 9 Jun 2014 08:30:21 +0100 Subject: http_geo_module invalid option during compile !! In-Reply-To: References: Message-ID: <20140609073021.GI16942@daoine.org> On Mon, Jun 09, 2014 at 11:45:22AM +0500, shahzaib shahzaib wrote: > Does nginx Geo module work on nginx ? I am getting the following error > during compiling nginx-1.4.7 with it : > > ./configure --with-http_mp4_module --with-http_flv_module > --with-http_geoip_module --with-http_geo_module --sbin-path=/usr/local/sbin > --with-debug > ./configure: error: invalid option "--with-http_geo_module" ./configure --help | grep geo Some modules are default-excluded, and must be added with "--with-"; some are default-included, and must be removed with "--without-". f -- Francis Daly francis at daoine.org From shahzaib.cb at gmail.com Mon Jun 9 07:46:58 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 9 Jun 2014 12:46:58 +0500 Subject: http_geo_module invalid option during compile !! In-Reply-To: <20140609073021.GI16942@daoine.org> References: <20140609073021.GI16942@daoine.org> Message-ID: ./configure --help | grep geo --with-http_geoip_module enable ngx_http_geoip_module --without-http_geo_module disable ngx_http_geo_module Alright, its enabled by default. Thanks On Mon, Jun 9, 2014 at 12:30 PM, Francis Daly wrote: > On Mon, Jun 09, 2014 at 11:45:22AM +0500, shahzaib shahzaib wrote: > > Does nginx Geo module work on nginx ? I am getting the following error > > during compiling nginx-1.4.7 with it : > > > > ./configure --with-http_mp4_module --with-http_flv_module > > --with-http_geoip_module --with-http_geo_module > --sbin-path=/usr/local/sbin > > --with-debug > > ./configure: error: invalid option "--with-http_geo_module" > > ./configure --help | grep geo > > Some modules are default-excluded, and must be added with "--with-"; > some are default-included, and must be removed with "--without-". > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From om.brahmana at gmail.com Mon Jun 9 11:05:31 2014 From: om.brahmana at gmail.com (Srirang Doddihal) Date: Mon, 9 Jun 2014 16:35:31 +0530 Subject: CORS headers not being set for a 401 response from upstream. Message-ID: Hi, I am trying to setup CORS headers in my nginx config. My config is here : http://pastie.org/private/eevpeyy6uwu25rl5qsgzyq The upstream is defined separately. It responds fine to OPTIONS request. It also adds the "Access-Control-Allow-Origin *" header to responses from the upstream (pm_puma_cluster) when the response status is 200. But if the response status from the upstream is 401 it is not adding the CORS header. Is it expected? Or am I missing something in the config? P.S : The upstream is a Rails app and I would prefer to deal with the CORS headers in the nginx and not mess with them in the Rails app. -- Regards, Srirang G Doddihal Brahmana. The LIGHT shows the way. The WISE see it. The BRAVE walk it. The PERSISTENT endure and complete it. I want to do it all ALONE. -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Mon Jun 9 15:00:00 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 9 Jun 2014 17:00:00 +0200 Subject: Default host is being ignored? In-Reply-To: References: Message-ID: 1?) Ensure the config files are all included (starting with main nginx.conf files, then following all 'include' logic). The 'sites-available' logic is due to some distributions packaging and is not there in the official nginx one. Usually this is OK, but checking cannot hurt. :o) 2?) Ensure your configuration is *loaded*, ie the syntax is correct and nginx is actually applying it. - use nginx -t to check the available config does not contain errors - send SIGHUP to nginx master to reload the configuration (and check logs for messages regarding it) ... or use the probably available 'reload' command from your system management script which does both steps, knowing what you are doing 3?) Ensure the reply you see in your client is the one sent by the server (ie take care about any intermediate cache). One way of effectively doing that is monitoring the access.log file (which exists in the default nginx configuration , though commented). Another (complementary) way of doing the same is using cURL on the command-line, querying your server on the right interface If you are still experiencing difficulties, please provide details about your problem and your attempts (and their result aswell) to solve it. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Jun 9 17:53:11 2014 From: nginx-forum at nginx.us (paulg1981) Date: Mon, 09 Jun 2014 13:53:11 -0400 Subject: Proxy_Pass to another vhost on same machine Message-ID: <0d183d30598727b37cd7354f37b11f36.NginxMailingListEnglish@forum.nginx.org> Hello, I am attempting to use ca.mydomain.com with client certificate auth as a springboard for other sites on the same server. I am using client certs with my iphone (and other browsers) to skip the password auth and be more secure. The first two proxy_pass statements work fine (sickbeard and couchpotato) but the next (munin) gives the error 400 Bad Request No required SSL certificate was sent. If I put the address (https://tools.mydomain.com/munin) in my address bar it works fine? I don't understand why it is requesting the client cert for the subdomain that doesn't use client auth. The tools.mydomain.com uses basic auth. 
Secondly I want to access the tools.mydomain.com from ca.mydomain.com and not be prompted for the basic auth password. So I want to include the authorization in the proxying. Any help you all can provide would be great. I hope I explained my issue well enough! server { listen my.ip.address:80; server_name ca.mydomain.com; rewrite ^ https://$server_name$request_uri? permanent; } server { listen my.ip.address:443 ssl spdy; ssl_certificate /etc/ssl/certs/my.pem; ssl_certificate_key /etc/ssl/private/my.key; root /var/www/ca.thefamilygarrison; index index.php index.html index.htm; server_name ca.mydomain.com; pagespeed off; ssl_client_certificate /etc/nginx/clientauth/ca.crt; ssl_verify_client on; location ~ \.php$ { fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } location /sickbeard { proxy_pass http://my.ip.address:65007/sickbeard; } location /couchpotato { proxy_pass http://my.ip.address:65005/couchpotato; } location /munin { proxy_pass https://tools.mydomain.com/munin; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250747,250747#msg-250747 From rvrv7575 at yahoo.com Mon Jun 9 18:09:13 2014 From: rvrv7575 at yahoo.com (Rv Rv) Date: Tue, 10 Jun 2014 02:09:13 +0800 (SGT) Subject: Accessing the location configuration of module 2 during post configuration processing of module 1 for a particular server Message-ID: <1402337353.2224.YahooMailNeo@web193506.mail.sg3.yahoo.com> How do we access the configuration of a an unrelated module in a given module. This may be required for example to check if the directives pertaining to module 2 were specified in location for a particular server that has directives for module 1 in its configuration. From what I understand, code similar to this can be used? /* Get http main configuration */ ? ? cmcf = ctx->main_conf[ngx_http_core_module.ctx_index];? /* Get the list of servers */ ? ? cscfp = cmcf->servers.elts; /* Iterate through the list */ ? ? for (s = 0; s < cmcf->servers.nelts; s++) { ? /* Problem : how to get the configuration of module 2*/ ? ? ? ? ? ? ? ? ? cscfp[s]->ctx->loc_conf[module2.ctx_index];-------------> does not yield the correct location struct of module 2 I did not find any documentation on how the configuration is stored within nginx using these structs? typedef struct { ............. ?/* server ctx */ ngx_http_conf_ctx_t *ctx;?............ } ngx_http_core_srv_conf_t; typedef struct { ? ? void ? ? ? ?**main_conf; ? ? void ? ? ? ?**srv_conf; ? ? void ? ? ? ?**loc_conf; } ngx_http_conf_ctx_t; -------------- next part -------------- An HTML attachment was scrubbed... URL: From amikamsnir at gmail.com Mon Jun 9 18:19:34 2014 From: amikamsnir at gmail.com (Amikam Snir) Date: Mon, 9 Jun 2014 21:19:34 +0300 Subject: how to get response headers from internal-requests? Message-ID: Hi all, Is there a way to get the response headers from internal-requests? p.s. I am using ngx.location.capture(uri) for the internal requests. Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jun 10 01:30:06 2014 From: nginx-forum at nginx.us (arteomp) Date: Mon, 09 Jun 2014 21:30:06 -0400 Subject: how to get response headers from internal-requests? In-Reply-To: References: Message-ID: <6dfc8473e7791d3db65f09216884159d.NginxMailingListEnglish@forum.nginx.org> Hello, http://wiki.nginx.org/HttpLuaModule#ngx.location.capture res.header holds all the response headers of the subrequest and it is a normal Lua table. 
For multi-value response headers, the value is a Lua (array) table that holds all the values in the order that they appear. For instance, if the subrequest response headers contain the following lines: Set-Cookie: a=3 Set-Cookie: foo=bar Set-Cookie: baz=blah Then res.header["Set-Cookie"] will be evaluated to the table value {"a=3", "foo=bar", "baz=blah"}. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250749,250751#msg-250751 From nginx-forum at nginx.us Tue Jun 10 02:53:49 2014 From: nginx-forum at nginx.us (spch2008) Date: Mon, 09 Jun 2014 22:53:49 -0400 Subject: input body filer Message-ID: <926af5434a56406820126465fce99617.NginxMailingListEnglish@forum.nginx.org> Now I want to do some security testing for input body, but I find nginx doesn't have the detection mechanism for input body. anyone who can tell me why? Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250752,250752#msg-250752 From mdounin at mdounin.ru Tue Jun 10 11:33:43 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Jun 2014 15:33:43 +0400 Subject: CORS headers not being set for a 401 response from upstream. In-Reply-To: References: Message-ID: <20140610113343.GS1849@mdounin.ru> Hello! On Mon, Jun 09, 2014 at 04:35:31PM +0530, Srirang Doddihal wrote: > Hi, > > I am trying to setup CORS headers in my nginx config. My config is here : > http://pastie.org/private/eevpeyy6uwu25rl5qsgzyq > > The upstream is defined separately. > > It responds fine to OPTIONS request. It also adds the > "Access-Control-Allow-Origin *" header to responses from the upstream > (pm_puma_cluster) when the response status is 200. > > But if the response status from the upstream is 401 it is not adding the > CORS header. > > Is it expected? Or am I missing something in the config? Quote from http://nginx.org/r/add_header: "Adds the specified field to a response header provided that the response code equals 200, 201, 204, 206, 301, 302, 303, 304, or 307." -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Jun 10 12:03:48 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Jun 2014 16:03:48 +0400 Subject: Proxy_Pass to another vhost on same machine In-Reply-To: <0d183d30598727b37cd7354f37b11f36.NginxMailingListEnglish@forum.nginx.org> References: <0d183d30598727b37cd7354f37b11f36.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140610120348.GU1849@mdounin.ru> Hello! On Mon, Jun 09, 2014 at 01:53:11PM -0400, paulg1981 wrote: > Hello, > I am attempting to use ca.mydomain.com with client certificate auth as a > springboard for other sites on the same server. I am using client certs with > my iphone (and other browsers) to skip the password auth and be more secure. > The first two proxy_pass statements work fine (sickbeard and couchpotato) > but the next (munin) gives the error 400 Bad Request No required SSL > certificate was sent. If I put the address > (https://tools.mydomain.com/munin) in my address bar it works fine? I don't > understand why it is requesting the client cert for the subdomain that > doesn't use client auth. The tools.mydomain.com uses basic auth. In no particular order: - Make sure that "s" in the "https://tools..." isn't a typo and you actually mean to use encrypted connection between nginx and this backend. - Make sure the "tools.mydomain.com" https backend actually don't have client cert auth switched on. In particular, make sure it's either uses separate ip:port, or you've enabled SNI in nginx proxy (http://nginx.org/r/proxy_ssl_server_name). 
> Secondly I want to access the tools.mydomain.com from ca.mydomain.com and > not be prompted for the basic auth password. So I want to include the > authorization in the proxying. Instead of providing a password, you may consider configuring access from a fixed set of ip addresses, using the access module and "satisfy any", see http://nginx.org/r/satisfy for an example. If you want nginx to send a password, you may do so by adding the Authorization header with proxy_set_header, see http://nginx.org/r/proxy_set_header and http://tools.ietf.org/html/rfc2617#section-2. -- Maxim Dounin http://nginx.org/ From luciano at vespaperitivo.it Tue Jun 10 13:38:49 2014 From: luciano at vespaperitivo.it (Luciano Mannucci) Date: Tue, 10 Jun 2014 15:38:49 +0200 Subject: Switching to nginx: php and rewrite rules from apache In-Reply-To: <20140607103955.GF16942@daoine.org> References: <20140607085016.E94C83FA26A@mail.nginx.com> <20140607090854.GE16942@daoine.org> <20140607100815.9D0C73FA281@mail.nginx.com> <20140607103955.GF16942@daoine.org> Message-ID: On Sat, 7 Jun 2014 11:39:55 +0100 Francis Daly wrote: > add something like > > location ~ \.php$ { > fastcgi_pass 127.0.0.1:9004; > } > > so that nginx knows to handle this request the way you want it to. Works! :-) :) I had to add few other things (it was looking for index.html instead of .php) and now it seems fine. Maaaany thanks, Luciano. -- /"\ /Via A. Salaino, 7 - 20144 Milano (Italy) \ / ASCII RIBBON CAMPAIGN / PHONE : +39 2 485781 FAX: +39 2 48578250 X AGAINST HTML MAIL / E-MAIL: posthamster at sublink.sublink.ORG / \ AND POSTINGS / WWW: http://www.lesassaie.IT/ From mdounin at mdounin.ru Tue Jun 10 14:52:33 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Jun 2014 18:52:33 +0400 Subject: Accessing the location configuration of module 2 during post configuration processing of module 1 for a particular server In-Reply-To: <1402337353.2224.YahooMailNeo@web193506.mail.sg3.yahoo.com> References: <1402337353.2224.YahooMailNeo@web193506.mail.sg3.yahoo.com> Message-ID: <20140610145233.GW1849@mdounin.ru> Hello! On Tue, Jun 10, 2014 at 02:09:13AM +0800, Rv Rv wrote: > How do we access the configuration of a an unrelated module in a > given module. This may be required for example to check if the > directives pertaining to module 2 were specified in location for > a particular server that has directives for module 1 in its > configuration. I don't think it's something you should do at postconfiguration - location structure is complex and not easily accessible. There are location configuration merge callbacks where you are expected to work with location configs and, in particular, can use ngx_http_conf_get_module_loc_conf() macro to access a configuration of other modules (note though, that order of modules may be important in this case). [...] > I did not find any documentation on how the configuration is stored within nginx using these structs? It's under src/, in C language. I would rather say it's not a part of the API, and you'd better avoid using it directly. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Jun 10 14:55:19 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Jun 2014 18:55:19 +0400 Subject: input body filer In-Reply-To: <926af5434a56406820126465fce99617.NginxMailingListEnglish@forum.nginx.org> References: <926af5434a56406820126465fce99617.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140610145519.GX1849@mdounin.ru> Hello! 
On Mon, Jun 09, 2014 at 10:53:49PM -0400, spch2008 wrote: > Now I want to do some security testing for input body, but I find nginx > doesn't have the detection mechanism for input body. > anyone who can tell me why? Thanks! http://mailman.nginx.org/pipermail/nginx-devel/2013-March/003492.html -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Jun 10 15:14:35 2014 From: nginx-forum at nginx.us (khine) Date: Tue, 10 Jun 2014 11:14:35 -0400 Subject: Unable to complete secure transaction Message-ID: <30a5f6f5fa17e399b8b34f83130f0149.NginxMailingListEnglish@forum.nginx.org> Hello, I am using the following nginx.conf https://gist.github.com/nkhine/f620f8bdc0fb613b7b59 and have an EV Multi-Domain certificate from COMODO for my company. This works on Chrome and Firefox, but I am having difficulties on Safari and Opera. My server is FreeBSD 10 with ZFS, with Nginx running inside a jailed environment. Nginx is used as a front end to a Node.js application. There two issues that happen, on Safari, if i try to login using federated login, i get 'Too many redirects occurred' and on Opera i get 'Secure connection: fatal error (40) from server.' Any advise much appreciated Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250765,250765#msg-250765 From luciano at vespaperitivo.it Tue Jun 10 16:00:13 2014 From: luciano at vespaperitivo.it (Luciano Mannucci) Date: Tue, 10 Jun 2014 18:00:13 +0200 Subject: Rewrite rules from Apache again Message-ID: I'm still trying to move everything from apache to nginx. I've successfully translated some rules from the .htaccess: RewriteRule ^(.*)_k(.*)\.htm$ pagina.php?k=$2 works pretty weff if turned into: rewrite ^/(.*)_k(.*).htm$ /pagina.php?k=$2 ; while RewriteRule ^privacy.php$ pagina.php?k=privacy does not work if translated as: rewrite ^/privacy.php$ /pagina.php?k=privacy ; the request is passed unchanged and I get a 404 error for the /privacy.php page does'nt exist. What did I miss? AdvThanksAnce, Luciano. -- /"\ /Via A. Salaino, 7 - 20144 Milano (Italy) \ / ASCII RIBBON CAMPAIGN / PHONE : +39 2 485781 FAX: +39 2 48578250 X AGAINST HTML MAIL / E-MAIL: posthamster at sublink.sublink.ORG / \ AND POSTINGS / WWW: http://www.lesassaie.IT/ From nginx-forum at nginx.us Tue Jun 10 16:30:18 2014 From: nginx-forum at nginx.us (khine) Date: Tue, 10 Jun 2014 12:30:18 -0400 Subject: Unable to complete secure transaction In-Reply-To: <30a5f6f5fa17e399b8b34f83130f0149.NginxMailingListEnglish@forum.nginx.org> References: <30a5f6f5fa17e399b8b34f83130f0149.NginxMailingListEnglish@forum.nginx.org> Message-ID: the site in question is https://dev.continentalclothing.com/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250765,250767#msg-250767 From nginx-forum at nginx.us Tue Jun 10 16:31:25 2014 From: nginx-forum at nginx.us (grd2345) Date: Tue, 10 Jun 2014 12:31:25 -0400 Subject: location uri wildcard Message-ID: I recently converted my website from asp.net and have the following links out there that I would like to permanent redirect to my new url, but I seem to not be able to create this. My goal is to create one rewrite or something that can handle all of these. The old urls are as follows. 
http://www.mysite.com/(X(1)S(kt54xgv5mfifsmkgxo5eqk2m))/ClassScheduler.aspx?AspxAutoDetectCookieSupport=1 http://www.mysite.com/(X(1)S(kt54xgv5mfifsmkgxo5eqk2m))/ClassScheduler.aspx http://www.mysite.com/ClassScheduler.aspx ----------------------------------------- I basically need a wild card to detect ClassScheduler.aspx from the above old urls and redirect to http://www.mysite.com/classScheduler Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250768,250768#msg-250768 From reallfqq-nginx at yahoo.fr Tue Jun 10 17:25:29 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 10 Jun 2014 19:25:29 +0200 Subject: location uri wildcard In-Reply-To: References: Message-ID: What have your tried? What do your access logs show? What did you expect? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jun 10 17:33:52 2014 From: nginx-forum at nginx.us (grd2345) Date: Tue, 10 Jun 2014 13:33:52 -0400 Subject: location uri wildcard In-Reply-To: References: Message-ID: I havent tried anything yet. All I ever did was redirect a single domain. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250768,250770#msg-250770 From reallfqq-nginx at yahoo.fr Tue Jun 10 17:33:40 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 10 Jun 2014 19:33:40 +0200 Subject: Unable to complete secure transaction In-Reply-To: References: <30a5f6f5fa17e399b8b34f83130f0149.NginxMailingListEnglish@forum.nginx.org> Message-ID: >From the error message on Safari, one could determine the problem does not come from the certificate but from looping (internal?) redirections. Where does the loop occur? nginx or NodeJS? (ie is the request correctly forwarded or not) If nginx, what does the error log says? Same questions for Opera. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jun 10 18:49:58 2014 From: nginx-forum at nginx.us (Thaxll) Date: Tue, 10 Jun 2014 14:49:58 -0400 Subject: 400 bad requests now returning http headers? ( crossdomain.xml ) Message-ID: <4facfdcd4b4254c2bec97c6d5ed9c74b.NginxMailingListEnglish@forum.nginx.org> Hi, I'm using Nginx to serve a file called crossdomain.xml, that file is used by Flash client to allow socket crossdomain Policy. It's a trick that many people are using instead of having a dedicated app to server that file. The trick is to return that xml file when nginx get a bad request. Since a recent version ( 1.4.7+ ) it seems that a bad request replies include HTTP headers and therefore breaking the Flash client ( instead of returning only the data without headers ). Is there a way to remove those headers? Also I searched in the changelog and didn't find any hints about that change? Example: perl -e 'printf "%c",0' | nc test.com 843 HTTP/1.1 400 Bad Request Server: nginx Date: Tue, 10 Jun 2014 18:44:00 GMT Content-Type: text/xml Content-Length: 308 Connection: close ETag: "5385f727-134" Regards, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250772,250772#msg-250772 From reallfqq-nginx at yahoo.fr Tue Jun 10 19:07:21 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 10 Jun 2014 21:07:21 +0200 Subject: location uri wildcard In-Reply-To: References: Message-ID: Well, that is maybe why you have not received any help and might not receive some. No effort, no problem, no help. 
http://www.catb.org/esr/faqs/smart-questions.html#before If you wanna learn about configuring nginx, a good head start is the official documentation: http://nginx.org/en/docs --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jun 10 19:13:48 2014 From: nginx-forum at nginx.us (grd2345) Date: Tue, 10 Jun 2014 15:13:48 -0400 Subject: location uri wildcard In-Reply-To: References: Message-ID: Ah, gotcha, I am new to this and thanks for the link to this documentation. I will close this thread. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250768,250774#msg-250774 From mdounin at mdounin.ru Tue Jun 10 19:36:24 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Jun 2014 23:36:24 +0400 Subject: 400 bad requests now returning http headers? ( crossdomain.xml ) In-Reply-To: <4facfdcd4b4254c2bec97c6d5ed9c74b.NginxMailingListEnglish@forum.nginx.org> References: <4facfdcd4b4254c2bec97c6d5ed9c74b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140610193624.GA1849@mdounin.ru> Hello! On Tue, Jun 10, 2014 at 02:49:58PM -0400, Thaxll wrote: > Hi, > > I'm using Nginx to serve a file called crossdomain.xml, that file is used by > Flash client to allow socket crossdomain Policy. It's a trick that many > people are using instead of having a dedicated app to server that file. The > trick is to return that xml file when nginx get a bad request. Since a > recent version ( 1.4.7+ ) it seems that a bad request replies include HTTP > headers and therefore breaking the Flash client ( instead of returning only > the data without headers ). Is there a way to remove those headers? Also I > searched in the changelog and didn't find any hints about that change? Quote from http://nginx.org/en/CHANGES, changes with nginx 1.5.5: *) Change: now nginx assumes HTTP/1.0 by default if it is not able to detect protocol reliably. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Jun 10 20:07:28 2014 From: nginx-forum at nginx.us (Thaxll) Date: Tue, 10 Jun 2014 16:07:28 -0400 Subject: 400 bad requests now returning http headers? ( crossdomain.xml ) In-Reply-To: <20140610193624.GA1849@mdounin.ru> References: <20140610193624.GA1849@mdounin.ru> Message-ID: Hi Maxim, Thank you for the quick reply, I guess there is no workaround for that problem? It isn't possible to remove headers or specify a dummy protocol for Nginx? Thank you, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250772,250776#msg-250776 From francis at daoine.org Tue Jun 10 21:43:05 2014 From: francis at daoine.org (Francis Daly) Date: Tue, 10 Jun 2014 22:43:05 +0100 Subject: Rewrite rules from Apache again In-Reply-To: <20140610160019.670733F9F4A@mail.nginx.com> References: <20140610160019.670733F9F4A@mail.nginx.com> Message-ID: <20140610214305.GL16942@daoine.org> On Tue, Jun 10, 2014 at 06:00:13PM +0200, Luciano Mannucci wrote: Hi there, > works pretty weff if turned into: > > rewrite ^/(.*)_k(.*).htm$ /pagina.php?k=$2 ; > does not work if translated as: > > rewrite ^/privacy.php$ /pagina.php?k=privacy ; > > the request is passed unchanged and I get a 404 error for the > /privacy.php page does'nt exist. > > What did I miss? The rest of the config? == server { rewrite ^/(.*)_k(.*).htm$ /pagina.php?k=$2 ; rewrite ^/privacy.php$ /pagina.php?k=privacy ; location = /pagina.php { return 200 "I got $uri$is_args$args from $request_uri\n"; } } == seems to work as expected for me, for requests like /privacy.php and /a_k_b.htm?key=value. 
What do you have that is different? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Jun 11 03:13:07 2014 From: nginx-forum at nginx.us (xinster8192) Date: Tue, 10 Jun 2014 23:13:07 -0400 Subject: nginx gunzip memcached data , xss_callback failed Message-ID: Hi, I use nginx to read mecached data and write to memcached by java XmemcachedClient . Xmemcached compress data that larger than 256kB before write . So nginx i use ngx_gunzip_module and memcached_gzip_flag to unzip data from mecached to be sure unzip is Ok . the problem is xss_callback_arg is failed . when i delete memcached_gzip_flag , the xss_callback_arg is Ok ,but the result is conrupted . Is there good way to gunzip the memcached data first , then xss_callback_arg second . this is part of nginx.conf location ^~ /api/2/topic/nodejs { set_unescape_uri $topicurl $arg_topicurl; set_unescape_uri $topicurl $topicurl; set_md5 $md5_key $arg_client_id$topicurl$arg_topicsid; set $memcached_key '/topic/nodejs?$md5_key'; memcached_gzip_flag 2; memcached_pass memcached-cluster; error_page 404 501 502 = @miss; xss_get on; xss_callback_arg 'callback'; xss_input_types 'application/json text/plain'; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250779,250779#msg-250779 From es12b1001 at iith.ac.in Wed Jun 11 06:56:03 2014 From: es12b1001 at iith.ac.in (Adarsh Pugalia) Date: Wed, 11 Jun 2014 12:26:03 +0530 Subject: Life of objects allocated using the request pool? Message-ID: What is the life of objects allocated using the request pool? If I allocate memory from the r->pool in a request handler, what would be the life of the object? Will the objects be freed if the request is over of will it sustain over multiple requests? -------------- next part -------------- An HTML attachment was scrubbed... URL: From kunalvjti at gmail.com Wed Jun 11 07:46:41 2014 From: kunalvjti at gmail.com (Kunal Pariani) Date: Wed, 11 Jun 2014 00:46:41 -0700 Subject: proxy_pass to different upstreams based on a cookie in the http request header Message-ID: Hello, Am wondering if there is a way to proxy (i.e proxy_pass inside location directive) to different set of upstreams based on whether a particular cookie is present or not in a http request header. Thanks -Kunal -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Wed Jun 11 14:00:48 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 11 Jun 2014 16:00:48 +0200 Subject: location uri wildcard In-Reply-To: References: Message-ID: Keep in mind this ML (and other places such as StackOverfflow or whatever fora are still open to help you if you are struggling on anything. People are usually glad to help others who demonstrated efforts in thinking about it and who provided details about their approach of the problem. That is, based on the docs, try to make a solution on your own. Once you hit the wall, and if all the docs you could find (nginx docs, Web search are a start) do not help you further, please come back with details of the process you followed. Doors might open this time. :o) --- *B. R.* On Tue, Jun 10, 2014 at 9:13 PM, grd2345 wrote: > Ah, gotcha, I am new to this and thanks for the link to this documentation. > I will close this thread. 
> > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,250768,250774#msg-250774 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jun 11 14:58:47 2014 From: nginx-forum at nginx.us (Tatonka) Date: Wed, 11 Jun 2014 10:58:47 -0400 Subject: tmp directory filling up Message-ID: <076e6438b60e67a46bf809df89178bcf.NginxMailingListEnglish@forum.nginx.org> Hi, I have a rails application that is hosted through nginx and passenger. In this application I want provide very large files for the users to download (>2GB) using send_file .. which is working just fine on the development and staging system. On the production system however the system tmp directory is limited to 1GB (separately mounted disk). When triggering a download, the tmp folder quickly fills up and the download breaks once it is completely full. I already moved passengers /tmp directory to a new location but could find how to do the same for nginx (I did set $tmp and $tmpdir with no effect). When looking into the /tmp directory however, I cannot find any large files that would explain what is happening, nevertheless, df reports it is filling up at the same time .. Lastly .. I also specified the proxy_temp_path directive in the nginx config. Again with no effect. Is there any way to specify which directory nginx uses for its tmp data? Is nginx even the culprit here? Thanks .. any help is greatly appreciated. Thanks Tim Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250795,250795#msg-250795 From nginx-forum at nginx.us Wed Jun 11 15:06:39 2014 From: nginx-forum at nginx.us (grd2345) Date: Wed, 11 Jun 2014 11:06:39 -0400 Subject: location uri wildcard In-Reply-To: References: Message-ID: <61674a2573756842a8ca5ee9f56e3c74.NginxMailingListEnglish@forum.nginx.org> I actually was looking around on google for the solution, but I lazied out and came here, but I ended up finding the solution for this, buy getting myself familiar with regex. I used the following command to do this to maybe help someone else needing this. location ~* "\bClassScheduler.aspx\b" { rewrite ^ http://www.mysite.com/classScheduler/? permanent; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250768,250796#msg-250796 From luciano at vespaperitivo.it Wed Jun 11 16:20:50 2014 From: luciano at vespaperitivo.it (Luciano Mannucci) Date: Wed, 11 Jun 2014 18:20:50 +0200 Subject: Rewrite rules from Apache again In-Reply-To: <20140610214305.GL16942@daoine.org> References: <20140610160019.670733F9F4A@mail.nginx.com> <20140610214305.GL16942@daoine.org> Message-ID: On Tue, 10 Jun 2014 22:43:05 +0100 Francis Daly wrote: > The rest of the config? :) Well, I've posted it in my previous request for help. Beeing longish I tried to spare some bandwith... :) > > == > server { > rewrite ^/(.*)_k(.*).htm$ /pagina.php?k=$2 ; > rewrite ^/privacy.php$ /pagina.php?k=privacy ; Wow! I had it under "location /" Moving it to "server" level and adding a "break" seems to make it work! > location = /pagina.php { > return 200 "I got $uri$is_args$args from $request_uri\n"; > } Many thanks for this elegant way of debugging this kind of configuration problems. > What do you have that is different? Another problem :( If I try the same thing in a subdirectory, it doesn't work. 
In the error log I get: 2014/06/11 17:51:46 [error] 602#0: *264 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 212.121.88.183, server: new.assirm.it, request: "GET /en/privacy.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9004", host: "new.assirm.it", referrer: "http://new.assirm.it/en/history_khistory.htm" It seems that the "location =" that I've put to intercept the rewrite doesn't match. My configuration, now looks like this: server { listen 212.45.144.216:80 default_server; server_name new.assirm.it test.assirm.it; access_log /dati/log/http/assirm/access.log; error_log /dati/log/http/assirm/error_new.log; rewrite_log on; rewrite ^/ultime-news.php$ /archivio-news.php?last=1 ; rewrite ^/(.*)_news(.*).htm$ /news.php?id=$2 ; rewrite ^/(.*)_ev(.*).htm$ /evento.php?id=$2 ; rewrite ^/(.*)_att(.*).htm$ /attivita.php?id=$2 ; rewrite ^/(.*)_k(.*).htm$ /pagina.php?k=$2 ; rewrite ^/(.*)_sk(.*).htm$ /stampa-contenuto.php?k=$2 ; rewrite ^/(.*)_sn(.*).htm$ /stampa-news.php?id=$2 ; rewrite ^/(.*)_a(.*).htm$ /associato.php?id=$2&$args ; rewrite ^/(.*)_p(.*).htm$ /mypost.php?id=$2&$args ; rewrite ^/ricerca-(.*).htm$ /risultati.php?s=$1&goo=1 ; rewrite ^/privacy.php$ /pagina.php?k=privacy break; location ~ \.php$ { root /dati/httpd/web_assirm/sito_nginx; fastcgi_pass 127.0.0.1:9004; fastcgi_index index.php; include /etc/nginx/fastcgi.conf; } location / { root /dati/httpd/web_assirm/sito_nginx; index index.html index.htm index.php home.html welcome.html; } location ^~ /en/ { root /dati/httpd/web_assirm/sito_nginx; index index.html index.htm index.php home.html; rewrite ^/en/ultime-news.php$ /en/archivio-news.php?last=1 ; rewrite ^/en/(.*)_news(.*).htm$ /en/news.php?id=$2 ; rewrite ^/en/(.*)_ev(.*).htm$ /en/evento.php?id=$2 ; rewrite ^/en/(.*)_att(.*).htm$ /en/attivita.php?id=$2 ; rewrite ^/en/(.*)_k(.*).htm$ /en/pagina.php?k=$2 ; rewrite ^/en/(.*)_sk(.*).htm$ /en/stampa-contenuto.php?k=$2 ; rewrite ^/en/(.*)_sn(.*).htm$ /en/stampa-news.php?id=$2 ; rewrite ^/en/(.*)_a(.*).htm$ /en/associato.php?id=$2&$args ; rewrite ^/en/(.*)_p(.*).htm$ /en/mypost.php?id=$2&$args ; rewrite ^/en/(.*)_wit(.*).htm$ /en/wit.php?c=$2&$args ; rewrite ^/en/ricerca-(.*).htm$ /en/risultati.php?s=$1&goo=1 ; rewrite ^/en/privacy.php /en/pagina.php?k=privacy ; location = /en/pagina.php { return 200 "I got $uri$is_args$args from $request_uri\n"; } location ~ \.php$ { index index.html index.htm index.php home.html; fastcgi_pass 127.0.0.1:9004; fastcgi_index index.php; include /etc/nginx/fastcgi.conf; } } } -- /"\ /Via A. Salaino, 7 - 20144 Milano (Italy) \ / ASCII RIBBON CAMPAIGN / PHONE : +39 2 485781 FAX: +39 2 48578250 X AGAINST HTML MAIL / E-MAIL: posthamster at sublink.sublink.ORG / \ AND POSTINGS / WWW: http://www.lesassaie.IT/ From francis at daoine.org Wed Jun 11 17:05:50 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 11 Jun 2014 18:05:50 +0100 Subject: Rewrite rules from Apache again In-Reply-To: <20140611162101.B48E13F9FD7@mail.nginx.com> References: <20140610160019.670733F9F4A@mail.nginx.com> <20140610214305.GL16942@daoine.org> <20140611162101.B48E13F9FD7@mail.nginx.com> Message-ID: <20140611170550.GN16942@daoine.org> On Wed, Jun 11, 2014 at 06:20:50PM +0200, Luciano Mannucci wrote: > On Tue, 10 Jun 2014 22:43:05 +0100 > Francis Daly wrote: Hi there, > > The rest of the config? > :) > Well, I've posted it in my previous request for help. Beeing > longish I tried to spare some bandwith... :) No worries. 
It can be useful to have a minimal test case that shows the problem. > I had it under "location /" > Moving it to "server" level and adding a "break" seems to make it work! Very approximately, the order is:

choose the server{}
run the rewrite directives
choose the location{}
run the rewrite directives, looping back as necessary
handle the request

So your non-server-level rewrites will only apply if they are in the location{} that is chosen.

> > location = /pagina.php {
> > return 200 "I got $uri$is_args$args from $request_uri\n";
> > }
> Many thanks for this elegant way of debugging this kind of configuration problems.

You're welcome. You may also find it useful to enable debug logging for your test client, such as by putting something like debug_connection 127.0.0.10; within the events{} block, and then looking in error_log. > If I try the same thing in a subdirectory, it doesn't work. Put the rewrites at server{} level, or in the location{} that is chosen. > It seems that the "location =" that I've put to intercept the rewrite > doesn't match. No. The rewrite that you want doesn't happen, because the request /en/privacy.php is handled in:

> location ~ \.php$ {
> location / {
> location ^~ /en/ {
> location = /en/pagina.php {
> location ~ \.php$ {

...that location, and not in the one two above it. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Jun 11 17:10:08 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 11 Jun 2014 18:10:08 +0100 Subject: proxy_pass to different upstreams based on a cookie in the http request header In-Reply-To: References: Message-ID: <20140611171008.GO16942@daoine.org> On Wed, Jun 11, 2014 at 12:46:41AM -0700, Kunal Pariani wrote: Hi there, > Am wondering if there is a way to proxy (i.e proxy_pass inside location > directive) to different set of upstreams based on whether a particular > cookie is present or not in a http request header. You can use a map (http://nginx.org/r/map) to set a variable based on a cookie (http://nginx.org/en/docs/http/ngx_http_core_module.html#variables), and you can use a variable in your proxy_pass directive (http://nginx.org/r/proxy_pass). So it looks like it should Just Work. f -- Francis Daly francis at daoine.org From contact at jpluscplusm.com Wed Jun 11 17:10:10 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 11 Jun 2014 18:10:10 +0100 Subject: location uri wildcard In-Reply-To: References: Message-ID: On 10 June 2014 17:31, grd2345 wrote: > http://www.mysite.com/ClassScheduler.aspx [snip] > I basically need a wild card to detect ClassScheduler.aspx from the above > old urls This assumption looks wrong. Check out how location stanzas work: http://nginx.org/r/location Hint: locations in their simplest state just match path *prefixes* ... Also check the first parameter that a "rewrite" takes ... consider if you could use this functionality to avoid specifying a location /entirely/: http://nginx.org/r/rewrite J From mdounin at mdounin.ru Wed Jun 11 19:23:30 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 11 Jun 2014 23:23:30 +0400 Subject: tmp directory filling up In-Reply-To: <076e6438b60e67a46bf809df89178bcf.NginxMailingListEnglish@forum.nginx.org> References: <076e6438b60e67a46bf809df89178bcf.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140611192330.GC1849@mdounin.ru> Hello! On Wed, Jun 11, 2014 at 10:58:47AM -0400, Tatonka wrote: > Hi, > > I have a rails application that is hosted through nginx and passenger.
In > this application I want provide very large files for the users to download > (>2GB) using send_file .. which is working just fine on the development and > staging system. On the production system however the system tmp directory is > limited to 1GB (separately mounted disk). > > When triggering a download, the tmp folder quickly fills up and the download > breaks once it is completely full. I already moved passengers /tmp directory > to a new location but could find how to do the same for nginx (I did set > $tmp and $tmpdir with no effect). > > When looking into the /tmp directory however, I cannot find any large files > that would explain what is happening, nevertheless, df reports it is filling > up at the same time .. > > Lastly .. I also specified the proxy_temp_path directive in the nginx > config. Again with no effect. The proxy_temp_path is related to the problem, but it's for proxy, not for passenger, and it's expected that it has no effect in your case. > Is there any way to specify which directory nginx uses for its tmp data? Is > nginx even the culprit here? That's not about nginx, but rather about passenger module for nginx. Last time I checked, passenger module for nginx implemented its own protocol for the upstream module (like proxy/fastcgi/etc), and should have its own "..._temp_path" directive, as well as "..._max_temp_file_size" and so on. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Jun 11 20:21:29 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 12 Jun 2014 00:21:29 +0400 Subject: 400 bad requests now returning http headers? ( crossdomain.xml ) In-Reply-To: References: <20140610193624.GA1849@mdounin.ru> Message-ID: <20140611202129.GF1849@mdounin.ru> Hello! On Tue, Jun 10, 2014 at 04:07:28PM -0400, Thaxll wrote: > Hi Maxim, > > Thank you for the quick reply, I guess there is no workaround for that > problem? It isn't possible to remove headers or specify a dummy protocol for > Nginx? I don't think there is anything that can be done at the configuration level. On the other hand, it should be more or less trivial to write a module to force nginx to think the protocol was HTTP/0.9 and to respond accordingly. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Jun 11 22:07:12 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 12 Jun 2014 02:07:12 +0400 Subject: Life of objects allocated using the request pool? In-Reply-To: References: Message-ID: <20140611220712.GH1849@mdounin.ru> Hello! On Wed, Jun 11, 2014 at 12:26:03PM +0530, Adarsh Pugalia wrote: > What is the life of objects allocated using the request pool? If I allocate > memory from the r->pool in a request handler, what would be the life of the > object? Will the objects be freed if the request is over of will it sustain > over multiple requests? The request pool is destroyed with the request, and no objects allocated from the pool can be used after this. That's actually the whole point: allocations from a pool don't need to be freed individually, it's enough to destroy the pool itself. 
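To make this concrete, here is a minimal sketch (not from this thread; the module and type names are invented for illustration, and the module object ngx_http_example_module is assumed to be declared elsewhere in the usual way): a handler allocates its per-request state from r->pool and never frees it explicitly, because the allocation lives exactly as long as the request:

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

/* hypothetical per-request state for an illustrative module */
typedef struct {
    ngx_str_t  token;
} example_ctx_t;

static ngx_int_t
example_handler(ngx_http_request_t *r)
{
    example_ctx_t  *ctx;

    ctx = ngx_http_get_module_ctx(r, ngx_http_example_module);

    if (ctx == NULL) {
        /* taken from the request pool: released automatically when the
           request is finalized and its pool is destroyed */
        ctx = ngx_pcalloc(r->pool, sizeof(example_ctx_t));
        if (ctx == NULL) {
            return NGX_HTTP_INTERNAL_SERVER_ERROR;
        }

        ngx_http_set_ctx(r, ctx, ngx_http_example_module);
    }

    return NGX_DECLINED;
}

Anything that has to survive beyond a single request (state shared between requests, for instance) must instead come from a longer-lived pool, such as the configuration pool or a pool the module creates and destroys itself.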
-- Maxim Dounin http://nginx.org/ From rvrv7575 at yahoo.com Thu Jun 12 05:46:58 2014 From: rvrv7575 at yahoo.com (Rv Rv) Date: Thu, 12 Jun 2014 13:46:58 +0800 (SGT) Subject: Accessing the location configuration of module 2 during post configuration processing of module 1 for a particular server In-Reply-To: <1402337353.2224.YahooMailNeo@web193506.mail.sg3.yahoo.com> References: <1402337353.2224.YahooMailNeo@web193506.mail.sg3.yahoo.com> Message-ID: <1402552018.84107.YahooMailNeo@web193505.mail.sg3.yahoo.com> Hello Maxim Thanks for your response. Here is a related query. Say in module 1 I have a ?typedef struct ?{ int flag; ngx_str somestring; }?module1; flag gets initialized with the following code? ? ? { ngx_string("module1_directive"), ? ? ? NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ? ? ? ngx_conf_set_flag_slot, ? ? ? NGX_HTTP_LOC_CONF_OFFSET, ? ? ? offsetof(module1,configured), ? ? ? NULL }, somestring gets iniitialized with a handler written in the module (i.e ?not ngx_conf_set_flag_slot or any inbuilt handler). In the post configuration, I see that flag is not properly set but somestring is. Flag is properly set during request processing though. Are the values set during processing of a directive in location struct guaranteed to be set by the time post configuration is executed? When is the time that one can check for the values set during configuration. I need to test these values to ensure that they are sane when nginx is executed with -t option Thanks Hello! On Tue, Jun 10, 2014 at 02:09:13AM +0800, Rv Rv wrote: > How do we access the configuration of a an unrelated module in a? > given module. This may be required for example to check if the? > directives pertaining to module 2 were specified in location for? > a particular server that has directives for module 1 in its? > configuration. I don't think it's something you should do at postconfiguration -? location structure is complex and not easily accessible.? There? are location configuration merge callbacks where you are expected? to work with location configs and, in particular, can use? ngx_http_conf_get_module_loc_conf() macro to access a? configuration of other modules (note though, that order of modules? may be important in this case). [...] > I did not find any documentation on how the configuration is stored within nginx using these structs? It's under src/, in C language. I would rather say it's not a part of the API, and you'd better? avoid using it directly. --? Maxim Dounin http://nginx.org/ ------------------------------ On Monday, 9 June 2014 11:39 PM, Rv Rv wrote: How do we access the configuration of a an unrelated module in a given module. This may be required for example to check if the directives pertaining to module 2 were specified in location for a particular server that has directives for module 1 in its configuration. From what I understand, code similar to this can be used? /* Get http main configuration */ ? ? cmcf = ctx->main_conf[ngx_http_core_module.ctx_index];? /* Get the list of servers */ ? ? cscfp = cmcf->servers.elts; /* Iterate through the list */ ? ? for (s = 0; s < cmcf->servers.nelts; s++) { ? /* Problem : how to get the configuration of module 2*/ ? ? ? ? ? ? ? ? ? cscfp[s]->ctx->loc_conf[module2.ctx_index];-------------> does not yield the correct location struct of module 2 I did not find any documentation on how the configuration is stored within nginx using these structs? typedef struct { ............. ?/* server ctx */ ngx_http_conf_ctx_t *ctx;?............ 
} ngx_http_core_srv_conf_t; typedef struct { ? ? void ? ? ? ?**main_conf; ? ? void ? ? ? ?**srv_conf; ? ? void ? ? ? ?**loc_conf; } ngx_http_conf_ctx_t; -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jun 12 06:57:35 2014 From: nginx-forum at nginx.us (Tatonka) Date: Thu, 12 Jun 2014 02:57:35 -0400 Subject: tmp directory filling up In-Reply-To: <20140611192330.GC1849@mdounin.ru> References: <20140611192330.GC1849@mdounin.ru> Message-ID: <38ca2a8ab941e12bf0b0209c155dc2d0.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, thanks for your answer. I tried redirecting passengers tmp dir as well using the tmp_dir directive as well as using the env variables. For the "regular" passenger tmp files, this seems to work fine (they appear in the new location). My main problem is that I can't even see the file that is using up so much space in the /tmp dir and am quite frankly at a loss how that is even possible. Anyways. Thanks for your help. I'll try to dig into the passenger config a little more. Tim Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250795,250813#msg-250813 From kunalvjti at gmail.com Thu Jun 12 07:00:47 2014 From: kunalvjti at gmail.com (Kunal Pariani) Date: Thu, 12 Jun 2014 00:00:47 -0700 Subject: proxy_pass to different upstreams based on a cookie in the http request header In-Reply-To: <20140611171008.GO16942@daoine.org> References: <20140611171008.GO16942@daoine.org> Message-ID: Thanks for your answer. Worked great for me.. On Wed, Jun 11, 2014 at 10:10 AM, Francis Daly wrote: > On Wed, Jun 11, 2014 at 12:46:41AM -0700, Kunal Pariani wrote: > > Hi there, > > > Am wondering if there is a way to proxy (i.e proxy_pass inside location > > directive) to different set of upstreams based on whether a particular > > cookie is present or not in a http request header. > > You can use a map (http://nginx.org/r/map) > to set a variable based on a cookie > (http://nginx.org/en/docs/http/ngx_http_core_module.html#variables), > and you can use a variable in your proxy_pass directive > (http://nginx.org/r/proxy_pass). > > So it looks like it should Just Work. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xmirya at gmail.com Thu Jun 12 12:14:14 2014 From: xmirya at gmail.com (m irya) Date: Thu, 12 Jun 2014 15:14:14 +0300 Subject: proxy_redirect vs. relative URLs from upstream Message-ID: Hi, I'm trying to create a sinle proxying configuration as following: location = /proxy { resolver 8.8.8.8; proxy_pass $args; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; proxy_redirect "" $uri?; } e.g. http://example.com/proxy?http://example2.com/ would deliver the contents/headers of http://example2.com/ . If http://example2.com/ replies with Location: http://example3.com/ , the client gets Location: http://example.com/proxy?http://example3.com/ , so it works as expected. However, i've got example2.com replying with relative Location, e.g.: Location: /some/path This results in client getting: Location: http://example.com/proxy?/some/path , something that won't work. Is there any way to make it work for client to get Location: http://example.com/proxy?http://example2.com/some/path ? 
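One untested idea, purely as a sketch (the regular expressions below are assumptions and may need tuning, e.g. when the URL in $args already ends in a slash): proxy_redirect also accepts regular expressions and variables, so a second rule could prefix relative Location values with the origin that was passed in $args, while a first rule keeps the existing behaviour for absolute ones:

location = /proxy {
    resolver 8.8.8.8;
    proxy_pass $args;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;

    # absolute redirects: wrap them back into /proxy?... as before
    proxy_redirect ~^(https?://.+)$ $scheme://$host/proxy?$1;

    # relative redirects: prepend the upstream origin taken from $args
    proxy_redirect ~^(/.*)$ $scheme://$host/proxy?$args$1;
}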
-------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jun 12 12:51:32 2014 From: nginx-forum at nginx.us (Khmelevsky) Date: Thu, 12 Jun 2014 08:51:32 -0400 Subject: Nginx location for not exists php files Message-ID: I have config(minimal): server { listen test.local:80; server_name test.local; server_name_in_redirect off; location / { root /data/www/test/public; try_files $uri $uri/ /index.php?route=$uri&$args; index index.html index.php; } location ~ \.php$ { root /data/www/test/public; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /data/www/test/public$fastcgi_script_name; fastcgi_param ... ... } } Thats work file for urls /help/ or /contacts/ etc(all redirect to index.php with get variabled). But if url, for example, /help.php or contacts.php, and this files not exists, i have output File not found. How update my nginx config? I need urls, for example: /help.php => /index.php?route=/help.php /contacts.php?foo=bar... => /index.php?route=/contacts.php&args=... Thanks a lot! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250818,250818#msg-250818 From luciano at vespaperitivo.it Thu Jun 12 13:24:45 2014 From: luciano at vespaperitivo.it (Luciano Mannucci) Date: Thu, 12 Jun 2014 15:24:45 +0200 Subject: Rewrite rules from Apache again In-Reply-To: <20140611170550.GN16942@daoine.org> References: <20140610160019.670733F9F4A@mail.nginx.com> <20140610214305.GL16942@daoine.org> <20140611162101.B48E13F9FD7@mail.nginx.com> <20140611170550.GN16942@daoine.org> Message-ID: On Wed, 11 Jun 2014 18:05:50 +0100 Francis Daly wrote: On Wed, 11 Jun 2014 18:05:50 +0100 Francis Daly wrote: > The rewrite that you want doesn't happen, because the request > /en/privacy.php is handled in: > > > location ~ \.php$ { > > location / { > > location ^~ /en/ { > > location = /en/pagina.php { > > location ~ \.php$ { > > ...that location, and not in the one two above it. > > Good luck with it, Solved! Moving all the rewrite rules outside the "location" to the "server" section makes them work as expected. The regexp takes care of the different rewritings to be done. Maaany thanks again, Luciano. -- /"\ /Via A. Salaino, 7 - 20144 Milano (Italy) \ / ASCII RIBBON CAMPAIGN / PHONE : +39 2 485781 FAX: +39 2 48578250 X AGAINST HTML MAIL / E-MAIL: posthamster at sublink.sublink.ORG / \ AND POSTINGS / WWW: http://www.lesassaie.IT/ From nginx-forum at nginx.us Thu Jun 12 16:36:48 2014 From: nginx-forum at nginx.us (Keyur) Date: Thu, 12 Jun 2014 12:36:48 -0400 Subject: GeoIP FirstNonPrivateXForwardedForIP Message-ID: Hi, My website does country based redirection based on result obtained from GeoIP against IP. I am facing a problem where GeoIP does not work as first IP in the X-Forwarded-For has Private network address. (Say 192.168.1.1) I know GeoIP on private network would fail but the X-Forwarded-For also has the public IP along with Private IP. Eg : 192.168.1.1, 115.97.213.63 - - [Timezone] ...... In some cases where multiple proxies are involved it would show : 192.168.1.1, 115.97.213.63, 115.97.213.12 - - I want GeoIP should be done on the first non private ip. I could achieve this in apache using GeoIP module directive called "FirstNonPrivateXForwardedForIP" How to do this in nginx ? 
Kindly suggest Regards, Keyur Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250823,250823#msg-250823 From luky-37 at hotmail.com Thu Jun 12 17:10:20 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 12 Jun 2014 19:10:20 +0200 Subject: GeoIP FirstNonPrivateXForwardedForIP In-Reply-To: References: Message-ID: > Hi, > > My website does country based redirection based on result obtained from > GeoIP against IP. > > I am facing a problem where GeoIP does not work as first IP in the > X-Forwarded-For has Private network address. (Say 192.168.1.1) > > I know GeoIP on private network would fail but the X-Forwarded-For also has > the public IP along with Private IP. > > Eg : 192.168.1.1, 115.97.213.63 - - [Timezone] ...... > > In some cases where multiple proxies are involved it would show : > > 192.168.1.1, 115.97.213.63, 115.97.213.12 - - > > I want GeoIP should be done on the first non private ip. I could achieve > this in apache using GeoIP module directive called > "FirstNonPrivateXForwardedForIP" > > How to do this in nginx ? > > Kindly suggest http://nginx.org/en/docs/http/ngx_http_geoip_module.html#geoip_proxy Lukas From francis at daoine.org Thu Jun 12 18:50:27 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 12 Jun 2014 19:50:27 +0100 Subject: Nginx location for not exists php files In-Reply-To: References: Message-ID: <20140612185027.GP16942@daoine.org> On Thu, Jun 12, 2014 at 08:51:32AM -0400, Khmelevsky wrote: Hi there, > But if url, for example, /help.php or contacts.php, and this files not > exists, i have output > > File not found. > How update my nginx config? I need urls, for example: Untested, but: put try_files $uri /index.php?route=$uri&$args; inside the php location, and see if that changes anything. f -- Francis Daly francis at daoine.org From lists at ruby-forum.com Fri Jun 13 10:08:14 2014 From: lists at ruby-forum.com (Chong Flechsig) Date: Fri, 13 Jun 2014 12:08:14 +0200 Subject: buy beautiful oil paintings at low price In-Reply-To: References: Message-ID: Leonardo Da Vinci Last Supper Paintings for sale -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Fri Jun 13 11:37:56 2014 From: nginx-forum at nginx.us (MrDaniel) Date: Fri, 13 Jun 2014 07:37:56 -0400 Subject: Rtmp Module and LibRtmp Message-ID: Hello. I am seeking advice and suggestions for this issue with nginx and the rtmp module I have been working with RTMP. Previously i have worked with cRtmpServer, however nginx seems better for extending the service later on, further down the line. I am using nginx from this source, which works with my flash player. http://nginx-win.ecsds.eu/ It seem Nginx and libRtmp can work together. http://stackoverflow.com/questions/23629913/capture-camera-and-publish-video-with-librtmp Nginx config, this works with my flash test player for both applications. http://pastebin.com/j2eVWzEg I wish to connect with librtmp, and i am using the setopt functions so that librtmp has a url, port, app, playpath. Currently, my code connects and also connects to the stream. However, only about 9bytes are returned when reading a packet. Here is my libRtmp C++ code. https://groups.google.com/forum/#!topic/c-rtmp-server/HMYfrrs0Bus LibRTMP page with param list. http://rtmpdump.mplayerhq.hu/librtmp.3.html So that's the first issue. Connecting properly, so that we can read a file. Now, for the next part... recording. The solution to the first part, may result in both working. I would also like to record to the server with librtmp. 
The first link, above, shows how to record using libRtmp to nginx when the params are correct for connecting. How do i configure the server to publish a live stream, so that many clients can access it?? Ideally, we don't want to record video to the server just transport it. Any help moving forward would be good. Suggestions, advice, anything that anyone can recommend trying. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250839,250839#msg-250839 From 627571 at qq.com Fri Jun 13 14:14:51 2014 From: 627571 at qq.com (=?gb18030?B?0qbvvw==?=) Date: Fri, 13 Jun 2014 22:14:51 +0800 Subject: How could I forbid outside visits without response 403 Message-ID: Hi Buddy, I am a newer to Nginx world, now I have a project to link the varnish HTTP server and nginx together, nginx is the back end. I want to allow the connections only by varnish, so I use deny all ,this kind of stuff to archieve this. But if there is a way to compeletely forbid the connections, at present, even the outside connections is forbidden, but I think it still waste some resourses, "RETURN A 403 STATIC PAGE".. I will not use a iptables.. Thank you everyone !!!!!! I would appreciate very much.. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jun 13 14:26:20 2014 From: nginx-forum at nginx.us (MrDaniel) Date: Fri, 13 Jun 2014 10:26:20 -0400 Subject: Rtmp Module and LibRtmp In-Reply-To: References: Message-ID: <365b6c0db05bd5158b40f33707239ab1.NginxMailingListEnglish@forum.nginx.org> Further to this..... Here is the log for working swf and not working libRTMP Flash.swf 127.0.0.1 [06/Jun/2014:12:28:23 -0700] PLAY "vod" "bunny.flv" "" - 1037 400014 "http://dl.dropboxusercontent.com/u/2918563/flvplayback.swf" "WIN 13,0,0,214" (1m 34s) LibRTMP 127.0.0.1 [07/Jun/2014:12:44:39 -0700] PLAY "vod" "bunny.flv" "" - 306 25553 "" "" (1m 5s) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250839,250842#msg-250842 From nginx-forum at nginx.us Fri Jun 13 14:49:25 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 13 Jun 2014 10:49:25 -0400 Subject: Rtmp Module and LibRtmp In-Reply-To: <365b6c0db05bd5158b40f33707239ab1.NginxMailingListEnglish@forum.nginx.org> References: <365b6c0db05bd5158b40f33707239ab1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0b7751eae390157dcf0eaaf25fe84633.NginxMailingListEnglish@forum.nginx.org> Specific rtmp issues are better placed here: https://groups.google.com/forum/#!forum/nginx-rtmp Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250839,250843#msg-250843 From reallfqq-nginx at yahoo.fr Fri Jun 13 14:57:10 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 13 Jun 2014 16:57:10 +0200 Subject: How could I forbid outside visits without response 403 In-Reply-To: References: Message-ID: What you describe is by design the job of firewalls... in your case dropping unwanted connections rather that rejecting them. 1 tool = 1 task --- *B. R.* On Fri, Jun 13, 2014 at 4:14 PM, ?? <627571 at qq.com> wrote: > Hi Buddy, > I am a newer to Nginx world, now I have a project to link the varnish HTTP > server and nginx together, nginx is the back end. > > I want to allow the connections only by varnish, so I use deny all ,this > kind of stuff to archieve this. > > But if there is a way to compeletely forbid the connections, at present, > even the outside connections is forbidden, but I think it still waste some > resourses, "RETURN A 403 STATIC PAGE".. > > I will not use a iptables.. > > > Thank you everyone !!!!!! 
> > I would appreciate very much.. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Fri Jun 13 21:54:56 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 13 Jun 2014 22:54:56 +0100 Subject: How could I forbid outside visits without response 403 In-Reply-To: References: Message-ID: On 13 Jun 2014 15:15, "??" <627571 at qq.com> wrote: > I think it still waste some resourses, "RETURN A 403 STATIC PAGE".. I think you're probably wrong. You're almost certainly prematurely optimising the wrong thing. Just 403 the unwanted requests and move on with your job/life/project. I mean this sincerely, J -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists-nginx at swsystem.co.uk Fri Jun 13 23:19:25 2014 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Sat, 14 Jun 2014 00:19:25 +0100 Subject: How could I forbid outside visits without response 403 In-Reply-To: References: Message-ID: <539B86FD.4090201@swsystem.co.uk> On 13/06/14 15:14, ?? wrote: > Hi Buddy, > I am a newer to Nginx world, now I have a project to link the varnish > HTTP server and nginx together, nginx is the back end. > > I want to allow the connections only by varnish, so I use deny all > ,this kind of stuff to archieve this. > > But if there is a way to compeletely forbid the connections, at > present, even the outside connections is forbidden, but I think it > still waste some resourses, "RETURN A 403 STATIC PAGE".. > > I will not use a iptables.. > If varnish and nginx are on the same machine, you could configure nginx listen to listen on loopback (127.0.0.1:8080 say) and varnish to connect to that ip:port. This will stop all external direct access to nginx. I'm guessing you've some conditional check in nginx that's currently denying external access, you could look at the 444 return code. A quick google came up with Steve. From ahmetselmi at gmail.com Sat Jun 14 14:16:14 2014 From: ahmetselmi at gmail.com (Ahmet Selami) Date: Sat, 14 Jun 2014 17:16:14 +0300 Subject: Apache To Nginx Web Server Message-ID: Hello, I want to change my Web Server from Apache to Nginx, but i have a configuration error with this server. I want to change the rewrite module inside .htaccess, shown below, as nginx configuration rewrite module. Can you help me how i change this module as nginx rewrite module? What is the converted nginx module of this apache rewrite module? Thanks, Have a good day. Options -MultiViews RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.+)/$ http://%{HTTP_HOST}/$1 [R=301,L] RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^ index.php [L] -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahmetselmi at gmail.com Sat Jun 14 14:33:11 2014 From: ahmetselmi at gmail.com (Ahmet Selami) Date: Sat, 14 Jun 2014 17:33:11 +0300 Subject: Apache To Nginx Web Server Message-ID: Hello, I want to change my Web Server from Apache to Nginx, but i have a configuration error with this server. I want to change the rewrite module inside .htaccess, shown below, as nginx configuration rewrite module. Can you help me how i change this module as nginx rewrite module? What is the converted nginx module of this apache rewrite module? Thanks, Have a good day. 
Options -MultiViews RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.+)/$ http://%{HTTP_HOST}/$1 [R=301,L] RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^ index.php [L] -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahmetselmi at gmail.com Sat Jun 14 14:35:14 2014 From: ahmetselmi at gmail.com (Ahmet Selami) Date: Sat, 14 Jun 2014 17:35:14 +0300 Subject: Apache To Nginx Web Server Message-ID: Hello, I want to change my Web Server from Apache to Nginx, but i have a configuration error with this server. I want to change the rewrite module inside .htaccess, shown below, as nginx configuration rewrite module. Can you help me how i change this module as nginx rewrite module? What is the converted nginx module of this apache rewrite module? Thanks, Have a good day. Options -MultiViews RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.+)/$ http://%{HTTP_HOST}/$1 [R=301,L] RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^ index.php [L] -------------- next part -------------- An HTML attachment was scrubbed... URL: From roy at xdam.com Sat Jun 14 14:38:29 2014 From: roy at xdam.com (Roy Phillips) Date: Sat, 14 Jun 2014 10:38:29 -0400 Subject: nginx error after trying to upload images Message-ID: Hi all, Anyone have an idea why we would get this error when trying to upload image? I am stumped as to why nginx erro would be mixed in with this. Is the customers ISP using nginx and blocking our upload? Thank you, Roy Phillips XDAM Support roy at xdam.com http://www.xdam.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Unknown[1].png Type: image/png Size: 48758 bytes Desc: not available URL: From reallfqq-nginx at yahoo.fr Sat Jun 14 15:16:38 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 14 Jun 2014 17:16:38 +0200 Subject: nginx error after trying to upload images In-Reply-To: References: Message-ID: Without much information about the context/usage of you application, what follows is pure speculation. *First, please note you are on the nginx mailing list, where you might ask for help about handling nginx itself.* *Bumping in unexpected 3rd-party servers is out of scope.* I do not see why ISP would use proxies redirecting URI to others... Routing is not done that way ^^ Strange nginx in the middle might be the sign of cmpromission/tampering. Early checks I would do: Are you sure about your DNS resolution (ie domain name resolves to correct IP)? Are you sure the target machine is legit? Traceroutes might reveal which nodes your traffic is going through. Any oddity is to be investigated. Finally, using SSL certificates might prevent another machine impersonating the one you are trying to contact. --- *B. R.* On Sat, Jun 14, 2014 at 4:38 PM, Roy Phillips wrote: > Hi all, > > Anyone have an idea why we would get this error when trying to upload > image? I am stumped as to why nginx erro would be mixed in with this. Is > the customers ISP using nginx and blocking our upload? > > > > Thank you, > Roy Phillips > XDAM Support > roy at xdam.com > > http://www.xdam.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Unknown[1].png Type: image/png Size: 48758 bytes Desc: not available URL: From roy at xdam.com Sat Jun 14 15:57:26 2014 From: roy at xdam.com (Roy Phillips) Date: Sat, 14 Jun 2014 11:57:26 -0400 Subject: nginx error after trying to upload images In-Reply-To: References: Message-ID: Thanks for responding. it?s a custom web app that clients use to drag and drop images through a web browser. They are in the UK and on Virgin network. Clients on other isp/networks are fine. Wasn?t sure where to go since the ISP says they don?t use a proxy server. Thank you, Roy Phillips XDAM Support roy at xdam.com http://www.xdam.com From: "B.R." Reply-To: Date: Saturday, June 14, 2014 at 11:16 AM To: Nginx ML Subject: Re: nginx error after trying to upload images Without much information about the context/usage of you application, what follows is pure speculation. First, please note you are on the nginx mailing list, where you might ask for help about handling nginx itself. Bumping in unexpected 3rd-party servers is out of scope. I do not see why ISP would use proxies redirecting URI to others... Routing is not done that way ^^ Strange nginx in the middle might be the sign of cmpromission/tampering. Early checks I would do: Are you sure about your DNS resolution (ie domain name resolves to correct IP)? Are you sure the target machine is legit? Traceroutes might reveal which nodes your traffic is going through. Any oddity is to be investigated. Finally, using SSL certificates might prevent another machine impersonating the one you are trying to contact. --- B. R. On Sat, Jun 14, 2014 at 4:38 PM, Roy Phillips wrote: > Hi all, > > Anyone have an idea why we would get this error when trying to upload image? I > am stumped as to why nginx erro would be mixed in with this. Is the customers > ISP using nginx and blocking our upload? > > > > Thank you, > Roy Phillips > XDAM Support > roy at xdam.com > http://www.xdam.com > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Unknown[1].png Type: image/png Size: 48758 bytes Desc: not available URL: From mdounin at mdounin.ru Sat Jun 14 19:35:37 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 14 Jun 2014 23:35:37 +0400 Subject: Accessing the location configuration of module 2 during post configuration processing of module 1 for a particular server In-Reply-To: <1402552018.84107.YahooMailNeo@web193505.mail.sg3.yahoo.com> References: <1402337353.2224.YahooMailNeo@web193506.mail.sg3.yahoo.com> <1402552018.84107.YahooMailNeo@web193505.mail.sg3.yahoo.com> Message-ID: <20140614193537.GM1849@mdounin.ru> Hello! On Thu, Jun 12, 2014 at 01:46:58PM +0800, Rv Rv wrote: [...] > In the post configuration, I see that flag is not properly set > but somestring is. Flag is properly set during request > processing though. > Are the values set during processing of a directive in location > struct guaranteed to be set by the time post configuration is > executed? > When is the time that one can check for the values set during > configuration. 
I need to test these values to ensure that they > are sane when nginx is executed with -t option Again: there are lots of location configurations, and by trying to access them at postconfiguration callback you are likely checking a wrong one. Note that even a simple config with a single location in a single server{} block, like this: http { server { location { ... } } } has 3 location configurations for each http module. As previously suggested, you should consider using merge callbacks to validate configuration instead. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Sun Jun 15 01:48:13 2014 From: nginx-forum at nginx.us (justink101) Date: Sat, 14 Jun 2014 21:48:13 -0400 Subject: Parse JSON POST request into nginx variable Message-ID: <9b0283fab62b8173a9f29b732e9609c9.NginxMailingListEnglish@forum.nginx.org> How can I read a POST request body which is JSON and get a property? I need to read a property and use it as a variable in proxy_pass. Pseudo code: $post_request_body = '{"account": "test.mydomain.com", "more-stuff": "here"}'; // I want to get $account = "test.mydomain.com"; proxy_pass $account/rest/of/url/here; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250864,250864#msg-250864 From rpaprocki at fearnothingproductions.net Sun Jun 15 01:58:12 2014 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Sat, 14 Jun 2014 18:58:12 -0700 Subject: Parse JSON POST request into nginx variable In-Reply-To: <9b0283fab62b8173a9f29b732e9609c9.NginxMailingListEnglish@forum.nginx.org> References: <9b0283fab62b8173a9f29b732e9609c9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <539CFDB4.6070303@fearnothingproductions.net> There is a form input module you can use to parse POST body into a variable: https://github.com/calio/form-input-nginx-module However this will not get JSON data. For this you make want to look into leveraging the nxin Lua module in conjunction with the Lua cjson module: http://wiki.nginx.org/HttpLuaModule http://www.kyne.com.au/~mark/software/lua-cjson.php The openresty package combines the above two modules into one package :) On 06/14/2014 06:48 PM, justink101 wrote: > How can I read a POST request body which is JSON and get a property? I need > to read a property and use it as a variable in proxy_pass. > > Pseudo code: > > $post_request_body = '{"account": "test.mydomain.com", "more-stuff": > "here"}'; > // I want to get > $account = "test.mydomain.com"; > proxy_pass $account/rest/of/url/here; > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250864,250864#msg-250864 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at nginx.us Sun Jun 15 02:17:22 2014 From: nginx-forum at nginx.us (MrDaniel) Date: Sat, 14 Jun 2014 22:17:22 -0400 Subject: Rtmp Module and LibRtmp In-Reply-To: References: Message-ID: <0035b457015e8d9cb0e57b8e0cb9f3f3.NginxMailingListEnglish@forum.nginx.org> Sadly, not much of a response there either. The Google groups in general seem to be quite casual and less likely to be maintained or reviewed. This is the same for d3js and cRtmpServer. :/ Any other recommended resources? As there are none for libRTMP and this is very disparaging. You'd think there'd be more of a handle on this topic that what exists out there. I mean, how hard can it be to send and receive a char * in C++ via RTMP? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250839,250866#msg-250866 From nginx-forum at nginx.us Sun Jun 15 11:31:52 2014 From: nginx-forum at nginx.us (itpp2012) Date: Sun, 15 Jun 2014 07:31:52 -0400 Subject: Rtmp Module and LibRtmp In-Reply-To: <0035b457015e8d9cb0e57b8e0cb9f3f3.NginxMailingListEnglish@forum.nginx.org> References: <0035b457015e8d9cb0e57b8e0cb9f3f3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <97a59af5563aa577dfb905b8665c5686.NginxMailingListEnglish@forum.nginx.org> Thats about it when it comes to support, though it is weekend maybe as of monday there might be more activity, Roman is very active there. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250839,250867#msg-250867 From nginx-forum at nginx.us Sun Jun 15 22:03:27 2014 From: nginx-forum at nginx.us (alayim) Date: Sun, 15 Jun 2014 18:03:27 -0400 Subject: ngx_http_process_header_line function in source code Message-ID: <2ea40adfc0159fc59a0ed26915d6cd49.NginxMailingListEnglish@forum.nginx.org> Hi, I'm browsing through the source code of the project, and looked at ngx_http_request.c where the function ngx_http_process_header_line() creates a pointer to a pointer to a large struct(ngx_http_request_t) containing a smaller one(ngx_http_headers_in_t), containing yet another one. ngx_http_process_header_line(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset) { ph = (ngx_table_elt_t **) ((char *) &r->headers_in + offset); // ... then check if ph is NULL, and if so point it to h } Why is it done in this way? It seems quite complex and error prone, doesen't it? Is there any reason something like this wasn't done instead? (where range are ONE of those structures in headers_in that are a ngx_table_elt_t) if(r->headers_in->range == NULL) { r->headers_in->range = h; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250869,250869#msg-250869 From nginx-forum at nginx.us Mon Jun 16 00:12:07 2014 From: nginx-forum at nginx.us (Eberton) Date: Sun, 15 Jun 2014 20:12:07 -0400 Subject: 400 bad requests now returning http headers? ( crossdomain.xml ) In-Reply-To: <20140611202129.GF1849@mdounin.ru> References: <20140611202129.GF1849@mdounin.ru> Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Tue, Jun 10, 2014 at 04:07:28PM -0400, Thaxll wrote: > > > Hi Maxim, > > > > Thank you for the quick reply, I guess there is no workaround for > that > > problem? It isn't possible to remove headers or specify a dummy > protocol for > > Nginx? > > I don't think there is anything that can be done at the > configuration level. On the other hand, it should be more or less > trivial to write a module to force nginx to think the protocol was > HTTP/0.9 and to respond accordingly. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Found a one solution Max? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250772,250870#msg-250870 From nginx-forum at nginx.us Mon Jun 16 05:17:54 2014 From: nginx-forum at nginx.us (Keyur) Date: Mon, 16 Jun 2014 01:17:54 -0400 Subject: GeoIP FirstNonPrivateXForwardedForIP In-Reply-To: References: Message-ID: <6d056da83ea1f9c6ddcf344f28e31eff.NginxMailingListEnglish@forum.nginx.org> Hi Lukas, Thanks for your reply. 
I have already tried http://nginx.org/en/docs/http/ngx_http_geoip_module.html#geoip_proxy But this needs a list of subnets / networks to be whitelisted first as a trusted source. I do not (Can not) have a list of such networks as they can be intermediate proxy of any company. Eg : Google chrome on smartphone uses Google compression proxy in between before reaching the actual server where website is hosted. Opera mini also does the same and similarly don't know who all does it. So I can not have a list of all trusted networks. I've also come across an issue where someone sitting behind a proxy. (Eg: Squid on local network) and browse internet then the first IP from left is LAN IP (Private network address) and then the public IP follows. Here GeoIP country detection fails. Eg : 10.0.0.50, - - [12/Jun/2014:17:09:28 +0530] "GET / HTTP/1.1" 200 50675 I need a way where I can tell nginx that it should do GeoIP on the First Public IP from left. Currently due to private address at the first place GeoIP fails and country is not detected. Do suggest what can be done. Regards, Keyur Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250823,250871#msg-250871 From es12b1001 at iith.ac.in Mon Jun 16 06:53:44 2014 From: es12b1001 at iith.ac.in (Adarsh Pugalia) Date: Mon, 16 Jun 2014 12:23:44 +0530 Subject: Order of directive call during request handling Message-ID: I am not able to get the order in which the directive handlers are called when a request comes. I have a directive to connect to the database and the other to do put operation. In the conf file, i write the connect directive first, but when request comes, the connect directive handler is not called, and the put handler is called first. Can you please help me with the order in which they are called? -------------- next part -------------- An HTML attachment was scrubbed... URL: From rvrv7575 at yahoo.com Mon Jun 16 06:58:50 2014 From: rvrv7575 at yahoo.com (Rv Rv) Date: Mon, 16 Jun 2014 14:58:50 +0800 (SGT) Subject: Accessing the location configuration of module 2 during post configuration processing of module 1 for a particular server In-Reply-To: <1402552018.84107.YahooMailNeo@web193505.mail.sg3.yahoo.com> References: <1402337353.2224.YahooMailNeo@web193506.mail.sg3.yahoo.com> <1402552018.84107.YahooMailNeo@web193505.mail.sg3.yahoo.com> Message-ID: <1402901930.526.YahooMailNeo@web193505.mail.sg3.yahoo.com> Hello Maxim Thanks for the response. >>As previously suggested, you should consider using merge callbacks>>to validate configuration instead. The requirement is *not* to validate the configuration. The requirement is to find the final value set by one of the directives once the configuration has been parsed. e..g lets say we have a directive my_set_flag that sets a value to 0 or 1. So if in configuration we have? location \ { my_set_flag 0; --- my_set_flag 1; --- my_set_flag 0; } then the merge callback will be called thrice for each invocation of the directive. Let's assume the logic is to set a variable with whatever the value of the directive was. So once parsing completes, the value of the variable should be 0. I can get this value during request processing. However, I cannot get this value *after* the parsing of the configuration has completed. ?What is the nginx recommended way to get this value. As noted in earlier post, I am not seeing the correct values in post configuration - and so perhaps that is not the right way. Thanks for your continued inputs Hello! On Thu, Jun 12, 2014 at 01:46:58PM +0800, Rv Rv wrote: [...] 
> In the post configuration, I see that flag is not properly set > but somestring is. Flag is properly set during request > processing though. > Are the values set during processing of a directive in location > struct guaranteed to be set by the time post configuration is > executed? > When is the time that one can check for the values set during > configuration. I need to test these values to ensure that they > are sane when nginx is executed with -t option Again: there are lots of location configurations, and by trying to access them at postconfiguration callback you are likely checking a wrong one. Note that even a simple config with a single location in a single server{} block, like this: http { server { location { ... } } } has 3 location configurations for each http module. As previously suggested, you should consider using merge callbacks to validate configuration instead. -- Maxim Dounin http://nginx.org/ On Thursday, 12 June 2014 11:16 AM, Rv Rv wrote: Hello Maxim Thanks for your response. Here is a related query. Say in module 1 I have a ?typedef struct ?{ int flag; ngx_str somestring; }?module1; flag gets initialized with the following code? ? ? { ngx_string("module1_directive"), ? ? ? NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ? ? ? ngx_conf_set_flag_slot, ? ? ? NGX_HTTP_LOC_CONF_OFFSET, ? ? ? offsetof(module1,configured), ? ? ? NULL }, somestring gets iniitialized with a handler written in the module (i.e ?not ngx_conf_set_flag_slot or any inbuilt handler). In the post configuration, I see that flag is not properly set but somestring is. Flag is properly set during request processing though. Are the values set during processing of a directive in location struct guaranteed to be set by the time post configuration is executed? When is the time that one can check for the values set during configuration. I need to test these values to ensure that they are sane when nginx is executed with -t option Thanks Hello! On Tue, Jun 10, 2014 at 02:09:13AM +0800, Rv Rv wrote: > How do we access the configuration of a an unrelated module in a? > given module. This may be required for example to check if the? > directives pertaining to module 2 were specified in location for? > a particular server that has directives for module 1 in its? > configuration. I don't think it's something you should do at postconfiguration -? location structure is complex and not easily accessible.? There? are location configuration merge callbacks where you are expected? to work with location configs and, in particular, can use? ngx_http_conf_get_module_loc_conf() macro to access a? configuration of other modules (note though, that order of modules? may be important in this case). [...] > I did not find any documentation on how the configuration is stored within nginx using these structs? It's under src/, in C language. I would rather say it's not a part of the API, and you'd better? avoid using it directly. --? Maxim Dounin http://nginx.org/ ------------------------------ On Monday, 9 June 2014 11:39 PM, Rv Rv wrote: How do we access the configuration of a an unrelated module in a given module. This may be required for example to check if the directives pertaining to module 2 were specified in location for a particular server that has directives for module 1 in its configuration. From what I understand, code similar to this can be used? /* Get http main configuration */ ? ? cmcf = ctx->main_conf[ngx_http_core_module.ctx_index];? /* Get the list of servers */ ? ? cscfp = cmcf->servers.elts; /* Iterate through the list */ ? ? 
for (s = 0; s < cmcf->servers.nelts; s++) { ? /* Problem : how to get the configuration of module 2*/ ? ? ? ? ? ? ? ? ? cscfp[s]->ctx->loc_conf[module2.ctx_index];-------------> does not yield the correct location struct of module 2 I did not find any documentation on how the configuration is stored within nginx using these structs? typedef struct { ............. ?/* server ctx */ ngx_http_conf_ctx_t *ctx;?............ } ngx_http_core_srv_conf_t; typedef struct { ? ? void ? ? ? ?**main_conf; ? ? void ? ? ? ?**srv_conf; ? ? void ? ? ? ?**loc_conf; } ngx_http_conf_ctx_t; -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Mon Jun 16 07:03:56 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 16 Jun 2014 19:03:56 +1200 Subject: nginx + Message-ID: <1402902236.3413.124.camel@steve-new> Sorry to ask this, but does using this *require* a support contract? I'm interested in the streaming stuff, but it's not worth $1300/yr to me... Cheers, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From luky-37 at hotmail.com Mon Jun 16 07:12:47 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 16 Jun 2014 09:12:47 +0200 Subject: GeoIP FirstNonPrivateXForwardedForIP In-Reply-To: <6d056da83ea1f9c6ddcf344f28e31eff.NginxMailingListEnglish@forum.nginx.org> References: , <6d056da83ea1f9c6ddcf344f28e31eff.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, > Thanks for your reply. > > I have already tried > http://nginx.org/en/docs/http/ngx_http_geoip_module.html#geoip_proxy > > But this needs a list of subnets / networks to be whitelisted first as a > trusted source. I do not (Can not) have a list of such networks as they can > be intermediate proxy of any company. Eg : Google chrome on smartphone uses > Google compression proxy in between before reaching the actual server where > website is hosted. Opera mini also does the same and similarly don't know > who all does it. So I can not have a list of all trusted networks. You cannot trust X-F-F headers of untrusted third party networks and proxies, otherwise everyone can spoof whatever remote IP they want. Don't do this. Lukas From nginx-forum at nginx.us Mon Jun 16 07:19:49 2014 From: nginx-forum at nginx.us (Keyur) Date: Mon, 16 Jun 2014 03:19:49 -0400 Subject: GeoIP FirstNonPrivateXForwardedForIP In-Reply-To: References: Message-ID: <0ba92eaa217acbee68d2ee8c3c09177f.NginxMailingListEnglish@forum.nginx.org> Hi Lucas, Noted! Agreed! How do I tell nginx to do GeoIP on FirstNonPrivateXForwardedForIP ? Regards, Keyur Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250823,250876#msg-250876 From miaohonghit at gmail.com Mon Jun 16 07:19:32 2014 From: miaohonghit at gmail.com (Harold.Miao) Date: Mon, 16 Jun 2014 15:19:32 +0800 Subject: nginx -s reload problem Message-ID: hi all I use a endless rtmp stream /usr/local/nginx/wsgi/ffmpeg -i haha.mp4 -c:v libx264 -b:v 500k -c:a copy -f flv rtmp://172.16.205.50:1936/publish/you as you known, if I use nginx -s reload , then I got a lot of "nginx: worker process is shutting down" pplive 15355 13642 0 14:56 ? 00:00:00 nginx: worker process is shutting down pplive 15356 13642 0 14:56 ? 00:00:00 nginx: worker process is shutting down pplive 15357 13642 0 14:56 ? 00:00:00 nginx: worker process is shutting down pplive 15358 13642 0 14:56 ? 00:00:00 nginx: worker process is shutting down pplive 15359 13642 0 14:56 ? 
00:00:00 nginx: worker process is shutting down pplive 15360 13642 0 14:56 ? 00:00:00 nginx: worker process is shutting down ? because the connection will not quit, so "nginx: worker process is shutting down" more and more so How to aviod this status using "nginx -s reload" -- Best Regards, Harold Miao -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Jun 16 09:14:59 2014 From: nginx-forum at nginx.us (mknazri) Date: Mon, 16 Jun 2014 05:14:59 -0400 Subject: How to redirect error page (html) self built html error page Message-ID: Hi everyone, Sorry for my english. My scenario like below ; I setup my webserver in cloud and I have my ERP server running at different DC. The connection from cloud to DC is very fast, no interuption at all. When my customer access www.example.com it will show the website that i build in cloud (Webserver). Each customer have vpn access when they subscribe with us. So, when they wat to use ERP system, they need to connect to vpn first and they can go through https://erp.example.com. I configured NGINX to allow only server subnet IP and tunnel network subnet for VPN. The problem is when they click to login page, it will show error 403 not my html error page. I want it to trigger error page that I build in HTML. My problem is it not show the error page (html) that I build. It wil show error 403. Please anyone help me Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250878,250878#msg-250878 From ar at xlrs.de Mon Jun 16 09:45:33 2014 From: ar at xlrs.de (Axel) Date: Mon, 16 Jun 2014 11:45:33 +0200 Subject: How to redirect error page (html) self built html error page In-Reply-To: References: Message-ID: <4741286.eq38nxfU1N@lxrosenski.pag> Hi, Am Montag, 16. Juni 2014, 05:14:59 schrieb mknazri: > Hi everyone, > > Sorry for my english. My scenario like below ; > > I setup my webserver in cloud and I have my ERP server running at different > DC. The connection from cloud to DC is very fast, no interuption at all. > When my customer access www.example.com it will show the website that i > build in cloud (Webserver). Each customer have vpn access when they > subscribe with us. So, when they wat to use ERP system, they need to connect > to vpn first and they can go through https://erp.example.com. I configured > NGINX to allow only server subnet IP and tunnel network subnet for VPN. The > problem is when they click to login page, it will show error 403 not my > html error page. I want it to trigger error page that I build in HTML. > > My problem is it not show the error page (html) that I build. It wil show > error 403. How did you configure your error_page? Here's a snippet I use error_page 500 502 503 504 =200 /maintenance.html; location /maintenance.html { root /etc/nginx; } You can add 403 and 404 status. 
> Please anyone help me > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,250878,250878#msg-250878 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Regards, Axel From nginx-forum at nginx.us Mon Jun 16 10:12:30 2014 From: nginx-forum at nginx.us (mknazri) Date: Mon, 16 Jun 2014 06:12:30 -0400 Subject: How to redirect error page (html) self built html error page In-Reply-To: <4741286.eq38nxfU1N@lxrosenski.pag> References: <4741286.eq38nxfU1N@lxrosenski.pag> Message-ID: <256cecf65a9d6eb4e1dfd1ec1ee0aec8.NginxMailingListEnglish@forum.nginx.org> Hi Axel, This is my nginx conf in /etc/nginx/sites-available/erp; upstream webserver { server 127.0.0.1:8078 weight=1 fail_timeout=300s; } server { listen 80; server_name example.com; location / { proxy_pass http://127.0.0.1:5000; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-for $remote_addr; port_in_redirect off; proxy_connect_timeout 300; } } server { listen 80; server_name _; # server_name erp.example.com; # Strict Transport Security add_header Strict-Transport-Security max-age=2592000; rewrite ^/.*$ https://$host$request_uri? permanent; } server { # server port and name listen 443 default; server_name erp.example.com; # Specifies the maximum accepted body size of a client request, # as indicated by the request header Content-Length. client_max_body_size 200m; # ssl log files access_log /var/log/nginx/openerp-access.log; error_log /var/log/nginx/openerp-error.log; # ssl certificate files ssl on; ssl_certificate /etc/ssl/nginx/server.crt; ssl_certificate_key /etc/ssl/nginx/server.key; # add ssl specific settings keepalive_timeout 60; # limit ciphers ssl_ciphers HIGH:!ADH:!MD5; ssl_protocols SSLv3 TLSv1; ssl_prefer_server_ciphers on; # increase proxy buffer to handle some ERP web requests proxy_buffers 16 64k; proxy_buffer_size 128k; allow 10.8.8.0/24; allow 10.9.8.0/24; deny all; error_page 403 /error403.html; location = /index.html { index index.html; root /usr/share/nginx/www; allow all; } location / { proxy_pass http://webserver; # force timeouts if the backend dies proxy_next_upstream error timeout invalid_header http_500 http_502 http_503; # set headers proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; # Let the OpenERP web service know that we're using HTTPS, otherwise # it will generate URL using http:// and not https:// proxy_set_header X-Forwarded-Proto https; # by default, do not forward anything proxy_redirect off; allow all; } # cache some static data in memory for 60mins. # under heavy load this should relieve stress on the ERP web interface a bit. location ~* /web/static/ { proxy_cache_valid 200 60m; proxy_buffering on; expires 864000; proxy_pass http://webserver; } } For your information, 10.8.8.0/24 is tunnel vpn that I allowed and 10.9.8.0/24 is server subnet also allowed. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250878,250880#msg-250880 From defan at nginx.com Mon Jun 16 11:22:11 2014 From: defan at nginx.com (Andrei Belov) Date: Mon, 16 Jun 2014 15:22:11 +0400 Subject: tmp directory filling up In-Reply-To: <20140611192330.GC1849@mdounin.ru> References: <076e6438b60e67a46bf809df89178bcf.NginxMailingListEnglish@forum.nginx.org> <20140611192330.GC1849@mdounin.ru> Message-ID: <64C9FB13-0425-4F23-879A-B3189696F07F@nginx.com> On 11 Jun 2014, at 23:23, Maxim Dounin wrote: > Hello! 
> > On Wed, Jun 11, 2014 at 10:58:47AM -0400, Tatonka wrote: > >> Hi, >> >> I have a rails application that is hosted through nginx and passenger. In >> this application I want provide very large files for the users to download >> (>2GB) using send_file .. which is working just fine on the development and >> staging system. On the production system however the system tmp directory is >> limited to 1GB (separately mounted disk). >> >> When triggering a download, the tmp folder quickly fills up and the download >> breaks once it is completely full. I already moved passengers /tmp directory >> to a new location but could find how to do the same for nginx (I did set >> $tmp and $tmpdir with no effect). >> >> When looking into the /tmp directory however, I cannot find any large files >> that would explain what is happening, nevertheless, df reports it is filling >> up at the same time .. >> >> Lastly .. I also specified the proxy_temp_path directive in the nginx >> config. Again with no effect. > > The proxy_temp_path is related to the problem, but it's for proxy, > not for passenger, and it's expected that it has no effect in your > case. > >> Is there any way to specify which directory nginx uses for its tmp data? Is >> nginx even the culprit here? > > That's not about nginx, but rather about passenger module for > nginx. > > Last time I checked, passenger module for nginx implemented its > own protocol for the upstream module (like proxy/fastcgi/etc), and > should have its own "..._temp_path" directive, as well as > "..._max_temp_file_size" and so on. It?s still true: https://github.com/phusion/passenger/blob/master/ext/nginx/Configuration.c#L55-L57 It uses NGX_HTTP_PROXY_TEMP_PATH which is set at configure stage. From mdounin at mdounin.ru Mon Jun 16 11:31:20 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Jun 2014 15:31:20 +0400 Subject: ngx_http_process_header_line function in source code In-Reply-To: <2ea40adfc0159fc59a0ed26915d6cd49.NginxMailingListEnglish@forum.nginx.org> References: <2ea40adfc0159fc59a0ed26915d6cd49.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140616113119.GP1849@mdounin.ru> Hello! On Sun, Jun 15, 2014 at 06:03:27PM -0400, alayim wrote: > Hi, > I'm browsing through the source code of the project, and looked at > ngx_http_request.c where the function ngx_http_process_header_line() creates > a pointer to a pointer to a large struct(ngx_http_request_t) containing a > smaller one(ngx_http_headers_in_t), containing yet another one. > > ngx_http_process_header_line(ngx_http_request_t *r, ngx_table_elt_t *h, > ngx_uint_t offset) { > ph = (ngx_table_elt_t **) ((char *) &r->headers_in + offset); > // ... then check if ph is NULL, and if so point it to h > } > > > Why is it done in this way? It seems quite complex and error prone, doesen't > it? > Is there any reason something like this wasn't done instead? > > (where range are ONE of those structures in headers_in that are a > ngx_table_elt_t) > if(r->headers_in->range == NULL) { > r->headers_in->range = h; > } The ngx_http_process_header_line() function is used to handle lots of header lines. You can't hardcode just one name into it - you'll have to write 15 functions instead (and add another one on each header line added). Using one universal function instead greatly reduces code size and therefore less error prone. 
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Jun 16 11:33:53 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Jun 2014 15:33:53 +0400 Subject: Order of directive call during request handling In-Reply-To: References: Message-ID: <20140616113353.GQ1849@mdounin.ru> Hello! On Mon, Jun 16, 2014 at 12:23:44PM +0530, Adarsh Pugalia wrote: > I am not able to get the order in which the directive handlers are called > when a request comes. I have a directive to connect to the database and the > other to do put operation. In the conf file, i write the connect directive > first, but when request comes, the connect directive handler is not called, > and the put handler is called first. Can you please help me with the order > in which they are called? You may want to read Evan Miller's guides, see links here: http://nginx.org/en/links.html -- Maxim Dounin http://nginx.org/ From shahzaib.cb at gmail.com Mon Jun 16 11:49:34 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 16 Jun 2014 16:49:34 +0500 Subject: Really high disk i/o !! Message-ID: Our server HDD i/o is constant on 8MB/s and i/o utilization + await is very high due to which nginx video streaming is really slow and we're receiving complains from our users regarding slow streaming of the videos. We're using 12X3TB SATA HDD Hardware-Raid10 with 16G RAM OS Centos 6.4 8MB/s w/r should not be issue for 12X3TB SATA HDD. Maybe i need to tweak some nginx buffers or kernels in order to reduce the high io wait ? Could someone point me to the right direction ? We can't afford SAS Drives right now and have to go with the SATA. Linux 2.6.32-431.17.1.el6.x86_64 (storage17) 06/16/2014 _x86_64_ (8 CPU) Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 123.66 336.85 118.87 7.67 19496.19 2759.42 175.88 2.40 18.93 6.27 79.35 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 75.00 8.67 108.00 2.00 18117.33 85.33 165.48 1.88 17.05 6.86 75.47 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 76.00 0.00 94.00 0.33 17192.00 2.67 182.28 1.50 16.04 7.47 70.47 Any help would be highly appreciated. Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard at kearsley.me Mon Jun 16 14:13:12 2014 From: richard at kearsley.me (Richard Kearsley) Date: Mon, 16 Jun 2014 15:13:12 +0100 Subject: Really high disk i/o !! In-Reply-To: References: Message-ID: <539EFB78.4080500@kearsley.me> On 16/06/14 12:49, shahzaib shahzaib wrote: > > 8MB/s w/r should not be issue for 12X3TB SATA HDD. Maybe i need to > tweak some nginx buffers or kernels in order to reduce the high io wait ? > if you have a high number of concurrent connections and/or use limit_rate, then expect hdd (sata or sas) to quickly run out of iops tip from me: use nmon (linux) to see the real time status of your disks. If they are wrapped up in iowait then you need a faster seeking disk (e.g. 
ssd because sas won't give you a significant amount extra) if only one of them is in iowait then it's probably a dead/dying disk From mdounin at mdounin.ru Mon Jun 16 14:14:40 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Jun 2014 18:14:40 +0400 Subject: Accessing the location configuration of module 2 during post configuration processing of module 1 for a particular server In-Reply-To: <1402901930.526.YahooMailNeo@web193505.mail.sg3.yahoo.com> References: <1402337353.2224.YahooMailNeo@web193506.mail.sg3.yahoo.com> <1402552018.84107.YahooMailNeo@web193505.mail.sg3.yahoo.com> <1402901930.526.YahooMailNeo@web193505.mail.sg3.yahoo.com> Message-ID: <20140616141440.GT1849@mdounin.ru> Hello! On Mon, Jun 16, 2014 at 02:58:50PM +0800, Rv Rv wrote: > Hello Maxim > Thanks for the response. > >>As previously suggested, you should consider using merge callbacks>>to validate configuration instead. > > The requirement is *not* to validate the configuration. The > requirement is to find the final value set by one of the > directives once the configuration has been parsed. e..g lets say > we have a directive my_set_flag that sets a value to 0 or 1. > > So if in configuration we have? > location \ { > my_set_flag 0; > --- > my_set_flag 1; > --- > my_set_flag 0; > } Such a configuration is invalid due to duplicate "my_set_flag" directive, and will be rejected during configuration parsing. > then the merge callback will be called thrice for each > invocation of the directive. Let's assume the logic is to set a > variable with whatever the value of the directive was. So once > parsing completes, the value of the variable should be 0. You are misunderstanding what merge callback is, what it does and how it's called. > I can get this value during request processing. However, I > cannot get this value *after* the parsing of the configuration > has completed. ?What is the nginx recommended way to get this > value. As noted in earlier post, I am not seeing the correct > values in post configuration - and so perhaps that is not the > right way. Again: accessing a location configuration from postconfiguration callback isn't trivial and not supported. You should use merge callback instead, or use main or server configuration instead, not location. Before proceeding any further, I would recommend you to spend some time digging into nginx configuration basics. In particular, Evan Miller's guide may be helpful, see links here: http://nginx.org/en/links.html -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Jun 16 14:26:56 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Jun 2014 18:26:56 +0400 Subject: tmp directory filling up In-Reply-To: <64C9FB13-0425-4F23-879A-B3189696F07F@nginx.com> References: <076e6438b60e67a46bf809df89178bcf.NginxMailingListEnglish@forum.nginx.org> <20140611192330.GC1849@mdounin.ru> <64C9FB13-0425-4F23-879A-B3189696F07F@nginx.com> Message-ID: <20140616142656.GU1849@mdounin.ru> Hello! On Mon, Jun 16, 2014 at 03:22:11PM +0400, Andrei Belov wrote: > > On 11 Jun 2014, at 23:23, Maxim Dounin wrote: > > > Hello! > > > > On Wed, Jun 11, 2014 at 10:58:47AM -0400, Tatonka wrote: > > > >> Hi, > >> > >> I have a rails application that is hosted through nginx and passenger. In > >> this application I want provide very large files for the users to download > >> (>2GB) using send_file .. which is working just fine on the development and > >> staging system. On the production system however the system tmp directory is > >> limited to 1GB (separately mounted disk). 
> >> > >> When triggering a download, the tmp folder quickly fills up and the download > >> breaks once it is completely full. I already moved passengers /tmp directory > >> to a new location but could find how to do the same for nginx (I did set > >> $tmp and $tmpdir with no effect). > >> > >> When looking into the /tmp directory however, I cannot find any large files > >> that would explain what is happening, nevertheless, df reports it is filling > >> up at the same time .. > >> > >> Lastly .. I also specified the proxy_temp_path directive in the nginx > >> config. Again with no effect. > > > > The proxy_temp_path is related to the problem, but it's for proxy, > > not for passenger, and it's expected that it has no effect in your > > case. > > > >> Is there any way to specify which directory nginx uses for its tmp data? Is > >> nginx even the culprit here? > > > > That's not about nginx, but rather about passenger module for > > nginx. > > > > Last time I checked, passenger module for nginx implemented its > > own protocol for the upstream module (like proxy/fastcgi/etc), and > > should have its own "..._temp_path" directive, as well as > > "..._max_temp_file_size" and so on. > > It?s still true: > > https://github.com/phusion/passenger/blob/master/ext/nginx/Configuration.c#L55-L57 > > It uses NGX_HTTP_PROXY_TEMP_PATH which is set at configure stage. And it looks like it doesn't provide relevant directives, so the only option left is to switch off buffering completely, using the "passenger_buffer_response" directive (which is again incorrectly named, it should be "..._buffering"). Well, I never had a reason to say anything good about passenger, so it's at least consistent... ;) -- Maxim Dounin http://nginx.org/ From sarah at nginx.com Mon Jun 16 15:14:09 2014 From: sarah at nginx.com (sarah.novotny) Date: Mon, 16 Jun 2014 08:14:09 -0700 Subject: nginx + In-Reply-To: <1402902236.3413.124.camel@steve-new> References: <1402902236.3413.124.camel@steve-new> Message-ID: Hello Steve! From:?Steve Holdoway steve at greengecko.co.nz Sorry to ask this, but does using this *require* a support contract? I'm interested in the streaming stuff, but it's not worth $1300/yr to me... At this point we only offer a combined license and support model at the $1350/yr price point. ?We do evaluate our product offerings and user feedback regularly; so, thanks for reaching out. ?Since this is the FOSS mailing list, I?ll leave the topic here. ?If you?d like more information please reach out to me directly. Sarah --? sarah.novotny -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Jun 16 16:06:04 2014 From: nginx-forum at nginx.us (Gona) Date: Mon, 16 Jun 2014 12:06:04 -0400 Subject: rewrite url in upstream block Message-ID: <12bea14e5cb4f2e1bf76d14c4bbba2f1.NginxMailingListEnglish@forum.nginx.org> Hi, I am using a query parameter in upstream module to serve request based on consistent hashing. This query parameter is introduced in the request handler module and not originally coming from the downstream. I would like to remove this parameter once the job is done before sending it to an upstream server but I couldn't see a place where to do this. Rewrite rules are not allowed in upstream block. Is there a better way of doing this? 
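A minimal sketch of one possible direction, for illustration only (the variable name and addresses are placeholders, not something from this post): the "hash" directive announced for nginx 1.7.2 later in this digest accepts an arbitrary variable, so the hash key can be kept out of the query string entirely and nothing needs to be stripped before the request is proxied.

upstream backend {
    # consistent hashing on a variable instead of a query argument
    hash $hash_key consistent;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    location / {
        # $hash_key is set elsewhere (for example by the handler module
        # or a map block), so $args reaches the upstream server unchanged
        proxy_pass http://backend;
    }
}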
Thanks, Gona Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250892,250892#msg-250892 From shahzaib.cb at gmail.com Mon Jun 16 17:18:57 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 16 Jun 2014 22:18:57 +0500 Subject: Caching servers in Local ISPs !! In-Reply-To: References: Message-ID: Hello itpp,i have been abled to use ngx_http_geo_module. Now the request coming from local ISP will first go to the main server (US) and then main server will check if the ip is 1.2.3.4 so it'll direct the request to the local caching server and than caching server will check if the file is cached or it should again get the file from main server and cache it locally. When i tested it locally, it worked fine but the file URL in firebug is coming from MAIN server when it should have come from the local caching server. I can also see the caching directory size increases when the matching client via geo module is directed to the local caching server but the URL remains the same in firebug. US config :- geo $TW { default 0; 192.168.1.0/24 1; } server { listen 80; server_name 002.files.com; # limit_rate 600k; location / { root /var/www/html/files; index index.html index.htm index.php; # autoindex on; } location ~ \.(mp4|jpeg|jpg)$ { mp4; root /var/www/html/files; if ($TW) { proxy_pass http://192.168.22.32:80; } expires 7d; valid_referers none blocked domain.com *.domain.com blog.domain.com *.facebook.com *.twitter.com *.files.com *.pump.net domain.tv *.domain.tv domainmedia.tv www.domainmedia.tv embed.domainmedia.tv; if ($invalid_referer) { return 403; } } } Edge config :- proxy_ignore_headers "Set-Cookie"; proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=static:100m loader_threshold=300 loader_files=10 inactive=1d max_size=300000m; proxy_temp_path /data/nginx/tmp 1 2; add_header X-Cache-Status $upstream_cache_status; add_header Accept-Ranges bytes; max_ranges 512; server { listen 80; server_name 192.168.22.32; root /var/www/html/files; location ~ \.(mp4|jpeg|jpg)$ { root /var/www/html/files; mp4; try_files $uri @getfrom_origin; } location @getfrom_origin { proxy_pass http://002.files.com:80; proxy_cache_valid 200 302 60m; proxy_cache_valid any 1m; proxy_cache static; proxy_cache_min_uses 1; } Maybe i need to add some variable to get original server ip ? On Fri, Jun 6, 2014 at 8:56 PM, shahzaib shahzaib wrote: > Thanks a lot itpp. :) I'll look into it and get back to you. > > Thanks again for quick solution :) > > > On Fri, Jun 6, 2014 at 8:26 PM, itpp2012 wrote: > >> shahzaib1232 Wrote: >> ------------------------------------------------------- >> > @itpp I am currenlty proceeding with proxy_cache method just because i >> > had >> > to done this in emergency mode due to boss pressure :-|. I have a >> > quick >> > question, can i make nginx to cache files for specific clients ? >> > >> > Like, if our caching servers are deployed by only single ISP named >> > "ptcl". >> > So if ip from ptcl client is browsing video, only his requested file >> > should >> > be cached not for any other client, does nginx support that ?? 
>> >> You could do this based on some IP ranges or via >> https://github.com/flant/nginx-http-rdns >> >> See >> >> http://serverfault.com/questions/380642/nginx-how-to-redirect-users-with-certain-ip-to-special-page >> and >> >> http://www.cyberciti.biz/faq/nginx-redirect-backend-traffic-based-upon-client-ip-address/ >> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,249997,250707#msg-250707 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparvu at systemdatarecorder.org Mon Jun 16 17:23:24 2014 From: sparvu at systemdatarecorder.org (Stefan Parvu) Date: Mon, 16 Jun 2014 20:23:24 +0300 Subject: Really high disk i/o !! In-Reply-To: <539EFB78.4080500@kearsley.me> References: <539EFB78.4080500@kearsley.me> Message-ID: <20140616202324.eca0e327c02de39fdbd04975@systemdatarecorder.org> > If they are wrapped up in iowait then you need a faster seeking disk > (e.g. ssd because sas won't give you a significant amount extra) or need to tune something. You don't necessarily need to change always the hdw if you have high iowaits. There are many variables around. -- Stefan Parvu From nginx-forum at nginx.us Mon Jun 16 17:41:28 2014 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 16 Jun 2014 13:41:28 -0400 Subject: Caching servers in Local ISPs !! In-Reply-To: References: Message-ID: <8db949929174ee6dadbb35b107d2ec75.NginxMailingListEnglish@forum.nginx.org> shahzaib1232 Wrote: ------------------------------------------------------- > Maybe i need to add some variable to get original server ip ? https://www.google.nl/#q=nginx+geo+remote+ip+address http://www.cyberciti.biz/faq/nginx-redirect-backend-traffic-based-upon-client-ip-address/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249997,250895#msg-250895 From nginx-forum at nginx.us Mon Jun 16 21:05:03 2014 From: nginx-forum at nginx.us (jakubp) Date: Mon, 16 Jun 2014 17:05:03 -0400 Subject: Problem with big files Message-ID: Hi Recently I hit quite big problem with huge files. Nginx is a cache fronting an origin which serves huge files (several GB). Clients use mostly range requests (often to get parts towards the end of the file) and I use a patch Maxim provided some time ago allowing range requests to receive HTTP 206 if a resource is not in cache but it's determined to be cacheable... When a file is not in cache and I see a flurry of requests for the same file I see that after proxy_cache_lock_timeout - at that time the download didn't reach the first requested byte of a lot of requests - nginx establishes a new connection to upstream for each client and initiates another download of the same file. I understand why this happens and that it's by design but... That kills the server. Multiple writes to temp directory basically kill the disk performance (which in turn blocks nginx worker processes). Is there anything that can be done to help that? Keeping in mind that I can't afford serving HTTP 200 to a range request and also I'd like to avoid clients waiting for the first requested byte forever... Thanks in advance! 
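For reference, these are the directives under discussion, shown as a minimal sketch rather than a fix (the zone name, upstream name, paths and values are placeholders):

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=big:100m
                 max_size=500g inactive=7d;

server {
    location / {
        proxy_cache  big;
        proxy_pass   http://origin;

        # only one request per cache key populates the cache ...
        proxy_cache_lock on;

        # ... and the other requests for the same key wait at most this
        # long before each goes to the origin on its own, which is the
        # stampede described above
        proxy_cache_lock_timeout 5s;

        # when an expired copy already exists, serve it while a single
        # request refreshes the item instead of sending everyone upstream
        proxy_cache_use_stale updating error timeout;
    }
}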
Regards, Kuba Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250899,250899#msg-250899 From jdorfman at netdna.com Mon Jun 16 21:12:13 2014 From: jdorfman at netdna.com (Justin Dorfman) Date: Mon, 16 Jun 2014 14:12:13 -0700 Subject: Problem with big files In-Reply-To: References: Message-ID: > > I use a patch > Maxim provided some time ago allowing range requests to receive HTTP 206 if > a resource is not in cache but it's determined to be cacheable... Can you please link to this patch? Regards, Justin Dorfman Director of Developer Relations MaxCDN On Mon, Jun 16, 2014 at 2:05 PM, jakubp wrote: > Hi > > Recently I hit quite big problem with huge files. Nginx is a cache fronting > an origin which serves huge files (several GB). Clients use mostly range > requests (often to get parts towards the end of the file) and I use a patch > Maxim provided some time ago allowing range requests to receive HTTP 206 if > a resource is not in cache but it's determined to be cacheable... > > When a file is not in cache and I see a flurry of requests for the same > file > I see that after proxy_cache_lock_timeout - at that time the download > didn't > reach the first requested byte of a lot of requests - nginx establishes a > new connection to upstream for each client and initiates another download > of > the same file. I understand why this happens and that it's by design but... > That kills the server. Multiple writes to temp directory basically kill the > disk performance (which in turn blocks nginx worker processes). > > Is there anything that can be done to help that? Keeping in mind that I > can't afford serving HTTP 200 to a range request and also I'd like to avoid > clients waiting for the first requested byte forever... > > Thanks in advance! > > Regards, > Kuba > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,250899,250899#msg-250899 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jun 17 00:31:43 2014 From: nginx-forum at nginx.us (Yumi) Date: Mon, 16 Jun 2014 20:31:43 -0400 Subject: Possible to have a limit_req "nodelay burst" option? In-Reply-To: <75889ab591fdc9f01bb9d5f8c49ff467.NginxMailingListEnglish@forum.nginx.org> References: <75889ab591fdc9f01bb9d5f8c49ff467.NginxMailingListEnglish@forum.nginx.org> Message-ID: +1 to the idea. Maybe something like: limit_req one burst=10 nodelay=5; # first 5 'bursts' don't have a delay, the next 5 do I haven't tried, but I suspect this doesn't do the desired thing: limit_req one burst=10; limit_req one burst=5 nodelay; (I'm guessing that the first directive above essentially overrides the second for the first 5, then the second directive overrides the first after that) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238389,250902#msg-250902 From nginx-forum at nginx.us Tue Jun 17 01:43:33 2014 From: nginx-forum at nginx.us (vickyma) Date: Mon, 16 Jun 2014 21:43:33 -0400 Subject: how to set timer? In-Reply-To: References: <0e96d43ade091f18dbb98f62f9445495.NginxMailingList@forum.nginx.org> Message-ID: how works? i lost. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,4265,250903#msg-250903 From nginx-forum at nginx.us Tue Jun 17 03:24:39 2014 From: nginx-forum at nginx.us (roman_mir) Date: Mon, 16 Jun 2014 23:24:39 -0400 Subject: upstream on OpenBSD not executing requests Message-ID: Hello everybody! 
I am a new and excited nginx user and I just had to hit a problem complex enough for me to post a message here hoping to get some help. OS: OpenBSD 5.5 amd64 nginx -v: nginx version: nginx/1.4.7 nginx.conf: user www; worker_processes 10; error_log /var/log/nginx/error.log error; worker_rlimit_nofile 1024; events { worker_connections 800; } http { include mime.types; default_type application/octet-stream; index index.jsp; keepalive_timeout 4; upstream shipmaticacluster { server 10.0.0.10:8080; server 10.0.0.11:8080; } server { server_tokens off; access_log /var/log/nginx/proxy.log; location / { proxy_pass http://shipmaticacluster; } } } Here is the problem: if the following is used: proxy_pass http://10.1.1.10:8080; or this is used: proxy_pass http://10.1.1.11:8080; then the requests are executed and the proxy log has this in it: 192.168.0.13 - - [16/Jun/2014:21:22:56 -0400] "GET /Shipmatica/ HTTP/1.1" 200 11118 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0" and that is great! However when I use upstream cluster settings: proxy_pass http://shipmaticacluster; then the request executes a long time until it expires or until I hit escape in the browser and then these lines are printed into the proxy log: 192.168.0.13 - - [16/Jun/2014:23:03:26 -0400] "GET /Shipmatica HTTP/1.1" 499 0 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0" and info log: 2014/06/16 23:03:26 [info] 29349#0: *1 kevent() reported that client prematurely closed connection, so upstream connection is closed too while connecting to upstream, client: 192.168.0.13, server: , request: "GET /Shipmatica HTTP/1.1", upstream: "http://10.0.0.10:8080/Shipmatica", host: "192.168.0.28" I have gone through about 5 or 6 hours of internet searches and experiments by now, looked at the system log files and payed attention to pflog, no results anywhere, nothing is found in the OS log files, pf doesn't block any traffic. I switch back to the specific IP address in the proxy_pass and the requests flow through just fine. This is as far as I can go without some help, I hope somebody has insights on this issue. Thank you! Roman Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250904,250904#msg-250904 From nginx-forum at nginx.us Tue Jun 17 06:34:37 2014 From: nginx-forum at nginx.us (m1nu2) Date: Tue, 17 Jun 2014 02:34:37 -0400 Subject: Alias or rewrite to get Horde rpc.php to work Message-ID: Hello all, I'm trying to setup a nginx server on a RaspberryPi to host a Horde Groupware. The Webinterface is working so far. But I have problems setting up the Microsoft-Server-ActiveSync URL to sync my Smartphone and Tablet. I already asked in the IRC channel and posted a question at Serverfault but did not get an answer yet. I also googled a lot, tried different solutions, but non of these worked. So I would be really happy if someone could help me with that. Here are more details (mainly taken from the Serverfault question located here: http://serverfault.com/questions/604866/nginx-alias-or-rewrite-for-horde-groupware-activesync-url-does-not-process-the-r ): The configuration should allow to access the ActiveSync part via the URL /horde/Microsoft-Server-ActiveSync. 
The horde webinterface is already accessible via /horde My configuration looks like this: default-ssl.conf: server { listen 443 ssl; ssl on; ssl_certificate /opt/nginx/conf/certs/server.crt; ssl_certificate_key /opt/nginx/conf/certs/server.key; server_name example.com; index index.html index.php; root /var/www; include sites-available/horde.conf; } horde.conf: location /horde { rewrite_log on; rewrite ^/horde/Microsoft-Server-ActiveSync(.*)$ /horde/rpc.php$1 last; try_files $uri $uri/ /rampage.php?$args; location ~ \.php$ { try_files $uri =404; include sites-available/horde.fcgi-php.conf; } }. horde.fcgi-php.conf: include fastcgi_params; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; So why is the rpc.php not correctly working? If I understand correctly it tries to establish a basic authentication process. Could this be a problem? Best regards, m1nu2 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250907,250907#msg-250907 From nginx-forum at nginx.us Tue Jun 17 06:49:54 2014 From: nginx-forum at nginx.us (gwilym) Date: Tue, 17 Jun 2014 02:49:54 -0400 Subject: URI escaping for X-Accel-Redirect and proxy_pass in 1.4.7 and 1.6.0 Message-ID: <3a10d1555a139f2d9c98655c8151f569.NginxMailingListEnglish@forum.nginx.org> We are updating Nginx from 1.4.7 and 1.6.0 and noticed an error in our app likely related to the 1.5.9 change: now nginx expects escaped URIs in "X-Accel-Redirect" headers. We have an internal location for proxying content from a backend HTTP system (Swift, actually). The location block looks like this: location ~ ^/protected/swift/(http|https)/([A-Za-z0-9\.\-]+)/([1-9][0-9]+)/([A-Za-z0-9_]+)/(.*) { internal; proxy_set_header X-Auth-Token $4; proxy_pass $1://$2:$3/$5; proxy_hide_header Content-Type; } Nginx 1.4.7 functions as expected when sending X-Accel-Redirect: /protected/swift/http/HOST/PORT/TOKEN/v1/AUTH_test/content/image%20with%20spaces.jpg Under 1.6.0 this fails. It produces a GET request to the backend decoded into spaces and so becomes an invalid HTTP request. The workaround is to _double_ encode so as to send back "image%2520with%2520spaces.jpg" to Nginx but we can't roll this out until Nginx 1.6 because it breaks 1.4... but we can't roll out 1.6 until the code is there. Is there a solution that works for both? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250909,250909#msg-250909 From vbart at nginx.com Tue Jun 17 07:04:06 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 17 Jun 2014 11:04:06 +0400 Subject: upstream on OpenBSD not executing requests In-Reply-To: References: Message-ID: <3319723.UTKlYb2VOz@vbart-workstation> On Monday 16 June 2014 23:24:39 roman_mir wrote: > Hello everybody! > I am a new and excited nginx user and I just had to hit a problem complex > enough for me to post a message here hoping to get some help. [..] > upstream shipmaticacluster { > server 10.0.0.10:8080; > server 10.0.0.11:8080; > } [..] > Here is the problem: if the following is used: > proxy_pass http://10.1.1.10:8080; > or this is used: > proxy_pass http://10.1.1.11:8080; > then the requests are executed and the proxy log has this in it: [..] Have you noticed that 10.1.1.1[01] and 10.0.0.1[01] (in your upstream block) are different IPs? Is that intentionally? wbr, Valentin V. 
Bartenev From mdounin at mdounin.ru Tue Jun 17 13:35:30 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Jun 2014 17:35:30 +0400 Subject: nginx-1.7.2 Message-ID: <20140617133530.GM1849@mdounin.ru> Changes with nginx 1.7.2 17 Jun 2014 *) Feature: the "hash" directive inside the "upstream" block. *) Feature: defragmentation of free shared memory blocks. Thanks to Wandenberg Peixoto and Yichun Zhang. *) Bugfix: a segmentation fault might occur in a worker process if the default value of the "access_log" directive was used; the bug had appeared in 1.7.0. Thanks to Piotr Sikora. *) Bugfix: trailing slash was mistakenly removed from the last parameter of the "try_files" directive. *) Bugfix: nginx could not be built on OS X in some cases. *) Bugfix: in the ngx_http_spdy_module. -- Maxim Dounin http://nginx.org/en/donation.html From shahzaib.cb at gmail.com Tue Jun 17 13:59:41 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 17 Jun 2014 18:59:41 +0500 Subject: Caching servers in Local ISPs !! In-Reply-To: <8db949929174ee6dadbb35b107d2ec75.NginxMailingListEnglish@forum.nginx.org> References: <8db949929174ee6dadbb35b107d2ec75.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks itpp but the issue is still same and still the ip is from the main server in inspect element as well as in local-caching nginx access logs, i am getting the client ip as main-server's ip instead of original client ip and i am sure that i am doing something wrong. Well i have another question now, as our test with CIDR notation worked well with nginx geo module and nginx decided to route specific ips to specific server (caching server). So, the specific subnet coming from our ISP to the main server will be routed to the local caching server and our ISP will have to tell us each time to add specific ip prefix in the nginx config to route them towards their caching server. So the problem is, whenever few hundreds ip prefixes are added to their network, they'll have to provide us those prefixes in order to enable caching for newly added ips. We just had a chat with our local ISP and he said that you should use some services like BGP to automatically detect if any new ip prefixes are added to our network and we'll not have to tell you each time we add some ip prefixes to our network. Could you guide me how could i make this work in our environment. The basic architecture of our network is :- Two static servers (serving mp4,jpg). One server located in US and one server located in Local ISP. I hope you can put me on some track as you did in the past and provide me some kick start to work with BGP. On Mon, Jun 16, 2014 at 10:41 PM, itpp2012 wrote: > shahzaib1232 Wrote: > ------------------------------------------------------- > > > Maybe i need to add some variable to get original server ip ? > > https://www.google.nl/#q=nginx+geo+remote+ip+address > > > http://www.cyberciti.biz/faq/nginx-redirect-backend-traffic-based-upon-client-ip-address/ > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249997,250895#msg-250895 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kworthington at gmail.com Tue Jun 17 14:05:03 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 17 Jun 2014 10:05:03 -0400 Subject: nginx-1.7.2 In-Reply-To: <20140617133530.GM1849@mdounin.ru> References: <20140617133530.GM1849@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.7.2 for Windows http://goo.gl/IbnbJ6 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Jun 17, 2014 at 9:35 AM, Maxim Dounin wrote: > Changes with nginx 1.7.2 17 Jun > 2014 > > *) Feature: the "hash" directive inside the "upstream" block. > > *) Feature: defragmentation of free shared memory blocks. > Thanks to Wandenberg Peixoto and Yichun Zhang. > > *) Bugfix: a segmentation fault might occur in a worker process if the > default value of the "access_log" directive was used; the bug had > appeared in 1.7.0. > Thanks to Piotr Sikora. > > *) Bugfix: trailing slash was mistakenly removed from the last > parameter > of the "try_files" directive. > > *) Bugfix: nginx could not be built on OS X in some cases. > > *) Bugfix: in the ngx_http_spdy_module. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From efeldhusen.lists at gmail.com Tue Jun 17 14:13:37 2014 From: efeldhusen.lists at gmail.com (Eric Feldhusen) Date: Tue, 17 Jun 2014 09:13:37 -0500 Subject: Send all requests to two separate upstream servers? Message-ID: I have a need to adjust a nginx install doing reverse proxy to a single server now to adjust it to send all requests it receives to two different upstream servers. I was thinking I could do it with the configuration below, but I wasn't sure if that would work and I'd have to use re-write rules instead. upstream original_upstream { server } upstream new_upstream { server } server { location / { proxy_pass http://original_upstream; } location / { proxy_pass http://new_upstream; } Any suggestions? Eric Feldhusen -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard at kearsley.me Tue Jun 17 15:08:17 2014 From: richard at kearsley.me (Richard Kearsley) Date: Tue, 17 Jun 2014 16:08:17 +0100 Subject: Send all requests to two separate upstream servers? In-Reply-To: References: Message-ID: <53A059E1.3050406@kearsley.me> On 17/06/14 15:13, Eric Feldhusen wrote: > I have a need to adjust a nginx install doing reverse proxy to a > single server now to adjust it to send all requests it receives to two > different upstream servers. do you mean a) send each request to both? 
b) send each request to one or the other (like load balancing) a) is not possible, simply because of the basics of proxying and http (there would be 2 http responses mixed into 1 connection) b) can be done with 1 / location but adding another address to the upstream block: |upstream backend { server ||| server | }| |server { location / { proxy_passhttp://backend ; } }| -------------- next part -------------- An HTML attachment was scrubbed... URL: From efeldhusen.lists at gmail.com Tue Jun 17 15:12:43 2014 From: efeldhusen.lists at gmail.com (Eric Feldhusen) Date: Tue, 17 Jun 2014 10:12:43 -0500 Subject: Send all requests to two separate upstream servers? In-Reply-To: <53A059E1.3050406@kearsley.me> References: <53A059E1.3050406@kearsley.me> Message-ID: Option A and that's what I figured as well. Eric Feldusen On Tue, Jun 17, 2014 at 10:08 AM, Richard Kearsley wrote: > On 17/06/14 15:13, Eric Feldhusen wrote: > > I have a need to adjust a nginx install doing reverse proxy to a single > server now to adjust it to send all requests it receives to two different > upstream servers. > > do you mean > a) send each request to both? > b) send each request to one or the other (like load balancing) > > a) is not possible, simply because of the basics of proxying and http > (there would be 2 http responses mixed into 1 connection) > b) can be done with 1 / location but adding another address to the > upstream block: > > upstream backend { > server server > } > > server { > location / { > proxy_pass http://backend ; > } > } > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard at kearsley.me Tue Jun 17 15:24:28 2014 From: richard at kearsley.me (Richard Kearsley) Date: Tue, 17 Jun 2014 16:24:28 +0100 Subject: Send all requests to two separate upstream servers? In-Reply-To: References: <53A059E1.3050406@kearsley.me> Message-ID: <53A05DAC.6070601@kearsley.me> On 17/06/14 16:12, Eric Feldhusen wrote: > Option A and that's what I figured as well. > If you don't care about sending the upstream response back to the client, or want to pick one of the two responses to send back then you can use the nginx lua module to perform some obscure functionality... it's quite a bit more advanced but something you might want to look into http://wiki.nginx.org/HttpLuaModule http://wiki.nginx.org/HttpLuaModule#ngx.location.capture_multi however the response would be fully buffered before it could be sent to client, so may be quite a delay and a memory hog if it's something large From r at roze.lv Tue Jun 17 15:35:39 2014 From: r at roze.lv (Reinis Rozitis) Date: Tue, 17 Jun 2014 18:35:39 +0300 Subject: Send all requests to two separate upstream servers? In-Reply-To: References: <53A059E1.3050406@kearsley.me> Message-ID: <849C8D146AB04ABF9F2821F7B810FD93@MasterPC> > Option A and that's what I figured as well. Depends on what you actually want to achieve by doing those 2 requests ? eg is it to prewarm 2 backend cache servers or something? But one way to do this would be for example to use nginx Lua module https://github.com/openresty/lua-nginx-module#ngxlocationcapture_multi / ngx.location.capture_multi , content_by_lua etc and then from one response discard the body ( ngx.req.discard_body ) or just print the first. .. 
theoretically maybe also the Echo module https://github.com/openresty/echo-nginx-module#echo_subrequest but I'm not exactly sure how the "combined" response would look like and if the duplicate body could be avoided just by sending HEAD request to the second backend. You can test it yourself or try to ask agentzh. rr From nginx-forum at nginx.us Tue Jun 17 17:11:43 2014 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 17 Jun 2014 13:11:43 -0400 Subject: Caching servers in Local ISPs !! In-Reply-To: References: Message-ID: shahzaib1232 Wrote: ------------------------------------------------------- > Thanks itpp but the issue is still same and still the ip is from the > main > server in inspect element as well as in local-caching nginx access > logs, i > am getting the client ip as main-server's ip instead of original > client ip It could be the case the traffic you are getting is from the ISP proxy which could mean that any traffic is from that ISP only, which makes it easier to determine which to serve from local. Ask the ISP from where the traffic is coming from, if it is a proxy then proxy=local. > So, the specific subnet coming from our ISP to the main server will be > routed to the local caching server and our ISP will have to tell us > each > time to add specific ip prefix in the nginx config to route them > towards > their caching server. So the problem is, whenever few hundreds ip > prefixes > are added to their network, they'll have to provide us those prefixes > in > order to enable caching for newly added ips. See above, if this is not the case look into https://github.com/flant/nginx-http-rdns if a client has something like 'p1234.adsl-pool2-auckland.au' you can redirect based on a part of the client dns name, your ISP can tell you which DHCP named pools there are. If you can't get the client IP of hostname you gonna need to do some wiresharking to see where the info is, if it is anywhere. If the ISP is using a proxy to pass clients to your server ask them to add a header with the client ip/hostname. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249997,250950#msg-250950 From shahzaib.cb at gmail.com Tue Jun 17 18:08:40 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 17 Jun 2014 23:08:40 +0500 Subject: Caching servers in Local ISPs !! In-Reply-To: References: Message-ID: i don't think the solution rdns will be suitable for us. I have checked the zebra software to make linux a BGP router http://www.techrepublic.com/article/use-zebra-to-set-up-a-linux-bgp-ospf-router/ Could you tell me if BGP is capable of doing what we want? Because our local ISP supports this method and i have no idea how to implement it. Functionality we need, is to auto detect the new ip prefixes from local ISP so they'll not have to provide us thousands of prefixes on daily basis. On Tue, Jun 17, 2014 at 10:11 PM, itpp2012 wrote: > shahzaib1232 Wrote: > ------------------------------------------------------- > > Thanks itpp but the issue is still same and still the ip is from the > > main > > server in inspect element as well as in local-caching nginx access > > logs, i > > am getting the client ip as main-server's ip instead of original > > client ip > > It could be the case the traffic you are getting is from the ISP proxy > which > could mean that any traffic is from that ISP only, which makes it easier to > determine which to serve from local. Ask the ISP from where the traffic is > coming from, if it is a proxy then proxy=local. 
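A minimal sketch of that idea, with 203.0.113.10 standing in for the ISP proxy's address (a placeholder, not a value from this thread): if all of that ISP's users reach the origin through one proxy, matching the proxy address alone is enough to decide which requests to hand to the local caching box from the configuration above.

geo $from_local_isp {
    default       0;
    203.0.113.10  1;    # the ISP proxy's egress address
}

server {
    listen 80;
    server_name 002.files.com;

    location ~ \.(mp4|jpeg|jpg)$ {
        root /var/www/html/files;
        if ($from_local_isp) {
            # send the request to the local caching server
            proxy_pass http://192.168.22.32:80;
        }
    }
}

If the ISP proxy can also be asked to add an X-Forwarded-For header, the realip module (if built in: set_real_ip_from 203.0.113.10; real_ip_header X-Forwarded-For;) would additionally restore the real client address in the logs.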
> > > So, the specific subnet coming from our ISP to the main server will be > > routed to the local caching server and our ISP will have to tell us > > each > > time to add specific ip prefix in the nginx config to route them > > towards > > their caching server. So the problem is, whenever few hundreds ip > > prefixes > > are added to their network, they'll have to provide us those prefixes > > in > > order to enable caching for newly added ips. > > See above, if this is not the case look into > https://github.com/flant/nginx-http-rdns if a client has something like > 'p1234.adsl-pool2-auckland.au' you can redirect based on a part of the > client dns name, your ISP can tell you which DHCP named pools there are. > > If you can't get the client IP of hostname you gonna need to do some > wiresharking to see where the info is, if it is anywhere. > If the ISP is using a proxy to pass clients to your server ask them to add > a > header with the client ip/hostname. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249997,250950#msg-250950 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jun 17 18:32:59 2014 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 17 Jun 2014 14:32:59 -0400 Subject: Caching servers in Local ISPs !! In-Reply-To: References: Message-ID: <7beeb51c70b877b4281abdfe9fd2d5ae.NginxMailingListEnglish@forum.nginx.org> shahzaib1232 Wrote: ------------------------------------------------------- > i don't think the solution rdns will be suitable for us. I have > checked the > zebra software to make linux a BGP router > http://www.techrepublic.com/article/use-zebra-to-set-up-a-linux-bgp-os > pf-router/ > > Could you tell me if BGP is capable of doing what we want? Because our > local ISP supports this method and i have no idea how to implement it. > > Functionality we need, is to auto detect the new ip prefixes from > local ISP > so they'll not have to provide us thousands of prefixes on daily > basis. Why not use a DNS for the clients? your making things too complicated. Client-1-request at ISP-1 -> edge1.streaming.au ISP-1-DNS -> 12.34.56.78 (which is your edge box) Client-1-request at ISP-2 -> edge1.streaming.au ISP-2-DNS -> 99.88.77.66 (which is your box in the US) Anyone from ISP-1 will always be directed to the edge systems, anyone else to where-ever you point the dns. ISP's also use regional DNS servers which allows you more edge systems closer to the users. Anyway, BGP see http://bird.network.cz/ (netflix solution) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249997,250953#msg-250953 From shahzaib.cb at gmail.com Tue Jun 17 18:45:50 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 17 Jun 2014 23:45:50 +0500 Subject: Caching servers in Local ISPs !! In-Reply-To: <7beeb51c70b877b4281abdfe9fd2d5ae.NginxMailingListEnglish@forum.nginx.org> References: <7beeb51c70b877b4281abdfe9fd2d5ae.NginxMailingListEnglish@forum.nginx.org> Message-ID: >>Why not use a DNS for the clients? How i would be sure that request coming from the ISP-1 on the DNS server and then point it to the local caching server? I mean i can use View directive of BIND to route specific ips (local ISP clients) to the local caching server and what if tomorrow the ISP has added more clients to their network ? I'll also have those new ip prefixes to DNS server. 
Please correct me if i am wrong. On Tue, Jun 17, 2014 at 11:32 PM, itpp2012 wrote: > shahzaib1232 Wrote: > ------------------------------------------------------- > > i don't think the solution rdns will be suitable for us. I have > > checked the > > zebra software to make linux a BGP router > > http://www.techrepublic.com/article/use-zebra-to-set-up-a-linux-bgp-os > > pf-router/ > > > > Could you tell me if BGP is capable of doing what we want? Because our > > local ISP supports this method and i have no idea how to implement it. > > > > Functionality we need, is to auto detect the new ip prefixes from > > local ISP > > so they'll not have to provide us thousands of prefixes on daily > > basis. > > Why not use a DNS for the clients? your making things too complicated. > > Client-1-request at ISP-1 -> edge1.streaming.au ISP-1-DNS -> 12.34.56.78 > (which > is your edge box) > Client-1-request at ISP-2 -> edge1.streaming.au ISP-2-DNS -> 99.88.77.66 > (which > is your box in the US) > > Anyone from ISP-1 will always be directed to the edge systems, anyone else > to where-ever you point the dns. > > ISP's also use regional DNS servers which allows you more edge systems > closer to the users. > > Anyway, BGP see http://bird.network.cz/ (netflix solution) > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249997,250953#msg-250953 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Tue Jun 17 18:46:50 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 17 Jun 2014 23:46:50 +0500 Subject: Caching servers in Local ISPs !! In-Reply-To: References: <7beeb51c70b877b4281abdfe9fd2d5ae.NginxMailingListEnglish@forum.nginx.org> Message-ID: >>Why not use a DNS for the clients? How i would be sure that request coming from the ISP-1 on the DNS server ? and then point it to the local caching server? I mean i can use View directive of BIND to route specific ips (local ISP clients) to the local caching server and what if tomorrow the ISP has added more clients to their network ? I'll also have to add those new ip prefixes to DNS server. On Tue, Jun 17, 2014 at 11:45 PM, shahzaib shahzaib wrote: > >>Why not use a DNS for the clients? > How i would be sure that request coming from the ISP-1 on the DNS server > and then point it to the local caching server? I mean i can use View > directive of BIND to route specific ips (local ISP clients) to the local > caching server and what if tomorrow the ISP has added more clients to their > network ? I'll also have those new ip prefixes to DNS server. > > Please correct me if i am wrong. > > > On Tue, Jun 17, 2014 at 11:32 PM, itpp2012 wrote: > >> shahzaib1232 Wrote: >> ------------------------------------------------------- >> > i don't think the solution rdns will be suitable for us. I have >> > checked the >> > zebra software to make linux a BGP router >> > http://www.techrepublic.com/article/use-zebra-to-set-up-a-linux-bgp-os >> > pf-router/ >> > >> > Could you tell me if BGP is capable of doing what we want? Because our >> > local ISP supports this method and i have no idea how to implement it. >> > >> > Functionality we need, is to auto detect the new ip prefixes from >> > local ISP >> > so they'll not have to provide us thousands of prefixes on daily >> > basis. >> >> Why not use a DNS for the clients? your making things too complicated. 
>> >> Client-1-request at ISP-1 -> edge1.streaming.au ISP-1-DNS -> 12.34.56.78 >> (which >> is your edge box) >> Client-1-request at ISP-2 -> edge1.streaming.au ISP-2-DNS -> 99.88.77.66 >> (which >> is your box in the US) >> >> Anyone from ISP-1 will always be directed to the edge systems, anyone else >> to where-ever you point the dns. >> >> ISP's also use regional DNS servers which allows you more edge systems >> closer to the users. >> >> Anyway, BGP see http://bird.network.cz/ (netflix solution) >> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,249997,250953#msg-250953 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jun 17 19:50:22 2014 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 17 Jun 2014 15:50:22 -0400 Subject: Caching servers in Local ISPs !! In-Reply-To: References: Message-ID: <0afb20e02fe205634feb21cff91be8fb.NginxMailingListEnglish@forum.nginx.org> You don't need to do anything with a dns that is only local to the clients served by the ISP. Suppose I am in Africa; Question to my ISP: I'd like to go to new-york ISP: new-york is located in south-Africa Suppose I am in the US; Question to my ISP: I'd like to go to new-york ISP: new-york is located in the US The DNS is just a pointer, where ever you have an edge server make the dns name point to it, when not point the dns to origin. Every ISP client gets the DNS servers from their ISP, its really simple. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249997,250957#msg-250957 From efeldhusen.lists at gmail.com Tue Jun 17 22:09:05 2014 From: efeldhusen.lists at gmail.com (Eric Feldhusen) Date: Tue, 17 Jun 2014 17:09:05 -0500 Subject: Send all requests to two separate upstream servers? In-Reply-To: <849C8D146AB04ABF9F2821F7B810FD93@MasterPC> References: <53A059E1.3050406@kearsley.me> <849C8D146AB04ABF9F2821F7B810FD93@MasterPC> Message-ID: I'm looking for a way to mirror my production site traffic to a development environment, so that I have nearly identical traffic going to both to work through some optimization issues that are hard to do without the load, which is just incoming data. Eric On Tue, Jun 17, 2014 at 10:35 AM, Reinis Rozitis wrote: > Option A and that's what I figured as well. >> > > Depends on what you actually want to achieve by doing those 2 requests ? > eg is it to prewarm 2 backend cache servers or something? > > But one way to do this would be for example to use nginx Lua module > https://github.com/openresty/lua-nginx-module#ngxlocationcapture_multi / > ngx.location.capture_multi , content_by_lua etc and then from one response > discard the body ( ngx.req.discard_body ) or just print the first. > > .. theoretically maybe also the Echo module https://github.com/openresty/ > echo-nginx-module#echo_subrequest but I'm not exactly sure how the > "combined" response would look like and if the duplicate body could be > avoided just by sending HEAD request to the second backend. > > You can test it yourself or try to ask agentzh. > > > rr > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Tue Jun 17 22:11:03 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Tue, 17 Jun 2014 18:11:03 -0400 Subject: Nginx on Windows PHP fastcgi read timeouts Message-ID: <406c2b5faabe8f4af1340d74c0098acc.NginxMailingListEnglish@forum.nginx.org> So this is going to be a bit of a long post but if i have encounterd this issue no doubt someone else will. Basically i keep getting read time outs or my web pages take a very long time to load. My server is Windows 2008 R2 64bit. Everything runs under either System or Root 1GBPS Connection server is from OVH.co.uk | http://www.ovh.co.uk/dedicated_servers/infra/2014-EG-32.xml My web application is Joomla 2.5.x External components i use are com_hwdmediashare and com_kunena. And i use MySQLi instead of MySQL. My PHP build is PHP 5.4 (5.4.29) x86 NTS (32bit) No caching extensions enabled My MySQL server is 5.6.19 (64bit) MySQL tables i use are all InnoDB Sever is Nginx from http://nginx-win.ecsds.eu/ But i was able to replicate this error with the official builds too http://nginx.org/en/download.html What runs on the server ? Just Nginx port 80, PHP port 9000-9100, MySQL port 3306. How i know it is not a MySQL or PHP issue ? With mysql i enabled slow query logging and i have no queries that are longer than 2 seconds. With PHP i added into Joomla a time to load page feature it shows me the time to create page was always a second or less. "Time to create page: 0.109 seconds" Now very often pages take 30 seconds or more to load but yet if i am to access a page through nginx that is not php. (Html or any other static content) it loads instantly all the time. So how do i serve PHP traffic through Nginx ? I use FastCGI. (This is what i believe is the culprit.) Bellow i will link to my Configs and hope someone can help me understand why i am having such a headache and constant read timeouts and downtime of my site(s). Perhaps i have reached my limit with windows and should be using linux :( PHP.ini Config : http://pastebin.com/54A3PDwU MySQL Config : (This was generated by the MySQL installer and the MySQL server runs fine) http://pastebin.com/DKiWjNqh Nginx Config : http://pastebin.com/J3tqnGpJ Bat Files to run PHP under FastCGI in windows on Nginx (Thanks to itpp2012 for these | http://forum.nginx.org/read.php?2,242895,243597#msg-243597) multi_run.cmd : http://pastebin.com/RPikwgnp runcgi.cmd : http://pastebin.com/4GYWiEWS Bare in mind this is a Windows setup not Linux. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,250964#msg-250964 From lists-nginx at swsystem.co.uk Tue Jun 17 22:19:30 2014 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Tue, 17 Jun 2014 23:19:30 +0100 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: <406c2b5faabe8f4af1340d74c0098acc.NginxMailingListEnglish@forum.nginx.org> References: <406c2b5faabe8f4af1340d74c0098acc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <53A0BEF2.1060605@swsystem.co.uk> I'm not sure how you'd diagnose this on windows, I had a similar issue with php-fpm under linux as it was running out of "handlers". Could it be that all 20 of your fastcgi processes are in use and nginx is waiting for one to become available? As a side note, why windows? I'm curious why you'd be running this setup on windows when it would normally be seen running on linux. Steve. On 17/06/2014 23:11, c0nw0nk wrote: > So this is going to be a bit of a long post but if i have encounterd this > issue no doubt someone else will. 
> > Basically i keep getting read time outs or my web pages take a very long > time to load. > > My server is Windows 2008 R2 64bit. > Everything runs under either System or Root > 1GBPS Connection server is from OVH.co.uk | > http://www.ovh.co.uk/dedicated_servers/infra/2014-EG-32.xml > > My web application is Joomla 2.5.x > External components i use are com_hwdmediashare and com_kunena. > And i use MySQLi instead of MySQL. > > My PHP build is PHP 5.4 (5.4.29) x86 NTS (32bit) > No caching extensions enabled > > My MySQL server is 5.6.19 (64bit) > MySQL tables i use are all InnoDB > > Sever is Nginx from http://nginx-win.ecsds.eu/ But i was able to replicate > this error with the official builds too http://nginx.org/en/download.html > > What runs on the server ? > Just Nginx port 80, PHP port 9000-9100, MySQL port 3306. > > How i know it is not a MySQL or PHP issue ? > With mysql i enabled slow query logging and i have no queries that are > longer than 2 seconds. > With PHP i added into Joomla a time to load page feature it shows me the > time to create page was always a second or less. "Time to create page: 0.109 > seconds" > > Now very often pages take 30 seconds or more to load but yet if i am to > access a page through nginx that is not php. (Html or any other static > content) it loads instantly all the time. > > So how do i serve PHP traffic through Nginx ? > I use FastCGI. (This is what i believe is the culprit.) > > Bellow i will link to my Configs and hope someone can help me understand why > i am having such a headache and constant read timeouts and downtime of my > site(s). Perhaps i have reached my limit with windows and should be using > linux :( > > PHP.ini Config : > http://pastebin.com/54A3PDwU > > MySQL Config : (This was generated by the MySQL installer and the MySQL > server runs fine) > http://pastebin.com/DKiWjNqh > > Nginx Config : > http://pastebin.com/J3tqnGpJ > > Bat Files to run PHP under FastCGI in windows on Nginx (Thanks to itpp2012 > for these | http://forum.nginx.org/read.php?2,242895,243597#msg-243597) > multi_run.cmd : > http://pastebin.com/RPikwgnp > > runcgi.cmd : > http://pastebin.com/4GYWiEWS > > Bare in mind this is a Windows setup not Linux. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,250964#msg-250964 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From agentzh at gmail.com Tue Jun 17 22:27:28 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 17 Jun 2014 15:27:28 -0700 Subject: Send all requests to two separate upstream servers? In-Reply-To: References: <53A059E1.3050406@kearsley.me> <849C8D146AB04ABF9F2821F7B810FD93@MasterPC> Message-ID: Hi On Tue, Jun 17, 2014 at 3:09 PM, Eric Feldhusen wrote: > I'm looking for a way to mirror my production site traffic to a development > environment, so that I have nearly identical traffic going to both to work > through some optimization issues that are hard to do without the load, which > is just incoming data. 
> Sounds like a perfect use case for the tcpcopy tool: https://github.com/wangbin579/tcpcopy Best regards, -agentzh From nginx-forum at nginx.us Tue Jun 17 22:51:10 2014 From: nginx-forum at nginx.us (jakubp) Date: Tue, 17 Jun 2014 18:51:10 -0400 Subject: Problem with big files In-Reply-To: References: Message-ID: <5f882c1b478211e080d2a603f4931358.NginxMailingListEnglish@forum.nginx.org> Hi Justin Justin Dorfman Wrote: ------------------------------------------------------- > > > > I use a patch > > Maxim provided some time ago allowing range requests to receive HTTP > 206 if > > a resource is not in cache but it's determined to be cacheable... > > > Can you please link to this patch? > > http://mailman.nginx.org/pipermail/nginx/2012-April/033522.html Regards, Kuba > Regards, > > Justin Dorfman > > Director of Developer Relations > MaxCDN > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250899,250969#msg-250969 From nginx-forum at nginx.us Tue Jun 17 22:56:25 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Tue, 17 Jun 2014 18:56:25 -0400 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: <53A0BEF2.1060605@swsystem.co.uk> References: <53A0BEF2.1060605@swsystem.co.uk> Message-ID: <4d191badbc01e1c6062a8bd71f24e166.NginxMailingListEnglish@forum.nginx.org> Not a bad suggestion steve to test it i just changed my php mark to 100 processes will wait and see if i still get read time outs. And to answer your other question about why i do not use linux. I don't use linux because i am not very good with linux machines my understanding and trying to get to grips with setting one up and putty is very hard for me. And i suppose i have the kind of attitude of if linux can do it, Windows can do it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,250970#msg-250970 From nginx-forum at nginx.us Tue Jun 17 23:39:36 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Tue, 17 Jun 2014 19:39:36 -0400 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: <4d191badbc01e1c6062a8bd71f24e166.NginxMailingListEnglish@forum.nginx.org> References: <53A0BEF2.1060605@swsystem.co.uk> <4d191badbc01e1c6062a8bd71f24e166.NginxMailingListEnglish@forum.nginx.org> Message-ID: I am still having the same issue read time outs. If every request made to the server circles the upstream then it has to be the upstream that is the issue not php. PHP loads are fine no crashes no errors. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,250971#msg-250971 From lists-nginx at swsystem.co.uk Wed Jun 18 00:01:24 2014 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Wed, 18 Jun 2014 01:01:24 +0100 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: References: <53A0BEF2.1060605@swsystem.co.uk> <4d191badbc01e1c6062a8bd71f24e166.NginxMailingListEnglish@forum.nginx.org> Message-ID: <53A0D6D4.8060002@swsystem.co.uk> On 18/06/2014 00:39, c0nw0nk wrote: > I am still having the same issue read time outs. If every request made to > the server circles the upstream then it has to be the upstream that is the > issue not php. PHP loads are fine no crashes no errors. What's in the logs? Steve. 
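A short reference note on the logging question above: nginx records its FastCGI "upstream timed out" messages at the "error" severity, so an error_log limited to "crit" (which, as it turns out later in this thread, is the poster's setting) will show nothing, and the access log can record per-request upstream timings. A minimal sketch -- the file names and the "timing" format name are placeholders for illustration, not taken from the poster's pastebin configuration:

    # hypothetical logging setup for chasing slow FastCGI responses
    error_log  logs/error.log  error;    # "crit" suppresses the upstream timeout records

    log_format  timing  '$remote_addr "$request" status=$status '
                        'request_time=$request_time upstream_time=$upstream_response_time';
    access_log  logs/timing.log  timing;

On the slow hits, an upstream_time that tracks request_time up to ~30 seconds would point at the PHP side rather than at nginx itself.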
From nginx-forum at nginx.us Wed Jun 18 00:13:29 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Tue, 17 Jun 2014 20:13:29 -0400 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: <53A0D6D4.8060002@swsystem.co.uk> References: <53A0D6D4.8060002@swsystem.co.uk> Message-ID: <8c3fbb7c3c8c3fa69803b13620e3add8.NginxMailingListEnglish@forum.nginx.org> What logs you want me to paste PHP or Nginx ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,250973#msg-250973 From contact at jpluscplusm.com Wed Jun 18 00:14:13 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 18 Jun 2014 01:14:13 +0100 Subject: URI escaping for X-Accel-Redirect and proxy_pass in 1.4.7 and 1.6.0 In-Reply-To: <3a10d1555a139f2d9c98655c8151f569.NginxMailingListEnglish@forum.nginx.org> References: <3a10d1555a139f2d9c98655c8151f569.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 17 June 2014 07:49, gwilym wrote: > The workaround is to _double_ encode so as to send back > "image%2520with%2520spaces.jpg" to Nginx but we can't roll this out until > Nginx 1.6 because it breaks 1.4... but we can't roll out 1.6 until the code > is there. I don't have a nice fix for you I'm afraid! However, as a way to get out of your chicken-and-egg upgrade problem, could you pass a static header containing the nginx version to your backend, and get it to switch its X-Accel-Redirect response based on this value? J From steve at greengecko.co.nz Wed Jun 18 00:15:29 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 18 Jun 2014 12:15:29 +1200 Subject: lots of work in a location block... Message-ID: <1403050529.3413.295.camel@steve-new> Hi Folks, I'm trying to integrate a python backend into a pre-existing php website, and am having problems doing this as I need to rewrite the url at the same time... eg: this is what isn't working. location = /example { rewrite /example/(.*) /$1 break; root /www/example; include proxy_params; proxy_pass http://python; #break; } ( stripped to bare essentials ) So it's - strip off the /example prefix - set the new root ( the php site sets is outside any location block ) - pass stripped request to the python backend. I've tried every option I can think of, but they all seem to drop out of the location block and process the rewritten request again, which means that they're not using the static files in the locally defined root, and also aren't being handed to the python backend for processing. I've fudged it for the moment, but would really like to do it properly! Cheers, Steve. Oh, proxy_params contains proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_redirect off; proxy_buffering off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at nginx.us Wed Jun 18 01:59:08 2014 From: nginx-forum at nginx.us (TheBritishGeek) Date: Tue, 17 Jun 2014 21:59:08 -0400 Subject: Best method for adding GeoIP support Message-ID: We have just started to work with Nginx and have installed by adding the nginx repositry to our debian 7 installs. It works almost perfectly out of the box as such. However we need to add GeoIP support, so the question is what is the best method of doing this. I really don't want to compile our own install and break the simple apt-get upgrade path in the future if we have a choice. 
Any advice would be greatly appreciated. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250978,250978#msg-250978 From nginx-forum at nginx.us Wed Jun 18 02:14:08 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Tue, 17 Jun 2014 22:14:08 -0400 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: <406c2b5faabe8f4af1340d74c0098acc.NginxMailingListEnglish@forum.nginx.org> References: <406c2b5faabe8f4af1340d74c0098acc.NginxMailingListEnglish@forum.nginx.org> Message-ID: Could traffic surges do that too ? Such as after i leave php running for a while it seems to not take as long to load ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,250979#msg-250979 From efeldhusen.lists at gmail.com Wed Jun 18 03:21:42 2014 From: efeldhusen.lists at gmail.com (Eric Feldhusen) Date: Tue, 17 Jun 2014 22:21:42 -0500 Subject: Send all requests to two separate upstream servers? In-Reply-To: References: <53A059E1.3050406@kearsley.me> <849C8D146AB04ABF9F2821F7B810FD93@MasterPC> Message-ID: That's almost perfect, except I don't have enough access to the development environment to get it installed. Eric On Tue, Jun 17, 2014 at 5:27 PM, Yichun Zhang (agentzh) wrote: > Hi > > On Tue, Jun 17, 2014 at 3:09 PM, Eric Feldhusen wrote: > > I'm looking for a way to mirror my production site traffic to a > development > > environment, so that I have nearly identical traffic going to both to > work > > through some optimization issues that are hard to do without the load, > which > > is just incoming data. > > > > Sounds like a perfect use case for the tcpcopy tool: > > https://github.com/wangbin579/tcpcopy > > Best regards, > -agentzh > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pchychi at gmail.com Wed Jun 18 03:31:03 2014 From: pchychi at gmail.com (Payam Chychi) Date: Tue, 17 Jun 2014 20:31:03 -0700 Subject: Send all requests to two separate upstream servers? In-Reply-To: References: <53A059E1.3050406@kearsley.me> <849C8D146AB04ABF9F2821F7B810FD93@MasterPC> Message-ID: <8113D7447DD441CD8A2166D344741059@gmail.com> You are wanting to multi-purpose a production env for dev without proper parameters in ur setup. Set a system to use as ur test client requesting http, setup an if statement and match proper fields and proxy_pass proxyA or proxy_pass proxyB Id setup the more specific match on top Only an idea, im sure there are half a dozen ways of doing this... Just not without proper plan -- Payam Chychi Network Engineer / Security Specialist On Tuesday, June 17, 2014 at 8:21 PM, Eric Feldhusen wrote: > That's almost perfect, except I don't have enough access to the development environment to get it installed. > > Eric > > > On Tue, Jun 17, 2014 at 5:27 PM, Yichun Zhang (agentzh) wrote: > > Hi > > > > On Tue, Jun 17, 2014 at 3:09 PM, Eric Feldhusen wrote: > > > I'm looking for a way to mirror my production site traffic to a development > > > environment, so that I have nearly identical traffic going to both to work > > > through some optimization issues that are hard to do without the load, which > > > is just incoming data. 
> > > > > > > Sounds like a perfect use case for the tcpcopy tool: > > > > https://github.com/wangbin579/tcpcopy > > > > Best regards, > > -agentzh > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org (mailto:nginx at nginx.org) > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kurt at x64architecture.com Wed Jun 18 06:51:50 2014 From: kurt at x64architecture.com (Kurt Cancemi) Date: Wed, 18 Jun 2014 02:51:50 -0400 Subject: Best method for adding GeoIP support In-Reply-To: References: Message-ID: Hello, There is no way to do this with the packages from nginx.org, without recompiling nginx, with the --with-http_geoip_module build flag. Unless you do it on another level (e.g. with the geoip php extension) which I am assuming you don't want. You could set up your own repo. --- Kurt Cancemi http://www.getwnmp.org --- Kurt Cancemi http://www.getwnmp.org On Tue, Jun 17, 2014 at 9:59 PM, TheBritishGeek wrote: > We have just started to work with Nginx and have installed by adding the > nginx repositry to our debian 7 installs. It works almost perfectly out of > the box as such. However we need to add GeoIP support, so the question is > what is the best method of doing this. I really don't want to compile our > own install and break the simple apt-get upgrade path in the future if we > have a choice. > > Any advice would be greatly appreciated. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,250978,250978#msg-250978 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Wed Jun 18 06:57:40 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 18 Jun 2014 11:57:40 +0500 Subject: Best method for adding GeoIP support In-Reply-To: References: Message-ID: Maybe ngx_geo_module could help you, it's comes built-in with nginx and doesn't need re-compilation. http://nginx.org/en/docs/http/ngx_http_geo_module.html On Wed, Jun 18, 2014 at 11:51 AM, Kurt Cancemi wrote: > Hello, > > There is no way to do this with the packages from nginx.org, without > recompiling nginx, with the --with-http_geoip_module build flag. Unless > you do it on another level (e.g. with the geoip php extension) which I am > assuming you don't want. You could set up your own repo. > > --- > Kurt Cancemi > http://www.getwnmp.org > > --- > Kurt Cancemi > http://www.getwnmp.org > > > On Tue, Jun 17, 2014 at 9:59 PM, TheBritishGeek > wrote: > >> We have just started to work with Nginx and have installed by adding the >> nginx repositry to our debian 7 installs. It works almost perfectly out of >> the box as such. However we need to add GeoIP support, so the question is >> what is the best method of doing this. I really don't want to compile our >> own install and break the simple apt-get upgrade path in the future if we >> have a choice. >> >> Any advice would be greatly appreciated. 
>> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,250978,250978#msg-250978 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vishal.mestri at cloverinfotech.com Wed Jun 18 06:53:43 2014 From: vishal.mestri at cloverinfotech.com (Vishal Mestri) Date: Wed, 18 Jun 2014 12:23:43 +0530 (IST) Subject: Nginx - OS Version supported In-Reply-To: <24ac8262-b68b-47ea-9dce-4536c25ab8ab@mail.cloverinfotech.com> Message-ID: Hi All, I have gone through below link which gives details about OS and Platform supported. http://nginx.org/en/#tested_os_and_platforms I just want to know if Redhat 6.4 and Redhat 6.5 is supported by nginx? In link about it mentions, linux 2.2 -3 , which is I guess kernel version. Thanks & Regards, Vishal Mestri -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Wed Jun 18 07:09:58 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 18 Jun 2014 19:09:58 +1200 Subject: Nginx - OS Version supported In-Reply-To: References: Message-ID: <1403075398.3413.326.camel@steve-new> I'm happily running 1.6 -> 1.7.1 on current 64 bit CentOS 6.5. In the past, I've run 1.3 and up on CentOS 6. hth, Steve On Wed, 2014-06-18 at 12:23 +0530, Vishal Mestri wrote: > Hi All, > > > I have gone through below link which gives details about OS and > Platform supported. > > http://nginx.org/en/#tested_os_and_platforms > > > > I just want to know if Redhat 6.4 and Redhat 6.5 is supported by > nginx? > > > In link about it mentions, linux 2.2 -3 , which is I guess kernel > version. > > > > Thanks & Regards, > > Vishal Mestri > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From vbart at nginx.com Wed Jun 18 07:14:26 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 18 Jun 2014 11:14:26 +0400 Subject: lots of work in a location block... In-Reply-To: <1403050529.3413.295.camel@steve-new> References: <1403050529.3413.295.camel@steve-new> Message-ID: <2171523.O0fpYrErTT@vbart-workstation> On Wednesday 18 June 2014 12:15:29 Steve Holdoway wrote: > Hi Folks, > > I'm trying to integrate a python backend into a pre-existing php > website, and am having problems doing this as I need to rewrite the url > at the same time... eg: this is what isn't working. > > location = /example { The only request, that will be handled by your location with "=" modifier is "/example", even "/example/" isn't fit. > rewrite /example/(.*) /$1 break; > > root /www/example; > > include proxy_params; > proxy_pass http://python; > > #break; > } > ( stripped to bare essentials ) > > So it's > - strip off the /example prefix > - set the new root ( the php site sets is outside any location block ) > - pass stripped request to the python backend. [..] Please note, that the "root" directive is almost meaningless in location with "proxy_pass". And you don't need "rewrite". 
Something like that should work: location /example/ { include proxy_params; proxy_pass http://python/; } Please also check the docs: http://nginx.org/r/proxy_pass http://nginx.org/r/location wbr, Valentin V. Bartenev From vishal.mestri at cloverinfotech.com Wed Jun 18 07:12:23 2014 From: vishal.mestri at cloverinfotech.com (Vishal Mestri) Date: Wed, 18 Jun 2014 12:42:23 +0530 (IST) Subject: Nginx - OS Version supported In-Reply-To: <1403075398.3413.326.camel@steve-new> Message-ID: <757e0308-4220-449d-af22-7b58c2d12b75@mail.cloverinfotech.com> Thank you for prompt response. But I would like to know which Redhat OS Platforms supported by Nginx or tested by Nginx. Thanks & Regards, Vishal Mestri ----- Original Message ----- From: "Steve Holdoway" To: nginx at nginx.org Sent: Wednesday, June 18, 2014 12:39:58 PM Subject: Re: Nginx - OS Version supported I'm happily running 1.6 -> 1.7.1 on current 64 bit CentOS 6.5. In the past, I've run 1.3 and up on CentOS 6. hth, Steve On Wed, 2014-06-18 at 12:23 +0530, Vishal Mestri wrote: > Hi All, > > > I have gone through below link which gives details about OS and > Platform supported. > > http://nginx.org/en/#tested_os_and_platforms > > > > I just want to know if Redhat 6.4 and Redhat 6.5 is supported by > nginx? > > > In link about it mentions, linux 2.2 -3 , which is I guess kernel > version. > > > > Thanks & Regards, > > Vishal Mestri > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Wed Jun 18 07:29:50 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 18 Jun 2014 11:29:50 +0400 Subject: Nginx - OS Version supported In-Reply-To: References: Message-ID: <3666322.0HZHC0CKOF@vbart-workstation> On Wednesday 18 June 2014 12:23:43 Vishal Mestri wrote: > > Hi All, > > > I have gone through below link which gives details about OS and Platform > supported. > http://nginx.org/en/#tested_os_and_platforms [..] Not "supported", but "tested" which currently something is very abstract, and means that one day in the past somebody has successfully compiled and run nginx on these platforms. And even this definition is too far from to be accurate. I know that nginx works flawlessly on ARMs, on the contrary I'm in doubt that it still working well on linux 2.2. You may just ignore that list. There's also a list of distributions for those we build official packages: http://nginx.org/en/linux_packages.html And another one for nginx plus: http://nginx.com/products/technical-specs/ wbr, Valentin V. Bartenev From steve at greengecko.co.nz Wed Jun 18 07:35:02 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 18 Jun 2014 19:35:02 +1200 Subject: lots of work in a location block... In-Reply-To: <2171523.O0fpYrErTT@vbart-workstation> References: <1403050529.3413.295.camel@steve-new> <2171523.O0fpYrErTT@vbart-workstation> Message-ID: <1403076902.4567.1.camel@localhost.localdomain> That's a red herring... cut/paste error. The rewrite is being processed, the result isn't being passed to the proxy server. On Wed, 2014-06-18 at 11:14 +0400, Valentin V. 
Bartenev wrote: > On Wednesday 18 June 2014 12:15:29 Steve Holdoway wrote: > > Hi Folks, > > > > I'm trying to integrate a python backend into a pre-existing php > > website, and am having problems doing this as I need to rewrite the url > > at the same time... eg: this is what isn't working. > > > > location = /example { > > The only request, that will be handled by your location with "=" modifier > is "/example", even "/example/" isn't fit. > > > rewrite /example/(.*) /$1 break; > > > > root /www/example; > > > > include proxy_params; > > proxy_pass http://python; > > > > #break; > > } > > ( stripped to bare essentials ) > > > > So it's > > - strip off the /example prefix > > - set the new root ( the php site sets is outside any location block ) > > - pass stripped request to the python backend. > [..] > > Please note, that the "root" directive is almost meaningless in location > with "proxy_pass". > > And you don't need "rewrite". > > Something like that should work: > > location /example/ { > include proxy_params; > proxy_pass http://python/; > } > > Please also check the docs: > > http://nginx.org/r/proxy_pass > http://nginx.org/r/location > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Jun 18 07:51:36 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 18 Jun 2014 03:51:36 -0400 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: References: <406c2b5faabe8f4af1340d74c0098acc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <428f460e1a336af8e4b395e06cec9363.NginxMailingListEnglish@forum.nginx.org> Lets see some logging. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,250990#msg-250990 From nginx-forum at nginx.us Wed Jun 18 08:03:56 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 18 Jun 2014 04:03:56 -0400 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: <406c2b5faabe8f4af1340d74c0098acc.NginxMailingListEnglish@forum.nginx.org> References: <406c2b5faabe8f4af1340d74c0098acc.NginxMailingListEnglish@forum.nginx.org> Message-ID: PHP_FCGI_MAX_REQUESTS=0, try the recommended value of 10000. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,250992#msg-250992 From vbart at nginx.com Wed Jun 18 08:13:55 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 18 Jun 2014 12:13:55 +0400 Subject: lots of work in a location block... In-Reply-To: <1403076902.4567.1.camel@localhost.localdomain> References: <1403050529.3413.295.camel@steve-new> <2171523.O0fpYrErTT@vbart-workstation> <1403076902.4567.1.camel@localhost.localdomain> Message-ID: <1861320.Gg7m0WuG88@vbart-workstation> On Wednesday 18 June 2014 19:35:02 Steve Holdoway wrote: > That's a red herring... cut/paste error. The rewrite is being processed, > the result isn't being passed to the proxy server. > [..] Without a full and exact copy of your configuration there's no way to help you. Every single bit of it has meaning. You are assuring us that the configuration is ok, then where is the problem? wbr, Valentin V. Bartenev From katmai at keptprivate.com Wed Jun 18 10:24:32 2014 From: katmai at keptprivate.com (Stefanita Rares Dumitrescu) Date: Wed, 18 Jun 2014 12:24:32 +0200 Subject: Sticky equivalent Message-ID: <53A168E0.70309@keptprivate.com> Hi guys, I am running nginx 1.4 right now on a bunch of front end servers, i am running freebsd, and have nginx with sticky patch compiled from ports. 
I can't upgrade to 1.6 or later, because the sticky port seems to be broken. Was there any similar feature introduced in nginx, or how can we work that one out? Regards Rares From ru at nginx.com Wed Jun 18 10:59:04 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 18 Jun 2014 14:59:04 +0400 Subject: Sticky equivalent In-Reply-To: <53A168E0.70309@keptprivate.com> References: <53A168E0.70309@keptprivate.com> Message-ID: <20140618105904.GA81997@lo0.su> On Wed, Jun 18, 2014 at 12:24:32PM +0200, Stefanita Rares Dumitrescu wrote: > Hi guys, > > I am running nginx 1.4 right now on a bunch of front end servers, i am > running freebsd, and have nginx with sticky patch compiled from ports. > > I can't upgrade to 1.6 or later, because the sticky port seems to be > broken. Was there any similar feature introduced in nginx, or how can we > work that one out? The latest version of nginx 1.7.2 includes the consistent hash feature [1]. The commercial version of nginx includes the sticky functionality [2]. [1] http://nginx.org/r/hash [2] http://nginx.org/r/sticky From nginx-forum at nginx.us Wed Jun 18 12:21:38 2014 From: nginx-forum at nginx.us (akurczyk) Date: Wed, 18 Jun 2014 08:21:38 -0400 Subject: Optimization of Nginx for 128 MB RAM VPS Message-ID: <6770c4e155dce95007379f703f19a7c5.NginxMailingListEnglish@forum.nginx.org> Hello, I have a 128 MB RAM VPS with 1 vcore of 2,2 GHz x86_64 CPU. The CPU is much faster than the Rapsberry one so that is not a problem but the RAM usage, I think, is. Could You help me optimize my Nginx installation? Thats my configuration: nginx.conf: http://pastebin.com/M55APXzD The worker_process value is set to the number of vcores which is 1, so it's OK, I guess. But what with the worker_connections? I have read that this is the number of simultaneous connections a single process can handle. How high should this value be? I am expecting lets say 20 connections per hour. I will host there only my personal home page and my school notebook (based on Wordpress). Will increasing this value result in high usage of RAM by idling nginx waiting for new connections? PHP-FPM pools configs: http://pastebin.com/Ap9G2qpx How should I configure the pm values for each of this websites/vhosts? The one called "zeszyt" is my notebook based on Wordpress, The "pma" one is the phpMyAdmin and the rest are just normal website builded with my own simple PHP scripts and HTML files. Should I set there something more to e.g. to make the file upload possible like with the phpMyAdmin or it will just work out of the box like with Apache and PHP module? Thanks in advance -- Best regards, Aleksander Kurczyk Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251000,251000#msg-251000 From maxim at nginx.com Wed Jun 18 12:26:09 2014 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 18 Jun 2014 16:26:09 +0400 Subject: Optimization of Nginx for 128 MB RAM VPS In-Reply-To: <6770c4e155dce95007379f703f19a7c5.NginxMailingListEnglish@forum.nginx.org> References: <6770c4e155dce95007379f703f19a7c5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <53A18561.1030105@nginx.com> On 6/18/14 4:21 PM, akurczyk wrote: > I am expecting lets say 20 connections per hour. [...] Personally I don't think that you need any optimizations for such load. 
-- Maxim Konovalov http://nginx.com From nginx-forum at nginx.us Wed Jun 18 13:21:55 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Wed, 18 Jun 2014 09:21:55 -0400 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: <428f460e1a336af8e4b395e06cec9363.NginxMailingListEnglish@forum.nginx.org> References: <406c2b5faabe8f4af1340d74c0098acc.NginxMailingListEnglish@forum.nginx.org> <428f460e1a336af8e4b395e06cec9363.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5929c0ab0381ace2c1363cd565771151.NginxMailingListEnglish@forum.nginx.org> Even with PHP_FCGI_MAX_REQUESTS=10000 i still get it what log setting would you like nginx set to ? Because mysql has no outputs in the slow query log that take more than 2 seconds and PHP has no error outputs or crashes in my syslog. error_log logs/error.log crit; Is my current setting should i change it to debug ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,251002#msg-251002 From nginx-forum at nginx.us Wed Jun 18 13:53:15 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 18 Jun 2014 09:53:15 -0400 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: <5929c0ab0381ace2c1363cd565771151.NginxMailingListEnglish@forum.nginx.org> References: <406c2b5faabe8f4af1340d74c0098acc.NginxMailingListEnglish@forum.nginx.org> <428f460e1a336af8e4b395e06cec9363.NginxMailingListEnglish@forum.nginx.org> <5929c0ab0381ace2c1363cd565771151.NginxMailingListEnglish@forum.nginx.org> Message-ID: c0nw0nk Wrote: ------------------------------------------------------- > error_log logs/error.log crit; > > Is my current setting should i change it to debug ? No option is required to get error messages about backend issues. Remove that first server {} block with the return 200, for fcgi try this; fastcgi_pass web_rack; fastcgi_index index.php; include fastcgi_params; keepalive_timeout 600; keepalive_requests 500; proxy_http_version 1.1; proxy_ignore_client_abort on; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_keep_conn on; expires 10s; When you experience a slow page what are the php-cgi processes / mysql doing ? use processexplorer to find out. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,251003#msg-251003 From luky-37 at hotmail.com Wed Jun 18 14:19:33 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 18 Jun 2014 16:19:33 +0200 Subject: Optimization of Nginx for 128 MB RAM VPS In-Reply-To: <6770c4e155dce95007379f703f19a7c5.NginxMailingListEnglish@forum.nginx.org> References: <6770c4e155dce95007379f703f19a7c5.NginxMailingListEnglish@forum.nginx.org> Message-ID: > Hello, > > I have a 128 MB RAM VPS with 1 vcore of 2,2 GHz x86_64 CPU. The CPU is much > faster than the Rapsberry one so that is not a problem but the RAM usage, I > think, is. > > Could You help me optimize my Nginx installation? Is this really needed? Nginx doesn't use much RAM usually. How much RAM does nginx use currently and what is the target usage you would like to achieve? The most important thing is probably to disable all plugins, third party and build-in. Lukas From fusca14 at gmail.com Wed Jun 18 14:22:24 2014 From: fusca14 at gmail.com (Fabiano Furtado Pessoa Coelho) Date: Wed, 18 Jun 2014 11:22:24 -0300 Subject: Sticky equivalent In-Reply-To: <20140618105904.GA81997@lo0.su> References: <53A168E0.70309@keptprivate.com> <20140618105904.GA81997@lo0.su> Message-ID: Dear Ruslan, Can you post an example of using this hash feature? Thanks in advance. 
On Wed, Jun 18, 2014 at 7:59 AM, Ruslan Ermilov wrote: > On Wed, Jun 18, 2014 at 12:24:32PM +0200, Stefanita Rares Dumitrescu wrote: >> Hi guys, >> >> I am running nginx 1.4 right now on a bunch of front end servers, i am >> running freebsd, and have nginx with sticky patch compiled from ports. >> >> I can't upgrade to 1.6 or later, because the sticky port seems to be >> broken. Was there any similar feature introduced in nginx, or how can we >> work that one out? > > The latest version of nginx 1.7.2 includes the consistent hash feature [1]. > The commercial version of nginx includes the sticky functionality [2]. > > [1] http://nginx.org/r/hash > [2] http://nginx.org/r/sticky > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ru at nginx.com Wed Jun 18 14:49:14 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 18 Jun 2014 18:49:14 +0400 Subject: Sticky equivalent In-Reply-To: References: <53A168E0.70309@keptprivate.com> <20140618105904.GA81997@lo0.su> Message-ID: <20140618144914.GB86855@lo0.su> On Wed, Jun 18, 2014 at 11:22:24AM -0300, Fabiano Furtado Pessoa Coelho wrote: > Dear Ruslan, > > Can you post an example of using this hash feature? > > Thanks in advance. upstream u { hash $binary_remote_addr consistent; server 10.0.0.1; server 10.0.0.2; server 10.0.0.3; } You can use any expression as the "key", e.g. hash $cookie_uid consistent; It all depends on your needs actually. > On Wed, Jun 18, 2014 at 7:59 AM, Ruslan Ermilov wrote: > > On Wed, Jun 18, 2014 at 12:24:32PM +0200, Stefanita Rares Dumitrescu wrote: > >> Hi guys, > >> > >> I am running nginx 1.4 right now on a bunch of front end servers, i am > >> running freebsd, and have nginx with sticky patch compiled from ports. > >> > >> I can't upgrade to 1.6 or later, because the sticky port seems to be > >> broken. Was there any similar feature introduced in nginx, or how can we > >> work that one out? > > > > The latest version of nginx 1.7.2 includes the consistent hash feature [1]. > > The commercial version of nginx includes the sticky functionality [2]. > > > > [1] http://nginx.org/r/hash > > [2] http://nginx.org/r/sticky From nginx-forum at nginx.us Wed Jun 18 15:19:00 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Wed, 18 Jun 2014 11:19:00 -0400 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: References: <406c2b5faabe8f4af1340d74c0098acc.NginxMailingListEnglish@forum.nginx.org> <428f460e1a336af8e4b395e06cec9363.NginxMailingListEnglish@forum.nginx.org> <5929c0ab0381ace2c1363cd565771151.NginxMailingListEnglish@forum.nginx.org> Message-ID: <95b7fce537fcdcbf69f9a2d7b33be5ba.NginxMailingListEnglish@forum.nginx.org> Looking at it PHP, MySQL and Nginx have no changes in CPU although PHP did spike in I/O usage. And i disabled caching in Joomla and sometimes i came across this. PHP Fatal error: Maximum execution time of 30 seconds exceeded in C:\server\websites\ps\public_www\libraries\loader.php on line 183 Do you recon that could be my bottle neck ? 
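A side note on the two limits involved here, since they produce different symptoms: if PHP's max_execution_time (30 seconds in this case) fires first, the request dies with the fatal error quoted above; if nginx's fastcgi_read_timeout (60 seconds by default) fires first, nginx logs "upstream timed out" and returns a 504 instead. Comparing the two values helps localize where the time is going. A sketch of the nginx half only, with an illustrative address and timeout rather than the poster's actual settings:

    # illustrative: keep nginx's read timeout above whatever PHP is allowed to run
    location ~ \.php$ {
        fastcgi_pass          127.0.0.1:9000;
        include               fastcgi_params;
        fastcgi_param         SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_read_timeout  120s;    # how long nginx waits for the FastCGI response
        # PHP side (php.ini): max_execution_time = 60
    }

Raising either limit only buys time; the later replies also ask why a Joomla page needs more than 30 seconds in the first place.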
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,251010#msg-251010 From kurt at x64architecture.com Wed Jun 18 15:58:54 2014 From: kurt at x64architecture.com (Kurt Cancemi) Date: Wed, 18 Jun 2014 11:58:54 -0400 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: <95b7fce537fcdcbf69f9a2d7b33be5ba.NginxMailingListEnglish@forum.nginx.org> References: <406c2b5faabe8f4af1340d74c0098acc.NginxMailingListEnglish@forum.nginx.org> <428f460e1a336af8e4b395e06cec9363.NginxMailingListEnglish@forum.nginx.org> <5929c0ab0381ace2c1363cd565771151.NginxMailingListEnglish@forum.nginx.org> <95b7fce537fcdcbf69f9a2d7b33be5ba.NginxMailingListEnglish@forum.nginx.org> Message-ID: To change the maximum execution time, in your php.ini change max_execution_time 30 to 120. The maximum execution time is how long a PHP script may run for, in seconds. --- Kurt Cancemi http://www.getwnmp.org On Wed, Jun 18, 2014 at 11:19 AM, c0nw0nk wrote: > Looking at it PHP, MySQL and Nginx have no changes in CPU although PHP did > spike in I/O usage. > > And i disabled caching in Joomla and sometimes i came across this. > PHP Fatal error: Maximum execution time of 30 seconds exceeded in > C:\server\websites\ps\public_www\libraries\loader.php on line 183 > > > Do you recon that could be my bottle neck ? > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,251010#msg-251010 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From kurt at x64architecture.com Wed Jun 18 16:00:30 2014 From: kurt at x64architecture.com (Kurt Cancemi) Date: Wed, 18 Jun 2014 12:00:30 -0400 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: References: <406c2b5faabe8f4af1340d74c0098acc.NginxMailingListEnglish@forum.nginx.org> <428f460e1a336af8e4b395e06cec9363.NginxMailingListEnglish@forum.nginx.org> <5929c0ab0381ace2c1363cd565771151.NginxMailingListEnglish@forum.nginx.org> <95b7fce537fcdcbf69f9a2d7b33be5ba.NginxMailingListEnglish@forum.nginx.org> Message-ID: I meant change max_execution_time = 30 to max_execution_time = 120 --- Kurt Cancemi http://www.getwnmp.org On Wed, Jun 18, 2014 at 11:58 AM, Kurt Cancemi wrote: > To change the maximum execution time, in your php.ini change > max_execution_time 30 to 120. The maximum execution time is how long a > PHP script may run for, in seconds. > > --- > Kurt Cancemi > http://www.getwnmp.org > > > On Wed, Jun 18, 2014 at 11:19 AM, c0nw0nk wrote: >> Looking at it PHP, MySQL and Nginx have no changes in CPU although PHP did >> spike in I/O usage. >> >> And i disabled caching in Joomla and sometimes i came across this. >> PHP Fatal error: Maximum execution time of 30 seconds exceeded in >> C:\server\websites\ps\public_www\libraries\loader.php on line 183 >> >> >> Do you recon that could be my bottle neck ?
>> >> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,251010#msg-251010 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Jun 18 16:06:11 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Wed, 18 Jun 2014 12:06:11 -0400 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: <95b7fce537fcdcbf69f9a2d7b33be5ba.NginxMailingListEnglish@forum.nginx.org> References: <406c2b5faabe8f4af1340d74c0098acc.NginxMailingListEnglish@forum.nginx.org> <428f460e1a336af8e4b395e06cec9363.NginxMailingListEnglish@forum.nginx.org> <5929c0ab0381ace2c1363cd565771151.NginxMailingListEnglish@forum.nginx.org> <95b7fce537fcdcbf69f9a2d7b33be5ba.NginxMailingListEnglish@forum.nginx.org> Message-ID: <98a8b228bc61685ffcbeca0f4af90e6a.NginxMailingListEnglish@forum.nginx.org> Doing further testing i have discoverd something disturbing. I can execute a page upon the website, the page never loads. Then when i go to load other pages they all give of a error. Fatal error: Maximum execution time of 30 seconds exceeded in C:\server\websites\ps\public_www\libraries\loader.php on line 183 So the one page that does not load triggers a knock on effect to all other pages it's like playing domino's Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,251013#msg-251013 From nginx-forum at nginx.us Wed Jun 18 16:14:28 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Wed, 18 Jun 2014 12:14:28 -0400 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: References: Message-ID: Upon the change it gets worse now i do not even get a error the browser just time's out. I have no idea if it is a Joomla issue or if it is actualy something with PHP on windows. I get the feeling it is a bit of both. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,251014#msg-251014 From nginx-forum at nginx.us Wed Jun 18 16:24:54 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 18 Jun 2014 12:24:54 -0400 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: References: Message-ID: c0nw0nk Wrote: ------------------------------------------------------- > Upon the change it gets worse now i do not even get a error the > browser just time's out. I have no idea if it is a Joomla issue or if > it is actualy something with PHP on windows. I get the feeling it is a > bit of both. Then you first need to figure out why joomla needs so much time, maybe it is waiting for mysql without getting anywhere. Can be anything, invalid request ending in a loop, too many nested / unions queries timing out mysql... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,251016#msg-251016 From shahzaib.cb at gmail.com Wed Jun 18 17:10:28 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 18 Jun 2014 22:10:28 +0500 Subject: Caching servers in Local ISPs !! In-Reply-To: <0afb20e02fe205634feb21cff91be8fb.NginxMailingListEnglish@forum.nginx.org> References: <0afb20e02fe205634feb21cff91be8fb.NginxMailingListEnglish@forum.nginx.org> Message-ID: ok, but i have no idea why ISP is asking for BGP and matter of fact is, i'll have to make BGP work somehow, so local caching server will fetch the new subnets from ISP router automatically (and i don't know how). Btw, our local ISP provided us with some testing ip prefixes to check nginx based caching. 
i.e geo { default 0; 10.0.0.0/8 39.23.2.0/24 1; 112.50.192.0/18 1; } Now whenever we add the prefix 112.50.192.0/18 in geo {} , all the requests coming from the 39.23.2.0/24 and 10.0.0.0/8 returns 504 gateway error and videos failed to stream. To resolve this issue, we have to remove 112.50.192.0/18 1; from geo block. On Wed, Jun 18, 2014 at 12:50 AM, itpp2012 wrote: > You don't need to do anything with a dns that is only local to the clients > served by the ISP. > > Suppose I am in Africa; > Question to my ISP: I'd like to go to new-york > ISP: new-york is located in south-Africa > > Suppose I am in the US; > Question to my ISP: I'd like to go to new-york > ISP: new-york is located in the US > > The DNS is just a pointer, where ever you have an edge server make the dns > name point to it, when not point the dns to origin. > Every ISP client gets the DNS servers from their ISP, its really simple. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249997,250957#msg-250957 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jun 18 18:24:33 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 18 Jun 2014 14:24:33 -0400 Subject: Caching servers in Local ISPs !! In-Reply-To: References: Message-ID: shahzaib1232 Wrote: ------------------------------------------------------- > > Btw, our local ISP provided us with some testing ip prefixes to check > nginx > based caching. i.e > geo { > default 0; > 10.0.0.0/8 > 39.23.2.0/24 1; > 112.50.192.0/18 1; > } > Typo?? geo { default 0; 10.0.0.0/8 1; 39.23.2.0/24 1; 112.50.192.0/18 1; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249997,251019#msg-251019 From shahzaib.cb at gmail.com Wed Jun 18 18:30:42 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 18 Jun 2014 23:30:42 +0500 Subject: Caching servers in Local ISPs !! In-Reply-To: References: Message-ID: >>geo { default 0; 10.0.0.0/8 1; 39.23.2.0/24 1; 112.50.192.0/18 1; } Sorry i didn't write accurately here but it is 10.0.0.0/8 1; in nginx config, so the problem is not the wrong syntax for geo {}. On Wed, Jun 18, 2014 at 11:24 PM, itpp2012 wrote: > shahzaib1232 Wrote: > ------------------------------------------------------- > > > > Btw, our local ISP provided us with some testing ip prefixes to check > > nginx > > based caching. i.e > > geo { > > default 0; > > 10.0.0.0/8 > > 39.23.2.0/24 1; > > 112.50.192.0/18 1; > > } > > > > Typo?? > > geo { > default 0; > 10.0.0.0/8 1; > 39.23.2.0/24 1; > 112.50.192.0/18 1; > } > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249997,251019#msg-251019 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From katmai at keptprivate.com Wed Jun 18 18:46:04 2014 From: katmai at keptprivate.com (Stefanita Rares Dumitrescu) Date: Wed, 18 Jun 2014 20:46:04 +0200 Subject: Sticky equivalent In-Reply-To: <20140618144914.GB86855@lo0.su> References: <53A168E0.70309@keptprivate.com> <20140618105904.GA81997@lo0.su> <20140618144914.GB86855@lo0.su> Message-ID: <53A1DE6C.3080407@keptprivate.com> Oh god 1350$ On 18/06/2014 16:49, Ruslan Ermilov wrote: > On Wed, Jun 18, 2014 at 11:22:24AM -0300, Fabiano Furtado Pessoa Coelho wrote: >> Dear Ruslan, >> >> Can you post an example of using this hash feature? >> >> Thanks in advance. > upstream u { > hash $binary_remote_addr consistent; > server 10.0.0.1; > server 10.0.0.2; > server 10.0.0.3; > } > > You can use any expression as the "key", e.g. > > hash $cookie_uid consistent; > > It all depends on your needs actually. > >> On Wed, Jun 18, 2014 at 7:59 AM, Ruslan Ermilov wrote: >>> On Wed, Jun 18, 2014 at 12:24:32PM +0200, Stefanita Rares Dumitrescu wrote: >>>> Hi guys, >>>> >>>> I am running nginx 1.4 right now on a bunch of front end servers, i am >>>> running freebsd, and have nginx with sticky patch compiled from ports. >>>> >>>> I can't upgrade to 1.6 or later, because the sticky port seems to be >>>> broken. Was there any similar feature introduced in nginx, or how can we >>>> work that one out? >>> The latest version of nginx 1.7.2 includes the consistent hash feature [1]. >>> The commercial version of nginx includes the sticky functionality [2]. >>> >>> [1] http://nginx.org/r/hash >>> [2] http://nginx.org/r/sticky > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From luky-37 at hotmail.com Wed Jun 18 19:35:39 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 18 Jun 2014 21:35:39 +0200 Subject: Caching servers in Local ISPs !! In-Reply-To: References: , <0afb20e02fe205634feb21cff91be8fb.NginxMailingListEnglish@forum.nginx.org>, Message-ID: Hi, > ok, but i have no idea why ISP is asking for BGP and matter of fact is, > i'll have to make BGP work somehow, so local caching server will fetch > the new subnets from ISP router automatically (and i don't know how). I strongly suggest you hire some consultant who can help you setting all those things up, because this is clearly a task too complex for a single mailing list thread and some nginx configurations. Also, why not host those file on a professional CDN instead of in-house? https://www.google.com/search?q=mp4+streaming+cdn Lukas From shahzaib.cb at gmail.com Wed Jun 18 19:45:48 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 19 Jun 2014 00:45:48 +0500 Subject: Caching servers in Local ISPs !! In-Reply-To: References: <0afb20e02fe205634feb21cff91be8fb.NginxMailingListEnglish@forum.nginx.org> Message-ID: >>why not host those file on a professional CDN instead of in-house? Because 80% of the traffic is from our country and 50% of that traffic is from the ISP we're talking to and this is the reason we deployed the caching box on this ISP edge. On Thu, Jun 19, 2014 at 12:35 AM, Lukas Tribus wrote: > Hi, > > > > ok, but i have no idea why ISP is asking for BGP and matter of fact is, > > i'll have to make BGP work somehow, so local caching server will fetch > > the new subnets from ISP router automatically (and i don't know how). 
> > I strongly suggest you hire some consultant who can help you setting > all those things up, because this is clearly a task too complex for > a single mailing list thread and some nginx configurations. > > Also, why not host those file on a professional CDN instead of in-house? > > https://www.google.com/search?q=mp4+streaming+cdn > > > > Lukas > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Jun 18 19:47:28 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 Jun 2014 23:47:28 +0400 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: <5929c0ab0381ace2c1363cd565771151.NginxMailingListEnglish@forum.nginx.org> References: <406c2b5faabe8f4af1340d74c0098acc.NginxMailingListEnglish@forum.nginx.org> <428f460e1a336af8e4b395e06cec9363.NginxMailingListEnglish@forum.nginx.org> <5929c0ab0381ace2c1363cd565771151.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140618194728.GX1849@mdounin.ru> Hello! On Wed, Jun 18, 2014 at 09:21:55AM -0400, c0nw0nk wrote: > Even with PHP_FCGI_MAX_REQUESTS=10000 i still get it what log setting would > you like nginx set to ? > > Because mysql has no outputs in the slow query log that take more than 2 > seconds and PHP has no error outputs or crashes in my syslog. > > error_log logs/error.log crit; > > Is my current setting should i change it to debug ? It's very bad idea to run production systems with error_log level set to "crit". It basically forbids any logging of non-fatal errors, and backend timeouts are certainly non-fatal. To see log messages about backend issues you should set logging level to at least "error" (as it is by default). -- Maxim Dounin http://nginx.org/ From contact at jpluscplusm.com Wed Jun 18 20:05:24 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 18 Jun 2014 21:05:24 +0100 Subject: Caching servers in Local ISPs !! In-Reply-To: References: <0afb20e02fe205634feb21cff91be8fb.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 18 Jun 2014 20:45, "shahzaib shahzaib" wrote: > > >>why not host those file on a professional CDN instead of in-house? > Because 80% of the traffic is from our country and 50% of that traffic is from the ISP we're talking to and this is the reason we deployed the caching box on this ISP edge. But, as this now pretty off-topic thread is repeatedly demonstrating, you haven't deployed diddly squat. You've just chucked a server in a rack and are having to rely on unpaid, debugging-by-email advice from an pseudonymous mailing list to get it even near functional. Let alone properly defined and understood. If your *business* needs to do this, pay a professional person or organisation to help you like others have suggested. The alternative, which you appear to be ending up with, is a black box of hacks known only to yourself and potentially understood by no-one, which will SPoF on you, personally, until you leave that organisation. You don't want that. Trust me. Just my 2 cents, Jonathan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ru at nginx.com Wed Jun 18 21:08:47 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 19 Jun 2014 01:08:47 +0400 Subject: Sticky equivalent In-Reply-To: <53A1DE6C.3080407@keptprivate.com> References: <53A168E0.70309@keptprivate.com> <20140618105904.GA81997@lo0.su> <20140618144914.GB86855@lo0.su> <53A1DE6C.3080407@keptprivate.com> Message-ID: <20140618210847.GA99146@lo0.su> On Wed, Jun 18, 2014 at 08:46:04PM +0200, Stefanita Rares Dumitrescu wrote: > Oh god 1350$ Consistent hash is free of charge, it's in open source version. > On 18/06/2014 16:49, Ruslan Ermilov wrote: > > On Wed, Jun 18, 2014 at 11:22:24AM -0300, Fabiano Furtado Pessoa Coelho wrote: > >> Dear Ruslan, > >> > >> Can you post an example of using this hash feature? > >> > >> Thanks in advance. > > upstream u { > > hash $binary_remote_addr consistent; > > server 10.0.0.1; > > server 10.0.0.2; > > server 10.0.0.3; > > } > > > > You can use any expression as the "key", e.g. > > > > hash $cookie_uid consistent; > > > > It all depends on your needs actually. > > > >> On Wed, Jun 18, 2014 at 7:59 AM, Ruslan Ermilov wrote: > >>> On Wed, Jun 18, 2014 at 12:24:32PM +0200, Stefanita Rares Dumitrescu wrote: > >>>> Hi guys, > >>>> > >>>> I am running nginx 1.4 right now on a bunch of front end servers, i am > >>>> running freebsd, and have nginx with sticky patch compiled from ports. > >>>> > >>>> I can't upgrade to 1.6 or later, because the sticky port seems to be > >>>> broken. Was there any similar feature introduced in nginx, or how can we > >>>> work that one out? > >>> The latest version of nginx 1.7.2 includes the consistent hash feature [1]. > >>> The commercial version of nginx includes the sticky functionality [2]. > >>> > >>> [1] http://nginx.org/r/hash > >>> [2] http://nginx.org/r/sticky > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Ruslan Ermilov From nginx-forum at nginx.us Wed Jun 18 22:25:15 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Wed, 18 Jun 2014 18:25:15 -0400 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: <20140618194728.GX1849@mdounin.ru> References: <20140618194728.GX1849@mdounin.ru> Message-ID: Yep it is a PHP application error. My Nginx log now outputs this. I doing more digging to see why it is acting up. 
2014/06/18 23:20:49 [error] 3792#7764: *22709 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while reading response header from upstream, client: 10.71.9.108, server: domain.com, request: "GET /media-gallery/9329-video-2014-06-06-08-25-27.html HTTP/1.1", upstream: "fastcgi://127.0.0.1:9018", host: "www.domain.com", referrer: "http://www.domain.com/media-gallery.html?start=60" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,251030#msg-251030 From nginx-forum at nginx.us Thu Jun 19 00:42:38 2014 From: nginx-forum at nginx.us (Yumi) Date: Wed, 18 Jun 2014 20:42:38 -0400 Subject: Optimization of Nginx for 128 MB RAM VPS In-Reply-To: <6770c4e155dce95007379f703f19a7c5.NginxMailingListEnglish@forum.nginx.org> References: <6770c4e155dce95007379f703f19a7c5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6fe84af673423f2e8e4e9f8c75399fb5.NginxMailingListEnglish@forum.nginx.org> > But what with the worker_connections? I have read that this is the number of simultaneous connections a single process can handle. How high should this value be? I am expecting lets say 20 connections per hour. I will host there only my personal home page and my school notebook (based on Wordpress). Will increasing this value result in high usage of RAM by idling nginx waiting for new connections? As echoed by others, I'd hardly be worrying about 20 requests / hour, but just FYI, that's a *maximum* number, so if you're not using it, it won't really consume more RAM. nginx handles many connections very well, so it won't really matter if you set it a fair bit higher. Generally PHP/MySQL would be your optimisation targets as they'll consume way more RAM than nginx, but again, for your load, I wouldn't be worried. > Should I set there something more to e.g. to make the file upload possible like with the phpMyAdmin or it will just work out of the box like with Apache and PHP module? If you're uploading large dumps, you may wish to increase the size of client_max_body_size: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251000,251031#msg-251031 From shahzaib.cb at gmail.com Thu Jun 19 05:20:19 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 19 Jun 2014 10:20:19 +0500 Subject: Caching servers in Local ISPs !! In-Reply-To: References: <0afb20e02fe205634feb21cff91be8fb.NginxMailingListEnglish@forum.nginx.org> Message-ID: @Jonathon, yes you're right i should not post off-topic here, offcourse i thought as nginx has tremendous amount of capabilities and there might be alternative possibility of BGP too but i was wrong. I would be thankful if you help me on ngx-http_geo_module as it is related to nginx and help me with the following problem :- --------------------------------------------------------------------------------------------------------------------- Our local ISP provided us with some testing ip prefixes to check nginx based caching. i.e geo { default 0; 10.0.0.0/8 1; 39.23.2.0/24 1; 112.50.192.0/18 1; } Now whenever we add the prefix 112.50.192.0/18 in geo {} , all the requests coming from the 39.23.2.0/24 and 10.0.0.0/8 returns nginx 504 gateway error and videos failed to stream. To resolve this issue, we have to remove 112.50.192.0/18 1; from geo block. 
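Since the thread does not show how that geo variable is used further down the configuration, one low-risk way to narrow the 504s down is to log, for every request, which value the geo block computed and which upstream address (if any) was then contacted. That shows whether the failures correlate with the new prefix itself or with the proxy/caching branch those clients get switched to. A sketch with placeholder names -- $in_isp, the geo_debug format name and the log path are invented for illustration and are not the poster's real configuration:

    geo $in_isp {
        default          0;
        10.0.0.0/8       1;
        39.23.2.0/24     1;
        112.50.192.0/18  1;
    }

    log_format  geo_debug  '$remote_addr "$request" in_isp=$in_isp '
                           'upstream=$upstream_addr status=$status';
    access_log  logs/geo_debug.log  geo_debug;

A run of status=504 lines whose upstream= field is empty or points at an unreachable address would suggest the problem sits in whatever the variable selects, not in the geo matching itself.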
On Thu, Jun 19, 2014 at 1:05 AM, Jonathan Matthews wrote: > On 18 Jun 2014 20:45, "shahzaib shahzaib" wrote: > > > > >>why not host those file on a professional CDN instead of in-house? > > Because 80% of the traffic is from our country and 50% of that traffic > is from the ISP we're talking to and this is the reason we deployed the > caching box on this ISP edge. > > But, as this now pretty off-topic thread is repeatedly demonstrating, you > haven't deployed diddly squat. You've just chucked a server in a rack and > are having to rely on unpaid, debugging-by-email advice from an > pseudonymous mailing list to get it even near functional. Let alone > properly defined and understood. > > If your *business* needs to do this, pay a professional person or > organisation to help you like others have suggested. The alternative, which > you appear to be ending up with, is a black box of hacks known only to > yourself and potentially understood by no-one, which will SPoF on you, > personally, until you leave that organisation. You don't want that. Trust > me. > > > Just my 2 cents, > Jonathan > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jun 19 05:24:53 2014 From: nginx-forum at nginx.us (prkumar) Date: Thu, 19 Jun 2014 01:24:53 -0400 Subject: http Keepalive implementation Message-ID: I was going through NGINX source code to implement keepalive for nginx zeromq plugin that I have developed. I have been inspired by ngx_http_upstream_keepalive_module. Was wondering why nginx uses a kind of two linkedlist based stack implementation to implement keepalive connection pool. Why not use typical linkedlist based queue implementation. Kindly refer to ngx_http_upstream_keepalive_module.c :ngx_http_upstream_get_keepalive_peer ngx_http_upstream_keepalive_module.c:ngx_http_upstream_free_keepalive_peer I used exactly like this without wondering why, now I kind of want to know the reason. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251033,251033#msg-251033 From nginx-forum at nginx.us Thu Jun 19 05:37:56 2014 From: nginx-forum at nginx.us (prkumar) Date: Thu, 19 Jun 2014 01:37:56 -0400 Subject: how to set timer? In-Reply-To: <1d426a08aeb874bc93b7ec8132369e0f.NginxMailingList@forum.nginx.org> References: <0e96d43ade091f18dbb98f62f9445495.NginxMailingList@forum.nginx.org> <1d426a08aeb874bc93b7ec8132369e0f.NginxMailingList@forum.nginx.org> Message-ID: What do yo by it does not work? You mean your handler does not get called? If I were you I will attach gdb put a breakpoint at ngx_event_timer.c::ngx_event_expire_timers and see that why at line number 149 ev->handler(ev); is not getting called.? For nginx debugging and figurng things out nothing works better than gdb trust me. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,4265,251034#msg-251034 From nginx-forum at nginx.us Thu Jun 19 05:46:40 2014 From: nginx-forum at nginx.us (prkumar) Date: Thu, 19 Jun 2014 01:46:40 -0400 Subject: upstream on OpenBSD not executing requests In-Reply-To: References: Message-ID: <05ebd2cb579d57fdccca6a05276b68c6.NginxMailingListEnglish@forum.nginx.org> I think you can not have two server directive in upstream, how nginx would know which one to forward request to? Will it do round robin? Not sure about that will have to check.. Can you try by removing one.. 
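For the record (and as a later reply in this thread confirms), several server entries inside one upstream block are exactly what the directive is for: nginx balances requests across them, round-robin by default, with optional weights and failure handling. A minimal sketch -- the upstream name and addresses are placeholders, not taken from the original poster's OpenBSD setup:

    upstream backend {
        server 10.0.0.1:8080;                        # picked in turn (round-robin)
        server 10.0.0.2:8080 weight=2 max_fails=3;   # optional per-server tuning
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }

So two server lines are not in themselves a reason for requests to stall; if one peer is down, nginx retries the other according to proxy_next_upstream.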
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250904,251036#msg-251036 From shahzaib.cb at gmail.com Thu Jun 19 06:12:00 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 19 Jun 2014 11:12:00 +0500 Subject: Download full mp4 file with proxy_cache or proxy_store !! Message-ID: we're using two servers (one proxy and one backend). Proxy server is using proxy_cache to cache mp4 files from backend server and working fine. When i stream a full video from cache, the header response gives me the cache-status: HIT but whenever i seek the mp4 file i.e http://url/test.mp4?start=33 , the Cache-status changes to : MISS . Does that mean, the proxy server is again downloading the same file after the 33 seconds ? Can't i use nginx proxy_cache to download whole mp4 file and and than seek from it instead of fetching the file again and again ? Does proxy_store has this functionality if not proxy_cache ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jun 19 06:57:27 2014 From: nginx-forum at nginx.us (tsunny) Date: Thu, 19 Jun 2014 02:57:27 -0400 Subject: Passing arguments to os.execute() Message-ID: <02a946dfacd81f94dfa630a36b47188b.NginxMailingListEnglish@forum.nginx.org> Hello, How to send the value of a variable to a shell program using os.execute()? I want to send the value of $uri to my shell program. Below is my code, location / { set_by_lua $result 'os.execute("/tmp/test.sh $uri")'; } If I access $1 in my program, the value is just 'uri' not the value of the '$uri'. Can please anyone tell me how to do this correctly. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251038,251038#msg-251038 From nginx-forum at nginx.us Thu Jun 19 07:00:44 2014 From: nginx-forum at nginx.us (tsunny) Date: Thu, 19 Jun 2014 03:00:44 -0400 Subject: $upstream_addr is returning blank Message-ID: Hello, I want to access the value of $upstream_addr. Below is the code, location / { echo "up = $upstream_addr"; } The response is just "up = " . What could be the reason for this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251039,251039#msg-251039 From vbart at nginx.com Thu Jun 19 07:10:31 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 19 Jun 2014 11:10:31 +0400 Subject: upstream on OpenBSD not executing requests In-Reply-To: <05ebd2cb579d57fdccca6a05276b68c6.NginxMailingListEnglish@forum.nginx.org> References: <05ebd2cb579d57fdccca6a05276b68c6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2158332.LFLkUTHPKW@vbart-workstation> On Thursday 19 June 2014 01:46:40 prkumar wrote: > I think you can not have two server directive in upstream, how nginx would > know which one to forward request to? > Will it do round robin? Not sure about that will have to check.. Can you try > by removing one.. > [..] You should read the documentation: http://nginx.org/r/upstream The main purpose of "upstream" directive is exactly to create a group of servers and to do load-balancing between them. wbr, Valentin V. Bartenev From vbart at nginx.com Thu Jun 19 07:32:57 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 19 Jun 2014 11:32:57 +0400 Subject: $upstream_addr is returning blank In-Reply-To: References: Message-ID: <2690990.VecHP0dBGW@vbart-workstation> On Thursday 19 June 2014 03:00:44 tsunny wrote: > Hello, > > I want to access the value of $upstream_addr. Below is the code, > > location / { > > echo "up = $upstream_addr"; > > } > > > The response is just "up = " . 
What could be the reason for this? > The $upstream_addr variable keeps the addresses of the servers that were contacted during request processing. So, if there aren't any at the moment of evaluation of the variable, then it will be empty. wbr, Valentin V. Bartenev From al-nginx at none.at Thu Jun 19 07:46:34 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Thu, 19 Jun 2014 09:46:34 +0200 Subject: Sticky equivalent In-Reply-To: <53A1DE6C.3080407@keptprivate.com> References: <53A168E0.70309@keptprivate.com> <20140618105904.GA81997@lo0.su> <20140618144914.GB86855@lo0.su> <53A1DE6C.3080407@keptprivate.com> Message-ID: Am 18-06-2014 20:46, schrieb Stefanita Rares Dumitrescu: > Oh god 1350$ Per year. It's cheap compared to some other commercial Servers, from my point of view. > On 18/06/2014 16:49, Ruslan Ermilov wrote: >> On Wed, Jun 18, 2014 at 11:22:24AM -0300, Fabiano Furtado Pessoa >> Coelho wrote: >>> Dear Ruslan, >>> >>> Can you post an example of using this hash feature? >>> >>> Thanks in advance. >> upstream u { >> hash $binary_remote_addr consistent; >> server 10.0.0.1; >> server 10.0.0.2; >> server 10.0.0.3; >> } >> >> You can use any expression as the "key", e.g. >> >> hash $cookie_uid consistent; >> >> It all depends on your needs actually. >> >>> On Wed, Jun 18, 2014 at 7:59 AM, Ruslan Ermilov wrote: >>>> On Wed, Jun 18, 2014 at 12:24:32PM +0200, Stefanita Rares Dumitrescu >>>> wrote: >>>>> Hi guys, >>>>> >>>>> I am running nginx 1.4 right now on a bunch of front end servers, i >>>>> am >>>>> running freebsd, and have nginx with sticky patch compiled from >>>>> ports. >>>>> >>>>> I can't upgrade to 1.6 or later, because the sticky port seems to >>>>> be >>>>> broken. Was there any similar feature introduced in nginx, or how >>>>> can we >>>>> work that one out? >>>> The latest version of nginx 1.7.2 includes the consistent hash >>>> feature [1]. >>>> The commercial version of nginx includes the sticky functionality >>>> [2]. >>>> >>>> [1] http://nginx.org/r/hash >>>> [2] http://nginx.org/r/sticky >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Thu Jun 19 08:53:50 2014 From: nginx-forum at nginx.us (crespin) Date: Thu, 19 Jun 2014 04:53:50 -0400 Subject: [PATCH] proposal to remove unused macro ngx_sleep() Message-ID: <1e5c0eff1c92369e4a9b0e2a0f66ffc4.NginxMailingListEnglish@forum.nginx.org> Hello, ngx_sleep() is newer used ... and I guess it will newer used. ./os/unix/ngx_time.h:#define ngx_sleep(s) (void) sleep(s) Regards, yves Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251044,251044#msg-251044 From mdounin at mdounin.ru Thu Jun 19 10:41:14 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Jun 2014 14:41:14 +0400 Subject: Problem with big files In-Reply-To: <5f882c1b478211e080d2a603f4931358.NginxMailingListEnglish@forum.nginx.org> References: <5f882c1b478211e080d2a603f4931358.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140619104114.GY1849@mdounin.ru> Hello! 
On Tue, Jun 17, 2014 at 06:51:10PM -0400, jakubp wrote: > Hi Justin > > Justin Dorfman Wrote: > ------------------------------------------------------- > > > > > > I use a patch > > > Maxim provided some time ago allowing range requests to receive HTTP > > 206 if > > > a resource is not in cache but it's determined to be cacheable... > > > > > > Can you please link to this patch? > > > > > > http://mailman.nginx.org/pipermail/nginx/2012-April/033522.html JFYI: improved version of this patch was committed in nginx 1.5.13: *) Feature: byte ranges support in the ngx_http_mp4_module and while saving responses to cache. http://hg.nginx.org/nginx/rev/345e4fd4bb64 -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Jun 19 12:24:25 2014 From: nginx-forum at nginx.us (tsunny) Date: Thu, 19 Jun 2014 08:24:25 -0400 Subject: $upstream_addr is returning blank In-Reply-To: <2690990.VecHP0dBGW@vbart-workstation> References: <2690990.VecHP0dBGW@vbart-workstation> Message-ID: Thanks Valentin V. Bartenev !! I was using it before the line proxy_pass http://xyz.com After placing the code after proxy_pass, I am able to see the values.. Regards, Sunny Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251039,251049#msg-251049 From nginx-forum at nginx.us Thu Jun 19 13:17:52 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Thu, 19 Jun 2014 09:17:52 -0400 Subject: Nginx on Windows PHP fastcgi read timeouts In-Reply-To: References: <20140618194728.GX1849@mdounin.ru> Message-ID: <5bfd9163ee84fb099f2300b8ee0599b7.NginxMailingListEnglish@forum.nginx.org> I found the cause on ever page i noticed it had Google Map's/Location running the person who wrote the Location addon in the component on Joomla for some reason thought it was good coding to wait for a link load. So the reason it time'd out is because it is connecting to a google maps link that never ever loads or finishes loading. (Invalid link.) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250964,251050#msg-251050 From mdounin at mdounin.ru Thu Jun 19 13:38:06 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Jun 2014 17:38:06 +0400 Subject: http Keepalive implementation In-Reply-To: References: Message-ID: <20140619133806.GB1849@mdounin.ru> Hello! On Thu, Jun 19, 2014 at 01:24:53AM -0400, prkumar wrote: > I was going through NGINX source code to implement keepalive for nginx > zeromq plugin that I have developed. > I have been inspired by ngx_http_upstream_keepalive_module. Was wondering > why nginx uses a kind of two linkedlist based stack implementation to > implement keepalive connection pool. Why not use typical linkedlist based > queue implementation. > Kindly refer to ngx_http_upstream_keepalive_module.c > :ngx_http_upstream_get_keepalive_peer > ngx_http_upstream_keepalive_module.c:ngx_http_upstream_free_keepalive_peer > > I used exactly like this without wondering why, now I kind of want to know > the reason. The upstream keepalive module uses the "cache" queue as an LRU queue, to be able to drop unused connections if there isn't enough room. Hence it basically uses it as a stack while storing / retrieving connections. The "free" queue is used to avoid runtime allocations. 
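As a usage footnote, separate from the internals above: enabling the
connection pool from the configuration side usually looks roughly like the
sketch below, where the upstream name and address are placeholders.

upstream backend {
    server 192.0.2.21:8080;

    # keep up to 16 idle connections to the upstream cached in each worker
    keepalive 16;
}

server {
    location / {
        proxy_pass http://backend;

        # HTTP/1.1 with an empty Connection header, so the upstream
        # connection is not closed after every request
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}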
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Jun 19 13:53:24 2014 From: nginx-forum at nginx.us (roman_mir) Date: Thu, 19 Jun 2014 09:53:24 -0400 Subject: upstream on OpenBSD not executing requests In-Reply-To: <2158332.LFLkUTHPKW@vbart-workstation> References: <2158332.LFLkUTHPKW@vbart-workstation> Message-ID: Valentin, thank you for your insight, I am upset at myself for not noticing some obvious errors, but that's the result of my stupid schedule, which I am personally responsible for at the end. So yes, it was supposed to be 10.1.1.10 and 10.1.1.11. At this point the balancing works, I am balancing 2 Tomcat instances, having session replication between them and using ip_hash to force the same session to go to the same instance. I guess I just have to specify conditions under which the balancer will assume that an instance is down and force all requests to just one live instance. Also working out the details of https connectivity. This is on OpenBSD and since there were new OpenSSL security issues fixed on the 5th of June, this means patching first. Nginx configuration for SSL looks simple enough on the surface, I will soon find out how it goes in reality :) Thank you, Roman Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250904,251053#msg-251053 From luky-37 at hotmail.com Thu Jun 19 15:36:13 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 19 Jun 2014 17:36:13 +0200 Subject: Download full mp4 file with proxy_cache or proxy_store !! In-Reply-To: References: Message-ID: > we're using two servers (one proxy and one backend). Proxy server? > is using proxy_cache to cache mp4 files from backend server and working? > fine. When i stream a full video from cache, the header response gives? > me the cache-status: HIT but whenever i seek the mp4 file i.e? > http://url/test.mp4?start=33 , the Cache-status changes to : MISS .? > Does that mean, the proxy server is again downloading the same file? > after the 33 seconds ?? >? > Can't i use nginx proxy_cache to download whole mp4 file and and than? > seek from it instead of fetching the file again and again ? Does? > proxy_store has this functionality if not proxy_cache ?? You would not have this problem with local files (rsync'ing them to your server, as was previsouly suggested in the other thread). What nginx release are you using? You probably need at least 1.5.13 as per: http://mailman.nginx.org/pipermail/nginx/2014-June/044118.html From nunomagalhaes at eu.ipp.pt Thu Jun 19 15:52:08 2014 From: nunomagalhaes at eu.ipp.pt (=?UTF-8?Q?Nuno_Magalh=C3=A3es?=) Date: Thu, 19 Jun 2014 16:52:08 +0100 Subject: Optimization of Nginx for 128 MB RAM VPS In-Reply-To: <6fe84af673423f2e8e4e9f8c75399fb5.NginxMailingListEnglish@forum.nginx.org> References: <6770c4e155dce95007379f703f19a7c5.NginxMailingListEnglish@forum.nginx.org> <6fe84af673423f2e8e4e9f8c75399fb5.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Thu, Jun 19, 2014 at 1:42 AM, Yumi wrote: > If you're uploading large dumps, you may wish to increase the size of > client_max_body_size And PHP's post_max_size: http://pt2.php.net/manual/en/ini.core.php#ini.post-max-size From shahzaib.cb at gmail.com Thu Jun 19 18:59:14 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 19 Jun 2014 23:59:14 +0500 Subject: using 2000+ ip prefixes in nginx geo module !! 
Message-ID: We've added 2000+ ip prefixes in a file "geo.conf" included in nginx vhost by using ngx-http_geo_module and received the following warning :- 2014/06/19 23:52:46 [warn] 1633#0: duplicate network "103.24.96.0/22", value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:40 2014/06/19 23:52:46 [warn] 1633#0: duplicate network "103.251.176.0/22", value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:50 2014/06/19 23:52:46 [warn] 1633#0: duplicate network "202.141.224.0/19", value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:1312 2014/06/19 23:52:46 [warn] 1633#0: duplicate network "202.142.160.0/19", value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:1355 2014/06/19 23:52:46 [warn] 1633#0: duplicate network "202.5.136.0/21", value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:1528 Due to it, nginx showing 504 gateway error for all ips included in geo.conf file -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Thu Jun 19 19:07:43 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 19 Jun 2014 20:07:43 +0100 Subject: using 2000+ ip prefixes in nginx geo module !! In-Reply-To: References: Message-ID: On 19 June 2014 19:59, shahzaib shahzaib wrote: > We've added 2000+ ip prefixes in a file "geo.conf" included in nginx vhost > by using ngx-http_geo_module and received the following warning :- > > 2014/06/19 23:52:46 [warn] 1633#0: duplicate network "103.24.96.0/22", > value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:40 What makes you think that this error message is incorrect? If it's correct and you have a duplicate entry, resolving the problem should be pretty simple ... From shahzaib.cb at gmail.com Thu Jun 19 20:06:48 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Fri, 20 Jun 2014 01:06:48 +0500 Subject: using 2000+ ip prefixes in nginx geo module !! In-Reply-To: References: Message-ID: For testing purpose, i have added only few prefixes :- geo { default 0; include geo.conf; } geo.conf 39.49.59.0/24 PK; 110.93.192.0/24 TW; 110.93.192.0/18 TW; 117.20.16.0/20 TW; 119.63.128.0/20 TW; 202.163.104.6/32 ARY; 203.124.63.0/24 CM; 221.132.112.0/21 TW; Now, whenever some ip from the list send request, nginx reply with gateway timeout :- curl -I http://files.com/files/videos/2014/06/10/14023918368e3411-240.mp4 HTTP/1.1 504 Gateway Time-out Server: nginx Date: Thu, 19 Jun 2014 19:59:50 GMT Content-Type: text/html Content-Length: 176 Connection: keep-alive In order to resolve this error, i have to manually remove a network from the file which is 110.93.192.0/18 TW; What so suspicious with this prefix 110.93.192.0/18 TW ? Why it is causing to crash every other requests ? On Fri, Jun 20, 2014 at 12:07 AM, Jonathan Matthews wrote: > On 19 June 2014 19:59, shahzaib shahzaib wrote: > > We've added 2000+ ip prefixes in a file "geo.conf" included in nginx > vhost > > by using ngx-http_geo_module and received the following warning :- > > > > 2014/06/19 23:52:46 [warn] 1633#0: duplicate network "103.24.96.0/22", > > value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:40 > > What makes you think that this error message is incorrect? > If it's correct and you have a duplicate entry, resolving the problem > should be pretty simple ... 
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists-nginx at swsystem.co.uk Thu Jun 19 20:12:04 2014 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Thu, 19 Jun 2014 21:12:04 +0100 Subject: using 2000+ ip prefixes in nginx geo module !! In-Reply-To: References: Message-ID: <53A34414.30104@swsystem.co.uk> These 2 overlap 110.93.192.0/24 TW; 110.93.192.0/18 TW; The /24 is within the /18. In this instance you want to remove the /24. It might be worth investigating if you've got any others that overlap. I think you can probably override with a different country code but using the same makes no sense. Steve. On 19/06/14 21:06, shahzaib shahzaib wrote: > For testing purpose, i have added only few prefixes :- > > geo { > default 0; > include geo.conf; > } > > geo.conf > > 39.49.59.0/24 PK; > 110.93.192.0/24 TW; > 110.93.192.0/18 TW; > 117.20.16.0/20 TW; > 119.63.128.0/20 TW; > 202.163.104.6/32 ARY; > 203.124.63.0/24 CM; > 221.132.112.0/21 TW; > > > Now, whenever some ip from the list send request, nginx reply with > gateway timeout :- > > curl -I http://files.com/files/videos/2014/06/10/14023918368e3411-240.mp4 > HTTP/1.1 504 Gateway Time-out > Server: nginx > Date: Thu, 19 Jun 2014 19:59:50 GMT > Content-Type: text/html > Content-Length: 176 > Connection: keep-alive > > In order to resolve this error, i have to manually remove a network > from the file which is 110.93.192.0/18 TW; > > What so suspicious with this prefix 110.93.192.0/18 > TW ? Why it is causing to crash every other > requests ? > > > > On Fri, Jun 20, 2014 at 12:07 AM, Jonathan Matthews > > wrote: > > On 19 June 2014 19:59, shahzaib shahzaib > wrote: > > We've added 2000+ ip prefixes in a file "geo.conf" included in > nginx vhost > > by using ngx-http_geo_module and received the following warning :- > > > > 2014/06/19 23:52:46 [warn] 1633#0: duplicate network > "103.24.96.0/22 ", > > value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:40 > > What makes you think that this error message is incorrect? > If it's correct and you have a duplicate entry, resolving the problem > should be pretty simple ... > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From lordnynex at gmail.com Thu Jun 19 21:32:57 2014 From: lordnynex at gmail.com (Lord Nynex) Date: Thu, 19 Jun 2014 16:32:57 -0500 Subject: http Keepalive implementation In-Reply-To: References: Message-ID: Hello, It seems ZMQ and Nginx keep coming up in conversations lately. There seems to be differing opinions on it's performance/feasibility in production. Do you plan to open source your code? I'd be curious to look at it. On Thu, Jun 19, 2014 at 12:24 AM, prkumar wrote: > I was going through NGINX source code to implement keepalive for nginx > zeromq plugin that I have developed. > I have been inspired by ngx_http_upstream_keepalive_module. Was wondering > why nginx uses a kind of two linkedlist based stack implementation to > implement keepalive connection pool. Why not use typical linkedlist based > queue implementation. 
> Kindly refer to ngx_http_upstream_keepalive_module.c > :ngx_http_upstream_get_keepalive_peer > ngx_http_upstream_keepalive_module.c:ngx_http_upstream_free_keepalive_peer > > I used exactly like this without wondering why, now I kind of want to know > the reason. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,251033,251033#msg-251033 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Fri Jun 20 04:55:40 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Fri, 20 Jun 2014 09:55:40 +0500 Subject: using 2000+ ip prefixes in nginx geo module !! In-Reply-To: <53A34414.30104@swsystem.co.uk> References: <53A34414.30104@swsystem.co.uk> Message-ID: I removed /24 on per your suggestion and also used different code for override but the issue persists. Modified geo.conf :- 39.49.59.0/24 PK; 110.93.192.0/18 US; 117.20.16.0/20 TW; 119.63.128.0/20 TW; 202.163.104.6/32 ARY; 203.124.63.0/24 CM; 221.132.112.0/21 TW; 110.93.192.0/24 TW; is not added now. On Fri, Jun 20, 2014 at 1:12 AM, Steve Wilson wrote: > These 2 overlap > > 110.93.192.0/24 TW; > 110.93.192.0/18 TW; > > The /24 is within the /18. In this instance you want to remove the /24. > > It might be worth investigating if you've got any others that overlap. I > think you can probably override with a different country code but using the > same makes no sense. > > Steve. > > > On 19/06/14 21:06, shahzaib shahzaib wrote: > > For testing purpose, i have added only few prefixes :- > > geo { > default 0; > include geo.conf; > } > > geo.conf > > 39.49.59.0/24 PK; > 110.93.192.0/24 TW; > 110.93.192.0/18 TW; > 117.20.16.0/20 TW; > 119.63.128.0/20 TW; > 202.163.104.6/32 ARY; > 203.124.63.0/24 CM; > 221.132.112.0/21 TW; > > > Now, whenever some ip from the list send request, nginx reply with > gateway timeout :- > > curl -I http://files.com/files/videos/2014/06/10/14023918368e3411-240.mp4 > HTTP/1.1 504 Gateway Time-out > Server: nginx > Date: Thu, 19 Jun 2014 19:59:50 GMT > Content-Type: text/html > Content-Length: 176 > Connection: keep-alive > > In order to resolve this error, i have to manually remove a network from > the file which is 110.93.192.0/18 TW; > > What so suspicious with this prefix 110.93.192.0/18 TW ? Why it is > causing to crash every other requests ? > > > > On Fri, Jun 20, 2014 at 12:07 AM, Jonathan Matthews < > contact at jpluscplusm.com> wrote: > >> On 19 June 2014 19:59, shahzaib shahzaib wrote: >> > We've added 2000+ ip prefixes in a file "geo.conf" included in nginx >> vhost >> > by using ngx-http_geo_module and received the following warning :- >> > >> > 2014/06/19 23:52:46 [warn] 1633#0: duplicate network "103.24.96.0/22", >> > value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:40 >> >> What makes you think that this error message is incorrect? >> If it's correct and you have a duplicate entry, resolving the problem >> should be pretty simple ... 
>> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Fri Jun 20 04:57:27 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Fri, 20 Jun 2014 09:57:27 +0500 Subject: using 2000+ ip prefixes in nginx geo module !! In-Reply-To: References: <53A34414.30104@swsystem.co.uk> Message-ID: Issue will only resolve once i remove 110.93.192.0/18 US; from geo.conf. On Fri, Jun 20, 2014 at 9:55 AM, shahzaib shahzaib wrote: > I removed /24 on per your suggestion and also used different code for > override but the issue persists. Modified geo.conf :- > > 39.49.59.0/24 PK; > 110.93.192.0/18 US; > > 117.20.16.0/20 TW; > 119.63.128.0/20 TW; > 202.163.104.6/32 ARY; > 203.124.63.0/24 CM; > 221.132.112.0/21 TW; > > 110.93.192.0/24 TW; is not added now. > > > On Fri, Jun 20, 2014 at 1:12 AM, Steve Wilson > wrote: > >> These 2 overlap >> >> 110.93.192.0/24 TW; >> 110.93.192.0/18 TW; >> >> The /24 is within the /18. In this instance you want to remove the /24. >> >> It might be worth investigating if you've got any others that overlap. I >> think you can probably override with a different country code but using the >> same makes no sense. >> >> Steve. >> >> >> On 19/06/14 21:06, shahzaib shahzaib wrote: >> >> For testing purpose, i have added only few prefixes :- >> >> geo { >> default 0; >> include geo.conf; >> } >> >> geo.conf >> >> 39.49.59.0/24 PK; >> 110.93.192.0/24 TW; >> 110.93.192.0/18 TW; >> 117.20.16.0/20 TW; >> 119.63.128.0/20 TW; >> 202.163.104.6/32 ARY; >> 203.124.63.0/24 CM; >> 221.132.112.0/21 TW; >> >> >> Now, whenever some ip from the list send request, nginx reply with >> gateway timeout :- >> >> curl -I >> http://files.com/files/videos/2014/06/10/14023918368e3411-240.mp4 >> HTTP/1.1 504 Gateway Time-out >> Server: nginx >> Date: Thu, 19 Jun 2014 19:59:50 GMT >> Content-Type: text/html >> Content-Length: 176 >> Connection: keep-alive >> >> In order to resolve this error, i have to manually remove a network from >> the file which is 110.93.192.0/18 TW; >> >> What so suspicious with this prefix 110.93.192.0/18 TW ? Why it is >> causing to crash every other requests ? >> >> >> >> On Fri, Jun 20, 2014 at 12:07 AM, Jonathan Matthews < >> contact at jpluscplusm.com> wrote: >> >>> On 19 June 2014 19:59, shahzaib shahzaib wrote: >>> > We've added 2000+ ip prefixes in a file "geo.conf" included in nginx >>> vhost >>> > by using ngx-http_geo_module and received the following warning :- >>> > >>> > 2014/06/19 23:52:46 [warn] 1633#0: duplicate network "103.24.96.0/22", >>> > value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:40 >>> >>> What makes you think that this error message is incorrect? >>> If it's correct and you have a duplicate entry, resolving the problem >>> should be pretty simple ... 
>>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> >> _______________________________________________ >> nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Fri Jun 20 05:04:17 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Fri, 20 Jun 2014 10:04:17 +0500 Subject: using 2000+ ip prefixes in nginx geo module !! In-Reply-To: References: <53A34414.30104@swsystem.co.uk> Message-ID: looks like i have got the issue. Any requests comes from the ip located in geo.conf will be forwarded to a domain whose ip resolve into 110.93.X.X. Now when a request comes from the ip 110.93.X.X , nginx somehow unable to proxy_pass this prefix(110.93.X.X) it to the ip 110.93.X.X and shows the bad gateway error. On Fri, Jun 20, 2014 at 9:57 AM, shahzaib shahzaib wrote: > Issue will only resolve once i remove 110.93.192.0/18 US; from geo.conf. > > > On Fri, Jun 20, 2014 at 9:55 AM, shahzaib shahzaib > wrote: > >> I removed /24 on per your suggestion and also used different code for >> override but the issue persists. Modified geo.conf :- >> >> 39.49.59.0/24 PK; >> 110.93.192.0/18 US; >> >> 117.20.16.0/20 TW; >> 119.63.128.0/20 TW; >> 202.163.104.6/32 ARY; >> 203.124.63.0/24 CM; >> 221.132.112.0/21 TW; >> >> 110.93.192.0/24 TW; is not added now. >> >> >> On Fri, Jun 20, 2014 at 1:12 AM, Steve Wilson > > wrote: >> >>> These 2 overlap >>> >>> 110.93.192.0/24 TW; >>> 110.93.192.0/18 TW; >>> >>> The /24 is within the /18. In this instance you want to remove the /24. >>> >>> It might be worth investigating if you've got any others that overlap. I >>> think you can probably override with a different country code but using the >>> same makes no sense. >>> >>> Steve. >>> >>> >>> On 19/06/14 21:06, shahzaib shahzaib wrote: >>> >>> For testing purpose, i have added only few prefixes :- >>> >>> geo { >>> default 0; >>> include geo.conf; >>> } >>> >>> geo.conf >>> >>> 39.49.59.0/24 PK; >>> 110.93.192.0/24 TW; >>> 110.93.192.0/18 TW; >>> 117.20.16.0/20 TW; >>> 119.63.128.0/20 TW; >>> 202.163.104.6/32 ARY; >>> 203.124.63.0/24 CM; >>> 221.132.112.0/21 TW; >>> >>> >>> Now, whenever some ip from the list send request, nginx reply with >>> gateway timeout :- >>> >>> curl -I >>> http://files.com/files/videos/2014/06/10/14023918368e3411-240.mp4 >>> HTTP/1.1 504 Gateway Time-out >>> Server: nginx >>> Date: Thu, 19 Jun 2014 19:59:50 GMT >>> Content-Type: text/html >>> Content-Length: 176 >>> Connection: keep-alive >>> >>> In order to resolve this error, i have to manually remove a network >>> from the file which is 110.93.192.0/18 TW; >>> >>> What so suspicious with this prefix 110.93.192.0/18 TW ? Why it is >>> causing to crash every other requests ? 
>>> >>> >>> >>> On Fri, Jun 20, 2014 at 12:07 AM, Jonathan Matthews < >>> contact at jpluscplusm.com> wrote: >>> >>>> On 19 June 2014 19:59, shahzaib shahzaib wrote: >>>> > We've added 2000+ ip prefixes in a file "geo.conf" included in nginx >>>> vhost >>>> > by using ngx-http_geo_module and received the following warning :- >>>> > >>>> > 2014/06/19 23:52:46 [warn] 1633#0: duplicate network "103.24.96.0/22 >>>> ", >>>> > value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:40 >>>> >>>> What makes you think that this error message is incorrect? >>>> If it's correct and you have a duplicate entry, resolving the problem >>>> should be pretty simple ... >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >>> >>> _______________________________________________ >>> nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx >>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Fri Jun 20 05:10:08 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 20 Jun 2014 07:10:08 +0200 Subject: Sticky equivalent In-Reply-To: References: <53A168E0.70309@keptprivate.com> <20140618105904.GA81997@lo0.su> <20140618144914.GB86855@lo0.su> <53A1DE6C.3080407@keptprivate.com> Message-ID: On Thu, Jun 19, 2014 at 9:46 AM, Aleksandar Lazic wrote: > > Am 18-06-2014 20:46, schrieb Stefanita Rares Dumitrescu: > >> Oh god 1350$ >> > > Per year. > > It's cheap compared to some other commercial Servers, from my point of > view. > ?... and it is expensive compared to the open-source Apache.? ?nginx as FOSS is great. nginx Plus is worth as much as other commercial/closed-source products.?.. Well. The 'sticky' property is out of scope of this ML since not part of the FOSS nginx. End of discussion. The docs are already polluted with mentions of the commercial product since, surprisingly, nginx Plus docs have not been separated from nginx FOSS ones. On a personal stance, I would like not to see this ML polluted with discussions on nginx Plus. ? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Fri Jun 20 08:51:38 2014 From: lists at ruby-forum.com (Yifeng Wang) Date: Fri, 20 Jun 2014 10:51:38 +0200 Subject: ssl proxys https web server is very slow Message-ID: <3645f32afc8f53d4c15aac93a6799ec5@ruby-forum.com> Hi, It's my first time using NGINX to proxy other web servers. I set a variable in location, this variable may be gotten in cookie or args. if I use it directly likes "proxy_pass https://$nodeIp2;", it will get the response for a long time. but if I hardcode likes "proxy_pass https://147.128.22.152:8443" it works normally. Do I need to set more cofiguration parameters to solve this problem.Below is the segment of my windows https configuration. http { ... server { listen 443 ssl; server_name localhost; ssl_certificate server.crt; ssl_certificate_key server.key; location /pau6000lct/ { set $nodeIp 147.128.22.152:8443; proxy_pass https://$nodeIp; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; } } } -- Posted via http://www.ruby-forum.com/. 
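One thing worth trying when the backend address has to stay in a variable is
to point the variable at a preconfigured upstream block instead of a bare
IP:port, since SSL sessions to servers defined in an upstream group can be
reused between requests. A rough sketch, reusing the address from the post
above with an invented upstream name:

upstream pau_node {
    server 147.128.22.152:8443;
}

server {
    listen 443 ssl;
    server_name localhost;

    ssl_certificate     server.crt;
    ssl_certificate_key server.key;

    location /pau6000lct/ {
        # the value could equally come from a map on a cookie or argument
        set $nodeIp pau_node;
        proxy_pass https://$nodeIp;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

If the set of nodes is known when nginx is configured, one upstream block
per node plus a map from the cookie/argument value to the upstream name
keeps the dynamic selection without giving up SSL session reuse.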
From al-nginx at none.at Fri Jun 20 12:07:43 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 20 Jun 2014 14:07:43 +0200 Subject: Sticky equivalent In-Reply-To: References: <53A168E0.70309@keptprivate.com> <20140618105904.GA81997@lo0.su> <20140618144914.GB86855@lo0.su> <53A1DE6C.3080407@keptprivate.com> Message-ID: Dear B. R. > Am 20-06-2014 07:10, schrieb B.R.: >> On Thu, Jun 19, 2014 at 9:46 AM, Aleksandar Lazic >> wrote: >> >>> Am 18-06-2014 20:46, schrieb Stefanita Rares Dumitrescu: >>> Oh god 1350$ >> >> Per year. >> >> It's cheap compared to some other commercial Servers, from my point of >> view. > ? ... and it is expensive compared to the open-source Apache.? No comment. ?> nginx as FOSS is great. Full Ack. > nginx Plus is worth as much as other commercial/closed-source > products.?.. On this point we have different opinions. > Well. > The 'sticky' property is out of scope of this ML since not part of the > FOSS nginx. I haven't seen in the past such a separation, but it's ok for me , if the ML-Moderators want this. http://nginx.org/en/support.html > End of discussion. It's up to you and every other ML participant to answer to a post on the ML or not. Command not acknowledge. Well, I'm with you that the price discussion is here wrong. > The docs are already polluted with mentions of the commercial product > since, surprisingly, > nginx Plus docs have not been separated from nginx FOSS ones. I like to know which features are available on a SW even in the commercial one. That's my point of view, of course yours could be another one. > On a personal stance, I would like not to see this ML polluted with > discussions on nginx Plus. But may be other? In the past I haven't seen to much such discussions on this ML. > B. R. Cheers Aleks From mdounin at mdounin.ru Fri Jun 20 12:20:30 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 20 Jun 2014 16:20:30 +0400 Subject: ssl proxys https web server is very slow In-Reply-To: <3645f32afc8f53d4c15aac93a6799ec5@ruby-forum.com> References: <3645f32afc8f53d4c15aac93a6799ec5@ruby-forum.com> Message-ID: <20140620122030.GG1849@mdounin.ru> Hello! On Fri, Jun 20, 2014 at 10:51:38AM +0200, Yifeng Wang wrote: > Hi, It's my first time using NGINX to proxy other web servers. I set a > variable in location, this variable may be gotten in cookie or args. if > I use it directly likes "proxy_pass https://$nodeIp2;", it will get the > response for a long time. but if I hardcode likes "proxy_pass > https://147.128.22.152:8443" it works normally. Do I need to set more > cofiguration parameters to solve this problem.Below is the segment of my > windows https configuration. > > http { > ... > server { > listen 443 ssl; > server_name localhost; > > ssl_certificate server.crt; > ssl_certificate_key server.key; > > location /pau6000lct/ { > set $nodeIp 147.128.22.152:8443; > proxy_pass https://$nodeIp; Use of variables in the proxy_pass, in particular, implies that SSL sessions will not be reused (as upstream address is not known in advance, and there is no associated storage for an SSL session). This means that each connection will have to do full SSL handshake, and this is likely the reason for the performance problems you see. Solution is to use proxy_pass without variables, or use preconfigured upstream{} blocks instead of ip addresses if you have to use variables. 
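To make the cache-key point concrete: the default key is built from the full
request URI, query string included, so /test.mp4 and /test.mp4?start=33 are
stored as two separate cache objects. A sketch of the two variants, for
illustration only (as noted above, keying on $uri alone still does not make
?start= seeking work through the cache, because the mp4 module only operates
on local files):

# default: the query string is part of the key, so every distinct
# ?start= value is fetched from the backend and cached separately
proxy_cache_key $scheme$proxy_host$request_uri;

# keying on $uri alone makes /test.mp4 and /test.mp4?start=33 share one
# cache entry, which is only safe when the upstream response does not
# depend on the query string
proxy_cache_key $scheme$proxy_host$uri;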
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Jun 20 13:05:44 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 20 Jun 2014 09:05:44 -0400 Subject: strange map $request issue Message-ID: For example: map $request $testvar { default 0; ~*montytest 1; } if ($testvar) { return 412; } [20/Jun/2014:xx:xx:20 +0200] 69.64.xxxx:52393 - - "GET /montytest/ HTTP/1.1" 412 712 "-" "Mozilla/5.0........ [20/Jun/2014:xx:xx:29 +0200] 115.254.xxxx:59855 - - "HEAD /montytest/ HTTP/1.1" 400 0 "-" "-" "-" - Why does a GET work but not a HEAD ?? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251074,251074#msg-251074 From mdounin at mdounin.ru Fri Jun 20 13:21:23 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 20 Jun 2014 17:21:23 +0400 Subject: strange map $request issue In-Reply-To: References: Message-ID: <20140620132123.GH1849@mdounin.ru> Hello! On Fri, Jun 20, 2014 at 09:05:44AM -0400, itpp2012 wrote: > For example: > > map $request $testvar { > default 0; > ~*montytest 1; > } > > if ($testvar) { return 412; } > > [20/Jun/2014:xx:xx:20 +0200] 69.64.xxxx:52393 - - "GET /montytest/ HTTP/1.1" > 412 712 "-" "Mozilla/5.0........ > > [20/Jun/2014:xx:xx:29 +0200] 115.254.xxxx:59855 - - "HEAD /montytest/ > HTTP/1.1" 400 0 "-" "-" "-" - > > Why does a GET work but not a HEAD ?? It looks like the HEAD request was invalid, and hence 400 was returned before any further processing. The reason why 400 was returned should be in error log at info level. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Jun 20 13:35:20 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 20 Jun 2014 09:35:20 -0400 Subject: strange map $request issue In-Reply-To: <20140620132123.GH1849@mdounin.ru> References: <20140620132123.GH1849@mdounin.ru> Message-ID: <7c6b047eaf73705bb7f8cf7c1b387c04.NginxMailingListEnglish@forum.nginx.org> The log entries are both in access.log, nothing in error.log, maybe a try_files thing ? though the IF is after try_files and works for a GET. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251074,251076#msg-251076 From mdounin at mdounin.ru Fri Jun 20 14:11:52 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 20 Jun 2014 18:11:52 +0400 Subject: using 2000+ ip prefixes in nginx geo module !! In-Reply-To: <53A34414.30104@swsystem.co.uk> References: <53A34414.30104@swsystem.co.uk> Message-ID: <20140620141152.GI1849@mdounin.ru> Hello! On Thu, Jun 19, 2014 at 09:12:04PM +0100, Steve Wilson wrote: > These 2 overlap > > 110.93.192.0/24 TW; > 110.93.192.0/18 TW; > > The /24 is within the /18. In this instance you want to remove the /24. > > It might be worth investigating if you've got any others that overlap. I > think you can probably override with a different country code but using > the same makes no sense. For nginx, overlapping of CIDR blocks doesn't matter - it's correct and expected use case. It may appear, e.g., if a more specific block has some additional properties in the original data, or if some intermediate block was present at some point, but later was removed. Warning messages will only appear if exactly the same block is already present. That is, the following will produce a warning: 127.0.0.0/8 ZZ; 127.0.0.0/8 ZZ; But this will be fine: 127.0.0.0/8 ZZ; 127.0.0.0/24 ZZ; Note well that the warning messages are just warning messages. Configuration is handled fine, duplicate blocks will be simply ignored. The problem of the original question author is likely completely unrelated. 
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Jun 20 14:24:05 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 20 Jun 2014 18:24:05 +0400 Subject: strange map $request issue In-Reply-To: <7c6b047eaf73705bb7f8cf7c1b387c04.NginxMailingListEnglish@forum.nginx.org> References: <20140620132123.GH1849@mdounin.ru> <7c6b047eaf73705bb7f8cf7c1b387c04.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140620142405.GK1849@mdounin.ru> Hello! On Fri, Jun 20, 2014 at 09:35:20AM -0400, itpp2012 wrote: > The log entries are both in access.log, nothing in error.log, maybe a > try_files thing ? though the IF is after try_files and works for a GET. Note that you have to configure error_log to log "info" level messages. By default, logging level is "error". http://nginx.org/r/error_log -- Maxim Dounin http://nginx.org/ From moseleymark at gmail.com Fri Jun 20 17:14:54 2014 From: moseleymark at gmail.com (Mark Moseley) Date: Fri, 20 Jun 2014 10:14:54 -0700 Subject: ssl proxys https web server is very slow In-Reply-To: <20140620122030.GG1849@mdounin.ru> References: <3645f32afc8f53d4c15aac93a6799ec5@ruby-forum.com> <20140620122030.GG1849@mdounin.ru> Message-ID: On Fri, Jun 20, 2014 at 5:20 AM, Maxim Dounin wrote: > Hello! > > On Fri, Jun 20, 2014 at 10:51:38AM +0200, Yifeng Wang wrote: > > > Hi, It's my first time using NGINX to proxy other web servers. I set a > > variable in location, this variable may be gotten in cookie or args. if > > I use it directly likes "proxy_pass https://$nodeIp2;", it will get the > > response for a long time. but if I hardcode likes "proxy_pass > > https://147.128.22.152:8443" it works normally. Do I need to set more > > cofiguration parameters to solve this problem.Below is the segment of my > > windows https configuration. > > > > http { > > ... > > server { > > listen 443 ssl; > > server_name localhost; > > > > ssl_certificate server.crt; > > ssl_certificate_key server.key; > > > > location /pau6000lct/ { > > set $nodeIp 147.128.22.152:8443; > > proxy_pass https://$nodeIp; > > Use of variables in the proxy_pass, in particular, implies that > SSL sessions will not be reused (as upstream address is not known > in advance, and there is no associated storage for an SSL > session). This means that each connection will have to do full > SSL handshake, and this is likely the reason for the performance > problems you see. > > Solution is to use proxy_pass without variables, or use > preconfigured upstream{} blocks instead of ip addresses if you > have to use variables. > So to prevent the heart attack I almost just had, can you confirm how I interpret that last statement: If you define your upstream using "upstream upstream_name etc" and then use a variable indicating the name of the upstream in proxy_pass statement, that will *not* cause SSL sessions to not be reused. I.e. proxy_pass with a variable indicating upstream would not cause a performance issue. Is that correct? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jun 20 17:35:50 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 20 Jun 2014 13:35:50 -0400 Subject: strange map $request issue In-Reply-To: <20140620142405.GK1849@mdounin.ru> References: <20140620142405.GK1849@mdounin.ru> Message-ID: <5fd565663dadc25907f938e5d7d781e7.NginxMailingListEnglish@forum.nginx.org> It was a malformed request so a 400 is correct, a valid HEAD in this case does return a 412. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251074,251080#msg-251080 From mdounin at mdounin.ru Fri Jun 20 19:13:07 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 20 Jun 2014 23:13:07 +0400 Subject: ssl proxys https web server is very slow In-Reply-To: References: <3645f32afc8f53d4c15aac93a6799ec5@ruby-forum.com> <20140620122030.GG1849@mdounin.ru> Message-ID: <20140620191306.GP1849@mdounin.ru> Hello! On Fri, Jun 20, 2014 at 10:14:54AM -0700, Mark Moseley wrote: > On Fri, Jun 20, 2014 at 5:20 AM, Maxim Dounin wrote: > > > Hello! > > > > On Fri, Jun 20, 2014 at 10:51:38AM +0200, Yifeng Wang wrote: > > > > > Hi, It's my first time using NGINX to proxy other web servers. I set a > > > variable in location, this variable may be gotten in cookie or args. if > > > I use it directly likes "proxy_pass https://$nodeIp2;", it will get the > > > response for a long time. but if I hardcode likes "proxy_pass > > > https://147.128.22.152:8443" it works normally. Do I need to set more > > > cofiguration parameters to solve this problem.Below is the segment of my > > > windows https configuration. > > > > > > http { > > > ... > > > server { > > > listen 443 ssl; > > > server_name localhost; > > > > > > ssl_certificate server.crt; > > > ssl_certificate_key server.key; > > > > > > location /pau6000lct/ { > > > set $nodeIp 147.128.22.152:8443; > > > proxy_pass https://$nodeIp; > > > > Use of variables in the proxy_pass, in particular, implies that > > SSL sessions will not be reused (as upstream address is not known > > in advance, and there is no associated storage for an SSL > > session). This means that each connection will have to do full > > SSL handshake, and this is likely the reason for the performance > > problems you see. > > > > Solution is to use proxy_pass without variables, or use > > preconfigured upstream{} blocks instead of ip addresses if you > > have to use variables. > > > > So to prevent the heart attack I almost just had, can you confirm how I > interpret that last statement: > > If you define your upstream using "upstream upstream_name etc" and then use > a variable indicating the name of the upstream in proxy_pass statement, > that will *not* cause SSL sessions to not be reused. I.e. proxy_pass with a > variable indicating upstream would not cause a performance issue. > > Is that correct? Yes. If there is an upstream{} block, SSL sessions with upstream servers will be reused regardless of use of variables in the proxy_pass directive. -- Maxim Dounin http://nginx.org/ From shahzaib.cb at gmail.com Fri Jun 20 19:14:36 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sat, 21 Jun 2014 00:14:36 +0500 Subject: Download full mp4 file with proxy_cache or proxy_store !! In-Reply-To: References: Message-ID: @Lukas, we're using nginx-1.6 and byte range caching is already enabled by default(i guess). 
Below is the curl request :- curl -H Range:bytes=16- -I http://videos.files.com/files/videos/2014/06/20/14032606291de19-360.mp4 HTTP/1.1 206 Partial Content Server: nginx Date: Fri, 20 Jun 2014 13:36:05 GMT Content-Type: video/mp4 Content-Length: 25446010 Connection: keep-alive Last-Modified: Fri, 20 Jun 2014 11:04:11 GMT ETag: "53a4152b-184468a" Expires: Fri, 27 Jun 2014 13:36:05 GMT Cache-Control: max-age=604800 X-Cache-Status: HIT Content-Range: bytes 16-25446025/25446026 Could you tell me how can i check using curl that if nginx downloading the whole file each time the user seek the video with mp4 psuedo module i.e http://url/files/videos/test-360.mp4?start=39. I am newbie to proxy_cache and much confused about the behavior. I know rsync is better solution but it cannot cache the videos on fly instead to have a schedule to rsync file in off-peak hours. We want to cache only videos which are accessed 10 times and nginx is doing well with proxy_cache_min directive. On Thu, Jun 19, 2014 at 8:36 PM, Lukas Tribus wrote: > > we're using two servers (one proxy and one backend). Proxy server > > is using proxy_cache to cache mp4 files from backend server and working > > fine. When i stream a full video from cache, the header response gives > > me the cache-status: HIT but whenever i seek the mp4 file i.e > > http://url/test.mp4?start=33 , the Cache-status changes to : MISS . > > Does that mean, the proxy server is again downloading the same file > > after the 33 seconds ? > > > > Can't i use nginx proxy_cache to download whole mp4 file and and than > > seek from it instead of fetching the file again and again ? Does > > proxy_store has this functionality if not proxy_cache ? > > You would not have this problem with local files (rsync'ing them to your > server, as was previsouly suggested in the other thread). > > > What nginx release are you using? You probably need at least 1.5.13 as > per: > http://mailman.nginx.org/pipermail/nginx/2014-June/044118.html > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eswenson at intertrust.com Fri Jun 20 19:32:36 2014 From: eswenson at intertrust.com (Eric Swenson) Date: Fri, 20 Jun 2014 19:32:36 +0000 Subject: No CORS Workaround - SSL Proxy Message-ID: We run a API web service and have two web sites that access the web service via AJAX. The web sites are accessed via HTTPS and, for security reasons, we need to have the API web service also accessed by HTTPS. Due to the need to support the IE9 browser, which does not properly support CORS, we are unable to have the web applications on our web servers configured to access the API web service through a different hostname than the hostnames of the two web sites. Consequently, we trick IE9 into thinking the origin host (web site) and destination host (API service) are on the same host and proxy requests from the web sites to the web service via proxy_pass. Unfortunately, since the API web service must be accessed by HTTPS, nginx has to establish an SSL session with the API web service, because we cannot proxy to HTTP. Our config looks something like this ? for simplicity I only show one of the web sites nginx config. 
server { listen 443; server_name app.example.com; // this is the web application server_tokens off; ssl on; ssl_certificate cert.pem; ssl_certificate_key cert.key; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; // this URL pattern is interpreted as meaning: forward the request to the web service running on another host location /svc/api/ { proxy_pass https://svc.example.com/api/; // this is the web service running on another host proxy_set_header Host svc.example.com; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; } Location / { // normal web site access here } ? } This works fine. However, every once in a while (say, every week or so), traffic to https://app.example.com/svc/api/xxxx returns gateway 502 errors. The API service (located at https://svc.example.com/api) is working fine and is accessible directly. However, through the proxy setup (above), nginx will not pass traffic. Simply restarting nginx gets it working again for another week or so, only to have it get into the same state again some random interval later. Does anyone have any ideas what might be causing nginx to fail to proxy traffic when no changes to the configuration have been made and the backend service is functioning normally? Since I anticipate some will want to tell me that proxying to HTTPS is a bad idea, please realize we do not have the luxury of talking to the backend service (which lives on the Internet and is accessed by multiple parties) via HTTP. Also, yes, I realize that the proxy_set_header stuff probably has no useful effect with HTTPS proxying. Thanks much in advance. ? Eric From mdounin at mdounin.ru Fri Jun 20 22:46:26 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 21 Jun 2014 02:46:26 +0400 Subject: No CORS Workaround - SSL Proxy In-Reply-To: References: Message-ID: <20140620224626.GR1849@mdounin.ru> Hello! On Fri, Jun 20, 2014 at 07:32:36PM +0000, Eric Swenson wrote: [...] > This works fine. However, every once in a while (say, every > week or so), traffic to https://app.example.com/svc/api/xxxx > returns gateway 502 errors. The API service (located at > https://svc.example.com/api) is working fine and is accessible > directly. However, through the proxy setup (above), nginx will > not pass traffic. Simply restarting nginx gets it working again > for another week or so, only to have it get into the same state > again some random interval later. > > Does anyone have any ideas what might be causing nginx to fail > to proxy traffic when no changes to the configuration have been > made and the backend service is functioning normally? First of all, it may be a good idea to take a look into error log. -- Maxim Dounin http://nginx.org/ From eswenson at intertrust.com Fri Jun 20 23:29:39 2014 From: eswenson at intertrust.com (Eric Swenson) Date: Fri, 20 Jun 2014 23:29:39 +0000 Subject: No CORS Workaround - SSL Proxy In-Reply-To: <20140620224626.GR1849@mdounin.ru> References: <20140620224626.GR1849@mdounin.ru> Message-ID: Hello Maxim, On 6/20/14, 3:46 PM, "Maxim Dounin" wrote: >Hello! > >On Fri, Jun 20, 2014 at 07:32:36PM +0000, Eric Swenson wrote: > >[...] > >> This works fine. However, every once in a while (say, every >> week or so), traffic to https://app.example.com/svc/api/xxxx >> returns gateway 502 errors. The API service (located at >> https://svc.example.com/api) is working fine and is accessible >> directly. 
However, through the proxy setup (above), nginx will >> not pass traffic. Simply restarting nginx gets it working again >> for another week or so, only to have it get into the same state >> again some random interval later. >> >> Does anyone have any ideas what might be causing nginx to fail >> to proxy traffic when no changes to the configuration have been >> made and the backend service is functioning normally? > >First of all, it may be a good idea to take a look into error log. The only messages in the error log were ones for hours before and for hours after the ?problem?, of the form: 2014/06/17 17:14:16 [error] 28165#0: *1508 no user/password was provided for basic authentication, client: 11.12.13.14, server: app.test.example.com, request: "GET / HTTP/1.1", host: ?app.test.example.com" There was nothing related to the 502 errors around the time of the first request that failed with a 502 error code. The access log shows quite a few requests failing with a 502 status code from that point forward, up until the time that I restarted nginx. ? Eric From falcacibar at gmail.com Sat Jun 21 03:22:43 2014 From: falcacibar at gmail.com (Felipe Buccioni) Date: Fri, 20 Jun 2014 23:22:43 -0400 Subject: mail proxy auth Message-ID: I have wrote a perl module for nginx to authenticate IMAP/POP proxy against another IMAP server and I like your feedback because I wrote a little quick-and-dirty, and with your help this could be useful for many people the community. The module also have cool stuff like per-domain servers, ssl, DBI and plaintext for lookup servers/domains. All your feedback/comments is very welcome. https://github.com/falcacibar/nginx_auth_imap_perl Felipe Alcacibar Buccioni Development, consultancy and integration in Web, ITC and GIS solutions and systems -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Sat Jun 21 07:10:38 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Sat, 21 Jun 2014 09:10:38 +0200 Subject: Download full mp4 file with proxy_cache or proxy_store !! In-Reply-To: References: , , Message-ID: Hi, > @Lukas, we're using nginx-1.6 and byte range caching is already enabled > by default(i guess). Below is the curl request :- > > curl -H Range:bytes=16- -I > http://videos.files.com/files/videos/2014/06/20/14032606291de19-360.mp4 > HTTP/1.1 206 Partial Content > Server: nginx > Date: Fri, 20 Jun 2014 13:36:05 GMT > Content-Type: video/mp4 > Content-Length: 25446010 > Connection: keep-alive > Last-Modified: Fri, 20 Jun 2014 11:04:11 GMT > ETag: "53a4152b-184468a" > Expires: Fri, 27 Jun 2014 13:36:05 GMT > Cache-Control: max-age=604800 > X-Cache-Status: HIT > Content-Range: bytes 16-25446025/25446026 > > Could you tell me how can i check using curl that if nginx downloading > the whole file each time the user seek the video with mp4 psuedo > module i.e http://url/files/videos/test-360.mp4?start=39. Check the Content-Length header in the response. Regards, Lukas From nginx-forum at nginx.us Sat Jun 21 14:35:20 2014 From: nginx-forum at nginx.us (alayim) Date: Sat, 21 Jun 2014 10:35:20 -0400 Subject: ngx_http_process_header_line function in source code In-Reply-To: <20140616113119.GP1849@mdounin.ru> References: <20140616113119.GP1849@mdounin.ru> Message-ID: I see your point. By the way, is there a more appropriate forum or email list I should use for these kinds of questions? 
I'm a fairly fresh computer science student who just started working in the industry, and all I'm ever allowed to work on, is Java and C#, so I decided to do some C hacking on a hobby basis...and chose you poor souls as my subject. My plan is basically to just stare at the code untill it makes a bit of sense, and ask questions if I see something I think might be an error of some sort, or if I don't understand it, and eventually, maybe I'm even able to contribute something worthwhile. So, I'm wondering, if there is any more appropriate place where I can ask questions about things I find in the code-base. For instance, right now I'm staring at ngx_http_parse.c It seems that the switch (p - m) block at line ~164 is lacking a default label. So to my naive eyes, it seems it's possible to run ngx_http_parse_request_line() on a request with a method of less than 3 characters or more than 9, and get a ok return value, where the http-method is still in its unmodified initial value of NGX_HTTP_UNKNOWN, which doesen't seem to be handled elsewhere in the code(I tried grepping for it). Should a default value return NGX_HTTP_PARSE_INVALID_METHOD? Did I actually spot something? Or is it left this way by design? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250869,251092#msg-251092 From nginx-forum at nginx.us Sat Jun 21 14:52:58 2014 From: nginx-forum at nginx.us (alayim) Date: Sat, 21 Jun 2014 10:52:58 -0400 Subject: ngx_http_process_header_line function in source code In-Reply-To: References: <20140616113119.GP1849@mdounin.ru> Message-ID: <4bbca5e52ae55299cfed0473566442a1.NginxMailingListEnglish@forum.nginx.org> Nevermind, I see there are multiple such cases, so I guess it's by design that you don't catch the error at the parsing stage. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250869,251093#msg-251093 From nginx-forum at nginx.us Sat Jun 21 15:37:10 2014 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 21 Jun 2014 11:37:10 -0400 Subject: [ANN] Windows nginx 1.7.3.1 RedKnight Message-ID: 12:37 21-6-2014 nginx 1.7.3.1 RedKnight Based on nginx 1.7.3 (20-6-2014) with; + new best practice ssl_ciphers example (nginx-win.conf) + fastcgi/upstream fix: http://forum.nginx.org/read.php?29,250947,251007#msg-251007 + form-input-nginx-module (https://github.com/calio/form-input-nginx-module) + Naxsi WAF conf\naxsi_core.rules updated 15-6-2014; File uploads: 1500-1600 + nginx-auth-ldap (upgraded 12-6-2014) + lua-nginx-module v0.9.9 (upgraded 16-6-2014) + Source changes back ported + Source changes add-on's back ported + Changes for nginx_basic: Source changes back ported * Scheduled release: yes * Additional specifications: see 'Feature list' Builds can be found here: http://nginx-win.ecsds.eu/ Follow releases https://twitter.com/nginx4Windows Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251094,251094#msg-251094 From mdounin at mdounin.ru Sun Jun 22 14:32:16 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 22 Jun 2014 18:32:16 +0400 Subject: No CORS Workaround - SSL Proxy In-Reply-To: References: <20140620224626.GR1849@mdounin.ru> Message-ID: <20140622143216.GS1849@mdounin.ru> Hello! On Fri, Jun 20, 2014 at 11:29:39PM +0000, Eric Swenson wrote: > Hello Maxim, > > On 6/20/14, 3:46 PM, "Maxim Dounin" wrote: > > >Hello! > > > >On Fri, Jun 20, 2014 at 07:32:36PM +0000, Eric Swenson wrote: > > > >[...] > > > >> This works fine. However, every once in a while (say, every > >> week or so), traffic to https://app.example.com/svc/api/xxxx > >> returns gateway 502 errors. 
The API service (located at > >> https://svc.example.com/api) is working fine and is accessible > >> directly. However, through the proxy setup (above), nginx will > >> not pass traffic. Simply restarting nginx gets it working again > >> for another week or so, only to have it get into the same state > >> again some random interval later. > >> > >> Does anyone have any ideas what might be causing nginx to fail > >> to proxy traffic when no changes to the configuration have been > >> made and the backend service is functioning normally? > > > >First of all, it may be a good idea to take a look into error log. > > The only messages in the error log were ones for hours before and for > hours after the "problem", of the form: > > 2014/06/17 17:14:16 [error] 28165#0: *1508 no user/password was provided > for basic authentication, client: 11.12.13.14, server: > app.test.example.com, request: "GET / HTTP/1.1", host: > "app.test.example.com" > > There was nothing related to the 502 errors around the time of the first > request that failed with a 502 error code. The access log shows quite a > few requests failing with a 502 status code from that point forward, up > until the time that I restarted nginx. If there is nothing in error logs, and you are getting 502 errors, then there are two options: 1. The 502 errors are returned by your backend, not generated by nginx. 2. You did something wrong while configuring error logs and/or you are looking into a wrong log. In this particular case, I would suggest the latter. -- Maxim Dounin http://nginx.org/ From lists at ruby-forum.com Mon Jun 23 01:55:03 2014 From: lists at ruby-forum.com (Yifeng Wang) Date: Mon, 23 Jun 2014 03:55:03 +0200 Subject: ssl proxys https web server is very slow In-Reply-To: <3645f32afc8f53d4c15aac93a6799ec5@ruby-forum.com> References: <3645f32afc8f53d4c15aac93a6799ec5@ruby-forum.com> Message-ID: <5554a2e7a6f8d9b5af0551b029edc96e@ruby-forum.com> Hi, I don't use upstream, because the web server is added dynamically. I must get the address from the cookie or args, then NGINX will proxy from this address to the client browser. I find that if I remove some security configuration in the "web.xml" file of my project like below, [XML tags scrubbed by the list -- a CLIENT-CERT login-config ("Client Cert Users-only Area") and an SSL security-constraint on /* with a CONFIDENTIAL transport-guarantee] Oh, it works fast. I guess this may be the reason why it runs slowly. Thanks, guys. -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Mon Jun 23 06:57:51 2014 From: nginx-forum at nginx.us (Keyur) Date: Mon, 23 Jun 2014 02:57:51 -0400 Subject: GeoIP FirstNonPrivateXForwardedForIP In-Reply-To: <0ba92eaa217acbee68d2ee8c3c09177f.NginxMailingListEnglish@forum.nginx.org> References: <0ba92eaa217acbee68d2ee8c3c09177f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, Can someone please look into this.. I need it for proper website functionality. Regards, Keyur Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250823,251101#msg-251101 From contact at jpluscplusm.com Mon Jun 23 10:31:10 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 23 Jun 2014 11:31:10 +0100 Subject: GeoIP FirstNonPrivateXForwardedForIP In-Reply-To: References: <0ba92eaa217acbee68d2ee8c3c09177f.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 23 Jun 2014 07:58, "Keyur" wrote: > > Hello, > > Can someone please look into this.. I need it for proper website > functionality. I don't know the answer to your problem and perhaps, given the lack of reply, no one on this public mailing list mainly populated by non-nginx staff does either. 
If your business has an immediate, pressing need for support, you may wish to take advantage of the professional services nginx offer: http://nginx.com/products/services/ J -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Jun 23 11:05:21 2014 From: nginx-forum at nginx.us (Keyur) Date: Mon, 23 Jun 2014 07:05:21 -0400 Subject: GeoIP FirstNonPrivateXForwardedForIP In-Reply-To: References: Message-ID: Thanks Jonathan! Well I can not comment regarding getting professional service. Infact I will be glad to have support but If I go with this approach then I would rather be asked to use web server which supports the said feature. (This is doable in apache). And I really don't want to do away with Nginx. With lack of reply I understand this isn't possible in nginx at the moment but I hope this is taken as feature request so that I can use it in nginx and does not have to rely on other web server. Regards Keyur Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250823,251103#msg-251103 From lists-nginx at swsystem.co.uk Mon Jun 23 11:17:17 2014 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Mon, 23 Jun 2014 12:17:17 +0100 Subject: GeoIP FirstNonPrivateXForwardedForIP In-Reply-To: References: Message-ID: <6d547b961f6ef12e5dbb1ab307a8add1@swsystem.co.uk> On 23/06/2014 12:05, Keyur wrote: > Thanks Jonathan! > > Well I can not comment regarding getting professional service. Infact I > will > be glad to have support but If I go with this approach then I would > rather > be asked to use web server which supports the said feature. (This is > doable > in apache). And I really don't want to do away with Nginx. > > With lack of reply I understand this isn't possible in nginx at the > moment > but I hope this is taken as feature request so that I can use it in > nginx > and does not have to rely on other web server. The chances are it's probably possible, however this mailing list/forum has a limited subset of nginx users available to assist. I'm not sure why you have the requirement to use the XFF header to get geoip, but have you looked at the realip module . My first thought would be to configure it to ignore RFC1918 addresses so the real IP would in theory become a true IP which geoip could then use for locating the user. I have a piwik install on nginx, behind varnish with another nginx instance doing SSL termination. Using realip to detect the user's location seems to have done the trick. Steve. From arut at nginx.com Mon Jun 23 12:22:29 2014 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 23 Jun 2014 16:22:29 +0400 Subject: Download full mp4 file with proxy_cache or proxy_store !! In-Reply-To: References: Message-ID: <257A5DF7-C3BA-423A-AF2D-B988C1F1EC7C@nginx.com> On 19 Jun 2014, at 10:12, shahzaib shahzaib wrote: > > we're using two servers (one proxy and one backend). Proxy server is using proxy_cache to cache mp4 files from backend server and working fine. When i stream a full video from cache, the header response gives me the cache-status: HIT but whenever i seek the mp4 file i.e http://url/test.mp4?start=33 , the Cache-status changes to : MISS . Does that mean, the proxy server is again downloading the same file after the 33 seconds ? Since default proxy_cache_key has $args in it, your second request has a different cache key, so the file is downloaded again. Moreover the mp4 module does not work over proxy cache. That means even if you fix the cache key issue mp4 seeking will not work. 
You need to have a local mp4 file to be able to seek mp4 like that. From luky-37 at hotmail.com Mon Jun 23 12:31:03 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 23 Jun 2014 14:31:03 +0200 Subject: GeoIP FirstNonPrivateXForwardedForIP In-Reply-To: References: , <0ba92eaa217acbee68d2ee8c3c09177f.NginxMailingListEnglish@forum.nginx.org>, Message-ID: > Hello, > > Can someone please look into this.. I need it for proper website > functionality. I don't see why you would need it once you properly setup the proxy whitelist? From shahzaib.cb at gmail.com Mon Jun 23 12:47:08 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 23 Jun 2014 17:47:08 +0500 Subject: Download full mp4 file with proxy_cache or proxy_store !! In-Reply-To: <257A5DF7-C3BA-423A-AF2D-B988C1F1EC7C@nginx.com> References: <257A5DF7-C3BA-423A-AF2D-B988C1F1EC7C@nginx.com> Message-ID: @Roman thanks for reply, >> your second request has a different cache key, so the file is downloaded again. Means, if a user seeks through the video i.e http://url/test.mp4?start=99 , the whole file again gets downloaded or the partial part of the file from 99sec to onward gets downloaded ? If the whole file downloaded again each time, does nginx support something like, if user seeks through the video start=99 and the rest of the file gets download instead of the whole file ? Is the rsync only solution if not nginx? On Mon, Jun 23, 2014 at 5:22 PM, Roman Arutyunyan wrote: > > On 19 Jun 2014, at 10:12, shahzaib shahzaib wrote: > > > > > we're using two servers (one proxy and one backend). Proxy server > is using proxy_cache to cache mp4 files from backend server and working > fine. When i stream a full video from cache, the header response gives me > the cache-status: HIT but whenever i seek the mp4 file i.e > http://url/test.mp4?start=33 , the Cache-status changes to : MISS . Does > that mean, the proxy server is again downloading the same file after the 33 > seconds ? > > Since default proxy_cache_key has $args in it, your second request has a > different cache key, so the > file is downloaded again. > > Moreover the mp4 module does not work over proxy cache. That means even > if you fix the cache key issue > mp4 seeking will not work. You need to have a local mp4 file to be able > to seek mp4 like that. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Jun 23 13:15:19 2014 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 23 Jun 2014 09:15:19 -0400 Subject: Download full mp4 file with proxy_cache or proxy_store !! In-Reply-To: <257A5DF7-C3BA-423A-AF2D-B988C1F1EC7C@nginx.com> References: <257A5DF7-C3BA-423A-AF2D-B988C1F1EC7C@nginx.com> Message-ID: <205b533d40ac3218387894a3dce207ba.NginxMailingListEnglish@forum.nginx.org> Roman Arutyunyan Wrote: ------------------------------------------------------- > Moreover the mp4 module does not work over proxy cache. That means > even if you fix the cache key issue > mp4 seeking will not work. You need to have a local mp4 file to be > able to seek mp4 like that. Hmm, what about a hack, if the file is cached keep a link to the cached file and its original name, if the next request matches a cached file and its original name and a seek is requested then pass the cache via its original name to allow seeking on the local (but cached) file. 
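To make the cache-key mechanics in this thread concrete, here is a short illustration (the directives are standard ngx_http_proxy_module settings; the file names are the ones used in the thread):

    # nginx default (see the proxy_cache_key docs) - the query string is part of the key:
    #   proxy_cache_key $scheme$proxy_host$request_uri;
    # so /test.mp4 and /test.mp4?start=33 are two separate cache entries, each
    # fetched from the upstream in full.
    #
    # Keying on $uri instead collapses them into a single entry:
    #   proxy_cache_key $scheme$proxy_host$uri;
    # but then whichever variant happened to be fetched first is served for every
    # ?start= value, and, as Roman notes in this thread, the mp4 module still
    # cannot seek inside a cache entry - a local copy (for example via
    # proxy_store) is needed for that.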
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251037,251108#msg-251108 From shahzaib.cb at gmail.com Mon Jun 23 13:27:45 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 23 Jun 2014 18:27:45 +0500 Subject: Download full mp4 file with proxy_cache or proxy_store !! In-Reply-To: <205b533d40ac3218387894a3dce207ba.NginxMailingListEnglish@forum.nginx.org> References: <257A5DF7-C3BA-423A-AF2D-B988C1F1EC7C@nginx.com> <205b533d40ac3218387894a3dce207ba.NginxMailingListEnglish@forum.nginx.org> Message-ID: >>Hmm, what about a hack, if the file is cached keep a link to the cached file and its original name, if the next request matches a cached file and its original name and a seek is requested then pass the cache via its original name to allow seeking on the local (but cached) file. That means, I should have double storage, one for cached files via proxy_cache and the other for local files via rsync. On Mon, Jun 23, 2014 at 6:15 PM, itpp2012 wrote: > Roman Arutyunyan Wrote: > ------------------------------------------------------- > > Moreover the mp4 module does not work over proxy cache. That means > > even if you fix the cache key issue > > mp4 seeking will not work. You need to have a local mp4 file to be > > able to seek mp4 like that. > > Hmm, what about a hack, if the file is cached keep a link to the cached > file > and its original name, if the next request matches a cached file and its > original name and a seek is requested then pass the cache via its original > name to allow seeking on the local (but cached) file. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,251037,251108#msg-251108 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Mon Jun 23 14:32:09 2014 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 23 Jun 2014 18:32:09 +0400 Subject: Download full mp4 file with proxy_cache or proxy_store !! In-Reply-To: References: <257A5DF7-C3BA-423A-AF2D-B988C1F1EC7C@nginx.com> Message-ID: On 23 Jun 2014, at 16:47, shahzaib shahzaib wrote: > @Roman thanks for reply, > > >> your second request has a different cache key, so the > file is downloaded again. > > Means, if a user seeks through the video i.e http://url/test.mp4?start=99 , the whole file again gets downloaded or the partial part of the file from 99sec to onward gets downloaded ? If the whole file downloaded again each time, does nginx support something like, if user seeks through the video start=99 and the rest of the file gets download instead of the whole file ? Is the rsync only solution if not nginx? If you have the mp4 module enabled at the upstream, then a partial mp4 (well, it's not partial actually, but transformed) is downloaded and cached. If not then you should configure a proper proxy_cache_key to download the whole file once from the upstream and serve the whole mp4 from the cache. From arut at nginx.com Mon Jun 23 14:43:47 2014 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 23 Jun 2014 18:43:47 +0400 Subject: Download full mp4 file with proxy_cache or proxy_store !! 
In-Reply-To: <205b533d40ac3218387894a3dce207ba.NginxMailingListEnglish@forum.nginx.org> References: <257A5DF7-C3BA-423A-AF2D-B988C1F1EC7C@nginx.com> <205b533d40ac3218387894a3dce207ba.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 23 Jun 2014, at 17:15, itpp2012 wrote: > Roman Arutyunyan Wrote: > ------------------------------------------------------- >> Moreover the mp4 module does not work over proxy cache. That means >> even if you fix the cache key issue >> mp4 seeking will not work. You need to have a local mp4 file to be >> able to seek mp4 like that. > > Hmm, what about a hack, if the file is cached keep a link to the cached file > and its original name, if the next request matches a cached file and its > original name and a seek is requested then pass the cache via its original > name to allow seeking on the local (but cached) file. You can use proxy_store with the mp4 module. Having a link to a nginx cache file is wrong since cache file has internal header and HTTP headers. Cached mp4 entry is not a valid mp4 meaning you can't play it directly without stripping headers. From nginx-forum at nginx.us Mon Jun 23 14:56:17 2014 From: nginx-forum at nginx.us (ariel_esp) Date: Mon, 23 Jun 2014 10:56:17 -0400 Subject: fastcgi_cache Message-ID: <1dd9a1ab3bc9ccd1ad5ef2fa09fc247f.NginxMailingListEnglish@forum.nginx.org> Hi, I am trying setup fastcgi_cache. Working fine.... BUT I need bypass some pages... when theses pages have header "no-cache".... but I dont know how to do this... The rules for bypass using urls, work fine.. like this: [code] if ($request_uri ~* "(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|wp-.*.php|/feed/|index.php|wp-comments-popup.php|wp-links-opml.php|wp-locations.php|sitemap(_index)?.xml|[a-z0-9_-]+-sitemap([0-9]+)?.xml)") { set $cache_uri 'null cache'; } [/code] How can I bypass cache setting header page "no-cache" ? Example, in varnish work fine: if (req.http.Cache-Control ~ "no-cache") { return (pipe); } thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251112,251112#msg-251112 From sarah at nginx.com Mon Jun 23 15:21:30 2014 From: sarah at nginx.com (Sarah Novotny) Date: Mon, 23 Jun 2014 08:21:30 -0700 Subject: Velocity Conference Message-ID: <3127CE53-E91B-4087-8A67-4196CE57943E@nginx.com> Hi All, We have some of the NGINX team at Velocity this week. If you're in Santa Clara this week, stop by and see us we have lots planned! http://nginx.com/blog/velocity-need-for-speed/ sarah From mdounin at mdounin.ru Mon Jun 23 16:45:16 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Jun 2014 20:45:16 +0400 Subject: fastcgi_cache In-Reply-To: <1dd9a1ab3bc9ccd1ad5ef2fa09fc247f.NginxMailingListEnglish@forum.nginx.org> References: <1dd9a1ab3bc9ccd1ad5ef2fa09fc247f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140623164516.GY1849@mdounin.ru> Hello! On Mon, Jun 23, 2014 at 10:56:17AM -0400, ariel_esp wrote: > Hi, > I am trying setup fastcgi_cache. > Working fine.... BUT I need bypass some pages... when theses pages have > header "no-cache".... but I dont know how to do this... > The rules for bypass using urls, work fine.. like this: > > [code] > if ($request_uri ~* > "(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|wp-.*.php|/feed/|index.php|wp-comments-popup.php|wp-links-opml.php|wp-locations.php|sitemap(_index)?.xml|[a-z0-9_-]+-sitemap([0-9]+)?.xml)") > { > set $cache_uri 'null cache'; > } > [/code] > > How can I bypass cache setting header page "no-cache" ? 
> Example, in varnish work fine: > if (req.http.Cache-Control ~ "no-cache") > { > return (pipe); > } http://nginx.org/r/fastcgi_cache_bypass -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Jun 23 17:08:33 2014 From: nginx-forum at nginx.us (ariel_esp) Date: Mon, 23 Jun 2014 13:08:33 -0400 Subject: fastcgi_cache In-Reply-To: <20140623164516.GY1849@mdounin.ru> References: <20140623164516.GY1849@mdounin.ru> Message-ID: <301e72650d52642689366d20c057ab1a.NginxMailingListEnglish@forum.nginx.org> Hi, I already try this... but... not work =/ when in the page, I do "shift+f5", page is re-read "EXPIRED"... OK but, this entering in the page, or do F5 ... page = HIT cache... In this specifics pages, I always put php header "cache-control, pragma, etc" as "no-cache", so, I want always get a new page from backend... understand? fastcgi_cache microcache; fastcgi_cache_key $scheme$request_method$host$request_uri$http_x_custom_header; fastcgi_cache_valid any 1m; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; fastcgi_cache_lock on; add_header Fastcgi-Cache $upstream_cache_status; if ($cache_uri != "null cache") { add_header Fastcgi-Cache $upstream_cache_status; add_header X-Cache-Debug "$cache_uri $cookie_nocache $arg_nocache$arg_comment $http_pragma $http_authorization"; set $skip_cache 0; } fastcgi_cache_bypass $skip_cache; fastcgi_no_cache $skip_cache; fastcgi_cache_bypass $cookie_nocache $arg_nocache$arg_comment ; fastcgi_no_cache $cookie_nocache $arg_nocache$arg_comment; fastcgi_cache_bypass $http_pragma $http_authorization ; fastcgi_no_cache $http_pragma $http_authorization; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251112,251115#msg-251115 From shahzaib.cb at gmail.com Mon Jun 23 18:06:24 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 23 Jun 2014 23:06:24 +0500 Subject: Download full mp4 file with proxy_cache or proxy_store !! In-Reply-To: References: <257A5DF7-C3BA-423A-AF2D-B988C1F1EC7C@nginx.com> <205b533d40ac3218387894a3dce207ba.NginxMailingListEnglish@forum.nginx.org> Message-ID: >> You can use proxy_store with the mp4 module. So, proxy_store is able to download whole mp4 file once and than server that file locally without fetching each time from the origin if users seek through the video ? On Mon, Jun 23, 2014 at 7:43 PM, Roman Arutyunyan wrote: > > On 23 Jun 2014, at 17:15, itpp2012 wrote: > > > Roman Arutyunyan Wrote: > > ------------------------------------------------------- > >> Moreover the mp4 module does not work over proxy cache. That means > >> even if you fix the cache key issue > >> mp4 seeking will not work. You need to have a local mp4 file to be > >> able to seek mp4 like that. > > > > Hmm, what about a hack, if the file is cached keep a link to the cached > file > > and its original name, if the next request matches a cached file and its > > original name and a seek is requested then pass the cache via its > original > > name to allow seeking on the local (but cached) file. > > You can use proxy_store with the mp4 module. > > Having a link to a nginx cache file is wrong since cache file has internal > header and > HTTP headers. Cached mp4 entry is not a valid mp4 meaning you can?t play > it directly > without stripping headers. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Mon Jun 23 19:26:14 2014 From: nginx-forum at nginx.us (Varix) Date: Mon, 23 Jun 2014 15:26:14 -0400 Subject: Problem with auth_basic and auth_basic_user_file Message-ID: Hallo, I want a login for this location. location /TEST/ { alias /www/c2c/; } With location /TEST/ { alias /www/c2c/; auth_basic "Test Auth"; } no username/password box in the browser. I see the index.html site from the c2c folder With location /TEST/ { alias /www/c2c/; auth_basic "Test Auth"; auth_basic_user_file /nginx/conf/htpasswd; } and an "404 Not Found" error shows the browser. I use nginx 1.7.2 Where is the problem? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251118,251118#msg-251118 From lists at ruby-forum.com Mon Jun 23 21:41:45 2014 From: lists at ruby-forum.com (Amir Eldor) Date: Mon, 23 Jun 2014 23:41:45 +0200 Subject: nginx location+proxy_pass? In-Reply-To: <8c90cb7d0710190620md03c215rf02af555736e9d78@mail.gmail.com> References: <8c90cb7d0710190620md03c215rf02af555736e9d78@mail.gmail.com> Message-ID: Hi guys :), Back then in 1987 I think (when I was born) I think that I heard some dutch. Really helped me today when I grew older and 'm 27 (in August). Find me on amir at amir-x.com ;) I'm here to stay. Just don't press + look around coz it's hard with all the technology today... That's it! TAK TAK! ~~~~ Amir Eldor -- Posted via http://www.ruby-forum.com/. From mdounin at mdounin.ru Tue Jun 24 02:28:04 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Jun 2014 06:28:04 +0400 Subject: fastcgi_cache In-Reply-To: <301e72650d52642689366d20c057ab1a.NginxMailingListEnglish@forum.nginx.org> References: <20140623164516.GY1849@mdounin.ru> <301e72650d52642689366d20c057ab1a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140624022804.GZ1849@mdounin.ru> Hello! On Mon, Jun 23, 2014 at 01:08:33PM -0400, ariel_esp wrote: > Hi, I already try this... but... not work =/ > when in the page, I do "shift+f5", page is re-read "EXPIRED"... OK > but, this entering in the page, or do F5 ... page = HIT cache... > In this specifics pages, I always put php header "cache-control, pragma, > etc" as "no-cache", so, I want always get a new page from backend... > understand? As long as _response_ headers contain "Cache-Control: no-cache", nginx will not cache a response, unless explicitly asked to ignore the Cache-Control header (using the fastcgi_ignore_headers directive). No special handling is needed, it will just work. If it doesn't work for you, this likely means that: - either you did something wrong in your nginx config (i.e., used fastcgi_ignore_headers to disable Cache-Control handling); - or you wrote the "Cache-Control: no-cache" incorrectly. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Jun 24 02:36:00 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Jun 2014 06:36:00 +0400 Subject: Problem with auth_basic and auth_basic_user_file In-Reply-To: References: Message-ID: <20140624023600.GA1849@mdounin.ru> Hello! On Mon, Jun 23, 2014 at 03:26:14PM -0400, Varix wrote: > Hallo, > > I want a login for this location. > > location /TEST/ { > alias /www/c2c/; > } > > With > > location /TEST/ { > alias /www/c2c/; > auth_basic "Test Auth"; > } > > no username/password box in the browser. I see the index.html site from the > c2c folder This is expected. Both auth_basic and auth_basic_user_file must be set for auth_basic to work. 
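For reference, the password file has to exist and be readable by the worker processes; one common way to create it, assuming a user name of "varix" (either tool works):

    # with apache2-utils / httpd-tools installed:
    htpasswd -c /nginx/conf/htpasswd varix

    # or, without Apache's tools, using openssl (apr1/MD5 hashes are accepted by auth_basic):
    printf "varix:%s\n" "$(openssl passwd -apr1)" >> /nginx/conf/htpasswd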
> With > > location /TEST/ { > alias /www/c2c/; > auth_basic "Test Auth"; > auth_basic_user_file /nginx/conf/htpasswd; > } > > and an "404 Not Found" error shows the browser. This likely indicate you have some error_page's configured, and they intercept "401 Unauthorized" returned. Try looking into your configs and into error log. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Jun 24 03:57:12 2014 From: nginx-forum at nginx.us (TheBritishGeek) Date: Mon, 23 Jun 2014 23:57:12 -0400 Subject: Best method for adding GeoIP support In-Reply-To: References: Message-ID: Thanks for help on this, I decided to just go ahead and compile this ourselves as as we really need the GoIP features. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250978,251123#msg-251123 From nginx-forum at nginx.us Tue Jun 24 04:43:12 2014 From: nginx-forum at nginx.us (TheBritishGeek) Date: Tue, 24 Jun 2014 00:43:12 -0400 Subject: Proxy Bypass only specific IP Message-ID: <1a6ea2c2cc83081acab225942b54fff6.NginxMailingListEnglish@forum.nginx.org> I am looking for a way for allow that proxy_cache_bypass but only on a secific hostname and client IP address. My current setup is as follows: location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|wmv|js|css|mp3|swf|ico|flv|json|csv|txt|svg|ttf|eot|otf|cff|afm|lwfn|ffil|fon|pfm|pfb|woff|std|pro|xsf|ps|pdf|bmp)$ { expires 1M; add_header X-Cache-Status $upstream_cache_status; proxy_cache_bypass $http_secret_header; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_pass http://www.domain.com; proxy_cache static-files.domain.com; proxy_cache_valid 200 360m; proxy_cache_valid 302 360m; proxy_cache_valid 404 1m; proxy_cache_use_stale error timeout invalid_header updating; proxy_ignore_headers X-Accel-Expires Expires; } How can I add to this protection so that only 1 specific IPv4 & 1 IPv6 address can bypass the poxy ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251124,251124#msg-251124 From nginx-forum at nginx.us Tue Jun 24 06:17:01 2014 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 24 Jun 2014 02:17:01 -0400 Subject: Proxy Bypass only specific IP In-Reply-To: <1a6ea2c2cc83081acab225942b54fff6.NginxMailingListEnglish@forum.nginx.org> References: <1a6ea2c2cc83081acab225942b54fff6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3263107e09e6b53f8dc3b7c1828911e5.NginxMailingListEnglish@forum.nginx.org> if ($remote_addr ~ "^(10.10.*.*)$") { .... } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251124,251126#msg-251126 From nginx-forum at nginx.us Tue Jun 24 08:24:55 2014 From: nginx-forum at nginx.us (vamshi) Date: Tue, 24 Jun 2014 04:24:55 -0400 Subject: content_by_lua not reading/printing header values Message-ID: <879da66a20763b25894eaf885a1df99a.NginxMailingListEnglish@forum.nginx.org> My nginx.conf upstream testdb { postgres_server 10.0.9.43:9000 dbname=testdb user=postgres password=postgres; postgres_keepalive max=100 mode=multi overflow=ignore; } location / { set $x ""; content_by_lua ' local a = ngx.var.http_cookie local b = ngx.var.http_my_custom_auth print(a) print(b) if ngx.var.http_my_custom_auth ~= nil then local res = ngx.location.capture("/postgresquery", { args = {x = b } } ) print(res.status) print(res.header) print(res.body) -- I would like to do more actions based on the result. 
However, for now, we simply move ahead return ngx.exec("@final") elseif ngx.var.http_cookie ~= nil then return ngx.exec("@final") else ngx.status = 444 return end '; } location @final { # # This is an internal location. Cannot be accessed from the outside # internal; proxy_pass http://10.0.1.42; } location /postgresquery { internal; postgres_pass testdb; postgres_output text; postgres_escape $http_my_custom_auth; postgres_query "select id, name from testtable where key = $x"; } My error Log: 2014/06/24 13:01:36 [debug] 6073#0: epoll add event: fd:7 op:1 ev:00002001 2014/06/24 13:01:38 [debug] 6073#0: post event 099D1620 2014/06/24 13:01:38 [debug] 6073#0: delete posted event 099D1620 2014/06/24 13:01:38 [debug] 6073#0: accept on 0.0.0.0:80, ready: 0 2014/06/24 13:01:38 [debug] 6073#0: posix_memalign: 099A8D40:256 @16 2014/06/24 13:01:38 [debug] 6073#0: *1 accept: 127.0.0.1:57731 fd:11 2014/06/24 13:01:38 [debug] 6073#0: *1 event timer add: 11: 60000:3435819925 2014/06/24 13:01:38 [debug] 6073#0: *1 reusable connection: 1 2014/06/24 13:01:38 [debug] 6073#0: *1 epoll add event: fd:11 op:1 ev:80002001 2014/06/24 13:01:38 [debug] 6073#0: *1 post event 099D1688 2014/06/24 13:01:38 [debug] 6073#0: *1 delete posted event 099D1688 2014/06/24 13:01:38 [debug] 6073#0: *1 http wait request handler 2014/06/24 13:01:38 [debug] 6073#0: *1 malloc: 0999FA50:1024 2014/06/24 13:01:38 [debug] 6073#0: *1 recv: fd:11 98 of 1024 2014/06/24 13:01:38 [debug] 6073#0: *1 reusable connection: 0 2014/06/24 13:01:38 [debug] 6073#0: *1 posix_memalign: 099A56D0:4096 @16 2014/06/24 13:01:38 [debug] 6073#0: *1 http process request line 2014/06/24 13:01:38 [debug] 6073#0: *1 http request line: "HEAD / HTTP/1.1" 2014/06/24 13:01:38 [debug] 6073#0: *1 http uri: "/" 2014/06/24 13:01:38 [debug] 6073#0: *1 http args: "" 2014/06/24 13:01:38 [debug] 6073#0: *1 http exten: "" 2014/06/24 13:01:38 [debug] 6073#0: *1 http process request header line 2014/06/24 13:01:38 [debug] 6073#0: *1 http header: "User-Agent: curl/7.32.0" 2014/06/24 13:01:38 [debug] 6073#0: *1 http header: "Host: 127.0.0.1" 2014/06/24 13:01:38 [debug] 6073#0: *1 http header: "Accept: */*" 2014/06/24 13:01:38 [debug] 6073#0: *1 http header: "my-custom-auth: Vamshi" 2014/06/24 13:01:38 [debug] 6073#0: *1 http header done 2014/06/24 13:01:38 [debug] 6073#0: *1 event timer del: 11: 3435819925 2014/06/24 13:01:38 [debug] 6073#0: *1 rewrite phase: 0 2014/06/24 13:01:38 [debug] 6073#0: *1 test location: "/" 2014/06/24 13:01:38 [debug] 6073#0: *1 using configuration "/" 2014/06/24 13:01:38 [debug] 6073#0: *1 http cl:-1 max:1048576 2014/06/24 13:01:38 [debug] 6073#0: *1 rewrite phase: 2 2014/06/24 13:01:38 [debug] 6073#0: *1 http script value: "" 2014/06/24 13:01:38 [debug] 6073#0: *1 http script set $x 2014/06/24 13:01:38 [debug] 6073#0: *1 post rewrite phase: 3 2014/06/24 13:01:38 [debug] 6073#0: *1 generic phase: 4 2014/06/24 13:01:38 [debug] 6073#0: *1 generic phase: 5 2014/06/24 13:01:38 [debug] 6073#0: *1 access phase: 6 2014/06/24 13:01:38 [debug] 6073#0: *1 access phase: 7 2014/06/24 13:01:38 [debug] 6073#0: *1 post access phase: 8 2014/06/24 13:01:38 [debug] 6073#0: *1 lua content handler, uri:"/" c:1 2014/06/24 13:01:38 [debug] 6073#0: *1 lua reset ctx 2014/06/24 13:01:38 [debug] 6073#0: *1 lua creating new thread 2014/06/24 13:01:38 [debug] 6073#0: *1 http cleanup add: 099A5FAC 2014/06/24 13:01:38 [debug] 6073#0: *1 lua run thread, top:0 c:1 2014/06/24 13:01:38 [debug] 6074#0: epoll add event: fd:7 op:1 ev:00002001 Curl output: [vamshi at localhost ~]$ curl -I 
-H "my-custom-auth: Vamshi" http://127.0.0.1 curl: (52) Empty reply from server Postgres Table (on a different machine): 1 Row ID : 1 Name : Vamshi Krishna Ramaka Key: Vamshi I know that code 444 is being returned, because lua seems to find the cookie and my header as empty. Can you tell me why Lus thinks they are empty ? The current config is not production quality since this is my first attempt at nginx as well as Lua. I have almost removed everything else is trying to catch this bug, and this is where I have landed. So do excuse me if there are any glaringly obvious errors. -Vamshi Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251127,251127#msg-251127 From nginx-forum at nginx.us Tue Jun 24 10:38:59 2014 From: nginx-forum at nginx.us (TheBritishGeek) Date: Tue, 24 Jun 2014 06:38:59 -0400 Subject: Proxy Bypass only specific IP In-Reply-To: <3263107e09e6b53f8dc3b7c1828911e5.NginxMailingListEnglish@forum.nginx.org> References: <1a6ea2c2cc83081acab225942b54fff6.NginxMailingListEnglish@forum.nginx.org> <3263107e09e6b53f8dc3b7c1828911e5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <33a75d2d3ab534e9af5068a7e655f15e.NginxMailingListEnglish@forum.nginx.org> I have tried if ($remote_addr ~ "^(1.1.1.1|a:b:c:d::1:2)$") { proxy_cache_bypass $http_secret_header; } But is does not pass testing nginx: [emerg] "proxy_cache_bypass" directive is not allowed here Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251124,251128#msg-251128 From nginx-forum at nginx.us Tue Jun 24 12:28:30 2014 From: nginx-forum at nginx.us (vamshi) Date: Tue, 24 Jun 2014 08:28:30 -0400 Subject: content_by_lua not reading/printing header values In-Reply-To: <879da66a20763b25894eaf885a1df99a.NginxMailingListEnglish@forum.nginx.org> References: <879da66a20763b25894eaf885a1df99a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <985ef969b3533e952284f7be52f8752c.NginxMailingListEnglish@forum.nginx.org> I found the issue(s). I am not sure why the error log did not have any logs after the lua thread got launched. Here is the updated /etc/nginx/nginx.conf location / { set $dbKey ""; content_by_lua ' local a = ngx.var.http_cookie local b = ngx.var.http_my_custom_auth print(a) print(b) if ngx.var.http_my_custom_auth ~= nil then local res = ngx.location.capture("/postgresquery", { args = {dbKey = b } } ) print(res.status) print(QueryResult) if res.status == 200 then return ngx.exec("@final") else ngx.status = 403 return end elseif ngx.var.http_cookie ~= nil then return ngx.exec("@final") else ngx.status = 444 return end '; } location /postgresquery { internal; postgres_pass testdb; postgres_escape $escaped_dbKey $arg_dbKey; postgres_query "select id from testtable where key = $escaped_dbKey;"; postgres_output value; } However, as you can see, I had to delete print(res.headers) and print(res.body), since lua was complaining that it expects a single value, but it got a table -Vamshi Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251127,251129#msg-251129 From mdounin at mdounin.ru Tue Jun 24 14:23:38 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Jun 2014 18:23:38 +0400 Subject: Proxy Bypass only specific IP In-Reply-To: <33a75d2d3ab534e9af5068a7e655f15e.NginxMailingListEnglish@forum.nginx.org> References: <1a6ea2c2cc83081acab225942b54fff6.NginxMailingListEnglish@forum.nginx.org> <3263107e09e6b53f8dc3b7c1828911e5.NginxMailingListEnglish@forum.nginx.org> <33a75d2d3ab534e9af5068a7e655f15e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140624142337.GL1849@mdounin.ru> Hello! 
On Tue, Jun 24, 2014 at 06:38:59AM -0400, TheBritishGeek wrote: > I have tried > > if ($remote_addr ~ "^(1.1.1.1|a:b:c:d::1:2)$") > { > proxy_cache_bypass $http_secret_header; > } > > But is does not pass testing > > nginx: [emerg] "proxy_cache_bypass" directive is not allowed here You have to set a variable as appropriate, and then use it in the proxy_cache_bypass directive. if (...) { set $bypass $http_secret_header; } proxy_cache_bypass $bypass; Also, apart from "if" checks, it may be a good idea to consider geo and map blocks, see here: http://nginx.org/en/docs/http/ngx_http_geo_module.html http://nginx.org/en/docs/http/ngx_http_map_module.html -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Jun 24 15:58:38 2014 From: nginx-forum at nginx.us (Varix) Date: Tue, 24 Jun 2014 11:58:38 -0400 Subject: Problem with auth_basic and auth_basic_user_file In-Reply-To: <20140624023600.GA1849@mdounin.ru> References: <20140624023600.GA1849@mdounin.ru> Message-ID: <5625a8e2735e0ba8eb9b77888ec669b5.NginxMailingListEnglish@forum.nginx.org> Hallo Maxim, thanks. I copied some error pages into the custom error page folder and all is OK. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251118,251134#msg-251134 From schlie at comcast.net Tue Jun 24 18:49:57 2014 From: schlie at comcast.net (Paul Schlie) Date: Tue, 24 Jun 2014 14:49:57 -0400 Subject: How can the number of parallel/redundant open streams/temp_files be controlled/limited? Message-ID: <3F28F29E-B638-4A85-9FA2-1CCFF0F61C79@comcast.net> I've noticed that multiple (as great as 8 or more) parallel redundant streams and corresponding temp_files are opened reading the same file from a reverse proxy backend into nginx, upon even a single request by an up-stream client, if not already cached (or stored in a static proxy'ed file) local to nginx. This seems extremely wasteful of bandwidth between nginx and corresponding reverse proxy backends; does anyone know why this is occurring and how to limit this behavior? (For example, upon receiving a request for, say, a small 250MB mp4 podcast video file, it's not uncommon for 8 parallel streams to be opened, each sourcing (and competing for bandwidth) a corresponding temp_file, where the upstream client appears to be fed by the most complete stream/temp_file; but even after the complete file has been fully transferred to the upstream client, the remaining streams remain active until they too have finished their transfers, and are then closed, and their corresponding temp_files deleted. All resulting in 2GB of data being transferred when only 250MB was needed, not to mention that the transfer took nearly 8x longer to complete; so unless there were concerns about the integrity of the connection, it seems like a huge waste of resources?) Thanks, any insight/assistance would be appreciated. From mdounin at mdounin.ru Tue Jun 24 22:36:01 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jun 2014 02:36:01 +0400 Subject: How can the number of parallel/redundant open streams/temp_files be controlled/limited? In-Reply-To: <3F28F29E-B638-4A85-9FA2-1CCFF0F61C79@comcast.net> References: <3F28F29E-B638-4A85-9FA2-1CCFF0F61C79@comcast.net> Message-ID: <20140624223601.GS1849@mdounin.ru> Hello! 
On Tue, Jun 24, 2014 at 02:49:57PM -0400, Paul Schlie wrote: > I've noticed that multiple (as great as 8 or more) parallel > redundant streams and corresponding temp_files are opened > reading the same file from a reverse proxy backend into nginx, > upon even a single request by an up-stream client, if not > already cached (or stored in a static proxy'ed file) local to > nginx. > > This seems extremely wasteful of bandwidth between nginx and > corresponding reverse proxy backends; does anyone know why this > is occurring and how to limit this behavior? http://nginx.org/r/proxy_cache_lock -- Maxim Dounin http://nginx.org/ From schlie at comcast.net Tue Jun 24 23:51:04 2014 From: schlie at comcast.net (Paul Schlie) Date: Tue, 24 Jun 2014 19:51:04 -0400 Subject: How can the number of parallel/redundant open streams/temp_files be controlled/limited? In-Reply-To: <20140624223601.GS1849@mdounin.ru> References: <3F28F29E-B638-4A85-9FA2-1CCFF0F61C79@comcast.net> <20140624223601.GS1849@mdounin.ru> Message-ID: <4497FBEF-4DF5-43BD-A416-68C73E14E6C8@comcast.net> Thank you; however it appears to have no effect on reverse proxy_store'd static files? (Which seems odd, if it actually works for cached files; as both are first read into temp_files, being the root of the problem.) Any idea on how to prevent multiple redundant streams and corresponding temp_files being created when reading/updating a reverse proxy'd static file from the backend? (Out of curiosity, why would anyone ever want many multiple redundant streams/temp_files ever opened by default?) On Jun 24, 2014, at 6:36 PM, Maxim Dounin wrote: > Hello! > > On Tue, Jun 24, 2014 at 02:49:57PM -0400, Paul Schlie wrote: > >> I've noticed that multiple (as great as 8 or more) parallel >> redundant streams and corresponding temp_files are opened >> reading the same file from a reverse proxy backend into nginx, >> upon even a single request by an up-stream client, if not >> already cached (or stored in a static proxy'ed file) local to >> nginx. >> >> This seems extremely wasteful of bandwidth between nginx and >> corresponding reverse proxy backends; does anyone know why this >> is occurring and how to limit this behavior? > > http://nginx.org/r/proxy_cache_lock > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Jun 25 00:18:04 2014 From: nginx-forum at nginx.us (badtzhou) Date: Tue, 24 Jun 2014 20:18:04 -0400 Subject: resolver for upstream Message-ID: <00e47f88559985c574d367a526fef7c4.NginxMailingListEnglish@forum.nginx.org> We are trying to use resolver on upstream. It is not working for some reason. We are using nginx 1.6.0. Supposedly the feature should be available on 1.5.12. When we try to use it, It always give us an error. nginx: [emerg] invalid parameter "resolve" in... And when I checked the source code, it doesn't look like the feature was in there. According to upstream documentation it should work like this: resolve monitors changes of the IP addresses that correspond to a domain name of the server, and automatically modifies the upstream configuration without the need of restarting nginx (1.5.12). In order for this parameter to work, the resolver directive must be specified in the http block. Example: http { resolver 10.0.0.1; upstream u { zone ...; ... 
server example.com resolve; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251138,251138#msg-251138 From mdounin at mdounin.ru Wed Jun 25 00:30:02 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jun 2014 04:30:02 +0400 Subject: How can the number of parallel/redundant open streams/temp_files be controlled/limited? In-Reply-To: <4497FBEF-4DF5-43BD-A416-68C73E14E6C8@comcast.net> References: <3F28F29E-B638-4A85-9FA2-1CCFF0F61C79@comcast.net> <20140624223601.GS1849@mdounin.ru> <4497FBEF-4DF5-43BD-A416-68C73E14E6C8@comcast.net> Message-ID: <20140625003002.GW1849@mdounin.ru> Hello! On Tue, Jun 24, 2014 at 07:51:04PM -0400, Paul Schlie wrote: > Thank you; however it appears to have no effect on reverse proxy_store'd static files? Yes, it's part of the cache machinery. The proxy_store functionality is dumb and just provides a way to store responses received, nothing more. > (Which seems odd, if it actually works for cached files; as both > are first read into temp_files, being the root of the problem.) See above (and below). > Any idea on how to prevent multiple redundant streams and > corresponding temp_files being created when reading/updating a > reverse proxy'd static file from the backend? You may try to do so using limit_conn, and may be error_page and limit_req to introduce some delay. But unlikely it will be a good / maintainable / easy to write solution. > (Out of curiosity, why would anyone ever want many multiple > redundant streams/temp_files ever opened by default?) You never know if responses are going to be the same. The part which knows (or, rather, tries to) is called "cache", and has lots of directives to control it. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Jun 25 00:32:16 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jun 2014 04:32:16 +0400 Subject: resolver for upstream In-Reply-To: <00e47f88559985c574d367a526fef7c4.NginxMailingListEnglish@forum.nginx.org> References: <00e47f88559985c574d367a526fef7c4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140625003216.GX1849@mdounin.ru> Hello! On Tue, Jun 24, 2014 at 08:18:04PM -0400, badtzhou wrote: > We are trying to use resolver on upstream. It is not working for some > reason. We are using nginx 1.6.0. Supposedly the feature should be available > on 1.5.12. > > When we try to use it, It always give us an error. nginx: [emerg] invalid > parameter "resolve" in... And when I checked the source code, it doesn't > look like the feature was in there. > > According to upstream documentation it should work like this: > resolve > monitors changes of the IP addresses that correspond to a domain name of the > server, and automatically modifies the upstream configuration without the > need of restarting nginx (1.5.12). > In order for this parameter to work, the resolver directive must be > specified in the http block. Example: You've missed this part above it: : Additionally, the following parameters are available as part of : our commercial subscription: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server -- Maxim Dounin http://nginx.org/ From schlie at comcast.net Wed Jun 25 00:58:32 2014 From: schlie at comcast.net (Paul Schlie) Date: Tue, 24 Jun 2014 20:58:32 -0400 Subject: How can the number of parallel/redundant open streams/temp_files be controlled/limited? 
In-Reply-To: <20140625003002.GW1849@mdounin.ru> References: <3F28F29E-B638-4A85-9FA2-1CCFF0F61C79@comcast.net> <20140624223601.GS1849@mdounin.ru> <4497FBEF-4DF5-43BD-A416-68C73E14E6C8@comcast.net> <20140625003002.GW1849@mdounin.ru> Message-ID: <1DCF2883-2016-4F93-A2F9-86C489C10EB4@comcast.net> Again thank you. However ... (below) On Jun 24, 2014, at 8:30 PM, Maxim Dounin wrote: > Hello! > > On Tue, Jun 24, 2014 at 07:51:04PM -0400, Paul Schlie wrote: > >> Thank you; however it appears to have no effect on reverse proxy_store'd static files? > > Yes, it's part of the cache machinery. The proxy_store > functionality is dumb and just provides a way to store responses > received, nothing more. - There should be no difference between how reverse proxy'd files are accessed and first stored into corresponding temp_files (and below). > >> (Which seems odd, if it actually works for cached files; as both >> are first read into temp_files, being the root of the problem.) > > See above (and below). > >> Any idea on how to prevent multiple redundant streams and >> corresponding temp_files being created when reading/updating a >> reverse proxy'd static file from the backend? > > You may try to do so using limit_conn, and may be error_page and > limit_req to introduce some delay. But unlikely it will be a > good / maintainable / easy to write solution. - Please consider implementing by default that no more streams than may become necessary if a previously opened stream appears to have died (timed out), as otherwise only more bandwidth and thereby delay will most likely result to complete the request. Further as there should be no difference between how reverse proxy read-streams and corresponding temp_files are created, regardless of whether they may be subsequently stored as either symbolically-named static files, or hash-named cache files; this behavior should be common to both. >> (Out of curiosity, why would anyone ever want many multiple >> redundant streams/temp_files ever opened by default?) > > You never know if responses are going to be the same. The part > which knows (or, rather, tries to) is called "cache", and has > lots of directives to control it. - If they're not "the same" then the tcp protocol stack has failed, which is nothing to do with ngiinx. (unless a backend server is frequently dropping connections, it's counterproductive to open multiple redundant streams; as doing so by default will only likely result in higher-bandwidth and thereby slower response completion.) > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From schlie at comcast.net Wed Jun 25 02:58:42 2014 From: schlie at comcast.net (Paul Schlie) Date: Tue, 24 Jun 2014 22:58:42 -0400 Subject: How can the number of parallel/redundant open streams/temp_files be controlled/limited? In-Reply-To: <1DCF2883-2016-4F93-A2F9-86C489C10EB4@comcast.net> References: <3F28F29E-B638-4A85-9FA2-1CCFF0F61C79@comcast.net> <20140624223601.GS1849@mdounin.ru> <4497FBEF-4DF5-43BD-A416-68C73E14E6C8@comcast.net> <20140625003002.GW1849@mdounin.ru> <1DCF2883-2016-4F93-A2F9-86C489C10EB4@comcast.net> Message-ID: Hi, Upon further testing, it appears the problem exists even with proxy_cache'd files with "proxy_cache_lock on". (Please consider this a serious bug, which I'm surprised hasn't been detected before; verified on recently released 1.7.2) On Jun 24, 2014, at 8:58 PM, Paul Schlie wrote: > Again thank you. 
However ... (below) > > On Jun 24, 2014, at 8:30 PM, Maxim Dounin wrote: > >> Hello! >> >> On Tue, Jun 24, 2014 at 07:51:04PM -0400, Paul Schlie wrote: >> >>> Thank you; however it appears to have no effect on reverse proxy_store'd static files? >> >> Yes, it's part of the cache machinery. The proxy_store >> functionality is dumb and just provides a way to store responses >> received, nothing more. > > - There should be no difference between how reverse proxy'd files are accessed and first stored into corresponding temp_files (and below). > >> >>> (Which seems odd, if it actually works for cached files; as both >>> are first read into temp_files, being the root of the problem.) >> >> See above (and below). >> >>> Any idea on how to prevent multiple redundant streams and >>> corresponding temp_files being created when reading/updating a >>> reverse proxy'd static file from the backend? >> >> You may try to do so using limit_conn, and may be error_page and >> limit_req to introduce some delay. But unlikely it will be a >> good / maintainable / easy to write solution. > > - Please consider implementing by default that no more streams than may become necessary if a previously opened stream appears to have died (timed out), as otherwise only more bandwidth and thereby delay will most likely result to complete the request. Further as there should be no difference between how reverse proxy read-streams and corresponding temp_files are created, regardless of whether they may be subsequently stored as either symbolically-named static files, or hash-named cache files; this behavior should be common to both. > >>> (Out of curiosity, why would anyone ever want many multiple >>> redundant streams/temp_files ever opened by default?) >> >> You never know if responses are going to be the same. The part >> which knows (or, rather, tries to) is called "cache", and has >> lots of directives to control it. > > - If they're not "the same" then the tcp protocol stack has failed, which is nothing to do with ngiinx. > (unless a backend server is frequently dropping connections, it's counterproductive to open multiple redundant streams; as doing so by default will only likely result in higher-bandwidth and thereby slower response completion.) > >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Jun 25 07:24:26 2014 From: nginx-forum at nginx.us (ashishadhav) Date: Wed, 25 Jun 2014 03:24:26 -0400 Subject: number of requests in nginx queue Message-ID: <973cd1309bbde2b2cf72aa43cc399faa.NginxMailingListEnglish@forum.nginx.org> Hi, I want to find out how many requests are queued ,that is are yet to be processed ,in nginx at any given moment. >From this i wish to calculate if my server is overloaded with requests. 
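As Maxim's reply below points out, nginx itself only exposes counters (via stub_status) and the actual backlog lives on the listen socket; for reference, a minimal way to look at both (the location name is only an example, and stub_status must be compiled in):

    # nginx side: basic counters (active connections, accepted, handled, requests, ...)
    location = /nginx_status {
        stub_status on;        # ngx_http_stub_status_module
        allow 127.0.0.1;
        deny  all;
    }

    # OS side: listen-socket backlog, i.e. connections not yet accepted by nginx
    #   Linux:   ss -nlt     (or: netstat -nlt)
    #   FreeBSD: netstat -Lan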
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251143,251143#msg-251143 From nginx-forum at nginx.us Wed Jun 25 08:09:50 2014 From: nginx-forum at nginx.us (m1nu2) Date: Wed, 25 Jun 2014 04:09:50 -0400 Subject: Alias or rewrite to get Horde rpc.php to work In-Reply-To: References: Message-ID: <9a9b515492b36b8cf5d3341e92579e82.NginxMailingListEnglish@forum.nginx.org> m1nu2 Wrote: ------------------------------------------------------- > Hello all, > > I'm trying to setup a nginx server on a RaspberryPi to host a Horde > Groupware. The Webinterface is working so far. But I have problems > setting up the Microsoft-Server-ActiveSync URL to sync my Smartphone > and Tablet. I already asked in the IRC channel and posted a question > at Serverfault but did not get an answer yet. I also googled a lot, > tried different solutions, but non of these worked. So I would be > really happy if someone could help me with that. Here are more details > (mainly taken from the Serverfault question located here: > http://serverfault.com/questions/604866/nginx-alias-or-rewrite-for-hor > de-groupware-activesync-url-does-not-process-the-r ): > > The configuration should allow to access the ActiveSync part via the > URL /horde/Microsoft-Server-ActiveSync. The horde webinterface is > already accessible via /horde > > My configuration looks like this: > > default-ssl.conf: > > server { > listen 443 ssl; > ssl on; > ssl_certificate /opt/nginx/conf/certs/server.crt; > ssl_certificate_key /opt/nginx/conf/certs/server.key; > server_name example.com; > index index.html index.php; > root /var/www; > > include sites-available/horde.conf; > } > > > horde.conf: > > location /horde { > rewrite_log on; > rewrite ^/horde/Microsoft-Server-ActiveSync(.*)$ > /horde/rpc.php$1 last; > > try_files $uri $uri/ /rampage.php?$args; > > location ~ \.php$ { > try_files $uri =404; > include sites-available/horde.fcgi-php.conf; > } > }. > > > horde.fcgi-php.conf: > > include fastcgi_params; > fastcgi_split_path_info ^(.+\.php)(/.+)$; > fastcgi_param PATH_INFO $fastcgi_path_info; > fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > > So why is the rpc.php not correctly working? > If I understand correctly it tries to establish a basic authentication > process. > Could this be a problem? > Best regards, > m1nu2 I copy my answer from serverfault here just from completeness: After more trying and looking at logs (finally the right ones) I found my fault. After all it was a configuration error. The Horde Wiki gives the hint in one single line: To activate the server, it needs to be enabled in Horde's configuration, on the ActiveSync tab. This one gives me the right direction: I searched in the configuration but could not found the ActiveSync stuff. So I digged deeper and ended up with the awareness that I did not install the library Horder_ActiveSync. :( After doing a sudo pear install horde/horde_activesync and updating the database tables from the WebUI everything works now! Hope this helps someone else. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250907,251144#msg-251144 From kirpit at gmail.com Wed Jun 25 08:19:18 2014 From: kirpit at gmail.com (kirpit) Date: Wed, 25 Jun 2014 11:19:18 +0300 Subject: number of requests in nginx queue In-Reply-To: <973cd1309bbde2b2cf72aa43cc399faa.NginxMailingListEnglish@forum.nginx.org> References: <973cd1309bbde2b2cf72aa43cc399faa.NginxMailingListEnglish@forum.nginx.org> Message-ID: You should never let the users get into queue anyway, it is for unexpected peaks. # Total amount of users you can serve = worker_processes * worker_connections http://stackoverflow.com/questions/7325211/tuning-nginx-worker-process-to-obtain-100k-hits-per-min On Wed, Jun 25, 2014 at 10:24 AM, ashishadhav wrote: > Hi, > I want to find out how many requests are queued ,that is are yet to be > processed ,in nginx at any given moment. > From this i wish to calculate if my server is overloaded with requests. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,251143,251143#msg-251143 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jun 25 09:56:15 2014 From: nginx-forum at nginx.us (ashishadhav) Date: Wed, 25 Jun 2014 05:56:15 -0400 Subject: number of requests in nginx queue In-Reply-To: References: Message-ID: <4a961fd18d402f8159c499700f9edf8b.NginxMailingListEnglish@forum.nginx.org> Hi , thanks for quick reply. Actually Im running a web app under nginx which has max limit of 1000 requests / sec . So understanding queued requests number is important for me. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251143,251146#msg-251146 From mdounin at mdounin.ru Wed Jun 25 10:57:26 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jun 2014 14:57:26 +0400 Subject: number of requests in nginx queue In-Reply-To: <973cd1309bbde2b2cf72aa43cc399faa.NginxMailingListEnglish@forum.nginx.org> References: <973cd1309bbde2b2cf72aa43cc399faa.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140625105726.GA1849@mdounin.ru> Hello! On Wed, Jun 25, 2014 at 03:24:26AM -0400, ashishadhav wrote: > Hi, > I want to find out how many requests are queued ,that is are yet to be > processed ,in nginx at any given moment. > From this i wish to calculate if my server is overloaded with requests. There is no queues in nginx itself. All requests nginx knows about are being processed, and various numbers can be seen in stub_status. Note well that it's usually important to look at nginx and your backends listen socket queues. To find out listen socket queues, consider your OS docs. E.g., under FreeBSD "netstat -Lan" shows various listen queues. On recent enough versions of Linux, listen queues can be seen with "netstat -nlt" or "ss -nlt". -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Jun 25 11:34:58 2014 From: nginx-forum at nginx.us (khav) Date: Wed, 25 Jun 2014 07:34:58 -0400 Subject: Recommended spdy configuration Message-ID: <0695aa8ecf181ca2c0505c2cd930d77a.NginxMailingListEnglish@forum.nginx.org> What is the most suitable value according to you for the following directives `spdy_headers_comp` and `spdy_chunk_size` . i want to optimize spdy to get better performance.I am only using Nginx as my webserver (No apache or any other software). 
My Nginx vesion is : 1.7.2 Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251150,251150#msg-251150 From mrauch at autoscout24.com Wed Jun 25 15:35:19 2014 From: mrauch at autoscout24.com (Michael Rauch) Date: Wed, 25 Jun 2014 15:35:19 +0000 Subject: Nginx + PHP-fpm + NFS poor performance Message-ID: <8f31f6643f7e4d7490151751d951ca37@OEXMBXV001.as24.local> Hello, we facing the same issue with version "nginx-1.4.7-1.el6.ngx.x86_64". Did anyone find a solution? Regards, Michael Rauch -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jun 25 15:43:47 2014 From: nginx-forum at nginx.us (dompz) Date: Wed, 25 Jun 2014 11:43:47 -0400 Subject: Debugging symbols for nginx-1.4.7-1.el6.ngx.x86_64.rpm In-Reply-To: <63bceb85a65664b5383f865e7c1f88d8.NginxMailingListEnglish@forum.nginx.org> References: <0F585B6D-7C3B-4C22-BEE6-4C196B99F8E7@nginx.com> <63bceb85a65664b5383f865e7c1f88d8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7ccb15f46ae8509980ffc8218bbdc9b7.NginxMailingListEnglish@forum.nginx.org> On a related question - would it also be possible to provide a Debuginfo package for future Debian and Ubuntu packages? Thanks, Dominik Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248527,251157#msg-251157 From gauravphoenix at gmail.com Wed Jun 25 18:41:16 2014 From: gauravphoenix at gmail.com (Gaurav Kumar) Date: Wed, 25 Jun 2014 11:41:16 -0700 Subject: How to log all headers in nginx? Message-ID: How do I go about logging all of the headers client browser has sent in Nginx? I also want to log response header. Note that I am using nginx as reverse proxy. After going through documentation, I understand that I can log a specific header, but I want to log all of the headers. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jun 26 05:01:13 2014 From: nginx-forum at nginx.us (RandysPants) Date: Thu, 26 Jun 2014 01:01:13 -0400 Subject: Block Message-ID: <897ebec00d8d6dfd9bb93a596656722c.NginxMailingListEnglish@forum.nginx.org> Hello, new to all this so I apologize if my question is not formatted correctly. We only allow https from outside our network (via hardware firewall). I want to block access to usr/share/nginx/html but allow access access to usr/share/nginx/html/folder1 I have figured I need to be working with the ssl.conf file and have successfully blocked /usr/share/nginx/html/folder2 I just cant figure out how to deny all for the main html folder while allowing /folder1 to be accessible. If I add: location / { deny all; } It blocks just fine I just cant figure out how to allow /folder2 Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251176,251176#msg-251176 From francis at daoine.org Thu Jun 26 07:59:23 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 26 Jun 2014 08:59:23 +0100 Subject: Block In-Reply-To: <897ebec00d8d6dfd9bb93a596656722c.NginxMailingListEnglish@forum.nginx.org> References: <897ebec00d8d6dfd9bb93a596656722c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140626075923.GU16942@daoine.org> On Thu, Jun 26, 2014 at 01:01:13AM -0400, RandysPants wrote: > If I add: > location / { > deny all; > } > > It blocks just fine I just cant figure out how to allow /folder2 location /folder2 {} This does assume there are no other location{} blocks; and it does allow more than just /folder2/ contents. 
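For illustration only, a minimal sketch of that idea, assuming the stock root of /usr/share/nginx/html and that /folder2/ is the directory to keep reachable:

    location / {
        deny all;                 # refuse everything under the document root by default
    }

    location /folder2/ {
        # longest prefix wins: requests under /folder2/ bypass the deny above
        # and fall through to normal static file serving
    }

The trailing slash keeps a URI like /folder2abc from matching; an extra "location = /folder2 { return 301 /folder2/; }" can be added if the bare /folder2 URL should redirect into the directory.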
http://nginx.org/r/location for the details, and how to allow only /folder2 contents (by using two separate location{}s). f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Jun 26 13:41:15 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Thu, 26 Jun 2014 09:41:15 -0400 Subject: Nginx Windows High Traffic issues Message-ID: <512cccbd312ddc34327be6aabe6208cd.NginxMailingListEnglish@forum.nginx.org> So i spent a while on this one and turns out the problem is a little function in nginx's core called "worker_rlimit_nofile". But for me on windows (i don't know if it does it for linux users too.) grinds my site down to a halt unless you increase its value. Why does it do this ? http://nginx.org/en/docs/ngx_core_module.html#worker_rlimit_nofile Even with the following nginx builds for high traffic production enviorments http://nginx-win.ecsds.eu/ It still seems like the worker_rlimit_nofile is to small by default on nginx start up so unless you know to add the command into your nginx config with some insanely high value to it your site will seem slow as hell. I read on stackoverflow the following. events { worker_connections 19000; # It's the key to high performance - have a lot of connections available } worker_rlimit_nofile 20000; # Each connection needs a filehandle (or 2 if you are proxying) But in my config i set the following just to test events { worker_connections 16384; multi_accept on; } worker_rlimit_nofile 9990000000; And everything still works fine can anyone explain so i can understand why this value is so small in the first place ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251186#msg-251186 From nginx-forum at nginx.us Thu Jun 26 14:02:26 2014 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 26 Jun 2014 10:02:26 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: <512cccbd312ddc34327be6aabe6208cd.NginxMailingListEnglish@forum.nginx.org> References: <512cccbd312ddc34327be6aabe6208cd.NginxMailingListEnglish@forum.nginx.org> Message-ID: The way things have been redesigned, worker_rlimit_nofile has no purpose anymore, it's best not to set any value. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251187#msg-251187 From nginx-forum at nginx.us Thu Jun 26 14:18:16 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Thu, 26 Jun 2014 10:18:16 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: References: <512cccbd312ddc34327be6aabe6208cd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2d174dd447de87dc8b5397b60eadc3b8.NginxMailingListEnglish@forum.nginx.org> Well without a value everything is very very slow. With a value its nice and fast. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251188#msg-251188 From nginx-forum at nginx.us Thu Jun 26 15:20:47 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Thu, 26 Jun 2014 11:20:47 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: <2d174dd447de87dc8b5397b60eadc3b8.NginxMailingListEnglish@forum.nginx.org> References: <512cccbd312ddc34327be6aabe6208cd.NginxMailingListEnglish@forum.nginx.org> <2d174dd447de87dc8b5397b60eadc3b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8635b1b40070104ac838b6030e312ba0.NginxMailingListEnglish@forum.nginx.org> I recon its because i have media sites with lots of files and pictures videos content etc so i need it to be a large limit. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251192#msg-251192 From nginx-forum at nginx.us Thu Jun 26 17:26:43 2014 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 26 Jun 2014 13:26:43 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: <2d174dd447de87dc8b5397b60eadc3b8.NginxMailingListEnglish@forum.nginx.org> References: <512cccbd312ddc34327be6aabe6208cd.NginxMailingListEnglish@forum.nginx.org> <2d174dd447de87dc8b5397b60eadc3b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: c0nw0nk Wrote: ------------------------------------------------------- > Well without a value everything is very very slow. With a value its > nice and fast. Interesting to know, the Windows design and other portions scale automatically between 4 API's to deal with high performance while offloading that to multiple workers at the same time. This design is limitless but some baseline values have been set fixed because you need to start somewhere before tuning runs and after all workers have settled down. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251194#msg-251194 From nginx-forum at nginx.us Thu Jun 26 18:03:09 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Thu, 26 Jun 2014 14:03:09 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: References: <512cccbd312ddc34327be6aabe6208cd.NginxMailingListEnglish@forum.nginx.org> <2d174dd447de87dc8b5397b60eadc3b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: I don't know how you would try to replicate this issue because i have thousands upon thousands of files being accessed simultaneously without me setting that value insanely high pages and access to thing take 10 seconds and more even timeouts was occurring but as soon as i set that value it all stops and everything seems to be running fast as its suppose to. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251196#msg-251196 From reallfqq-nginx at yahoo.fr Thu Jun 26 18:22:48 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 26 Jun 2014 20:22:48 +0200 Subject: 403 error logging Message-ID: Hello, I recently stumbled into a situation where I could not seem to find a way of disabling unwanted errors being logged. Configuration: location / { try_files $uri $uri/ @https; } location @https { return 301 https://$host$request_uri; } That works well for locations like '/foo' where it falls back to https server. However, when accessing root location '/', the '$uri/' part of the try_files directive produces a 403 'directory index [...] is forbidden'. I thought that adding 'error_page 403 @https;' to the prefix location would override it, but no. I also tried to use 'error_page 403 = @https;' to pass error processing to the named location, hoping for the 301 redirection would cancel it. Of course, that did not happen. ^^ Here are the solutions I envision, though none of them satisfy me: 1?) Using 'autoindex on;' in the prefix location: that is not the wanted behavior 2?) Using 'try_files $uri @https;' in the prefix location would avoid directory existence try... Although display of location /test if test is an existing directory won't happen and fallback to he HTTPS server will occur. That is not the wanted behavior 3?) Using 'error_page 403 = @https;' in the prefix location, then use 'error_log /dev/null;' in the named one, but that looks ugly, more like a 'hack'. Moreover, it does not seem to work. What to do then? 
Since it seems I cannot prevent 403 to spawn out of the prefix location, is there a way to send it to the named one and then trap it/ignore it there? Thanks, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Jun 26 18:44:16 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 Jun 2014 22:44:16 +0400 Subject: Nginx Windows High Traffic issues In-Reply-To: <512cccbd312ddc34327be6aabe6208cd.NginxMailingListEnglish@forum.nginx.org> References: <512cccbd312ddc34327be6aabe6208cd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140626184416.GP1849@mdounin.ru> Hello! On Thu, Jun 26, 2014 at 09:41:15AM -0400, c0nw0nk wrote: > So i spent a while on this one and turns out the problem is a little > function in nginx's core called "worker_rlimit_nofile". > > > But for me on windows (i don't know if it does it for linux users too.) > grinds my site down to a halt unless you increase its value. > > Why does it do this ? > http://nginx.org/en/docs/ngx_core_module.html#worker_rlimit_nofile On unix, worker_rlimit_nofile does exactly what's documented: it calls setrlimit(RLIMIT_NOFILE) within worker process. This allows to change OS-imposed limit without restarting nginx. And there is no default in nginx itself - the default is set by OS and it's configuration. Note well that the words "without restarting" is actually the reason why this directive exists at all. If a restart isn't a big deal, then OS limit can be changed by native means ("ulimit -n" and friends). In official nginx on Windows, worker_rlimit_nofile does nothing. Not sure if there is an equivalent limit on Windows at all. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Jun 26 19:27:42 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Thu, 26 Jun 2014 15:27:42 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: <20140626184416.GP1849@mdounin.ru> References: <20140626184416.GP1849@mdounin.ru> Message-ID: <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Thu, Jun 26, 2014 at 09:41:15AM -0400, c0nw0nk wrote: > > > So i spent a while on this one and turns out the problem is a little > > function in nginx's core called "worker_rlimit_nofile". > > > > > > But for me on windows (i don't know if it does it for linux users > too.) > > grinds my site down to a halt unless you increase its value. > > > > Why does it do this ? > > http://nginx.org/en/docs/ngx_core_module.html#worker_rlimit_nofile > > On unix, worker_rlimit_nofile does exactly what's documented: it > calls setrlimit(RLIMIT_NOFILE) within worker process. This allows > to change OS-imposed limit without restarting nginx. And there is > no default in nginx itself - the default is set by OS and it's > configuration. > > Note well that the words "without restarting" is actually the > reason why this directive exists at all. If a restart isn't a big > deal, then OS limit can be changed by native means ("ulimit -n" > and friends). > > In official nginx on Windows, worker_rlimit_nofile does nothing. > Not sure if there is an equivalent limit on Windows at all. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx I find it very hard to believe it does nothing without me setting it everything is very slow ? 
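For reference, the two unix-side routes described above look roughly like this; the figure 65536 is only an illustration, not a recommendation:

    # OS limit, set before nginx starts (shell, init script, or /etc/security/limits.conf):
    ulimit -n 65536

    # or in the main context of nginx.conf, raised per worker on a reload
    # without touching the OS default:
    worker_rlimit_nofile 65536;

On the official Windows build the directive is a no-op, as noted above, so any speedup seen there after a reload is presumably coming from some other change picked up at the same time.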
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251200#msg-251200 From nginx-forum at nginx.us Thu Jun 26 19:33:52 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Thu, 26 Jun 2014 15:33:52 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> Message-ID: Could it be possible my server slows down because all connections are in use ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251201#msg-251201 From nginx-forum at nginx.us Thu Jun 26 21:20:47 2014 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 26 Jun 2014 17:20:47 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> Message-ID: <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> c0nw0nk Wrote: ------------------------------------------------------- > Could it be possible my server slows down because all connections are > in use ? No, it's a recycling and auto-tuning issue as far as I can see, have you determined at which value you noticed the difference or is this value simply a big number ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251204#msg-251204 From steve at greengecko.co.nz Thu Jun 26 21:40:39 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Fri, 27 Jun 2014 09:40:39 +1200 Subject: Nginx + PHP-fpm + NFS poor performance In-Reply-To: <8f31f6643f7e4d7490151751d951ca37@OEXMBXV001.as24.local> References: <8f31f6643f7e4d7490151751d951ca37@OEXMBXV001.as24.local> Message-ID: <1403818839.16189.112.camel@steve-new> Hi! On Wed, 2014-06-25 at 15:35 +0000, Michael Rauch wrote: > Hello, > > > > we facing the same issue with version ?nginx-1.4.7-1.el6.ngx.x86_64?. > > Did anyone find a solution? > > > > Regards, > > > > Michael Rauch > SysAdmin's view... It does to some extent depend on how NFS is set up... on a 10Gig LAN, or over the internet will make a big difference. Have you looked into whether implementing CacheFS in conjunction with NFS helps, or alternatively, a rsync ( + lsyncd ) based solution to keep local files in sync? hth, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at nginx.us Thu Jun 26 21:53:37 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Thu, 26 Jun 2014 17:53:37 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> Message-ID: <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> Now i am clueless because i dropped keepalive requests i also dropped any send_timeout values. And this is what my bandwidth output looks like its very jumpy when it should not be and my page loads are very slow even on static files like html, mp4, flv etc and considering its nginx that delievers them i am very sure nginx is the problem. 
http://i633.photobucket.com/albums/uu52/C0nw0nk/Untitled-4.png Something is very wrong with nginx i recon i am completely out of connections avaliable and it is waiting for a connection to open to use. Unless anyone knows what i could be needing to add to my config so that while people are downloading/streaming videos that are upto 500mb in size. I already use limit_rate but with that my bandwidth output should not be jumpy like in the picture it would be a straight smooth line like it used to be but with more traffic i recon my connection limit is reached ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251206#msg-251206 From nginx-forum at nginx.us Thu Jun 26 22:29:08 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Thu, 26 Jun 2014 18:29:08 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> Message-ID: <75218bf1a872bf24caa11b2f0c213590.NginxMailingListEnglish@forum.nginx.org> When i said "my bandwidth output looks like its very jumpy". on a 1gig per second connection my output jumps up and down 10% (100mb) used then it will jump to like 40% (400mb) and it changes so much before when i had less traffic it used to be a very stead and stable 400-500mb output and hardly ever changed so dramaticly. In the following screenshot you will see me I/O usage from Nginx is extremely high. http://s633.photobucket.com/user/C0nw0nk/media/Untitled-5.png.html And i would like to add the only reason in that picture the nginx processes are using so much memory is because i set the "worker_connections 1900000;" to a high value. I don't know if it should use so much memory or if it is just wasting my system resources. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251207#msg-251207 From nginx-forum at nginx.us Fri Jun 27 05:56:47 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 27 Jun 2014 01:56:47 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: <75218bf1a872bf24caa11b2f0c213590.NginxMailingListEnglish@forum.nginx.org> References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> <75218bf1a872bf24caa11b2f0c213590.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0672f88cb185fba67ab1efe913a911ca.NginxMailingListEnglish@forum.nginx.org> This is a disk IO issue, not running out of connections, setting 1900000 is pointless, 16k is more then enough, no more then 2 workers per cpu, I see 12 workers so do you have enough cpu's to cover that? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251210#msg-251210 From vbart at nginx.com Fri Jun 27 07:13:49 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Fri, 27 Jun 2014 11:13:49 +0400 Subject: Recommended spdy configuration In-Reply-To: <0695aa8ecf181ca2c0505c2cd930d77a.NginxMailingListEnglish@forum.nginx.org> References: <0695aa8ecf181ca2c0505c2cd930d77a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2603612.hWRy9yMEGO@vbart-laptop> On Wednesday 25 June 2014 07:34:58 khav wrote: > What is the most suitable value according to you for the following > directives `spdy_headers_comp` and `spdy_chunk_size` . i want to optimize > spdy to get better performance.I am only using Nginx as my webserver (No > apache or any other software). > > My Nginx vesion is : 1.7.2 > Usually the most suitable values are the default ones. You can also turn on the response header compression: spdy_headers_comp 1; but please note, that these compressed headers can be a subject for the CRIME attack. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Fri Jun 27 13:07:27 2014 From: nginx-forum at nginx.us (crespin) Date: Fri, 27 Jun 2014 09:07:27 -0400 Subject: [PROPOSAL PATCH] use a return code for ngx_http_terminate_request() Message-ID: Hello, Reading ngx_http_request.c source code, I notice some call to ngx_http_terminate_request() is called sometimes with 0 instead of a return code. 0 is a correct valid for a return code ... it's NGX_OK. Is the patch valid ? It's based on nginx-1.7.2 version. Thanks for your reply. yves Subject: [PATCH] use a return code for ngx_http_terminate_request() --- 1.0/source/nginx-1.7.2/src/http/ngx_http_request.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/1.0/source/nginx-1.7.2/src/http/ngx_http_request.c b/1.0/source/nginx-1.7.2/src/http/ngx_http_request.c index 4bf9d1f..8ddfc60 100644 --- a/1.0/source/nginx-1.7.2/src/http/ngx_http_request.c +++ b/1.0/source/nginx-1.7.2/src/http/ngx_http_request.c @@ -2329,7 +2329,7 @@ ngx_http_finalize_request(ngx_http_request_t *r, ngx_int_t rc) if (r->buffered || r->postponed) { if (ngx_http_set_write_handler(r) != NGX_OK) { - ngx_http_terminate_request(r, 0); + ngx_http_terminate_request(r, NGX_ERROR); } return; @@ -2381,7 +2381,7 @@ ngx_http_finalize_request(ngx_http_request_t *r, ngx_int_t rc) if (ngx_http_post_request(pr, NULL) != NGX_OK) { r->main->count++; - ngx_http_terminate_request(r, 0); + ngx_http_terminate_request(r, NGX_ERROR); return; } @@ -2395,7 +2395,7 @@ ngx_http_finalize_request(ngx_http_request_t *r, ngx_int_t rc) if (r->buffered || c->buffered || r->postponed || r->blocked) { if (ngx_http_set_write_handler(r) != NGX_OK) { - ngx_http_terminate_request(r, 0); + ngx_http_terminate_request(r, NGX_ERROR); } return; -- 1.7.10.4 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251227,251227#msg-251227 From nginx-forum at nginx.us Fri Jun 27 13:26:23 2014 From: nginx-forum at nginx.us (crespin) Date: Fri, 27 Jun 2014 09:26:23 -0400 Subject: [PROPOSAL PATCH] use a return code for ngx_http_close_request() Message-ID: Hello, here is another path still on ngx_http_request.c. In function ngx_http_close_request(), the second parameter is an error code. This error code is used in ngx_http_free_request() to set the HTTP status code if it's not present or if no bytes are already sent. Use NGX_OK instead of zero seems - for me - valid. When ngx_http_close_request() is called after an error, I guess it's must be NGX_HTTP_INTERNAL_SERVER_ERROR. Perhaps, it's better to do two patch one for zero to NGX_OK and another for NGX_HTTP_INTERNAL_SERVER_ERROR. 
Thanks for your comments, Best regards, yves Subject: [PATCH] use an error code for ngx_http_close_request() --- 1.0/source/nginx-1.7.2/src/http/ngx_http_request.c | 38 ++++++++++---------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/1.0/source/nginx-1.7.2/src/http/ngx_http_request.c b/1.0/source/nginx-1.7.2/src/http/ngx_http_request.c index 8ddfc60..1979229 100644 --- a/1.0/source/nginx-1.7.2/src/http/ngx_http_request.c +++ b/1.0/source/nginx-1.7.2/src/http/ngx_http_request.c @@ -2429,7 +2429,7 @@ ngx_http_finalize_request(ngx_http_request_t *r, ngx_int_t rc) } if (c->read->eof) { - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_OK); return; } @@ -2493,7 +2493,7 @@ ngx_http_terminate_handler(ngx_http_request_t *r) r->count = 1; - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_OK); } @@ -2504,7 +2504,7 @@ ngx_http_finalize_connection(ngx_http_request_t *r) #if (NGX_HTTP_SPDY) if (r->spdy_stream) { - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_OK); return; } #endif @@ -2523,7 +2523,7 @@ ngx_http_finalize_connection(ngx_http_request_t *r) } } - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_OK); return; } @@ -2546,7 +2546,7 @@ ngx_http_finalize_connection(ngx_http_request_t *r) return; } - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_OK); } @@ -2581,7 +2581,7 @@ ngx_http_set_write_handler(ngx_http_request_t *r) } if (ngx_handle_write_event(wev, clcf->send_lowat) != NGX_OK) { - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); return NGX_ERROR; } @@ -2622,7 +2622,7 @@ ngx_http_writer(ngx_http_request_t *r) ngx_add_timer(wev, clcf->send_timeout); if (ngx_handle_write_event(wev, clcf->send_lowat) != NGX_OK) { - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); } return; @@ -2635,7 +2635,7 @@ ngx_http_writer(ngx_http_request_t *r) "http writer delayed"); if (ngx_handle_write_event(wev, clcf->send_lowat) != NGX_OK) { - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); } return; @@ -2659,7 +2659,7 @@ ngx_http_writer(ngx_http_request_t *r) } if (ngx_handle_write_event(wev, clcf->send_lowat) != NGX_OK) { - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); } return; @@ -2696,7 +2696,7 @@ ngx_http_block_reading(ngx_http_request_t *r) && r->connection->read->active) { if (ngx_del_event(r->connection->read, NGX_READ_EVENT, 0) != NGX_OK) { - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); } } } @@ -2798,7 +2798,7 @@ ngx_http_test_reading(ngx_http_request_t *r) if ((ngx_event_flags & NGX_USE_LEVEL_EVENT) && rev->active) { if (ngx_del_event(rev, NGX_READ_EVENT, 0) != NGX_OK) { - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); } } @@ -2869,7 +2869,7 @@ ngx_http_set_keepalive(ngx_http_request_t *r) cscf->large_client_header_buffers.num * sizeof(ngx_buf_t *)); if (hc->free == NULL) { - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } } @@ -3198,7 +3198,7 @@ ngx_http_set_lingering_close(ngx_http_request_t *r) ngx_add_timer(rev, clcf->lingering_timeout); if (ngx_handle_read_event(rev, 0) != NGX_OK) { - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERRORNGX_ERROR); return; } @@ -3207,7 +3207,7 @@ ngx_http_set_lingering_close(ngx_http_request_t *r) if (wev->active && (ngx_event_flags 
& NGX_USE_LEVEL_EVENT)) { if (ngx_del_event(wev, NGX_WRITE_EVENT, 0) != NGX_OK) { - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } } @@ -3215,7 +3215,7 @@ ngx_http_set_lingering_close(ngx_http_request_t *r) if (ngx_shutdown_socket(c->fd, NGX_WRITE_SHUTDOWN) == -1) { ngx_connection_error(c, ngx_socket_errno, ngx_shutdown_socket_n " failed"); - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } @@ -3242,13 +3242,13 @@ ngx_http_lingering_close_handler(ngx_event_t *rev) "http lingering close handler"); if (rev->timedout) { - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_OK); return; } timer = (ngx_msec_t) r->lingering_time - (ngx_msec_t) ngx_time(); if ((ngx_msec_int_t) timer <= 0) { - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_OK); return; } @@ -3258,14 +3258,14 @@ ngx_http_lingering_close_handler(ngx_event_t *rev) ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, "lingering read: %d", n); if (n == NGX_ERROR || n == 0) { - ngx_http_close_request(r, 0); + ngx_http_close_request(r, (n == 0) ? NGX_OK : NGX_ERROR); return; } } while (rev->ready); if (ngx_handle_read_event(rev, 0) != NGX_OK) { - ngx_http_close_request(r, 0); + ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } -- 1.7.10.4 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251228,251228#msg-251228 From nicolas.flinois at amadeus.com Fri Jun 27 13:53:24 2014 From: nicolas.flinois at amadeus.com (Nicolas Flinois) Date: Fri, 27 Jun 2014 15:53:24 +0200 Subject: Upstream performances: what if one node only ? Message-ID: Hi all, I am wondering about the possible extra-cost of using a single-node upstream into proxy_pass compared with 'proxy_pass host' directly. I need to automate application servers move, and find convenient to update upstream definitions only (defined into dedicated files). Solution1: upstream upOne { server somehost; } [..] proxy_pass upOne; Solution2: proxy_pass somehost; Is solution1 more time-consuming than solution2 at run-time ? Many thanks for your advises ! Regards, Nicolas FLINOIS ALTEN Contracting Company Amadeus, Sales & e-Commerce Platform T: + 33 (0) 4 92 94 63 50 (Ext:6350) nicolas.flinois at amadeus.com www.amadeus.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jun 27 14:14:59 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Fri, 27 Jun 2014 10:14:59 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: <0672f88cb185fba67ab1efe913a911ca.NginxMailingListEnglish@forum.nginx.org> References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> <75218bf1a872bf24caa11b2f0c213590.NginxMailingListEnglish@forum.nginx.org> <0672f88cb185fba67ab1efe913a911ca.NginxMailingListEnglish@forum.nginx.org> Message-ID: I also just to try and check if it was my connection limit enabled nginx_status and this was my output. Active connections: 1032 server accepts handled requests 8335 8335 12564 Reading: 0 Writing: 197 Waiting: 835 How can i fix the I/O issue why is nginx consuming so much in the first place if i close the nginx process nothing else is even using it. 
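For anyone trying to reproduce those numbers: they come from the stub_status module, exposed by a location along these lines (the path and allow list are arbitrary):

    location = /nginx_status {
        stub_status on;          # the bare "stub_status;" form only appeared in 1.7.5
        allow 127.0.0.1;         # keep the counters private
        deny all;
    }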
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251232#msg-251232 From nginx-forum at nginx.us Fri Jun 27 14:28:29 2014 From: nginx-forum at nginx.us (vamshi) Date: Fri, 27 Jun 2014 10:28:29 -0400 Subject: Matching a href spec with Lua regex Message-ID: <2487bf998a2356be17d53b745f1b7c14.NginxMailingListEnglish@forum.nginx.org> Hi, I am trying to modify the response body in the following way : If there href="http://www.google.com", I will convert it to href="http://nginx-ip/?_url_={url-encoded-form-of www.google.com} This is what I have in my nginx.conf location / { .... .... .... body_filter_by_lua ' local escUri = function (m) return "href=\\"http://10.0.9.44/?_url_=" .. ngx.escape_uri("$1") .. "\\"" end local newStr, n, err = ngx.re.gsub(ngx.arg[1], "href=\\"(.-)\\", escUri, "i") '; } But I cannot see absolutely any change in the href part of the response Can someone help me understand why this is not matching ? What am I doing wrong ? -Vamshi Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251234,251234#msg-251234 From nginx-forum at nginx.us Fri Jun 27 15:50:47 2014 From: nginx-forum at nginx.us (ura) Date: Fri, 27 Jun 2014 11:50:47 -0400 Subject: difficulty adding headers Message-ID: i need to ensure the Accept-Ranges header is present to serve video files while supporting forward/backwards seeking. i notice in many tutorials for nginx that this header is shown as being present in server response headers by default, yet not on my present setup. i have used the following to add the header manually in the relevant places, yet so far have not been successful: # streamable mp4 location ~* \.(mp4|mp4a)$ #location ~* \.mp4$ #location ^~ /file/download/ { mp4; mp4_buffer_size 4M; mp4_max_buffer_size 20M; gzip off; gzip_static off; limit_rate_after 10m; limit_rate 1m; # here you add response header "Content-Disposition" # with value of "filename=" + name of file (in variable $request_uri), # so for url example.com/static/audio/blahblah.mp3 # it will be /static/audio/blahblah.mp3 # ---- #set $sent_http_content_disposition filename=$request_uri; # or #add_header "content_disposition" "filename=$request_uri"; # here you add header "Accept-Ranges" #set $sent_http_accept_ranges bytes; more_set_headers 'Accept-Ranges: bytes'; # add_header "Accept-Ranges" "bytes"; add_header "Cache-Control" "private"; add_header "Pragma" "private"; # tell nginx that final HTTP Status Code should be 206 not 200 return 206; } as you can see, i have played with various options, yet none have succeeded. i am not even sure that any of these directives are being called at all. (this is part of a large-ish config file for a social network with many features). i am aware that nginx ignores add_header directives except for the ones in the final location block for the presently served file.. yet so far this awareness has not yielded a solution. i am also not seeing a return code of 206 - instead the usual 200 is returned when i access an mp4 file directly via curl. anyone know what i am missing? 
thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251236,251236#msg-251236 From nginx-forum at nginx.us Fri Jun 27 16:25:30 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 27 Jun 2014 12:25:30 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> <75218bf1a872bf24caa11b2f0c213590.NginxMailingListEnglish@forum.nginx.org> <0672f88cb185fba67ab1efe913a911ca.NginxMailingListEnglish@forum.nginx.org> Message-ID: <694221f54c843622f81e561cd02445b4.NginxMailingListEnglish@forum.nginx.org> Looking at the disk activity access to disk is using all your resources not nginx. Here http://s633.photobucket.com/user/C0nw0nk/media/Untitled-5.png.html you see nginx itself is waiting for disk IO to complete, all processes are doing just about nothing other then waiting for the harddisk, the main waiting issue looks like it is writing to disk which isn't going fast enough. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251237#msg-251237 From nginx-forum at nginx.us Fri Jun 27 16:28:13 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Fri, 27 Jun 2014 12:28:13 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: <694221f54c843622f81e561cd02445b4.NginxMailingListEnglish@forum.nginx.org> References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> <75218bf1a872bf24caa11b2f0c213590.NginxMailingListEnglish@forum.nginx.org> <0672f88cb185fba67ab1efe913a911ca.NginxMailingListEnglish@forum.nginx.org> <694221f54c843622f81e561cd02445b4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <874c7fd6567ef3ff601710aa51760514.NginxMailingListEnglish@forum.nginx.org> So the soloution could be a different hard drive possibly a solid state drive ? This is my current hard drive http://www.hgst.com/hard-drives/enterprise-hard-drives/enterprise-sata-drives/ultrastar-7k4000 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251238#msg-251238 From mdounin at mdounin.ru Fri Jun 27 16:35:02 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Jun 2014 20:35:02 +0400 Subject: Upstream performances: what if one node only ? In-Reply-To: References: Message-ID: <20140627163502.GV1849@mdounin.ru> Hello! On Fri, Jun 27, 2014 at 03:53:24PM +0200, Nicolas Flinois wrote: > Hi all, > > I am wondering about the possible extra-cost of using a single-node > upstream into proxy_pass compared with 'proxy_pass host' directly. > I need to automate application servers move, and find convenient to update > upstream definitions only (defined into dedicated files). > > Solution1: > > upstream upOne { > server somehost; > } > [..] > proxy_pass upOne; > > > Solution2: > > proxy_pass somehost; > > Is solution1 more time-consuming than solution2 at run-time ? There is no difference. Internally, proxy_pass with a hostname creates an implicit upstream{} with a single server, and uses it. 
-- Maxim Dounin http://nginx.org/ From nicolas.flinois at amadeus.com Fri Jun 27 16:39:20 2014 From: nicolas.flinois at amadeus.com (Nicolas Flinois) Date: Fri, 27 Jun 2014 18:39:20 +0200 Subject: Upstream performances: what if one node only ? In-Reply-To: <20140627163502.GV1849@mdounin.ru> References: <20140627163502.GV1849@mdounin.ru> Message-ID: Many thanks Maxim. Have a nice week-end.. Nicolas FLINOIS ALTEN Contracting Company Amadeus, Sales & e-Commerce Platform T: + 33 (0) 4 92 94 63 50 (Ext:6350) nicolas.flinois at amadeus.com www.amadeus.com/ From: Maxim Dounin To: nginx at nginx.org, Date: 27/06/2014 18:35 Subject: Re: Upstream performances: what if one node only ? Sent by: nginx-bounces at nginx.org Hello! On Fri, Jun 27, 2014 at 03:53:24PM +0200, Nicolas Flinois wrote: > Hi all, > > I am wondering about the possible extra-cost of using a single-node > upstream into proxy_pass compared with 'proxy_pass host' directly. > I need to automate application servers move, and find convenient to update > upstream definitions only (defined into dedicated files). > > Solution1: > > upstream upOne { > server somehost; > } > [..] > proxy_pass upOne; > > > Solution2: > > proxy_pass somehost; > > Is solution1 more time-consuming than solution2 at run-time ? There is no difference. Internally, proxy_pass with a hostname creates an implicit upstream{} with a single server, and uses it. -- Maxim Dounin http://nginx.org/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Jun 27 16:49:09 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Jun 2014 20:49:09 +0400 Subject: difficulty adding headers In-Reply-To: References: Message-ID: <20140627164909.GW1849@mdounin.ru> Hello! On Fri, Jun 27, 2014 at 11:50:47AM -0400, ura wrote: > i need to ensure the Accept-Ranges header is present to serve video files > while supporting forward/backwards seeking. > i notice in many tutorials for nginx that this header is shown as being > present in server response headers by default, yet not on my present setup. > > i have used the following to add the header manually in the relevant places, > yet so far have not been successful: You shouldn't try to add Accept-Ranges header manually. It will be added automatically when nginx supports range requests to the resource in question. > # streamable mp4 > location ~* \.(mp4|mp4a)$ > #location ~* \.mp4$ > #location ^~ /file/download/ > { > mp4; Range requests to responses of mp4 module are supported starting at 1.5.13 (previously, range requests was supported only for files returned without modifications). If it doesn't work for you, it's probably time to upgrade. [...] > # tell nginx that final HTTP Status Code should be 206 not 200 > return 206; This is just wrong, as it prevents nginx from returning a requested resource and unconditionally returns 206 response with an empty response body. Again, you shouldn't try to return 206 manually. The 206 response code is set automatically when a requested range is returned (instead of the full response). 
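A quick way to check this from the command line (a sketch; the URL and file name are hypothetical):

    # request only the first kilobyte and print the response headers
    curl -s -D - -o /dev/null -r 0-1023 http://example.com/video/test.mp4

    # when ranges work, the status line is "HTTP/1.1 206 Partial Content" and the
    # headers include "Content-Range: bytes 0-1023/<total>" plus "Accept-Ranges: bytes";
    # otherwise the same request comes back as a plain 200 with the full length.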
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Jun 27 16:54:09 2014 From: nginx-forum at nginx.us (ura) Date: Fri, 27 Jun 2014 12:54:09 -0400 Subject: difficulty adding headers In-Reply-To: <20140627164909.GW1849@mdounin.ru> References: <20140627164909.GW1849@mdounin.ru> Message-ID: <4a2d1439d3deb5222d851243d2b2f5fd.NginxMailingListEnglish@forum.nginx.org> thanks for responding here. the 206 code was advised by every tutorial i found online. i am using nginx 1.7.2, so cannot upgrade. >You shouldn't try to add Accept-Ranges header manually. It will >be added automatically when nginx supports range requests to the >resource in question. how do i notify nginx to serve particular file extensions with the accept-ranges header? what would cause these headers to not be outputted in this case? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251236,251242#msg-251242 From nginx-forum at nginx.us Fri Jun 27 16:57:09 2014 From: nginx-forum at nginx.us (vamshi) Date: Fri, 27 Jun 2014 12:57:09 -0400 Subject: Matching a href spec with Lua regex In-Reply-To: <2487bf998a2356be17d53b745f1b7c14.NginxMailingListEnglish@forum.nginx.org> References: <2487bf998a2356be17d53b745f1b7c14.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8f03d79ed6da8534c9aacb9f63387726.NginxMailingListEnglish@forum.nginx.org> As usual, found my error. The following is properly matcing the regex : local escUri = function (m) local _str = "href=\\"http://10.0.9.44/?_redir_=" _str = _str .. ngx.escape_uri(m[1]) .. "\\"" return _str end local newStr, n, err = ngx.re.gsub(ngx.arg[1], "href=\\"(.*)\\"", escUri, "ijox") print(ngx.arg[1]) --> still original text >From my debug logs, I can see that the regex susbstitution is happening properly. But the href still remains the same. Probably because of chunked data. I will try to aggregate the response body and then do the substitution. But if someone sees anything wrong, your help would be appreciated. -Vamshi Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251234,251243#msg-251243 From nginx-forum at nginx.us Fri Jun 27 16:58:17 2014 From: nginx-forum at nginx.us (ura) Date: Fri, 27 Jun 2014 12:58:17 -0400 Subject: difficulty adding headers In-Reply-To: <4a2d1439d3deb5222d851243d2b2f5fd.NginxMailingListEnglish@forum.nginx.org> References: <20140627164909.GW1849@mdounin.ru> <4a2d1439d3deb5222d851243d2b2f5fd.NginxMailingListEnglish@forum.nginx.org> Message-ID: also.. since only the headers added via the final location block will be used, does this then mean that i need to put conditional logic into that block to check the current url for particular paths - if some headers are needed for some paths only.. ? since most of my served items will end in the .php location codeblock, if my thinking here is correct, i would essentially need to move nearly all of my existing location config blocks inside the php codeblock and then maybe use 'if' statements to apply headers in appropriate ways. or am i missing something here? 
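The detail that usually settles this is the inheritance rule: add_header directives are inherited from the enclosing level only when the current level defines none of its own, so per-path headers can stay in their own location blocks. A sketch with made-up header names:

    server {
        add_header X-Site-Wide "1";        # inherited by locations that add nothing themselves

        location /static/ {
            # no add_header here, so responses still carry X-Site-Wide
        }

        location ~ \.php$ {
            add_header X-Php-Only "1";     # defining one here discards the inherited set;
                                           # X-Site-Wide must be repeated if it is still wanted
        }
    }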
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251236,251244#msg-251244 From nginx-forum at nginx.us Fri Jun 27 17:03:29 2014 From: nginx-forum at nginx.us (vamshi) Date: Fri, 27 Jun 2014 13:03:29 -0400 Subject: Matching a href spec with Lua regex In-Reply-To: <8f03d79ed6da8534c9aacb9f63387726.NginxMailingListEnglish@forum.nginx.org> References: <2487bf998a2356be17d53b745f1b7c14.NginxMailingListEnglish@forum.nginx.org> <8f03d79ed6da8534c9aacb9f63387726.NginxMailingListEnglish@forum.nginx.org> Message-ID: Sorry for the double post, but wanted to post the complete conf ... just in case there was a mistake server { listen 80; server_name 127.0.0.1 10.0.9.44; set $_ActualTarget ""; location / { rewrite_by_lua ' local _args = ngx.req.get_uri_args() ngx.var._ActualTarget = _args["_url_"]; _args["_url_"] = nil print(ngx.var._ActualTarget) ngx.req.set_uri_args(_args); '; resolver 8.8.8.8; proxy_pass_request_body on; proxy_pass_request_headers on; proxy_pass $scheme://$_ActualTarget; header_filter_by_lua ' ngx.header.content_length = nil ngx.header.set_cookie = nil if ngx.header.location then local _location = ngx.header.location _location = ngx.escape_uri(_location) _location = "http://10.0.9.44/?_url_=" .. _location ngx.header.location = _location end '; body_filter_by_lua ' local escUri = function (m) local _str = "href=\\"http://10.0.9.44/?_url_=" _str = _str .. ngx.escape_uri(m[1]) .. "\\"" return _str end local newStr, n, err = ngx.re.gsub(ngx.arg[1], "href=\\"(.*)\\"", escUri, "ijox") print(ngx.arg[1]) '; } # # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /var/www/html; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251234,251245#msg-251245 From nginx-forum at nginx.us Fri Jun 27 17:16:10 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 27 Jun 2014 13:16:10 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: <874c7fd6567ef3ff601710aa51760514.NginxMailingListEnglish@forum.nginx.org> References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> <75218bf1a872bf24caa11b2f0c213590.NginxMailingListEnglish@forum.nginx.org> <0672f88cb185fba67ab1efe913a911ca.NginxMailingListEnglish@forum.nginx.org> <694221f54c843622f81e561cd02445b4.NginxMailingListEnglish@forum.nginx.org> <874c7fd6567ef3ff601710aa51760514.NginxMailingListEnglish@forum.nginx.org> Message-ID: <86cb6842e81fdb9a1301ef8c47adcd3c.NginxMailingListEnglish@forum.nginx.org> It all depends what you are writing, too small blocksize, many seeks, onboard diskcache not working (writeback). Run some disk benchmarks to see what your storage is capable of and compare that to how much data your attempting to write. At the moment your disks are not keeping up with the amount of write requests. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251246#msg-251246 From nginx-forum at nginx.us Fri Jun 27 17:23:04 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Fri, 27 Jun 2014 13:23:04 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: <86cb6842e81fdb9a1301ef8c47adcd3c.NginxMailingListEnglish@forum.nginx.org> References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> <75218bf1a872bf24caa11b2f0c213590.NginxMailingListEnglish@forum.nginx.org> <0672f88cb185fba67ab1efe913a911ca.NginxMailingListEnglish@forum.nginx.org> <694221f54c843622f81e561cd02445b4.NginxMailingListEnglish@forum.nginx.org> <874c7fd6567ef3ff601710aa51760514.NginxMailingListEnglish@forum.nginx.org> <86cb6842e81fdb9a1301ef8c47adcd3c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <28b9de60b107b79caf6bce0b8f88a5ad.NginxMailingListEnglish@forum.nginx.org> Since i have never had to benchmark a hard drive before this will be a new experience for me any tools you recommend to use specifically. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251247#msg-251247 From nginx-forum at nginx.us Fri Jun 27 17:34:15 2014 From: nginx-forum at nginx.us (vamshi) Date: Fri, 27 Jun 2014 13:34:15 -0400 Subject: changes to ngx.arg[1] not getting reflected in final response Message-ID: <05805c32b1bb5742ccd2b2eec92c5363.NginxMailingListEnglish@forum.nginx.org> header_filter_by_lua ' ngx.header.content_length = nil ngx.header.set_cookie = nil if ngx.header.location then local _location = ngx.header.location _location = ngx.escape_uri(_location) _location = "http://10.0.9.44/?_redir_=" .. _location ngx.header.location = _location end '; body_filter_by_lua ' local escUri = function (m) local _esc = "href=\\"http://10.0.9.44/?_redir_=" .. ngx.escape_uri(m[1]) .. "\\"" print(_esc) return _esc end local chunk, eof = ngx.arg[1], ngx.arg[2] local buffered = ngx.ctx.buffered if not buffered then buffered = {} ngx.ctx.buffered = buffered end if chunk ~= "" then buffered[#buffered + 1] = chunk ngx.arg[1] = nil end if eof then local whole = table.concat(buffered) ngx.ctx.buffered = nil local newStr, n, err = ngx.re.gsub(whole, "href=\\"(.*)\\"", escUri, "i") ngx.arg[1] = whole print(whole) end '; Debug Logs: 2014/06/27 22:51:34 [debug] 9059#0: *1 http output filter "/?" 2014/06/27 22:51:34 [debug] 9059#0: *1 http copy filter: "/?" 2014/06/27 22:51:34 [debug] 9059#0: *1 lua body filter for user lua code, uri "/" 2014/06/27 22:51:34 [debug] 9059#0: *1 lua fetching existing ngx.ctx table for the current request 2014/06/27 22:51:34 [debug] 9059#0: *1 lua fetching existing ngx.ctx table for the current request 2014/06/27 22:51:34 [debug] 9059#0: *1 lua compiling gsub regex "href="(.*)"" with options "i" (compile once: 0) (dfa mode: 0) (jit mode: 0) 2014/06/27 22:51:34 [notice] 9059#0: *1 [lua] body_filter_by_lua:5: href="http://10.0.9.44/?_redir_=http%3a%2f%2fwww.google.co.in%2f%3fgfe_rd%3dcr%26amp%3bei%3dHqitU86qKubV8gfRzYDoAQ" while sending to client, client: 10.0.9.44, server: 127.0.0.1, request: "GET /?_redir_=www.google.com HTTP/1.1", upstream: "http://173.194.36.52:80/", host: "10.0.9.44" 2014/06/27 22:51:34 [debug] 9059#0: *1 lua allocate new chainlink and new buf of size 261, cl:086B49B0 2014/06/27 22:51:34 [notice] 9059#0: *1 [lua] body_filter_by_lua:26: 302 Moved

302 Moved
The document has moved here.^M ^M while sending to client, client: 10.0.9.44, server: 127.0.0.1, request: "GET /?_redir_=www.google.com HTTP/1.1", upstream: "http://173.194.36.52:80/", host: "10.0.9.44" 2014/06/27 22:51:34 [debug] 9059#0: *1 lua capture body filter, uri "/" As you can see, print(_esc) show that the URL was successfully URLencoded. Yet, the print(whole) line does not reflect the gsub() What could be issue here ? -Vamshi Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251248,251248#msg-251248 From nginx-forum at nginx.us Fri Jun 27 17:55:42 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 27 Jun 2014 13:55:42 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: <28b9de60b107b79caf6bce0b8f88a5ad.NginxMailingListEnglish@forum.nginx.org> References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> <75218bf1a872bf24caa11b2f0c213590.NginxMailingListEnglish@forum.nginx.org> <0672f88cb185fba67ab1efe913a911ca.NginxMailingListEnglish@forum.nginx.org> <694221f54c843622f81e561cd02445b4.NginxMailingListEnglish@forum.nginx.org> <874c7fd6567ef3ff601710aa51760514.NginxMailingListEnglish@forum.nginx.org> <86cb6842e81fdb9a1301ef8c47adcd3c.NginxMailingListEnglish@forum.nginx.org> <28b9de60b107b79caf6bce0b8f88a5ad.NginxMailingListEnglish@forum.nginx.org> Message-ID: Try via a forum like http://www.overclock.net/t/1193676/looking-for-hdd-benchmark-utility Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251249#msg-251249 From mdounin at mdounin.ru Fri Jun 27 18:06:20 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Jun 2014 22:06:20 +0400 Subject: difficulty adding headers In-Reply-To: <4a2d1439d3deb5222d851243d2b2f5fd.NginxMailingListEnglish@forum.nginx.org> References: <20140627164909.GW1849@mdounin.ru> <4a2d1439d3deb5222d851243d2b2f5fd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140627180620.GY1849@mdounin.ru> Hello! On Fri, Jun 27, 2014 at 12:54:09PM -0400, ura wrote: > thanks for responding here. > the 206 code was advised by every tutorial i found online. > i am using nginx 1.7.2, so cannot upgrade. Ok, so your problem is likely due to "return 206" added. Just remove it, as well as other garbage from your config you don't understand. It's really good idea to read the documentation to make sure you understand what you are writing into your config instead of blindly following "tutorials online". I very much doubt there is any tutorial which suggests to use "return 206" though, as it's always wrong. > >You shouldn't try to add Accept-Ranges header manually. It will > >be added automatically when nginx supports range requests to the > >resource in question. > > how do i notify nginx to serve particular file extensions with the > accept-ranges header? Again: you shouldn't do anything. If range requests are supported, the header will be added automatically. > what would cause these headers to not be outputted in this case? See above, mostly likely it's caused by "return 206" in your config. 
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Jun 27 18:23:47 2014 From: nginx-forum at nginx.us (ura) Date: Fri, 27 Jun 2014 14:23:47 -0400 Subject: difficulty adding headers In-Reply-To: <20140627180620.GY1849@mdounin.ru> References: <20140627180620.GY1849@mdounin.ru> Message-ID: <22e220b23ba53455537ffb6cd63d21ff.NginxMailingListEnglish@forum.nginx.org> this stackoverflow response on the topic is one that quotes the code i used... i have also seen this page linked by several other pages which said this was a workable approach: http://stackoverflow.com/questions/14598565/serving-206-byte-range-through-nginx-django i am not blindly following anything - i am looking at all the available information and referred to the nginx documentation - i am using what works and disregarding what does not. so far i have not found an explanation of this situation i am experiencing in the documentation or on forums, hence i am here asking. in any case, i removed return 206; the situation did not change. i am unclear why removing 'return 206' would cause a 200 response to become a 206 response! ;) " If range requests are supported, the header will be added automatically." i am making a simple request for a static mp4 file via curl to nginx 1.7.2.. i am not clear on your use of the idea of 'if range requests are supported' in this context. how would i know if they are supported or not? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251236,251255#msg-251255 From nginx-forum at nginx.us Fri Jun 27 18:33:37 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Fri, 27 Jun 2014 14:33:37 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> <75218bf1a872bf24caa11b2f0c213590.NginxMailingListEnglish@forum.nginx.org> <0672f88cb185fba67ab1efe913a911ca.NginxMailingListEnglish@forum.nginx.org> <694221f54c843622f81e561cd02445b4.NginxMailingListEnglish@forum.nginx.org> <874c7fd6567ef3ff601710aa51760514.NginxMailingListEnglish@forum.nginx.org> <86cb6842e81fdb9a1301ef8c47adcd3c.NginxMailingListEnglish@forum.nginx.org> <28b9de60b107b79caf6bce0b8f88a5ad.NginxMailingListEnglish@forum.nginx.org> Message-ID: <98812c2c0ba1ebd8cd733c99c31533fb.NginxMailingListEnglish@forum.nginx.org> The results got even more fascinating as i increased the buffer size's to the following. client_max_body_size 0; client_body_buffer_size 1000m; mp4_buffer_size 700m; mp4_max_buffer_size 1000m; http://s633.photobucket.com/user/C0nw0nk/media/Untitled-6.png.html Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251256#msg-251256 From nginx-forum at nginx.us Fri Jun 27 18:50:18 2014 From: nginx-forum at nginx.us (ura) Date: Fri, 27 Jun 2014 14:50:18 -0400 Subject: difficulty adding headers In-Reply-To: <22e220b23ba53455537ffb6cd63d21ff.NginxMailingListEnglish@forum.nginx.org> References: <20140627180620.GY1849@mdounin.ru> <22e220b23ba53455537ffb6cd63d21ff.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2aa1fe592eaee51ea66f565c6e0eb76d.NginxMailingListEnglish@forum.nginx.org> i just did a test of inserting a meaningless header into the response, by adding the add_header directive into the various levels of the nginx config, beginning with http, then server and then the location that i have setup to focus on mp4 files. 
i found that the header is successfully inserted inside the http and server blocks, yet when i request an mp4 file via curl, the header is not returned.. which suggests to me that the location block i am using is dysfunctional in some way. currently i am using the block below, though i have used various other versions which also failed. location ~ \.(mp4|mp4a)$ { add_header "name" "value"; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251236,251257#msg-251257 From mdounin at mdounin.ru Fri Jun 27 19:20:41 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Jun 2014 23:20:41 +0400 Subject: difficulty adding headers In-Reply-To: <22e220b23ba53455537ffb6cd63d21ff.NginxMailingListEnglish@forum.nginx.org> References: <20140627180620.GY1849@mdounin.ru> <22e220b23ba53455537ffb6cd63d21ff.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140627192041.GZ1849@mdounin.ru> Hello! On Fri, Jun 27, 2014 at 02:23:47PM -0400, ura wrote: > this stackoverflow response on the topic is one that quotes the code i > used... i have also seen this page linked by several other pages which said > this was a workable approach: > http://stackoverflow.com/questions/14598565/serving-206-byte-range-through-nginx-django I think I understand the problem of the author of this question. I've added an answere there - most likely, it has ssi or something like enabled for all MIME types. > i am not blindly following anything - i am looking at all the available > information and referred to the nginx documentation - i am using what works > and disregarding what does not. so far i have not found an explanation of > this situation i am experiencing in the documentation or on forums, hence i > am here asking. > > in any case, i removed return 206; > the situation did not change. > i am unclear why removing 'return 206' would cause a 200 response to become > a 206 response! ;) The 206 code indicates that "this was 200, but due to Range header only part of the actual response is returned". > " If range requests are > supported, the header will be added automatically." > > i am making a simple request for a static mp4 file via curl to nginx 1.7.2.. > i am not clear on your use of the idea of 'if range requests are supported' > in this context. how would i know if they are supported or not? If you don't see the Accept-Ranges header added to a response, then range requests are not supported. If you don't see Accept-Ranges header added to responses to a static mp4 file, then there is something wrong in your config. Either a response is handled in a wrong location and ends up being handled not by nginx, or there is a filter active for some/all responses which may modify them and hence disables range support. If you can't trace the problematic configuration and/or not sure it's the case, you may start testing with a clean config. 
E.g., with a trivial configuration like this:

http {
    include mime.types;

    server {
        listen 8080;

        location / {
            mp4;
        }
    }
}

a request to a static mp4 file results in:

$ curl -I http://localhost:8080/mp4/test.mp4
HTTP/1.1 200 OK
Server: nginx/1.7.3
Date: Fri, 27 Jun 2014 18:38:24 GMT
Content-Type: video/mp4
Content-Length: 7147296
Last-Modified: Fri, 15 Jun 2012 15:50:37 GMT
Connection: keep-alive
ETag: "4fdb59cd-6d0f20"
Accept-Ranges: bytes

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Fri Jun 27 19:25:16 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Fri, 27 Jun 2014 15:25:16 -0400
Subject: Nginx Windows High Traffic issues
In-Reply-To: <98812c2c0ba1ebd8cd733c99c31533fb.NginxMailingListEnglish@forum.nginx.org>
References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> <75218bf1a872bf24caa11b2f0c213590.NginxMailingListEnglish@forum.nginx.org> <0672f88cb185fba67ab1efe913a911ca.NginxMailingListEnglish@forum.nginx.org> <694221f54c843622f81e561cd02445b4.NginxMailingListEnglish@forum.nginx.org> <874c7fd6567ef3ff601710aa51760514.NginxMailingListEnglish@forum.nginx.org> <86cb6842e81fdb9a1301ef8c47adcd3c.NginxMailingListEnglish@forum.nginx.org> <28b9de60b107b79caf6bce0b8f88a5ad.NginxMailingListEnglish@forum.nginx.org> <98812c2c0ba1ebd8cd733c99c31533fb.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Which shows disk IO is much better, which to me indicates there were/are too many small writes to disk. When some parts are slow, tuning is a big-time issue with nginx no matter which OS you're running.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251259#msg-251259

From nginx-forum at nginx.us Fri Jun 27 19:35:11 2014
From: nginx-forum at nginx.us (c0nw0nk)
Date: Fri, 27 Jun 2014 15:35:11 -0400
Subject: Nginx Windows High Traffic issues
In-Reply-To:
References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> <75218bf1a872bf24caa11b2f0c213590.NginxMailingListEnglish@forum.nginx.org> <0672f88cb185fba67ab1efe913a911ca.NginxMailingListEnglish@forum.nginx.org> <694221f54c843622f81e561cd02445b4.NginxMailingListEnglish@forum.nginx.org> <874c7fd6567ef3ff601710aa51760514.NginxMailingListEnglish@forum.nginx.org> <86cb6842e81fdb9a1301ef8c47adcd3c.NginxMailingListEnglish@forum.nginx.org> <28b9de60b107b79caf6bce0b8f88a5ad.NginxMailingListEnglish@forum.nginx.org> <98812c2c0ba1ebd8cd733c99c31533fb.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <14bd9cdd1c85b0221eece156585c6bf7.NginxMailingListEnglish@forum.nginx.org>

Hmm, well, i have figured out that it is my mp4 buffers that need fixing, but i reckon my largest video file on the server is maybe 700mb. As for figuring out what to set this to, i am currently just playing around with it to see what works best.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251261#msg-251261

From mdounin at mdounin.ru Fri Jun 27 19:38:29 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 27 Jun 2014 23:38:29 +0400
Subject: [PROPOSAL PATCH] use a return code for ngx_http_terminate_request()
In-Reply-To:
References:
Message-ID: <20140627193829.GA1849@mdounin.ru>

Hello!
On Fri, Jun 27, 2014 at 09:07:27AM -0400, crespin wrote:

> Hello,
>
> Reading the ngx_http_request.c source code, I notice that
> ngx_http_terminate_request() is sometimes called with 0 instead of a return
> code.
>
> 0 is a valid value for a return code ... it's NGX_OK.
>
> Is the patch valid?

The ngx_http_terminate_request() function uses "rc" only when it's positive, and "0" doesn't really mean anything except "we don't have an error code".

--
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Fri Jun 27 19:41:59 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 27 Jun 2014 23:41:59 +0400
Subject: [PROPOSAL PATCH] use a return code for ngx_http_close_request()
In-Reply-To:
References:
Message-ID: <20140627194159.GB1849@mdounin.ru>

Hello!

On Fri, Jun 27, 2014 at 09:26:23AM -0400, crespin wrote:

> Hello,
>
> here is another patch, still on ngx_http_request.c.
> In the function ngx_http_close_request(), the second parameter is an error
> code.
>
> This error code is used in ngx_http_free_request() to set the HTTP status
> code if it's not present or if no bytes have been sent yet.
>
> Using NGX_OK instead of zero seems - to me - valid.

The same logic applies as in the previous answer about ngx_http_terminate_request().

> When ngx_http_close_request() is called after an error, I guess it must be
> NGX_HTTP_INTERNAL_SERVER_ERROR.
>
> Perhaps it's better to do two patches: one for zero to NGX_OK and another
> for NGX_HTTP_INTERNAL_SERVER_ERROR.

Use of NGX_HTTP_INTERNAL_SERVER_ERROR is wrong, as there is no chance that this response code will ever actually be sent. Response headers are either already sent, or won't be sent at all.

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Fri Jun 27 20:20:11 2014
From: nginx-forum at nginx.us (c0nw0nk)
Date: Fri, 27 Jun 2014 16:20:11 -0400
Subject: Nginx Windows High Traffic issues
In-Reply-To: <14bd9cdd1c85b0221eece156585c6bf7.NginxMailingListEnglish@forum.nginx.org>
References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> <75218bf1a872bf24caa11b2f0c213590.NginxMailingListEnglish@forum.nginx.org> <0672f88cb185fba67ab1efe913a911ca.NginxMailingListEnglish@forum.nginx.org> <694221f54c843622f81e561cd02445b4.NginxMailingListEnglish@forum.nginx.org> <874c7fd6567ef3ff601710aa51760514.NginxMailingListEnglish@forum.nginx.org> <86cb6842e81fdb9a1301ef8c47adcd3c.NginxMailingListEnglish@forum.nginx.org> <28b9de60b107b79caf6bce0b8f88a5ad.NginxMailingListEnglish@forum.nginx.org> <98812c2c0ba1ebd8cd733c99c31533fb.NginxMailingListEnglish@forum.nginx.org> <14bd9cdd1c85b0221eece156585c6bf7.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <633f6747f3fa52292487bf7003152747.NginxMailingListEnglish@forum.nginx.org>

I think i found the solution: rather than buffering or involving pseudo-streaming for mp4 videos that are already html5 compatible, i just leave it to the browsers rather than my server. So to solve my I/O usage issue i dropped "mp4;" from my server config ("#mp4;") and now my I/O usage is basically back at 0.

Perhaps nginx should look at the I/O usage of that function and see if they can make it better.
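For context, the directives being tuned in this thread belong to nginx's mp4 module; a minimal sketch of such a location block is below. The path and buffer values are illustrative assumptions, not settings taken from this thread - mp4_max_buffer_size only has to cover a file's moov atom (its metadata), not the whole file, so values far smaller than the multi-hundred-megabyte buffers tried earlier are normally enough.

    # illustrative values only - not taken from this thread
    location ~ \.mp4$ {
        root /var/www/videos;        # hypothetical document root
        mp4;                         # enable mp4 pseudo-streaming (?start= support)
        mp4_buffer_size     1m;      # initial buffer used to process the file
        mp4_max_buffer_size 10m;     # upper bound while reading the moov atom
    }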
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251264#msg-251264

From nginx-forum at nginx.us Fri Jun 27 20:30:01 2014
From: nginx-forum at nginx.us (c0nw0nk)
Date: Fri, 27 Jun 2014 16:30:01 -0400
Subject: Nginx Windows High Traffic issues
In-Reply-To: <633f6747f3fa52292487bf7003152747.NginxMailingListEnglish@forum.nginx.org>
References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> <75218bf1a872bf24caa11b2f0c213590.NginxMailingListEnglish@forum.nginx.org> <0672f88cb185fba67ab1efe913a911ca.NginxMailingListEnglish@forum.nginx.org> <694221f54c843622f81e561cd02445b4.NginxMailingListEnglish@forum.nginx.org> <874c7fd6567ef3ff601710aa51760514.NginxMailingListEnglish@forum.nginx.org> <86cb6842e81fdb9a1301ef8c47adcd3c.NginxMailingListEnglish@forum.nginx.org> <28b9de60b107b79caf6bce0b8f88a5ad.NginxMailingListEnglish@forum.nginx.org> <98812c2c0ba1ebd8cd733c99c31533fb.NginxMailingListEnglish@forum.nginx.org> <14bd9cdd1c85b0221eece156585c6bf7.NginxMailingListEnglish@forum.nginx.org> <633f6747f3fa52292487bf7003152747.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

My new solution did not last very long; everything shot up again, so the mp4 function is needed to drop I/O usage, but what the optimal settings for the buffers are really does baffle me.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251265#msg-251265

From nginx-forum at nginx.us Fri Jun 27 21:19:22 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Fri, 27 Jun 2014 17:19:22 -0400
Subject: Nginx Windows High Traffic issues
In-Reply-To: <633f6747f3fa52292487bf7003152747.NginxMailingListEnglish@forum.nginx.org>
References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> <75218bf1a872bf24caa11b2f0c213590.NginxMailingListEnglish@forum.nginx.org> <0672f88cb185fba67ab1efe913a911ca.NginxMailingListEnglish@forum.nginx.org> <694221f54c843622f81e561cd02445b4.NginxMailingListEnglish@forum.nginx.org> <874c7fd6567ef3ff601710aa51760514.NginxMailingListEnglish@forum.nginx.org> <86cb6842e81fdb9a1301ef8c47adcd3c.NginxMailingListEnglish@forum.nginx.org> <28b9de60b107b79caf6bce0b8f88a5ad.NginxMailingListEnglish@forum.nginx.org> <98812c2c0ba1ebd8cd733c99c31533fb.NginxMailingListEnglish@forum.nginx.org> <14bd9cdd1c85b0221eece156585c6bf7.NginxMailingListEnglish@forum.nginx.org> <633f6747f3fa52292487bf7003152747.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

c0nw0nk Wrote:
-------------------------------------------------------
> Perhaps nginx should look at the I/O usage of that function
> and see if they can make it better.

It's a disk subsystem issue, which is under the control of the OS, not nginx; a good 15k SAS drive does wonders.
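None of the following directives come up in this thread, so treat this purely as a hedged sketch of the kind of large-file I/O tuning that is often experimented with alongside the mp4 module; the location path and sizes are assumptions, not recommendations from the posters.

    # hypothetical tuning for large static video files; values are examples only
    location /videos/ {
        sendfile       on;
        directio       4m;         # read files of 4m and larger with O_DIRECT,
                                   # bypassing the OS page cache (sendfile is
                                   # not used for such files)
        output_buffers 1 512k;     # buffers used when sendfile is not in effect
    }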
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251266#msg-251266

From nginx-forum at nginx.us Fri Jun 27 21:46:45 2014
From: nginx-forum at nginx.us (c0nw0nk)
Date: Fri, 27 Jun 2014 17:46:45 -0400
Subject: Nginx Windows High Traffic issues
In-Reply-To:
References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> <75218bf1a872bf24caa11b2f0c213590.NginxMailingListEnglish@forum.nginx.org> <0672f88cb185fba67ab1efe913a911ca.NginxMailingListEnglish@forum.nginx.org> <694221f54c843622f81e561cd02445b4.NginxMailingListEnglish@forum.nginx.org> <874c7fd6567ef3ff601710aa51760514.NginxMailingListEnglish@forum.nginx.org> <86cb6842e81fdb9a1301ef8c47adcd3c.NginxMailingListEnglish@forum.nginx.org> <28b9de60b107b79caf6bce0b8f88a5ad.NginxMailingListEnglish@forum.nginx.org> <98812c2c0ba1ebd8cd733c99c31533fb.NginxMailingListEnglish@forum.nginx.org> <14bd9cdd1c85b0221eece156585c6bf7.NginxMailingListEnglish@forum.nginx.org> <633f6747f3fa52292487bf7003152747.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

So a disk spinning at 15000 rpm, compared to my current hard drive spinning at 7000 rpm, still does better than an SSD? This is my current hard drive i posted earlier, i do believe:
http://www.hgst.com/hard-drives/enterprise-hard-drives/enterprise-sata-drives/ultrastar-7k4000

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251267#msg-251267

From schlie at comcast.net Fri Jun 27 22:44:18 2014
From: schlie at comcast.net (Paul Schlie)
Date: Fri, 27 Jun 2014 18:44:18 -0400
Subject: Nginx Windows High Traffic issues
In-Reply-To:
References: <20140626184416.GP1849@mdounin.ru> <35e633dc7102f09d9e9f3ad945d18a95.NginxMailingListEnglish@forum.nginx.org> <287ab5bd18dd6c3d8a5e35440e79fe99.NginxMailingListEnglish@forum.nginx.org> <942fa1dc854a6ee05080a74beef99076.NginxMailingListEnglish@forum.nginx.org> <75218bf1a872bf24caa11b2f0c213590.NginxMailingListEnglish@forum.nginx.org> <0672f88cb185fba67ab1efe913a911ca.NginxMailingListEnglish@forum.nginx.org> <694221f54c843622f81e561cd02445b4.NginxMailingListEnglish@forum.nginx.org> <874c7fd6567ef3ff601710aa51760514.NginxMailingListEnglish@forum.nginx.org> <86cb6842e81fdb9a1301ef8c47adcd3c.NginxMailingListEnglish@forum.nginx.org> <28b9de60b107b79caf6bce0b8f88a5ad.NginxMailingListEnglish@forum.nginx.org> <98812c2c0ba1ebd8cd733c99c31533fb.NginxMailingListEnglish@forum.nginx.org> <14bd9cdd1c85b0221eece156585c6bf7.NginxMailingListEnglish@forum.nginx.org> <633f6747f3fa52292487bf7003152747.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

I don't know if what you're experiencing is related to a problem I'm still tracking down, specifically that multiple redundant read-streams and corresponding temp_files are being opened to read the same file from a backend server for what appears to be a single initial GET request by a client for a large mp4 file which has not yet been locally reverse-proxy cached by nginx as a substantially static file. This appears to end up creating 6-10x more traffic and disk activity than is actually required to cache the single file (depending on how many redundant read-stream/temp_files are created). If a server is attempting to reverse proxy many such relatively large files, it could easily saturate nginx with network/disk traffic until most such files requested were eventually locally cached.
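One commonly used mitigation for duplicated upstream fetches of the same resource is proxy_cache_lock, which allows only one request per cache key to populate the cache while other requests wait. It is not proposed anywhere in this thread, and whether it covers the exact temp_file duplication described above is not established, so the block below is only a sketch; the upstream name, paths and sizes are assumptions.

    # inside the http{} block; names, paths and sizes are illustrative only
    proxy_cache_path /var/cache/nginx/mp4 keys_zone=mp4cache:10m max_size=10g inactive=7d;

    server {
        location /media/ {
            proxy_pass               http://backend;   # assumed upstream{} defined elsewhere
            proxy_cache              mp4cache;
            proxy_cache_valid        200 1d;
            proxy_cache_lock         on;               # one upstream fetch per cache key
            proxy_cache_lock_timeout 60s;              # other requests wait up to 60s
        }
    }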
On Jun 27, 2014, at 4:30 PM, c0nw0nk wrote:

> My new solution did not last very long; everything shot up again, so the mp4
> function is needed to drop I/O usage, but what the optimal settings for
> the buffers are really does baffle me.
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251265#msg-251265
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us Fri Jun 27 23:26:04 2014
From: nginx-forum at nginx.us (ura)
Date: Fri, 27 Jun 2014 19:26:04 -0400
Subject: difficulty adding headers
In-Reply-To: <20140627192041.GZ1849@mdounin.ru>
References: <20140627192041.GZ1849@mdounin.ru>
Message-ID: <0c2b988397b6234db8d4050b4f7c9314.NginxMailingListEnglish@forum.nginx.org>

ok, thanks for clarifying. i just did a clean test as suggested and do indeed see the Accept-Ranges header being returned automatically by nginx.

in doing that - the mp4 video still does not stream/pre-buffer as i am desiring.

i accessed the test video file that is on the homepage of the video.js website via curl and there is no Accept-Ranges header being sent in the response for their file, on their server... yet their test video file preloads and streams correctly when i play it via the video.js player on my local dev machine (served via nginx).

when i download their test video and play it via my same dev machine here, the preloading does not function.

so...
a) Accept-Ranges is not the cause of the lack of streaming here.
b) the same file will preload on another server, yet not on mine (the headers being sent from the other server are not obviously different to the ones being sent from my own nginx server).

i even downloaded a php class to correct the moov atom for the mp4 file, in case that was the challenge here. although some files did need to be fixed, the issue of the video files' moov atoms being in the wrong position is not the cause of my main challenge with the pre-buffering of videos.

so now i am stuck again, with no idea of what i am missing from my server to activate pre-buffering of video. perhaps i will message the video.js coders directly.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251236,251269#msg-251269

From nginx-forum at nginx.us Sat Jun 28 00:27:13 2014
From: nginx-forum at nginx.us (c0nw0nk)
Date: Fri, 27 Jun 2014 20:27:13 -0400
Subject: Nginx Windows High Traffic issues
In-Reply-To:
References:
Message-ID: <3f78d45ecff14664772e941a9a994bce.NginxMailingListEnglish@forum.nginx.org>

Paul Schlie Wrote:
-------------------------------------------------------
> I don't know if what you're experiencing is related to a problem I'm
> still tracking down, specifically that multiple redundant read-streams
> and corresponding temp_files are being opened to read the same file
> from a backend server for what appears to be a single initial GET
> request by a client for a large mp4 file which has not yet been
> locally reverse-proxy cached by nginx as a substantially static file.
> This appears to end up creating 6-10x more traffic and disk activity
> than is actually required to cache the single file (depending on how
> many redundant read-stream/temp_files are created). If a server is
> attempting to reverse proxy many such relatively large files, it could
> easily saturate nginx with network/disk traffic until most such files
> requested were eventually locally cached.
>
> > On Jun 27, 2014, at 4:30 PM, c0nw0nk wrote:
> >
> > My new solution did not last very long; everything shot up again, so the mp4
> > function is needed to drop I/O usage, but what the optimal settings for
> > the buffers are really does baffle me.
> >
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251265#msg-251265
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

Well, i don't proxy anything; everything is hosted locally and php is run by fastcgi, but it is all on the same machine.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251270#msg-251270

From mdounin at mdounin.ru Sat Jun 28 02:00:48 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 28 Jun 2014 06:00:48 +0400
Subject: difficulty adding headers
In-Reply-To: <0c2b988397b6234db8d4050b4f7c9314.NginxMailingListEnglish@forum.nginx.org>
References: <20140627192041.GZ1849@mdounin.ru> <0c2b988397b6234db8d4050b4f7c9314.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140628020048.GC1849@mdounin.ru>

Hello!

On Fri, Jun 27, 2014 at 07:26:04PM -0400, ura wrote:

> ok, thanks for clarifying. i just did a clean test as suggested and do
> indeed see the Accept-Ranges header being returned automatically by nginx.
>
> in doing that - the mp4 video still does not stream/pre-buffer as i am
> desiring.
>
> i accessed the test video file that is on the homepage of the video.js
> website via curl and there is no Accept-Ranges header being sent in the
> response for their file, on their server... yet their test video file
> preloads and streams correctly when i play it via the video.js player on my
> local dev machine (served via nginx).
>
> when i download their test video and play it via my same dev machine here,
> the preloading does not function.
>
> so...
> a) Accept-Ranges is not the cause of the lack of streaming here.
> b) the same file will preload on another server, yet not on mine (the
> headers being sent from the other server are not obviously different to the
> ones being sent from my own nginx server).
>
> i even downloaded a php class to correct the moov atom for the mp4 file, in
> case that was the challenge here. although some files did need to be fixed,
> the issue of the video files' moov atoms being in the wrong position is not
> the cause of my main challenge with the pre-buffering of videos.
>
> so now i am stuck again, with no idea of what i am missing from my server to
> activate pre-buffering of video. perhaps i will message the video.js coders
> directly.

As far as I see, "video.js" isn't a player per se, but rather an interface to the HTML5 player in your browser. You may start debugging with just a