From mdounin at mdounin.ru Sat Mar 1 00:11:24 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 1 Mar 2014 04:11:24 +0400 Subject: custom handler module - dynamic response with unknown content length In-Reply-To: References: Message-ID: <20140301001124.GV34696@mdounin.ru> Hello! On Fri, Feb 28, 2014 at 10:44:38PM +0330, Yasser Zamani wrote: > Hi there, > > I learned some about how to write a handler module from [1] and [2]. > [1] http://blog.zhuzhaoyuan.com/2009/08/creating-a-hello-world-nginx-module/ > [2] http://www.evanmiller.org/nginx-modules-guide.html#compiling > > But I need to rewrite [1] to send dynamically generated octect stream to > client with unknown content length but it'll be large usually. Firstly I > tried: [...] > /* send the buffer chain of your response */ > int i; > for(i=1;i<10000000;i++){b->flush = > (0==(i%1000));rc=ngx_http_output_filter(r, > &out);if(rc!=NGX_OK){ngx_log_error(NGX_LOG_ALERT,r->connection->log,0,"bad > rc, rc:%d", rc);return rc;}} > b->last_buf = 1; > return ngx_http_output_filter(r, &out); > > But it simply fails with following errors: > > 2014/02/28 22:17:25 [alert] 25115#0: *1 zero size buf in writer t:0 r:0 f:0 > 00000000 080C7431-080C7431 00000000 0-0, client: 127.0.0.1, server: > localhost, request: "GET / HTTP/1.1", host: "localhost:8080" > 2014/02/28 22:17:25 [alert] 25115#0: *1 bad rc, rc:-1, client: 127.0.0.1, > server: localhost, request: "GET / HTTP/1.1", host: "localhost:8080" You've tried to send the same chain with the same buffer multiple times. After a buffer is sent for the first time, its pointers are adjusted to indicate it was sent - b->pos moved to b->last, and buffer's size become zero. Second attempt to send the same buffer will expectedly trigger the "zero size buf" check. > WHAT IS THE CORRECT WAY TO ACCOMPLISH MY NEED? (I searched a lot but I only > found [3] which has rc=-2 rather than -1) > [3] http://web.archiveorange.com/archive/v/yKUXMLzGBexXA3PccJa6 Trivial aproach is to prepare full output chain, and then send it using single ngx_http_output_filter() call. -- Maxim Dounin http://nginx.org/ From agentzh at gmail.com Sat Mar 1 04:09:34 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Fri, 28 Feb 2014 20:09:34 -0800 Subject: Best possible configuration for file upload In-Reply-To: <7d5578f969e82d90766e268aa9f478d7.NginxMailingListEnglish@forum.nginx.org> References: <7d5578f969e82d90766e268aa9f478d7.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Wed, Feb 26, 2014 at 2:41 AM, snarapureddy wrote: > We are using nginx for file uploads instead of directing to the backend > servrs. Used lua openresty module to get the data in chunks in write it to > local disk. File size could vary from few KB's to 10MB. > Ensure you're interleaving disk writes and network reads. The recommended way is to read a chunk from the network and write it immediately to the file system and again and again. > We are tuning worker process, connections, accept_mutex off etc, but if we > cuncerrently upload files some of the connections were very slow. > > Chuck size is 4096. You can try a bit larger chunk sizes like 16KB or 32KB. > CPU utilization is very minimal. We are running 10 > worker processes, but most of the cases sam process is handling multiple > connections and they are becoming slow. This sounds like a good analysis candidate for the off-CPU flame graph tool: https://github.com/agentzh/nginx-systemtap-toolkit#sample-bt-off-cpu This can help us identify exactly what the bottleneck is in your nginx. 
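A sampling run with these tools usually looks roughly like the sketch below (the PID is illustrative and the option spellings should be double-checked against the toolkit's README; the stack-folding and SVG-rendering scripts come from Brendan Gregg's FlameGraph repository):

    # sample backtraces of the nginx worker with PID 12345 for ~10 seconds
    ./sample-bt-off-cpu -p 12345 -t 10 > a.bt
    # fold the stacks and render the flame graph
    stackcollapse-stap.pl a.bt > a.cbt
    flamegraph.pl a.cbt > a.svg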
Also, measuring the epoll loop blocking latency distribution could be insightful too: https://github.com/agentzh/stapxx#epoll-loop-blocking-distr Try running more nginx worker processes if file IO syscalls are blocking your workers too much. BTW, you may get more and faster responses if you post such questions on the openresty-en mailing list: https://groups.google.com/group/openresty-en Best regards, -agentzh From yasser.zamani at live.com Sat Mar 1 09:18:11 2014 From: yasser.zamani at live.com (Yasser Zamani) Date: Sat, 1 Mar 2014 12:48:11 +0330 Subject: custom handler module - dynamic response with unknown content length In-Reply-To: <20140301001124.GV34696@mdounin.ru> References: <20140301001124.GV34696@mdounin.ru> Message-ID: Thanks for your response.... On Sat 01 Mar 2014 03:41:24 AM IRST, Maxim Dounin wrote: > Hello! > > You've tried to send the same chain with the same buffer multiple > times. After a buffer is sent for the first time, its pointers > are adjusted to indicate it was sent - b->pos moved to b->last, and > buffer's size become zero. Second attempt to send the same buffer > will expectedly trigger the "zero size buf" check. > Great! I tried: for(i=1;i<10000000;i++){b->flush = (0==(i%100));rc=ngx_http_output_filter(r, &out);if(rc!=NGX_OK){ngx_log_error(NGX_LOG_ALERT,r->connection->log,0,"bad rc, rc:%d", rc);return rc;}b->pos = ngx_hello_string;b->last = ngx_hello_string + sizeof(ngx_hello_string) - 1;b->memory = 1;b->last_buf = 0;} which now fails with: 2014/03/01 12:23:39 [alert] 5022#0: *1 bad rc, rc:-2, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost:8080" 2014/03/01 12:23:39 [alert] 5022#0: *1 zero size buf in writer t:0 r:0 f:0 00000000 080C7431-080C7431 00000000 0-0, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost:8080" And to resolve this I know I should follow the solution at [3]. [3] http://web.archiveorange.com/archive/v/yKUXMLzGBexXA3PccJa6 But is this a clean way to call 'ngx_http_output_filter' more than once? (please see below to know why I have to call it multiple times) FYI: Previous try did not fail in SECOND attempt but it failed when client successfully download 936 bytes of repeated "Hello, World!"s (72 attempts). > Trivial aproach is to prepare full output chain, and then send it > using single ngx_http_output_filter() call. The full output chain will be usually a long video which is not a file but will be generated in memory on the fly. I have to send each chunk as soon as it's ready because the stream generation is time consuming and client COULD NOT wait for all to be done. Suppose it's a 1 hour video which dynamically has been generated and I would like to send each minute as soon as it's ready without waiting for all 1 hour transcoding. I'm aware about nginx's mp4 module but it does not support time consuming dynamically generated video on memory. WHAT WILL BE THE CORRECT WAY TO DO THIS IN NGINX? Thanks again! 
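For reference, the "single ngx_http_output_filter() call" approach quoted above means linking several buffers into one chain and passing the head of that chain to the output filter once. A rough sketch, simplified (allocation results should be NULL-checked; ngx_hello_string is the static buffer from the module code earlier in this thread):

    ngx_chain_t   *out = NULL, **ll = &out;
    ngx_uint_t     i, n = 100;

    for (i = 0; i < n; i++) {
        ngx_buf_t    *b  = ngx_calloc_buf(r->pool);
        ngx_chain_t  *cl = ngx_alloc_chain_link(r->pool);

        b->pos      = ngx_hello_string;
        b->last     = ngx_hello_string + sizeof(ngx_hello_string) - 1;
        b->memory   = 1;              /* buffer points to read-only memory */
        b->last_buf = (i == n - 1);   /* only the final buffer ends the response */

        cl->buf  = b;
        cl->next = NULL;
        *ll = cl;                     /* append the link to the end of the chain */
        ll  = &cl->next;
    }

    return ngx_http_output_filter(r, out);

This only covers the case where the whole response can be prepared up front; for data that becomes available over time, the approaches discussed later in this thread (installing r->write_event_handler, or proxying to an external process) apply instead.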
From anoopalias01 at gmail.com Sat Mar 1 09:35:37 2014 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sat, 1 Mar 2014 15:05:37 +0530 Subject: Using fastcgi_cache with mediawiki Message-ID: Hi, There is sample varnish config file for mediawiki that purge cache on updates ######### http://www.mediawiki.org/wiki/Manual:Varnish_caching#Configuring_Varnish_3.x ######### Can the same setting be used in LocalSettings.php and used with the fastcgi_cache with the aid of nginx_ngx_cache_purge and if so what will be the nginx config for the same Thanks, -- *Anoop P Alias* GNUSYS -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Mar 1 11:31:42 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 1 Mar 2014 15:31:42 +0400 Subject: custom handler module - dynamic response with unknown content length In-Reply-To: References: <20140301001124.GV34696@mdounin.ru> Message-ID: <20140301113142.GB34696@mdounin.ru> Hello! On Sat, Mar 01, 2014 at 12:48:11PM +0330, Yasser Zamani wrote: > Thanks for your response.... > > On Sat 01 Mar 2014 03:41:24 AM IRST, Maxim Dounin wrote: > >Hello! > > > >You've tried to send the same chain with the same buffer multiple > >times. After a buffer is sent for the first time, its pointers > >are adjusted to indicate it was sent - b->pos moved to b->last, and > >buffer's size become zero. Second attempt to send the same buffer > >will expectedly trigger the "zero size buf" check. > > > Great! I tried: > > for(i=1;i<10000000;i++){b->flush = > (0==(i%100));rc=ngx_http_output_filter(r, > &out);if(rc!=NGX_OK){ngx_log_error(NGX_LOG_ALERT,r->connection->log,0,"bad > rc, rc:%d", rc);return rc;}b->pos = ngx_hello_string;b->last = > ngx_hello_string + sizeof(ngx_hello_string) - 1;b->memory = 1;b->last_buf = > 0;} > > which now fails with: > > 2014/03/01 12:23:39 [alert] 5022#0: *1 bad rc, rc:-2, client: 127.0.0.1, > server: localhost, request: "GET / HTTP/1.1", host: "localhost:8080" > 2014/03/01 12:23:39 [alert] 5022#0: *1 zero size buf in writer t:0 r:0 f:0 > 00000000 080C7431-080C7431 00000000 0-0, client: 127.0.0.1, server: > localhost, request: "GET / HTTP/1.1", host: "localhost:8080" > > And to resolve this I know I should follow the solution at [3]. > [3] http://web.archiveorange.com/archive/v/yKUXMLzGBexXA3PccJa6 > > But is this a clean way to call 'ngx_http_output_filter' more than once? > (please see below to know why I have to call it multiple times) The ngx_http_output_filter() function can be called more than once, but usually it doesn't make sense - instead, one should install r->write_event_handler and do subsequent calls once it's possible to write additional data to socket buffer. Working with event's isn't something trivial though. > FYI: Previous try did not fail in SECOND attempt but it failed when client > successfully download 936 bytes of repeated "Hello, World!"s (72 attempts). > > >Trivial aproach is to prepare full output chain, and then send it > >using single ngx_http_output_filter() call. > > The full output chain will be usually a long video which is not a file but > will be generated in memory on the fly. I have to send each chunk as soon as > it's ready because the stream generation is time consuming and client COULD > NOT wait for all to be done. Suppose it's a 1 hour video which dynamically > has been generated and I would like to send each minute as soon as it's > ready without waiting for all 1 hour transcoding. 
I'm aware about nginx's > mp4 module but it does not support time consuming dynamically generated > video on memory. > > WHAT WILL BE THE CORRECT WAY TO DO THIS IN NGINX? Quoting Winnie-the-Pooh, "You needn't shout so loud". Doing time-consuming transcoding in nginx worker isn't correct in any case, as it will block all connections in this worker process. So you have to do transcoding in some external process, and talk to this process to get transcoded data. This is basically what upstream module do (as used by proxy, fastcgi, etc.), and it can be used as an example of "how to do this in nginx". -- Maxim Dounin http://nginx.org/ From yasser.zamani at live.com Sat Mar 1 14:57:38 2014 From: yasser.zamani at live.com (Yasser Zamani) Date: Sat, 1 Mar 2014 18:27:38 +0330 Subject: custom handler module - dynamic response with unknown content length In-Reply-To: <20140301113142.GB34696@mdounin.ru> References: <20140301001124.GV34696@mdounin.ru> <20140301113142.GB34696@mdounin.ru> Message-ID: On Sat 01 Mar 2014 03:01:42 PM IRST, Maxim Dounin wrote: > Hello! > > The ngx_http_output_filter() function can be called more than > once, but usually it doesn't make sense - instead, one should > install r->write_event_handler and do subsequent calls once it's > possible to write additional data to socket buffer. Working with > event's isn't something trivial though. Thanks a lot for write_event_handler. > Quoting Winnie-the-Pooh, "You needn't shout so loud". > > Doing time-consuming transcoding in nginx worker isn't correct in > any case, as it will block all connections in this worker process. > So you have to do transcoding in some external process, and talk > to this process to get transcoded data. This is basically what > upstream module do (as used by proxy, fastcgi, etc.), and it > can be used as an example of "how to do this in nginx". The transcoding process already is doing by an external process, ffmpeg, into an incomplete file. I just need to read from first of this incomplete file one by one chunks and send them to client until seeing a real EOF. I have following plan for nginx configuration: location \*.mp4 { mymodule; } So, did you mean if two clients get x.mp4 and y.mp4 in same time then one of them is blocked until another one get complete file?! I don't think so while web servers usually make new threads. I saw './nginx-1.4.5/src/http/ngx_http_upstream.c' but was so complex for me to understand. However, I saw FastCGI is simple for me to understand. So, do you advise me to `regularly read ffmpeg output file` in a FastCGI script and then fasctcgi_pass the nginx to that? Sorry for my questions...I think these are last ones ;) Thank you so much! From yasser.zamani at live.com Sat Mar 1 17:26:40 2014 From: yasser.zamani at live.com (Yasser Zamani) Date: Sat, 1 Mar 2014 20:56:40 +0330 Subject: custom handler module - dynamic response with unknown content length In-Reply-To: References: <20140301001124.GV34696@mdounin.ru> <20140301113142.GB34696@mdounin.ru> Message-ID: On Sat 01 Mar 2014 06:27:38 PM IRST, Yasser Zamani wrote: > > > The transcoding process already is doing by an external process, > ffmpeg, into an incomplete file. I just need to read from first of > this incomplete file one by one chunks and send them to client until > seeing a real EOF. I have following plan for nginx configuration: > location \*.mp4 { mymodule; } > > So, did you mean if two clients get x.mp4 and y.mp4 in same time then > one of them is blocked until another one get complete file?! 
I don't > think so while web servers usually make new threads. > > I saw './nginx-1.4.5/src/http/ngx_http_upstream.c' but was so complex > for me to understand. > > However, I saw FastCGI is simple for me to understand. So, do you > advise me to `regularly read ffmpeg output file` in a FastCGI script > and then fasctcgi_pass the nginx to that? Thank you very much Maxim; Good news that as you advised, I finally have done it in a nice way via FastCGI: 1. I wrote my code in a FastCGI structure with a lot of thanks to [4]. [4] http://chriswu.me/blog/writing-hello-world-in-fcgi-with-c-plus-plus/ 2. I compiled and fcgi-spawn my executable on 127.0.0.1:8000 (see [4]) 3. I configured nginx to proxy requests to 127.0.0.1:8000 (see [4]) 4. I started my friend, nginx, and pointed the browser to localhost:8080. RESULTS: 1. Multiple sametime clients download same file in a very good balanced speed. 2. There is no error in nginx error log. 3. OK, the best one result...we escaped from write_event_handler and NGX_AGAIN(=-2) :) THE FASTCGI CODE: (for future researchers ;) ) #include #include "fcgio.h" using namespace std; int main(void) { // Backup the stdio streambufs streambuf * cin_streambuf = cin.rdbuf(); streambuf * cout_streambuf = cout.rdbuf(); streambuf * cerr_streambuf = cerr.rdbuf(); FCGX_Request request; FCGX_Init(); FCGX_InitRequest(&request, 0, 0); while (FCGX_Accept_r(&request) == 0) { fcgi_streambuf cin_fcgi_streambuf(request.in); fcgi_streambuf cout_fcgi_streambuf(request.out); fcgi_streambuf cerr_fcgi_streambuf(request.err); cin.rdbuf(&cin_fcgi_streambuf); cout.rdbuf(&cout_fcgi_streambuf); cerr.rdbuf(&cerr_fcgi_streambuf); cout << "Content-type: application/octet-stream\r\n"; int i; for(i=0;i<1000000;i++) cout << "\r\n" << "\n" << " \n" << " Hello, World!\n" << " \n" << " \n" << "

<h1>Hello, World!</h1>
\n" << " \n" << "\n"; // Note: the fcgi_streambuf destructor will auto flush } // restore stdio streambufs cin.rdbuf(cin_streambuf); cout.rdbuf(cout_streambuf); cerr.rdbuf(cerr_streambuf); return 0; } From nginx-forum at nginx.us Sun Mar 2 10:32:43 2014 From: nginx-forum at nginx.us (pumiz) Date: Sun, 02 Mar 2014 05:32:43 -0500 Subject: ASP.NET pages with nginx In-Reply-To: References: Message-ID: <6af75127b0ee997bb7f80b6b50c5a28e.NginxMailingListEnglish@forum.nginx.org> Hi All, I am also stuck trying to load aspx pages with raspberry pi and nginx. I start the mono server with the below line: sudo fastcgi-mono-server4 /applications=localhost:/:/home/pi/var/www/ /soket=tcp:127.0.0.1:9000 and nginx with the following: sudo /etc/init.d/nginx start and the output I get is ------------------------------------------------------- No Application Found Unable to find a matching application for request: Host 10.75.2.5 Port 80 Request Path /Default.aspx Physical Path /home/pi/var/www/Default.aspx ------------------------------------------------------- Please find below all my files. Could you please help me to find out what i am missing to get this simple page up and running? _____________________________________________________________________________________________ my /etc/nginx/sites-available/default file is as follows: server { listen 80; server_name localhost; location /{ root /home/pi/var/www/; index index.html index.htm default.aspx Default.aspx; include /etc/nginx/fastcgi_params; fastcgi_index Default.aspx; fastcgi_pass 127.0.0.1:9000; } } _____________________________________________________________________________________________ my /etc/nginx/fastcgi_params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $https; fastcgi_param PATH_INFO ""; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; _____________________________________________________________________________________________ my /home/pi/var/www/Default.aspx file:

[Default.aspx markup stripped by the list archive; the visible page content is just "Hello World!"]
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247323,248026#msg-248026 From mdounin at mdounin.ru Sun Mar 2 13:10:08 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 2 Mar 2014 17:10:08 +0400 Subject: ASP.NET pages with nginx In-Reply-To: <6af75127b0ee997bb7f80b6b50c5a28e.NginxMailingListEnglish@forum.nginx.org> References: <6af75127b0ee997bb7f80b6b50c5a28e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140302131008.GD34696@mdounin.ru> Hello! On Sun, Mar 02, 2014 at 05:32:43AM -0500, pumiz wrote: [...] > and the output I get is > > ------------------------------------------------------- > No Application Found > > Unable to find a matching application for request: > > Host 10.75.2.5 > Port 80 > Request Path /Default.aspx > Physical Path /home/pi/var/www/Default.aspx > ------------------------------------------------------- > > > Please find below all my files. > Could you please help me to find out what i am missing to get this simple > page up and running? > _____________________________________________________________________________________________ > > my /etc/nginx/sites-available/default file is as follows: > > server { > listen 80; > server_name localhost; > > location /{ > root /home/pi/var/www/; > index index.html index.htm default.aspx Default.aspx; > > include /etc/nginx/fastcgi_params; > fastcgi_index Default.aspx; > fastcgi_pass 127.0.0.1:9000; > } > } > > _____________________________________________________________________________________________ > my /etc/nginx/fastcgi_params: > > fastcgi_param QUERY_STRING $query_string; > fastcgi_param REQUEST_METHOD $request_method; > fastcgi_param CONTENT_TYPE $content_type; > fastcgi_param CONTENT_LENGTH $content_length; > > fastcgi_param SCRIPT_FILENAME $request_filename; [...] > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; In your fastcgi_params the SCRIPT_FILENAME param is set multiple times, and at least one of the values ($request_filename) is wrong for the configuration used. I would recommend you to revert all the modifications to fastcgi_params, and use root /home/pi/var/www/; location / { fastcgi_pass 127.0.0.1:9000; fastcgi_index Default.aspx; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } in the configuration instead. Alternatively, use "fastcgi.conf" as available for simple configurations, it already has SCRIPT_FILENAME set: root /home/pi/var/www/; location / { fastcgi_pass 127.0.0.1:9000; fastcgi_index Default.aspx; include fastcgi.conf; } -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Sun Mar 2 22:40:09 2014 From: nginx-forum at nginx.us (badtzhou) Date: Sun, 02 Mar 2014 17:40:09 -0500 Subject: 100% CPU nginx master process Message-ID: <1cec046875eb963b8c00fc5676537422.NginxMailingListEnglish@forum.nginx.org> We are have issue that nginx master process is at 100% CPU and it stop responding to any request. The CPU utilization for all the workers are low. I am also seeing multiple nginx master process running at the same time when server stop responding, all of them are at 100% CPU. For my understand, there should only be one master process. What will cause multiple master process running? The problem is very intermittent. It will go away by itself. Any kind of help will be appreciated. 
Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248029,248029#msg-248029 From agentzh at gmail.com Sun Mar 2 23:24:36 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sun, 2 Mar 2014 15:24:36 -0800 Subject: 100% CPU nginx master process In-Reply-To: <1cec046875eb963b8c00fc5676537422.NginxMailingListEnglish@forum.nginx.org> References: <1cec046875eb963b8c00fc5676537422.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Sun, Mar 2, 2014 at 2:40 PM, badtzhou wrote: > We are have issue that nginx master process is at 100% CPU and it stop > responding to any request. The CPU utilization for all the workers are low. > If I were you, I would just take a C-land on-CPU flame graph for the process with high CPU usage by sampling for a few or a dozen seconds: https://github.com/agentzh/nginx-systemtap-toolkit#sample-bt The graph would tell us the complete story about all the hot spots during the sampling window. Regards, -agentzh From nginx-forum at nginx.us Mon Mar 3 07:01:24 2014 From: nginx-forum at nginx.us (badtzhou) Date: Mon, 03 Mar 2014 02:01:24 -0500 Subject: 100% CPU nginx master process In-Reply-To: References: Message-ID: We seeing lot of empty requests from nginx process trace. Is this normal or some kind of Dos attack? 0.000048 recvfrom(254, "", 1024, 0, NULL, NULL) = 0 0.000033 close(254) = 0 0.000054 recvfrom(600, "", 1024, 0, NULL, NULL) = 0 0.000029 close(600) = 0 0.000040 recvfrom(147, "", 1024, 0, NULL, NULL) = 0 0.000029 close(147) = 0 0.000066 recvfrom(644, "", 1024, 0, NULL, NULL) = 0 0.000029 close(644) = 0 0.000642 recvfrom(519, "", 1024, 0, NULL, NULL) = 0 0.000040 close(519) = 0 0.000047 recvfrom(361, "", 1024, 0, NULL, NULL) = 0 0.000031 close(361) = 0 0.000047 recvfrom(248, "", 1024, 0, NULL, NULL) = 0 0.000036 close(248) = 0 0.000044 recvfrom(691, "", 1024, 0, NULL, NULL) = 0 0.000029 close(691) = 0 0.000046 recvfrom(561, "", 1024, 0, NULL, NULL) = 0 0.000031 close(561) = 0 0.000046 recvfrom(270, "", 1024, 0, NULL, NULL) = 0 0.000031 close(270) = 0 0.000049 recvfrom(723, "", 1024, 0, NULL, NULL) = 0 0.000030 close(723) = 0 0.000046 recvfrom(362, "", 1024, 0, NULL, NULL) = 0 0.000032 close(362) = 0 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248029,248031#msg-248031 From nginx-forum at nginx.us Mon Mar 3 11:58:45 2014 From: nginx-forum at nginx.us (snarapureddy) Date: Mon, 03 Mar 2014 06:58:45 -0500 Subject: Best possible configuration for file upload In-Reply-To: References: Message-ID: <776324d99af8d3d616a8c7e1264c7bed.NginxMailingListEnglish@forum.nginx.org> Thankyou Yichun for the response. We are using lua-resty-upload and reading chunk by chunk from network and writing it to local disk. I'll post more detail in openresty fourm as you suggested. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247951,248033#msg-248033 From mdounin at mdounin.ru Mon Mar 3 12:40:47 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 3 Mar 2014 16:40:47 +0400 Subject: 100% CPU nginx master process In-Reply-To: <1cec046875eb963b8c00fc5676537422.NginxMailingListEnglish@forum.nginx.org> References: <1cec046875eb963b8c00fc5676537422.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140303124047.GF34696@mdounin.ru> Hello! On Sun, Mar 02, 2014 at 05:40:09PM -0500, badtzhou wrote: > We are have issue that nginx master process is at 100% CPU and it stop > responding to any request. The CPU utilization for all the workers are low. 
> > I am also seeing multiple nginx master process running at the same time when > server stop responding, all of them are at 100% CPU. For my understand, > there should only be one master process. What will cause multiple master > process running? Multiple master processes is not something expected to happen during normal operation. Multiple master processes may be present during binary upgrade, but this isn't something expected to happen automatically. It might also be new worker processes just spawn - before they were able to change name and credentials, I've seen something like this on systems with a LDAP-based user database used when LDAP server was unresponsive. Note well that master processes doesn't handle client connections. Master process only manages worker processes and a configuration. If you see nginx not responding - there should be another reason for this. Looking into nginx error log might be a good idea, as well as examining your system's logs. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Mar 3 15:43:01 2014 From: nginx-forum at nginx.us (zuckbin) Date: Mon, 03 Mar 2014 10:43:01 -0500 Subject: redirect 301 alias domain Message-ID: <64682dca1963bffb56cf2ffebff4d7ea.NginxMailingListEnglish@forum.nginx.org> hi, i want to do a 301 redirect on all my alias domain for a domain but i stuck in an infinite loop! server{ listen 80; server_name www.aaa.com www.aa.com ... return 301 http://www.bbb.com$request_uri; } server{ listen 80; server_name www.bbb.com ... .... } how to solve this ? thanks, bye Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248038,248038#msg-248038 From nginx-forum at nginx.us Mon Mar 3 16:28:51 2014 From: nginx-forum at nginx.us (zuckbin) Date: Mon, 03 Mar 2014 11:28:51 -0500 Subject: vbulletin vbseo rewrite rule Message-ID: <53a8f7e1730290e98687dbcef1b0ed57.NginxMailingListEnglish@forum.nginx.org> hi, i try to convert theses rules from apache for vbseo vbulletin, but got 404 errors How to convert theses rules for nginx ? RewriteEngine On RewriteBase /forum/ RewriteRule ^[^/]+/.+-([0-9]+)\.html$ vbseo.php?vbseourl=showthread.php&t=$1 [L] RewriteRule ^((urllist|sitemap_).*\.(xml|txt)(\.gz)?)$ vbseo_sitemap/vbseo_getsitemap.php?sitemap=$1 [L] RewriteCond %{REQUEST_URI} !(admincp/|modcp/|cron|vbseo_sitemap|api\.php) RewriteRule ^((archive/)?(.*\.php(/.*)?))$ vbseo.php [L,QSA] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !/(admincp|modcp|clientscript|cpstyles|images)/ RewriteRule ^(.+)$ vbseo.php [L,QSA] thanks, bye Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248039,248039#msg-248039 From pierre.hureaux at crous-rouen.fr Mon Mar 3 16:31:50 2014 From: pierre.hureaux at crous-rouen.fr (Pierre Hureaux) Date: Mon, 3 Mar 2014 17:31:50 +0100 Subject: vbulletin vbseo rewrite rule In-Reply-To: <53a8f7e1730290e98687dbcef1b0ed57.NginxMailingListEnglish@forum.nginx.org> References: <53a8f7e1730290e98687dbcef1b0ed57.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1D6DBB81-FABD-40D0-B1FF-F094B492E0BD@crous-rouen.fr> Hi, I use this : rewrite ^/((urllist|sitemap).*\.(xml|txt)(\.gz)?)$ /vbseo_sitemap/vbseo_getsitemap.php?sitemap=$1 last; error_page 404 /404.php; if ($request_filename ~ ?\.php$? ) { rewrite ^(.*)$ /vbseo.php last; } if (!-e $request_filename) { rewrite ^/(.*)$ /vbseo.php last; } # We don't want to allow the browsers to see .hidden linux/unix files location ~ /\. 
{ deny all; access_log off; log_not_found off; } location ~ /(images/|clientscript/).* { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; try_files $uri $uri/ /vbseo.php?$args; } p. Le 3 mars 2014 ? 17:28, zuckbin a ?crit : > hi, > > i try to convert theses rules from apache for vbseo vbulletin, but got 404 > errors > > How to convert theses rules for nginx ? > > RewriteEngine On > > RewriteBase /forum/ > > RewriteRule ^[^/]+/.+-([0-9]+)\.html$ vbseo.php?vbseourl=showthread.php&t=$1 > [L] > > RewriteRule ^((urllist|sitemap_).*\.(xml|txt)(\.gz)?)$ > vbseo_sitemap/vbseo_getsitemap.php?sitemap=$1 [L] > > RewriteCond %{REQUEST_URI} !(admincp/|modcp/|cron|vbseo_sitemap|api\.php) > RewriteRule ^((archive/)?(.*\.php(/.*)?))$ vbseo.php [L,QSA] > > RewriteCond %{REQUEST_FILENAME} !-f > RewriteCond %{REQUEST_FILENAME} !-d > RewriteCond %{REQUEST_FILENAME} > !/(admincp|modcp|clientscript|cpstyles|images)/ > RewriteRule ^(.+)$ vbseo.php [L,QSA] > > > thanks, > bye > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248039,248039#msg-248039 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Mon Mar 3 16:37:23 2014 From: francis at daoine.org (Francis Daly) Date: Mon, 3 Mar 2014 16:37:23 +0000 Subject: redirect 301 alias domain In-Reply-To: <64682dca1963bffb56cf2ffebff4d7ea.NginxMailingListEnglish@forum.nginx.org> References: <64682dca1963bffb56cf2ffebff4d7ea.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140303163723.GO29880@craic.sysops.org> On Mon, Mar 03, 2014 at 10:43:01AM -0500, zuckbin wrote: Hi there, > i want to do a 301 redirect on all my alias domain for a domain > > but i stuck in an infinite loop! What's the problem? What request do you make; what response do you get; what response do you want? f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Mon Mar 3 16:46:39 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 3 Mar 2014 20:46:39 +0400 Subject: redirect 301 alias domain In-Reply-To: <64682dca1963bffb56cf2ffebff4d7ea.NginxMailingListEnglish@forum.nginx.org> References: <64682dca1963bffb56cf2ffebff4d7ea.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140303164639.GN34696@mdounin.ru> Hello! On Mon, Mar 03, 2014 at 10:43:01AM -0500, zuckbin wrote: > hi, > > i want to do a 301 redirect on all my alias domain for a domain > > but i stuck in an infinite loop! > > server{ > listen 80; > server_name www.aaa.com www.aa.com ... > return 301 http://www.bbb.com$request_uri; > > } > > server{ > listen 80; > server_name www.bbb.com > ... > .... > > } > > how to solve this ? The configuration above should not result in an infinite loop. Things to check in no particular order: - Make sure configuration you are working with is actually one loaded by nginx. That is, make sure you've done a configuration reload and it was successfull. - Make sure you aren't testing your browser's cache instead of current configuration. That is, make sure to clear cache before each test. Or, better yet, use telnet or curl for tests. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Mar 3 17:45:18 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 3 Mar 2014 21:45:18 +0400 Subject: [alert] could not add new SSL session to the session cache while SSL handshaking In-Reply-To: References: Message-ID: <20140303174518.GO34696@mdounin.ru> Hello! 
On Mon, Mar 03, 2014 at 05:11:22PM +0000, Reid, Mike wrote: > I am experiencing the following in my error logs after a recent > upgrade to NGiNX 1.5.10 (from 1.5.8), and also applying SSL / > TLS updates as described on istlsfastyet.com > > [alert] 3319#0: *301399 could not add new SSL session to the > session cache while SSL handshaking > > Any ideas on why these alerts would now be showing up? I am not > sure how to address, or whether there should be cause for > concern? > > NGiNX 1.5.10 w/ SPDY 3.1 # Previously 1.5.8, now including > --with-http_spdy_module and using openssl-1.0.1f (previously > openssl-1.0.1e without http_spdy_module) > ssl_session_cache shared:SSL:10m; # No change > ssl_buffer_size 1400; # New > ssl_session_timeout 24h; # Previously 10m > ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; # No change You've changed SSL session timeout from 10 minutes to 24 hours, and this basically means that sessions will use 144 times more space in session cache. On the other hand, cache size wasn't changed - so you've run out of space in the cache configured. If there is no space in a cache nginx will try to drop one non-expired session from the cache, but it may not be enough to store a new session (as different sessions may occupy different space), resulting in alerts you've quoted. Note well that configuring ssl_buffer_size to 1400 isn't a good idea unless you are doing so for your own performance testing. See previous discussions for details. Overral, this doesn't looks relevant to nginx-devel at . Please use nginx@ for futher questions. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Mar 3 20:52:56 2014 From: nginx-forum at nginx.us (justink101) Date: Mon, 03 Mar 2014 15:52:56 -0500 Subject: HttpLimitReqModule should return 429 not 503 Message-ID: <3f49bdf7084eec6aabfc8dc52de38f66.NginxMailingListEnglish@forum.nginx.org> Can we get a flag in HttpLimitReqModule to return a http status code 429 Too Many Requests instead of 503 Service Temporarily Unavailable since this is a more accurate status code? I know that 429 is not officially defined, but it is a defacto-standard. Perhaps something like: limit_req_exceeded 429; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248049,248049#msg-248049 From vbart at nginx.com Mon Mar 3 20:54:59 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 04 Mar 2014 00:54:59 +0400 Subject: HttpLimitReqModule should return 429 not 503 In-Reply-To: <3f49bdf7084eec6aabfc8dc52de38f66.NginxMailingListEnglish@forum.nginx.org> References: <3f49bdf7084eec6aabfc8dc52de38f66.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1710911.WZFWXoIduu@vbart-laptop> On Monday 03 March 2014 15:52:56 justink101 wrote: > Can we get a flag in HttpLimitReqModule to return a http status code 429 Too > Many Requests instead of 503 Service Temporarily Unavailable since this is a > more accurate status code? I know that 429 is not officially defined, but it > is a defacto-standard. > > Perhaps something like: > > limit_req_exceeded 429; > http://nginx.org/r/limit_req_status wbr, Valentin V. 
Bartenev From nginx-forum at nginx.us Mon Mar 3 21:11:52 2014 From: nginx-forum at nginx.us (talkingnews) Date: Mon, 03 Mar 2014 16:11:52 -0500 Subject: Confusion over apparently conflicting advice in guide/wiki/examples Message-ID: <0bef30fd963bc947d84e402ba3ee025e.NginxMailingListEnglish@forum.nginx.org> I'd call myself very much a beginner with NGiNX, but I've been looking further through the documentation, particularly the http://wiki.nginx.org/Pitfalls page, and now I'm left with confusion! This page http://wiki.nginx.org/PHPFcgiExample says "This guide run fine on php.ini cgi.fix_pathinfo = 1 (the default). Some guide insist to change it to cgi.fix_pathinfo = 0 but doing that make PHP_SELF variable broken (not equal to DOCUMENT_URI).". But http://wiki.nginx.org/Pitfalls says: Set cgi.fix_pathinfo=0 in php.ini. This causes the PHP interpreter to only try the literal path given and to stop processing if the file is not found. And the provided nginx/sites-available/default says # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini Which is correct? My second question: As I understand it, you should always make parameter changes only where they are needed, and in an overriding way - ie: one never touches php.ini itself. So, I am looking at this entry: http://wiki.nginx.org/Pyrocms In the server stanza there is: server { fastcgi_buffers 8 16k; fastcgi_buffer_size 32k; fastcgi_read_timeout 180; .... and then separately it says to add to fastcgi_params the following: fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors off; Some of those numbers are HUGE - most of the buffer defaults are normally 4k|8k. And 3 minutes between connections? Is this over-the-top? And the three items in server are conflicted by different values in fastcgi params. And isn't that going to "pollute" the whole fpm server? I thought it would be better to have it in the fpm pool, so first I had it like this: php_value[upload_max_filesize] = 128M php_value[max_file_uploads] = 60 php_value[default_socket_timeout] = 180 php_value[date.timezone] = 'Europe/London' php_value[session.gc_maxlifetime] = 28800 The I realised I only needed these high values for one area of my server, so again I changed it: location ~ /upload/ { location ~ \.(php)$ { try_files $uri =404; set $php_value "post_max_size = 128M"; set $php_value "$php_value \n upload_max_filesize = 128M"; set $php_value "$php_value \n session.gc_maxlifetime = 28800"; set $php_value "$php_value \n max_file_uploads = 60"; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PHP_VALUE $php_value; include fastcgi_params; } } And it works fine. No core ini files are touched, only the area which need to change is changed. Also, the example config has: location ~ \.php { fastcgi_pass unix:/tmp/domain.sock; fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; But the Pitfalls guide suggests this is dangerous. So, my question would be: Is this example file wrong/outdated/dangerous? Or am I completely misunderstanding something? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248051,248051#msg-248051 From nginx-forum at nginx.us Mon Mar 3 22:26:55 2014 From: nginx-forum at nginx.us (tt5430) Date: Mon, 03 Mar 2014 17:26:55 -0500 Subject: nginx as front end cache for WMS Message-ID: <357931b35c12f3a966532e473f814989.NginxMailingListEnglish@forum.nginx.org> I tried to set up nginx as a front end cache for my WMS server. The typical WMS request is: http://localhost:8080/geoserver/somestring/wms?version=1.1.1&service=WMS&request=WMS&.... Below is my nginx configuration: /etc/nginx/nginx.conf: user vriuser; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; proxy_buffering on; proxy_cache_valid any 10m; proxy_cache_path /var/geoserver/cache levels=1:2 keys_zone=one:10m inactive=7d max_size=5000m; proxy_temp_path /var/geoserver/cache/tmp; gzip on; gzip_disable "msie6"; # gzip_vary on; gzip on; gzip_disable "msie6"; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } /etc/nginx/sites-enabled/geoserver.conf server { listen 80; #server_name localhost; root /home/geoserver/webapps/geoserver; location /geoserver { proxy_pass http://localhost:8080; proxy_cache one; proxy_cache_key backend$request_uri; proxy_cache_valid 200 5h; proxy_cache_use_stale error timeout invalid_header; } } Both nginx and my WMS servers were running without errror. However, when I inspect the /var/geoserver/cache directory, I did not see anything file in there. Does it mean my configuration is not correct? How do I know if nginx caches my WMS responses? Any help on this is greatly appreciated. Many thanks in advance. Regards, Tam Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248053,248053#msg-248053 From nginx-forum at nginx.us Tue Mar 4 05:07:46 2014 From: nginx-forum at nginx.us (loki) Date: Tue, 04 Mar 2014 00:07:46 -0500 Subject: High CPU Usage and NGINX Hangup Message-ID: <0de366f504487eb0e2849ac9c6ffe2d7.NginxMailingListEnglish@forum.nginx.org> I am seeing my NGINX server randomly hangup. Multiple worker processes utilizing 100% CPU. I see the hangups in strace, but not quite sure what would be causing the issue. 0.000081 futex(0x7f563c893068, FUTEX_WAIT, 0, NULL) = ? 
ERESTARTSYS (To be restarted) 92.011974 --- SIGHUP (Hangup) @ 0 (0) --- 0.000069 rt_sigreturn(0x934b9b) = -1 EINTR (Interrupted system call) 0.000056 futex(0x7f563c893068, FUTEX_WAIT, 0, NULL) = 0 154.792082 futex(0x7f563c893068, FUTEX_WAKE, 1) = 0 0.000036 recvmsg(32, 0x7fff134e28b0, 0) = -1 EAGAIN (Resource temporarily unavailable) 0.000842 epoll_wait(33, {{EPOLLIN, {u32=404296080, u64=140007748211088}}}, 512, 54019) = 1 49.323010 recvmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\2\0\0\0\0\0\0\0\261m\0\0\0\0\0\0\5\0\0\0\0\0\0\0\377\377\377\377\0\0\0\0", 32}], msg_controllen=0, msg_flags=0}, 0) = 32 0.000129 close(23) = 0 0.002116 recvmsg(32, 0x7fff134e28b0, 0) = -1 EAGAIN (Resource temporarily unavailable) 0.000837 epoll_wait(33, {{EPOLLIN, {u32=404296080, u64=140007748211088}}}, 512, 4695) = 1 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248055,248055#msg-248055 From alex at zeitgeist.se Tue Mar 4 08:22:48 2014 From: alex at zeitgeist.se (Alex) Date: Tue, 04 Mar 2014 09:22:48 +0100 Subject: [alert] could not add new SSL session to the session cache while SSL handshaking In-Reply-To: <20140303174518.GO34696@mdounin.ru> References: <20140303174518.GO34696@mdounin.ru> Message-ID: Hi! On 2014-03-03 18:45, Maxim Dounin wrote: > Note well that configuring ssl_buffer_size to 1400 isn't a good > idea unless you are doing so for your own performance testing. > See previous discussions for details. Maxim, I remember the discussion that was started by Ilya. From what I understood is that it depends on your specific needs. If you have a website with standard markup and without serving large files, it seems reasonable to choose a smaller ssl buffer size to avoid TLS record fragmentation (and thus optimize time to first byte). On the other hand, if you deliver large streams, it would seem be counter-productive to limit the buffer size since you'd occur more bandwidth and processing overhead. Or did I misunderstand and you'd still say that a ssl_buffer_size of 1400 is generally a bad idea? Best Alex From reallfqq-nginx at yahoo.fr Tue Mar 4 08:59:22 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 4 Mar 2014 09:59:22 +0100 Subject: Confusion over apparently conflicting advice in guide/wiki/examples In-Reply-To: <0bef30fd963bc947d84e402ba3ee025e.NginxMailingListEnglish@forum.nginx.org> References: <0bef30fd963bc947d84e402ba3ee025e.NginxMailingListEnglish@forum.nginx.org> Message-ID: ?? On Mon, Mar 3, 2014 at 10:11 PM, talkingnews wrote: > This page http://wiki.nginx.org/PHPFcgiExample says > "This guide run fine on php.ini cgi.fix_pathinfo = 1 (the default). Some > guide insist to change it to cgi.fix_pathinfo = 0 but doing that make > PHP_SELF variable broken (not equal to DOCUMENT_URI).". > To know what cgi.fix_path_info does ?depending on its value, rely on core php.ini documentation . The default value of '1' fixes an erroneous behavior of earlier PHP versions not using PHP_INFO information properly. THe '0' value seems to exist for backward-compatibility as it provides a broken environment. Thus, scripts relying on such a value are highly suspicious to my eyes. Where does the 'sites-available' directory of nginx came from? I do not have such one (using Debian official stable package, currently 1.4.5). Besides, there is no such DOCUMENT_URI server variable in PHP (at least as of 4.1.0 as the list of PHP server variablesstates and I wonder if it had ever existed before). 
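For what it's worth, the pitfall the wiki warns about (a request such as /uploads/evil.jpg/foo.php being executed as PHP) is commonly neutralised on the nginx side regardless of the cgi.fix_pathinfo setting, by checking that the requested script actually exists before handing it to PHP-FPM. A minimal sketch along the lines of the location blocks already posted in this thread (the socket path is only a placeholder):

    location ~ \.php$ {
        # refuse requests whose script does not exist on disk
        try_files $uri =404;

        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }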
Another note: what the wiki says is not exact, refer to PHP documentation to know the real impact of PHP configuration directives (sounds obvious...). The nginx wiki has not the reputation of being a trustable source of information. Prefer referring to the official documentation, either nginx or PHP one. My second question: As I understand it, you should always make parameter > changes only where they are needed, and in an overriding way - ie: one > never > touches php.ini itself. > ?Well, changing php.ini file modifies the behavior for all scripts using it. If you have multiple environments needing different specific settings, then it is indeed safer to configure them on-the-fly through FastCGI parameterization of nginx. Thus, basing all your different configurations on top of the default one is a rather straightward way of doing it. Moreover, when updating PHP packages between major versions, its default configuration files? ?usually also change. When you will wish to ?test your production setup for an upgrade, you will be happier if you are as close to the original files as possible. But the Pitfalls guide suggests this is dangerous. > ?What exactly are you referring to in the pitfalls page saying that you setup is dangerous?? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Mar 4 10:11:53 2014 From: nginx-forum at nginx.us (zuckbin) Date: Tue, 04 Mar 2014 05:11:53 -0500 Subject: vbulletin vbseo rewrite rule In-Reply-To: <1D6DBB81-FABD-40D0-B1FF-F094B492E0BD@crous-rouen.fr> References: <1D6DBB81-FABD-40D0-B1FF-F094B492E0BD@crous-rouen.fr> Message-ID: <0fb2665f14bf4923c294acf6817cd504.NginxMailingListEnglish@forum.nginx.org> i try this and it doesn't work for me. maybe because i got some custom urls in vbseo. and why all my urls are with httpS ?! boring... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248039,248062#msg-248062 From citrin at citrin.ru Tue Mar 4 10:22:27 2014 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Tue, 04 Mar 2014 14:22:27 +0400 Subject: nginx as front end cache for WMS In-Reply-To: <357931b35c12f3a966532e473f814989.NginxMailingListEnglish@forum.nginx.org> References: <357931b35c12f3a966532e473f814989.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5315A963.1050204@citrin.ru> On 03/04/14 02:26, tt5430 wrote: > How do I know if > nginx caches my WMS responses? It will be useful to log $upstream_cache_status and $upstream_response_time in access_log. Also check, that geoserver return code is 200 (and not redirect to some other URI). From nginx-forum at nginx.us Tue Mar 4 10:23:22 2014 From: nginx-forum at nginx.us (zuckbin) Date: Tue, 04 Mar 2014 05:23:22 -0500 Subject: server block conflict Message-ID: hi, got multiples server block for different domains, but it seem for fastcgi_param PHP_VALUE; they are in conflict all together here an exemple: server{ server_name aaa; ... location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } } server{ server_name bbb; ... 
location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; include disable_functions; } } here content of disable_functions fastcgi_param PHP_VALUE "disable_functions=allow_url_include, allow_url_fopen, eval, base64_decode, php_uname, getmyuid, getmypid, passthru, leak, listen, diskfreespace, tmpfile, link, ignore_user_abord, shell_exec, dl, set_time_limit, exec, system, highlight_file, source, show_source, fpaththru, virtual, proc_open, proc_close, proc_get_status, proc_nice, proc_terminate, phpinfo, pcntl_alarm,pcntl_fork,pcntl_waitpid,pcntl_wait,pcntl_wifexited,pcntl_wifstopped,pcntl_wifsignaled,pcntl_wexitstatus,pcntl_wtermsig,pcntl_wstopsig,pcntl_signal,pcntl_signal_dispatch,pcntl_get_last_error,pcntl_strerror,pcntl_sigprocmask,pcntl_sigwaitinfo,pcntl_sigtimedwait,pcntl_exec,pcntl_getpriority,pcntl_setpriority, posix_ctermid, posix_getcwd, posix_getegid, posix_geteuid, posix_getgid, posix_getgrgid, posix_getgrnam, posix_getgroups, posix_getlogin, posix_getpgid, posix_getpgrp, posix_getpid, posix_getppid, posix_getpwnam, posix_getpwuid, posix_getrlimit, posix_getsid, posix_getuid, posix_isatty, posix_kill, posix_mkfifo, posix_setegid, posix_seteuid, posix_setgid, posix_setpgid, posix_setsid, posix_setuid, posix_times, posix_ttyname, posix_uname"; the problem is on server aaa, it use disable functions; but i don't want i had clear my cache, restart nginx. don't know why server aaa user properties of server bbb thanks for your help. Bye Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248063,248063#msg-248063 From nginx-forum at nginx.us Tue Mar 4 10:25:43 2014 From: nginx-forum at nginx.us (justink101) Date: Tue, 04 Mar 2014 05:25:43 -0500 Subject: HttpLimitReqModule should return 429 not 503 In-Reply-To: <1710911.WZFWXoIduu@vbart-laptop> References: <1710911.WZFWXoIduu@vbart-laptop> Message-ID: <502c39ee41fe5b37031670a03e420cbf.NginxMailingListEnglish@forum.nginx.org> Thanks, did not see this directive, exactly what is needed. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248049,248065#msg-248065 From mdounin at mdounin.ru Tue Mar 4 10:46:00 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Mar 2014 14:46:00 +0400 Subject: [alert] could not add new SSL session to the session cache while SSL handshaking In-Reply-To: References: <20140303174518.GO34696@mdounin.ru> Message-ID: <20140304104600.GP34696@mdounin.ru> Hello! On Tue, Mar 04, 2014 at 09:22:48AM +0100, Alex wrote: > Hi! > > On 2014-03-03 18:45, Maxim Dounin wrote: > > Note well that configuring ssl_buffer_size to 1400 isn't a good > > idea unless you are doing so for your own performance testing. > > See previous discussions for details. > > Maxim, I remember the discussion that was started by Ilya. From what I > understood is that it depends on your specific needs. If you have a > website with standard markup and without serving large files, it seems > reasonable to choose a smaller ssl buffer size to avoid TLS record > fragmentation (and thus optimize time to first byte). On the other hand, > if you deliver large streams, it would seem be counter-productive to > limit the buffer size since you'd occur more bandwidth and processing > overhead. > > Or did I misunderstand and you'd still say that a ssl_buffer_size of > 1400 is generally a bad idea? Bandwidth and processing overhead isn't something specific to serving large files, it's always here - even if you serve small resources. 
On the other hand, from TTFB point of view there is almost no difference between 1400 and 4096 - as long as resulting payload is under initial congestion window. That is, from time to first byte optimization point of view, I would recommend using ssl_buffer_size 4k (or, if your server follows IW10, 8k may be a better idea). Just for the record, previous discussion can be found here: http://mailman.nginx.org/pipermail/nginx/2013-December/041533.html -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Mar 4 11:10:31 2014 From: nginx-forum at nginx.us (zuckbin) Date: Tue, 04 Mar 2014 06:10:31 -0500 Subject: redirect 301 alias domain In-Reply-To: <64682dca1963bffb56cf2ffebff4d7ea.NginxMailingListEnglish@forum.nginx.org> References: <64682dca1963bffb56cf2ffebff4d7ea.NginxMailingListEnglish@forum.nginx.org> Message-ID: <966bd8981fc09ed8a77565e6787d5cb1.NginxMailingListEnglish@forum.nginx.org> i got this error The page isn't redirecting properly in firebug i can see this many times: GET www.aaa.com 301 Moved Permanently aaa.com it seem this is not redirect well Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248038,248068#msg-248068 From francis at daoine.org Tue Mar 4 12:15:41 2014 From: francis at daoine.org (Francis Daly) Date: Tue, 4 Mar 2014 12:15:41 +0000 Subject: redirect 301 alias domain In-Reply-To: <966bd8981fc09ed8a77565e6787d5cb1.NginxMailingListEnglish@forum.nginx.org> References: <64682dca1963bffb56cf2ffebff4d7ea.NginxMailingListEnglish@forum.nginx.org> <966bd8981fc09ed8a77565e6787d5cb1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140304121541.GQ29880@craic.sysops.org> On Tue, Mar 04, 2014 at 06:10:31AM -0500, zuckbin wrote: Hi there, > in firebug i can see this many times: > > GET www.aaa.com 301 Moved Permanently aaa.com > > it seem this is not redirect well This seems to say the when you ask for http://www.aaa.com/, you are redirected to http://aaa.com/. The config fragment you provided showed that you should be redirected to http://www.bbb.com/. I suspect that the config fragment you provided is not the relevant part of the configuration file that is actually being used by nginx. The output of "curl -i http://www.aaa.com/" will probably be useful to see if your nginx is being used at all. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Mar 4 13:27:39 2014 From: nginx-forum at nginx.us (zuckbin) Date: Tue, 04 Mar 2014 08:27:39 -0500 Subject: redirect 301 alias domain In-Reply-To: <20140304121541.GQ29880@craic.sysops.org> References: <20140304121541.GQ29880@craic.sysops.org> Message-ID: curl -i "http://www.aaa.com/" HTTP/1.1 301 Moved Permanently Server: nginx Date: Tue, 04 Mar 2014 13:25:24 GMT Content-Type: text/html Content-Length: 178 Connection: keep-alive Keep-Alive: timeout=5 Location: http://www.aaa.com/ 301 Moved Permanently

[response body: the standard nginx "301 Moved Permanently" HTML error page]
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248038,248070#msg-248070 From nginx-forum at nginx.us Tue Mar 4 13:30:50 2014 From: nginx-forum at nginx.us (zuckbin) Date: Tue, 04 Mar 2014 08:30:50 -0500 Subject: redirect 301 alias domain In-Reply-To: References: <20140304121541.GQ29880@craic.sysops.org> Message-ID: <8960b515624f4b4ab3c902de51774a07.NginxMailingListEnglish@forum.nginx.org> i forgot to say that i used pound on my server before to send traffic to nginx Maybe, there is a conflict with it Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248038,248071#msg-248071 From nginx-forum at nginx.us Tue Mar 4 14:33:56 2014 From: nginx-forum at nginx.us (greekduke) Date: Tue, 04 Mar 2014 09:33:56 -0500 Subject: Worker dies with a segfault error In-Reply-To: <20140224124507.GS33573@mdounin.ru> References: <20140224124507.GS33573@mdounin.ru> Message-ID: <5cd170eafb069b40732a4444870e8a15.NginxMailingListEnglish@forum.nginx.org> Hello Finally I managed to get a core dump and below is the output: [root at ape-01 nginx]# gdb /usr/local/nginx/sbin/nginx /var/log/nginx/core.31000 GNU gdb (GDB) Red Hat Enterprise Linux (7.2-60.el6_4.1) Copyright (C) 2010 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-redhat-linux-gnu". For bug reporting instructions, please see: ... Reading symbols from /usr/local/nginx/sbin/nginx...done. [New Thread 31000] Missing separate debuginfo for /usr/local/pgsql/lib/libpq.so.5 Try: yum --disablerepo='*' --enablerepo='*-debug*' install /usr/lib/debug/.build-id/50/df7babee4e7e51a305aed74590ad3d040c5ffb Missing separate debuginfo for Try: yum --disablerepo='*' --enablerepo='*-debug*' install /usr/lib/debug/.build-id/81/a81be2e44c93640adedb62adc93a47f4a09dd1 Reading symbols from /lib64/libpthread.so.0...(no debugging symbols found)...done. [Thread debugging using libthread_db enabled] Loaded symbols for /lib64/libpthread.so.0 Reading symbols from /lib64/libcrypt.so.1...(no debugging symbols found)...done. Loaded symbols for /lib64/libcrypt.so.1 Reading symbols from /usr/local/pgsql/lib/libpq.so.5...(no debugging symbols found)...done. Loaded symbols for /usr/local/pgsql/lib/libpq.so.5 Reading symbols from /lib64/libpcre.so.0...(no debugging symbols found)...done. Loaded symbols for /lib64/libpcre.so.0 Reading symbols from /usr/lib64/libcrypto.so.10...(no debugging symbols found)...done. Loaded symbols for /usr/lib64/libcrypto.so.10 Reading symbols from /lib64/libz.so.1...(no debugging symbols found)...done. Loaded symbols for /lib64/libz.so.1 Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done. Loaded symbols for /lib64/libc.so.6 Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/ld-linux-x86-64.so.2 Reading symbols from /lib64/libfreebl3.so...(no debugging symbols found)...done. Loaded symbols for /lib64/libfreebl3.so Reading symbols from /lib64/libdl.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/libdl.so.2 Reading symbols from /lib64/libnss_files.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/libnss_files.so.2 Core was generated by `nginx: worker process '. Program terminated with signal 11, Segmentation fault. 
#0 ngx_pnalloc (pool=0x0, size=950) at src/core/ngx_palloc.c:155 155 if (size <= pool->max) { Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.132.el6.x86_64 nss-softokn-freebl-3.14.3-9.el6.x86_64 openssl-1.0.1e-15.el6.x86_64 pcre-7.8-6.el6.x86_64 zlib-1.2.3-29.el6.x86_64 (gdb) ################################## nginx.conf ############################################### user nginx; worker_processes 1; error_log /var/log/nginx/error.log debug; pid /var/run/nginx.pid; events { use epoll; worker_connections 20480; } worker_rlimit_nofile 30000; worker_rlimit_core 500M; working_directory /var/log/nginx; http { include /usr/local/nginx/conf/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; log_format cdr '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for" "$msisdn" "$apn"'; log_format body $request_body; access_log /var/log/nginx/body.log body; access_log /var/log/nginx/access.log main; client_header_buffer_size 32k; large_client_header_buffers 4 32k; client_max_body_size 300m; sendfile on; tcp_nopush on; keepalive_timeout 60; tcp_nodelay on; #keepalive_timeout 65; client_body_in_file_only off; #client_body_in_single_buffer on; client_body_timeout 5s; #gzip on; upstream db { postgres_server 10.9.115.6:9999 dbname=nodeb user=nodeb password=banana; } server { listen 8000; location / { #flv; #mp4; #mp4_buffer_size 4M; #mp4_max_buffer_size 10M; resolver 127.0.0.1; eval_subrequest_in_memory off; set $test ""; set $apn ""; set $msisdn ""; set $ms ""; set $mmcurl 192.168.200.95; eval $result { postgres_pass db; postgres_query "select cast(count(url) as varchar),'0' from urls where url like '%$http_host%' union select calledstationid,callingstationid from mappings where framedipaddress='$http_x_forwarded_for'"; postgres_output text; } # Get APN and MSISND from the result of the query #if ($result ~ ([^,]*)){ if ($result ~ ([^,]+),([^,]+),([^,]+),([^,]+) ){ set $urlfound $1; set $apn $3; set $msisdn $4; } # Check is $msisdn is null if ($msisdn = ""){ return 403; } # Check APN and destination host if ($apn = test.wap){ set $test testwap; } if ($apn = oper-mms){ set $test testmms; } if ($apn = mms.domain.com){ set $test operMms; } if ($apn = wap.domain.com){ set $test operWap; } if ($http_host = $mmcurl){ set $test "${test}Mmsc"; } if ($http_host != $mmcurl){ set $test "${test}Browsing"; } if ($urlfound ){ set $test "${test}1"; } if ($test = operMmsBrowsing) { return 403; break; } if ($test = operWapMmsc) { return 403; break; } if ($test = testwapMmsc) { return 403; break; } if ($test = testmmsBrowsing) { return 403; break; } if ($test = operMmsMmsc) { set $ms $msisdn; } if ($test = operMmsMmsc) { set $ms $msisdn; } if ($test = operMmsBrowsing1) { set $ms $msisdn; } if ($test = testwapBrowsing1) { set $ms $msisdn; } if ($test = testmmsMmsc) { set $ms $msisdn; } access_log /var/log/nginx/cdr.log cdr; error_log /var/log/nginx/debug.log debug; proxy_set_header Host $host; proxy_set_header "X-Nokia-msisdn" $ms; proxy_pass http://$http_host$uri$is_args$args; } error_page 500 502 503 504 /50x.html; error_page 403 /403.html; location = /403.html { root html; allow all; } location = /50x.html { root html; allow all; } location = /testingpage.html { root html; access_log /var/log/nginx/lbprobe.log main; } } } 
######################################################################################### Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247886,248074#msg-248074 From mdounin at mdounin.ru Tue Mar 4 14:53:56 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Mar 2014 18:53:56 +0400 Subject: Worker dies with a segfault error In-Reply-To: <5cd170eafb069b40732a4444870e8a15.NginxMailingListEnglish@forum.nginx.org> References: <20140224124507.GS33573@mdounin.ru> <5cd170eafb069b40732a4444870e8a15.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140304145356.GS34696@mdounin.ru> Hello! On Tue, Mar 04, 2014 at 09:33:56AM -0500, greekduke wrote: > Hello > > Finally I managed to get a core dump and below is the output: > > [root at ape-01 nginx]# gdb /usr/local/nginx/sbin/nginx > /var/log/nginx/core.31000 > GNU gdb (GDB) Red Hat Enterprise Linux (7.2-60.el6_4.1) [...] > Loaded symbols for /lib64/libnss_files.so.2 > Core was generated by `nginx: worker process > '. > Program terminated with signal 11, Segmentation fault. > #0 ngx_pnalloc (pool=0x0, size=950) at src/core/ngx_palloc.c:155 > 155 if (size <= pool->max) { > Missing separate debuginfos, use: debuginfo-install > glibc-2.12-1.132.el6.x86_64 nss-softokn-freebl-3.14.3-9.el6.x86_64 > openssl-1.0.1e-15.el6.x86_64 pcre-7.8-6.el6.x86_64 zlib-1.2.3-29.el6.x86_64 > (gdb) backtrace? [...] > eval $result { > postgres_pass db; > postgres_query "select cast(count(url) as Note well that looking at your config suggests that the problem is in 3rd party module you use (either eval or postgres). In your original message you've claimed that you have segfaults without 3rd party modules. It's a good idea to reproduce the problem without 3rd party modules. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Mar 4 15:07:28 2014 From: nginx-forum at nginx.us (greekduke) Date: Tue, 04 Mar 2014 10:07:28 -0500 Subject: Worker dies with a segfault error In-Reply-To: <20140304145356.GS34696@mdounin.ru> References: <20140304145356.GS34696@mdounin.ru> Message-ID: <90ee92d37bc58828dcecfc57bae5b33c.NginxMailingListEnglish@forum.nginx.org> Hello Below is the backtrace. I have seen the behaviour without eval and Postgres and I will try to take a trace tomorrow. In the debug log actually there is no error. Since I have only one worker the PID just changes to the new worker without any info. Since I am using nginx as proxy only I am suspecting that the error has to do with the proxy functionality. [root at ape-01 nginx]# gdb /usr/local/nginx/sbin/nginx /var/log/nginx/core.31000 GNU gdb (GDB) Red Hat Enterprise Linux (7.2-60.el6_4.1) Copyright (C) 2010 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-redhat-linux-gnu". For bug reporting instructions, please see: ... Reading symbols from /usr/local/nginx/sbin/nginx...done. [New Thread 31000] Missing separate debuginfo for /usr/local/pgsql/lib/libpq.so.5 Try: yum --disablerepo='*' --enablerepo='*-debug*' install /usr/lib/debug/.build-id/50/df7babee4e7e51a305aed74590ad3d040c5ffb Missing separate debuginfo for Try: yum --disablerepo='*' --enablerepo='*-debug*' install /usr/lib/debug/.build-id/81/a81be2e44c93640adedb62adc93a47f4a09dd1 Reading symbols from /lib64/libpthread.so.0...(no debugging symbols found)...done. 
[Thread debugging using libthread_db enabled] Loaded symbols for /lib64/libpthread.so.0 Reading symbols from /lib64/libcrypt.so.1...(no debugging symbols found)...done. Loaded symbols for /lib64/libcrypt.so.1 Reading symbols from /usr/local/pgsql/lib/libpq.so.5...(no debugging symbols found)...done. Loaded symbols for /usr/local/pgsql/lib/libpq.so.5 Reading symbols from /lib64/libpcre.so.0...(no debugging symbols found)...done. Loaded symbols for /lib64/libpcre.so.0 Reading symbols from /usr/lib64/libcrypto.so.10...(no debugging symbols found)...done. Loaded symbols for /usr/lib64/libcrypto.so.10 Reading symbols from /lib64/libz.so.1...(no debugging symbols found)...done. Loaded symbols for /lib64/libz.so.1 Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done. Loaded symbols for /lib64/libc.so.6 Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/ld-linux-x86-64.so.2 Reading symbols from /lib64/libfreebl3.so...(no debugging symbols found)...done. Loaded symbols for /lib64/libfreebl3.so Reading symbols from /lib64/libdl.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/libdl.so.2 Reading symbols from /lib64/libnss_files.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/libnss_files.so.2 Core was generated by `nginx: worker process '. Program terminated with signal 11, Segmentation fault. #0 ngx_pnalloc (pool=0x0, size=950) at src/core/ngx_palloc.c:155 155 if (size <= pool->max) { Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.132.el6.x86_64 nss-softokn-freebl-3.14.3-9.el6.x86_64 openssl-1.0.1e-15.el6.x86_64 pcre-7.8-6.el6.x86_64 zlib-1.2.3-29.el6.x86_64 (gdb) backtrace full #0 ngx_pnalloc (pool=0x0, size=950) at src/core/ngx_palloc.c:155 m = p = #1 0x0000000000437b20 in ngx_http_log_handler (r=0x1bfb540) at src/http/modules/ngx_http_log_module.c:323 line = p = len = 950 i = l = log = op = 0x19d6358 buffer = lcf = 0x19e8b18 #2 0x000000000042d9dd in ngx_http_log_request (r=0x1bfb540) at src/http/ngx_http_request.c:3455 i = n = 1 log_handler = 0x1a02868 #3 0x000000000042e04e in ngx_http_free_request (r=0x1bfb540, rc=0) at src/http/ngx_http_request.c:3402 log = 0x1b400f0 pool = linger = {l_onoff = 568, l_linger = 0} cln = 0x0 ctx = #4 0x000000000042fe5b in ngx_http_set_keepalive (r=0x1bfb540) at src/http/ngx_http_request.c:2840 tcp_nodelay = rev = 0x7fc3cce9d3f8 b = 0x1b402d0 ---Type to continue, or q to quit--- i = f = wev = c = 0x7fc3cd0a74d0 hc = 0x1b40138 clcf = 0x19e8890 #5 ngx_http_finalize_connection (r=0x1bfb540) at src/http/ngx_http_request.c:2483 clcf = 0x19e8890 #6 0x0000000000430724 in ngx_http_finalize_request (r=0x1bfb540, rc=-4) at src/http/ngx_http_request.c:2213 c = 0x7fc3cd0a74d0 pr = #7 0x000000000042be5d in ngx_http_core_content_phase (r=0x1bfb540, ph=0x1a03468) at src/http/ngx_http_core_module.c:1410 root = 4369847 rc = path = {len = 28761224, data = 0x19bdde0 "\320\353\233\001"} #8 0x0000000000426873 in ngx_http_core_run_phases (r=0x1bfb540) at src/http/ngx_http_core_module.c:888 rc = ph = 0x1a03378 cmcf = #9 0x000000000042db58 in ngx_http_run_posted_requests (c=0x7fc3cd0a74d0) at src/http/ngx_http_request.c:2171 r = 0x1bfb540 ctx = pr = #10 0x0000000000440cfb in ngx_http_upstream_handler (ev=0x7fc3cce9e438) at src/http/ngx_http_upstream.c:980 c = 0x7fc3cd0a74d0 r = 0x1b6d1d0 ctx = ---Type to continue, or q to quit--- u = 0x1b6dc88 #11 0x0000000000423257 in ngx_epoll_process_events (cycle=0x19bdde0, timer=, flags=) at 
src/event/modules/ngx_epoll_module.c:691 events = 1 revents = 5 instance = i = level = err = rev = 0x7fc3cce9e438 wev = queue = c = 0x7fc3cd0a92d0 #12 0x000000000041abd3 in ngx_process_events_and_timers (cycle=0x19bdde0) at src/event/ngx_event.c:248 flags = 1 timer = 4998 delta = 1393942527536 #13 0x0000000000421a82 in ngx_worker_process_cycle (cycle=0x19bdde0, data=) at src/os/unix/ngx_process_cycle.c:816 worker = i = c = #14 0x00000000004201ac in ngx_spawn_process (cycle=0x19bdde0, proc=0x42198c , data=0x0, name=0x4771b3 "worker process", respawn=-3) at src/os/unix/ngx_process.c:198 on = 1 pid = 0 s = 0 ---Type to continue, or q to quit--- #15 0x0000000000420e9a in ngx_start_worker_processes (cycle=0x19bdde0, n=1, type=-3) at src/os/unix/ngx_process_cycle.c:364 i = ch = {command = 1, pid = 0, slot = 0, fd = 0} #16 0x0000000000422103 in ngx_master_process_cycle (cycle=0x19bdde0) at src/os/unix/ngx_process_cycle.c:136 title = 0x1a03543 "master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf" p = size = 79 i = 3 n = sigio = set = {__val = {0 }} itv = {it_interval = {tv_sec = 26996763, tv_usec = 0}, it_value = {tv_sec = 0, tv_usec = 0}} live = delay = ls = ccf = 0x19bed58 #17 0x0000000000404e24 in main (argc=, argv=) at src/core/nginx.c:407 i = log = 0x6a0fa0 cycle = 0x19bdde0 init_cycle = {conf_ctx = 0x0, pool = 0x19bd8f0, log = 0x6a0fa0, new_log = {log_level = 0, file = 0x0, connection = 0, handler = 0, data = 0x0, action = 0x0, next = 0x0}, log_use_stderr = 0, files = 0x0, free_connections = 0x0, free_connection_n = 0, reusable_connections_queue = {prev = 0x0, next = 0x0}, listening = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, paths = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, open_files = { last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, size = 0, nalloc = 0, pool = 0x0}, shared_memory = {last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, size = 0, nalloc = 0, pool = 0x0}, connection_n = 0, files_n = 0, connections = 0x0, read_events = 0x0, write_events = 0x0, old_cycle = 0x0, conf_file = {len = 32, ---Type to continue, or q to quit--- data = 0x7fff2bd1df50 ""}, conf_param = {len = 0, data = 0x0}, conf_prefix = {len = 22, data = 0x7fff2bd1df50 ""}, prefix = {len = 17, data = 0x472f78 "/usr/local/nginx/"}, lock_file = {len = 0, data = 0x0}, hostname = {len = 0, data = 0x0}} ccf = (gdb) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247886,248076#msg-248076 From nginx-forum at nginx.us Tue Mar 4 15:15:25 2014 From: nginx-forum at nginx.us (arunh) Date: Tue, 04 Mar 2014 10:15:25 -0500 Subject: Nginx postgres problem Message-ID: <14d66306501be40aca614988ce69ac05.NginxMailingListEnglish@forum.nginx.org> Hello All, I am newbie to Nginx and I am using Nginx with PostgreSQL database. When there is a http request to the server I want to extract one of the input variables in the request, query the Database and retrun the result, the variable inside the postgres_query is empty. But when I try to print the value of "$1" variable and also after assigning it to the variable "$tenantID" I do get the value. I guess in the eval block the postgres_escape is not getting set. Can some one throw some light on this issue if I am doing something wrong. Also is there any supporting module to use the echo statements inside the postgres block. 
Thanks, Arun Request : http://localhost/tenantservices/tenantID/service/userID/EndPointName location ~ ^/tenantservices/(.*)\/(.*)\/(.*)\/(.*) { echo teh complete request uri is :$request_uri; echo The first parameter is :$1; echo the second aprameter is :$2; echo the third parameter is :$3; echo the fourth param is :$4; set $tenantID $1; set $serviceName $2; set $userID $3; set $endPointName $4; echo After setting the tenantID:$tenantID; eval_subrequest_in_memory off; eval $max_no_clusters { default_type 'text/plain'; postgres_pass tenantdatabase; postgres_escape $tenantURID $tenantID; postgres_query "SELECT max_no_clusters FROM load_balancer WHERE id=$tenantURID"; postgres_output value ; } echo MaxNoOfclusters is : $max_no_clusters; } Log message on Nginx: 2014/03/04 15:46:05 [error] 31995#0: *152 postgres: "postgres_output value" received 0 value(s) instead of expected single value in location "/eval_155080700" while processing result from PostgreSQL database, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248077,248077#msg-248077 From mdounin at mdounin.ru Tue Mar 4 15:23:19 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Mar 2014 19:23:19 +0400 Subject: nginx-1.5.11 Message-ID: <20140304152318.GT34696@mdounin.ru> Changes with nginx 1.5.11 04 Mar 2014 *) Security: memory corruption might occur in a worker process on 32-bit platforms while handling a specially crafted request by ngx_http_spdy_module, potentially resulting in arbitrary code execution (CVE-2014-0088); the bug had appeared in 1.5.10. Thanks to Lucas Molas, researcher at Programa STIC, Fundaci?n Dr. Manuel Sadosky, Buenos Aires, Argentina. *) Feature: the $ssl_session_reused variable. *) Bugfix: the "client_max_body_size" directive might not work when reading a request body using chunked transfer encoding; the bug had appeared in 1.3.9. Thanks to Lucas Molas. *) Bugfix: a segmentation fault might occur in a worker process when proxying WebSocket connections. *) Bugfix: a segmentation fault might occur in a worker process if the ngx_http_spdy_module was used on 32-bit platforms; the bug had appeared in 1.5.10. *) Bugfix: the $upstream_status variable might contain wrong data if the "proxy_cache_use_stale" or "proxy_cache_revalidate" directives were used. Thanks to Piotr Sikora. *) Bugfix: a segmentation fault might occur in a worker process if errors with code 400 were redirected to a named location using the "error_page" directive. *) Bugfix: nginx/Windows could not be built with Visual Studio 2013. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Mar 4 15:23:54 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Mar 2014 19:23:54 +0400 Subject: nginx-1.4.6 Message-ID: <20140304152354.GX34696@mdounin.ru> Changes with nginx 1.4.6 04 Mar 2014 *) Bugfix: the "client_max_body_size" directive might not work when reading a request body using chunked transfer encoding; the bug had appeared in 1.3.9. Thanks to Lucas Molas. *) Bugfix: a segmentation fault might occur in a worker process when proxying WebSocket connections. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Mar 4 15:24:12 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Mar 2014 19:24:12 +0400 Subject: nginx security advisory (CVE-2014-0088) Message-ID: <20140304152412.GB34696@mdounin.ru> Hello! 
A bug in the experimental SPDY implementation in nginx 1.5.10 was found, which might allow an attacker to corrupt worker process memory by using a specially crafted request, potentially resulting in arbitrary code execution (CVE-2014-0088). The problem only affects nginx 1.5.10 on 32-bit platforms, compiled with the ngx_http_spdy_module module (which is not compiled by default), if the "spdy" option of the "listen" directive is used in a configuration file. The problem is fixed in nginx 1.5.11. Patch for the problem can be found here: http://nginx.org/download/patch.2014.spdy.txt Thanks to Lucas Molas, researcher at Programa STIC, Fundaci?n Dr. Manuel Sadosky, Buenos Aires, Argentina. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Mar 4 16:10:08 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Mar 2014 20:10:08 +0400 Subject: Worker dies with a segfault error In-Reply-To: <90ee92d37bc58828dcecfc57bae5b33c.NginxMailingListEnglish@forum.nginx.org> References: <20140304145356.GS34696@mdounin.ru> <90ee92d37bc58828dcecfc57bae5b33c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140304161008.GG34696@mdounin.ru> Hello! On Tue, Mar 04, 2014 at 10:07:28AM -0500, greekduke wrote: > Hello > > Below is the backtrace. I have seen the behaviour without eval and Postgres > and I will try to take a trace tomorrow. In the debug log actually there is > no error. Since I have only one worker the PID just changes to the new > worker without any info. Since I am using nginx as proxy only I am > suspecting that the error has to do with the proxy functionality. Your config uses eval{} block in "location /", and postgres module is used inside of the eval{}. That is, two 3rd party modules are used for each request, and bugs in these modules can easily cause problems. [...] > #0 ngx_pnalloc (pool=0x0, size=950) at src/core/ngx_palloc.c:155 > #1 0x0000000000437b20 in ngx_http_log_handler (r=0x1bfb540) at > #2 0x000000000042d9dd in ngx_http_log_request (r=0x1bfb540) at > #3 0x000000000042e04e in ngx_http_free_request (r=0x1bfb540, rc=0) at > #4 0x000000000042fe5b in ngx_http_set_keepalive (r=0x1bfb540) at > #5 ngx_http_finalize_connection (r=0x1bfb540) at > #6 0x0000000000430724 in ngx_http_finalize_request (r=0x1bfb540, rc=-4) at > #7 0x000000000042be5d in ngx_http_core_content_phase (r=0x1bfb540, > #8 0x0000000000426873 in ngx_http_core_run_phases (r=0x1bfb540) at > #9 0x000000000042db58 in ngx_http_run_posted_requests (c=0x7fc3cd0a74d0) at > #10 0x0000000000440cfb in ngx_http_upstream_handler (ev=0x7fc3cce9e438) at > #11 0x0000000000423257 in ngx_epoll_process_events (cycle=0x19bdde0, [...] Trace suggests most likely there is a problem with r->count reference counting somewhere. As a next step, I would recommend you to try to reproduce the problem without 3rd party modules. -- Maxim Dounin http://nginx.org/ From francis at daoine.org Tue Mar 4 19:48:09 2014 From: francis at daoine.org (Francis Daly) Date: Tue, 4 Mar 2014 19:48:09 +0000 Subject: redirect 301 alias domain In-Reply-To: References: <20140304121541.GQ29880@craic.sysops.org> Message-ID: <20140304194809.GS29880@craic.sysops.org> On Tue, Mar 04, 2014 at 08:27:39AM -0500, zuckbin wrote: Hi there, this... > curl -i "http://www.aaa.com/" > Location: http://www.aaa.com/ says that the config that you think nginx is using is not the config that nginx is using. You have a few other recent mails which probably have the same starting cause. 
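One quick way to confirm which file the running nginx actually reads (a sketch; exact paths depend on how the binary was built and packaged, and if nginx was started with an explicit -c option, pass the same -c here):

    nginx -V 2>&1 | tr ' ' '\n' | grep -- --conf-path   # compiled-in default configuration path
    nginx -t                                            # prints the configuration file it tests and whether it parses

If the file named there is not the one being edited, that would explain why the intended www.bbb.com redirect never appears.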
Change things so that you know exactly which config file is being used, and then put your intended config in that file and restart nginx. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Mar 4 20:51:36 2014 From: nginx-forum at nginx.us (talkingnews) Date: Tue, 04 Mar 2014 15:51:36 -0500 Subject: Confusion over apparently conflicting advice in guide/wiki/examples In-Reply-To: References: Message-ID: <8f6cd659438bc68ffbbca71380032f0e.NginxMailingListEnglish@forum.nginx.org> Hi BR and thank you for your reply. You said: > Where does the 'sites-available' directory of nginx came from? Standard "apt-get install nginx" on Ubunutu. Stable and mainline. Like Apache, 'sites-available' contains all sites, then you can symlink to 'sites-enabled' for running sites. It's just the Ubuntu way :) > There is no such DOCUMENT_URI server variable in PHP > The nginx wiki has not the reputation of being a trustable source I know you say not to trust the wiki (it appears in http://wiki.nginx.org/PHPFcgiExample) but it also is in the standard install of nginx on ubuntu which comes with an /etc/nginx/fastcgi_params file containing fastcgi_param DOCUMENT_URI $document_uri; Perhaps it should not even be there? Should I report it as a possible error to the Ubuntu package maintainers? > The '0' value seems to exist for backward-compatibility as it provides a broken environment. > Thus, scripts relying on such a value are highly suspicious to my eyes. > What exactly are you referring to in the pitfalls page saying that you setup is dangerous?? Well, in your reply you say that it provides a broken environment, but as I mentioned, in both the nginx wiki AND in the default config file which comes with a standard nginx install on Ubuntu, it says # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini So, you can understand my confusion here! PHP says leave it on. You say leave it on. Nginx stand install and wiki says turn it off so that nginx doesn't keep trying files. The pitfalls page says: ------------------------ "For instance, if a request is made for /forum/avatar/1232.jpg/file.php which does not exist but if /forum/avatar/1232.jpg does, the PHP interpreter will process /forum/avatar/1232.jpg instead. If this contains embedded PHP code, this code will be executed accordingly. Options for avoiding this are: Set cgi.fix_pathinfo=0 in php.ini. This causes the PHP interpreter to only try the literal path given and to stop processing if the file is not found." ------------------------ So what I meant was that setting cgi.fix_pathinfo = 1 may leave this security gap of executing unwanted code. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248051,248110#msg-248110 From francis at daoine.org Tue Mar 4 21:31:14 2014 From: francis at daoine.org (Francis Daly) Date: Tue, 4 Mar 2014 21:31:14 +0000 Subject: Confusion over apparently conflicting advice in guide/wiki/examples In-Reply-To: <0bef30fd963bc947d84e402ba3ee025e.NginxMailingListEnglish@forum.nginx.org> References: <0bef30fd963bc947d84e402ba3ee025e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140304213114.GT29880@craic.sysops.org> On Mon, Mar 03, 2014 at 04:11:52PM -0500, talkingnews wrote: Hi there, > I'd call myself very much a beginner with NGiNX, but I've been looking > further through the documentation, particularly the > http://wiki.nginx.org/Pitfalls page, and now I'm left with confusion! The wiki is pretty much free for anyone to edit. The documentation is somewhere else. 
> This page http://wiki.nginx.org/PHPFcgiExample says > "This guide run fine on php.ini cgi.fix_pathinfo = 1 (the default). Some > guide insist to change it to cgi.fix_pathinfo = 0 but doing that make > PHP_SELF variable broken (not equal to DOCUMENT_URI).". > > But http://wiki.nginx.org/Pitfalls says: > Set cgi.fix_pathinfo=0 in php.ini. This causes the PHP interpreter to only > try the literal path given and to stop processing if the file is not found. Two different wiki pages, two different authors with different requirements and expectations. Probably. The short version is: nginx and php and fastcgi are three different things. nginx doesn't care what php configuration you use. nginx also doesn't care if your php configuration works at all. It's the job of the person doing the configuring to care about that. > Which is correct? You're more likely to get correct php advice on a php list. I suspect that the only reason it is mentioned anywhere on the nginx wiki is that some time ago, someone reported that they configured their nginx to ask their fastcgi server to consider the filename /var/www/file.txt/fake.php to be a php script; and the fastcgi server and/or php interpreter instead processed the file /var/www/file.txt as a php script; and that this was a problem and that it was also somehow nginx's problem or fault. (I agree it's a problem; I disagree it's nginx's problem.) I'd say use "cgi.fix_pathinfo = 0", and fix any php script or environment that has problems. But I'd also say don't expect good knitting advice on a non-knitting list. > My second question: As I understand it, you should always make parameter > changes only where they are needed, and in an overriding way - ie: one never > touches php.ini itself. That's another php question. nginx doesn't care. > In the server stanza there is: > > server { > fastcgi_buffers 8 16k; > fastcgi_buffer_size 32k; > fastcgi_read_timeout 180; > .... > > and then separately it says to add to fastcgi_params the following: > > fastcgi_connect_timeout 60; > fastcgi_send_timeout 180; > fastcgi_read_timeout 180; > fastcgi_buffer_size 128k; > fastcgi_buffers 4 256k; > fastcgi_busy_buffers_size 256k; > fastcgi_temp_file_write_size 256k; > fastcgi_intercept_errors off; > > Some of those numbers are HUGE - most of the buffer defaults are normally > 4k|8k. And 3 minutes between connections? Is this over-the-top? And the > three items in server are conflicted by different values in fastcgi params. Maybe those big values are suitable for that specific application. Or maybe the random author who put it into the wiki found that this set of values worked for them without testing what else might have worked. In this case, putting values both at server level and in a file included at location level means that the nginx inheritance rules will apply, and not all of the server-level values will matter. (And putting any non-fastcgi_param values in a file called fastcgi_params is probably a poor idea.) > And isn't that going to "pollute" the whole fpm server? No. Those settings are purely for the fastcgi client, which is nginx. The only ones the server sees are fastcgi_param values -- and they are random key/value pairs that nginx doesn't care about. The fastcgi server does with them whatever it wants. 
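Two small sketches may help make the points above concrete. First, the inheritance rule for directives such as fastcgi_buffers: a value set at location level (directly or via an included file) replaces the server-level value for requests handled in that location; the two are not merged. The socket path and sizes below are made up for the example:

    server {
        fastcgi_buffers 8 16k;              # server-level default

        location ~ \.php$ {
            fastcgi_buffers 4 256k;         # used here; the 8x16k above is ignored for this location
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            include fastcgi_params;
        }
    }

Second, returning to the /var/www/file.txt/fake.php example: regardless of the cgi.fix_pathinfo setting, such a request can be refused on the nginx side by checking that the named .php file really exists before anything is passed to the fastcgi server. Again only a sketch, with assumed paths:

    location ~ \.php$ {
        try_files $uri =404;                # /forum/avatar/1232.jpg/file.php is not an existing file, so this returns 404
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }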
> I thought it would > be better to have it in the fpm pool, so first I had it like this: > > php_value[upload_max_filesize] = 128M > php_value[max_file_uploads] = 60 > php_value[default_socket_timeout] = 180 > php_value[date.timezone] = 'Europe/London' > php_value[session.gc_maxlifetime] = 28800 php question. > The I realised I only needed these high values for one area of my server, so > again I changed it: > > location ~ /upload/ { > location ~ \.(php)$ { > try_files $uri =404; > set $php_value "post_max_size = 128M"; > set $php_value "$php_value \n upload_max_filesize = 128M"; > set $php_value "$php_value \n session.gc_maxlifetime = 28800"; > set $php_value "$php_value \n max_file_uploads = 60"; > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > fastcgi_param PHP_VALUE $php_value; > include fastcgi_params; > } > } > > And it works fine. No core ini files are touched, only the area which need > to change is changed. Strictly a php question still. nginx (the fastcgi client) can send any key/value pairs as fastcgi_params. What the fastcgi server does with them is not nginx's concern. If your fastcgi server does something useful with PHP_VALUE, then it can be useful for you to set it, like you do here. But nginx doesn't know what the values mean, or what they invite your fastcgi server to do. > Also, the example config has: > > location ~ \.php { > fastcgi_pass unix:/tmp/domain.sock; > fastcgi_split_path_info ^(.+\.php)(.*)$; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > include fastcgi_params; > > But the Pitfalls guide suggests this is dangerous. The Pitfalls guide should explain why it is suggests this is dangerous. What url might match this location? What fastcgi_param SCRIPT_FILENAME might your fastcgi server receive? What file might your fastcgi server process as if it were a php script? If you can't tell, then there's a danger -- but it's not necessarily on the nginx side. > So, my question would be: > > Is this example file wrong/outdated/dangerous? > > Or am I completely misunderstanding something? I suspect that the multiple voices on the wiki have not explained clearly to you why what they suggest is true, is true. Perhaps it's not true at all. Or perhaps it is true in a specific set of circumstances -- which differ for each voice. Are things any clearer if you limit yourself to things at http://www.nginx.org/ ? Perhaps there aren't enough examples there, I don't know. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Mar 4 21:40:47 2014 From: francis at daoine.org (Francis Daly) Date: Tue, 4 Mar 2014 21:40:47 +0000 Subject: Confusion over apparently conflicting advice in guide/wiki/examples In-Reply-To: <8f6cd659438bc68ffbbca71380032f0e.NginxMailingListEnglish@forum.nginx.org> References: <8f6cd659438bc68ffbbca71380032f0e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140304214047.GU29880@craic.sysops.org> On Tue, Mar 04, 2014 at 03:51:36PM -0500, talkingnews wrote: Hi there, continuing from my previous mail... 
> > There is no such DOCUMENT_URI server variable in PHP > > The nginx wiki has not the reputation of being a trustable source > > I know you say not to trust the wiki (it appears in > http://wiki.nginx.org/PHPFcgiExample) but it also is in the standard install > of nginx on ubuntu which comes with an /etc/nginx/fastcgi_params file > containing > fastcgi_param DOCUMENT_URI $document_uri; > > Perhaps it should not even be there? Should I report it as a possible error > to the Ubuntu package maintainers? nginx is the fastcgi client. It can send any key/value pairs to the fastcgi server. If you read the fastcgi spec, you'll see that certain keys are expected to exist. And if you read your fastcgi server documentation, you'll see that certain keys are heeded. Those lists of keys may not be identical. A lot of the fastcgi_params file seems to be things that some common fastcgi servers and/or the code they run will typically make use of. They are things added to be helpful in some cases, which are unlikely to ever be harmful. Perhaps your fastcgi server will be happy with just "fastcgi_param SCRIPT_FILENAME /tmp/env.php", and with no other fastcgi_param values at all. Or perhaps it ignores SCRIPT_FILENAME and instead uses some different keys to identify the file to be processed. And perhaps your next fastcgi server will do something different. You must configure your nginx to say whatever your fastcgi server needs to hear. Many of the "default" params are to make it Just Work with different servers. (I think.) > So, you can understand my confusion here! PHP says leave it on. You say > leave it on. Nginx stand install and wiki says turn it off so that nginx > doesn't keep trying files. No. nginx doesn't keep trying files. The fastcgi server might, but that's a "fix your fastcgi server" issue. > So what I meant was that setting cgi.fix_pathinfo = 1 may leave this > security gap of executing unwanted code. ...in the php interpreter. Not in nginx. Fix php problems in php, and things will be easier. Cheers, f -- Francis Daly francis at daoine.org From agentzh at gmail.com Tue Mar 4 23:19:23 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 4 Mar 2014 15:19:23 -0800 Subject: Nginx postgres problem In-Reply-To: <14d66306501be40aca614988ce69ac05.NginxMailingListEnglish@forum.nginx.org> References: <14d66306501be40aca614988ce69ac05.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Tue, Mar 4, 2014 at 7:15 AM, arunh wrote: > But when I try to print the value of "$1" variable and also after assigning > it to the variable "$tenantID" I do get the value. > I guess in the eval block the postgres_escape is not getting set. > The "eval" directive always runs before the directives of the standard ngx_rewrite module. No matter in what order you write them down in nginx.conf. As a co-maintainer of ngx_postgres, I recommend you use ngx_lua module to communicate with ngx_postgres. Basically you can use rewrite_by_lua to inject some Lua code and use ngx.location.capture to issue a subrequest to your postgres location. Regards, -agentzh From nginx-forum at nginx.us Wed Mar 5 00:34:02 2014 From: nginx-forum at nginx.us (tt5430) Date: Tue, 04 Mar 2014 19:34:02 -0500 Subject: nginx as front end cache for WMS In-Reply-To: <5315A963.1050204@citrin.ru> References: <5315A963.1050204@citrin.ru> Message-ID: Hi Anton, I followed your suggestion and modified my conf files. I also added some new stuff but still could not get it to work. 
I did not see any message written to the access log and did not see anything written to the cache. I included my new conf files and a request/response block that I captured using FireFox Live Http Headers addon. I hope someone can catch my mistake and show me what was wrong. [/etc/nginx/nginx.conf] user vriuser; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; log_format geoserver-cache '$remote_addr - $upstream_cache_status [$time_local] ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent"'; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } [/etc/nginx/sites-enabled/proxy.conf] proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; add_header X-Cache $upstream_cache_status; proxy_connect_timeout 60; proxy_read_timeout 90; proxy_send_timeout 60; proxy_buffering on; proxy_buffer_size 8k; proxy_buffers 100 8k; [/etc/nginx/sites-enabled/geoserver.conf] upstream backend { server localhost; } proxy_cache_path /var/geoserver/cache levels=1:2 keys_zone=geoserver-cache:10m inactive=7d max_size=5000m; proxy_temp_path /var/geoserver/cache/tmp; server { listen 80; #server_name localhost; #root /home/geoserver/webapps/geoserver; root /opt/tomcat/webapps/geoserver; proxy_redirect off; proxy_buffering on; proxy_cache_valid any 7d; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; access_log /var/log/nginx/geoserver/access.log geoserver-cache; error_log /var/log/nginx/geoserver/error.log; proxy_cache geoserver-cache; proxy_cache_valid 200 2h; proxy_cache_valid 302 2h; proxy_cache_valid 301 4h; proxy_cache_valid any 1m; resolver 127.0.0.1; location / { proxy_pass http://backend; proxy_cache_key $scheme://backend:8080/$request_uri; proxy_cache_valid 200; } } Request/Response headers: http://localhost:8080/geoserver/osm/wms?LAYERS=Florida&STYLES=&FORMAT=image%2Fpng&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&SRS=EPSG%3A3857&BBOX=-9046077.9720712,3223712.9634365,-9032807.336146,3240204.627499&WIDTH=412&HEIGHT=512 GET /geoserver/osm/wms?LAYERS=Florida&STYLES=&FORMAT=image%2Fpng&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&SRS=EPSG%3A3857&BBOX=-9046077.9720712,3223712.9634365,-9032807.336146,3240204.627499&WIDTH=412&HEIGHT=512 HTTP/1.1 Host: localhost:8080 User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:23.0) Gecko/20100101 Firefox/23.0 Accept: image/png,image/*;q=0.8,*/*;q=0.5 Accept-Language: en-US,en;q=0.5 Accept-Encoding: gzip, deflate Referer: http://localhost:8080/geoserver/osm/wms?service=WMS&version=1.1.0&request=GetMap&layers=Florida&styles=&bbox=-9735065.0,2733130.75,-8885187.0,3788597.25&width=412&height=512&srs=EPSG:3857&format=application/openlayers Cookie: JSESSIONID=BDE023A90406794200BCF412045BC96C; JSESSIONID=maqv107kg0me Connection: keep-alive HTTP/1.1 200 OK Server: Apache-Coyote/1.1 Content-Disposition: inline; filename=Florida.png Content-Type: image/png Transfer-Encoding: chunked Date: Wed, 05 Mar 2014 00:15:35 GMT Regards, Tam Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248053,248117#msg-248117 From 
nginx-forum at nginx.us Wed Mar 5 04:18:48 2014 From: nginx-forum at nginx.us (loki) Date: Tue, 04 Mar 2014 23:18:48 -0500 Subject: High CPU Usage and NGINX Hangup In-Reply-To: <0de366f504487eb0e2849ac9c6ffe2d7.NginxMailingListEnglish@forum.nginx.org> References: <0de366f504487eb0e2849ac9c6ffe2d7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <435bfb65c74ee446c01c8b2629cf162e.NginxMailingListEnglish@forum.nginx.org> I am now seeing a bunch of epoll_waits in the strace. Multiple master processes spawn. 4519 0.000000 epoll_wait(143, {}, 512, 500) = 0 4519 0.500833 epoll_wait(143, {}, 512, 500) = 0 4519 0.500785 epoll_wait(143, {}, 512, 500) = 0 4519 0.500756 epoll_wait(143, {}, 512, 500) = 0 4519 0.500742 epoll_wait(143, {}, 512, 500) = 0 4519 0.500760 epoll_wait(143, {}, 512, 500) = 0 4519 0.500815 epoll_wait(143, {}, 512, 500) = 0 4519 0.500792 epoll_wait(143, {}, 512, 500) = 0 4519 0.500727 epoll_wait(143, {}, 512, 500) = 0 4519 0.500760 epoll_wait(143, {}, 512, 500) = 0 4519 0.500754 epoll_wait(143, {}, 512, 500) = 0 4519 0.500702 epoll_wait(143, {}, 512, 500) = 0 4519 0.500742 epoll_wait(143, {}, 512, 500) = 0 4519 0.500751 epoll_wait(143, {}, 512, 500) = 0 4519 0.500786 epoll_wait(143, {}, 512, 500) = 0 4519 0.500764 epoll_wait(143, {}, 512, 500) = 0 4519 0.500743 epoll_wait(143, {}, 512, 500) = 0 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248055,248119#msg-248119 From agentzh at gmail.com Wed Mar 5 05:16:10 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 4 Mar 2014 21:16:10 -0800 Subject: High CPU Usage and NGINX Hangup In-Reply-To: <0de366f504487eb0e2849ac9c6ffe2d7.NginxMailingListEnglish@forum.nginx.org> References: <0de366f504487eb0e2849ac9c6ffe2d7.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Mon, Mar 3, 2014 at 9:07 PM, loki wrote: > I am seeing my NGINX server randomly hangup. Multiple worker processes > utilizing 100% CPU. 100% CPU occupancy usually happen on the userland code, so strace is usually not help by providing info on the syscall level. It'll be ideal if you can take a C-land on-CPU flame graph for your worker processes spining at 100% CPU time, which will give the whole picture about how CPU time is distributed among all the code paths: https://github.com/agentzh/nginx-systemtap-toolkit#sample-bt We've been using this to analyse and optimize CPU hogs in our production environment, and not just for NGINX but for everything that we can get proper backtraces :) Best regards, -agentzh From igor at sysoev.ru Wed Mar 5 07:06:43 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 5 Mar 2014 11:06:43 +0400 Subject: High CPU Usage and NGINX Hangup In-Reply-To: <0de366f504487eb0e2849ac9c6ffe2d7.NginxMailingListEnglish@forum.nginx.org> References: <0de366f504487eb0e2849ac9c6ffe2d7.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Mar 4, 2014, at 9:07 , loki wrote: > I am seeing my NGINX server randomly hangup. Multiple worker processes > utilizing 100% CPU. I see the hangups in strace, but not quite sure what > would be causing the issue. > > 0.000081 futex(0x7f563c893068, FUTEX_WAIT, 0, NULL) = ? 
ERESTARTSYS (To be > restarted) > 92.011974 --- SIGHUP (Hangup) @ 0 (0) --- > 0.000069 rt_sigreturn(0x934b9b) = -1 EINTR (Interrupted system > call) > 0.000056 futex(0x7f563c893068, FUTEX_WAIT, 0, NULL) = 0 > 154.792082 futex(0x7f563c893068, FUTEX_WAKE, 1) = 0 > > 0.000036 recvmsg(32, 0x7fff134e28b0, 0) = -1 EAGAIN (Resource temporarily > unavailable) > 0.000842 epoll_wait(33, {{EPOLLIN, {u32=404296080, > u64=140007748211088}}}, 512, 54019) = 1 > 49.323010 recvmsg(32, {msg_name(0)=NULL, > msg_iov(1)=[{"\2\0\0\0\0\0\0\0\261m\0\0\0\0\0\0\5\0\0\0\0\0\0\0\377\377\377\377\0\0\0\0", > 32}], msg_controllen=0, msg_flags=0}, 0) = 32 > 0.000129 close(23) = 0 > 0.002116 recvmsg(32, 0x7fff134e28b0, 0) = -1 EAGAIN (Resource > temporarily unavailable) > 0.000837 epoll_wait(33, {{EPOLLIN, {u32=404296080, > u64=140007748211088}}}, 512, 4695) = 1 nginx does not use mutexes. It seems you run 3rd-party modules. -- Igor Sysoev http://nginx.com From nginx-forum at nginx.us Wed Mar 5 08:18:31 2014 From: nginx-forum at nginx.us (zuckbin) Date: Wed, 05 Mar 2014 03:18:31 -0500 Subject: redirect 301 alias domain In-Reply-To: <20140304194809.GS29880@craic.sysops.org> References: <20140304194809.GS29880@craic.sysops.org> Message-ID: <33831540514aef79615e9318632f3032.NginxMailingListEnglish@forum.nginx.org> i don't understand why you said that is not the good conf file is used ? how do you know this ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248038,248123#msg-248123 From nginx-forum at nginx.us Wed Mar 5 10:47:58 2014 From: nginx-forum at nginx.us (luckyswede) Date: Wed, 05 Mar 2014 05:47:58 -0500 Subject: proxy_pass with variable removes uri Message-ID: <9e85274344530e4dba8f482a15c40404.NginxMailingListEnglish@forum.nginx.org> Hi, I have a conf with two virtual hosts and a proxy-pass that is dependent on which host the request arrived to, like this: server { listen 80; server_name x.com y.com; resolver 8.8.8.8; root /var/www/html; location / { # whatever } location /api/ { proxy_pass http://api.$host/; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $host; } } Note the trailing "/" on proxy_pass which should forward the uri untouched, stripping out "/api". However, the uri is not forwarded at all, e.g. GET http://x.com/api/somehing is forwarded to api.x.com without the "/something" part. But, if I hard code the proxy_pass url, like this: proxy_pass http://api.x.com/; it works, the uri is properly forwarded. Doesn't proxy_pass have proper support for variables or have I done something wrong? Many thanks / Jonas Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248124,248124#msg-248124 From richard.ibbotson at gmail.com Wed Mar 5 11:13:05 2014 From: richard.ibbotson at gmail.com (Richard Ibbotson) Date: Wed, 05 Mar 2014 11:13:05 +0000 Subject: Some Issues with Configuration Message-ID: <2077506.vnlZ8baFKa@sheflug.sheflug.net> Hi I've been having a few problems with configuration of NGINX. No problems with running Apache or Lighttpd on my own Linux box but I've been scratching my head over NGINX. When I've compiled from source or used the vanilla Ubuntu package I find that I can download the front page from my box which is http://sleepypenguin.homelinux.org. This is an HTML page. I can't download other pages which are serveral instances of Wordpress. Such as http://sleepypenguin.homelinux.org/blog. 
What I get instead is that my web browser (I have tried several web browsers in different locations) asks me to download a BIN file instead of a web page. I'm sure that someone has had this problem when configuring NGINX. Can anyone point me in the right direction ? I thought it might be something to do with the location statment. Such as .... /etc/nginx/common/locations.conf # Blog location = /blog { allow all; access_log off; log_not_found off; } But.. No... That's not it either. -- Richard Sheffield UK https://twitter.com/SleepyPenguin1 From mdounin at mdounin.ru Wed Mar 5 11:22:04 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 5 Mar 2014 15:22:04 +0400 Subject: proxy_pass with variable removes uri In-Reply-To: <9e85274344530e4dba8f482a15c40404.NginxMailingListEnglish@forum.nginx.org> References: <9e85274344530e4dba8f482a15c40404.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140305112204.GP34696@mdounin.ru> Hello! On Wed, Mar 05, 2014 at 05:47:58AM -0500, luckyswede wrote: > Hi, > I have a conf with two virtual hosts and a proxy-pass that is dependent on > which host the request arrived to, like this: > > server { > listen 80; > server_name x.com y.com; > resolver 8.8.8.8; > root /var/www/html; > > location / { > # whatever > } > > location /api/ { > proxy_pass http://api.$host/; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $remote_addr; > proxy_set_header Host $host; > } > } > > Note the trailing "/" on proxy_pass which should forward the uri untouched, > stripping out "/api". However, the uri is not forwarded at all, e.g. GET > http://x.com/api/somehing is forwarded to api.x.com without the "/something" > part. > But, if I hard code the proxy_pass url, like this: > proxy_pass http://api.x.com/; > it works, the uri is properly forwarded. > > Doesn't proxy_pass have proper support for variables or have I done > something wrong? The "proxy_pass" directive, if used with variables, specifies _full_ URI to request, see http://nginx.org/r/proxy_pass: : A server name, its port and the passed URI can also be specified : using variables: : : proxy_pass http://$host$uri; : : or even like this: : : proxy_pass $request; -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Wed Mar 5 11:23:08 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 05 Mar 2014 15:23:08 +0400 Subject: proxy_pass with variable removes uri In-Reply-To: <9e85274344530e4dba8f482a15c40404.NginxMailingListEnglish@forum.nginx.org> References: <9e85274344530e4dba8f482a15c40404.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2731548.yxxLrOp24l@vbart-laptop> On Wednesday 05 March 2014 05:47:58 luckyswede wrote: > Hi, > I have a conf with two virtual hosts and a proxy-pass that is dependent on > which host the request arrived to, like this: > > server { > listen 80; > server_name x.com y.com; > resolver 8.8.8.8; > root /var/www/html; > > location / { > # whatever > } > > location /api/ { > proxy_pass http://api.$host/; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $remote_addr; > proxy_set_header Host $host; > } > } > > Note the trailing "/" on proxy_pass which should forward the uri untouched, > stripping out "/api". However, the uri is not forwarded at all, e.g. GET > http://x.com/api/somehing is forwarded to api.x.com without the "/something" > part. > But, if I hard code the proxy_pass url, like this: > proxy_pass http://api.x.com/; > it works, the uri is properly forwarded. 
> > Doesn't proxy_pass have proper support for variables or have I done > something wrong? [..] If variables are used in proxy_pass, then a full path should be specified. For example: proxy_pass http://api.$host$uri; wbr, Valentin V. Bartenev From nginx-forum at nginx.us Wed Mar 5 11:29:29 2014 From: nginx-forum at nginx.us (luckyswede) Date: Wed, 05 Mar 2014 06:29:29 -0500 Subject: proxy_pass with variable removes uri In-Reply-To: <2731548.yxxLrOp24l@vbart-laptop> References: <2731548.yxxLrOp24l@vbart-laptop> Message-ID: <4230602a5e397b4d78b4aaa1be7987fa.NginxMailingListEnglish@forum.nginx.org> Thanks, But I want to automatically remove the "/api" part, just as it does if I don't use variables. So that isn't possible? BR / Jonas Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248124,248128#msg-248128 From nginx-forum at nginx.us Wed Mar 5 12:23:23 2014 From: nginx-forum at nginx.us (luckyswede) Date: Wed, 05 Mar 2014 07:23:23 -0500 Subject: proxy_pass with variable removes uri In-Reply-To: <20140305112204.GP34696@mdounin.ru> References: <20140305112204.GP34696@mdounin.ru> Message-ID: <9ed76540401a8449ec645d1c5613a1bb.NginxMailingListEnglish@forum.nginx.org> Hi, I've had troubles with url-decoding using this kind configuration, e.g. get variables with values containing spaces have been decoded before proxied which is resulting in an error. For example I've tried: location ~ ^/api/(.*) { proxy_pass http://api.$host/$1$is_args$args; } but that gives an error if the uri is urlencoded. Any ideas? BR / Jonas Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248124,248129#msg-248129 From nginx-forum at nginx.us Wed Mar 5 12:26:18 2014 From: nginx-forum at nginx.us (luckyswede) Date: Wed, 05 Mar 2014 07:26:18 -0500 Subject: proxy_pass with variable removes uri In-Reply-To: <4230602a5e397b4d78b4aaa1be7987fa.NginxMailingListEnglish@forum.nginx.org> References: <2731548.yxxLrOp24l@vbart-laptop> <4230602a5e397b4d78b4aaa1be7987fa.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3f96e7e169ece14272ef4be46db2a965.NginxMailingListEnglish@forum.nginx.org> Also, I want to make use of a resolver, which requires variables in the proxy_pass directive. Does this mean that it is not possible to automatically "strip" out the leading "/api" (which seems to not work with variables) and using resolvers (which requires variables)? BR / Jonas Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248124,248130#msg-248130 From mdounin at mdounin.ru Wed Mar 5 12:33:39 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 5 Mar 2014 16:33:39 +0400 Subject: proxy_pass with variable removes uri In-Reply-To: <9ed76540401a8449ec645d1c5613a1bb.NginxMailingListEnglish@forum.nginx.org> References: <20140305112204.GP34696@mdounin.ru> <9ed76540401a8449ec645d1c5613a1bb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140305123339.GQ34696@mdounin.ru> Hello! On Wed, Mar 05, 2014 at 07:23:23AM -0500, luckyswede wrote: > Hi, > I've had troubles with url-decoding using this kind configuration, e.g. get > variables with values containing spaces have been decoded before proxied > which is resulting in an error. > For example I've tried: > location ~ ^/api/(.*) { > proxy_pass http://api.$host/$1$is_args$args; > } > but that gives an error if the uri is urlencoded. > > Any ideas? When using variables you are responsible for proper encoding of URIs used. 
If you really want to use proxy_pass with variables, try this instead: location /api/ { rewrite ^/api(/.*) $1 break; proxy_pass http://api.$host; } It relies on the fact that if there is no URI at all, original request uri will be used. Though I would recommend using hardcoded name instead. Note that using proxy_pass with variables implies various other side effects, notably use of resolver for dynamic name resolution, and generally less effective than using names in a configuration. -- Maxim Dounin http://nginx.org/ From francis at daoine.org Wed Mar 5 12:43:03 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 5 Mar 2014 12:43:03 +0000 Subject: proxy_pass with variable removes uri In-Reply-To: <9ed76540401a8449ec645d1c5613a1bb.NginxMailingListEnglish@forum.nginx.org> References: <20140305112204.GP34696@mdounin.ru> <9ed76540401a8449ec645d1c5613a1bb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140305124303.GV29880@craic.sysops.org> On Wed, Mar 05, 2014 at 07:23:23AM -0500, luckyswede wrote: Hi there, > I've had troubles with url-decoding using this kind configuration, e.g. get > variables with values containing spaces have been decoded before proxied > which is resulting in an error. Untested, but I'd suggest to use a map (http://nginx.org/r/map) to save the part of $request_uri that you want to use in the proxy_pass url. It may become complicated if you want to handle people requesting things like /ap%69/stuff, but otherwise should probably work. (The alternative is probably to use a url encoder on the appropriate parts of your url; I don't know of one that drop-in works.) f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Mar 5 12:46:38 2014 From: nginx-forum at nginx.us (luckyswede) Date: Wed, 05 Mar 2014 07:46:38 -0500 Subject: proxy_pass with variable removes uri In-Reply-To: <20140305123339.GQ34696@mdounin.ru> References: <20140305123339.GQ34696@mdounin.ru> Message-ID: Cool, that works! I don't understand why though, why is the uri urldecoded in my example but not in your example? Also, I actually want the dns resolution to take place because I'm running in an AWS environment.. Thanks / Jonas Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248124,248134#msg-248134 From francis at daoine.org Wed Mar 5 13:01:12 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 5 Mar 2014 13:01:12 +0000 Subject: redirect 301 alias domain In-Reply-To: <33831540514aef79615e9318632f3032.NginxMailingListEnglish@forum.nginx.org> References: <20140304194809.GS29880@craic.sysops.org> <33831540514aef79615e9318632f3032.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140305130112.GW29880@craic.sysops.org> On Wed, Mar 05, 2014 at 03:18:31AM -0500, zuckbin wrote: > i don't understand why you said that is not the good conf file is used ? > > how do you know this ? You request http://www.aaa.com/. You get a redirect to http://www.aaa.com/. Which part of the configuration you showed does a redirect to http://www.aaa.com/? f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Wed Mar 5 13:42:01 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 5 Mar 2014 17:42:01 +0400 Subject: proxy_pass with variable removes uri In-Reply-To: References: <20140305123339.GQ34696@mdounin.ru> Message-ID: <20140305134201.GR34696@mdounin.ru> Hello! On Wed, Mar 05, 2014 at 07:46:38AM -0500, luckyswede wrote: > Cool, that works! > I don't understand why though, why is the uri urldecoded in my example but > not in your example? 
In your config, URI is defined using variables, and your are responsible for proper urlencoding. Config suggested by me uses proxy_pass without URI component (which uses request URI, even with variables), and a rewrite to change request URI - hence nginx does appropriate urlencoding. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Mar 5 14:56:05 2014 From: nginx-forum at nginx.us (greekduke) Date: Wed, 05 Mar 2014 09:56:05 -0500 Subject: Worker dies with a segfault error In-Reply-To: <20140304161008.GG34696@mdounin.ru> References: <20140304161008.GG34696@mdounin.ru> Message-ID: <3603f98a4f5b41ff103bdb1644c486c9.NginxMailingListEnglish@forum.nginx.org> Hello, I suppose you are correct. I have tried to recreate the problem without the two modules and I haven't succeeded yet. Anyway I need the modules so I can live with some segmentation faults. From the part of debug log below I think that the worker (5201) before dying closes properly the connection. In all the cases I have a similar behaviour. Can you comment on that? 2014/03/05 15:22:22 [debug] 5201#0: *13655 cleanup http upstream request: "/economy" 2014/03/05 15:22:22 [debug] 5201#0: *13655 finalize http upstream request: -4 2014/03/05 15:22:22 [debug] 5201#0: *13655 finalize http proxy request 2014/03/05 15:22:22 [debug] 5201#0: *13655 free rr peer 1 0 2014/03/05 15:22:22 [debug] 5201#0: *13655 close http upstream connection: 16 2014/03/05 15:22:22 [debug] 5201#0: *13655 free: 0000000000E834D0, unused: 48 2014/03/05 15:22:22 [debug] 5201#0: *13655 event timer del: 16: 1394025802596 2014/03/05 15:22:22 [debug] 5201#0: *13655 reusable connection: 0 2014/03/05 15:22:22 [debug] 5201#0: *13655 http finalize request: -4, "/economy?" a:1, c:1 2014/03/05 15:22:22 [debug] 5201#0: *13655 http request count:1 blk:0 2014/03/05 15:22:22 [debug] 5201#0: *13655 http close request 2014/03/05 15:22:22 [debug] 5201#0: *13655 http log handler 2014/03/05 15:22:22 [debug] 5201#0: *13655 free: 0000000000ED1B70 2014/03/05 15:22:22 [debug] 5201#0: *13655 free: 0000000000E7B2F0, unused: 1 2014/03/05 15:22:22 [debug] 5201#0: *13655 free: 0000000000E7C300, unused: 0 2014/03/05 15:22:22 [debug] 5201#0: *13655 free: 0000000000ED0B60, unused: 72 2014/03/05 15:22:22 [debug] 5201#0: *13655 free: 0000000000ED2B80, unused: 3614 2014/03/05 15:22:22 [debug] 5201#0: *13655 close http connection: 3 2014/03/05 15:22:22 [debug] 5201#0: *13655 reusable connection: 0 2014/03/05 15:22:22 [debug] 5201#0: *13655 free: 0000000000EC0B30 2014/03/05 15:22:22 [debug] 5201#0: *13655 free: 0000000000E768F0, unused: 0 2014/03/05 15:22:22 [debug] 5201#0: *13655 free: 0000000000E76A50, unused: 144 2014/03/05 15:22:22 [debug] 5201#0: *13655 http log handler 2014/03/05 15:22:26 [debug] 5210#0: *13748 http cl:-1 max:314572800 2014/03/05 15:22:26 [debug] 5210#0: *13748 rewrite phase: 2 2014/03/05 15:22:26 [debug] 5210#0: *13748 http subrequest "/eval_15287696/?_rdr" 2014/03/05 15:22:26 [debug] 5210#0: *13748 http posted request: "/eval_15287696/?_rdr" Kostas Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247886,248139#msg-248139 From jmrepetti at gmail.com Wed Mar 5 14:57:54 2014 From: jmrepetti at gmail.com (=?ISO-8859-1?Q?Juan_Mat=EDas?=) Date: Wed, 5 Mar 2014 15:57:54 +0100 Subject: NGINX proxy, 502 error while SSL handshaking to upstream Message-ID: Hello! 
On Tue, Feb 25, 2014 at 04:34:34PM +0100, Juan Mat?as wrote: >* Hello everyone, I'm new here and this my first post in this mailing list, *> >* Maybe this is a frequently answered question but I could't find a solution. *>* Maybe is a "layer 8" issue. *> >* Right now, I have a Nginx(1.0.8) proxy running on Ubuntu 10.04 32bits, *>* OpenSSL 0.9.8 doing a https upstream on port 33195. Here is a piece of the *>* nginx.conf file: *> >* ...... *>* location /external_services { *>* proxy_pass https://x.x.x.x:33195/external_service; *>* allow x.x.x.x; *>* deny all; *>* } *>* ...... *> > >* It is working, but I need to migrate this proxy to a new server. This new *>* server runs Ubuntu 12.04, OpenSSL 1.0.1 and Nginx 1.5.10. *> >* This server receive an http://myproxy/external_services request and proxy *>* it to https://x.x.x.x:33195/external_service; (http to https) *> >* When I try to access http://myproxy/external_services on the new server, I *>* got a 502 error and I see this message in error.log : *> >* "peer closed connection in SSL handshake while SSL handshaking to *>* upstream" *> >* I found that I can connect(from the proxy server) to *>* https://x.x.x.x:33195/external_service using openssl, doing this: *> >* $ openssl s_client -connect https://x.x.x.x:33195/external_service-no_tls1_1 *> >* I tried to disable TLSv1.1 in Nginx using the directive: ssl_protocols *>* SSLv3 TLSv1; but nothing change. * You have to use proxy_ssl_protocols, not ssl_protocols. See http://nginx.org/r/proxy_ssl_protocols. The proxy_ssl_ciphers directive may help, too, depending on what exactly triggers the problem on your backend. -- Maxim Douninhttp://nginx.org/ Thanks Maxim Dounin for the answer I tried that but did not work. I tried using directives on nginx config file but the issue continue. I can't ensure but looks like Nginx was using TLSv1.1 or 1.2 anyway and the SSL handshake failed. And I didn't find a way to disable this version of the protocol. So I fixed the problem compiling nginx(1.0.15) from source using openSSL 0.9.8e. This version of OpenSSL doesnt support TLSv1.1. And that's works. I have no option, the provider that I'm dealing with doesn't support TLSv1.1 and they are not going to update his service. Thanks, Mat?as. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Wed Mar 5 15:02:45 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 5 Mar 2014 10:02:45 -0500 Subject: [nginx-announce] nginx-1.4.6 In-Reply-To: <20140304152358.GY34696@mdounin.ru> References: <20140304152358.GY34696@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.4.6 for Windows http://goo.gl/kNh5PS (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Mar 4, 2014 at 10:23 AM, Maxim Dounin wrote: > Changes with nginx 1.4.6 04 Mar > 2014 > > *) Bugfix: the "client_max_body_size" directive might not work when > reading a request body using chunked transfer encoding; the bug had > appeared in 1.3.9. > Thanks to Lucas Molas. 
> > *) Bugfix: a segmentation fault might occur in a worker process when > proxying WebSocket connections. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Wed Mar 5 15:14:13 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 5 Mar 2014 10:14:13 -0500 Subject: nginx-1.5.11 In-Reply-To: <20140304152318.GT34696@mdounin.ru> References: <20140304152318.GT34696@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.5.11 for Windows http://goo.gl/S5nlT1 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Mar 4, 2014 at 10:23 AM, Maxim Dounin wrote: > Changes with nginx 1.5.11 04 Mar > 2014 > > *) Security: memory corruption might occur in a worker process on > 32-bit > platforms while handling a specially crafted request by > ngx_http_spdy_module, potentially resulting in arbitrary code > execution (CVE-2014-0088); the bug had appeared in 1.5.10. > Thanks to Lucas Molas, researcher at Programa STIC, Fundaci?n Dr. > Manuel Sadosky, Buenos Aires, Argentina. > > *) Feature: the $ssl_session_reused variable. > > *) Bugfix: the "client_max_body_size" directive might not work when > reading a request body using chunked transfer encoding; the bug had > appeared in 1.3.9. > Thanks to Lucas Molas. > > *) Bugfix: a segmentation fault might occur in a worker process when > proxying WebSocket connections. > > *) Bugfix: a segmentation fault might occur in a worker process if the > ngx_http_spdy_module was used on 32-bit platforms; the bug had > appeared in 1.5.10. > > *) Bugfix: the $upstream_status variable might contain wrong data if > the > "proxy_cache_use_stale" or "proxy_cache_revalidate" directives were > used. > Thanks to Piotr Sikora. > > *) Bugfix: a segmentation fault might occur in a worker process if > errors with code 400 were redirected to a named location using the > "error_page" directive. > > *) Bugfix: nginx/Windows could not be built with Visual Studio 2013. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Mar 5 16:02:18 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 5 Mar 2014 20:02:18 +0400 Subject: Worker dies with a segfault error In-Reply-To: <3603f98a4f5b41ff103bdb1644c486c9.NginxMailingListEnglish@forum.nginx.org> References: <20140304161008.GG34696@mdounin.ru> <3603f98a4f5b41ff103bdb1644c486c9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140305160218.GT34696@mdounin.ru> Hello! On Wed, Mar 05, 2014 at 09:56:05AM -0500, greekduke wrote: > Hello, > > I suppose you are correct. 
I have tried to recreate the problem without the > two modules and I haven't succeeded yet. Anyway I need the modules so I can > live with some segmentation faults. From the part of debug log below I think > that the worker (5201) before dying closes properly the connection. In all > the cases I have a similar behaviour. Can you comment on that? As already suggested, the problem is likely wrong request reference counting (likely in one of 3rd party modules you use) - nginx frees a request, and then dies trying to do something with the request. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Mar 5 16:39:06 2014 From: nginx-forum at nginx.us (arunh) Date: Wed, 05 Mar 2014 11:39:06 -0500 Subject: Nginx postgres problem In-Reply-To: References: Message-ID: <62a3064d5d803c436d0acad5fb2b8bda.NginxMailingListEnglish@forum.nginx.org> Hello Yichun, Thank you very much for the reply. I have used ngx.location.capture for extracting the values from postgres database as suggested by you. I had to make a series of postgres calls so I used content_by_lua along by sharing the varialbes as I had to query tables from multiple databases. I have now got the needed nodeIP and port ultimately. But I am not able to redirect the url. I have used rewrite_by_lua in the end to redirect to the new url, but it does not seem to be happening. If I use http rewrite module again the values of the nodeIP and port would be empty. Could you please suggest how to redirect to the new url with the extracted nodeIP and port? Thanks in advance, Arun CODE: location /postgresrewrite { rewrite ^ "http://$nodeIP:$port/$inputURI"; } location ~ ^/tenantservices/(.*)\/(.*)\/(.*)\/(.*) { set $inputURI $request_uri; set $tenantURI $1; set $serviceName $2; set $userID $3; set $endPointName $4; set $id ''; set $tenantID ''; set $instanceUUID ''; set $nodeIP '';d set $port ''; set $redirectURL ''; content_by_lua ' res = ngx.location.capture( "/postgrestenantkey", { share_all_vars = true } ); ngx.var.id = res.body; ngx.print(res.body); ngx.print(ngx.var.id); res1=ngx.location.capture( "/postgrestenantid", { share_all_vars = true } ); ngx.var.tenantID = res1.body; ngx.print(res1.body); ngx.print(ngx.var.tenantID); res2=ngx.location.capture( "/postgresinstanceid", { share_all_vars = true } ); ngx.var.instanceUUID = res2.body; ngx.print(res2.body); ngx.print(ngx.var.instanceUUID); res3=ngx.location.capture( "/postgresnodeip", { share_all_vars = true } ); ngx.var.nodeIP = res3.body; ngx.print(res3.body); ngx.print(ngx.var.nodeIP); res4=ngx.location.capture( "/postgresport", { share_all_vars = true } ); ngx.var.port = res4.body; ngx.print(res4.body); ngx.print(ngx.var.port); '; rewrite_by_lua ' res5=ngx.location.capture( "/postgresrewrite", { share_all_vars = true } ); '; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248077,248147#msg-248147 From stl at wiredrive.com Wed Mar 5 19:49:24 2014 From: stl at wiredrive.com (Scott Larson) Date: Wed, 5 Mar 2014 11:49:24 -0800 Subject: OCSP, ssl_trusted_certificate, and ssl_stapling_verify Message-ID: In setting up OCSP stapling on 1.5.10 I've found it behaving in a way which is opposite to what I perceive is documented. There it states that the contents of ssl_trusted_certificate are not sent to the client. However when I enable ssl_stapling_verify, which requires the inclusion of in this case the GeoTrust root certificate for the OCSP response to work, this root certificate is included in the response back to the client. 
Am I just interpreting the documentation incorrectly? It's not a dire issue, simply unexpected, and when including the root cert the SSL handshake increases from 4434 bytes to 5293. Scott Larson, Systems Administrator, Wiredrive/LA, 310 823 8238 ext. 1106, 310 943 2078 fax, www.wiredrive.com, www.twitter.com/wiredrive, www.facebook.com/wiredrive -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Wed Mar 5 19:58:01 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 5 Mar 2014 11:58:01 -0800 Subject: Nginx postgres problem In-Reply-To: <62a3064d5d803c436d0acad5fb2b8bda.NginxMailingListEnglish@forum.nginx.org> References: <62a3064d5d803c436d0acad5fb2b8bda.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Wed, Mar 5, 2014 at 8:39 AM, arunh wrote: > I had to make a series of postgres calls so I used content_by_lua along with > sharing the variables as I had to query tables from multiple databases. > No, you should use rewrite_by_lua exclusively for all the ngx.location.capture calls here, I think. > I have now got the needed nodeIP and port ultimately. But I am not able to > redirect the url. > I have used rewrite_by_lua in the end to redirect to the new url, but it > does not seem to be happening. rewrite_by_lua always runs before content_by_lua, because NGINX's "rewrite" phase always runs before the "content" phase. Just put all your subrequest calls in the same code chunk as your redirect Lua code in rewrite_by_lua. What kind of redirect do you really need? 301? 302? Or an internal redirect? See https://github.com/chaoslawful/lua-nginx-module#ngxredirect https://github.com/chaoslawful/lua-nginx-module#ngxexec Speaking of running order, I suggest you read my NGINX tutorials to avoid such confusion: http://openresty.org/download/agentzh-nginx-tutorials-en.html BTW, you're welcome to join the openresty-en mailing list and discuss such things there: https://groups.google.com/group/openresty-en You might get faster and more responses on that list. Best regards, -agentzh From agentzh at gmail.com Wed Mar 5 20:03:51 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 5 Mar 2014 12:03:51 -0800 Subject: Nginx postgres problem In-Reply-To: <62a3064d5d803c436d0acad5fb2b8bda.NginxMailingListEnglish@forum.nginx.org> References: <62a3064d5d803c436d0acad5fb2b8bda.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Wed, Mar 5, 2014 at 8:39 AM, arunh wrote: > location /postgresrewrite > { > rewrite ^ "http://$nodeIP:$port/$inputURI"; > } > [...] > rewrite_by_lua ' > res5=ngx.location.capture( > "/postgresrewrite", { share_all_vars = true } ); > '; > Forgot to mention that redirects in a subrequest won't affect its parent request. They're separate requests anyway. Try thinking about it :) Use either ngx.req.set_uri() or ngx.exec() in rewrite_by_lua. (If you use content_by_lua, then you can only use ngx.exec for internal redirects.) See https://github.com/chaoslawful/lua-nginx-module#ngxreqset_uri https://github.com/chaoslawful/lua-nginx-module#ngxexec Also, it is a bad idea to enable "share_all_vars" due to potential bad side-effects that lead to weird issues that are extremely hard to debug. Just share the NGINX variables you actually need.
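For illustration only, here is a rough, untested sketch of the rewrite_by_lua chunk I mean, to be dropped into your existing "location ~ ^/tenantservices/..." block. It reuses your /postgresnodeip and /postgresport locations and your $tenantURI variable; pass whatever variables those locations actually read, and you may need to trim trailing newlines from the query output:

    rewrite_by_lua '
        -- pass only the variables the subrequest locations really need
        local res = ngx.location.capture("/postgresnodeip",
                        { vars = { tenantURI = ngx.var.tenantURI } })
        local res2 = ngx.location.capture("/postgresport",
                        { vars = { tenantURI = ngx.var.tenantURI } })
        if res.status ~= 200 or res2.status ~= 200 then
            return ngx.exit(500)
        end
        -- 302 by default; pass ngx.HTTP_MOVED_PERMANENTLY as a second argument for a 301
        return ngx.redirect("http://" .. res.body .. ":" .. res2.body
                            .. ngx.var.request_uri)
    ';
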
Best regards, -agentzh From agentzh at gmail.com Wed Mar 5 20:06:57 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 5 Mar 2014 12:06:57 -0800 Subject: Worker dies with a segfault error In-Reply-To: <3603f98a4f5b41ff103bdb1644c486c9.NginxMailingListEnglish@forum.nginx.org> References: <20140304161008.GG34696@mdounin.ru> <3603f98a4f5b41ff103bdb1644c486c9.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Wed, Mar 5, 2014 at 6:56 AM, greekduke wrote: > I suppose you are correct. I have tried to recreate the problem without the > two modules and I haven't succeeded yet. The ngx_eval module is known to have issues. Can you reproduce the crash by replacing ngx_eval with ngx_lua? If you still can, please post the details (bt full ouptuts in gdb, for example) to the openresty-en mailing list instead: https://groups.google.com/group/openresty-en Both ngx_lua and ngx_postgres are supported modules in the OpenResty bundle so you're welcome to ask there. Thanks! -agentzh From contact at jpluscplusm.com Wed Mar 5 21:03:22 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 5 Mar 2014 21:03:22 +0000 Subject: Some Issues with Configuration In-Reply-To: <2077506.vnlZ8baFKa@sheflug.sheflug.net> References: <2077506.vnlZ8baFKa@sheflug.sheflug.net> Message-ID: On 5 March 2014 11:13, Richard Ibbotson wrote: > Hi > > I've been having a few problems with configuration of NGINX. No > problems with running Apache or Lighttpd on my own Linux box but I've > been scratching my head over NGINX. > > When I've compiled from source or used the vanilla Ubuntu package I > find that I can download the front page from my box which is > http://sleepypenguin.homelinux.org. This is an HTML page. I can't > download other pages which are serveral instances of Wordpress. Such > as http://sleepypenguin.homelinux.org/blog. What I get instead is > that my web browser (I have tried several web browsers in different > locations) asks me to download a BIN file instead of a web page. > > I'm sure that someone has had this problem when configuring NGINX. Can > anyone point me in the right direction ? I thought it might be > something to do with the location statment. Such as .... > > /etc/nginx/common/locations.conf > > # Blog > location = /blog { > allow all; > access_log off; > log_not_found off; > } > > > But.. No... That's not it either. Nginx doesn't execute PHP. It passes each request destined for your blog (i.e. the locations you decide are "your blog") to another process that runs/is-running the PHP. Take a look here, and it might help: http://wiki.nginx.org/WordPress If you're still stuck, please have a google before asking more questions here. There are many, many, *many* articles out there, explaining how to get Nginx+PHP/WordPress working, and the config you posted above strongly suggests you've not read any of them yet! The Internet is your friend ... ;-) Jonathan From richard.ibbotson at gmail.com Wed Mar 5 21:18:04 2014 From: richard.ibbotson at gmail.com (Richard Ibbotson) Date: Wed, 05 Mar 2014 21:18:04 +0000 Subject: Some Issues with Configuration In-Reply-To: References: <2077506.vnlZ8baFKa@sheflug.sheflug.net> Message-ID: <5716412.BYK95o1Wmg@sheflug.sheflug.net> On Wednesday 05 Mar 2014 21:03:22 Jonathan Matthews wrote: > Nginx doesn't execute PHP. It passes each request destined for your > blog (i.e. the locations you decide are "your blog") to another > process that runs/is-running the PHP. 
Take a look here, and it might > help: >> http://wiki.nginx.org/WordPress<< I'll have a read through this again. The part that might work is... location /blog { try_files $uri $uri/ /blog/index.php?$args; } location ~ \.php$ { fastcgi_split_path_info ^(/blog)(/.*)$; } But... I have not just /blog but others. Such as /journalism and others. Do I just put in another location for that ? Such as .... location /journalism { try_files $uri $uri/ /journalism/index.php?$args; } location ~ \.php$ { fastcgi_split_path_info ^(/journalism)(/.*)$; } > If you're still stuck, please have a google before asking more > questions here. There are many, many, *many* articles out there, > explaining how to get Nginx+PHP/WordPress working, and the config > you posted above strongly suggests you've not read any of them yet! > The Internet is your friend ... ;-) I spent a month doing that. Been going to ApacheCon since 2001. Done most international GNU/Linux and BSD conferences. Seen a few things. I'm a bit lost on NGINX configuration. Something new to learn :) -- Richard www.sheflug.org.uk From contact at jpluscplusm.com Wed Mar 5 21:46:42 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 5 Mar 2014 21:46:42 +0000 Subject: Some Issues with Configuration In-Reply-To: <5716412.BYK95o1Wmg@sheflug.sheflug.net> References: <2077506.vnlZ8baFKa@sheflug.sheflug.net> <5716412.BYK95o1Wmg@sheflug.sheflug.net> Message-ID: On 5 March 2014 21:18, Richard Ibbotson wrote: > On Wednesday 05 Mar 2014 21:03:22 Jonathan Matthews wrote: >> Nginx doesn't execute PHP. It passes each request destined for your >> blog (i.e. the locations you decide are "your blog") to another >> process that runs/is-running the PHP. Take a look here, and it might >> help: > >>> http://wiki.nginx.org/WordPress<< > > I'll have a read through this again. The part that might work is... > > location /blog { > try_files $uri $uri/ /blog/index.php?$args; > } > > location ~ \.php$ { > fastcgi_split_path_info ^(/blog)(/.*)$; > } Your indenting is/might-be misleading. Are you intending to encapsulate the PHP regex inside the /blog prefix, as your indenting suggests? 'Cos your braces aren't doing that. > But... I have not just /blog but others. Such as /journalism and > others. Do I just put in another location for that ? Such as .... > > location /journalism { > try_files $uri $uri/ /journalism/index.php?$args; > } > > location ~ \.php$ { > fastcgi_split_path_info ^(/journalism)(/.*)$; > } In general, each request is handled in one location and one location *only* in nginx. There are some directives which inherit down config levels (http{}, server{}, location{}, nested location {}) but they're not the ones you'll (probably) be caring about here. If you can, pop your blog on its own FQDN so you can seperate out traffic into different server{} stanzas. That works well as a base level of distinction between traffic which should hit PHP and that which shouldn't. >> If you're still stuck, please have a google before asking more >> questions here. There are many, many, *many* articles out there, >> explaining how to get Nginx+PHP/WordPress working, and the config >> you posted above strongly suggests you've not read any of them yet! >> The Internet is your friend ... ;-) > > I spent a month doing that. Been going to ApacheCon since 2001. Done > most international GNU/Linux and BSD conferences. Seen a few things. > I'm a bit lost on NGINX configuration. 
Something new to learn :) Apologies - I assumed from the config you posted you'd not done any reading :-) Nothing you've posted yet specifies the method you're using to talk to your PHP-executing process. Perhaps you should post a more complete config and let us know what you've tried already ... J From nginx-forum at nginx.us Wed Mar 5 21:52:29 2014 From: nginx-forum at nginx.us (talkingnews) Date: Wed, 05 Mar 2014 16:52:29 -0500 Subject: Confusion over apparently conflicting advice in guide/wiki/examples In-Reply-To: <8f6cd659438bc68ffbbca71380032f0e.NginxMailingListEnglish@forum.nginx.org> References: <8f6cd659438bc68ffbbca71380032f0e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8b5fd5b7caafd188286547a2f93bb481.NginxMailingListEnglish@forum.nginx.org> Thanks for the replies, and especially to Francis for clearly differentiating the aspects and responsibilities of the NGi?X "flow", if I can call it that. The only remaining confusion comes with the provided nginx config files which appear to contradict best practice (as well as setting up parameters which don't even exist!) , but I now realise and acknowledge that this will be an issue for the Ubuntu NGi?X package maintainer, not NGi?X. I think what I might do is to do another complete brand new install of NGi?X and PyroCMS, see what the most minimal config changes I can make from the default are for my application, test the hell out of it, and tentatively make changes to the Wiki page for the Pyrocms config. Thanks again. PS - as I post this, I see below that today has set a new record for numbers of users and guests on the forum. Excellent! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248051,248160#msg-248160 From nginx-forum at nginx.us Wed Mar 5 22:21:24 2014 From: nginx-forum at nginx.us (arunh) Date: Wed, 05 Mar 2014 17:21:24 -0500 Subject: Nginx postgres problem In-Reply-To: References: Message-ID: Hello Yichun, The use case I am trying to test is the client sends a request to nginx server with some parameters say "/a/b/c/d": http://NginxHost/a/b/c/d Depending on the parameters a,b,c and d I will get the IP and port of the destination server (by communicating with postgres) where the request must be redirected to ie the new url is of the form: http://IP:port/a/b/c/d. Using both ngx.redirect and nginx.exec() are giving errors. I tried to redirect the url to "www.google.com" using ngx.redirect inside rewrite_by_lua. Even I get the same error. You had mentioned that " redirects in a subrequest won't affect its parent requests." Does that mean that I cannot change the url inside the rewrite_by_lua module? I cannot use http rewrite module as this will be executed before the rewrite_by_lua code is executed. Please suggest what can be done to redirect the parent url to new url. 
Thank you, Arun ERROR LOG: CODE: location /postgresrewrite { rewrite ^ "http://www.google.co.in"; } ngx.exec("/postgresrewrite"); LOG: 2014/03/05 22:59:42 [error] 8129#0: *374 lua entry thread aborted: runtime error: [string "rewrite_by_lua"]:32: attempt to call ngx.exec after sending out response headers stack traceback: coroutine 0: [C]: in function 'exec' [string "rewrite_by_lua"]:32: in function <[string "rewrite_by_lua"]:1> while sending to client, client: xx.xx.xx.xx, server: localhost, request: "GET /tenantservices/ArunTenant.com/Service1/13717e3b-c32d-4172-a316-74857b1237e1/httpSoapProvider HTTP/1.1", host: "yy.yy.yy.yy" CODE: return ngx.redirect("http://www.google.co.in"); LOG: 2014/03/05 22:23:05 [error] 5397#0: *366 lua entry thread aborted: runtime error: [string "rewrite_by_lua"]:32: attempt to call ngx.redirect after sending out the headers stack traceback: coroutine 0: [C]: in function 'redirect' [string "rewrite_by_lua"]:32: in function <[string "rewrite_by_lua"]:1> while sending to client, client: xx.xx.xx.xx, server: localhost, request: "GET /tenantservices/ArunTenant.com/Service1/13717e3b-c32d-4172-a316-74857b1237e1/httpSoapProvider HTTP/1.1", host: "yy.yy.yy.yy" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248077,248161#msg-248161 From agentzh at gmail.com Wed Mar 5 22:32:50 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 5 Mar 2014 14:32:50 -0800 Subject: Nginx postgres problem In-Reply-To: References: Message-ID: Hello! On Wed, Mar 5, 2014 at 2:21 PM, arunh wrote: > Depending on the parameters a,b,c and d I will get the IP and port of the > destination server (by communicating with postgres) where the request must > be redirected to ie the new url is of the form: > http://IP:port/a/b/c/d. > You need either a 301/302 redirect or a proxy_pass to go to external target. Below is an example: rewrite_by_lua ' local new_target = "http:/IP:port/a/b/c/d" return ngx.redirect(new_target) '; > Using both ngx.redirect and nginx.exec() are giving errors. > I tried to redirect the url to "www.google.com" using ngx.redirect inside > rewrite_by_lua. Even I get the same error. > Please provide a minimal but still complete example that can reproduce the issue. > You had mentioned that " redirects in a subrequest won't affect its parent > requests." Does that mean that I cannot change the url inside the > rewrite_by_lua module? > You created a subrequest with ngx.location.capture and you used the "rewrite" directive in the location targeted by the subrequest in the hope that it will change the parent request calling ngx.location.capture. What you need is to initiate redirects in your *main* request. So you should not use ngx.location.capture for a redirect. I never say you cannot perform redirect in rewrite_by_lua. Do not get me wrong. > I cannot use http rewrite module as this will be executed before the > rewrite_by_lua code is executed. > No, as I've said, you should use the Lua API in rewrite_by_lua to perform redirects. > 2014/03/05 22:59:42 [error] 8129#0: *374 lua entry thread aborted: runtime > error: [string "rewrite_by_lua"]:32: attempt to call ngx.exec after sending > out response headers The error clearly indicates the problem. You should not send the response header before doing redirects (note that, ngx.say, ngx.print, ngx.flush all trigger sending out the response header automatically, so avoid them). 
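If you would rather have nginx proxy to the computed node itself instead of sending the client a 302, an untested sketch of that variant looks like this (again, nothing may write to the client during the rewrite phase; the location and subrequest names are just the ones from your config, and the captured bodies may need trimming):

    location /tenantservices/ {
        set $backend "";
        rewrite_by_lua '
            -- look up the node, then let proxy_pass do the rest
            local node_res = ngx.location.capture("/postgresnodeip")
            local port_res = ngx.location.capture("/postgresport")
            -- trim trailing newlines from the query output if needed
            ngx.var.backend = node_res.body .. ":" .. port_res.body
        ';
        # no resolver is needed as long as $backend holds an IP address
        proxy_pass http://$backend$request_uri;
    }
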
Regards, -agentzh From nginx-forum at nginx.us Wed Mar 5 22:47:47 2014 From: nginx-forum at nginx.us (PetrHolik) Date: Wed, 05 Mar 2014 17:47:47 -0500 Subject: Memory usage doubles on reload Message-ID: <3b769829e7d51497db81bd29a7af643d.NginxMailingListEnglish@forum.nginx.org> Hello we are running nginx 1.2.7 with this in conf: output_buffers 5 5m; sendfile off; That works well, BUT if I reload server configuration with nginx -s reload Memory consumptions for few hours(clients use long lived(few hours) tcp connections). Is this behavior correct? Can we avoid this? We have to had as twice much RAM to be able to restart nginx under load. Sincerely Petr Holik Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248163,248163#msg-248163 From francis at daoine.org Wed Mar 5 23:31:27 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 5 Mar 2014 23:31:27 +0000 Subject: Confusion over apparently conflicting advice in guide/wiki/examples In-Reply-To: <8b5fd5b7caafd188286547a2f93bb481.NginxMailingListEnglish@forum.nginx.org> References: <8f6cd659438bc68ffbbca71380032f0e.NginxMailingListEnglish@forum.nginx.org> <8b5fd5b7caafd188286547a2f93bb481.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140305233127.GX29880@craic.sysops.org> On Wed, Mar 05, 2014 at 04:52:29PM -0500, talkingnews wrote: Hi there, just to avoid sending package maintainers on a wild goose chase... > The only remaining confusion comes with the provided nginx config files > which appear to contradict best practice To understand that, you should probably find what "best practice" actually is, and (more importantly) the specific set of circumstances when this set of best practice applies. Pretty much all guidelines have exceptions. > (as well as setting up parameters > which don't even exist!) I'm not sure what you mean by that. In the context of fastcgi / php, if you set "fastcgi_param HELLO $request_uri;", then when you look in your php $_SERVER (or wherever your fastcgi server presents the information), you will have a parameter HELLO with the value of the incoming request. If a fastcgi_param is set (explicitly or implicitly), it exists. If it is not, it does not. The "defaults", either from Ubuntu or nginx, are presumably "a set that the author thinks are probably frequently useful". You can always set exactly the values that your fastcgi server, your version of php, and your application, require, and remove all of the others. And then do it again when you add another application, or change any of your stack. Good luck with it, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Thu Mar 6 03:48:35 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 6 Mar 2014 07:48:35 +0400 Subject: Memory usage doubles on reload In-Reply-To: <3b769829e7d51497db81bd29a7af643d.NginxMailingListEnglish@forum.nginx.org> References: <3b769829e7d51497db81bd29a7af643d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140306034834.GY34696@mdounin.ru> Hello! On Wed, Mar 05, 2014 at 05:47:47PM -0500, PetrHolik wrote: > Hello we are running nginx 1.2.7 with this in conf: > output_buffers 5 5m; > sendfile off; > > That works well, BUT if I reload server configuration with nginx -s reload > Memory consumptions for few hours(clients use long lived(few hours) tcp > connections). Is this behavior correct? Can we avoid this? We have to had as > twice much RAM to be able to restart nginx under load. 
On configuration reload nginx spawns new worker processes, and gracefully shuts down old worker processes (and gracefull shutdown may take a while, especially if there are long-lived requests). That is, behaviour you observe is expected. Some more details can be found here: http://nginx.org/en/docs/control.html -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Mar 6 04:31:22 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 6 Mar 2014 08:31:22 +0400 Subject: OCSP, ssl_trusted_certificate, and ssl_stapling_verify In-Reply-To: References: Message-ID: <20140306043122.GB34696@mdounin.ru> Hello! On Wed, Mar 05, 2014 at 11:49:24AM -0800, Scott Larson wrote: > In setting up OCSP stapling on 1.5.10 I've found it behaving in a way > which is opposite to what I perceive is documented. There it states that > the contents of ssl_trusted_certificate are not sent to the client. However > when I enable ssl_stapling_verify, which requires the inclusion of in this > case the GeoTrust root certificate for the OCSP response to work, this root > certificate is included in the response back to the client. > Am I just interpreting the documentation incorrectly? It's not a dire > issue, simply unexpected, and when including the root cert the SSL > handshake increases from 4434 bytes to 5293. The difference between ssl_trusted_certificate and ssl_client_certificate is that latter is sent to a client in a certificate request, in a list of distinguished names of accepted certifcate authorities, see here: http://tools.ietf.org/html/rfc5246#section-7.4.4 What you see is likely auto chain building as done by OpenSSL if certificate chain isn't explicitly specified. It shouldn't happen as long as there is at least one intermediate cert in ssl_certificate file. -- Maxim Dounin http://nginx.org/ From steve at greengecko.co.nz Thu Mar 6 05:19:31 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Thu, 06 Mar 2014 18:19:31 +1300 Subject: Some Issues with Configuration In-Reply-To: <5716412.BYK95o1Wmg@sheflug.sheflug.net> References: <2077506.vnlZ8baFKa@sheflug.sheflug.net> <5716412.BYK95o1Wmg@sheflug.sheflug.net> Message-ID: <1394083171.3093.174.camel@steve-new> Hi! On Wed, 2014-03-05 at 21:18 +0000, Richard Ibbotson wrote: > On Wednesday 05 Mar 2014 21:03:22 Jonathan Matthews wrote: > > Nginx doesn't execute PHP. It passes each request destined for your > > blog (i.e. the locations you decide are "your blog") to another > > process that runs/is-running the PHP. Take a look here, and it might > > help: > > >> http://wiki.nginx.org/WordPress<< > > I'll have a read through this again. The part that might work is... > > location /blog { > try_files $uri $uri/ /blog/index.php?$args; > } > > location ~ \.php$ { > fastcgi_split_path_info ^(/blog)(/.*)$; > } > > But... I have not just /blog but others. Such as /journalism and > others. Do I just put in another location for that ? Such as .... > > location /journalism { > try_files $uri $uri/ /journalism/index.php?$args; > } > > location ~ \.php$ { > fastcgi_split_path_info ^(/journalism)(/.*)$; > } > > > If you're still stuck, please have a google before asking more > > questions here. There are many, many, *many* articles out there, > > explaining how to get Nginx+PHP/WordPress working, and the config > > you posted above strongly suggests you've not read any of them yet! > > The Internet is your friend ... ;-) > > I spent a month doing that. Been going to ApacheCon since 2001. Done > most international GNU/Linux and BSD conferences. 
Seen a few things. > I'm a bit lost on NGINX configuration. Something new to learn :) > Nginx is just acting as a switch here, so in the location block for .php files, you need to hand over to php for further processing. This is usually done via fastcgi to php-fpm - especially for the performance gains after using APC, etc... You can do this in one of two ways: I use the more convoluted one... in nginx.conf, define a php backend: http { ... upstream backend { server unix:/var/run/php5-fpm.sock; } } ( ensure that the php-fpm process is listening on that socket - you can also use ports ) in the site config pass them over: location ~ \.php$ { fastcgi_split_path_info ^(/journalism)(/.*)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass backend; } That should approximately do it - depending on the contents of /etc/nginx/fastcgi-params which I modified years ago! hth, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at nginx.us Thu Mar 6 11:21:22 2014 From: nginx-forum at nginx.us (sephreph) Date: Thu, 06 Mar 2014 06:21:22 -0500 Subject: Advice on max_ranges for specific location block Message-ID: Hi, Hopefully this question isn't too basic, I just want to check if I'm missing something obvious. I'm setting up a basic nginx installation with php-fpm behind it running on 127.0.0.1:9000 - that's working fine. I'm using max_ranges set to 5 globally in the server block (example below), but I have a specific PHP script (/free.php) that I want max_ranges set to 0 for. I'm not sure how to achieve this. max_ranges needs to be in a http, server or location block, so would the best solution be to copy the "location ~ \.php" block and call it "location ~* /free.php" and just add the "max_ranges 0" setting in there? I was hoping that location blocks cascade so I could overwrite that setting and then the request would fall into the .php block but I don't think that's actually the case? A basic example of what I'm currently using is: server { listen 80; server_name localhost; max_ranges 5; location / { root /var/www/example.com; index index.php; } location ~ \.php { set $script $uri; set $path_info ""; if ($uri ~ "^(.+\.php)(/.*)") { set $script $1; set $path_info $2; } client_body_temp_path /tmp 1; client_max_body_size 2048m; fastcgi_pass 127.0.0.1:9000; fastcgi_read_timeout 300; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/example.com$script; include /etc/nginx/fastcgi_params; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248175,248175#msg-248175 From chencw1982 at gmail.com Thu Mar 6 11:24:04 2014 From: chencw1982 at gmail.com (Chuanwen Chen) Date: Thu, 6 Mar 2014 19:24:04 +0800 Subject: [ANNOUNCE] Tengine-2.0.1 is released Message-ID: Hi folks, We are glad to announce that Tengine-2.0.1 (development version) has been released. You can either checkout the source code from github: https://github.com/alibaba/tengine or download the tarball directly: http://tengine.taobao.org/download/tengine-2.0.1.tar.gz The changelog is as follows: *) Feature: now non-buffering request body mechanism supports chunked input. (yaoweibin) *) Feature: trim module added more rules, and now can be enabled according to variables. (taoyuanyuan) *) Feature: resolver can be configured automatically from /etc/resolv.conf. (lifeibo, yaoweibin) *) Feature: added variables starting with "$ascii_", which can represent arbitrary ASCII characters. 
(yzprofile) *) Feature: added a new directive "image_filter_crop_offset". (lax) *) Change: merged changes between nginx-1.4.4 and nginx-1.4.6. (chobits, cfsego) *) Bugfix: upstream health check module failed occasionally when using keep-alive connections. (lilbedwin) *) Bugfix: nginx crashed when upstream rejected nginx WebSocket connection. http://trac.nginx.org/nginx/ticket/503 (Hao Chen) *) Bugfix: reduce nginx memory consumption when processing large files. (cfsego) *) Bugfix: disabled redirects to named locations if URI is not set. For those who don't know Tengine, it is a free and open source distribution of Nginx with some advanced features. See our website for more details: http://tengine.taobao.org Have fun! Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Mar 6 12:19:14 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 6 Mar 2014 16:19:14 +0400 Subject: Advice on max_ranges for specific location block In-Reply-To: References: Message-ID: <20140306121914.GE34696@mdounin.ru> Hello! On Thu, Mar 06, 2014 at 06:21:22AM -0500, sephreph wrote: > Hi, > > Hopefully this question isn't too basic, I just want to check if I'm missing > something obvious. > > I'm setting up a basic nginx installation with php-fpm behind it running on > 127.0.0.1:9000 - that's working fine. I'm using max_ranges set to 5 > globally in the server block (example below), but I have a specific PHP > script (/free.php) that I want max_ranges set to 0 for. I'm not sure how to > achieve this. > > max_ranges needs to be in a http, server or location block, so would the > best solution be to copy the "location ~ \.php" block and call it "location > ~* /free.php" and just add the "max_ranges 0" setting in there? I was > hoping that location blocks cascade so I could overwrite that setting and > then the request would fall into the .php block but I don't think that's > actually the case? Generally I would recommend adding a "location = /free.php" with settings specific to a particular script. On the other hand, in this case there is no real reason to do anything as nginx won't try to add ranges support for fastcgi output. It's up to your fastcgi script decide how to to handle range requests. -- Maxim Dounin http://nginx.org/ From lists at ruby-forum.com Thu Mar 6 13:22:13 2014 From: lists at ruby-forum.com (David Pesce) Date: Thu, 06 Mar 2014 14:22:13 +0100 Subject: Removing a request header in an access phase handler In-Reply-To: <454BBB75-7E96-4815-A387-6E7D85B4D917@nordsc.com> References: <454BBB75-7E96-4815-A387-6E7D85B4D917@nordsc.com> Message-ID: <17354444bb0305b53db0d7b34c80485e@ruby-forum.com> Jan Algermissen wrote in post #1114756: > Hi, > > I developing a handler for the access phase. In this handler I intend to > remove a certain header. > > It seems that this is exceptionally hard to do - the only hint I have is > how it is done in the headers_more module. > > However, I wonder, whether there is an easier way, given that it is not > an unusual operation. > > If not, I'd greatly benefit from a documentation of the list and > list-part types. Is that available somewhere? Seems hard to figure out > all the bits and pieces that one has to go through to cleanly remove an > element from a list. > > Jan Yes, it's really hard. Headers are made of part list. Those list might have several elements. 
You need to remove the part if there is only one element setting the next pointer of the previous part on the next current part and decreasing the nelts counter. If there is several elements, you need to copy next element on the previous for the whole element array from the removed element position. Here some code I've found: static ngx_int_t ngx_list_delete_elt(ngx_list_t *list, ngx_list_part_t *cur, ngx_uint_t i) { u_char *s, *d, *last; s = (u_char *) cur->elts + i * list->size; d = s + list->size; last = (u_char *) cur->elts + cur->nelts * list->size; while (d < last) { *s++ = *d++; } cur->nelts--; return NGX_OK; } ngx_int_t ngx_list_delete(ngx_list_t *list, void *elt) { u_char *data; ngx_uint_t i; ngx_list_part_t *part, *pre; part = &list->part; pre = part; data = part->elts; for (i = 0; /* void */; i++) { if (i >= part->nelts) { if (part->next == NULL) { break; } i = 0; pre = part; part = part->next; data = part->elts; } if ((data + i * list->size) == (u_char *) elt) { if (&list->part != part && part->nelts == 1) { pre->next = part->next; if (part == list->last) { list->last = pre; } return NGX_OK; } return ngx_list_delete_elt(list, part, i); } } return NGX_ERROR; } -- Posted via http://www.ruby-forum.com/. From yasser.zamani at live.com Thu Mar 6 15:40:52 2014 From: yasser.zamani at live.com (Yasser Zamani) Date: Thu, 6 Mar 2014 19:10:52 +0330 Subject: FastCGI + fork Message-ID: Hi there, I rewrote FastCGI's echo_cpp.cpp example to ridirect another process e.g. ls output DIRECTLY (to achieve high performance) to client but unfortunately does not work as expected! P.S. if I simply read the pipe I get output but I lost performance. So, I need to directly redirect the output to client without any copying streams! P.S. if I replace fork saffs with just a simple cout<<, I can see the output. Anyone has any idea about how to correct it? I worked so hard with no success. #include #ifdef _WIN32 #include #else #include extern char ** environ; #endif #include "fcgio.h" #include "fcgi_config.h" // HAVE_IOSTREAM_WITHASSIGN_STREAMBUF //my includes #include #include #include using namespace std; int main (void) { streambuf * cin_streambuf = cin.rdbuf(); streambuf * cout_streambuf = cout.rdbuf(); streambuf * cerr_streambuf = cerr.rdbuf(); FCGX_Request request; FCGX_Init(); FCGX_InitRequest(&request, 0, 0); while (FCGX_Accept_r(&request) == 0) { fcgi_streambuf cin_fcgi_streambuf(request.in); fcgi_streambuf cout_fcgi_streambuf(request.out); fcgi_streambuf cerr_fcgi_streambuf(request.err); #if HAVE_IOSTREAM_WITHASSIGN_STREAMBUF cin = &cin_fcgi_streambuf; cout = &cout_fcgi_streambuf; cerr = &cerr_fcgi_streambuf; #else cin.rdbuf(&cin_fcgi_streambuf); cout.rdbuf(&cout_fcgi_streambuf); cerr.rdbuf(&cerr_fcgi_streambuf); #endif cout << "Content-type: text/html\r\n" "\r\n"; cout.flush(); int pipe_fd[2]; pipe(pipe_fd); close(STDOUT_FILENO); dup2(pipe_fd[0], STDOUT_FILENO);// redirecting pipe read side to stdout int fid = fork(); if(0==fid) { //here we are in child prcess! 
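// child: attach stdout to the pipe's write end, run the command, then exit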
close(STDOUT_FILENO); dup2(pipe_fd[1],STDOUT_FILENO);//redirecting pipe write side to stdout system("ls -l ./"); close(pipe_fd[1]); return 0; } wait(0);//waiting for child process in parent } #if HAVE_IOSTREAM_WITHASSIGN_STREAMBUF cin = cin_streambuf; cout = cout_streambuf; cerr = cerr_streambuf; #else cin.rdbuf(cin_streambuf); cout.rdbuf(cout_streambuf); cerr.rdbuf(cerr_streambuf); #endif return 0; } From nginx-forum at nginx.us Thu Mar 6 16:54:53 2014 From: nginx-forum at nginx.us (tonyschwartz) Date: Thu, 06 Mar 2014 11:54:53 -0500 Subject: Transforming nginx for Windows In-Reply-To: References: <8dd86ef0cc3b8265cb643278f93ca08a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <34c3e6c7ba71e0cd63bb9370b4fd038f.NginxMailingListEnglish@forum.nginx.org> How ready is this for production? I seem to be getting a lot of intermittent timeouts/dropped connections to the backend or something doing upstream proxying. Just wondering before I go digging into this any more. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,248190#msg-248190 From nginx-forum at nginx.us Thu Mar 6 17:10:04 2014 From: nginx-forum at nginx.us (tonyschwartz) Date: Thu, 06 Mar 2014 12:10:04 -0500 Subject: Transforming nginx for Windows In-Reply-To: <34c3e6c7ba71e0cd63bb9370b4fd038f.NginxMailingListEnglish@forum.nginx.org> References: <8dd86ef0cc3b8265cb643278f93ca08a.NginxMailingListEnglish@forum.nginx.org> <34c3e6c7ba71e0cd63bb9370b4fd038f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9223eb68df0cfa0b025cadf20e470919.NginxMailingListEnglish@forum.nginx.org> Actually, this may be my issue: http://forum.nginx.org/read.php?15,239760,239760 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,248193#msg-248193 From tommy at fam-berglund.eu Thu Mar 6 17:38:51 2014 From: tommy at fam-berglund.eu (Tommy Berglund) Date: Thu, 06 Mar 2014 18:38:51 +0100 Subject: Redirect host Message-ID: <5318B2AB.9010309@fam-berglund.eu> I have a question, is this the best way to perform this action? Only allow https://webmail.exampel.com in the server block ssl and redirect any host that isn't webmail.example.com to the example.com if ($host !~* 'webmail.example.com' ) { rewrite ^/(.*)$ http://example.com/$1 permanent; } nginx version: nginx/1.4.5 (Ubuntu) //Tommy From igor at sysoev.ru Thu Mar 6 18:11:16 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 6 Mar 2014 22:11:16 +0400 Subject: Redirect host In-Reply-To: <5318B2AB.9010309@fam-berglund.eu> References: <5318B2AB.9010309@fam-berglund.eu> Message-ID: <65FC8E2C-FA7D-4336-8CD9-6F51CD8E8428@sysoev.ru> On Mar 6, 2014, at 21:38 , Tommy Berglund wrote: > I have a question, is this the best way to perform this action? > > Only allow https://webmail.exampel.com in the server block ssl > and redirect any host that isn't webmail.example.com to the example.com > > if ($host !~* 'webmail.example.com' ) { > rewrite ^/(.*)$ http://example.com/$1 permanent; > } > > > nginx version: nginx/1.4.5 (Ubuntu) http://nginx.org/en/docs/http/converting_rewrite_rules.html -- Igor Sysoev http://nginx.com From etienne.champetier at free.fr Thu Mar 6 18:24:39 2014 From: etienne.champetier at free.fr (etienne.champetier at free.fr) Date: Thu, 6 Mar 2014 19:24:39 +0100 (CET) Subject: map or not map? 
In-Reply-To: <839245952.1136016382.1394129787978.JavaMail.root@zimbra65-e11.priv.proxad.net> Message-ID: <510882107.1136050615.1394130279145.JavaMail.root@zimbra65-e11.priv.proxad.net> Hi, To help with debugging, I'm adding a header like this (multiple nginx instances behind a load balancer): add_header X-Served-By $hostname; The $hostname values are like 'myserverXX' - how can I put only the XX instead of the full $hostname (to expose less information)? Is using map the only solution (I'm looking for a simple regexp)? Thanks in advance Etienne From nginx-forum at nginx.us Thu Mar 6 18:28:44 2014 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 06 Mar 2014 13:28:44 -0500 Subject: Transforming nginx for Windows In-Reply-To: <34c3e6c7ba71e0cd63bb9370b4fd038f.NginxMailingListEnglish@forum.nginx.org> References: <8dd86ef0cc3b8265cb643278f93ca08a.NginxMailingListEnglish@forum.nginx.org> <34c3e6c7ba71e0cd63bb9370b4fd038f.NginxMailingListEnglish@forum.nginx.org> Message-ID: tonyschwartz Wrote: ------------------------------------------------------- > How ready is this for production? I seem to be getting a lot of > intermittent timeouts/dropped connections to the backend or something > doing upstream proxying. Just wondering before I go digging into this > any more. It is ready for production use and it is used on production servers; just keep an eye out for the betas, which offer solutions to many other problems as we go along redeveloping. Problems with backends are common and mostly not related to nginx: nginx just passes requests to the backend, and anything can go wrong while talking to it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,248199#msg-248199 From nginx-forum at nginx.us Thu Mar 6 19:18:01 2014 From: nginx-forum at nginx.us (agriz) Date: Thu, 06 Mar 2014 14:18:01 -0500 Subject: Nginx - PHP FPM Server load is very high Message-ID: <00d920d62752374b87f96e64e9f38be1.NginxMailingListEnglish@forum.nginx.org> Hi The server is struggling to handle the traffic. I have 8GB RAM and a quad-core server. I have changed the nginx config file and I have the default config for php-fpm. Please advise on the best config. Right now, the load is about 50. user nginx; worker_processes 4; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } client_header_timeout 3m; client_max_body_size 4M; client_body_timeout 3m; send_timeout 3m; client_header_buffer_size 1k; large_client_header_buffers 4 4k; gzip on; gzip_min_length 1100; gzip_buffers 4 8k; gzip_types text/plain; output_buffers 1 32k; postpone_output 1460; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 75 20; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248201,248201#msg-248201 From nginx-forum at nginx.us Thu Mar 6 20:01:49 2014 From: nginx-forum at nginx.us (PetrHolik) Date: Thu, 06 Mar 2014 15:01:49 -0500 Subject: Memory usage doubles on reload In-Reply-To: <20140306034834.GY34696@mdounin.ru> References: <20140306034834.GY34696@mdounin.ru> Message-ID: <51884af648418e128696739560fed678.NginxMailingListEnglish@forum.nginx.org> Hello Maxim, thanks for the reply. Is there a possibility to purge the allocated buffers (RAM) in the old (gracefully shutting down) worker processes? IMO a worker process keeps all of its allocated memory until the last client disconnects. That is a real issue for us - we currently need 32 GB of spare RAM to be able to handle a reload under load.
Sincerely, Petr Holik Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248163,248202#msg-248202 From nginx-forum at nginx.us Thu Mar 6 21:23:27 2014 From: nginx-forum at nginx.us (sephreph) Date: Thu, 06 Mar 2014 16:23:27 -0500 Subject: Advice on max_ranges for specific location block In-Reply-To: <20140306121914.GE34696@mdounin.ru> References: <20140306121914.GE34696@mdounin.ru> Message-ID: Hi Maxim! Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > Generally I would recommend adding a "location = /free.php" with > settings specific to a particular script. > > On the other hand, in this case there is no real reason to do > anything as nginx won't try to add ranges support for fastcgi > output. It's up to your fastcgi script to decide how to handle > range requests. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx I'm using the XSendFile module too (I stripped that out of my example for brevity, but maybe that was a poor idea). That's why I was relying on max_ranges, but I guess I could just check for a range request and return a 416 through the PHP script if one's requested? I'll have to test to see if that works, but if it doesn't I'll just duplicate the existing location block, make it specific to free.php, and add that setting. Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248175,248204#msg-248204 From nginx-forum at nginx.us Thu Mar 6 22:23:12 2014 From: nginx-forum at nginx.us (sephreph) Date: Thu, 06 Mar 2014 17:23:12 -0500 Subject: Advice on max_ranges for specific location block In-Reply-To: References: <20140306121914.GE34696@mdounin.ru> Message-ID: <0deaaade23409c78f2d258e1dacfad97.NginxMailingListEnglish@forum.nginx.org> Sorry to reply to my own post, but for anyone else who comes across this: it looks like the easiest thing to do is to just add the max_ranges directive inside the location block containing the XSendFile alias - so in my case, I made the location that the free.php script uses different from the other scripts and, bam, success. Thanks for all your help, Maxim! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248175,248205#msg-248205 From pierre.hureaux at crous-rouen.fr Thu Mar 6 22:29:53 2014 From: pierre.hureaux at crous-rouen.fr (Pierre Hureaux) Date: Thu, 6 Mar 2014 23:29:53 +0100 Subject: vbulletin vbseo rewrite rule In-Reply-To: <0fb2665f14bf4923c294acf6817cd504.NginxMailingListEnglish@forum.nginx.org> References: <1D6DBB81-FABD-40D0-B1FF-F094B492E0BD@crous-rouen.fr> <0fb2665f14bf4923c294acf6817cd504.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8E822D9B-4EA5-4311-A645-8193A8ED0C79@crous-rouen.fr> IMHO, you have two options: think it through yourself to find a solution, or give us some configuration information. p/ On 4 March 2014 at 11:11, zuckbin wrote: > I tried this and it doesn't work for me. > > Maybe because I have some custom URLs in vbseo. > > And why are all my URLs httpS?! > > boring...
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248039,248062#msg-248062 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Thu Mar 6 23:05:03 2014 From: nginx-forum at nginx.us (tonyschwartz) Date: Thu, 06 Mar 2014 18:05:03 -0500 Subject: Transforming nginx for Windows In-Reply-To: References: <8dd86ef0cc3b8265cb643278f93ca08a.NginxMailingListEnglish@forum.nginx.org> <34c3e6c7ba71e0cd63bb9370b4fd038f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2273e334c0a50991283c7b9dd367f656.NginxMailingListEnglish@forum.nginx.org> There is definitely some issue doing proxying. At some point, the connection to the back end appears to go bad. The request from the browser to nginx just spins and spins. This occurs against (ahem...) IIS6 and also against the Cassini local visual studio development environment exhibits this behavior. I will try this from a linux host to see if I can reproduce the issue there. The issue I'm experiencing seems to be very much the same as in the other topic I referenced above. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,248207#msg-248207 From contact at jpluscplusm.com Thu Mar 6 23:28:47 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 6 Mar 2014 23:28:47 +0000 Subject: map or not map? In-Reply-To: <510882107.1136050615.1394130279145.JavaMail.root@zimbra65-e11.priv.proxad.net> References: <839245952.1136016382.1394129787978.JavaMail.root@zimbra65-e11.priv.proxad.net> <510882107.1136050615.1394130279145.JavaMail.root@zimbra65-e11.priv.proxad.net> Message-ID: On 6 March 2014 18:24, wrote > Is using map the only solution (i'm looking for a simple regexp)? For this, a map is the /best/ solution! J From contact at jpluscplusm.com Thu Mar 6 23:41:47 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 6 Mar 2014 23:41:47 +0000 Subject: Nginx - PHP FPM Server load is very high In-Reply-To: <00d920d62752374b87f96e64e9f38be1.NginxMailingListEnglish@forum.nginx.org> References: <00d920d62752374b87f96e64e9f38be1.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 6 March 2014 19:18, agriz wrote: > The server is struggling to handle the traffic. > I have 8GB ram. Quad core server. [snip] > Right now, the load is about 50 I very much doubt your problem is a simple one which can be solved by tweaking your nginx config. I say this because you have (50/4 == 12.5) times as much work to do on this server as you have CPU cores to do it on. It looks very much like you need ... more hardware! You may be able to offload some static file serving from PHP to nginx/etc via X-Accel-Redirect; you might cache some content using Nginx's (or some other) HTTP caching. But you'll need to really understand your application in order to do them correctly, and no-one here can tell you /exactly/ how to implement them for your situation. If you want to fix this quickly, buy/lease/provision more hardware/VMs now. If you want to fix it cheaply, you'll need to spend time investigating what the PHP is doing that's taking the time, hence how you can help it do it more efficiently (X-Accel-Redirect) or not at all (caching). 
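To make the caching idea concrete, here is a rough, untested sketch of nginx-side micro-caching for PHP responses - every name, path and timing below is invented, and only you can judge which of your pages are safe to cache (anything personalised or logged-in should bypass it):

    http {
        fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2 keys_zone=PHPCACHE:10m max_size=1g inactive=60m;

        server {
            listen 80;
            root /var/www/example.com;

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9000;

                # serve repeat requests from cache for a short time
                fastcgi_cache PHPCACHE;
                fastcgi_cache_key "$scheme$request_method$host$request_uri";
                fastcgi_cache_valid 200 301 1m;

                # skip the cache for anything carrying a session cookie
                fastcgi_cache_bypass $cookie_PHPSESSID;
                fastcgi_no_cache $cookie_PHPSESSID;
            }
        }
    }

Even a one-minute cache on the hot pages can take most of the repeated work off PHP-FPM.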
Cheers, Jonathan From artemrts at ukr.net Fri Mar 7 07:57:26 2014 From: artemrts at ukr.net (wishmaster) Date: Fri, 07 Mar 2014 09:57:26 +0200 Subject: Nginx - PHP FPM Server load is very high In-Reply-To: <00d920d62752374b87f96e64e9f38be1.NginxMailingListEnglish@forum.nginx.org> References: <00d920d62752374b87f96e64e9f38be1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1394178828.699319717.uhdfhnet@frv34.fwdcdn.com> --- Original message --- From: "agriz" Date: 6 March 2014, 21:18:05 > Hi > > The server is struggling to handle the traffic. > I have 8GB ram. Quad core server. > > I have changed the config file for nginx and i have default config for php > fpm. > Please advice the best config. > > Right now, the load is about 50 > > > user nginx; > worker_processes 4; > > error_log /var/log/nginx/error.log warn; > pid /var/run/nginx.pid; > > events { > worker_connections 1024; > } > > > client_header_timeout 3m; > client_max_body_size 4M; > client_body_timeout 3m; > send_timeout 3m; > > client_header_buffer_size 1k; > large_client_header_buffers 4 4k; > > gzip on; > gzip_min_length 1100; > gzip_buffers 4 8k; > gzip_types text/plain; > > output_buffers 1 32k; > postpone_output 1460; > > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 75 20; > The information you introduced is very little. But in any case to achieve little system load you should use fastcgi_cache. From nginx-forum at nginx.us Fri Mar 7 09:21:54 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 07 Mar 2014 04:21:54 -0500 Subject: Transforming nginx for Windows In-Reply-To: <2273e334c0a50991283c7b9dd367f656.NginxMailingListEnglish@forum.nginx.org> References: <8dd86ef0cc3b8265cb643278f93ca08a.NginxMailingListEnglish@forum.nginx.org> <34c3e6c7ba71e0cd63bb9370b4fd038f.NginxMailingListEnglish@forum.nginx.org> <2273e334c0a50991283c7b9dd367f656.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6fadcb0ee85e21646b63ad8e41029682.NginxMailingListEnglish@forum.nginx.org> tonyschwartz Wrote: ------------------------------------------------------- > There is definitely some issue doing proxying. At some point, the > connection to the back end appears to go bad. The request from the > browser to nginx just spins and spins. This occurs against (ahem...) > IIS6 and also against the Cassini local visual studio development 99% sure this is a backend issue, we're using it for proxied IIS5/6/7, tomcat, java. Fcgi'd php and RoR. The WAF (naxsi) for protecting internet facing applications, plenty of Lua code provisioning, Lua scaled backend load-balancing, authentication, etc... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,248221#msg-248221 From mdounin at mdounin.ru Fri Mar 7 10:18:38 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 7 Mar 2014 14:18:38 +0400 Subject: Transforming nginx for Windows In-Reply-To: <2273e334c0a50991283c7b9dd367f656.NginxMailingListEnglish@forum.nginx.org> References: <8dd86ef0cc3b8265cb643278f93ca08a.NginxMailingListEnglish@forum.nginx.org> <34c3e6c7ba71e0cd63bb9370b4fd038f.NginxMailingListEnglish@forum.nginx.org> <2273e334c0a50991283c7b9dd367f656.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140307101838.GT34696@mdounin.ru> Hello! On Thu, Mar 06, 2014 at 06:05:03PM -0500, tonyschwartz wrote: > There is definitely some issue doing proxying. At some point, the > connection to the back end appears to go bad. The request from the browser > to nginx just spins and spins. This occurs against (ahem...) 
IIS6 and also > against the Cassini local visual studio development environment exhibits > this behavior. I will try this from a linux host to see if I can reproduce > the issue there. The issue I'm experiencing seems to be very much the same > as in the other topic I referenced above. The issue in a topic you've referenced is wrong address used in proxying. See http://trac.nginx.org/nginx/ticket/496 for detailed explanation. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Mar 7 10:27:05 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 7 Mar 2014 14:27:05 +0400 Subject: Memory usage doubles on reload In-Reply-To: <51884af648418e128696739560fed678.NginxMailingListEnglish@forum.nginx.org> References: <20140306034834.GY34696@mdounin.ru> <51884af648418e128696739560fed678.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140307102705.GU34696@mdounin.ru> Hello! On Thu, Mar 06, 2014 at 03:01:49PM -0500, PetrHolik wrote: > Hello Maxim, thanks for reply. > > Is there possibility to purge allocated buffer(RAM) in old(gracefully) > worker processes? IMO worker thread have allocated all memory till last > clients disconnects. That is really isue for us - we have currently 32Gigs > of spare RAM to be able to handle reload under load. All allocations done for a particular requests are freed as long as the request is finished. The problem though is that OS memory allocators rarely return freed memory back to the OS, especially on Linux. You may try playing with allocators and/or allocator options to make things better. Note well that "spare" memory isn't really spare, as it can be used by OS for file cache. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Mar 7 13:08:04 2014 From: nginx-forum at nginx.us (flarik) Date: Fri, 07 Mar 2014 08:08:04 -0500 Subject: How to "Require User" like Apache in Nginx Message-ID: <87f8ea93c4322f38dcfea9643b9765f3.NginxMailingListEnglish@forum.nginx.org> Hi all, I was just wondering, how to do the following in nginx: # apache.conf: # ----------------------------------------------------------------------- AuthType Basic AuthUserFile /var/www/auth/htpasswd AuthName "pass please" require valid-user Allow from all Require user userX Require valid-user Require user userX userY Require user userX userZ # ----------------------------------------------------------------------- For what I came up with, but I'm still missing the default user 'userX' for everything. # nginx.conf # ----------------------------------------------------------------------- location / { auth_basic "Munin"; auth_basic_user_file /var/www/auth/htpassd; # userX is allow everywhere # below makes site inaccessible # if ( $remote_user != 'userX' ) { # return 401; # } } location ~ ^/location-1/ { if ( $remote_user !~ '^(userX|userY)$' ) { return 401; } } location ~ ^/location-2/ { if ( $remote_user !~ '^(userX|userZ)$' ) { return 401; } } # ----------------------------------------------------------------------- Any idea how to solve this? Regards, Frodo Larik Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248229,248229#msg-248229 From nginx-forum at nginx.us Fri Mar 7 14:40:24 2014 From: nginx-forum at nginx.us (PetrHolik) Date: Fri, 07 Mar 2014 09:40:24 -0500 Subject: Memory usage doubles on reload In-Reply-To: <20140307102705.GU34696@mdounin.ru> References: <20140307102705.GU34696@mdounin.ru> Message-ID: <5c2a707889b9468eb0db26186fb10882.NginxMailingListEnglish@forum.nginx.org> Ok, thanks for info. I'll, do some research. 
I read some articles about memory allocation and I think when the system will be going to out of memory, the will try to reclaim freed pages which in normal situations when have enough ram does not because of avoiding memory fragmentation. Petr Holik Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248163,248230#msg-248230 From nginx-forum at nginx.us Fri Mar 7 20:16:46 2014 From: nginx-forum at nginx.us (Per Hansson) Date: Fri, 07 Mar 2014 15:16:46 -0500 Subject: No SPDY support in the official repository packages In-Reply-To: <825d83bf650ba5f2ec133180b5c1b86e.NginxMailingListEnglish@forum.nginx.org> References: <6224078.WFQ0l5ZoG9@vbart-laptop> <000e6813fb1e7069ee13617faf43f2d1.NginxMailingListEnglish@forum.nginx.org> <825d83bf650ba5f2ec133180b5c1b86e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <39ffc7fe8bdeb0ce59c513979c0ffda1.NginxMailingListEnglish@forum.nginx.org> I second this request, it would be very welcome :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245553,248234#msg-248234 From nginx-forum at nginx.us Sat Mar 8 02:38:42 2014 From: nginx-forum at nginx.us (lerouxt) Date: Fri, 07 Mar 2014 21:38:42 -0500 Subject: Authentication module help! Message-ID: Hi, I'm trying to integrate nginx with a proprietary authentication scheme and I need a bit of help! The auth scheme is this: traffic is allowed through nginx if there exists a cookie containing a valid HMAC. If not, nginx is to redirect to an auth server (same domain) which will prompt the user for credentials. Upon successful login the auth server will emit a valid HMAC and then redirect the user back to nginx which will then validate and do its thing. The HMAC validation is proprietary and there exists a C lib to perform the task. I figured writing an nginx module that will exeucte during the access phase would do the trick. Trouble is, I can't figure out how to do the redirect to the auth server in the case the HMAC is missing or invalid. Try as I might, I just can't get nginx to do a temporary redirect in the access phase (i can do this just fine in the content phase!). What's the preferred approach for doing this? Can it be done all in the module, or do I need a combination of module + error_page redirection? -Tom Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248235,248235#msg-248235 From richard.ibbotson at gmail.com Sat Mar 8 21:02:37 2014 From: richard.ibbotson at gmail.com (Richard Ibbotson) Date: Sat, 08 Mar 2014 21:02:37 +0000 Subject: Some Issues with Configuration In-Reply-To: References: <2077506.vnlZ8baFKa@sheflug.sheflug.net> <5716412.BYK95o1Wmg@sheflug.sheflug.net> Message-ID: <3741178.QJVc0Gz6Y9@sheflug.sheflug.net> > >>> http://wiki.nginx.org/WordPress<< > > > > I'll have a read through this again. The part that might work Thanks to Jonathan Matthews and Steve Holdoway for some helpful comments. I've made some progress and probably a couple of steps backwards as well. Bit more help could be useful. NGINX version: 1.4.5 (Ubuntu). Looks like fastcgi is broken. There isn't a fix for this until the back end of April. PHP-FPM is working and so I'll have to use that. NGINX is serving up static HTML pages but I now find that WordPress pages are being served as plain text. What I mean by that is that NGINX serves up this page.... " Hello. I have two nginx instances - let's call them upstream and downstream. Both are running Ubuntu 13.10 64-bit and nginx 1.4.1. I want to use proxy_store to mirror some rarely-changing files from upstream to downstream. 
On the downstream server, I have created a /var/www directory owned by www-data (the user configured to run worker nginx processes). All files are served out of this directory. The directory (and its sub-dirs) have 755 permissions. In theory, when I ask for a file from the downstream server, my understanding is that it should look under /var/www for it; upon not finding it, get it from upstream and store it locally in downstream; and then serve the file from downstream on an on-going basis. The upstream server should only show one access in its access log. This is not happening. The downstream server keeps complaining that the file cannot be found locally, and continually fetches the file from upstream instead. So each access attempt to downstream for that file generates one "no such file or directory" error in the downstream error log, and a regular GET in the upstream access log. If I instead touch a file at the location (as the www-data user) where nginx wants to find the file locally on the downstream server; do a GET for that file; and then delete the file, nginx will do the right thing (i.e., get the file from upstream, store it at that location, and then serve it). If I skip the GET, nginx continues to not save the file locally, and keeps getting it each time from upstream. Any idea what's going on? Here's my downstream server's config: upstream download_servers { server download.foobar.com; } server { listen 80; server_name www.foobar.com; location / { root /var/www; index index.html; proxy_redirect off; } location /download/ { root /var/www/download/fetch/; error_page 404 = /fetch$url; } location /fetch/ { internal; proxy_store /var/www/download${uri}; proxy_http_version 1.1; proxy_pass http://download_servers; proxy_store_access user:rw group:rw all:r; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248241,248241#msg-248241 From nginx-forum at nginx.us Sat Mar 8 22:45:29 2014 From: nginx-forum at nginx.us (nginx_newbie_too) Date: Sat, 08 Mar 2014 17:45:29 -0500 Subject: proxy_store help requested In-Reply-To: <2166561d5a155f879bd229505149da2e.NginxMailingListEnglish@forum.nginx.org> References: <2166561d5a155f879bd229505149da2e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0e20091559ceba2f7426db35dcffdcdb.NginxMailingListEnglish@forum.nginx.org> More details: nginx version: nginx/1.4.1 (Ubuntu) TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --with-pcre-jit --with-debug --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-mail --with-mail_ssl_module --add-module=/build/buildd/nginx-1.4.1/debian/modules/nginx-auth-pam --add-module=/build/buildd/nginx-1.4.1/debian/modules/nginx-dav-ext-module --add-module=/build/buildd/nginx-1.4.1/debian/modules/nginx-echo --add-module=/build/buildd/nginx-1.4.1/debian/modules/nginx-upstream-fair 
--add-module=/build/buildd/nginx-1.4.1/debian/modules/ngx_http_substitutions_filter_module Linux 3.11.0-18-generic #32-Ubuntu SMP Tue Feb 18 21:11:14 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248241,248243#msg-248243 From nginx-forum at nginx.us Sat Mar 8 23:21:14 2014 From: nginx-forum at nginx.us (nginx_newbie_too) Date: Sat, 08 Mar 2014 18:21:14 -0500 Subject: proxy_store help requested In-Reply-To: <2166561d5a155f879bd229505149da2e.NginxMailingListEnglish@forum.nginx.org> References: <2166561d5a155f879bd229505149da2e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <90e01c3cfef00b236852f24c29465311.NginxMailingListEnglish@forum.nginx.org> OK, I just tracked this down to whether the proxy_pass value refers to a load-balancing upstream collection (as I'm doing above) vs. a hard-coded reference to one server. So, if I change the proxy_pass config value to refer to http://download.foobar.com instead of http://download_servers, everything works. Is this a known limitation, or a bug that I should file? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248241,248244#msg-248244 From mdounin at mdounin.ru Sun Mar 9 00:14:33 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 9 Mar 2014 04:14:33 +0400 Subject: proxy_store help requested In-Reply-To: <90e01c3cfef00b236852f24c29465311.NginxMailingListEnglish@forum.nginx.org> References: <2166561d5a155f879bd229505149da2e.NginxMailingListEnglish@forum.nginx.org> <90e01c3cfef00b236852f24c29465311.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140309001433.GE34696@mdounin.ru> Hello! On Sat, Mar 08, 2014 at 06:21:14PM -0500, nginx_newbie_too wrote: > OK, I just tracked this down to whether the proxy_pass value refers to a > load-balancing upstream collection (as I'm doing above) vs. a hard-coded > reference to one server. So, if I change the proxy_pass config value to > refer to http://download.foobar.com instead of http://download_servers, > everything works. > > Is this a known limitation, or a bug that I should file? The difference between "download.foobar.com" and "download_servers" is a name that will be used in requests to an upstream (see http://nginx.org/r/proxy_set_header). And if your upstream sever responds with an error due to incorrect name used, nginx will not store the response. That is, from the information you provided what you describe looks like a result of a misconfiguration. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Sun Mar 9 01:51:16 2014 From: nginx-forum at nginx.us (nginx_newbie_too) Date: Sat, 08 Mar 2014 20:51:16 -0500 Subject: proxy_store help requested In-Reply-To: <20140309001433.GE34696@mdounin.ru> References: <20140309001433.GE34696@mdounin.ru> Message-ID: <433847ee32730ef6e891de0cdd3d34b9.NginxMailingListEnglish@forum.nginx.org> Maxim, thank you for the prompt response. I am entirely willing to believe that this is a misconfiguration, but I cannot figure out what I've misconfigured. The upstream server shows no errors in its error log; its access log does a 200 for the first GET, and 304's for subsequent GETs. The downstream server continues to log errors about no such file or directory, and the file doesn't exist in the expected location. The behavior remains consistent even if I set proxy_set_header to $host on the downstream server. What should I be looking at to track this down further? I'm happy to look at (or post here) any logs or headers or configuration. 
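For reference, a minimal sketch of the Host-header point Maxim makes above, using the names already shown in this thread (untested; the rest of the original location is kept as posted):

    upstream download_servers {
        server download.foobar.com;
    }

    location /fetch/ {
        internal;
        proxy_http_version 1.1;
        proxy_pass http://download_servers;
        # with an upstream{} group nginx sends "Host: download_servers" by default,
        # so name the virtual host the backend actually expects:
        proxy_set_header Host download.foobar.com;
        proxy_store /var/www/download$uri;
        proxy_store_access user:rw group:rw all:r;
    }
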
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248241,248247#msg-248247 From francis at daoine.org Sun Mar 9 11:06:42 2014 From: francis at daoine.org (Francis Daly) Date: Sun, 9 Mar 2014 11:06:42 +0000 Subject: Some Issues with Configuration In-Reply-To: <3741178.QJVc0Gz6Y9@sheflug.sheflug.net> References: <2077506.vnlZ8baFKa@sheflug.sheflug.net> <5716412.BYK95o1Wmg@sheflug.sheflug.net> <3741178.QJVc0Gz6Y9@sheflug.sheflug.net> Message-ID: <20140309110642.GC29880@craic.sysops.org> On Sat, Mar 08, 2014 at 09:02:37PM +0000, Richard Ibbotson wrote: Hi there, > NGINX is serving up static HTML pages but I now find that WordPress > pages are being served as plain text. nginx uses a single config file. There is an "include" directive where other files are (effectively) copied in to the configuration when nginx reads it. Any file not mentioned in an "include" directive is irrelevant to nginx. http://nginx.org/en/docs/http/request_processing.html When a request comes in, nginx uses the "listen" and "server_name" directives in every "server" block to choose the one "server" block that will handle this request. If you only have a single "server" block, that's the one that will be used. If you have more than one, make sure that you can know which one will be used for the request you are making. After any server-level rewrite-module directives, nginx chooses a single "location" block in which to process the request. In that block, there will either be a directive that causes the request to be handled elsewhere -- e.g. proxy_pass or fastcgi_pass -- or there is a default "serve it from the filesystem". (There are also subrequests and internal rewrites within nginx; there may be more than one nginx request associated with a single incoming http request.) So, in each case where the output is not what you want, you should be able to say something like I request http://example.com/blog I want a 301 redirection to http://example.com/blog/ I get a 301 redirection to http://www.example.com/blog/ or I request http://example.com/blog/ I want the php-processed content of the file /var/www/blog/index.php I get the unprocessed content of the file /var/www/blog/index.php and with that information, you should be able to see which one location was used to process the request; and you should be able to see why the response was what it was. In your specific case of unprocessed content, the most likely thing is that the one location that processed your request did not have any other "handle it elsewhere" directives, and so a file was served from the filesystem. The config snippet you provided has no "fastcgi_pass" directives, but it does have some "include" directives, so it is possible that there is configuration in one of those files that affects this request. Without those "include" files, it looks like "location /" would be the one that would handle your request (whether it is /blog, /blog/, or /blog/index.php); and that just says "serve it from the filesystem". > What do I have to change to make this work properly ? Tell nginx what you want it to do with requests for php files, if it isn't just "serve it from the filesystem". 
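For example, a minimal (untested) shape for "handle .php elsewhere", assuming php-fpm listens on the unix socket used elsewhere in this thread:

    location ~ \.php$ {
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
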
Good luck with it, f -- Francis Daly francis at daoine.org From luky-37 at hotmail.com Sun Mar 9 11:23:23 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Sun, 9 Mar 2014 12:23:23 +0100 Subject: [alert] could not add new SSL session to the session cache while SSL handshaking In-Reply-To: <20140303174518.GO34696@mdounin.ru> References: , <20140303174518.GO34696@mdounin.ru> Message-ID: Hi Maxim, > You've changed SSL session timeout from 10 minutes to 24 hours, > and this basically means that sessions will use 144 times more > space in session cache. On the other hand, cache size wasn't > changed - so you've run out of space in the cache configured. If > there is no space in a cache nginx will try to drop one > non-expired session from the cache, but it may not be enough to > store a new session (as different sessions may occupy different > space), resulting in alerts you've quoted. I'm trying to understand this better, and read the comments in ngx_ssl_new_session(). Please tell if I got this right. Basically this can only happen if we delete a SSLv2 session from the cache and but need the space for a SSLv3/TLS session (SSLv2 session id needs 16 bytes and a SSLv3/TLS session id 32 bytes). So if SSLv2 hasn't been enabled via ssl_protocols directive, this problem will not happen, correct? Also this cannot happen on a 64-bit platform, as we use 128 bytes sized allocations for the session id + ASN1 representation even with SSLv2. So the two workarounds to avoid this problem in production would be: - disable SSLv2 (which is default anyway and shouldn't be enabled) - use a 64-bit platform > Note well that configuring ssl_buffer_size to 1400 isn't a good > idea unless you are doing so for your own performance testing. > See previous discussions for details. Sure, but in a production environment we also need to consider that we hit such limits, even if buffer_size and timeouts are tuned and state of the art. An attacker can flood the SSL session cache easily and if he is aware of such a limitation he could flood it with SSLv2 sessions, basically disabling the SSL session cache, right? Wouldn't it be better: - drop all expired sessions from cache instead of just one or two? - try dropping up to 4 non-expired session times before giving up ? (max allocation is 128, min allocation is 32, max / min = 4 sessions ? to drop in the worst case) Please let me know if I understood this correctly, thanks! Regards, Lukas From nginx-forum at nginx.us Sun Mar 9 18:58:51 2014 From: nginx-forum at nginx.us (itpp2012) Date: Sun, 09 Mar 2014 14:58:51 -0400 Subject: [ANN] Windows nginx 1.5.12.1 Cheshire Message-ID: 13:58 9-3-2014 nginx 1.5.12.1 Cheshire Based on nginx 1.5.12 (9-3-2014) with; + Fixed a c99 logging issue in naxsi + Now includes nginx_basic. Need a simple powerful Windows webserver without all the bling of it's big brother ? then nginx_basic is for you, other custom builds are available upon request + nginx security advisory (CVE-2014-0088) + encrypted-session-nginx-module (https://github.com/agentzh/encrypted-session-nginx-module) + Fully ASLR and DEP compliant for shared memory (ea. limit_conn_zone, limit_req_zone, etc.) 
+ lua-upstream-nginx-module (https://github.com/agentzh/lua-upstream-nginx-module) + lua-nginx-module v0.9.5rc2 (upgraded 8-3-2014) + Streaming with nginx-rtmp-module, v1.1.3 (upgraded 8-3-2014) + echo-nginx-module v0.51 (upgraded 21-2-2014) + headers-more-nginx-module v0.25 (upgraded 17-1-2014) + nginx-auth-ldap (upgraded 21-2-2014) + HttpSubsModule (upgraded 21-2-2014) + Additional custom 503 error handler via 513 (see onsite readme for example) + Select-boost: event driven, non-blocking API select() replacement, Beta No need to enable anything, it is fully automatic and won't be used if certain conditions do not pass internal tests + Source changes back ported + Source changes add-on's back ported * Additional specifications are like 14:05 10-1-2014: nginx 1.5.9.1 Cheshire Builds can be found here: http://nginx-win.ecsds.eu/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248253,248253#msg-248253 From nginx-forum at nginx.us Mon Mar 10 02:34:08 2014 From: nginx-forum at nginx.us (nginx_newbie_too) Date: Sun, 09 Mar 2014 22:34:08 -0400 Subject: proxy_store help requested In-Reply-To: <20140309001433.GE34696@mdounin.ru> References: <20140309001433.GE34696@mdounin.ru> Message-ID: The 304 response from the upstream server ended up being the culprit. If I changed the upstream server to have 'if_modified_since off;" and thus always respond with a 200 and the content, the problem is resolved. To freshen the mirror, I can then simply remove the mirrored content from the downstream server; no nginx processes even need to be restarted. Maxim, this may be obvious to you, but it wasn't to me, and no documentation pointed me in this direction. As a suggestion, a small note about the significance of setting i-m-s off on upstream servers in such mirroring situations in the documentation about proxy_store would be helpful. As always, I'm extremely grateful for your work, and for the others that provide this awesome software. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248241,248255#msg-248255 From nginx-forum at nginx.us Mon Mar 10 09:49:07 2014 From: nginx-forum at nginx.us (autogun) Date: Mon, 10 Mar 2014 05:49:07 -0400 Subject: Seeking for dynamic proxy_pass solution Message-ID: <6c41b902cb356dcf1e1553ecbbea666a.NginxMailingListEnglish@forum.nginx.org> Hello, I'd like to achieve something as follow using nginx - When browsing to http://myserver/10.10.21.102 for example, I want proxy_pass to be set to http://10.10.21.102; Im getting somewhere but still need help, Catching the argument this way - http://server/?device=10.10.21.102 In nginx config - set $ip $arg_device; proxy_pass http://$ip; Which actually works but breaks when there's an image/css/js, for example - [09/Mar/2014:23:05:44 +0200] "GET /?device=10.10.21.102 HTTP/1.1" 200 689 "-" "Mozilla/4.0 ... [09/Mar/2014:22:58:30 +0200] "GET /welcome.png HTTP/1.1" 500 594 "http://server/?device=10.10.21.102" "Mozilla/4.0 ... When I try to access http://server/?device=10.10.21.102/welcome.png - It works. What would be the easiest way to solve this so css,js,images will be called as: GET /?device=10.10.21.102/welcome.png and not just GET /welcome.png Thank you, D Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248258,248258#msg-248258 From etienne.champetier at free.fr Mon Mar 10 12:51:54 2014 From: etienne.champetier at free.fr (etienne.champetier at free.fr) Date: Mon, 10 Mar 2014 13:51:54 +0100 (CET) Subject: fastcgi_response_time ? 
In-Reply-To: <1472508484.1149270051.1394455448595.JavaMail.root@zimbra65-e11.priv.proxad.net> Message-ID: <753997875.1149289200.1394455914059.JavaMail.root@zimbra65-e11.priv.proxad.net> Hi I'm using nginx in front of fastcgi servers (fastcgi_pass ...), I would like to have in my logs the response_time and the fastcgi_response_time (which doesn't exist), is it possible? I've tried upstream_response_time but it's always the same value as fastcgi_response_time, even with slow connection (simulated with Network Link Conditioner on a mac) Thanks in advance Etienne From aweber at comcast.net Mon Mar 10 12:52:19 2014 From: aweber at comcast.net (AJ Weber) Date: Mon, 10 Mar 2014 08:52:19 -0400 Subject: No SPDY support in the official repository packages In-Reply-To: <39ffc7fe8bdeb0ce59c513979c0ffda1.NginxMailingListEnglish@forum.nginx.org> References: <6224078.WFQ0l5ZoG9@vbart-laptop> <000e6813fb1e7069ee13617faf43f2d1.NginxMailingListEnglish@forum.nginx.org> <825d83bf650ba5f2ec133180b5c1b86e.NginxMailingListEnglish@forum.nginx.org> <39ffc7fe8bdeb0ce59c513979c0ffda1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <531DB583.8030502@comcast.net> This may be obvious, but as such, the OpenSSL 1.0.1e package is available to virtually all CentOS 6.x via the official yum repos, so it's not just for CentOS 6.5 (technically). -AJ On 3/7/2014 3:16 PM, Per Hansson wrote: > I second this request, it would be very welcome :) > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245553,248234#msg-248234 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From richard.ibbotson at gmail.com Mon Mar 10 21:53:53 2014 From: richard.ibbotson at gmail.com (Richard Ibbotson) Date: Mon, 10 Mar 2014 21:53:53 +0000 Subject: Some Issues with Configuration In-Reply-To: References: Message-ID: <1890068.ekSLzqCoy5@sheflug.sheflug.net> Hi Thanks for some helpful answers so far. I'm sure I'm neary there. Just need a bit more help. I'm getting error 404 when I try to load a Wordpress page. That's better than the previous attempt where NGINX was loading the PHP code from WP. Some progress. I can also load static HTML pages from my web server. NGINX access.log and error.log are working. The error.log says.... *1 open() "/usr/share/nginx/html/blog/index.php" failed (2: No such file or directory), *1 open() "/usr/share/nginx/html/blog/index.php" failed (2: No such file or directory), I've done a search around the net for this error. I get some web pages that suggest removing the $uri/ in the nginx.conf file. I've tried that but it doesn't work. I was kind of hoping that someone out there might be able to suggest what to add to the nginx.conf file or what to take away. There is nothing in conf.d other than fastcgi_params.conf. Nothing in sites-available or sites-enabled. So far I've not run ' ln -s sites-available sites-enabled' . So, mostly something to do with nginx.conf ? ............ 
user www-data; pid /run/nginx.pid; worker_processes 4; events { worker_connections 1024; multi_accept on; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 10 10; types_hash_max_size 2048; client_max_body_size 20000k; client_header_timeout 10; client_body_timeout 10; send_timeout 10; index index.html index.htm index.php; upstream php { server unix:/tmp/php-fpm.sock; server 127.0.0.1:9000; } server { server_name sleepypenguin; root /var/www/; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x- javascript text/xml application/xml application/xml+rss text/javascript; location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location /var/www { try_files $uri /blog/index.php?$args; } location ~ .php$ { root html; } #location @wp { #rewrite ^/files(.*) /wp-includes/ms-files.php?file=$1 last; #root html; location ~ [^/]\.php(/|$) { fastcgi_split_path_info ^(blog.php)(/.*)$; if (!-f $document_root$fastcgi_script_name) { return 404; } fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/blog$fastcgi_script_name; include fastcgi_params; rewrite ^(/[^/]+)?(/wp-.*) $2 last; rewrite ^(/[^/]+)?(/.*.php) $2 last; rewrite ^/(.*)$ /index.php?q=$1 last; } location ~* ^.+.(js|css|png|jpg|jpeg|gif|ico)$ { rewrite ^.*/files/(.*(js|css|png|jpg|jpeg|gif|ico))$ /wp- includes/ms-files.php?file=$1 last; expires 24h; log_not_found off; } rewrite ^.*/files/(.*)$ /wp-includes/ms-files.php?file=$1 last; } include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } -- Richard www.sheflug.org.uk https://twitter.com/SleepyPenguin1 From mdounin at mdounin.ru Mon Mar 10 22:34:50 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Mar 2014 02:34:50 +0400 Subject: fastcgi_response_time ? In-Reply-To: <753997875.1149289200.1394455914059.JavaMail.root@zimbra65-e11.priv.proxad.net> References: <1472508484.1149270051.1394455448595.JavaMail.root@zimbra65-e11.priv.proxad.net> <753997875.1149289200.1394455914059.JavaMail.root@zimbra65-e11.priv.proxad.net> Message-ID: <20140310223450.GG34696@mdounin.ru> Hello! On Mon, Mar 10, 2014 at 01:51:54PM +0100, etienne.champetier at free.fr wrote: > Hi > > I'm using nginx in front of fastcgi servers (fastcgi_pass ...), > I would like to have in my logs the response_time and the fastcgi_response_time (which doesn't exist), > is it possible? > > I've tried upstream_response_time but it's always the same value as fastcgi_response_time, > even with slow connection (simulated with Network Link Conditioner on a mac) There are two variables available: - $request_time, time elapsed since first bytes were read from a client; - $upstream_response_time, time elapsed since a request was sent to an upstream till last bytes of a response were got from an upstream. Most noticeable difference between these variables is that $request_time includes time taken to read a request from a client. This difference can be easily seen by typing a request by hand, line by line, in telnet. 
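A minimal log_format sketch showing both variables side by side (the format name and log path are arbitrary; log_format goes in the http context):

    log_format timing '$remote_addr "$request" $status '
                      'req=$request_time upstream=$upstream_response_time';
    access_log /var/log/nginx/timing.log timing;
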
[1] http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_time [2] http://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Mar 10 23:06:58 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Mar 2014 03:06:58 +0400 Subject: proxy_store help requested In-Reply-To: References: <20140309001433.GE34696@mdounin.ru> Message-ID: <20140310230658.GJ34696@mdounin.ru> Hello! On Sun, Mar 09, 2014 at 10:34:08PM -0400, nginx_newbie_too wrote: > The 304 response from the upstream server ended up being the culprit. If I > changed the upstream server to have 'if_modified_since off;" and thus always > respond with a 200 and the content, the problem is resolved. To freshen the > mirror, I can then simply remove the mirrored content from the downstream > server; no nginx processes even need to be restarted. So the actual problem was incorrect testing, not a misconfiguration. And yes, proxy_store only stores 200 responses and nothing more, so anything else won't be stored, including 304. This is generally good enough, as 304 doesn't contain response body and hence doesn't imply much traffic. > Maxim, this may be obvious to you, but it wasn't to me, and no documentation > pointed me in this direction. As a suggestion, a small note about the > significance of setting i-m-s off on upstream servers in such mirroring > situations in the documentation about proxy_store would be helpful. 1) This is not something I would recommend to do, at least not something to be done in general. 2) Note that proxy_store is something very basic. It can be used to do powerful things, but if you are looking for something "ready to use out of the box" - consider using proxy_cache instead. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Mar 10 23:49:51 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Mar 2014 03:49:51 +0400 Subject: Authentication module help! In-Reply-To: References: Message-ID: <20140310234951.GK34696@mdounin.ru> Hello! On Fri, Mar 07, 2014 at 09:38:42PM -0500, lerouxt wrote: > Hi, I'm trying to integrate nginx with a proprietary authentication scheme > and I need a bit of help! > > The auth scheme is this: traffic is allowed through nginx if there exists a > cookie containing a valid HMAC. If not, nginx is to redirect to an auth > server (same domain) which will prompt the user for credentials. Upon > successful login the auth server will emit a valid HMAC and then redirect > the user back to nginx which will then validate and do its thing. > > The HMAC validation is proprietary and there exists a C lib to perform the > task. I figured writing an nginx module that will exeucte during the access > phase would do the trick. Trouble is, I can't figure out how to do the > redirect to the auth server in the case the HMAC is missing or invalid. Try > as I might, I just can't get nginx to do a temporary redirect in the access > phase (i can do this just fine in the content phase!). > > What's the preferred approach for doing this? Can it be done all in the > module, or do I need a combination of module + error_page redirection? 
A redirect can be returned from an access phase handler as usual, by adding appropriate Location header and returning a NGX_HTTP_MOVED_TEMPORARILY code: r->headers_out.location = ngx_list_push(&r->headers_out.headers); if (r->headers_out.location == NULL) { return NGX_HTTP_INTERNAL_SERVER_ERROR; } r->headers_out.location->hash = 1; ngx_str_set(&r->headers_out.location->key, "Location"); ngx_str_set(&r->headers_out.location->value, "http://example.com"); return NGX_HTTP_MOVED_TEMPORARILY; -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Mar 11 00:49:05 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Mar 2014 04:49:05 +0400 Subject: [alert] could not add new SSL session to the session cache while SSL handshaking In-Reply-To: References: <20140303174518.GO34696@mdounin.ru> Message-ID: <20140311004905.GM34696@mdounin.ru> Hello! On Sun, Mar 09, 2014 at 12:23:23PM +0100, Lukas Tribus wrote: > Hi Maxim, > > > > You've changed SSL session timeout from 10 minutes to 24 hours, > > and this basically means that sessions will use 144 times more > > space in session cache. On the other hand, cache size wasn't > > changed - so you've run out of space in the cache configured. If > > there is no space in a cache nginx will try to drop one > > non-expired session from the cache, but it may not be enough to > > store a new session (as different sessions may occupy different > > space), resulting in alerts you've quoted. > > I'm trying to understand this better, and read the comments in > ngx_ssl_new_session(). > > Please tell if I got this right. > > > Basically this can only happen if we delete a SSLv2 session from > the cache and but need the space for a SSLv3/TLS session (SSLv2 > session id needs 16 bytes and a SSLv3/TLS session id 32 bytes). > > So if SSLv2 hasn't been enabled via ssl_protocols directive, this > problem will not happen, correct? > > Also this cannot happen on a 64-bit platform, as we use 128 bytes > sized allocations for the session id + ASN1 representation even > with SSLv2. > > So the two workarounds to avoid this problem in production would be: > - disable SSLv2 (which is default anyway and shouldn't be enabled) > - use a 64-bit platform Length of a serialized session may also vary depending on a key size negotiated, various TLS extensions negotiated and so on. Try looking into d2i_SSL_SESSION() source. I also suspect the comment you are referring to is largely outdated, and modern numbers are very different. > > Note well that configuring ssl_buffer_size to 1400 isn't a good > > idea unless you are doing so for your own performance testing. > > See previous discussions for details. > > Sure, but in a production environment we also need to consider > that we hit such limits, even if buffer_size and timeouts are > tuned and state of the art. The ssl_buffer_size directive is completely unrelated to session caching. > An attacker can flood the SSL session cache easily and if he is aware > of such a limitation he could flood it with SSLv2 sessions, basically > disabling the SSL session cache, right? At worst, the cache won't be effective for as many new sessions as many "bad" sessions previously added to it (or rather half of that number, assuming removing two "bad" sessions frees enough space for one new session). And this is basically identical to what can be done by just adding new sessions to the cache, thus forcing expiration of previously added sessions. > Wouldn't it be better: > - drop all expired sessions from cache instead of just one or two? 
If for each session added two expired sessions are removed, this basically means that there will be no expired sessions - but work done on each particular request is limited. > - try dropping up to 4 non-expired session times before giving up > ? (max allocation is 128, min allocation is 32, max / min = 4 sessions > ? to drop in the worst case) I don't think your calculations are correct, but dropping couple of sessions instead of just one may be beneficial. On the other hand, it has a downside of making it easier to expire previously stored sessions. -- Maxim Dounin http://nginx.org/ From francis at daoine.org Tue Mar 11 08:51:47 2014 From: francis at daoine.org (Francis Daly) Date: Tue, 11 Mar 2014 08:51:47 +0000 Subject: Some Issues with Configuration In-Reply-To: <1890068.ekSLzqCoy5@sheflug.sheflug.net> References: <1890068.ekSLzqCoy5@sheflug.sheflug.net> Message-ID: <20140311085147.GH29880@craic.sysops.org> On Mon, Mar 10, 2014 at 09:53:53PM +0000, Richard Ibbotson wrote: Hi there, > Thanks for some helpful answers so far. I'm sure I'm neary there. > Just need a bit more help. I'm getting error 404 when I try to load a > Wordpress page. You can make it easier for people to help you if you don't force them to guess what you are doing. "I make this request" -- you didn't say, so I'll assume "/blog/index.php" "I get this response" -- you did say http 404 "I want this response" -- presumably the php-processed output of one specific file on your filesystem, but you didn't say which one. So, given that the request is /blog/index.php, which one of your six location blocks will nginx use to process the request? > location = /favicon.ico { > location = /robots.txt { > location /var/www { > location ~ .php$ { > location ~ [^/]\.php(/|$) { > location ~* ^.+.(js|css|png|jpg|jpeg|gif|ico)$ { The docs are at http://nginx.org/r/location What is the configuration in that location that tells nginx how to process the request? Is it what you want it to be? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Mar 11 16:46:18 2014 From: nginx-forum at nginx.us (nginx_newbie_too) Date: Tue, 11 Mar 2014 12:46:18 -0400 Subject: proxy_store help requested In-Reply-To: <20140310230658.GJ34696@mdounin.ru> References: <20140310230658.GJ34696@mdounin.ru> Message-ID: <6e58fe9b2941cda88f5c3b6ca5069b25.NginxMailingListEnglish@forum.nginx.org> Maxim, one last piece of advice requested. Would it be more proper to turn off i-m-s in the request body (by setting proxy_pass_request_headers to off in the downstream server configuration) instead of turning it off on the upstream server? I think that's more correct behavior, but I'm not sure. Yes, proxy_cache simply works out of the box, and it's awesome. But I couldn't understand how to use it so that the downstream server doesn't naively GET content again from the upstream after the expiration time period had passed. I would have wanted instead to only have the cache refreshed if i-m-s suggested that the upstream content had changed. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248241,248283#msg-248283 From nginx-forum at nginx.us Tue Mar 11 18:42:54 2014 From: nginx-forum at nginx.us (mrdziuban) Date: Tue, 11 Mar 2014 14:42:54 -0400 Subject: Building cache with cURL request Message-ID: I'm trying to build the cache for a certain URL by making a simple cURL request, but it's not working. Do I need to specify any headers or change any configurations to get this to work? Thanks in advance. 
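A common reason such a warming request misses is that it does not produce the same cache key as normal traffic (or matches a proxy_cache_bypass condition). A rough way to check, with a placeholder hostname and path:

    # run on the caching proxy itself; example.com and /page/ are placeholders
    curl -s -o /dev/null -H "Host: example.com" http://127.0.0.1/page/

    # temporarily exposing the cache status makes HIT/MISS/BYPASS easy to see
    add_header X-Cache-Status $upstream_cache_status;
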
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248284,248284#msg-248284 From nginx-forum at nginx.us Wed Mar 12 04:04:35 2014 From: nginx-forum at nginx.us (justink101) Date: Wed, 12 Mar 2014 00:04:35 -0400 Subject: Cloak a hostname Message-ID: <5c50ada5930d03a420d9868b79026580.NginxMailingListEnglish@forum.nginx.org> Is it possible assuming I have a domain and uri: http://foo.bar/candy To have the browser location still show http://foo.bar/candy but actually fetch the content from: http://newdomain.com/candy I.E. simply replace the host. Assume I have: server { listen 443; server_name foo.bar; root /srv/www/domains/foo.bar; index index.html; location / { } } Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248288,248288#msg-248288 From mdounin at mdounin.ru Wed Mar 12 11:48:23 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 12 Mar 2014 15:48:23 +0400 Subject: proxy_store help requested In-Reply-To: <6e58fe9b2941cda88f5c3b6ca5069b25.NginxMailingListEnglish@forum.nginx.org> References: <20140310230658.GJ34696@mdounin.ru> <6e58fe9b2941cda88f5c3b6ca5069b25.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140312114823.GI34696@mdounin.ru> Hello! On Tue, Mar 11, 2014 at 12:46:18PM -0400, nginx_newbie_too wrote: > Maxim, one last piece of advice requested. Would it be more proper to turn > off i-m-s in the request body (by setting proxy_pass_request_headers to off > in the downstream server configuration) instead of turning it off on the > upstream server? I think that's more correct behavior, but I'm not sure. Something like proxy_set_header If-Modified-Since ""; proxy_set_header If-None-Match ""; on a frontend should be a good way to disable If-* in requests to an upstream server. I would recommend to don't touch it at all though, and just ignore 304 responses which are not stored. Number of 304 returned by upstream should be small enough. > Yes, proxy_cache simply works out of the box, and it's awesome. But I > couldn't understand how to use it so that the downstream server doesn't > naively GET content again from the upstream after the expiration time period > had passed. I would have wanted instead to only have the cache refreshed if > i-m-s suggested that the upstream content had changed. To make proxy_cache behave more like proxy_store and ignore response expiration, use proxy_ignore_headers (and proxy_cache_valid to set cache time): proxy_ignore_headers Cache-Control Expires; proxy_cache_valid 200 365d; Use of If-Modified-Since to revalidate cached data can be activated by proxy_cache_revalidate directive. See here for documentation: http://nginx.org/r/proxy_set_header http://nginx.org/r/proxy_ignore_headers http://nginx.org/r/proxy_cache_valid http://nginx.org/r/proxy_cache_revalidate -- Maxim Dounin http://nginx.org/ From r at roze.lv Wed Mar 12 14:50:33 2014 From: r at roze.lv (Reinis Rozitis) Date: Wed, 12 Mar 2014 16:50:33 +0200 Subject: Cloak a hostname In-Reply-To: <5c50ada5930d03a420d9868b79026580.NginxMailingListEnglish@forum.nginx.org> References: <5c50ada5930d03a420d9868b79026580.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9BF35AE7E0244B3A93F1DFC073BA2E1D@MasterPC> > Is it possible assuming I have a domain and uri: > > http://foo.bar/candy > To have the browser location still show http://foo.bar/candy but actually > fetch the content from: > > http://newdomain.com/candy If your server runs on foo.bar then: server { location / { proxy_pass http://newdomain.com; } } p.s. 
though It depends if the returned content contains relative or full urls - then you might/can in combination with the proxy_pass use also the Sub module ( http://nginx.org/en/docs/http/ngx_http_sub_module.html ) - egg add something like: sub_filter //newdomain.com //foo.bar; sub_filter_once off; rr From jayadev at ymail.com Wed Mar 12 19:31:42 2014 From: jayadev at ymail.com (Jayadev C) Date: Wed, 12 Mar 2014 12:31:42 -0700 (PDT) Subject: nginx module dev: loadbalancer vs upstream handler options Message-ID: <1394652702.50727.YahooMailNeo@web163505.mail.gq1.yahoo.com> First time here, was looking at supporting http protocol (using nginx) over our custom zeromq server talking protocol buf. Read the excelled tutorial by Evan and was also looking at few similar plugins to get an idea. One confusion I have is, I see some plugins like memcache/redis ones where the http request and response parsing is done by upstream handler modules, while I also see some modules like https://github.com/chaoslawful/drizzle-nginx-module where the request/response handling is done by peer.init_upstream class of functions.? My typical flow would : parse http request -> convert to protobuf request object -> send to zmq server -> {nginx event notification} -> convert protobuf response back to http response -> respond to client. I can imagine doing all the request/response handling during connection send/receive part or by registering create_request,process_header , filter hooks. Is there any guideline on which is the right approach. (I have seen https://github.com/FRiCKLE/ngx_zeromq , not exactly what I want) Thanks Jai -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Mar 13 12:05:37 2014 From: nginx-forum at nginx.us (Emarginated) Date: Thu, 13 Mar 2014 08:05:37 -0400 Subject: New to nginx: ssl/sticky/conditional proxying Message-ID: Good morning all, it was long time i was around with some solution to help me supporting something right now is really complex, but i need to do. Nginx looks like me last solution, so i've started with preliminary test, now , and i'm asking you a little help to validate first if it is possible and how to accomplish. I would thanks in adavance whoever can be so kind to help me. Scenario: a sinlge server (physical) with a lot of ram, near 96Gb that should be used to run from a minimun of 3 to 6 or even more tomcat istances with same configuration, apart listening port. The matter is that JVM running on big heap max memory took a life to complete Gargage collections, so it is a reasonable solution to split up the jvm with lower memory and run in parallel tomcat. But this should be done smoothly with one only real IP no subnet, no possibility to do L3 balance, so NGINX seems to be the only solution. server NGINX+TOMCAT on the backend NGINX on port 80 TOMCAT will run from 81 to 88 for example. But, i have some constraints. A) i have to use HTTP and HTTPS B) sticky: who use tomcat1 should remain there for both HTTP/HTTPS protocol C) if incoming IP is x.x.x.x it shoudl go to tomcat 8 D) if one of the tomcat is down, it won't be used anymore E) if i have a peak of reqeustes, i should be able to add more instances without stopping anything and/or use backup servers realtime. I believe NGINX can do it, but it is a matter of starting to read right documentantion. Thank you for any help on this. 
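A rough, untested sketch of the upstream side of such a setup (addresses and ports are placeholders; requirement C, pinning one source address to a particular instance, would need something extra such as a geo/map block and is not covered here):

    upstream tomcats {
        ip_hash;                                          # same client IP -> same Tomcat (B)
        server 127.0.0.1:81 max_fails=3 fail_timeout=30s; # failed instances are skipped (D)
        server 127.0.0.1:82 max_fails=3 fail_timeout=30s;
        server 127.0.0.1:83 max_fails=3 fail_timeout=30s;
        # more instances can be appended here and picked up with "nginx -s reload" (E)
    }

    server {
        listen 80;
        location / {
            proxy_pass http://tomcats;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

An HTTPS server block (listen 443 ssl) proxying to the same upstream keeps the ip_hash stickiness consistent across both protocols (A, B).
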
Michele Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248320,248320#msg-248320 From nginx-forum at nginx.us Thu Mar 13 15:26:35 2014 From: nginx-forum at nginx.us (greekduke) Date: Thu, 13 Mar 2014 11:26:35 -0400 Subject: Worker dies with a segfault error In-Reply-To: References: Message-ID: <40dbe6e1ee86a0cb7365703fb5944003.NginxMailingListEnglish@forum.nginx.org> Hello, As agentzh mentioned eval was causing the problem. I replaced eval with lua and after changing the configuration accordingly I haven't seen any segfaults again. Thanks everybody for the help Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247886,248324#msg-248324 From nginx-forum at nginx.us Thu Mar 13 15:43:37 2014 From: nginx-forum at nginx.us (nginxu14) Date: Thu, 13 Mar 2014 11:43:37 -0400 Subject: secp521r1 removed from 1.4.6 Message-ID: <76ea16e53fccf5edf5a288245369f318.NginxMailingListEnglish@forum.nginx.org> Hi, It seems that secp521r1 has been removed from 1.4.6. Trying to use it in ssl_ecdh_curve doesnt work but worked in 1.4.5. Was this just a mistake or is there a reason why it has been removed? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248325,248325#msg-248325 From nginx-forum at nginx.us Thu Mar 13 16:20:41 2014 From: nginx-forum at nginx.us (bcx) Date: Thu, 13 Mar 2014 12:20:41 -0400 Subject: Ignored Cache-Control request header Message-ID: <07944deee92ddd83d7e0f22b846d6c9e.NginxMailingListEnglish@forum.nginx.org> I noticed that the nginx http proxy module by default does nothing with the Cache-Control request header that is sent by browsers. Most browsers (I tested Crome and Firefox, but from my online research it showed that even Internet Explorer has the same behaviour) send a Cache-Control: no-cache header when the page requested is with Ctrl-F5 (as opposed to a normal F5 or page hit). I would like to configure my nginx caching proxy to take this request as an instruction to invalidate the cache, send a request to an upstream server, and send and cache that response. Note that I'm NOT talking about the Cache-Control header sent from upstream webservers to the proxy. It's the Cache-Control request headers, not the response header. Is there a configuration option that I've missed? I spent quite some time reading the documentation. Sadly the search terms that I can come up with (cache-control, proxy, etc) are too generic for what I want to express. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248326,248326#msg-248326 From mdounin at mdounin.ru Thu Mar 13 16:22:36 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Mar 2014 20:22:36 +0400 Subject: nginx module dev: loadbalancer vs upstream handler options In-Reply-To: <1394652702.50727.YahooMailNeo@web163505.mail.gq1.yahoo.com> References: <1394652702.50727.YahooMailNeo@web163505.mail.gq1.yahoo.com> Message-ID: <20140313162236.GU34696@mdounin.ru> Hello! On Wed, Mar 12, 2014 at 12:31:42PM -0700, Jayadev C wrote: > > > First time here, was looking at supporting http protocol (using > nginx) over our custom zeromq server talking protocol buf. Read > the excelled tutorial by Evan and was also looking at few > similar plugins to get an idea. > > One confusion I have is, I see some plugins like memcache/redis > ones where the http request and response parsing is done by > upstream handler modules, while I also see some modules like > https://github.com/chaoslawful/drizzle-nginx-module where the > request/response handling is done by peer.init_upstream class of > functions.? 
My typical flow would : parse http request -> > convert to protobuf request object -> send to zmq server -> > {nginx event notification} -> convert protobuf response back to > http response -> respond to client. > > I can imagine doing all the request/response handling during > connection send/receive part or by registering > create_request,process_header , filter hooks. Is there any > guideline on which is the right approach. (I have seen > https://github.com/FRiCKLE/ngx_zeromq , not exactly what I want) > The peer.init_upstream callback is to initialize balancing metadata, not to implement any protocol-related details. Correct approach to implement your own protocol on top of the upstream module is to use create_request/process_header/etc callbacks. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Mar 13 16:27:07 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Mar 2014 20:27:07 +0400 Subject: secp521r1 removed from 1.4.6 In-Reply-To: <76ea16e53fccf5edf5a288245369f318.NginxMailingListEnglish@forum.nginx.org> References: <76ea16e53fccf5edf5a288245369f318.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140313162707.GV34696@mdounin.ru> Hello! On Thu, Mar 13, 2014 at 11:43:37AM -0400, nginxu14 wrote: > Hi, It seems that secp521r1 has been removed from 1.4.6. Trying to use it in > ssl_ecdh_curve doesnt work but worked in 1.4.5. > > Was this just a mistake or is there a reason why it has been removed? It wasn't - nginx just uses what's available from your OpenSSL library. Use $ openssl ecparam -list_curves to find out which curves are supported by OpenSSL library on your host. As long as you are using CentOS 6, likely you've hit something similar to what's described in this ticket: http://trac.nginx.org/nginx/ticket/515 I.e., the ssl_ecdh_curve directive is now actually used and the value is rejected as not supported by OpenSSL on you host, rather than being ignored. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Mar 13 16:31:03 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Mar 2014 20:31:03 +0400 Subject: Ignored Cache-Control request header In-Reply-To: <07944deee92ddd83d7e0f22b846d6c9e.NginxMailingListEnglish@forum.nginx.org> References: <07944deee92ddd83d7e0f22b846d6c9e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140313163103.GW34696@mdounin.ru> Hello! On Thu, Mar 13, 2014 at 12:20:41PM -0400, bcx wrote: > I noticed that the nginx http proxy module by default does nothing with the > Cache-Control request header that is sent by browsers. > > Most browsers (I tested Crome and Firefox, but from my online research it > showed that even Internet Explorer has the same behaviour) send a > Cache-Control: no-cache header when the page requested is with Ctrl-F5 (as > opposed to a normal F5 or page hit). I would like to configure my nginx > caching proxy to take this request as an instruction to invalidate the > cache, send a request to an upstream server, and send and cache that > response. > > Note that I'm NOT talking about the Cache-Control header sent from upstream > webservers to the proxy. It's the Cache-Control request headers, not the > response header. > > Is there a configuration option that I've missed? I spent quite some time > reading the documentation. Sadly the search terms that I can come up with > (cache-control, proxy, etc) are too generic for what I want to express. 
The proxy_cache_bypass directive can be used for what you are looking for, see docs here: http://nginx.org/r/proxy_cache_bypass (Note well that in many cases it's not a good idea to allow users to bypass server-side cache, as this may be used as a DoS vector.) -- Maxim Dounin http://nginx.org/ From jeroen.ooms at stat.ucla.edu Thu Mar 13 17:05:22 2014 From: jeroen.ooms at stat.ucla.edu (Jeroen Ooms) Date: Thu, 13 Mar 2014 10:05:22 -0700 Subject: nginx in ubuntu (trusty) 14.04 Message-ID: After upgrading my stack Apache2-Nginx Ubuntu stack to Ubuntu 14.04 beta, I am getting these errors in /var/log/nginx/error.log 2014/03/13 16:11:01 [emerg] 14625#0: bind() to 0.0.0.0:80 failed (98: Address already in use) 2014/03/13 16:11:01 [emerg] 14625#0: bind() to [::]:80 failed (98: Address already in use) 2014/03/13 16:11:01 [emerg] 14625#0: bind() to 0.0.0.0:80 failed (98: Address already in use) 2014/03/13 16:11:01 [emerg] 14625#0: bind() to [::]:80 failed (98: Address already in use) 2014/03/13 16:11:01 [emerg] 14625#0: still could not bind() However the strange thing is that the default configuration of nginx on Ubuntu does not include any sites running on port 80 (because Apache uses this port). In previous versions of Ubuntu I have never seen these messages. So I don't quite understand where these errors are coming from. Anyone familiar with the nginx package in Ubuntu can explain? From nginx-forum at nginx.us Thu Mar 13 17:59:03 2014 From: nginx-forum at nginx.us (abstein2) Date: Thu, 13 Mar 2014 13:59:03 -0400 Subject: Include Performance Message-ID: Is there any negative performance impact with chaining include commands on nginx? For example, are any of these worse than any of the others from a performance perspective: In nginx.conf: include domain_config_1.conf; include domain_config_2.conf; OR In nginx.conf: include domain_configs.conf; In domain_configs.conf: include domain_config_1.conf; include domain_config_2.conf; OR In nginx.conf: include domain_configs.conf; In domain_configs.conf: include domain_config_list.conf; In domain_config_list.conf: include domain_config_1.conf; include domain_config_2.conf; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248331,248331#msg-248331 From igor at sysoev.ru Thu Mar 13 18:00:40 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 13 Mar 2014 22:00:40 +0400 Subject: Include Performance In-Reply-To: References: Message-ID: <5C67D6B7-D9F0-4DED-864E-AB1D559F4FF7@sysoev.ru> On 13 Mar 2014, at 21:59, abstein2 wrote: > Is there any negative performance impact with chaining include commands on > nginx? > > For example, are any of these worse than any of the others from a > performance perspective: > > In nginx.conf: > include domain_config_1.conf; > include domain_config_2.conf; > > OR > > In nginx.conf: > include domain_configs.conf; > > In domain_configs.conf: > include domain_config_1.conf; > include domain_config_2.conf; > > OR > > In nginx.conf: > include domain_configs.conf; > > In domain_configs.conf: > include domain_config_list.conf; > > In domain_config_list.conf: > include domain_config_1.conf; > include domain_config_2.conf; No impact. ? Igor Sysoev http://nginx.com From nginx-forum at nginx.us Thu Mar 13 18:04:19 2014 From: nginx-forum at nginx.us (abstein2) Date: Thu, 13 Mar 2014 14:04:19 -0400 Subject: Include Performance In-Reply-To: <5C67D6B7-D9F0-4DED-864E-AB1D559F4FF7@sysoev.ru> References: <5C67D6B7-D9F0-4DED-864E-AB1D559F4FF7@sysoev.ru> Message-ID: Awesome -- thanks so much for the quick reply! 
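One way to read the "no impact" answer above: include files are expanded once when the configuration is loaded, so nesting them only matters at startup, not per request. A tiny sketch using the hypothetical file names from the question:

    # domain_configs.conf simply pulls in the per-domain files:
    include domain_config_1.conf;
    include domain_config_2.conf;

    # and the whole tree can be checked after any change with:
    nginx -t
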
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248331,248333#msg-248333 From nginx-forum at nginx.us Thu Mar 13 19:04:11 2014 From: nginx-forum at nginx.us (nginxu14) Date: Thu, 13 Mar 2014 15:04:11 -0400 Subject: secp521r1 removed from 1.4.6 In-Reply-To: <20140313162707.GV34696@mdounin.ru> References: <20140313162707.GV34696@mdounin.ru> Message-ID: Sorry for wasting your time you are correct secp512r1 isnt there when I run the command. Im guessing that secp256r1 isnt in the list because its just the default one. Just using the default settings and not setting a curve uses secp256r1 and secp384r1 works by setting it in ssl_ecdh_curve. I like CentOS its the only OS I use for servers but this kind of thing annoys me about CentOS because its waiting for Red Hat to enable secp521r1. I dont have the need for it but it would be nice to have the option. Looking at this: https://bugzilla.redhat.com/show_bug.cgi?id=1021897#c7 it is coming but not sure when. Thanks very much for your help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248325,248334#msg-248334 From jefftk at google.com Thu Mar 13 19:34:16 2014 From: jefftk at google.com (Jeff Kaufman) Date: Thu, 13 Mar 2014 15:34:16 -0400 Subject: Guide on switching from distro-provided nginx to nginx built from source? Message-ID: I haven't been able to find a good guide for people who have been using nginx as installed by their linux distribution who want to switch to using an nginx they built themselves. This comes up a lot with ngx_pagespeed because for many users we're the first module they want which isn't packaged with their distro. Does this guide exist? If not, I might write one for ubuntu at least. This is somewhat similar to a regular "install nginx from source" guide but it needs to have at least: * how to get a list of the modules you need to add (nginx -V | grep configure_arguments) * how to copy your config over * how to keep and modify your nginx init script when uninstalling the distro-provided nginx * how to do all of this with minimal downtime and risk on a single VPS Jeff From richard.ibbotson at gmail.com Thu Mar 13 20:11:51 2014 From: richard.ibbotson at gmail.com (Richard Ibbotson) Date: Thu, 13 Mar 2014 20:11:51 +0000 Subject: nginx in ubuntu (trusty) 14.04 In-Reply-To: References: Message-ID: <1805484.bpV2e2QnfS@sheflug.sheflug.net> On Thursday 13 Mar 2014 10:05:22 Jeroen Ooms wrote: > 2014/03/13 16:11:01 [emerg] 14625#0: bind() to 0.0.0.0:80 failed > (98: Address already in use) > 2014/03/13 16:11:01 [emerg] 14625#0: bind() to [::]:80 failed (98: > Address already in use) > 2014/03/13 16:11:01 [emerg] 14625#0: bind() to 0.0.0.0:80 failed > (98: Address already in use) > 2014/03/13 16:11:01 [emerg] 14625#0: bind() to [::]:80 failed (98: > Address already in use) > 2014/03/13 16:11:01 [emerg] 14625#0: still could not bind() > > However the strange thing is that the default configuration of nginx > on Ubuntu does not include any sites running on port 80 (because > Apache uses this port). In previous versions of Ubuntu I have never > seen these messages. So I don't quite understand where these errors > are coming from. > > Anyone familiar with the nginx package in Ubuntu can explain? Not really but... After upgrading last Friday I found something similar and also got "bad gateway 502" and php5-fpm segfaulted and crashed. A downgrade to the Ubuntu 12.04 version of PHP5 fixed it. All a bit strange. The present beta version of 14.04 would seem to be very wobbly just now. 
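A side note on the original bind() question: the usual culprit when "no site uses port 80" is a server block with no listen directive at all — nginx then listens on *:80 by default (or *:8000 when not started with superuser privileges) and collides with whatever already owns the port. A hypothetical example of such an innocent-looking block:

    server {
        # no "listen" directive here, so nginx implicitly binds *:80
        server_name example.internal;        # placeholder name
        root /var/www/example;
    }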
I tried to build from the PHP source but that crashed after running make. I think that explains where I was going wrong. I didn't know that php5-fpm was broken in the 14.04 version. Finally got some web pages out of NGINX the other day. -- Richard Sheffield UK https://twitter.com/SleepyPenguin1 From steve at greengecko.co.nz Thu Mar 13 22:02:03 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Fri, 14 Mar 2014 11:02:03 +1300 Subject: Guide on switching from distro-provided nginx to nginx built from source? In-Reply-To: References: Message-ID: <1394748123.24724.217.camel@steve-new> On Thu, 2014-03-13 at 15:34 -0400, Jeff Kaufman wrote: > I haven't been able to find a good guide for people who have been > using nginx as installed by their linux distribution who want to > switch to using an nginx they built themselves. This comes up a lot > with ngx_pagespeed because for many users we're the first module they > want which isn't packaged with their distro. Does this guide exist? > If not, I might write one for ubuntu at least. > > This is somewhat similar to a regular "install nginx from source" > guide but it needs to have at least: > > * how to get a list of the modules you need to add (nginx -V | grep > configure_arguments) > * how to copy your config over > * how to keep and modify your nginx init script when uninstalling the > distro-provided nginx > * how to do all of this with minimal downtime and risk on a single VPS > > Jeff > A good starting point is to build it up using the current configuration delivered from your distro. That way you don't need to fool around moving config files, etc and a reinstall is just to copy /usr/sbin/nginx into place. You need a dev environment... apt-get install build-essential or yum groupinstall "Development Tools" You'll need a load of dependencies, so expect to run the configure quite a few times before it succeeds! And the source code... cd /usr/local/src wget http://nginx.org/download/nginx-1.5.11.tar.gz wget http://nginx.org/download/nginx-1.5.11.tar.gz.asc gpg --keyserver pgpkeys.mit.edu --recv-key A1C052F gpg nginx-1.5.11.tar.gz.asc Code downloaded and verified. expand the archive and move into it tar xf nginx-1.5.11.tar.gz cd nginx-1.5.11 To get the current configuration, use the command nginx -V I use the output to generate a file build.sh which I then modify ready to run the configure ( this is on ans amazon ec2 server )... 
$ cat build.sh ./configure \ --prefix=/etc/nginx \ --sbin-path=/usr/sbin/nginx \ --conf-path=/etc/nginx/nginx.conf \ --error-log-path=/var/log/nginx/error.log \ --http-log-path=/var/log/nginx/access.log \ --pid-path=/var/run/nginx.pid \ --lock-path=/var/run/nginx.lock \ --http-client-body-temp-path=/var/cache/nginx/client_temp \ --http-proxy-temp-path=/var/cache/nginx/proxy_temp \ --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \ --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \ --http-scgi-temp-path=/var/cache/nginx/scgi_temp \ --user=nginx \ --group=nginx \ --with-http_ssl_module \ --with-http_realip_module \ --with-http_addition_module \ --with-http_sub_module \ --with-http_dav_module \ --with-http_flv_module \ --with-http_mp4_module \ --with-http_gunzip_module \ --with-http_gzip_static_module \ --with-http_random_index_module \ --with-http_secure_link_module \ --with-http_stub_status_module \ --with-mail \ --with-mail_ssl_module \ --with-file-aio \ --with-ipv6 \ --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' so I can keep on running sh build.sh until it completes correctly. then you can run make make install nginx -t and if that is successful, restart nginx to get it running. Make sure you've saved the old version of nginx first, although both rpm and dpkg offer options to reinstall if necessary. You can also use this as a starting point to build in ngx_pagespeed support for example ( good howtos online ). I also strip /usr/sbin/nginx to drop the debug info. It's way more important to do this if using the aforementioned extension, as IIRC the end result is over 100MB. hth, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From jreeve at myintervals.com Thu Mar 13 22:34:28 2014 From: jreeve at myintervals.com (John Reeve) Date: Thu, 13 Mar 2014 17:34:28 -0500 Subject: When to new directives from mainline trickle down to stable? Message-ID: <67A19513428B64478A23C1572D5A866618C21206CF@DFW1MBX18.mex07a.mlsrvr.com> Hi! We are currently using the stable release of nginx. We'd really like to implement the sticky directive that was added to the HTTP Upstream Module in the mainline release in 1.5.7 (http://nginx.org/en/docs/http/ngx_http_upstream_module.html#sticky) My question is this... how long does it take for a new directive like this to appear in stable? 1.5.7 was released in Nov 2013. I'm wondering if we should move to mainline in our production environment, or wait patiently for sticky to become part of stable. I've scoured the FAQs and docs but can't find any info on when features trickle down to stable. Any insight you can provide will be greatly appreciated. Thanks! Regards, John Reeve -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Mar 13 22:34:52 2014 From: nginx-forum at nginx.us (bcx) Date: Thu, 13 Mar 2014 18:34:52 -0400 Subject: Ignored Cache-Control request header In-Reply-To: <20140313163103.GW34696@mdounin.ru> References: <20140313163103.GW34696@mdounin.ru> Message-ID: <74c2e73a94a0168d4136c468e786642a.NginxMailingListEnglish@forum.nginx.org> Thank you for your suggestion. I understand about the DoS issue. proxy_cache_bypass indeed is the solution. Documentation was not clear about it, but the result is written to cache. The cache is only bypassed in the lookup fase, not in the write back fase. I worked out this bit of configuration. 
The added header is very useful while testing, I'd remove it in production. location / { if ($http_cache_control = "no-cache") { set $ctrl_Ffive_ed "yes"; } proxy_cache_bypass $ctrl_Ffive_ed; add_header X-cache-bypass $ctrl_Ffive_ed; ...other config... } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248326,248338#msg-248338 From reallfqq-nginx at yahoo.fr Thu Mar 13 23:06:14 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 14 Mar 2014 00:06:14 +0100 Subject: Guide on switching from distro-provided nginx to nginx built from source? In-Reply-To: <1394748123.24724.217.camel@steve-new> References: <1394748123.24724.217.camel@steve-new> Message-ID: For testing the new binary and being able to revert quickly to the other one, use the advice on this documentation page: http://nginx.org/en/docs/control.html That way, you will have virtually no downtime and only stop the old nginx process when you decide the new one is doing its job. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From sarah at nginx.com Thu Mar 13 23:15:18 2014 From: sarah at nginx.com (Sarah Novotny) Date: Thu, 13 Mar 2014 16:15:18 -0700 Subject: When to new directives from mainline trickle down to stable? Message-ID: Hi John, > On Thu, Mar 13, 2014 at 3:34 PM, John Reeve wrote: > Hi! > > We are currently using the stable release of nginx. We'd really like to implement the sticky directive that was added to the HTTP Upstream Module in the mainline release in 1.5.7 (http://nginx.org/en/docs/http/ngx_http_upstream_module.html#sticky) > > My question is this? how long does it take for a new directive like this to appear in stable? 1.5.7 was released in Nov 2013. I'm wondering if we should move to mainline in our production environment, or wait patiently for sticky to become part of stable. > > I've scoured the FAQs and docs but can't find any info on when features trickle down to stable. Any insight you can provide will be greatly appreciated. Thanks! Our stable releases are cut annually midyear. So, 1.5.X will become 1.6 in a few months. However, there is a further complexity in that some of the features in the 1.5.X line are only available in our commercial release of NGINX Plus. Sticky is one of those directives. I'm sorry that's not more clear in the documentation. We're working to update the visuals and information layout in our documentation to make that easier to parse at a glance. If you have further questions about the commercial release, we'd be happy to help but I'd ask you contact me off the FOSS list and I'll connect you to the right person. Sarah > From nginx-forum at nginx.us Thu Mar 13 23:58:37 2014 From: nginx-forum at nginx.us (George) Date: Thu, 13 Mar 2014 19:58:37 -0400 Subject: Guide on switching from distro-provided nginx to nginx built from source? 
In-Reply-To: References: Message-ID: <57edf8605885781041cf1c798f2ed0e1.NginxMailingListEnglish@forum.nginx.org> Maybe this will help http://www.howtoforge.com/using-ngx_pagespeed-with-nginx-on-debian-wheezy and http://www.howtoforge.com/using-ngx_pagespeed-with-nginx-on-debian-jessie-testing - right up your alley for Debian distro :) I personally use CentOS build via Centmin Mod Nginx as it already includes ngx_pagespeed support out of the box http://centminmod.com/nginx_ngx_pagespeed.html :) As to minimal downtime and risk, easiest would be to do a test run first, DigitalOcean VPS charged on an hourly basis is a good platform to do testing for end users wanting to make the jump from pre-packaged Nginx builds to source compilation. I suppose you could even automate the entire transition and shell script something to do all the leg work. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248335,248342#msg-248342 From jreeve at myintervals.com Fri Mar 14 00:02:32 2014 From: jreeve at myintervals.com (John Reeve) Date: Thu, 13 Mar 2014 19:02:32 -0500 Subject: When to new directives from mainline trickle down to stable? In-Reply-To: References: Message-ID: <67A19513428B64478A23C1572D5A866618C21206DE@DFW1MBX18.mex07a.mlsrvr.com> Sarah (and Aaron), Thank you the clarification. At this point we'll have to revisit our options for session persistance and load balancing. If we have any questions about nginx I'll reach out to you. thanks! John -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Sarah Novotny Sent: Thursday, March 13, 2014 4:15 PM To: nginx at nginx.org Subject: re: When to new directives from mainline trickle down to stable? Hi John, > On Thu, Mar 13, 2014 at 3:34 PM, John Reeve wrote: > Hi! > > We are currently using the stable release of nginx. We'd really like > to implement the sticky directive that was added to the HTTP Upstream > Module in the mainline release in 1.5.7 > (http://nginx.org/en/docs/http/ngx_http_upstream_module.html#sticky) > > My question is this... how long does it take for a new directive like this to appear in stable? 1.5.7 was released in Nov 2013. I'm wondering if we should move to mainline in our production environment, or wait patiently for sticky to become part of stable. > > I've scoured the FAQs and docs but can't find any info on when features trickle down to stable. Any insight you can provide will be greatly appreciated. Thanks! Our stable releases are cut annually midyear. So, 1.5.X will become 1.6 in a few months. However, there is a further complexity in that some of the features in the 1.5.X line are only available in our commercial release of NGINX Plus. Sticky is one of those directives. I'm sorry that's not more clear in the documentation. We're working to update the visuals and information layout in our documentation to make that easier to parse at a glance. If you have further questions about the commercial release, we'd be happy to help but I'd ask you contact me off the FOSS list and I'll connect you to the right person. Sarah > _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From niranjankhare at hotmail.com Fri Mar 14 04:01:58 2014 From: niranjankhare at hotmail.com (Niranjan Khare) Date: Fri, 14 Mar 2014 09:31:58 +0530 Subject: SMNP monitoring support Message-ID: Hello all, I'd like to know if nginx supports / plans to support triggering SNMP alarms in the near future? 
Thanks, Niru -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Fri Mar 14 05:00:48 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Fri, 14 Mar 2014 18:00:48 +1300 Subject: logical variables Message-ID: <53228D00.7090800@greengecko.co.nz> If I pass a variable, set to true via a fastcgi_param, does it arrive as a logical or string value? If it's a string, is there a way to pass as a logical true/false? Cheers, Steve From lists at ruby-forum.com Fri Mar 14 07:23:16 2014 From: lists at ruby-forum.com (Ellen Zhang) Date: Fri, 14 Mar 2014 08:23:16 +0100 Subject: renesola renewable energy Message-ID: <7e1f275f326fbbcda0bc6d68eb24d6e6@ruby-forum.com> High-efficiency as well as Vitality Conserving Smart BROUGHT Lighting could be the Preferred for that Electric Lighting effects Market place Vitality preservation and saving the environment are hot subjects throughout today?s modern society. ?Green? has become a popular expression. Through adopting using LEDs, nearby residential areas, govt facilities, businesses, and also homeowners can easily lessen how much cash spent on electricity expenses in addition to save strength as well. Learn about a variety of features about LEDS in comparison with conventional light. Among the Southeast?s major solar EPCs, SunEnergy1, may have obtained 70MW worth associated with ReneSola?s 72-cell high efficiency Virtus II polycrystalline quests because of the finish in the 12 months. The greater in comparison with 233, 000 adventures will probably be used in several projects about Idaho. With all the technique the business performs, many of us require a element provider to be able to inventory stock domestically and give nimble logistics, most while ensuring their particular modules are from the finest quality and sturdiness. This season, ReneSola is now apart from our targets, showing their own efficient at giving this program many of us require, resulting in what we should comprehend to become long lasting relationship along with one of the nation?s primary SOLAR FARM element suppliers.=renesola.com]renesola green home -- Posted via http://www.ruby-forum.com/. From francis at daoine.org Fri Mar 14 08:18:30 2014 From: francis at daoine.org (Francis Daly) Date: Fri, 14 Mar 2014 08:18:30 +0000 Subject: logical variables In-Reply-To: <53228D00.7090800@greengecko.co.nz> References: <53228D00.7090800@greengecko.co.nz> Message-ID: <20140314081830.GN29880@craic.sysops.org> On Fri, Mar 14, 2014 at 06:00:48PM +1300, Steve Holdoway wrote: Hi there, > If I pass a variable, set to true via a fastcgi_param, does it arrive as > a logical or string value? That's a fastcgi thing. http://www.fastcgi.com/devkit/doc/fcgi-spec.html#S5.2 It arrives as some bytes. How you interpret it is up to you. > If it's a string, is there a way to pass as a logical true/false? What bytes do you want to represent logical true/false? Send those bytes, and interpret them in your application. Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Fri Mar 14 09:02:37 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 14 Mar 2014 13:02:37 +0400 Subject: secp521r1 removed from 1.4.6 In-Reply-To: References: <20140313162707.GV34696@mdounin.ru> Message-ID: <20140314090237.GY34696@mdounin.ru> Hello! On Thu, Mar 13, 2014 at 03:04:11PM -0400, nginxu14 wrote: > Sorry for wasting your time you are correct secp512r1 isnt there when I run > the command. 
> > Im guessing that secp256r1 isnt in the list because its just the default > one. Just using the default settings and not setting a curve uses secp256r1 > and secp384r1 works by setting it in ssl_ecdh_curve. Secp256r1 and prime256v1 are just different names of the same curve. (And yes, it's used by default.) > I like CentOS its the only OS I use for servers but this kind of thing > annoys me about CentOS because its waiting for Red Hat to enable secp521r1. > I dont have the need for it but it would be nice to have the option. 256 bit ECC is believed to be equivalent to 3096 bit RSA, and 521 bit ECC - to 16384 bit RSA. So in case of https, unless you are using 16384 bit RSA certificates, use of secp521r1 is mostly pointless and just wastes CPU cycles. > Looking at this: https://bugzilla.redhat.com/show_bug.cgi?id=1021897#c7 it > is coming but not sure when. Note well that this link correctly points out that secp521r1 isn't supported by IE (yet?), so it's use isn't a good idea from compatibility point of view, too. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Mar 14 09:10:31 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 14 Mar 2014 13:10:31 +0400 Subject: Ignored Cache-Control request header In-Reply-To: <74c2e73a94a0168d4136c468e786642a.NginxMailingListEnglish@forum.nginx.org> References: <20140313163103.GW34696@mdounin.ru> <74c2e73a94a0168d4136c468e786642a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140314091031.GA34696@mdounin.ru> Hello! On Thu, Mar 13, 2014 at 06:34:52PM -0400, bcx wrote: > Thank you for your suggestion. I understand about the DoS issue. > proxy_cache_bypass indeed is the solution. Documentation was not clear about > it, but the result is written to cache. The cache is only bypassed in the > lookup fase, not in the write back fase. Documentation explicitly says that proxy_cache_bypass "defines conditions under which the response will not be taken from a cache". There is proxy_no_cache to control saving responses to a cache. > I worked out this bit of configuration. The added header is very useful > while testing, I'd remove it in production. > > location / { > if ($http_cache_control = "no-cache") { > set $ctrl_Ffive_ed "yes"; > } > proxy_cache_bypass $ctrl_Ffive_ed; > add_header X-cache-bypass $ctrl_Ffive_ed; Just a side note: Use of map{} for such things is usually a better idea, though it probably doesn't matter for testing. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Mar 14 09:13:27 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 14 Mar 2014 13:13:27 +0400 Subject: nginx in ubuntu (trusty) 14.04 In-Reply-To: References: Message-ID: <20140314091326.GB34696@mdounin.ru> Hello! On Thu, Mar 13, 2014 at 10:05:22AM -0700, Jeroen Ooms wrote: > After upgrading my stack Apache2-Nginx Ubuntu stack to Ubuntu 14.04 > beta, I am getting these errors in /var/log/nginx/error.log > > 2014/03/13 16:11:01 [emerg] 14625#0: bind() to 0.0.0.0:80 failed (98: > Address already in use) > 2014/03/13 16:11:01 [emerg] 14625#0: bind() to [::]:80 failed (98: > Address already in use) > 2014/03/13 16:11:01 [emerg] 14625#0: bind() to 0.0.0.0:80 failed (98: > Address already in use) > 2014/03/13 16:11:01 [emerg] 14625#0: bind() to [::]:80 failed (98: > Address already in use) > 2014/03/13 16:11:01 [emerg] 14625#0: still could not bind() > > However the strange thing is that the default configuration of nginx > on Ubuntu does not include any sites running on port 80 (because > Apache uses this port). 
In previous versions of Ubuntu I have never > seen these messages. So I don't quite understand where these errors > are coming from. > > Anyone familiar with the nginx package in Ubuntu can explain? Note that "listen 80" is the default - so check if there is a server{} block without listen directives. -- Maxim Dounin http://nginx.org/ From contact at jpluscplusm.com Fri Mar 14 11:53:57 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 14 Mar 2014 11:53:57 +0000 Subject: SMNP monitoring support In-Reply-To: References: Message-ID: On 14 March 2014 04:01, Niranjan Khare wrote: > Hello all, > > I'd like to know if nginx supports / plans to support triggering SNMP alarms > in the near future? I would suggest, and hope, not. That sort of feature creep and bloat doesn't belong anywhere near a small, efficient HTTP proxy! You might find this of interest: http://wp.daveheavyindustries.com/2013/08/12/nginx-snmp-monitoring/ I found it using this: https://www.google.com/ HTH, Jonathan From nginx.org at maclemon.at Fri Mar 14 14:02:23 2014 From: nginx.org at maclemon.at (MacLemon) Date: Fri, 14 Mar 2014 15:02:23 +0100 Subject: secp521r1 removed from 1.4.6 In-Reply-To: <20140314090237.GY34696@mdounin.ru> References: <20140313162707.GV34696@mdounin.ru> <20140314090237.GY34696@mdounin.ru> Message-ID: <9067C838-39E8-460B-BEF3-FA2A5F34B179@maclemon.at> On 14.03.2014, at 10:02, Maxim Dounin wrote: > Note well that this link correctly points out that secp521r1 isn't > supported by IE (yet?), so it's use isn't a good idea from > compatibility point of view, too. IE is the odd one out when it comes to ECC curves support. All other browsers I've checked do support secp521r1 (and secp384r1/secp256r1). We're recommending to use secp384r1 in our Applied Crypto Hardening[0] guide IF you decide to use ECC with NIST curves. If you want to provide forward secrecy to IE users you need to use ECC (ECDHE) since IE (again) is the only browser (I know of) to not support DHE. Instead of removing curves we would actually need support for curve_lists since OpenSSL does support this if a list is passed by an application linked against it. This would open the chance to support better curves[1] with nothing-up-your-sleve numbers with a fallback to NIST curves. IMHO this could really help with the old chicken-and-egg problem of server vs. client support. Best regards MacLemon Full disclosure: I'm a co-author of ?Applied crypto hardening?. [0]: https://bettercrypto.org/ [1]: http://safecurves.cr.yp.to/ From makailol7 at gmail.com Fri Mar 14 14:05:52 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Fri, 14 Mar 2014 19:35:52 +0530 Subject: How to send proxy cache status to backend server? Message-ID: Hello, I have been using proxy cache of Nginx. It provides response header to indicate cache status. Is there some way to forward the cache status (in case of miss, expired or revalidate ) to backend upstream server? Thanks, Makailol -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Mar 14 15:41:42 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 14 Mar 2014 19:41:42 +0400 Subject: How to send proxy cache status to backend server? In-Reply-To: References: Message-ID: <20140314154142.GO34696@mdounin.ru> Hello! On Fri, Mar 14, 2014 at 07:35:52PM +0530, Makailol Charls wrote: > Hello, > > I have been using proxy cache of Nginx. It provides response header to > indicate cache status. 
Is there some way to forward the cache status (in > case of miss, expired or revalidate ) to backend upstream server? The proxy_set_header directive should work, see http://nginx.org/r/proxy_set_header. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Mar 14 15:51:35 2014 From: nginx-forum at nginx.us (Emarginated) Date: Fri, 14 Mar 2014 11:51:35 -0400 Subject: New to nginx: ssl/sticky/conditional proxying In-Reply-To: References: Message-ID: <36dce2d1ba3b4d1d27257477245707a4.NginxMailingListEnglish@forum.nginx.org> Oh, well, thanks for the warm welcome! :) Just kidding, i'm going on reading documentantio and now i have a semi-functional nginx.conf. I miss something, so maybe someone coul help me on the "conditions". I need to send IP to specific BACKEND server. worker_processes 4; worker_priority -1; worker_rlimit_nofile 8192; worker_cpu_affinity 0001 0010 0100 1000; user nginx; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { multi_accept on; worker_connections 4096; } http { ####SSL ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_certificate include/cert.pem; ssl_certificate_key include/key.pem; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ###LOG log_format apache '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent" ' '"$http_cookie"'; access_log /var/log/nginx/access.log apache; ###### UPSTREAM upstream apache{ sticky; server localhost:81; server localhost:82; server localhost:83; } ####CORE server { listen *:80; listen *:443 ssl; keepalive_timeout 70; #### REVERSE PRXYING location / { proxy_set_header Host $host; proxy_pass http://apache; <-- do i send also HTTPS requestes?) ---> i see that connecting to nginx on port 443 redirect me to backend, but not sure if it is running on https there). proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $host; include /etc/nginx/mime.types; } } } Thank you. Regards, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248320,248382#msg-248382 From nginx-forum at nginx.us Fri Mar 14 18:15:39 2014 From: nginx-forum at nginx.us (autogun) Date: Fri, 14 Mar 2014 14:15:39 -0400 Subject: Seeking for dynamic proxy_pass solution In-Reply-To: <6c41b902cb356dcf1e1553ecbbea666a.NginxMailingListEnglish@forum.nginx.org> References: <6c41b902cb356dcf1e1553ecbbea666a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi all, I've managed to solve this, with nginx_substitutions_filter module (http://wiki.nginx.org/HttpSubsModule) It's very easily done. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248258,248399#msg-248399 From pablo.platt at gmail.com Fri Mar 14 20:56:15 2014 From: pablo.platt at gmail.com (pablo platt) Date: Fri, 14 Mar 2014 22:56:15 +0200 Subject: TURN/TLS proxy Message-ID: Hi, Is it possible to use NGINX to proxy TURN/TLS? TURN is a relay server for WebRTC and it can work with UDP and TCP packets. It feels very similar to Websockets so I wonder if it can work out of the box or if we there are plans to support it. Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
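A side note on the ssl/sticky reverse-proxy question above (whether HTTPS requests are also sent on to the backends): when nginx terminates TLS, the upstream Apache servers only ever see plain HTTP, and the conventional way to tell them what the original scheme was is an extra header. A sketch reusing that configuration's upstream name, otherwise illustrative only:

    server {
        listen 80;
        listen 443 ssl;

        location / {
            proxy_pass         http://apache;                        # backends speak plain HTTP only
            proxy_set_header   Host               $host;
            proxy_set_header   X-Real-IP          $remote_addr;
            proxy_set_header   X-Forwarded-For    $proxy_add_x_forwarded_for;
            proxy_set_header   X-Forwarded-Proto  $scheme;           # "http" or "https", as seen by nginx
        }
    }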
URL: From nginx-forum at nginx.us Fri Mar 14 22:23:05 2014 From: nginx-forum at nginx.us (nginxu14) Date: Fri, 14 Mar 2014 18:23:05 -0400 Subject: secp521r1 removed from 1.4.6 In-Reply-To: <20140314090237.GY34696@mdounin.ru> References: <20140314090237.GY34696@mdounin.ru> Message-ID: <1e0e91576d0e3231cab7470cd8604801.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Thu, Mar 13, 2014 at 03:04:11PM -0400, nginxu14 wrote: > > > Sorry for wasting your time you are correct secp512r1 isnt there > when I run > > the command. > > > > Im guessing that secp256r1 isnt in the list because its just the > default > > one. Just using the default settings and not setting a curve uses > secp256r1 > > and secp384r1 works by setting it in ssl_ecdh_curve. > > Secp256r1 and prime256v1 are just different names of the same > curve. (And yes, it's used by default.) > > > I like CentOS its the only OS I use for servers but this kind of > thing > > annoys me about CentOS because its waiting for Red Hat to enable > secp521r1. > > I dont have the need for it but it would be nice to have the option. > > 256 bit ECC is believed to be equivalent to 3096 bit RSA, and 521 > bit ECC - to 16384 bit RSA. So in case of https, unless you are > using 16384 bit RSA certificates, use of secp521r1 is mostly > pointless and just wastes CPU cycles. > > > Looking at this: > https://bugzilla.redhat.com/show_bug.cgi?id=1021897#c7 it > > is coming but not sure when. > > Note well that this link correctly points out that secp521r1 isn't > supported by IE (yet?), so it's use isn't a good idea from > compatibility point of view, too. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx For me its just about having the option. I know secp521r1 is coming from Red Hat. In the same link a member of staff says they got the go ahead from Legal. I read somewhere the problem is because its patented and Red Hat dont want to risk it. Hopefully in the next few months its enabled/added. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248325,248402#msg-248402 From nginx-forum at nginx.us Sat Mar 15 00:07:23 2014 From: nginx-forum at nginx.us (nginxu14) Date: Fri, 14 Mar 2014 20:07:23 -0400 Subject: secp521r1 removed from 1.4.6 In-Reply-To: <9067C838-39E8-460B-BEF3-FA2A5F34B179@maclemon.at> References: <9067C838-39E8-460B-BEF3-FA2A5F34B179@maclemon.at> Message-ID: Yeh I think MS just loves being crap and doing things wrong. Same from my testing and research ive seen all browsers except MS support both DHE and secp521r1. Ive heard of support for a list of curves and then the best supported is used but im not sure if browsers actually support this yet. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248325,248404#msg-248404 From makailol7 at gmail.com Sat Mar 15 07:44:51 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Sat, 15 Mar 2014 13:14:51 +0530 Subject: How to send proxy cache status to backend server? In-Reply-To: <20140314154142.GO34696@mdounin.ru> References: <20140314154142.GO34696@mdounin.ru> Message-ID: Hi, I have been using add_header to send cache status to downstream server or client like this. This is working fine. add_header Cache-Status $upstream_cache_status; Now I tried proxy_set_header to send header to upstream as below. 
proxy_set_header Cache-Status $upstream_cache_status; But this could not send headers to upstream backend server. I think we can not use "$upstream_*" variables to send headers to upstream backend. Would you suggest me some way to send cache-status to backend upstream server? Thanks, Makailol On Fri, Mar 14, 2014 at 9:11 PM, Maxim Dounin wrote: > Hello! > > On Fri, Mar 14, 2014 at 07:35:52PM +0530, Makailol Charls wrote: > > > Hello, > > > > I have been using proxy cache of Nginx. It provides response header to > > indicate cache status. Is there some way to forward the cache status (in > > case of miss, expired or revalidate ) to backend upstream server? > > The proxy_set_header directive should work, see > http://nginx.org/r/proxy_set_header. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Mar 15 17:30:18 2014 From: nginx-forum at nginx.us (gokhanege) Date: Sat, 15 Mar 2014 13:30:18 -0400 Subject: Nginx getting last variables on exploded uri Message-ID: Where is the problem? I cound not found it. My uri is : /rdev/4/0/9/2/3/2/409232-750-0-257506-supra-ayakkabi-resimleri.jpg Nginx conf is : if ($request_uri ~* "/rdev/(.*)-(.*)-(.*)-(.*)\.jpg$") { set $exp1 $1; set $exp2 $2; set $exp3 $3; set $exp4 $4; } add_header out_head1 $exp1; add_header out_head2 $exp2; add_header out_head3 $exp3; add_header out_head4 $exp4; Header Output : out_head1: 4/0/9/2/3/2/409232-750-0-257506 out_head2: supra out_head3: ayakkabi out_head4: resimleri I want to : out_head1: 4/0/9/2/3/2/409232 out_head2: 750 out_head3: 0 out_head4: 257506-supra-ayakkabi-resimleri Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248408,248408#msg-248408 From igor at sysoev.ru Sat Mar 15 17:42:15 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Sat, 15 Mar 2014 21:42:15 +0400 Subject: Nginx getting last variables on exploded uri In-Reply-To: References: Message-ID: <1C29E677-BCE8-43B5-9036-B444AEDA7D60@sysoev.ru> On Mar 15, 2014, at 21:30 , gokhanege wrote: > Where is the problem? I cound not found it. > > My uri is : > > /rdev/4/0/9/2/3/2/409232-750-0-257506-supra-ayakkabi-resimleri.jpg > > Nginx conf is : > > if ($request_uri ~* "/rdev/(.*)-(.*)-(.*)-(.*)\.jpg$") { > set $exp1 $1; > set $exp2 $2; > set $exp3 $3; > set $exp4 $4; > } > add_header out_head1 $exp1; > add_header out_head2 $exp2; > add_header out_head3 $exp3; > add_header out_head4 $exp4; > > Header Output : > > out_head1: 4/0/9/2/3/2/409232-750-0-257506 > out_head2: supra > out_head3: ayakkabi > out_head4: resimleri > > I want to : > > out_head1: 4/0/9/2/3/2/409232 > out_head2: 750 > out_head3: 0 > out_head4: 257506-supra-ayakkabi-resimleri > > Thanks. location ~ ^/rdev/(?[^-]+)-(?[^-]+)-(?[^-]+)-(?.+)\.jpg$ { ... } -- Igor Sysoev http://nginx.com From contact at jpluscplusm.com Sat Mar 15 17:47:42 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sat, 15 Mar 2014 17:47:42 +0000 Subject: Nginx getting last variables on exploded uri In-Reply-To: References: Message-ID: On 15 Mar 2014 17:30, "gokhanege" wrote: > > Where is the problem? I cound not found it. Your problem is that all of your .* matches are greedy, whereas you (probably) want only the last to be greedy. Have a Google for how to do that with regular expressions. 
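For instance — an untested sketch — making the first three groups non-greedy leaves only the final group greedy, which gives exactly the split asked for:

    if ($request_uri ~* "^/rdev/(.*?)-(.*?)-(.*?)-(.*)\.jpg$") {
        set $exp1 $1;    # 4/0/9/2/3/2/409232
        set $exp2 $2;    # 750
        set $exp3 $3;    # 0
        set $exp4 $4;    # 257506-supra-ayakkabi-resimleri
    }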
You also might want to replace /some/ of your "(.*)" with "([^-]*)" if they should /never/ contain hyphens. HTH, J -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Mar 15 18:11:40 2014 From: nginx-forum at nginx.us (gokhanege) Date: Sat, 15 Mar 2014 14:11:40 -0400 Subject: Nginx getting last variables on exploded uri In-Reply-To: <1C29E677-BCE8-43B5-9036-B444AEDA7D60@sysoev.ru> References: <1C29E677-BCE8-43B5-9036-B444AEDA7D60@sysoev.ru> Message-ID: <6ff1b9fb4ddabdb1056bd97a28a56cfd.NginxMailingListEnglish@forum.nginx.org> Thanks for all. Working it now. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248408,248411#msg-248411 From thomas at glanzmann.de Sat Mar 15 22:11:51 2014 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Sat, 15 Mar 2014 23:11:51 +0100 Subject: [PATCH] RFC: ngx_http_upstream_process_upgraded: Allocate buffers also for data from upstream Message-ID: <20140315221151.GA12931@glanzmann.de> While using the ugprade funcationality of nginx to tunnel propiertary HTTP commands I noticed that data were only passing through from upstream to downstream but not the other way around. The reason for that was that no receive buffers for downstream were allocated. Normally the receiver buffers for upstream are allocated in ngx_http_upstream_process_header. In my case they were not because I upgrade the connection before exchanging any data. Maybe you consider this for upstream. --- src/http/ngx_http_upstream.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c index cf9ca0d..5ff1f2b 100644 --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -2644,20 +2644,20 @@ ngx_http_upstream_process_upgraded(ngx_http_request_t *r, b->end = b->last; do_write = 1; } + } + if (b->start == NULL) { + b->start = ngx_palloc(r->pool, u->conf->buffer_size); if (b->start == NULL) { - b->start = ngx_palloc(r->pool, u->conf->buffer_size); - if (b->start == NULL) { - ngx_http_upstream_finalize_request(r, u, NGX_ERROR); - return; - } - - b->pos = b->start; - b->last = b->start; - b->end = b->start + u->conf->buffer_size; - b->temporary = 1; - b->tag = u->output.tag; + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); + return; } + + b->pos = b->start; + b->last = b->start; + b->end = b->start + u->conf->buffer_size; + b->temporary = 1; + b->tag = u->output.tag; } for ( ;; ) { -- 1.7.10.4 From mdounin at mdounin.ru Mon Mar 17 02:39:28 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 17 Mar 2014 06:39:28 +0400 Subject: [PATCH] RFC: ngx_http_upstream_process_upgraded: Allocate buffers also for data from upstream In-Reply-To: <20140315221151.GA12931@glanzmann.de> References: <20140315221151.GA12931@glanzmann.de> Message-ID: <20140317023928.GP34696@mdounin.ru> Hello! On Sat, Mar 15, 2014 at 11:11:51PM +0100, Thomas Glanzmann wrote: > While using the ugprade funcationality of nginx to tunnel propiertary > HTTP commands I noticed that data were only passing through from > upstream to downstream but not the other way around. The reason for that > was that no receive buffers for downstream were allocated. Normally the > receiver buffers for upstream are allocated in > ngx_http_upstream_process_header. In my case they were not because I > upgrade the connection before exchanging any data. Maybe you consider > this for upstream. 
The u->buffer is allocated by ngx_http_upstream_process_header(), and ngx_http_upstream_upgrade() cannot be called bypassing ngx_http_upstream_process_header(). That is, the change you suggest isn't needed in vanilla nginx (even with custom modules). -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Mar 17 02:50:23 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 17 Mar 2014 06:50:23 +0400 Subject: How to send proxy cache status to backend server? In-Reply-To: References: <20140314154142.GO34696@mdounin.ru> Message-ID: <20140317025022.GQ34696@mdounin.ru> Hello! On Sat, Mar 15, 2014 at 01:14:51PM +0530, Makailol Charls wrote: > I have been using add_header to send cache status to downstream server or > client like this. This is working fine. > add_header Cache-Status $upstream_cache_status; > > Now I tried proxy_set_header to send header to upstream as below. > proxy_set_header Cache-Status $upstream_cache_status; > > But this could not send headers to upstream backend server. I think we can > not use "$upstream_*" variables to send headers to upstream backend. > > Would you suggest me some way to send cache-status to backend upstream > server? The proxy_set_header directive works fine and passes $upstream_cache_status to backends without any problems. Mostly likely you did something wrong in your tests. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Mar 17 04:53:18 2014 From: nginx-forum at nginx.us (bignginxfan) Date: Mon, 17 Mar 2014 00:53:18 -0400 Subject: increasing open files limit Message-ID: <8a6cb01b6f473a91272aa557fde05ffc.NginxMailingListEnglish@forum.nginx.org> I am trying to increase the limit on open files for nginx. I've set cat /proc/sys/fs/file-max to 1000000, set worker_rlimit_nofile to 8388608 but I still keep getting errors like: setrlimit(RLIMIT_NOFILE, 8388608) failed (1: Operation not permitted) accept4() failed (24: Too many open files) How do I increase the limit on open files for nginx? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248426,248426#msg-248426 From steve at greengecko.co.nz Mon Mar 17 05:38:54 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 17 Mar 2014 18:38:54 +1300 Subject: increasing open files limit In-Reply-To: <8a6cb01b6f473a91272aa557fde05ffc.NginxMailingListEnglish@forum.nginx.org> References: <8a6cb01b6f473a91272aa557fde05ffc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1395034734.3152.52.camel@steve-new> On Mon, 2014-03-17 at 00:53 -0400, bignginxfan wrote: > I am trying to increase the limit on open files for nginx. I've set cat > /proc/sys/fs/file-max to 1000000, set worker_rlimit_nofile to 8388608 but I > still keep getting errors like: > > setrlimit(RLIMIT_NOFILE, 8388608) failed (1: Operation not permitted) > accept4() failed (24: Too many open files) > > How do I increase the limit on open files for nginx? > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248426,248426#msg-248426 > In /etc/sysctl.conf or a conf file in /etc/sysconfig.d fs.file-max = 1000000 and run /etc/sysctl -p /etc/sysctl.conf in /etc/security/limits.conf www-data soft nofile 500000 www-data hard nofile 800000 ( assuming nginx is running as www-data ). However, check your figures... setting max to a million open files, then trying to set rlimit_nofile to 8 million is never going to work! I suggest 800k should be plenty. 
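On the nginx side of the same problem: worker_rlimit_nofile has to stay within what the kernel and the per-user hard limit actually allow, and asking for 8 million descriptors when the system-wide ceiling is around 1 million is the likely reason for the EPERM above. An excerpt with purely illustrative numbers:

    # nginx.conf
    worker_rlimit_nofile  100000;      # keep this at or below the kernel / per-user hard limits

    events {
        worker_connections  50000;     # each connection needs one descriptor, usually two when proxying
    }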
Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From richard at kearsley.me Mon Mar 17 06:23:00 2014 From: richard at kearsley.me (Richard Kearsley) Date: Mon, 17 Mar 2014 06:23:00 +0000 Subject: nginx SSL/SNI phase Message-ID: <532694C4.4050909@kearsley.me> Hi I came across this 'issue' on the lua module about having the ability to control which SSL certificate is used based on a Lua module handler: https://github.com/chaoslawful/lua-nginx-module/issues/331 I believe at the moment, this phase isn't exposed so there is no way to hand it off to a module (Lua or any other module) Could this phase be opened up? The current method of handling SNI requires a separate server {} for every site/certificate in nginx.conf, but also requires a restart or a HUP to make it effective - something which quickly becomes a headache as more and more sites/certficates are added. How I see this working: server { listen 80; listen 443 ssl; ssl_by_lua ' -- get a list of your sites however you usually do it local sites = require "sites" local hostnames = sites.hostnames() -- match the sni to one of the hostnames if hostnames[ngx.var.sni] then -- communicate the path of the cer/key back to nginx ngx.var.ssl_cer = hostnames[ngx.var.sni].cer_path ngx.var.ssl_key = hostnames[ngx.var.sni].key_path else ngx.var.ssl_cer = "/usr/local/nginx/conf/default.cer" ngx.var.ssl_key = "/usr/local/nginx/conf/default.key" end '; location / { # as normal } } Many thanks! Richard From thomas at glanzmann.de Mon Mar 17 08:02:45 2014 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Mon, 17 Mar 2014 09:02:45 +0100 Subject: [PATCH] RFC: ngx_http_upstream_process_upgraded: Allocate buffers also for data from upstream In-Reply-To: <20140317023928.GP34696@mdounin.ru> References: <20140315221151.GA12931@glanzmann.de> <20140317023928.GP34696@mdounin.ru> Message-ID: <20140317080245.GB7277@glanzmann.de> Hello Maxim, > The u->buffer is allocated by ngx_http_upstream_process_header(), > and ngx_http_upstream_upgrade() cannot be called bypassing > ngx_http_upstream_process_header(). > That is, the change you suggest isn't needed in vanilla nginx > (even with custom modules). I agree. The reason for my modification was that I called ngx_http_upstream_upgrade in ngx_http_upstream_connect because I wanted to avoid that the original request was sent to upstream. Cheers, Thomas From kirilk at cloudxcel.com Mon Mar 17 15:26:00 2014 From: kirilk at cloudxcel.com (Kiril Kalchev) Date: Mon, 17 Mar 2014 17:26:00 +0200 Subject: Custom error page on SSL negotiation failure Message-ID: <7DC1DDBC-3963-4CA7-9058-C967ADAFBE3E@cloudxcel.com> Hi, I am trying to stop my customers that are trying to connect from an insecure web browser (my goal is to use only TLS1.2). I have read the documentation and I am able to set correct ssl ciphers and protocols on the server side, but I am interested in serving custom page when they are using different SSL protocol. I couldn't find any solution on the internet. Is there any way to do so with nginx? Thank you, Kiril -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3053 bytes Desc: not available URL: From luky-37 at hotmail.com Mon Mar 17 15:32:17 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 17 Mar 2014 16:32:17 +0100 Subject: Custom error page on SSL negotiation failure In-Reply-To: <7DC1DDBC-3963-4CA7-9058-C967ADAFBE3E@cloudxcel.com> References: <7DC1DDBC-3963-4CA7-9058-C967ADAFBE3E@cloudxcel.com> Message-ID: Hi, > I am trying to stop my customers that are trying to connect from an > insecure web browser (my goal is to use only TLS1.2). I have read > the documentation and I am able to set correct ssl ciphers and > protocols on the server side, but I am interested in serving custom > page when they are using different SSL protocol. If you want to serve anything, including custom error pages, than you need to allow all ssl ciphers and let your application return that error, based on the SSL cipher used. Checkout "Embedded Variables" in the SSL module [1]. Regards, Lukas [1] http://nginx.org/en/docs/http/ngx_http_ssl_module.html#variables From lists at ruby-forum.com Tue Mar 18 02:12:46 2014 From: lists at ruby-forum.com (Rita Xu) Date: Tue, 18 Mar 2014 03:12:46 +0100 Subject: solar energy Message-ID: High-efficiency along withStrength Keeping Intelligent DIRECTED Light could be the Beloved foryour Electric Lights Market place Power conservation along withprotecting the environment are usually hot topics throughout today?s community. ?Green? has changed into a common expression. Simply by implementing the usage of LEDs, community residential areas, federal amenities, firms, and also homeowners can easily decrease the amount of money invested on vitality payments along with spend less strength at the same time. Learn about the numerous features about LEDS when compared with regular lighting. One of the Southeast?s top pv EPCs, SunEnergy1, should have acquired 70MW worthy of involving ReneSola?s 72-cell substantial effectiveness Virtus II polycrystalline adventures by the end of the calendar year. The harder as compared to 233, 000 modules will probably be employed in several assignments about Idaho. While using method the enterprise works, many of us have to have a element dealer in order to stock supply domestically and offer nimble logistics, just about all although ensuring his or her web theme usually are on the highest quality and durability. -- Posted via http://www.ruby-forum.com/. From kyprizel at gmail.com Tue Mar 18 11:26:10 2014 From: kyprizel at gmail.com (kyprizel) Date: Tue, 18 Mar 2014 15:26:10 +0400 Subject: SSL session cache lifetime vs session ticket lifetime Message-ID: Hi, currently SSL session lifetime and SSL ticket lifetime are equal in nginx. If we use session tickets with big enough lifetime (12hrs), we get a lot of error log messages while allocating new sessions in shared memory: 2014/03/18 13:36:08 [crit] 18730#0: ngx_slab_alloc() failed: no memory in SSL session shared cache "SSL" We don't want to increase session cache size b/c working with it is a blocking operation and I believe it doesn't work good enought in our network scheme. As I can see - those messages are generated by ngx_slab_alloc_pages() even if session was added to the cache after expiration of some old ones. So, what do you think if we add one more config parameter to split session cache and session ticket lifetimes? Thanks. Regards, kyprizel. -------------- next part -------------- An HTML attachment was scrubbed... 
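For context, both lifetimes are currently tied to the same directive, so a cache meant to cover a 12-hour window has to be sized generously — roughly 4000 sessions fit into one megabyte of the shared zone. An illustrative excerpt:

    ssl_session_cache    shared:SSL:64m;    # roughly 4000 sessions per megabyte of shared memory
    ssl_session_timeout  12h;               # currently applies to cached sessions and issued tickets alike
    ssl_session_tickets  on;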
URL: From mdounin at mdounin.ru Tue Mar 18 11:33:11 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 18 Mar 2014 15:33:11 +0400 Subject: SSL session cache lifetime vs session ticket lifetime In-Reply-To: References: Message-ID: <20140318113311.GL34696@mdounin.ru> Hello! On Tue, Mar 18, 2014 at 03:26:10PM +0400, kyprizel wrote: > Hi, > currently SSL session lifetime and SSL ticket lifetime are equal in nginx. > > If we use session tickets with big enough lifetime (12hrs), we get a lot of > error log messages while allocating new sessions in shared memory: > > 2014/03/18 13:36:08 [crit] 18730#0: ngx_slab_alloc() failed: no memory in > SSL session shared cache "SSL" > > We don't want to increase session cache size b/c working with it is a > blocking operation and I believe it doesn't work good enought in our > network scheme. Just a side note: I don't think that size matters from performance point of view. The only real downside is memory used. > As I can see - those messages are generated by ngx_slab_alloc_pages() even > if session was added to the cache after expiration of some old ones. > > So, what do you think if we add one more config parameter to split session > cache and session ticket lifetimes? May be better approach will be to just avoid such messages? -- Maxim Dounin http://nginx.org/ From kyprizel at gmail.com Tue Mar 18 11:42:33 2014 From: kyprizel at gmail.com (kyprizel) Date: Tue, 18 Mar 2014 15:42:33 +0400 Subject: SSL session cache lifetime vs session ticket lifetime In-Reply-To: <20140318113311.GL34696@mdounin.ru> References: <20140318113311.GL34696@mdounin.ru> Message-ID: What will be the best way to do it? On Tue, Mar 18, 2014 at 3:33 PM, Maxim Dounin wrote: > Hello! > > On Tue, Mar 18, 2014 at 03:26:10PM +0400, kyprizel wrote: > > > Hi, > > currently SSL session lifetime and SSL ticket lifetime are equal in > nginx. > > > > If we use session tickets with big enough lifetime (12hrs), we get a lot > of > > error log messages while allocating new sessions in shared memory: > > > > 2014/03/18 13:36:08 [crit] 18730#0: ngx_slab_alloc() failed: no memory in > > SSL session shared cache "SSL" > > > > We don't want to increase session cache size b/c working with it is a > > blocking operation and I believe it doesn't work good enought in our > > network scheme. > > Just a side note: I don't think that size matters from performance > point of view. The only real downside is memory used. > > > As I can see - those messages are generated by ngx_slab_alloc_pages() > even > > if session was added to the cache after expiration of some old ones. > > > > So, what do you think if we add one more config parameter to split > session > > cache and session ticket lifetimes? > > May be better approach will be to just avoid such messages? > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Mar 18 12:05:02 2014 From: nginx-forum at nginx.us (parkerj) Date: Tue, 18 Mar 2014 08:05:02 -0400 Subject: Converting Rewrites to Nginx Message-ID: <3edf3c237bcf79a938d9981631ca6bdd.NginxMailingListEnglish@forum.nginx.org> I have been trying to convert the following htaccess rules to nginx with no luck. 
RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-l RewriteRule ^(.+)$ index.php?url=$1 [QSA,L] I've tried: location / { if (!-e $request_filename){ rewrite ^(.+)$ /index.php?url=$1 last; } } and I've also tried: location / { try_files $uri $uri/ /index.php?url=$1 last; } None seem to work. Any help with this is greatly appreciated. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248446,248446#msg-248446 From superfelo at yahoo.com Tue Mar 18 12:08:55 2014 From: superfelo at yahoo.com (Felix Quintana) Date: Tue, 18 Mar 2014 05:08:55 -0700 (PDT) Subject: PHP type. Thread Safe or Non Thread Safe? Message-ID: <1395144535.9103.YahooMailNeo@web163405.mail.gq1.yahoo.com> What kind of php to use as a fastcgi in nginx on Windows? From reallfqq-nginx at yahoo.fr Tue Mar 18 12:53:42 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 18 Mar 2014 13:53:42 +0100 Subject: PHP type. Thread Safe or Non Thread Safe? In-Reply-To: <1395144535.9103.YahooMailNeo@web163405.mail.gq1.yahoo.com> References: <1395144535.9103.YahooMailNeo@web163405.mail.gq1.yahoo.com> Message-ID: This ML is intended for nginx-related problems. Yours is PHP-related. Either ask this question on some PHP ML or RTFM ... With regards, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguelmclara at gmail.com Tue Mar 18 13:50:54 2014 From: miguelmclara at gmail.com (Miguel Clara) Date: Tue, 18 Mar 2014 13:50:54 +0000 Subject: Converting Rewrites to Nginx In-Reply-To: <3edf3c237bcf79a938d9981631ca6bdd.NginxMailingListEnglish@forum.nginx.org> References: <3edf3c237bcf79a938d9981631ca6bdd.NginxMailingListEnglish@forum.nginx.org> Message-ID: location / { try_files $uri $uri/ /index.php?url=$1 last; } Maybe what you want here is: location / { try_files $uri $uri/ @rewrite; } location @rewrite { rewrite ^/(.*)$ /index.php/$1; } On Tue, Mar 18, 2014 at 12:05 PM, parkerj wrote: > I have been trying to convert the following htaccess rules to nginx with no > luck. > > RewriteCond %{REQUEST_FILENAME} !-d > RewriteCond %{REQUEST_FILENAME} !-f > RewriteCond %{REQUEST_FILENAME} !-l > RewriteRule ^(.+)$ index.php?url=$1 [QSA,L] > > I've tried: > > location / { > if (!-e $request_filename){ > rewrite ^(.+)$ /index.php?url=$1 last; > } > } > > and I've also tried: > > location / { > try_files $uri $uri/ /index.php?url=$1 last; > } > > None seem to work. Any help with this is greatly appreciated. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,248446,248446#msg-248446 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Mar 18 14:12:05 2014 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 18 Mar 2014 10:12:05 -0400 Subject: PHP type. Thread Safe or Non Thread Safe? In-Reply-To: <1395144535.9103.YahooMailNeo@web163405.mail.gq1.yahoo.com> References: <1395144535.9103.YahooMailNeo@web163405.mail.gq1.yahoo.com> Message-ID: <7914666be74edc6d271fa6e690d95cac.NginxMailingListEnglish@forum.nginx.org> Felix Quintana Wrote: ------------------------------------------------------- > What kind of php to use as a fastcgi in nginx on Windows? NTS. 
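That is, the non-thread-safe build: with nginx, PHP runs as a separate FastCGI process (php-cgi started on its own and listening on a port), so PHP's thread-safety layer adds overhead without buying anything. A minimal, illustrative location block — the port number is an assumption:

    location ~ \.php$ {
        fastcgi_pass   127.0.0.1:9000;      # wherever the php-cgi process was told to listen
        fastcgi_index  index.php;
        include        fastcgi_params;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    }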
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248447,248454#msg-248454 From yzprofiles at gmail.com Tue Mar 18 14:52:06 2014 From: yzprofiles at gmail.com (yzprofile) Date: Tue, 18 Mar 2014 22:52:06 +0800 Subject: Why doesn't Nginx rewrite directive support '@' named_location? Message-ID: <02E56E11FEE449C98A13C82B582EC8EC@gmail.com> Hi, I have a simple question: Why doesn't Nginx rewrite directive support '@' named_location? : ) Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Mar 18 14:54:43 2014 From: nginx-forum at nginx.us (parkerj) Date: Tue, 18 Mar 2014 10:54:43 -0400 Subject: Converting Rewrites to Nginx In-Reply-To: References: Message-ID: Thanks for pointing me in the right direction. That last part gave me a redirect loop error, so I changed it to this: rewrite ^/(.*)$ /index.php?url=$1; I applied it, loaded the site, restarted nginx, and loaded the site again. It seems to work. Hopefully, my change is not just a fluke but will continue to work. However, if you think my change will cause an issue for the future, please let me know. Again, thanks for your help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248446,248456#msg-248456 From vbart at nginx.com Tue Mar 18 15:07:58 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 18 Mar 2014 19:07:58 +0400 Subject: Why doesn't Nginx rewrite directive support '@' named_location? In-Reply-To: <02E56E11FEE449C98A13C82B582EC8EC@gmail.com> References: <02E56E11FEE449C98A13C82B582EC8EC@gmail.com> Message-ID: <1732199.c1cCGLjtOF@vbart-laptop> On Tuesday 18 March 2014 22:52:06 yzprofile wrote: > Hi, > > I have a simple question: > > Why doesn't Nginx rewrite directive support '@' named_location? > > : ) The "rewrite" directive is intended to change request URI, while the named locations are intended for redirecting without URI change. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Tue Mar 18 15:08:23 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 18 Mar 2014 19:08:23 +0400 Subject: Why doesn't Nginx rewrite directive support '@' named_location? In-Reply-To: <02E56E11FEE449C98A13C82B582EC8EC@gmail.com> References: <02E56E11FEE449C98A13C82B582EC8EC@gmail.com> Message-ID: <20140318150823.GS34696@mdounin.ru> Hello! On Tue, Mar 18, 2014 at 10:52:06PM +0800, yzprofile wrote: > Hi, > > I have a simple question: > > Why doesn't Nginx rewrite directive support '@' named_location? Simple answer is: Because rewrite directive is to rewrite request URI as per it's name, while redirecting a request to a named location preserves request URI (and that's the sole purpose of named locations). More complex one is: Support of named locations in rewrites was discussed more than once, and yet we've failed to convince Igor that it will be a good change. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Mar 18 16:00:53 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 18 Mar 2014 20:00:53 +0400 Subject: SSL session cache lifetime vs session ticket lifetime In-Reply-To: References: <20140318113311.GL34696@mdounin.ru> Message-ID: <20140318160053.GX34696@mdounin.ru> Hello! On Tue, Mar 18, 2014 at 03:42:33PM +0400, kyprizel wrote: > What will be the best way to do it? Probably a flag in ngx_slab_pool_t will be good enough. > > > On Tue, Mar 18, 2014 at 3:33 PM, Maxim Dounin wrote: > > > Hello! 
> > > > On Tue, Mar 18, 2014 at 03:26:10PM +0400, kyprizel wrote: > > > > > Hi, > > > currently SSL session lifetime and SSL ticket lifetime are equal in > > nginx. > > > > > > If we use session tickets with big enough lifetime (12hrs), we get a lot > > of > > > error log messages while allocating new sessions in shared memory: > > > > > > 2014/03/18 13:36:08 [crit] 18730#0: ngx_slab_alloc() failed: no memory in > > > SSL session shared cache "SSL" > > > > > > We don't want to increase session cache size b/c working with it is a > > > blocking operation and I believe it doesn't work good enought in our > > > network scheme. > > > > Just a side note: I don't think that size matters from performance > > point of view. The only real downside is memory used. > > > > > As I can see - those messages are generated by ngx_slab_alloc_pages() > > even > > > if session was added to the cache after expiration of some old ones. > > > > > > So, what do you think if we add one more config parameter to split > > session > > > cache and session ticket lifetimes? > > > > May be better approach will be to just avoid such messages? > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Mar 18 16:44:35 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 18 Mar 2014 20:44:35 +0400 Subject: nginx-1.5.12 Message-ID: <20140318164434.GZ34696@mdounin.ru> Changes with nginx 1.5.12 18 Mar 2014 *) Security: a heap memory buffer overflow might occur in a worker process while handling a specially crafted request by ngx_http_spdy_module, potentially resulting in arbitrary code execution (CVE-2014-0133). Thanks to Lucas Molas, researcher at Programa STIC, Fundaci?n Dr. Manuel Sadosky, Buenos Aires, Argentina. *) Feature: the "proxy_protocol" parameters of the "listen" and "real_ip_header" directives, the $proxy_protocol_addr variable. *) Bugfix: in the "fastcgi_next_upstream" directive. Thanks to Lucas Molas. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Mar 18 16:44:57 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 18 Mar 2014 20:44:57 +0400 Subject: nginx-1.4.7 Message-ID: <20140318164457.GD34696@mdounin.ru> Changes with nginx 1.4.7 18 Mar 2014 *) Security: a heap memory buffer overflow might occur in a worker process while handling a specially crafted request by ngx_http_spdy_module, potentially resulting in arbitrary code execution (CVE-2014-0133). Thanks to Lucas Molas, researcher at Programa STIC, Fundaci?n Dr. Manuel Sadosky, Buenos Aires, Argentina. *) Bugfix: in the "fastcgi_next_upstream" directive. Thanks to Lucas Molas. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Mar 18 16:45:41 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 18 Mar 2014 20:45:41 +0400 Subject: nginx security advisory (CVE-2014-0133) Message-ID: <20140318164541.GH34696@mdounin.ru> Hello! A bug in the experimental SPDY implementation in nginx was found, which might allow an attacker to cause a heap memory buffer overflow in a worker process by using a specially crafted request, potentially resulting in arbitrary code execution (CVE-2014-0133). 
The problem affects nginx 1.3.15 - 1.5.11, compiled with the ngx_http_spdy_module module (which is not compiled by default) and without the --with-debug configure option, if the "spdy" option of the "listen" directive is used in a configuration file. The problem is fixed in nginx 1.5.12, 1.4.7. Patch for the problem can be found here: http://nginx.org/download/patch.2014.spdy2.txt Thanks to Lucas Molas, researcher at Programa STIC, Fundación Dr. Manuel Sadosky, Buenos Aires, Argentina. -- Maxim Dounin http://nginx.org/en/donation.html

From yzprofiles at gmail.com Tue Mar 18 17:44:21 2014 From: yzprofiles at gmail.com (yzprofile) Date: Wed, 19 Mar 2014 01:44:21 +0800 Subject: Re: Why doesn't Nginx rewrite directive support '@' named_location? In-Reply-To: <20140318150823.GS34696@mdounin.ru> References: <02E56E11FEE449C98A13C82B582EC8EC@gmail.com> <20140318150823.GS34696@mdounin.ru> Message-ID: Hi, > Simple answer is: > > Because rewrite directive is to rewrite request URI as per it's name, > while redirecting a request to a named location preserves request > URI (and that's the sole purpose of named locations). > > More complex one is: > > Support of named locations in rewrites was discussed more than > once, and yet we've failed to convince Igor that it will be a > good change. Thanks for your answer. : ) Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL:

From nginx-forum at nginx.us Tue Mar 18 19:10:23 2014 From: nginx-forum at nginx.us (parkerj) Date: Tue, 18 Mar 2014 15:10:23 -0400 Subject: Converting Rewrites to Nginx In-Reply-To: <3edf3c237bcf79a938d9981631ca6bdd.NginxMailingListEnglish@forum.nginx.org> References: <3edf3c237bcf79a938d9981631ca6bdd.NginxMailingListEnglish@forum.nginx.org> Message-ID: Ugh, I spoke too soon. It works when the URLs are like this: http://example.com/install?step=1 But it does not work when the URLs are like this: http://example.com/dashboard/ http://example.com/profile/ Sometimes it brings back 404 Not Found and other times it comes back with 500 Internal Error. @Miguel, trying it my way and trying it the way you suggested, both give me a 404/500. Do you have any suggestions? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248446,248485#msg-248485

From fbogdanovic at xnet.hr Tue Mar 18 19:18:45 2014 From: fbogdanovic at xnet.hr (Haris Bogdanovich) Date: Tue, 18 Mar 2014 20:18:45 +0100 Subject: starting Message-ID: <54E0EE59FB3241819508CE2253DF508E@komp> Hi. How to get started with nginx ? Is there some tutorial for Lisp users ? What's the connection between nginx and hunchentoot ? Why do I get something about nginx when trying to access hunchentoot mailing list ? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL:

From reallfqq-nginx at yahoo.fr Tue Mar 18 19:32:59 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 18 Mar 2014 20:32:59 +0100 Subject: starting In-Reply-To: <54E0EE59FB3241819508CE2253DF508E@komp> References: <54E0EE59FB3241819508CE2253DF508E@komp> Message-ID: Hello, On Tue, Mar 18, 2014 at 8:18 PM, Haris Bogdanovich wrote: > How to get started with nginx ? > First of all, welcome to the nginx community :o) All the basic principles can be found there: http://nginx.org/en/docs/beginners_guide.html > Is there some tutorial for Lisp users ? > What's the connection between nginx and hunchentoot ? > Why do I get something about nginx when trying to access hunchentoot > mailing list ?
> ?If you have precise questions about configuring ?nginx for specific use cases, feel free to ask them here. I dunno what you are talking about. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Mar 18 20:11:16 2014 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 18 Mar 2014 16:11:16 -0400 Subject: [ANN] Windows nginx 1.5.12.2 Cheshire Message-ID: <5ae3681231c30cf2af6f0af24a068bc0.NginxMailingListEnglish@forum.nginx.org> 20:29 18-3-2014 nginx 1.5.12.2 Cheshire Based on nginx 1.5.12 (release 18-3-2014) with; + nginx security advisory (CVE-2014-0133) + echo-nginx-module v0.51 (upgraded 18-3-2014) + Nginx-limit-traffic-rate-module (https://github.com/bigplum/Nginx-limit-traffic-rate-module) + lua-nginx-module v0.9.6 (upgraded 18-3-2014) + changed compile order (openresty) + Source changes back ported + Source changes add-on's back ported + Changes for nginx_basic: Source changes back ported * Additional specifications are like 13:58 9-3-2014 nginx 1.5.12.1 Cheshire Builds can be found here: http://nginx-win.ecsds.eu/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248488,248488#msg-248488 From makailol7 at gmail.com Wed Mar 19 04:49:59 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Wed, 19 Mar 2014 10:19:59 +0530 Subject: How to send proxy cache status to backend server? In-Reply-To: <20140317025022.GQ34696@mdounin.ru> References: <20140314154142.GO34696@mdounin.ru> <20140317025022.GQ34696@mdounin.ru> Message-ID: Thanks, It is working. I was checking wrong variable name. I have noticed that when catch is being "REVALIDATED", backend receives "EXPIRED" and upstream receives "REVALIDATED". Is this correct ? On Mon, Mar 17, 2014 at 8:20 AM, Maxim Dounin wrote: > Hello! > > On Sat, Mar 15, 2014 at 01:14:51PM +0530, Makailol Charls wrote: > > > I have been using add_header to send cache status to downstream server or > > client like this. This is working fine. > > add_header Cache-Status $upstream_cache_status; > > > > Now I tried proxy_set_header to send header to upstream as below. > > proxy_set_header Cache-Status $upstream_cache_status; > > > > But this could not send headers to upstream backend server. I think we > can > > not use "$upstream_*" variables to send headers to upstream backend. > > > > Would you suggest me some way to send cache-status to backend upstream > > server? > > The proxy_set_header directive works fine and passes > $upstream_cache_status to backends without any problems. Mostly > likely you did something wrong in your tests. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Mar 19 09:39:37 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Mar 2014 13:39:37 +0400 Subject: How to send proxy cache status to backend server? In-Reply-To: References: <20140314154142.GO34696@mdounin.ru> <20140317025022.GQ34696@mdounin.ru> Message-ID: <20140319093936.GQ34696@mdounin.ru> Hello! On Wed, Mar 19, 2014 at 10:19:59AM +0530, Makailol Charls wrote: > Thanks, It is working. I was checking wrong variable name. > I have noticed that when catch is being "REVALIDATED", backend receives > "EXPIRED" and upstream receives "REVALIDATED". Is this correct ? Yes, it's correct. 
The "REVALIDATED" status is only known after nginx recieves 304 from a backend. Before this happens, the only thing known is that response in the cache no longer valid, that is, status is "EXPIRED". -- Maxim Dounin http://nginx.org/ From reto.haeberli at comnode.net Wed Mar 19 09:40:58 2014 From: reto.haeberli at comnode.net (Reto Haeberli) Date: Wed, 19 Mar 2014 10:40:58 +0100 Subject: Optimal gcc parameters for building nginx from source Message-ID: Hello list Are there recommended gcc parameters when building nginx from source - regarding performance and security? --with-cc-opt=parameters I noticed that the prebuilt package on ubuntu used --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' Is this recommended? What would make sense resp. be a best practice approach? thx -------------- next part -------------- An HTML attachment was scrubbed... URL: From makailol7 at gmail.com Wed Mar 19 10:00:03 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Wed, 19 Mar 2014 15:30:03 +0530 Subject: How to send proxy cache status to backend server? In-Reply-To: References: <20140314154142.GO34696@mdounin.ru> <20140317025022.GQ34696@mdounin.ru> Message-ID: One more thing I would like to know, would it be possible to proxy_pass request conditionally based on $upstream_cache_status ? For example here we set request header with proxy_set_header Cache-Status $upstream_cache_status; instead we can use condition like below, if ($upstream_cache_status = "BYPASS") { proxy_pass http://someIP; } if ($upstream_cache_status = "MISS") { proxy_pass http://anotherIP; } I tried this but could not find this working. I can set request header for backend with proxy_set_header but can not use this variable conditionally. Thanks, Makailol On Wed, Mar 19, 2014 at 10:19 AM, Makailol Charls wrote: > Thanks, It is working. I was checking wrong variable name. > I have noticed that when catch is being "REVALIDATED", backend receives > "EXPIRED" and upstream receives "REVALIDATED". Is this correct ? > > > > > On Mon, Mar 17, 2014 at 8:20 AM, Maxim Dounin wrote: > >> Hello! >> >> On Sat, Mar 15, 2014 at 01:14:51PM +0530, Makailol Charls wrote: >> >> > I have been using add_header to send cache status to downstream server >> or >> > client like this. This is working fine. >> > add_header Cache-Status $upstream_cache_status; >> > >> > Now I tried proxy_set_header to send header to upstream as below. >> > proxy_set_header Cache-Status $upstream_cache_status; >> > >> > But this could not send headers to upstream backend server. I think we >> can >> > not use "$upstream_*" variables to send headers to upstream backend. >> > >> > Would you suggest me some way to send cache-status to backend upstream >> > server? >> >> The proxy_set_header directive works fine and passes >> $upstream_cache_status to backends without any problems. Mostly >> likely you did something wrong in your tests. >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Wed Mar 19 10:48:25 2014 From: nginx-forum at nginx.us (dvdnginx) Date: Wed, 19 Mar 2014 06:48:25 -0400 Subject: XSLT, one XML file and differing URIs Message-ID: <8d89a19a4e8ca093c05554761b920319.NginxMailingListEnglish@forum.nginx.org> Hello, I was wondering if anyone could put on the right path to do the following, So in it's simplest form, lets say I have one file "chapter.xml" in directory A /A/chapter.xml I want to use nginx XSLT processing ability to "present" this file based on different URIs, so If someone accesses /A/ it presents chapter.xml using XSLT file chapter.xsl, if someone accesses /A/sec1/ it presents chapter.xml using XSLT file sec.xsl and passes it parameter 1, if someone accesses /A/sec2/ it presents chapter.xml using XSLT file sec.xsl and passes it parameter 2. etc. I can achieve the first one as follows location /A/ {; xslt_stylesheet chapter.xsl; index chapter.xml } but I'm stuck on /A/sec1 /A/sec2 etc Thanks, Dave Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248495,248495#msg-248495 From superfelo at yahoo.com Wed Mar 19 12:28:30 2014 From: superfelo at yahoo.com (Felix Quintana) Date: Wed, 19 Mar 2014 05:28:30 -0700 (PDT) Subject: PHP type. Thread Safe or Non Thread Safe? In-Reply-To: <7914666be74edc6d271fa6e690d95cac.NginxMailingListEnglish@forum.nginx.org> References: <1395144535.9103.YahooMailNeo@web163405.mail.gq1.yahoo.com> <7914666be74edc6d271fa6e690d95cac.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1395232110.87571.YahooMailNeo@web163401.mail.gq1.yahoo.com> thank you very much I think it has to do with nginx because it is part of your setup. The documentation specifies how to do work with php but never mention the type of php. And it's been hard to find. From nginx-forum at nginx.us Wed Mar 19 13:06:16 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 19 Mar 2014 09:06:16 -0400 Subject: PHP type. Thread Safe or Non Thread Safe? In-Reply-To: <1395232110.87571.YahooMailNeo@web163401.mail.gq1.yahoo.com> References: <1395232110.87571.YahooMailNeo@web163401.mail.gq1.yahoo.com> Message-ID: <981098177829a99dd2e0996bd490778b.NginxMailingListEnglish@forum.nginx.org> Its not my setup, its a common nginx example how to use a backend like PHP, this can be RoR, Perl or any other type of backend which has a listening interface nginx can pass information on to. Tests has shown that the NTS version is the most reliable/stable to use with nginx which does not mean TS can't be used. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248447,248498#msg-248498 From mdounin at mdounin.ru Wed Mar 19 14:08:20 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Mar 2014 18:08:20 +0400 Subject: XSLT, one XML file and differing URIs In-Reply-To: <8d89a19a4e8ca093c05554761b920319.NginxMailingListEnglish@forum.nginx.org> References: <8d89a19a4e8ca093c05554761b920319.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140319140820.GX34696@mdounin.ru> Hello! 
On Wed, Mar 19, 2014 at 06:48:25AM -0400, dvdnginx wrote: > Hello, > > I was wondering if anyone could put on the right path to do the following, > > So in it's simplest form, lets say I have one file "chapter.xml" in > directory A > > /A/chapter.xml > > I want to use nginx XSLT processing ability to "present" this file based on > different URIs, so > > If someone accesses /A/ it presents chapter.xml using XSLT file chapter.xsl, > if someone accesses /A/sec1/ it presents chapter.xml using XSLT file > sec.xsl and passes it parameter 1, if someone accesses /A/sec2/ it presents > chapter.xml using XSLT file sec.xsl and passes it parameter 2. > > etc. > > I can achieve the first one as follows > > location /A/ {; > xslt_stylesheet chapter.xsl; > index chapter.xml > } > > but I'm stuck on /A/sec1 /A/sec2 etc The "alias" directive should do the trick, see http://nginx.org/r/alias. Something like this should work: location /A/ { xslt_stylesheet chapter.xsl; ... } location /A/sec1/ { alias /path/to/A/; xslt_stylesheet sec.xsl; ... } -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Mar 19 14:15:32 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Mar 2014 18:15:32 +0400 Subject: How to send proxy cache status to backend server? In-Reply-To: References: <20140314154142.GO34696@mdounin.ru> <20140317025022.GQ34696@mdounin.ru> Message-ID: <20140319141532.GY34696@mdounin.ru> Hello! On Wed, Mar 19, 2014 at 03:30:03PM +0530, Makailol Charls wrote: > One more thing I would like to know, would it be possible to proxy_pass > request conditionally based on $upstream_cache_status ? > > For example here we set request header with proxy_set_header Cache-Status > $upstream_cache_status; instead we can use condition like below, > > if ($upstream_cache_status = "BYPASS") { > proxy_pass http://someIP; > } > > if ($upstream_cache_status = "MISS") { > proxy_pass http://anotherIP; > } > > I tried this but could not find this working. I can set request header for > backend with proxy_set_header but can not use this variable conditionally. That's expected, as cache status isn't known at the rewrite phase when "if" directives are executed. It's not even known if a request will be proxied at all. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Mar 19 14:41:39 2014 From: nginx-forum at nginx.us (tobydunst) Date: Wed, 19 Mar 2014 10:41:39 -0400 Subject: How to Purge cach of 502 and 504's Message-ID: <09b531272131686b3093f59cadd15843.NginxMailingListEnglish@forum.nginx.org> Hi All, I am using nginx as a reverse proxy and have configured it to cache 502 using proxy_cache_valid 502 30m; This is working fine, but my question is how do I purge these from the cache if required? There is no matching file in the proxy_cache_path to delete, so it appears the Nginx caches these in memory. Restarting Nginx certainly clears these cached results but I need to avoid restarting as I have Websocket connections I want to avoid dropping. I can disable caching using proxy_no_cache and proxy_cache_bypass and reload, then wait for the cached entries to time out. That seems to work but I wanted to ask if there is a simple one hit solution to empty any cached results. Kind Regards, Warren. 
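P.S. For reference, the caching part of my config is roughly the following (the upstream name, zone name and path are simplified placeholders, not the real values):

    proxy_cache_path  /var/cache/nginx/backend  keys_zone=backend_cache:10m;

    server {
        location / {
            proxy_pass         http://backend;
            proxy_cache        backend_cache;
            # the part that caches bad gateway responses for half an hour
            proxy_cache_valid  502 30m;
        }
    }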
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248506,248506#msg-248506 From lists at ruby-forum.com Thu Mar 20 03:31:28 2014 From: lists at ruby-forum.com (Thameera Nawaratna) Date: Thu, 20 Mar 2014 04:31:28 +0100 Subject: Nginx Server with Two network Card Message-ID: Hi All, Server has installed with Nginx with one NIC and later on we add another NIC. we can use SSH or any other service to connect to the server with new IP address (New NIC). But I am not able to connect to the port define for Nginx server. I am not able to telnet to the port with new IP. Our application run on torqubox. Here is my default.conf file server { listen 10000; server_name default; underscores_in_headers on; location / { access_log off; proxy_pass http://127.0.0.1:8080; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } Server: Ubuntu x86_64 x86_64 x86_64 GNU/Linux Nginx: nginx version: nginx/1.4.1 (Ubuntu) Any Idea why Nginx not accept connections from new IP? -- Posted via http://www.ruby-forum.com/. From makailol7 at gmail.com Thu Mar 20 04:08:40 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Thu, 20 Mar 2014 09:38:40 +0530 Subject: How to send proxy cache status to backend server? In-Reply-To: <20140319141532.GY34696@mdounin.ru> References: <20140314154142.GO34696@mdounin.ru> <20140317025022.GQ34696@mdounin.ru> <20140319141532.GY34696@mdounin.ru> Message-ID: Hi, Is there some way to achieve this? I want to pass requests to backend based on cache status condition. Thanks, Makailol On Wed, Mar 19, 2014 at 7:45 PM, Maxim Dounin wrote: > Hello! > > On Wed, Mar 19, 2014 at 03:30:03PM +0530, Makailol Charls wrote: > > > One more thing I would like to know, would it be possible to proxy_pass > > request conditionally based on $upstream_cache_status ? > > > > For example here we set request header with proxy_set_header Cache-Status > > $upstream_cache_status; instead we can use condition like below, > > > > if ($upstream_cache_status = "BYPASS") { > > proxy_pass http://someIP; > > } > > > > if ($upstream_cache_status = "MISS") { > > proxy_pass http://anotherIP; > > } > > > > I tried this but could not find this working. I can set request header > for > > backend with proxy_set_header but can not use this variable > conditionally. > > That's expected, as cache status isn't known at the rewrite phase > when "if" directives are executed. It's not even known if a > request will be proxied at all. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Thu Mar 20 08:15:45 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 20 Mar 2014 09:15:45 +0100 Subject: Nginx Server with Two network Card In-Reply-To: References: Message-ID: Have you tried to 'reload' the server by sending SIGHUP to the master process? Or if this is not enough try upgrading it on the fly or restart (stop/start) it. Nginx, as any process, cannot detect system changes such as this ones on its own... If you modiify your hardware environment, you need to 'refresh' all your process based on the new one. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Thu Mar 20 08:28:04 2014 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Thu, 20 Mar 2014 09:28:04 +0100 Subject: Nginx stop function in service file Message-ID: I am using the official Debian package (stable) and I noticed the service file, in its do_stop() function, sends SIGTERM to the master process. However, the docs say SIGTERM (and SIGQUIT) sent to the master process provokes a 'fast shutdown' whereas SIGQUIT would provoke a 'graceful shutdown'. Why is not SIGQUIT being used? What is the difference between those termination signals, speaking of nginx behavior? Thanks, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Thu Mar 20 10:14:53 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 20 Mar 2014 14:14:53 +0400 Subject: Nginx stop function in service file In-Reply-To: References: Message-ID: On Mar 20, 2014, at 12:28 , B.R. wrote: > I am using the official Debian package (stable) and I noticed the service file, in its do_stop() function, sends SIGTERM to the master process. > > However, the docs say SIGTERM (and SIGQUIT) sent to the master process provokes a 'fast shutdown' whereas SIGQUIT would provoke a 'graceful shutdown'. > > Why is not SIGQUIT being used? What is the difference between those termination signals, speaking of nginx behavior? Graceful shutdown means that nginx does not close active client connections. This may take hours. -- Igor Sysoev http://nginx.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From superfelo at yahoo.com Thu Mar 20 12:08:03 2014 From: superfelo at yahoo.com (Felix Quintana) Date: Thu, 20 Mar 2014 05:08:03 -0700 (PDT) Subject: PHP type. Thread Safe or Non Thread Safe? In-Reply-To: <981098177829a99dd2e0996bd490778b.NginxMailingListEnglish@forum.nginx.org> References: <1395232110.87571.YahooMailNeo@web163401.mail.gq1.yahoo.com> <981098177829a99dd2e0996bd490778b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1395317283.2212.YahooMailNeo@web163404.mail.gq1.yahoo.com> Thank you very much. Sorry if my English causes mistakes, my native language is Spanish. From nginx-forum at nginx.us Thu Mar 20 12:58:57 2014 From: nginx-forum at nginx.us (dompz) Date: Thu, 20 Mar 2014 08:58:57 -0400 Subject: Debugging symbols for nginx-1.4.7-1.el6.ngx.x86_64.rpm Message-ID: <0bc74745a1530dccd603cb8a0511143f.NginxMailingListEnglish@forum.nginx.org> I have a question to the nginx team - is there any chance for me to get the debugging symbols for the nginx-1.4.7-1.el6.ngx.x86_64.rpm from http://nginx.org/packages/rhel/6/x86_64/RPMS/? It seems the nginx-debug package contains a different binary, so the symbols won't match the one from the nginx package. Do you keep the symbols for the clean package and is there a chance for me to access them? Thanks, Dominik Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248527,248527#msg-248527 From sb at nginx.com Thu Mar 20 13:19:34 2014 From: sb at nginx.com (Sergey Budnevitch) Date: Thu, 20 Mar 2014 17:19:34 +0400 Subject: Debugging symbols for nginx-1.4.7-1.el6.ngx.x86_64.rpm In-Reply-To: <0bc74745a1530dccd603cb8a0511143f.NginxMailingListEnglish@forum.nginx.org> References: <0bc74745a1530dccd603cb8a0511143f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0F585B6D-7C3B-4C22-BEE6-4C196B99F8E7@nginx.com> On 20 Mar 2014, at 16:58, dompz wrote: > I have a question to the nginx team - is there any chance for me to get the > debugging symbols for the nginx-1.4.7-1.el6.ngx.x86_64.rpm from > http://nginx.org/packages/rhel/6/x86_64/RPMS/? 
It seems the nginx-debug > package contains a different binary, so the symbols won't match the one from > the nginx package. Yes, this is a different binary, built with the '--with-debug' option. > Do you keep the symbols for the clean package and is there a chance for me > to access them? No, we don't keep symbols, but will likely add a debuginfo package in the next release.

From mdounin at mdounin.ru Thu Mar 20 13:26:36 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 20 Mar 2014 17:26:36 +0400 Subject: How to send proxy cache status to backend server? In-Reply-To: References: <20140314154142.GO34696@mdounin.ru> <20140317025022.GQ34696@mdounin.ru> <20140319141532.GY34696@mdounin.ru> Message-ID: <20140320132636.GE34696@mdounin.ru> Hello! On Thu, Mar 20, 2014 at 09:38:40AM +0530, Makailol Charls wrote: > Hi, > > Is there some way to achieve this? I want to pass requests to backend based > on cache status condition. This is not something easily possible, as cache status is only known after we started processing proxy_pass and already know which backend will be used. (Note that by default proxy_cache_key uses $proxy_host, which wouldn't be known otherwise.) If you want to check BYPASS as in your previous message, I would recommend checking relevant conditions from proxy_cache_bypass separately. As a more generic though less effective approach, an additional proxy layer may be used. -- Maxim Dounin http://nginx.org/

From nginx-forum at nginx.us Thu Mar 20 13:41:05 2014 From: nginx-forum at nginx.us (dompz) Date: Thu, 20 Mar 2014 09:41:05 -0400 Subject: Debugging symbols for nginx-1.4.7-1.el6.ngx.x86_64.rpm In-Reply-To: <0F585B6D-7C3B-4C22-BEE6-4C196B99F8E7@nginx.com> References: <0F585B6D-7C3B-4C22-BEE6-4C196B99F8E7@nginx.com> Message-ID: <3f2f5b5a51b70ca06ad4897ed72b920e.NginxMailingListEnglish@forum.nginx.org> Thanks for the quick answer! Good to know that you are going to add the debuginfo package :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248527,248532#msg-248532

From kworthington at gmail.com Thu Mar 20 14:08:13 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Thu, 20 Mar 2014 10:08:13 -0400 Subject: nginx-1.4.7 In-Reply-To: <20140318164457.GD34696@mdounin.ru> References: <20140318164457.GD34696@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.4.7 for Windows http://goo.gl/KNRYGj (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin-based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Mar 18, 2014 at 12:44 PM, Maxim Dounin wrote: > Changes with nginx 1.4.7 18 Mar > 2014 > > *) Security: a heap memory buffer overflow might occur in a worker > process while handling a specially crafted request by > ngx_http_spdy_module, potentially resulting in arbitrary code > execution (CVE-2014-0133). > Thanks to Lucas Molas, researcher at Programa STIC, Fundación Dr. > Manuel Sadosky, Buenos Aires, Argentina. > > *) Bugfix: in the "fastcgi_next_upstream" directive. > Thanks to Lucas Molas.
> > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Thu Mar 20 15:58:11 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Thu, 20 Mar 2014 11:58:11 -0400 Subject: nginx-1.5.12 In-Reply-To: <20140318164434.GZ34696@mdounin.ru> References: <20140318164434.GZ34696@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.5.12 for Windows http://goo.gl/TOFukx (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Mar 18, 2014 at 12:44 PM, Maxim Dounin wrote: > Changes with nginx 1.5.12 18 Mar > 2014 > > *) Security: a heap memory buffer overflow might occur in a worker > process while handling a specially crafted request by > ngx_http_spdy_module, potentially resulting in arbitrary code > execution (CVE-2014-0133). > Thanks to Lucas Molas, researcher at Programa STIC, Fundaci?n Dr. > Manuel Sadosky, Buenos Aires, Argentina. > > *) Feature: the "proxy_protocol" parameters of the "listen" and > "real_ip_header" directives, the $proxy_protocol_addr variable. > > *) Bugfix: in the "fastcgi_next_upstream" directive. > Thanks to Lucas Molas. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Thu Mar 20 17:14:23 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 20 Mar 2014 18:14:23 +0100 Subject: Nginx stop function in service file In-Reply-To: References: Message-ID: Thanks Igor for that clear and concise answer! --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From drhowarddrfine at charter.net Thu Mar 20 20:10:29 2014 From: drhowarddrfine at charter.net (drhowarddrfine at charter.net) Date: Thu, 20 Mar 2014 16:10:29 -0400 (EDT) Subject: Removing blank location block causes 404 errors Message-ID: <337ccb6b.10f307.144e11dc8f5.Webtop.49@charter.net> I'm new to nginx and had this in my conf file which I had copied some time ago but never filled it in and continued building a test page with images, css and javascript: location ~* ^.+\.(css|js|jpg|jpeg|gif|png|ico|gz|svg|svgz|ttf|otf|woff|eot|mp4|ogg|ogv|webm|pdf)$ { } Today, I deleted that block and now the images, css and javascript files are all returned a 404. I don't understand what's going on. I'm using version 1.5.11 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Thu Mar 20 20:24:30 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 20 Mar 2014 20:24:30 +0000 Subject: Removing blank location block causes 404 errors In-Reply-To: <337ccb6b.10f307.144e11dc8f5.Webtop.49@charter.net> References: <337ccb6b.10f307.144e11dc8f5.Webtop.49@charter.net> Message-ID: <20140320202430.GT29880@craic.sysops.org> On Thu, Mar 20, 2014 at 04:10:29PM -0400, drhowarddrfine at charter.net wrote: Hi there, > location ~* > ^.+\.(css|js|jpg|jpeg|gif|png|ico|gz|svg|svgz|ttf|otf|woff|eot|mp4|ogg|ogv|webm|pdf)$ > { > } > > Today, I deleted that block and now the images, css and javascript files > are all returned a 404. I don't understand what's going on. > I'm using version 1.5.11 What request do you make? What response do you want to get? ("The contents of *this* file.") Which location in your new config file is used to process that request? What config is in that location? f -- Francis Daly francis at daoine.org From drhowarddrfine at charter.net Thu Mar 20 20:33:35 2014 From: drhowarddrfine at charter.net (drhowarddrfine at charter.net) Date: Thu, 20 Mar 2014 16:33:35 -0400 (EDT) Subject: Removing blank location block causes 404 errors Message-ID: <4034ad04.10f57c.144e132f056.Webtop.49@charter.net> I was just typing what I think may be the reason. That other location blocks may be picking this up and eventually returning a 404 but, with that block in there, the image and css requests are getting "handled", ie, nothing is done to them at all, and then retrieved from their proper directory path. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Mar 20 20:59:40 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 20 Mar 2014 20:59:40 +0000 Subject: Removing blank location block causes 404 errors In-Reply-To: <4034ad04.10f57c.144e132f056.Webtop.49@charter.net> References: <4034ad04.10f57c.144e132f056.Webtop.49@charter.net> Message-ID: <20140320205940.GU29880@craic.sysops.org> On Thu, Mar 20, 2014 at 04:33:35PM -0400, drhowarddrfine at charter.net wrote: Hi there, > I was just typing what I think may be the reason. That other location > blocks may be picking this up and eventually returning a 404 but, with > that block in there, the image and css requests are getting "handled", > ie, nothing is done to them at all, and then retrieved from their proper > directory path. The "empty" location says "serve from the filesystem", and "start in the root directory inherited from server level". The new location that handles the request presumably says something different. f -- Francis Daly francis at daoine.org From steve at greengecko.co.nz Fri Mar 21 04:44:12 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Fri, 21 Mar 2014 17:44:12 +1300 Subject: redirect help please... this one's driving me mad! Message-ID: <1395377052.31659.74.camel@steve-new> I'm tryiing to migrate a site that uses codeigniter behind modx to draw pages, and this is the block that breaks the site when I remove it from .htaccess... RewriteRule ^$ home [L] RewriteCond %{HTTP_HOST} ^(?:www\.)?([^\.]*)\..*$ [NC] RewriteCond %{REQUEST_URI} !^/?$ RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule (.*) index.php?q=/_%1/pages/$1 [L,QSA] Does anyone have an idea on how to rewrite that? From debugging code added to the site, it seems to go through index.php 3 times for each page draw! 
Any suggestions gratefully received - I can't get my head around the params from the condition and those from the rule and how to do this in nginx! Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From pchychi at gmail.com Fri Mar 21 05:14:15 2014 From: pchychi at gmail.com (Payam Chychi) Date: Thu, 20 Mar 2014 22:14:15 -0700 Subject: redirect help please... this one's driving me mad! In-Reply-To: <1395377052.31659.74.camel@steve-new> References: <1395377052.31659.74.camel@steve-new> Message-ID: <532BCAA7.60400@gmail.com> Hi Steve, that's a lot of apache nonsense ;) that you shouldn't need check out: http://nginx.org/en/docs/http/converting_rewrite_rules.html another useful link with great commenting: https://blog.engineyard.com/2011/useful-rewrites-for-nginx/ -Payam On 2014-03-20, 9:44 PM, Steve Holdoway wrote: > I'm tryiing to migrate a site that uses codeigniter behind modx to draw > pages, and this is the block that breaks the site when I remove it > from .htaccess... > > RewriteRule ^$ home [L] > RewriteCond %{HTTP_HOST} ^(?:www\.)?([^\.]*)\..*$ [NC] > RewriteCond %{REQUEST_URI} !^/?$ > RewriteCond %{REQUEST_FILENAME} !-f > RewriteCond %{REQUEST_FILENAME} !-d > RewriteRule (.*) index.php?q=/_%1/pages/$1 [L,QSA] > > > Does anyone have an idea on how to rewrite that? From debugging code > added to the site, it seems to go through index.php 3 times for each > page draw! > > Any suggestions gratefully received - I can't get my head around the > params from the condition and those from the rule and how to do this in > nginx! > > Steve > > From pcgeopc at gmail.com Fri Mar 21 05:52:42 2014 From: pcgeopc at gmail.com (Geo P.C.) Date: Fri, 21 Mar 2014 11:22:42 +0530 Subject: nginx proxypass issue Message-ID: We have a setup in which nginx proxypass is working fine for tomcat like this: server { listen 80; server_name app.geo.com; location /app { proxy_pass https://192.168.1.100:8080/app; } Now while accessing http://app.geo.com/app is working fine. Now we need to access the same application as http://app.geo.com/paymentbut we need the tomcat war as same app. We configured proxypass as follows: server { listen 80; server_name app.geo.com; location /payment { proxy_pass https://192.168.1.100:8080/app; } But while accessing http://app.geo.com/payment tomcat application app is not loading properly. We are not getting any reliable error message. Can any one please help us to configure on this scenario. Thanks Geo -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Mar 21 12:13:25 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 21 Mar 2014 16:13:25 +0400 Subject: nginx proxypass issue In-Reply-To: References: Message-ID: <20140321121325.GX34696@mdounin.ru> Hello! On Fri, Mar 21, 2014 at 11:22:42AM +0530, Geo P.C. wrote: > We have a setup in which nginx proxypass is working fine for tomcat like > this: > > server { > listen 80; > server_name app.geo.com; > location /app { > proxy_pass https://192.168.1.100:8080/app; > } > > Now while accessing http://app.geo.com/app is working fine. > > Now we need to access the same application as > http://app.geo.com/paymentbut we need the tomcat war as same app. 
We > configured proxypass as follows: > > server { > listen 80; > server_name app.geo.com; > location /payment { > proxy_pass https://192.168.1.100:8080/app; > } > > But while accessing http://app.geo.com/payment tomcat application app is > not loading properly. We are not getting any reliable error message. > > Can any one please help us to configure on this scenario. Most likely, the problem is that your backend links various resources (e.g., images) using "/app/" url prefix, while externally visible prefix is "/payment/". Simpliest and most reliable solution is to fix/change backend to use links you need (many apps have configuration options like "baseurl" or something like this). As a hack/workaround, you may also try changing prefix using nginx itself, e.g., using sub_filter[1]. This approach may have problems and not recommended though. [1] http://nginx.org/en/docs/http/ngx_http_sub_module.html -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Mar 21 19:20:28 2014 From: nginx-forum at nginx.us (Varix) Date: Fri, 21 Mar 2014 15:20:28 -0400 Subject: Problem with application/octet-stream content-type Message-ID: How to configure that? the files xxx-yyy-ddd-hhh and xxx-yyy-ddd-hhh.html have the same html code in it. Typing example.com/xxx-yyy-ddd-hhh.html in the browser is OK. Typing example.com/xxx-yyy-ddd-hhh in the browser get the wronge content-type application/octet-stream. A text/html content-type is need. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248575,248575#msg-248575 From reallfqq-nginx at yahoo.fr Fri Mar 21 19:42:27 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 21 Mar 2014 20:42:27 +0100 Subject: Problem with application/octet-stream content-type In-Reply-To: References: Message-ID: Nginx binds file extensions with file types: http://nginx.org/en/docs/http/ngx_http_core_module.html#types An examples there links to http://nginx.org/en/docs/http/ngx_http_core_module.html#default_type --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil.mckee.ca at gmail.com Fri Mar 21 20:43:08 2014 From: neil.mckee.ca at gmail.com (neil mckee) Date: Fri, 21 Mar 2014 13:43:08 -0700 Subject: nginx-sflow-module Message-ID: To anyone using (or considering) the nginx-sflow-module for monitoring, then please use version 0.9.9 or later. Previous versions generated incorrect sFlow output in deployments with multiple worker-processes. Symptoms included spurious spikes in Ganglia graphs. https://code.google.com/p/nginx-sflow-module/ This sFlow-HTTP output is the same as you get from the corresponding sFlow agents for apache, tomcat, node.js and haproxy, as well as hardware load-balancers from F5 and A10. However it can be extended, so if there is anything you would like to see added then please let me know, or post to http://sflow.org/discussion/. Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Fri Mar 21 22:37:38 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Sat, 22 Mar 2014 11:37:38 +1300 Subject: redirect help please... this one's driving me mad! In-Reply-To: <532BCAA7.60400@gmail.com> References: <1395377052.31659.74.camel@steve-new> <532BCAA7.60400@gmail.com> Message-ID: <1395441458.31659.76.camel@steve-new> Sadly not one mention of the correct way to handle %1 and $1 in either of these pages. Anyone?? 
Cheers, steve On Thu, 2014-03-20 at 22:14 -0700, Payam Chychi wrote: > Hi Steve, > > that's a lot of apache nonsense ;) that you shouldn't need > > check out: > http://nginx.org/en/docs/http/converting_rewrite_rules.html > > another useful link with great commenting: > https://blog.engineyard.com/2011/useful-rewrites-for-nginx/ > > -Payam > > On 2014-03-20, 9:44 PM, Steve Holdoway wrote: > > I'm tryiing to migrate a site that uses codeigniter behind modx to draw > > pages, and this is the block that breaks the site when I remove it > > from .htaccess... > > > > RewriteRule ^$ home [L] > > RewriteCond %{HTTP_HOST} ^(?:www\.)?([^\.]*)\..*$ [NC] > > RewriteCond %{REQUEST_URI} !^/?$ > > RewriteCond %{REQUEST_FILENAME} !-f > > RewriteCond %{REQUEST_FILENAME} !-d > > RewriteRule (.*) index.php?q=/_%1/pages/$1 [L,QSA] > > > > > > Does anyone have an idea on how to rewrite that? From debugging code > > added to the site, it seems to go through index.php 3 times for each > > page draw! > > > > Any suggestions gratefully received - I can't get my head around the > > params from the condition and those from the rule and how to do this in > > nginx! > > > > Steve > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From john_de_f at hotmail.com Fri Mar 21 22:52:16 2014 From: john_de_f at hotmail.com (John de Freitas) Date: Fri, 21 Mar 2014 22:52:16 +0000 Subject: Rewrite HTTP status code Message-ID: Hello. I'm running nginx as a proxy for back-end servers that are returning a non-standard HTTP status phrase. I'd like to be able to rewrite the status to something standard. For example, if the back-end returns: HTTP/1.1 400 Not Understood I'd like to rewrite to: HTTP/1.1 400 Bad Request Thanks. Regards, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Mar 22 00:16:54 2014 From: francis at daoine.org (Francis Daly) Date: Sat, 22 Mar 2014 00:16:54 +0000 Subject: redirect help please... this one's driving me mad! In-Reply-To: <1395441458.31659.76.camel@steve-new> References: <1395377052.31659.74.camel@steve-new> <532BCAA7.60400@gmail.com> <1395441458.31659.76.camel@steve-new> Message-ID: <20140322001654.GX29880@craic.sysops.org> On Sat, Mar 22, 2014 at 11:37:38AM +1300, Steve Holdoway wrote: Hi there, > Sadly not one mention of the correct way to handle %1 and $1 in either > of these pages. Can you describe in words what you want your nginx config to do? As in: this request leads to that response. I can try guessing at what your apache config probably does; but you're probably best-placed to test it. > > On 2014-03-20, 9:44 PM, Steve Holdoway wrote: > > > I'm tryiing to migrate a site that uses codeigniter behind modx to draw > > > pages, and this is the block that breaks the site when I remove it > > > from .htaccess... So, .htaccess appears in a directory, and the directory prefix is stripped from the url before Rewrite considers it. (Maybe. What do your apache docs say?. You're more likely to get correctly corrected answers to "what does this apache config do" on an apache list than on a non-apache list.) So, the request is something below /dir/. > > > RewriteRule ^$ home [L] If the request is exactly /dir/, then rewrite (or perhaps redirect?) to /dir/home. 
> > > RewriteCond %{HTTP_HOST} ^(?:www\.)?([^\.]*)\..*$ [NC] If there is at least one "." in the Host header, store the first .-separated part (ignoring the leading "www.", if it is there) as (say) $site ... > > > RewriteCond %{REQUEST_URI} !^/?$ > > > RewriteCond %{REQUEST_FILENAME} !-f > > > RewriteCond %{REQUEST_FILENAME} !-d and if the request is not exactly "/" (or is that "/dir/"?), and the corresponding file-or-directory does not exist... > > > RewriteRule (.*) index.php?q=/_%1/pages/$1 [L,QSA] then rewrite (or perhaps redirect) to /dir/index.php?q=/_$site/pages/$uri. But $uri might be a version of $request_uri, and might be without the leading /dir/ part. See apache docs and/or your testing to know what is actually there, then ask here or check nginx docs for a suitable equivalent nginx variable. > > > Does anyone have an idea on how to rewrite that? From debugging code > > > added to the site, it seems to go through index.php 3 times for each > > > page draw! curl -i http://example.com/dir/fake Does that return a http redirect, or some specific content involving the words "example" and "fake"? Good luck with it, f -- Francis Daly francis at daoine.org From jqzone at gmail.com Sat Mar 22 03:07:28 2014 From: jqzone at gmail.com (jqzone) Date: Sat, 22 Mar 2014 11:07:28 +0800 Subject: how to proxy request to different upstream by url request parameters. Message-ID: upstream upstream1{ } upstream upstream2{ } upstream upstream3{ } upstream upstream4{ } request url is http://xxxx.com/id=xxx&key=xxx if (id is 1~100 key=key1) send to upstream1 if (id is 1~100 key=key2) send to upstream2 if (id is 101~200 key=key1) send to upstream3 if (id is 101~200 key=key2 ) send to upstream4 -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at glanzmann.de Sat Mar 22 06:32:14 2014 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Sat, 22 Mar 2014 07:32:14 +0100 Subject: how to proxy request to different upstream by url request parameters. In-Reply-To: References: Message-ID: <20140322063214.GB8153@glanzmann.de> Hello, > How to proxy request to different upstream by url request parameters? http://nginx.org/en/docs/http/ngx_http_upstream_module.html#sticky Cheers, Thomas From thomas at glanzmann.de Sat Mar 22 06:40:15 2014 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Sat, 22 Mar 2014 07:40:15 +0100 Subject: how to proxy request to different upstream by url request parameters. In-Reply-To: <20140322063214.GB8153@glanzmann.de> References: <20140322063214.GB8153@glanzmann.de> Message-ID: <20140322064015.GA15305@glanzmann.de> Hello, > > How to proxy request to different upstream by url request parameters? > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#sticky this is part of the commercial subscription, but you can probably obtain the same using a MAP like that: map $arg_key $backend { key1 backend1; key2 backend2; default fail; } proxy_pass http://$arg_key; Cheers, Thomas From chima.s at gmail.com Sat Mar 22 08:04:09 2014 From: chima.s at gmail.com (chima s) Date: Sat, 22 Mar 2014 13:34:09 +0530 Subject: nginx reverse proxy Message-ID: Hi, I have configured nginx as reverse proxy for jboss. All the system are hosted in Amazon cloud and using AWS ELB for both nginx and jboss WEB ELB <---> nginx reverse proxy <---> APP ELB <---> Jboss7. When i access abc.example.com/admin/login.do, i am getting page and after i provide username/password and submit, i am getting connection timeout. 
Found the URL get changed to abc.example.com:8080/admin/xxxx How to get rid of port 8080 in the response URL in nginx. Below us the nginx-1.4.7 proxy configuration: upstream appserv { server 192.168.1.100:8080 fail_timeout=0; } proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://appserv/; proxy_redirect http://appserv/ $scheme://$host/; proxy_set_header Connection ''; Also tried with proxy_redirect default and proxy_redirect http://appserv/ $scheme://$host:80/; still got the same issue Thanks Chima -------------- next part -------------- An HTML attachment was scrubbed... URL: From devel at jasonwoods.me.uk Sat Mar 22 09:11:59 2014 From: devel at jasonwoods.me.uk (Jason Woods) Date: Sat, 22 Mar 2014 09:11:59 +0000 Subject: redirect help please... this one's driving me mad! In-Reply-To: <20140322001654.GX29880@craic.sysops.org> References: <1395377052.31659.74.camel@steve-new> <532BCAA7.60400@gmail.com> <1395441458.31659.76.camel@steve-new> <20140322001654.GX29880@craic.sysops.org> Message-ID: On 22 Mar 2014, at 00.16, Francis Daly wrote: > On Sat, Mar 22, 2014 at 11:37:38AM +1300, Steve Holdoway wrote: > > Hi there, > >> Sadly not one mention of the correct way to handle %1 and $1 in either >> of these pages. > I seem to be having problems sending to this mailing list... but I will try again. Following is my response from a few days ago. (The ^$ I remember as a weird one. REQUEST_URI is "/" if in httpd.conf. But if you are within .htaccess it trims the directory path, so at the webroot the REQUEST_URI is empty string, "".) ====== Assuming I understand what those rules are trying to do, maybe something along these lines? (needs testing) location = / { rewrite ^ /home; } location / { if ($http_host ~* ^(?:www\.)?([^\.]*)\..*$) { try_files $uri $uri/ /index.php?q=/_$1%$request_uri; } } Then a regular \.php$ handler which returns 404 if the script doesn't exist and passes to fastcgi if it does. Jason From nginx-forum at nginx.us Sat Mar 22 16:28:16 2014 From: nginx-forum at nginx.us (Larry) Date: Sat, 22 Mar 2014 12:28:16 -0400 Subject: ssl cache pooling ? (kind of) Message-ID: <4892a875aaf2213bbc6eb4fc7c8dd9b8.NginxMailingListEnglish@forum.nginx.org> Hello, I would like to know if we could replicate the shared memory over multiple servers. One cannot reliably use the new ticket system since not all webbrowsers support this. My idea is to modify the ngx_shared_memory_add function to add a rpc stack to it. We would write down the upstream servers we want to make aware of the modification and send them the cache value. The only remaining question is how to make a corresponding with the mmap. Is there a corresponding logic directly between the ssl handshake and the place in memory choosen ? Are there any restrictions ? Basically it would be a full replication of the cache on every server, but allowing dynamic allocation so that every server remains independant. Since this does not consume that much of resources, we can easily allocate even 50Mo for the shared memory without any fear. Before I start coding, I would like to know if there are any mistakes in the idea. I may have missed something huge. Did I ? 
Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248588,248588#msg-248588 From icantthinkofone at charter.net Sat Mar 22 21:18:19 2014 From: icantthinkofone at charter.net (Doc) Date: Sat, 22 Mar 2014 17:18:19 -0400 (EDT) Subject: Pass path to thttpd for CGI Message-ID: <89457bf.168c47.144eba899c5.Webtop.47@charter.net> I sort of have this working but not quite. I have http://mysite.com/music/play/song.mp3 and this plays fine in the browser by accessing the mp3 file in its folder. However, I want to handle requests to download the song at http://mysite.com/music/download/song.mp3 with a CGI script that has no extension (it's C). I can get /music/download to be handled by CGI in thttpd but I don't know how to handle the changing song titles cause everything I've tried fails. location /music { try_files $uri.html $uri/index.html @music; } location @music { proxy_pass http://127.0.0.1:8000; } In thttpd.conf I have: host=127.0.0.1 port=8000 dir=/www/cgi-bin cgipat=/** And inside the cgi-bin is a standalone program called "download" so /music/download returns a web page generated by that executable just to test this. I've tried so many variations including rewrite that I've gotten myself pretty turned around as to what I should be doing. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Mar 22 21:45:55 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 23 Mar 2014 01:45:55 +0400 Subject: ssl cache pooling ? (kind of) In-Reply-To: <4892a875aaf2213bbc6eb4fc7c8dd9b8.NginxMailingListEnglish@forum.nginx.org> References: <4892a875aaf2213bbc6eb4fc7c8dd9b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140322214554.GB34696@mdounin.ru> Hello! On Sat, Mar 22, 2014 at 12:28:16PM -0400, Larry wrote: > Hello, > > I would like to know if we could replicate the shared memory over multiple > servers. > > One cannot reliably use the new ticket system since not all webbrowsers > support this. > > My idea is to modify the ngx_shared_memory_add function to add a rpc stack > to it. > > We would write down the upstream servers we want to make aware of the > modification and send them the cache value. > > The only remaining question is how to make a corresponding with the mmap. > > Is there a corresponding logic directly between the ssl handshake and the > place in memory choosen ? > Are there any restrictions ? > > Basically it would be a full replication of the cache on every server, but > allowing dynamic allocation so that every server remains independant. > > Since this does not consume that much of resources, we can easily allocate > even 50Mo for the shared memory without any fear. > > Before I start coding, I would like to know if there are any mistakes in > the idea. I may have missed something huge. > > Did I ? You may have better luck adding replication logic to the session cache. The idea of replication of shared memory looks utterly broken, in particular as there are pointers stored in shared memory (take a look at ngx_ssl_new_session() for details). -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Sun Mar 23 10:50:18 2014 From: nginx-forum at nginx.us (Larry) Date: Sun, 23 Mar 2014 06:50:18 -0400 Subject: ssl cache pooling ? (kind of) In-Reply-To: <20140322214554.GB34696@mdounin.ru> References: <20140322214554.GB34696@mdounin.ru> Message-ID: <91d5249c4297940d8052c31c8d65992c.NginxMailingListEnglish@forum.nginx.org> Yep, Missed that -big- one. Failed idea. 
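For anyone who picks this idea up later: the portable way to replicate sessions between servers is to ship them in serialized form rather than copying slab pages, since (as Maxim notes) the shared-memory nodes hold process-local pointers. As far as I can tell the session data itself is already kept ASN.1-serialized in the cache, so a replication hook would mostly be moving those bytes around. A rough C sketch of the serialization side, using plain OpenSSL calls and nothing nginx-specific:

    #include <openssl/ssl.h>

    /* Flatten a session into bytes that can be sent to another host.
     * Raw cache nodes contain rbtree/queue pointers that only mean
     * something inside one process's own mapping. */
    static int
    serialize_session(SSL_SESSION *sess, unsigned char *buf, int buf_size)
    {
        unsigned char  *p;
        int             len;

        len = i2d_SSL_SESSION(sess, NULL);      /* ask for the required length */
        if (len <= 0 || len > buf_size) {
            return -1;
        }

        p = buf;
        return i2d_SSL_SESSION(sess, &p);       /* write DER bytes, return length */
    }

On the receiving server the bytes would go back through d2i_SSL_SESSION() before being inserted into that server's own cache.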
Many example show how to loadbalance ssl without problems like lvs, haproxy http://virtuallyhyper.com/2013/05/configure-haproxy-to-load-balance-sites-with-ssl/ So, Am I basically creating an imaginary problem ? And if so, why ssl ticket (rfc 5077) even exists ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248588,248593#msg-248593 From mdounin at mdounin.ru Sun Mar 23 13:50:23 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 23 Mar 2014 17:50:23 +0400 Subject: Rewrite HTTP status code In-Reply-To: References: Message-ID: <20140323135023.GC34696@mdounin.ru> Hello! On Fri, Mar 21, 2014 at 10:52:16PM +0000, John de Freitas wrote: > Hello. I'm running nginx as a proxy for back-end servers that are > returning a non-standard HTTP status phrase. I'd like to be able to > rewrite the status to something standard. For > example, if the back-end returns: > > HTTP/1.1 400 Not Understood > > I'd like to rewrite to: > > HTTP/1.1 400 Bad Request You may do so using proxy_intercept_errors, though it will rewrite response body, too. http://nginx.org/r/proxy_intercept_errors -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Sun Mar 23 14:18:13 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 23 Mar 2014 18:18:13 +0400 Subject: ssl cache pooling ? (kind of) In-Reply-To: <91d5249c4297940d8052c31c8d65992c.NginxMailingListEnglish@forum.nginx.org> References: <20140322214554.GB34696@mdounin.ru> <91d5249c4297940d8052c31c8d65992c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140323141813.GE34696@mdounin.ru> Hello! On Sun, Mar 23, 2014 at 06:50:18AM -0400, Larry wrote: > Yep, > > Missed that -big- one. Failed idea. > > Many example show how to loadbalance ssl without problems like lvs, haproxy > > http://virtuallyhyper.com/2013/05/configure-haproxy-to-load-balance-sites-with-ssl/ > > So, Am I basically creating an imaginary problem ? > > And if so, why ssl ticket (rfc 5077) even exists ? Both session cache and session tickets are needed to reduce cost of creating of new connections. It's not something mandatory, rather an optimization. -- Maxim Dounin http://nginx.org/ From adam at adampearlman.com Sun Mar 23 15:48:49 2014 From: adam at adampearlman.com (Adam Pearlman) Date: Sun, 23 Mar 2014 11:48:49 -0400 Subject: 403 after changing root, but permissions look correct Message-ID: I've been struggling with this for a few hours. I installed nginx 1.4.6 on Fedora 20. The test page displayed fine. I changed the root, leaving all other configuration the same, and I get a 403 Forbidden error. If I look at the permissions for the original test page and the new page, they appear identical. Working test page: namei -om /usr/share/nginx/html/index.html f: /usr/share/nginx/html/index.html dr-xr-xr-x root root / drwxr-xr-x root root usr drwxr-xr-x root root share drwxr-xr-x root root nginx drwxr-xr-x root root html -rw-r--r-- root root index.html Not working: namei -om /var/www/html/index.html f: /var/www/html/index.html dr-xr-xr-x root root / drwxr-xr-x root root var drwxr-xr-x root root www drwxr-xr-x root root html -rw-r--r-- root root index.html The error log seems to be what I would expect as well: 2014/03/23 12:45:08 [error] 5490#0: *13 open() "/var/www/html/index.html" failed (13: Permission denied), client: XXX.XX.XXX.XXX, server: localhost, request: "GET /index.html HTTP/1.1", host: " ec2-XXX-XX-XXX-XXX.compute-1.amazonaws.com" The Nginx config has "user nginx" - I tried using root and it made no difference. 
I also made user ngnix the owner & group of the files, but that didn't work. If I move the index file from /var/www/html to /usr/share/nginx/html (the test file location) it works fine making me suspect the path, but as I said, permissions appear correct. Any help would be very much appreciated. Thanks! - Adam I've included the config file below just in case: # For more information on configuration, see: # * Official English Documentation: http://nginx.org/en/docs/ # * Official Russian Documentation: http://nginx.org/ru/docs/ user nginx; worker_processes 1; error_log /var/log/nginx/error.log; #error_log /var/log/nginx/error.log notice; #error_log /var/log/nginx/error.log info; pid /run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; # Load modular configuration files from the /etc/nginx/conf.d directory. # See http://nginx.org/en/docs/ngx_core_module.html#include # for more information. include /etc/nginx/conf.d/*.conf; index index.html index.htm; server { listen 80; server_name localhost; root /usr/share/nginx/html; ################THIS WORKS #root /var/www/html; #####################THIS DOESN'T #charset koi8-r; #access_log /var/log/nginx/host.access.log main; location / { } # redirect server error pages to the static page /40x.html # error_page 404 /404.html; location = /40x.html { } # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # root html; # location / { # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # root html; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # } #} } -------------- next part -------------- An HTML attachment was scrubbed... URL: From icantthinkofone at charter.net Sun Mar 23 16:03:25 2014 From: icantthinkofone at charter.net (Doc) Date: Sun, 23 Mar 2014 12:03:25 -0400 (EDT) Subject: Full pathname not sent Message-ID: <6617c1c6.1675b1.144efaea93f.Webtop.45@charter.net> This might be more of a regex problem on my part than nginx. I do a rewrite to pass the full pathname to thttpd like this: location ~ /radio/download/.*\.mp3$ { rewrite ^ /test/$1; } location /test { proxy_pass http://127.0.0.1:8000; } However the pathname received by thttpd is /test/ without the mp3 filename. -------------- next part -------------- An HTML attachment was scrubbed... 
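The rewrite above replaces the URI but never captures anything, so $1 is empty and thttpd only ever sees /test/. A minimal fix is to put the capture into the rewrite pattern itself and let the existing location /test block do the proxying. An untested sketch, keeping the original URL layout:

    location ~ ^/radio/download/.*\.mp3$ {
        # the parentheses are what $1 refers to
        rewrite ^/radio/download/(.*)$ /test/$1 last;
    }

    location /test {
        proxy_pass http://127.0.0.1:8000;
    }

The same thing can also be done in a single block by using the break flag and putting proxy_pass next to the rewrite.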
URL: From mrinmoy.das91 at gmail.com Sun Mar 23 20:22:40 2014 From: mrinmoy.das91 at gmail.com (Mrinmoy Das) Date: Mon, 24 Mar 2014 01:52:40 +0530 Subject: Fwd: How to get rid of caching completely In-Reply-To: References: Message-ID: Mrinmoy Das http://goromlagche.in/ ---------- Forwarded message ---------- From: Mrinmoy Das Date: Mon, Mar 24, 2014 at 1:47 AM Subject: How to get rid of caching completely To: nginx at nginx.org Is there a way to stop caching at all level? I have added * expires -1;* * add_header Cache-Control "private, must-revalidate, max-age=0";* * add_header Last-Modified "";* this on my nginx server block, but there are still some caching are happening. Any way to get rid of that completely. :) Mrinmoy Das http://goromlagche.in/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrinmoy.das91 at gmail.com Sun Mar 23 20:23:28 2014 From: mrinmoy.das91 at gmail.com (Mrinmoy Das) Date: Mon, 24 Mar 2014 01:53:28 +0530 Subject: How to get rid of caching completely Message-ID: Is there a way to stop caching at all level? I have added * expires -1;* * add_header Cache-Control "private, must-revalidate, max-age=0";* * add_header Last-Modified "";* this on my nginx server block, but there are still some caching are happening. Any way to get rid of that completely. :) Mrinmoy Das http://goromlagche.in/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Sun Mar 23 21:35:28 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 24 Mar 2014 10:35:28 +1300 Subject: 403 after changing root, but permissions look correct In-Reply-To: References: Message-ID: <1395610528.31659.104.camel@steve-new> Having just had a similar problem with migrating a MySQL database, I suggest that you check whether SELinux/Apparmor is running. Why prople think it's ok to use a program that can be switched off in an instant to improve their 'security' is and always will be a mystery to me! Cheers, Steve On Sun, 2014-03-23 at 11:48 -0400, Adam Pearlman wrote: > I've been struggling with this for a few hours. > > I installed nginx 1.4.6 on Fedora 20. The test page displayed fine. I > changed the root, leaving all other configuration the same, and I get > a 403 Forbidden error. > > If I look at the permissions for the original test page and the new > page, they appear identical. > > Working test page: > namei -om /usr/share/nginx/html/index.html > f: /usr/share/nginx/html/index.html > dr-xr-xr-x root root / > drwxr-xr-x root root usr > drwxr-xr-x root root share > drwxr-xr-x root root nginx > drwxr-xr-x root root html > -rw-r--r-- root root index.html > > Not working: > namei -om /var/www/html/index.html > f: /var/www/html/index.html > dr-xr-xr-x root root / > drwxr-xr-x root root var > drwxr-xr-x root root www > drwxr-xr-x root root html > -rw-r--r-- root root index.html > > The error log seems to be what I would expect as well: > 2014/03/23 12:45:08 [error] 5490#0: *13 open() > "/var/www/html/index.html" failed (13: Permission denied), client: > XXX.XX.XXX.XXX, server: localhost, request: "GET /index.html > HTTP/1.1", host: "ec2-XXX-XX-XXX-XXX.compute-1.amazonaws.com" > > > The Nginx config has "user nginx" - I tried using root and it made no > difference. I also made user ngnix the owner & group of the files, but > that didn't work. 
If I move the index file from /var/www/html > to /usr/share/nginx/html (the test file location) it works fine making > me suspect the path, but as I said, permissions appear correct. > > > Any help would be very much appreciated. Thanks! > > > - Adam > > > I've included the config file below just in case: > > > # For more information on configuration, see: > # * Official English Documentation: http://nginx.org/en/docs/ > # * Official Russian Documentation: http://nginx.org/ru/docs/ > > user nginx; > worker_processes 1; > > error_log /var/log/nginx/error.log; > #error_log /var/log/nginx/error.log notice; > #error_log /var/log/nginx/error.log info; > > pid /run/nginx.pid; > > events { > worker_connections 1024; > } > > http { > include /etc/nginx/mime.types; > default_type application/octet-stream; > > log_format main '$remote_addr - $remote_user [$time_local] > "$request" ' > '$status $body_bytes_sent "$http_referer" ' > '"$http_user_agent" "$http_x_forwarded_for"'; > > access_log /var/log/nginx/access.log main; > > sendfile on; > #tcp_nopush on; > > #keepalive_timeout 0; > keepalive_timeout 65; > > #gzip on; > > # Load modular configuration files from the /etc/nginx/conf.d > directory. > # See http://nginx.org/en/docs/ngx_core_module.html#include > # for more information. > include /etc/nginx/conf.d/*.conf; > > index index.html index.htm; > > server { > listen 80; > server_name localhost; > root /usr/share/nginx/html; ################THIS WORKS > #root /var/www/html; #####################THIS DOESN'T > > #charset koi8-r; > > #access_log /var/log/nginx/host.access.log main; > > location / { > } > > # redirect server error pages to the static page /40x.html > # > error_page 404 /404.html; > location = /40x.html { > } > > # redirect server error pages to the static page /50x.html > # > error_page 500 502 503 504 /50x.html; > location = /50x.html { > } > > # proxy the PHP scripts to Apache listening on 127.0.0.1:80 > # > #location ~ \.php$ { > # proxy_pass http://127.0.0.1; > #} > > # pass the PHP scripts to FastCGI server listening on > 127.0.0.1:9000 > # > #location ~ \.php$ { > # root html; > # fastcgi_pass 127.0.0.1:9000; > # fastcgi_index index.php; > # fastcgi_param SCRIPT_FILENAME /scripts > $fastcgi_script_name; > # include fastcgi_params; > #} > > # deny access to .htaccess files, if Apache's document root > # concurs with nginx's one > # > #location ~ /\.ht { > # deny all; > #} > } > > # another virtual host using mix of IP-, name-, and port-based > configuration > # > #server { > # listen 8000; > # listen somename:8080; > # server_name somename alias another.alias; > # root html; > > # location / { > # } > #} > > > # HTTPS server > # > #server { > # listen 443; > # server_name localhost; > # root html; > > # ssl on; > # ssl_certificate cert.pem; > # ssl_certificate_key cert.key; > > # ssl_session_timeout 5m; > > # ssl_protocols SSLv2 SSLv3 TLSv1; > # ssl_ciphers HIGH:!aNULL:!MD5; > # ssl_prefer_server_ciphers on; > > # location / { > # } > #} > > } > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From kunalvjti at gmail.com Sun Mar 23 22:35:54 2014 From: kunalvjti at gmail.com (Kunal Pariani) Date: Sun, 23 Mar 2014 15:35:54 -0700 Subject: nginx throws a 502 (invalid header) when downloading a file attachment if filename has a space character Message-ID: Hello, I 
have nginx set as a reverse proxy for a mail server and it throws this 502 (invalid header) error while trying to fetch a file with a space in the filename. Any clues on where is this bug in the nginx code ? I searched on the net and found this one forum but it points to some issue in the java code (not sure where that is as nginx is pretty much in C :) http://stackoverflow.com/questions/8005834/empty-space-in-filename-issue-while-downloading-file Thanks -Kunal -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Mon Mar 24 00:06:35 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 24 Mar 2014 01:06:35 +0100 Subject: nginx throws a 502 (invalid header) when downloading a file attachment if filename has a space character In-Reply-To: References: Message-ID: Hi, > I have nginx set as a reverse proxy for a mail server and it throws > this 502 (invalid header) error while trying to fetch a file with a > space in the filename. Any clues on where is this bug in the nginx code? Prior to jumping to conlusion about bugs in nginx, how does this response header actually look like? Section 19.5.1 in RFC2616 [1] mandates the content of the filename-parm needs to be a quoted string: > filename-parm = "filename" "=" quoted-string > [...] > An example is > Content-Disposition: attachment; filename="fname.ext" Does your response header correctly quote the filename? Regards, Lukas [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1 From jqzone at gmail.com Mon Mar 24 01:46:55 2014 From: jqzone at gmail.com (jqzone) Date: Mon, 24 Mar 2014 09:46:55 +0800 Subject: how to proxy request to different upstream by url request parameters. In-Reply-To: <20140322064015.GA15305@glanzmann.de> References: <20140322063214.GB8153@glanzmann.de> <20140322064015.GA15305@glanzmann.de> Message-ID: Thanks,it works well! On Sat, Mar 22, 2014 at 2:40 PM, Thomas Glanzmann wrote: > Hello, > > > > How to proxy request to different upstream by url request parameters? > > > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#sticky > > this is part of the commercial subscription, but you can probably obtain > the same using a MAP like that: > > map $arg_key $backend { > key1 backend1; > key2 backend2; > default fail; > } > > proxy_pass http://$arg_key; > > Cheers, > Thomas > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kunalvjti at gmail.com Mon Mar 24 02:11:05 2014 From: kunalvjti at gmail.com (Kunal Pariani) Date: Sun, 23 Mar 2014 19:11:05 -0700 Subject: nginx throws a 502 (invalid header) when downloading a file attachment if filename has a space character In-Reply-To: References: Message-ID: What debugs should i enable & how to see these response headers ? I do see this error though. 2014/03/03 14:04:32 [error] 11259#0: *6 upstream sent invalid header while reading response header from upstream, client: 127.0.0.1, server: xxx.default, request: "GET /service/home/~/?auth=co&loc=en_GB&id=259&part=3 HTTP/1.1", upstream: "https://127.0.1.1:8443/service/home/~/?auth=co&loc=en_GB&id=259&part=3", host: "xxx", referrer: "https://xxx/ " So can this be that the upstream is sending the right header (because it works fine when there is no space in the filename) but nginx is parsing it incorrectly ? 
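On the "what debugs should I enable" part: the raw upstream header parsing only shows up in the error log at debug level, and that needs an nginx binary built with --with-debug (nginx -V shows the configure arguments). A sketch, with a placeholder log path:

    error_log /var/log/nginx/error.log debug;

With that in place, the "upstream sent invalid header" message should be preceded by "http proxy header:" lines listing each response header as nginx read it, which usually points straight at the offending one.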
Thanks -Kunal On Sun, Mar 23, 2014 at 5:06 PM, Lukas Tribus wrote: > Hi, > > > > I have nginx set as a reverse proxy for a mail server and it throws > > this 502 (invalid header) error while trying to fetch a file with a > > space in the filename. Any clues on where is this bug in the nginx code? > > Prior to jumping to conlusion about bugs in nginx, how does this response > header actually look like? > > Section 19.5.1 in RFC2616 [1] mandates the content of the filename-parm > needs to be a quoted string: > > filename-parm = "filename" "=" quoted-string > > [...] > > An example is > > Content-Disposition: attachment; filename="fname.ext" > > Does your response header correctly quote the filename? > > > > Regards, > > Lukas > > > [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1 > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sven.Riedel at prosiebensat1digital.de Mon Mar 24 06:20:03 2014 From: Sven.Riedel at prosiebensat1digital.de (Riedel Sven) Date: Mon, 24 Mar 2014 06:20:03 +0000 Subject: Proper display of a "server overloaded" page Message-ID: Hi, I want to set up a small static HTML Page that informs visitors that our site is currently under load if an appserver cannot handle the request within N seconds. What would be the proper way to do this? Just set up a 504.html error page? And what are the different conditions when nginx will return a 503 vs a 504? Are there subtle behavioral differences between a proxy_pass statement and an upstream pool definition? Thanks, Sven -- Sven Riedel ? Senior Systems Architect Central Systems Architecture ProSiebenSat.1 Digital GmbH ? Ein Unternehmen der ProSiebenSat.1 Media AG Medienallee 4 ? D-85774 Unterf?hring ? sven.riedel at prosiebensat1digital.de Gesch?ftsf?hrer: Markan Karajica (Vorsitzender), Dr. Sebastian Weil, Thomas Port Firmensitz: Unterf?hring ? HRB 130417 AG M?nchen ? USt.-ID.-Nr. DE 218559421 ? St.-Nr. 9143/104/10137 From shahzaib.cb at gmail.com Mon Mar 24 07:14:20 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 24 Mar 2014 12:14:20 +0500 Subject: Hotlinking protection not working on firefox 28.0 !! Message-ID: Hello, hotlinking protection works fine with chrome and gives 403 forbidden error but somehow it is not working for firefox clients and plays the forbidden video without restrictions. Following is the config : server { listen 80; server_name lwx006.domain.com lwx006.gear3rd.net lwx006.gear3rd.com ; # limit_conn perip 2; limit_rate 600k; # access_log /websites/theos.in/logs/access.log main; location / { root /var/www/html/domain; index index.html index.htm index.php; } location ~ \.(flv|jpg|jpeg)$ { flv; root /var/www/html/domain; # limit_conn addr 5; # limit_req zone=one burst=12; # aio on; # directio 512; # output_buffers 1 512k; expires 7d; # valid_referers none blocked domain.com *.domain.com *. facebook.com *.twitter.com tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; valid_referers none blocked domain.com *.domain.com *. facebook.com *.twitter.com *.domain.com *.gear3rd.net tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; if ($invalid_referer) { return 403; } } location ~ -720\.(mp4)$ { mp4; expires 7d; limit_rate 2000k; root /var/www/html/domain; # valid_referers none blocked domain.com *.domain.com *. 
facebook.com *.twitter.com tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; valid_referers none blocked domain.com *.domain.com *. facebook.com *.twitter.com *.domain.com *.gear3rd.net tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; if ($invalid_referer) { return 403; } } location ~ -480\.(mp4)$ { mp4; expires 7d; # limit_rate 250k; root /var/www/html/domain; # valid_referers none blocked domain.com *.domain.com *. facebook.com *.twitter.com tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; valid_referers none blocked domain.com *.domain.com *. facebook.com *.twitter.com *.domain.com *.gear3rd.net tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; if ($invalid_referer) { return 403; } } location ~ -360\.(mp4)$ { mp4; expires 7d; # limit_rate 250k; root /var/www/html/domain; # valid_referers none blocked domain.com *.domain.com *. facebook.com *.twitter.com tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; valid_referers none blocked domain.com *.domain.com *. facebook.com *.twitter.com *.domain.com *.gear3rd.net tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; if ($invalid_referer) { return 403; } } location ~ \.(mp4)$ { mp4; root /var/www/html/domain; # limit_conn addr 40; # limit_req zone=one burst=4; # aio on; # directio 4m; # output_buffers 1 128k; expires 7d; # valid_referers none blocked domain.com *.domain.com *. facebook.com *.twitter.com tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; valid_referers none blocked domain.com *.domain.com *.facebook.com*. twitter.com *.domain.com *.gear3rd.net tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; if ($invalid_referer) { return 403; } } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ \.php$ { root /var/www/html/domain; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } location ~ /\.ht { deny all; } } Help will be highly appreciated !! Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From nunomagalhaes at eu.ipp.pt Mon Mar 24 10:32:45 2014 From: nunomagalhaes at eu.ipp.pt (=?UTF-8?Q?Nuno_Magalh=C3=A3es?=) Date: Mon, 24 Mar 2014 10:32:45 +0000 Subject: nginx throws a 502 (invalid header) when downloading a file attachment if filename has a space character In-Reply-To: References: Message-ID: On Mon, Mar 24, 2014 at 2:11 AM, Kunal Pariani wrote: > So can this be that the upstream is sending the right header (because it > works fine when there is no space in the filename) but nginx is parsing it > incorrectly ? Use the web developer tools in your browser to see the header and/or try encoding the filename (not sure about this last one). Or GET it yourself with telnet or something. Cheers, Nuno From luky-37 at hotmail.com Mon Mar 24 10:56:54 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 24 Mar 2014 11:56:54 +0100 Subject: nginx throws a 502 (invalid header) when downloading a file attachment if filename has a space character In-Reply-To: References: , , Message-ID: Hi, > What debugs should i enable & how to see these response headers ? I do? > see this error though.? Just use curl for example and request it directly from your backend: curl -k -I "https://127.0.1.1:8443/service/home/~/?auth=co&loc=en_GB&id=259&part=3" So you can check the actual response header. 
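A small addition to the curl check: -D - dumps the response headers even when the body is binary, so the interesting ones can be filtered out. The URL is just the one from the error message above:

    curl -sk -D - -o /dev/null \
        "https://127.0.1.1:8443/service/home/~/?auth=co&loc=en_GB&id=259&part=3" \
        | grep -iE 'content-(type|disposition|description)'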
Regards, Lukas From kyprizel at gmail.com Mon Mar 24 10:59:57 2014 From: kyprizel at gmail.com (kyprizel) Date: Mon, 24 Mar 2014 14:59:57 +0400 Subject: SSL session cache lifetime vs session ticket lifetime In-Reply-To: <20140318160053.GX34696@mdounin.ru> References: <20140318113311.GL34696@mdounin.ru> <20140318160053.GX34696@mdounin.ru> Message-ID: something like this? On Tue, Mar 18, 2014 at 8:00 PM, Maxim Dounin wrote: > Hello! > > On Tue, Mar 18, 2014 at 03:42:33PM +0400, kyprizel wrote: > > > What will be the best way to do it? > > Probably a flag in ngx_slab_pool_t will be good enough. > > > > > > > On Tue, Mar 18, 2014 at 3:33 PM, Maxim Dounin > wrote: > > > > > Hello! > > > > > > On Tue, Mar 18, 2014 at 03:26:10PM +0400, kyprizel wrote: > > > > > > > Hi, > > > > currently SSL session lifetime and SSL ticket lifetime are equal in > > > nginx. > > > > > > > > If we use session tickets with big enough lifetime (12hrs), we get a > lot > > > of > > > > error log messages while allocating new sessions in shared memory: > > > > > > > > 2014/03/18 13:36:08 [crit] 18730#0: ngx_slab_alloc() failed: no > memory in > > > > SSL session shared cache "SSL" > > > > > > > > We don't want to increase session cache size b/c working with it is a > > > > blocking operation and I believe it doesn't work good enought in our > > > > network scheme. > > > > > > Just a side note: I don't think that size matters from performance > > > point of view. The only real downside is memory used. > > > > > > > As I can see - those messages are generated by ngx_slab_alloc_pages() > > > even > > > > if session was added to the cache after expiration of some old ones. > > > > > > > > So, what do you think if we add one more config parameter to split > > > session > > > > cache and session ticket lifetimes? > > > > > > May be better approach will be to just avoid such messages? > > > > > > -- > > > Maxim Dounin > > > http://nginx.org/ > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: slab.patch Type: application/octet-stream Size: 1379 bytes Desc: not available URL: From nginx-forum at nginx.us Mon Mar 24 11:01:08 2014 From: nginx-forum at nginx.us (Larry) Date: Mon, 24 Mar 2014 07:01:08 -0400 Subject: ssl cache pooling ? (kind of) In-Reply-To: <20140323141813.GE34696@mdounin.ru> References: <20140323141813.GE34696@mdounin.ru> Message-ID: <54332235361daf6e5dd59fdb16d364ad.NginxMailingListEnglish@forum.nginx.org> Thanks Maxim, I will investigate it and get my results here. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248588,248614#msg-248614 From nginx-forum at nginx.us Mon Mar 24 11:20:21 2014 From: nginx-forum at nginx.us (Larry) Date: Mon, 24 Mar 2014 07:20:21 -0400 Subject: ssl cache pooling ? (kind of) In-Reply-To: <20140323141813.GE34696@mdounin.ru> References: <20140323141813.GE34696@mdounin.ru> Message-ID: I will try to code something. Should I put it back here if successful or not ? 
Anyway, thanks for your knowledge Maxim. Larry Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248588,248610#msg-248610 From mdounin at mdounin.ru Mon Mar 24 11:51:26 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 24 Mar 2014 15:51:26 +0400 Subject: Proper display of a "server overloaded" page In-Reply-To: References: Message-ID: <20140324115125.GG34696@mdounin.ru> Hello! On Mon, Mar 24, 2014 at 06:20:03AM +0000, Riedel Sven wrote: > Hi, > I want to set up a small static HTML Page that informs visitors that our > site is currently under load if an appserver cannot handle the request > within N seconds. > > What would be the proper way to do this? Just set up a 504.html error > page? And what are the different conditions when nginx will return a 503 > vs a 504? > Are there subtle behavioral differences between a proxy_pass statement and > an upstream pool definition? There two errors which can be returned by proxy module in case of upstream problems: 502 (if an error occurs) and 504 (if a request times out). Usually it's good idea to handle both of them. The 503 code is never used by proxy, it's only returned by limit_conn / limit_req. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Mar 24 11:56:34 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 24 Mar 2014 15:56:34 +0400 Subject: SSL session cache lifetime vs session ticket lifetime In-Reply-To: References: <20140318113311.GL34696@mdounin.ru> <20140318160053.GX34696@mdounin.ru> Message-ID: <20140324115634.GI34696@mdounin.ru> Hello! On Mon, Mar 24, 2014 at 02:59:57PM +0400, kyprizel wrote: > something like this? Yes, something like. But initialized and with a better name. > > > On Tue, Mar 18, 2014 at 8:00 PM, Maxim Dounin wrote: > > > Hello! > > > > On Tue, Mar 18, 2014 at 03:42:33PM +0400, kyprizel wrote: > > > > > What will be the best way to do it? > > > > Probably a flag in ngx_slab_pool_t will be good enough. > > > > > > > > > > > On Tue, Mar 18, 2014 at 3:33 PM, Maxim Dounin > > wrote: > > > > > > > Hello! > > > > > > > > On Tue, Mar 18, 2014 at 03:26:10PM +0400, kyprizel wrote: > > > > > > > > > Hi, > > > > > currently SSL session lifetime and SSL ticket lifetime are equal in > > > > nginx. > > > > > > > > > > If we use session tickets with big enough lifetime (12hrs), we get a > > lot > > > > of > > > > > error log messages while allocating new sessions in shared memory: > > > > > > > > > > 2014/03/18 13:36:08 [crit] 18730#0: ngx_slab_alloc() failed: no > > memory in > > > > > SSL session shared cache "SSL" > > > > > > > > > > We don't want to increase session cache size b/c working with it is a > > > > > blocking operation and I believe it doesn't work good enought in our > > > > > network scheme. > > > > > > > > Just a side note: I don't think that size matters from performance > > > > point of view. The only real downside is memory used. > > > > > > > > > As I can see - those messages are generated by ngx_slab_alloc_pages() > > > > even > > > > > if session was added to the cache after expiration of some old ones. > > > > > > > > > > So, what do you think if we add one more config parameter to split > > > > session > > > > > cache and session ticket lifetimes? > > > > > > > > May be better approach will be to just avoid such messages? 
> > > > > > > > -- > > > > Maxim Dounin > > > > http://nginx.org/ > > > > > > > > _______________________________________________ > > > > nginx mailing list > > > > nginx at nginx.org > > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/ From kyprizel at gmail.com Mon Mar 24 12:10:33 2014 From: kyprizel at gmail.com (kyprizel) Date: Mon, 24 Mar 2014 16:10:33 +0400 Subject: SSL session cache lifetime vs session ticket lifetime In-Reply-To: <20140324115634.GI34696@mdounin.ru> References: <20140318113311.GL34696@mdounin.ru> <20140318160053.GX34696@mdounin.ru> <20140324115634.GI34696@mdounin.ru> Message-ID: Any suggestions to the name? On Mon, Mar 24, 2014 at 3:56 PM, Maxim Dounin wrote: > Hello! > > On Mon, Mar 24, 2014 at 02:59:57PM +0400, kyprizel wrote: > > > something like this? > > Yes, something like. But initialized and with a better name. > > > > > > > On Tue, Mar 18, 2014 at 8:00 PM, Maxim Dounin > wrote: > > > > > Hello! > > > > > > On Tue, Mar 18, 2014 at 03:42:33PM +0400, kyprizel wrote: > > > > > > > What will be the best way to do it? > > > > > > Probably a flag in ngx_slab_pool_t will be good enough. > > > > > > > > > > > > > > > On Tue, Mar 18, 2014 at 3:33 PM, Maxim Dounin > > > wrote: > > > > > > > > > Hello! > > > > > > > > > > On Tue, Mar 18, 2014 at 03:26:10PM +0400, kyprizel wrote: > > > > > > > > > > > Hi, > > > > > > currently SSL session lifetime and SSL ticket lifetime are equal > in > > > > > nginx. > > > > > > > > > > > > If we use session tickets with big enough lifetime (12hrs), we > get a > > > lot > > > > > of > > > > > > error log messages while allocating new sessions in shared > memory: > > > > > > > > > > > > 2014/03/18 13:36:08 [crit] 18730#0: ngx_slab_alloc() failed: no > > > memory in > > > > > > SSL session shared cache "SSL" > > > > > > > > > > > > We don't want to increase session cache size b/c working with it > is a > > > > > > blocking operation and I believe it doesn't work good enought in > our > > > > > > network scheme. > > > > > > > > > > Just a side note: I don't think that size matters from performance > > > > > point of view. The only real downside is memory used. > > > > > > > > > > > As I can see - those messages are generated by > ngx_slab_alloc_pages() > > > > > even > > > > > > if session was added to the cache after expiration of some old > ones. > > > > > > > > > > > > So, what do you think if we add one more config parameter to > split > > > > > session > > > > > > cache and session ticket lifetimes? > > > > > > > > > > May be better approach will be to just avoid such messages? 
> > > > > > > > > > -- > > > > > Maxim Dounin > > > > > http://nginx.org/ > > > > > > > > > > _______________________________________________ > > > > > nginx mailing list > > > > > nginx at nginx.org > > > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > > > _______________________________________________ > > > > nginx mailing list > > > > nginx at nginx.org > > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > -- > > > Maxim Dounin > > > http://nginx.org/ > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Mar 24 13:10:31 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 24 Mar 2014 17:10:31 +0400 Subject: ssl cache pooling ? (kind of) In-Reply-To: References: <20140323141813.GE34696@mdounin.ru> Message-ID: <20140324131031.GJ34696@mdounin.ru> Hello! On Mon, Mar 24, 2014 at 07:20:21AM -0400, Larry wrote: > I will try to code something. > > Should I put it back here if successful or not ? If you'll produce something you will want to submit into nginx, see http://nginx.org/en/docs/contributing_changes.html for recommended approach. -- Maxim Dounin http://nginx.org/ From francis at daoine.org Mon Mar 24 13:13:38 2014 From: francis at daoine.org (Francis Daly) Date: Mon, 24 Mar 2014 13:13:38 +0000 Subject: Full pathname not sent In-Reply-To: <6617c1c6.1675b1.144efaea93f.Webtop.45@charter.net> References: <6617c1c6.1675b1.144efaea93f.Webtop.45@charter.net> Message-ID: <20140324131338.GY29880@craic.sysops.org> On Sun, Mar 23, 2014 at 12:03:25PM -0400, Doc wrote: Hi there, > This might be more of a regex problem on my part than nginx. Yes, it is. > I do a > rewrite to pass the full pathname to thttpd like this: > > location ~ /radio/download/.*\.mp3$ { > rewrite ^ /test/$1; > } What value do you want $1 to have? What does nginx (and pretty much every regex tool) think $1 is? You'll probably want () in there somewhere. Look for "capture" in the manual. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Mar 24 13:19:53 2014 From: francis at daoine.org (Francis Daly) Date: Mon, 24 Mar 2014 13:19:53 +0000 Subject: Hotlinking protection not working on firefox 28.0 !! In-Reply-To: References: Message-ID: <20140324131953.GZ29880@craic.sysops.org> On Mon, Mar 24, 2014 at 12:14:20PM +0500, shahzaib shahzaib wrote: Hi there, > hotlinking protection works fine with chrome and gives 403 > forbidden error but somehow it is not working for firefox clients and plays > the forbidden video without restrictions. Following is the config : Using tcpdump or by other means, can you see: * what request does the chrome client make that leads to the 403? * what request does the firefox client make that leads to the video being played? * what is different between the two? Pay particular attention to the "Referer:" request header. 
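One easy way to make that comparison without tcpdump is to replay the request with curl and different Referer values; the URL below just assumes one of the -360.mp4 files from the config above:

    # no Referer at all - matches "none" in valid_referers, expect 200
    curl -o /dev/null -s -w "%{http_code}\n" \
        "http://lwx006.domain.com/videos/test-360.mp4"

    # a foreign Referer - should hit the $invalid_referer branch, expect 403
    curl -o /dev/null -s -w "%{http_code}\n" -e "http://hotlinker.example/" \
        "http://lwx006.domain.com/videos/test-360.mp4"

    # an allowed Referer - expect 200
    curl -o /dev/null -s -w "%{http_code}\n" -e "http://www.domain.com/watch" \
        "http://lwx006.domain.com/videos/test-360.mp4"

If Firefox plays the file, it is worth checking whether that browser simply sends no Referer for these requests (an extension or the network.http.sendRefererHeader preference can do that), because "none" in valid_referers deliberately lets the empty case through.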
f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Mar 24 13:28:25 2014 From: francis at daoine.org (Francis Daly) Date: Mon, 24 Mar 2014 13:28:25 +0000 Subject: Pass path to thttpd for CGI In-Reply-To: <89457bf.168c47.144eba899c5.Webtop.47@charter.net> References: <89457bf.168c47.144eba899c5.Webtop.47@charter.net> Message-ID: <20140324132825.GA29880@craic.sysops.org> On Sat, Mar 22, 2014 at 05:18:19PM -0400, Doc wrote: Hi there, > I can get /music/download to be handled by CGI in > thttpd but I don't know how to handle the changing song titles cause > everything I've tried fails. I don't see any nginx problems here. Can you request the download of thttpd directly? If you can't, nginx can't. If you can, post the curl command you use to get it, and you may have better luck here. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Mar 24 13:33:56 2014 From: francis at daoine.org (Francis Daly) Date: Mon, 24 Mar 2014 13:33:56 +0000 Subject: nginx reverse proxy In-Reply-To: References: Message-ID: <20140324133356.GB29880@craic.sysops.org> On Sat, Mar 22, 2014 at 01:34:09PM +0530, chima s wrote: Hi there, > I have configured nginx as reverse proxy for jboss. All the system are > hosted in Amazon cloud and using AWS ELB for both nginx and jboss > > WEB ELB <---> nginx reverse proxy <---> APP ELB <---> Jboss7. > > When i access abc.example.com/admin/login.do, i am getting page and after i > provide username/password and submit, i am getting connection timeout. > > Found the URL get changed to abc.example.com:8080/admin/xxxx > > How to get rid of port 8080 in the response URL in nginx. The best way is probably to find which machine is putting :8080 in there, and configure it not to do that. Can you see logs or the network traffic at each stage, to see where :8080 first appears? f -- Francis Daly francis at daoine.org From icantthinkofone at charter.net Mon Mar 24 14:19:20 2014 From: icantthinkofone at charter.net (Doc) Date: Mon, 24 Mar 2014 10:19:20 -0400 (EDT) Subject: Full pathname not sent Message-ID: <4eaff7c1.173f66.144f475bc2c.Webtop.46@charter.net> On Mon, Mar 24, 2014 at 8:13 AM, Francis Daly wrote: > On Sun, Mar 23, 2014 at 12:03:25PM -0400, Doc wrote: > > Hi there, > >> This might be more of a regex problem on my part than nginx. > > Yes, it is. > >> I do a rewrite to pass the full pathname to thttpd like this: >> >> location ~ /radio/download/.*\.mp3$ { >> rewrite ^ /test/$1; >> } > > What value do you want $1 to have? > > What does nginx (and pretty much every regex tool) think $1 is? > > You'll probably want () in there somewhere. Look for "capture" in > the manual. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Like a lot of people, I forget my regex bearings but I now have this working to an extent. I just need to think about this some more. As a sole developer, I have too many different things going on at the same time. 
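Back on the jboss/:8080 thread: if it turns out to be the backend (or the app ELB) writing its own port into Location headers, nginx can at least clean those up on the way back with a regex proxy_redirect, which is available since 1.1.11 and therefore fine on 1.4.7. This only touches the Location/Refresh response headers, not links inside HTML bodies, and the pattern is only a sketch:

    location / {
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_pass       http://appserv/;

        # strip an explicit :8080 out of redirects, whatever host they carry
        proxy_redirect   ~^(https?)://([^:/]+):8080(/.*)?  $1://$2$3;
    }

If the :8080 shows up inside page content rather than in redirects, that has to be fixed on the JBoss side (connector proxy settings) rather than in nginx.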
From icantthinkofone at charter.net Mon Mar 24 14:21:30 2014 From: icantthinkofone at charter.net (Doc) Date: Mon, 24 Mar 2014 10:21:30 -0400 (EDT) Subject: Pass path to thttpd for CGI Message-ID: <7de23a73.173fb3.144f477bbd5.Webtop.46@charter.net> On Mon, Mar 24, 2014 at 8:28 AM, Francis Daly wrote: > On Sat, Mar 22, 2014 at 05:18:19PM -0400, Doc wrote: > > Hi there, > >> I can get /music/download to be handled by CGI in thttpd but I don't >> know how to handle the changing song titles cause everything I've >> tried fails. > > I don't see any nginx problems here. > > Can you request the download of thttpd directly? If you can't, nginx > can't. > > If you can, post the curl command you use to get it, and you may have > better luck here. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Like my other thread, I think I've just lost my regex legs. Things are working almost as I want but I need to review what I've written. From adam at adampearlman.com Mon Mar 24 14:31:39 2014 From: adam at adampearlman.com (Adam Pearlman) Date: Mon, 24 Mar 2014 10:31:39 -0400 Subject: 403 after changing root, but permissions look correct In-Reply-To: <1395610528.31659.104.camel@steve-new> References: <1395610528.31659.104.camel@steve-new> Message-ID: Turns out this was my fault. I was using "sudo service nginx start" instead of just "sudo nginx." On Sun, Mar 23, 2014 at 5:35 PM, Steve Holdoway wrote: > Having just had a similar problem with migrating a MySQL database, I > suggest that you check whether SELinux/Apparmor is running. > > Why prople think it's ok to use a program that can be switched off in an > instant to improve their 'security' is and always will be a mystery to > me! > > Cheers, > > Steve > > On Sun, 2014-03-23 at 11:48 -0400, Adam Pearlman wrote: > > I've been struggling with this for a few hours. > > > > I installed nginx 1.4.6 on Fedora 20. The test page displayed fine. I > > changed the root, leaving all other configuration the same, and I get > > a 403 Forbidden error. > > > > If I look at the permissions for the original test page and the new > > page, they appear identical. > > > > Working test page: > > namei -om /usr/share/nginx/html/index.html > > f: /usr/share/nginx/html/index.html > > dr-xr-xr-x root root / > > drwxr-xr-x root root usr > > drwxr-xr-x root root share > > drwxr-xr-x root root nginx > > drwxr-xr-x root root html > > -rw-r--r-- root root index.html > > > > Not working: > > namei -om /var/www/html/index.html > > f: /var/www/html/index.html > > dr-xr-xr-x root root / > > drwxr-xr-x root root var > > drwxr-xr-x root root www > > drwxr-xr-x root root html > > -rw-r--r-- root root index.html > > > > The error log seems to be what I would expect as well: > > 2014/03/23 12:45:08 [error] 5490#0: *13 open() > > "/var/www/html/index.html" failed (13: Permission denied), client: > > XXX.XX.XXX.XXX, server: localhost, request: "GET /index.html > > HTTP/1.1", host: "ec2-XXX-XX-XXX-XXX.compute-1.amazonaws.com" > > > > > > The Nginx config has "user nginx" - I tried using root and it made no > > difference. I also made user ngnix the owner & group of the files, but > > that didn't work. If I move the index file from /var/www/html > > to /usr/share/nginx/html (the test file location) it works fine making > > me suspect the path, but as I said, permissions appear correct. 
> > > > > > Any help would be very much appreciated. Thanks! > > > > > > - Adam > > > > > > I've included the config file below just in case: > > > > > > # For more information on configuration, see: > > # * Official English Documentation: http://nginx.org/en/docs/ > > # * Official Russian Documentation: http://nginx.org/ru/docs/ > > > > user nginx; > > worker_processes 1; > > > > error_log /var/log/nginx/error.log; > > #error_log /var/log/nginx/error.log notice; > > #error_log /var/log/nginx/error.log info; > > > > pid /run/nginx.pid; > > > > events { > > worker_connections 1024; > > } > > > > http { > > include /etc/nginx/mime.types; > > default_type application/octet-stream; > > > > log_format main '$remote_addr - $remote_user [$time_local] > > "$request" ' > > '$status $body_bytes_sent "$http_referer" ' > > '"$http_user_agent" "$http_x_forwarded_for"'; > > > > access_log /var/log/nginx/access.log main; > > > > sendfile on; > > #tcp_nopush on; > > > > #keepalive_timeout 0; > > keepalive_timeout 65; > > > > #gzip on; > > > > # Load modular configuration files from the /etc/nginx/conf.d > > directory. > > # See http://nginx.org/en/docs/ngx_core_module.html#include > > # for more information. > > include /etc/nginx/conf.d/*.conf; > > > > index index.html index.htm; > > > > server { > > listen 80; > > server_name localhost; > > root /usr/share/nginx/html; ################THIS WORKS > > #root /var/www/html; #####################THIS DOESN'T > > > > #charset koi8-r; > > > > #access_log /var/log/nginx/host.access.log main; > > > > location / { > > } > > > > # redirect server error pages to the static page /40x.html > > # > > error_page 404 /404.html; > > location = /40x.html { > > } > > > > # redirect server error pages to the static page /50x.html > > # > > error_page 500 502 503 504 /50x.html; > > location = /50x.html { > > } > > > > # proxy the PHP scripts to Apache listening on 127.0.0.1:80 > > # > > #location ~ \.php$ { > > # proxy_pass http://127.0.0.1; > > #} > > > > # pass the PHP scripts to FastCGI server listening on > > 127.0.0.1:9000 > > # > > #location ~ \.php$ { > > # root html; > > # fastcgi_pass 127.0.0.1:9000; > > # fastcgi_index index.php; > > # fastcgi_param SCRIPT_FILENAME /scripts > > $fastcgi_script_name; > > # include fastcgi_params; > > #} > > > > # deny access to .htaccess files, if Apache's document root > > # concurs with nginx's one > > # > > #location ~ /\.ht { > > # deny all; > > #} > > } > > > > # another virtual host using mix of IP-, name-, and port-based > > configuration > > # > > #server { > > # listen 8000; > > # listen somename:8080; > > # server_name somename alias another.alias; > > # root html; > > > > # location / { > > # } > > #} > > > > > > # HTTPS server > > # > > #server { > > # listen 443; > > # server_name localhost; > > # root html; > > > > # ssl on; > > # ssl_certificate cert.pem; > > # ssl_certificate_key cert.key; > > > > # ssl_session_timeout 5m; > > > > # ssl_protocols SSLv2 SSLv3 TLSv1; > > # ssl_ciphers HIGH:!aNULL:!MD5; > > # ssl_prefer_server_ciphers on; > > > > # location / { > > # } > > #} > > > > } > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From nick at bytemark.co.uk Mon Mar 24 16:31:31 2014 From: nick at bytemark.co.uk (Nick Thomas) Date: Mon, 24 Mar 2014 16:31:31 +0000 Subject: AAAA record for nginx.org ? Message-ID: <53305DE3.508@bytemark.co.uk> Hi there, I recently had cause to (try to) install Nginx from the Debian repositories hosted at http://nginx.org (i.e., http://nginx.org/packages/mainline/debian/dists/wheezy/ ) on a set of machines which have IPv6-only connectivity. This fails because nginx.org lacks any AAAA records. Is there any chance of getting IPv6 connectivity set up for this mirror, or are there any IPv6-enabled mirrors we could use? If you're looking for mirrors (I couldn't find any information on a mirroring programme, mind) then Bytemark would be happy to get involved - we run an IPv6-capable mirror at http://mirror.bytemark.co.uk/ Regards, -- Nick Thomas Bytemark Hosting From kunalvjti at gmail.com Mon Mar 24 17:46:07 2014 From: kunalvjti at gmail.com (Kunal Pariani) Date: Mon, 24 Mar 2014 10:46:07 -0700 Subject: nginx throws a 502 (invalid header) when downloading a file attachment if filename has a space character In-Reply-To: References: Message-ID: I used the web browser but didn't see this Content-disposition header in the response. Only saw these response headers. 1. Response Headersview source 1. Connection: keep-alive 2. Content-Length: 1159 3. Content-Type: text/html 4. Date: Mon, 24 Mar 2014 17:15:00 GMT 5. Server: nginx Using the curl to get the file directly from the backend server keeps throwing the http 404 must authenticate error don't know why as i am supplying the -u username:password as well kunal at zdev-vm048:~$ curl -u testuser1:testpass -k -I " https://127.0.1.1:8443/service/home/~/?auth=co&loc=en_GB&id=259&part=3" *HTTP/1.1 404 must authenticate* Date: Mon, 24 Mar 2014 17:38:56 GMT Content-Type: text/html; charset=ISO-8859-1 Cache-Control: must-revalidate,no-cache,no-store Content-Length: 320 Thanks -Kunal On Mon, Mar 24, 2014 at 3:56 AM, Lukas Tribus wrote: > Hi, > > > > What debugs should i enable & how to see these response headers ? I do > > see this error though. > > Just use curl for example and request it directly from your backend: > curl -k -I " > https://127.0.1.1:8443/service/home/~/?auth=co&loc=en_GB&id=259&part=3" > > > So you can check the actual response header. > > > > > Regards, > > Lukas > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Mon Mar 24 18:16:14 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 24 Mar 2014 19:16:14 +0100 Subject: nginx throws a 502 (invalid header) when downloading a file attachment if filename has a space character In-Reply-To: References: , , , , Message-ID: Hi Kunal, > I used the web browser but didn't see this Content-disposition header > in the response. Only saw these response headers. We need to see the Content-disposition, everything else makes no sense. Are you trying against the nginx frontend or your backend? If it is nginx you're connecting to, you abviously need a filename without spaces otherwise the response will not show up on your browser. 
> Using the curl to get the file directly from the backend server keeps > throwing the http 404 must authenticate error don't know why as i am > supplying the -u username:password as well Fix your authentication problem. Only you know how your backend authenticates users, we can't tell you that (perhaps you need cookies). Regards, Lukas From nginx-forum at nginx.us Mon Mar 24 18:16:30 2014 From: nginx-forum at nginx.us (fcaoliveira) Date: Mon, 24 Mar 2014 14:16:30 -0400 Subject: Can't upload big files via nginx as reverse proxy In-Reply-To: <7e4b3f5fba2e91657ed3d6677632fc7c.NginxMailingListEnglish@forum.nginx.org> References: <20120605104751.GT31671@mdounin.ru> <7e4b3f5fba2e91657ed3d6677632fc7c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7684b1dc944375df7cf9574c11b57130.NginxMailingListEnglish@forum.nginx.org> Hi, everybody. Any solutions to this issue? Thx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227175,248638#msg-248638 From kunalvjti at gmail.com Mon Mar 24 18:35:09 2014 From: kunalvjti at gmail.com (Kunal Pariani) Date: Mon, 24 Mar 2014 11:35:09 -0700 Subject: nginx throws a 502 (invalid header) when downloading a file attachment if filename has a space character In-Reply-To: References: Message-ID: I downloaded another file and the Content-Disposition header lists the filename with space under quotes correctly "zcs error.docx" thereby proving that its nginx which is not parsing it correctly. Correct me if i am wrong. 1. Cache-Control: no-store, no-cache 2. Connection: keep-alive 3. *Content-Disposition: attachment; filename="zcs error.docx"* 4. Content-Encoding: gzip 5. Content-Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document; name="zcs error.docx" Thanks -Kunal On Mon, Mar 24, 2014 at 11:16 AM, Lukas Tribus wrote: > Hi Kunal, > > > > I used the web browser but didn't see this Content-disposition header > > in the response. Only saw these response headers. > > We need to see the Content-disposition, everything else makes no sense. > > Are you trying against the nginx frontend or your backend? If it is nginx > you're connecting to, you abviously need a filename without spaces > otherwise > the response will not show up on your browser. > > > > > Using the curl to get the file directly from the backend server keeps > > throwing the http 404 must authenticate error don't know why as i am > > supplying the -u username:password as well > > Fix your authentication problem. Only you know how your backend > authenticates users, we can't tell you that (perhaps you need cookies). > > > > Regards, > > Lukas > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Mon Mar 24 18:59:25 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 24 Mar 2014 19:59:25 +0100 Subject: nginx throws a 502 (invalid header) when downloading a file attachment if filename has a space character In-Reply-To: References: , , , , , , Message-ID: Hi, > I downloaded another file and the Content-Disposition header lists the > filename with space under quotes correctly "zcs error.docx" thereby > proving that its nginx which is not parsing it correctly. Correct me if > i am wrong. Is this specific response going through nginx or directly from the browser to the backend? 
Enable debugging logs and collect the traces: http://nginx.org/en/docs/debugging_log.html Regards, Lukas From nginx-forum at nginx.us Mon Mar 24 19:06:45 2014 From: nginx-forum at nginx.us (nxspeed) Date: Mon, 24 Mar 2014 15:06:45 -0400 Subject: ngx_slab_alloc() failed: no memory in cache keys zone "zone-xyz" In-Reply-To: <201303281832.22834.vbart@nginx.com> References: <201303281832.22834.vbart@nginx.com> Message-ID: > On Wednesday 27 March 2013 16:34:27 praveenkumar Muppala wrote: > > Hi, > > > > We have a nginx1.0.5 version installed in our system. We are getting > this > > error continuously in our nginx error log. ngx_slab_alloc() failed: > no > > memory in cache keys zone "zone-xyz". I have increased this value to > 20G, > > even 30G also still getting the same error. Can you help to fix this > error > > please. > > Do you really have 30 gigabytes of RAM? Why do you need such a big > zone? > > You are probably confuse "keys_zone" with max cache size. > > wbr, Valentin V. Bartenev > Does ngx_http_file_cache_manager() actively manage the size of the 'keys_zone'? i.e., will the manager remove old items from cache to keep the keys_zone under its configured size? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237829,248642#msg-248642 From luky-37 at hotmail.com Mon Mar 24 19:43:41 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 24 Mar 2014 20:43:41 +0100 Subject: nginx throws a 502 (invalid header) when downloading a file attachment if filename has a space character In-Reply-To: References: , , , , , , Message-ID: FYI, nginx has not problems passing filenames with spaces along: # curl -I http://direct-apache/content-disposition-header.php HTTP/1.1 200 OK Date: Mon, 24 Mar 2014 19:40:22 GMT Server: Apache/2.4.2 (Win32) OpenSSL/1.0.1c PHP/5.4.4 X-Powered-By: PHP/5.4.4 Cache-Control: no-store, no-cache Connection: keep-alive Content-Disposition: attachment; filename="zcs error.docx" Content-Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document; name="zcs error.docx" # curl -I "http://nginx-rev-proxying/content-disposition-header.php" HTTP/1.1 200 OK Server: nginx/1.4.7 Date: Mon, 24 Mar 2014 19:41:03 GMT Content-Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document; name="zcs error.docx" Connection: keep-alive X-Powered-By: PHP/5.4.4 Cache-Control: no-store, no-cache Content-Disposition: attachment; filename="zcs error.docx" From kunalvjti at gmail.com Mon Mar 24 20:17:07 2014 From: kunalvjti at gmail.com (Kunal Pariani) Date: Mon, 24 Mar 2014 13:17:07 -0700 Subject: nginx throws a 502 (invalid header) when downloading a file attachment if filename has a space character In-Reply-To: References: Message-ID: hmm..thanks Lukas. So its my backend server only which is causing this issue. 
Thanks -Kunal On Mon, Mar 24, 2014 at 12:43 PM, Lukas Tribus wrote: > FYI, nginx has not problems passing filenames with spaces along: > > > # curl -I http://direct-apache/content-disposition-header.php > HTTP/1.1 200 OK > Date: Mon, 24 Mar 2014 19:40:22 GMT > Server: Apache/2.4.2 (Win32) OpenSSL/1.0.1c PHP/5.4.4 > X-Powered-By: PHP/5.4.4 > Cache-Control: no-store, no-cache > Connection: keep-alive > Content-Disposition: attachment; filename="zcs error.docx" > Content-Type: > application/vnd.openxmlformats-officedocument.wordprocessingml.document; > name="zcs error.docx" > > > # curl -I "http://nginx-rev-proxying/content-disposition-header.php" > HTTP/1.1 200 OK > Server: nginx/1.4.7 > Date: Mon, 24 Mar 2014 19:41:03 GMT > Content-Type: > application/vnd.openxmlformats-officedocument.wordprocessingml.document; > name="zcs error.docx" > Connection: keep-alive > X-Powered-By: PHP/5.4.4 > Cache-Control: no-store, no-cache > Content-Disposition: attachment; filename="zcs error.docx" > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Mon Mar 24 21:09:32 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 24 Mar 2014 22:09:32 +0100 Subject: nginx throws a 502 (invalid header) when downloading a file attachment if filename has a space character In-Reply-To: References: , , , , , , , , Message-ID: > hmm..thanks Lukas. > So its my backend server only which is causing this issue. >From the information provided in this thread, I can't tell. We would need the exact response header that makes nginx return the 502 response plus detailed informations about your setup (output of nginx -v and your configuration). Regards, Lukas From kunalvjti at gmail.com Mon Mar 24 21:27:29 2014 From: kunalvjti at gmail.com (Kunal Pariani) Date: Mon, 24 Mar 2014 14:27:29 -0700 Subject: nginx throws a 502 (invalid header) when downloading a file attachment if filename has a space character In-Reply-To: References: Message-ID: Never mind there's nothing wrong with nginx here. It was one of the response headers sent by an upstream server (mainly Content-Description: 2013923 10H56M56S633_PV.doc?) including this non-ascii char '?' which the nginx didn't like and hence flagged it saying that it received an invalid header. Thanks -Kunal On Mon, Mar 24, 2014 at 2:09 PM, Lukas Tribus wrote: > > hmm..thanks Lukas. > > So its my backend server only which is causing this issue. > > From the information provided in this thread, I can't tell. > > We would need the exact response header that makes nginx return > the 502 response plus detailed informations about your setup (output > of nginx -v and your configuration). > > > > Regards, > > Lukas > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Mon Mar 24 21:47:31 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 24 Mar 2014 22:47:31 +0100 Subject: nginx throws a 502 (invalid header) when downloading a file attachment if filename has a space character In-Reply-To: References: , , , , , , , , , , Message-ID: > Never mind there's nothing wrong with nginx here. 
> It was one of the response headers sent by an upstream server > (mainly Content-Description: 2013923 10H56M56S633_PV.doc?) including > this non-ascii char '?' which the nginx didn't like and hence flagged > it saying that it received an invalid header. Thanks for confirming. Nginx does the right thing here, headers must not contain non-ascii chars per RFC. Regards, Lukas From steve at greengecko.co.nz Tue Mar 25 00:51:30 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Tue, 25 Mar 2014 13:51:30 +1300 Subject: moving servers... https woes. Message-ID: <1395708690.31659.142.camel@steve-new> Is there any way of forwarding https to a new server while people's DNS and browsers drain down? I know it's easy enough to terminate it and forward http, but I need both the old and new sites ( ecommerce ) to work in https where relevant... I have a horrible feeling that you can't. Cheers, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From feldan1 at gmail.com Tue Mar 25 01:02:22 2014 From: feldan1 at gmail.com (Lorne Wanamaker) Date: Mon, 24 Mar 2014 22:02:22 -0300 Subject: moving servers... https woes. In-Reply-To: <1395708690.31659.142.camel@steve-new> References: <1395708690.31659.142.camel@steve-new> Message-ID: Not quite sure if I understand right but you may be able to do this using maps. Something like: http://redant.com.au/ruby-on-rails-devops/manage-ssl-redirection-in-nginx-using-maps-and-save-the-universe/ Lorne On Mon, Mar 24, 2014 at 8:51 PM, Steve Holdoway wrote: > Is there any way of forwarding https to a new server while people's DNS > and browsers drain down? I know it's easy enough to terminate it and > forward http, but I need both the old and new sites ( ecommerce ) to > work in https where relevant... > > I have a horrible feeling that you can't. > > Cheers, > > > Steve > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Tue Mar 25 02:41:29 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Tue, 25 Mar 2014 15:41:29 +1300 Subject: moving servers... https woes. In-Reply-To: References: <1395708690.31659.142.camel@steve-new> Message-ID: <1395715289.31659.167.camel@steve-new> Hi Lorne, Sadly not quite. The change in IP means that the eCommerce part of the site must be served through https:, but there seems to be a terrible lag - even though TTL has been set to 5 minutes for weeks - for customers in picking the change up. This means that the old IP address needs to handle and forward http and https, as well as the new one - which means that I can't just terminate https at the old IP and proxy as http: as the new server forces it back to https: Confused? Me too! Cheers, Steve On Mon, 2014-03-24 at 22:02 -0300, Lorne Wanamaker wrote: > Not quite sure if I understand right but you may be able to do this > using maps. Something like: > > > http://redant.com.au/ruby-on-rails-devops/manage-ssl-redirection-in-nginx-using-maps-and-save-the-universe/ > > > > Lorne > > > On Mon, Mar 24, 2014 at 8:51 PM, Steve Holdoway > wrote: > Is there any way of forwarding https to a new server while > people's DNS > and browsers drain down? 
I know it's easy enough to terminate > it and > forward http, but I need both the old and new sites > ( ecommerce ) to > work in https where relevant... > > I have a horrible feeling that you can't. > > Cheers, > > > Steve > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at nginx.us Tue Mar 25 08:12:52 2014 From: nginx-forum at nginx.us (rahul286) Date: Tue, 25 Mar 2014 04:12:52 -0400 Subject: map v/s rewrite performance Message-ID: <357a08660a63b7a0f11f07211186f7d2.NginxMailingListEnglish@forum.nginx.org> Hi All, I am just wondering, say for 1000 url redirects, what will be more efficient. Rewrite Style: ============ server { rewrite old-url-1 new-url-1 permanent; rewrite old-url-2 new-url-2 permanent; rewrite old-url-3 new-url-3 permanent; #.... rewrite $old-url-1000 $new-url-1000 permanent; } Map Style: ============ map $request_uri $new_uri { default $request_uri; old-url-1 new-url-1; old-url-2 new-url-2; old-url-3 new-url-3; #.... old-url-1000 new-url-1000; } #and something like server { try_files $new_uri =404; } Since nginx is very fast, I am not able to notice any delay for around 20 rewrites. :| Is one of above 2 method is recommended for large number of rewrites? I am inclined towards map, as rewrite adds plenty of notices logs. It's like every rewrite is checked for incoming requests unless it is surrounded by location. Please let me know if more details are needed. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248659,248659#msg-248659 From reallfqq-nginx at yahoo.fr Tue Mar 25 08:14:49 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 25 Mar 2014 09:14:49 +0100 Subject: Nginx forum returns 502 (2014-03-25T08:13Z) Message-ID: http://forum.nginx.org/ fails to return proper content. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Tue Mar 25 08:15:40 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 25 Mar 2014 12:15:40 +0400 Subject: map v/s rewrite performance In-Reply-To: <357a08660a63b7a0f11f07211186f7d2.NginxMailingListEnglish@forum.nginx.org> References: <357a08660a63b7a0f11f07211186f7d2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8FC386F3-2E32-4199-8389-36724456C1F5@sysoev.ru> On Mar 25, 2014, at 12:12 , rahul286 wrote: > Hi All, > > I am just wondering, say for 1000 url redirects, what will be more > efficient. > > Rewrite Style: > ============ > server { > rewrite old-url-1 new-url-1 permanent; > rewrite old-url-2 new-url-2 permanent; > rewrite old-url-3 new-url-3 permanent; > #.... > rewrite $old-url-1000 $new-url-1000 permanent; > } > > > Map Style: > ============ > > map $request_uri $new_uri { > default $request_uri; > old-url-1 new-url-1; > old-url-2 new-url-2; > old-url-3 new-url-3; > #.... > old-url-1000 new-url-1000; > } > > #and something like > server { > try_files $new_uri =404; > } > > Since nginx is very fast, I am not able to notice any delay for around 20 > rewrites. 
:| > > Is one of above 2 method is recommended for large number of rewrites? > > I am inclined towards map, as rewrite adds plenty of notices logs. It's like > every rewrite is checked for incoming requests unless it is surrounded by > location. > > Please let me know if more details are needed. location = old-url-1 { return 301 new-url-1; } ... -- Igor Sysoev http://nginx.com From nginx-forum at nginx.us Tue Mar 25 08:43:15 2014 From: nginx-forum at nginx.us (rahul286) Date: Tue, 25 Mar 2014 04:43:15 -0400 Subject: map v/s rewrite performance In-Reply-To: <8FC386F3-2E32-4199-8389-36724456C1F5@sysoev.ru> References: <8FC386F3-2E32-4199-8389-36724456C1F5@sysoev.ru> Message-ID: <9ddf25f86385955f9e683b2100ed93f2.NginxMailingListEnglish@forum.nginx.org> Igor Sysoev Wrote: ------------------------------------------------------- > location = old-url-1 { return 301 new-url-1; } Bingo! Never thought of this. :-) We will use this for https://github.com/rtCamp/easyengine/issues/162 Thanks a lot. :-) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248659,248663#msg-248663 From sb at nginx.com Tue Mar 25 11:33:44 2014 From: sb at nginx.com (Sergey Budnevitch) Date: Tue, 25 Mar 2014 15:33:44 +0400 Subject: AAAA record for nginx.org ? In-Reply-To: <53305DE3.508@bytemark.co.uk> References: <53305DE3.508@bytemark.co.uk> Message-ID: <8426CB60-E05F-4213-8172-EB711AAA0041@nginx.com> On 24 Mar 2014, at 20:31, Nick Thomas wrote: > Hi there, > > I recently had cause to (try to) install Nginx from the Debian > repositories hosted at http://nginx.org (i.e., > http://nginx.org/packages/mainline/debian/dists/wheezy/ ) on a set of > machines which have IPv6-only connectivity. This fails because nginx.org > lacks any AAAA records. Unfortunately our current provider doesn?t provide ipv6 connectivity. > Is there any chance of getting IPv6 connectivity set up for this mirror, > or are there any IPv6-enabled mirrors we could use? If you're looking > for mirrors (I couldn't find any information on a mirroring programme, > mind) then Bytemark would be happy to get involved - we run an > IPv6-capable mirror at http://mirror.bytemark.co.uk/ No, we have not mirrors and at least currently have no problems with traffic or/and capacity, thank you. From nick at bytemark.co.uk Tue Mar 25 13:16:28 2014 From: nick at bytemark.co.uk (Nick Thomas) Date: Tue, 25 Mar 2014 13:16:28 +0000 Subject: AAAA record for nginx.org ? In-Reply-To: <8426CB60-E05F-4213-8172-EB711AAA0041@nginx.com> References: <53305DE3.508@bytemark.co.uk> <8426CB60-E05F-4213-8172-EB711AAA0041@nginx.com> Message-ID: <533181AC.7050004@bytemark.co.uk> Hi, On 25/03/14 11:33, Sergey Budnevitch wrote: > > On 24 Mar 2014, at 20:31, Nick Thomas wrote: > >> Hi there, >> >> I recently had cause to (try to) install Nginx from the Debian >> repositories hosted at http://nginx.org (i.e., >> http://nginx.org/packages/mainline/debian/dists/wheezy/ ) on a set of >> machines which have IPv6-only connectivity. This fails because nginx.org >> lacks any AAAA records. > > Unfortunately our current provider doesn?t provide ipv6 connectivity. > >> Is there any chance of getting IPv6 connectivity set up for this mirror, >> or are there any IPv6-enabled mirrors we could use? 
If you're looking >> for mirrors (I couldn't find any information on a mirroring programme, >> mind) then Bytemark would be happy to get involved - we run an >> IPv6-capable mirror at http://mirror.bytemark.co.uk/ > > No, we have not mirrors and at least currently have no problems with traffic or/and capacity, thank you. No problem - IPv6-only machines are something of a rarity, at least for now :). I'll see what else I can do. /Nick From yuri1987 at gmail.com Tue Mar 25 14:05:15 2014 From: yuri1987 at gmail.com (Yuri Levin) Date: Tue, 25 Mar 2014 16:05:15 +0200 Subject: duplicating post request with nginx proxy Message-ID: i have an app that sends a post request to nginx proxy server, can i forward same request to two different upstream servers? (e.g. duplicating the same post request) can it be achieved with nginx? thanks. From sarah at nginx.com Tue Mar 25 14:27:47 2014 From: sarah at nginx.com (Sarah Novotny) Date: Tue, 25 Mar 2014 07:27:47 -0700 Subject: Nginx forum returns 502 (2014-03-25T08:13Z) In-Reply-To: References: Message-ID: <4223F5A8-42E2-4368-9AAF-D3AA346D71F0@nginx.com> On Mar 25, 2014, at 1:14 AM, B.R. wrote: > http://forum.nginx.org/ fails to return proper content. It looks like it was transient or Jim fixed it up. Thanks for the report! sarah -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Tue Mar 25 15:29:23 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 25 Mar 2014 16:29:23 +0100 Subject: moving servers... https woes. In-Reply-To: <1395715289.31659.167.camel@steve-new> References: <1395708690.31659.142.camel@steve-new>, , <1395715289.31659.167.camel@steve-new> Message-ID: Hi, > Sadly not quite. The change in IP means that the eCommerce part of the > site must be served through https:, but there seems to be a terrible lag > - even though TTL has been set to 5 minutes for weeks - for customers in > picking the change up. > > This means that the old IP address needs to handle and forward http and > https, as well as the new one - which means that I can't just terminate > https at the old IP and proxy as http: as the new server forces it back > to https I'm probably missing something, but why don't you just forward https to https and http to http? Regards, Lukas From jim at ohlste.in Tue Mar 25 15:32:41 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Tue, 25 Mar 2014 11:32:41 -0400 Subject: Nginx forum returns 502 (2014-03-25T08:13Z) In-Reply-To: <4223F5A8-42E2-4368-9AAF-D3AA346D71F0@nginx.com> References: <4223F5A8-42E2-4368-9AAF-D3AA346D71F0@nginx.com> Message-ID: <5331A199.8080209@ohlste.in> Hello, On 3/25/14, 10:27 AM, Sarah Novotny wrote: > > On Mar 25, 2014, at 1:14 AM, B.R. > wrote: > >> http://forum.nginx.org/ fails to return proper content. > > It looks like it was transient or Jim fixed it up. Thanks for the report! > There was a segmentation fault in the upstream about 14 minutes before this was sent but that was restarted automatically, and long before I woke up. Other than that, without more specific information (exact request, IP, approximate time), it's hard to say much. All seems to be functioning as expected. -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." 
- Mark Twain From lists at ruby-forum.com Tue Mar 25 20:03:22 2014 From: lists at ruby-forum.com (Mapper Uno) Date: Tue, 25 Mar 2014 21:03:22 +0100 Subject: Nginx POST handler module caching response Message-ID: <7783a2975a8f7030aeb26066077468bb@ruby-forum.com> Hi, I have written a small nginx module that processes POST request and sends back in response the same data that is received in POST request. I am testing this with curl utility and closely monitoring nginx log which is set to debug level. (My module's server is listening on port 9000) # curl --data "This is nginx helper forum" http://localhost:9000 This is nginx helper forum # curl --data "Ruby forum" http://localhost:9000 This is ng # curl --data "ngx" http://localhost:9000 Thi However, if I reload the nginx config (nginx -s reload), I see the correct string # curl --data "Ruby forum" http://localhost:9000 Ruby forum When I look at the nginx log file, I can see the correct incoming data string every time in my handler. I also do ngx_pcalloc for my output buffer string. However, why is the cached/stale buffer string returned by nginx in response. I tried to look up relevant config directive but failed to correlate to this behaviour. Any guidance would be highly appreciated. Thanks -- Posted via http://www.ruby-forum.com/. From steve at greengecko.co.nz Tue Mar 25 20:47:37 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 26 Mar 2014 09:47:37 +1300 Subject: moving servers... https woes. In-Reply-To: References: <1395708690.31659.142.camel@steve-new> , , <1395715289.31659.167.camel@steve-new> Message-ID: <1395780457.31659.173.camel@steve-new> On Tue, 2014-03-25 at 16:29 +0100, Lukas Tribus wrote: > Hi, > > > > Sadly not quite. The change in IP means that the eCommerce part of the > > site must be served through https:, but there seems to be a terrible lag > > - even though TTL has been set to 5 minutes for weeks - for customers in > > picking the change up. > > > > This means that the old IP address needs to handle and forward http and > > https, as well as the new one - which means that I can't just terminate > > https at the old IP and proxy as http: as the new server forces it back > > to https > > I'm probably missing something, but why don't you just forward https to > https and http to http? > Mainly because I can't seem to get it to work - nginx, apache or iptables. I'm sure someine can come forward with technical reasons why... Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From reallfqq-nginx at yahoo.fr Tue Mar 25 20:57:17 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 25 Mar 2014 21:57:17 +0100 Subject: Nginx forum returns 502 (2014-03-25T08:13Z) In-Reply-To: <5331A199.8080209@ohlste.in> References: <4223F5A8-42E2-4368-9AAF-D3AA346D71F0@nginx.com> <5331A199.8080209@ohlste.in> Message-ID: Hello, Still down from my side. I do not have cache anywhere, since my browser purges it on every shutdown. If you look at your frontend logs around the time I specified in the subject, you should notice some 502 being thrown. Do you really need my IP address? Any 502 you would find under whatever circumstances would not be a good sign anyway... ;o) I join a transcript of the HTTP requests/answer of the latest attempt. You'll notice that the favicon is perfectly served and that content type for main content is correctly set. If I can be of any further help... --- *B. 
R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue Mar 25 20:58:47 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 25 Mar 2014 21:58:47 +0100 Subject: Nginx forum returns 502 (2014-03-25T08:13Z) In-Reply-To: References: <4223F5A8-42E2-4368-9AAF-D3AA346D71F0@nginx.com> <5331A199.8080209@ohlste.in> Message-ID: Attachment, attachment... --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: http_exchange Type: application/octet-stream Size: 1225 bytes Desc: not available URL: From jim at ohlste.in Tue Mar 25 21:17:03 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Tue, 25 Mar 2014 17:17:03 -0400 Subject: Nginx forum returns 502 (2014-03-25T08:13Z) In-Reply-To: References: <4223F5A8-42E2-4368-9AAF-D3AA346D71F0@nginx.com> <5331A199.8080209@ohlste.in> Message-ID: <5331F24F.2050700@ohlste.in> Hello, On 3/25/14, 4:57 PM, B.R. wrote: > Hello, > > Still down from my side. I do not have cache anywhere, since my browser > purges it on every shutdown. > If you look at your frontend logs around the time I specified in the > subject, you should notice some 502 being thrown. Do you really need my > IP address? Any 502 you would find under whatever circumstances would > not be a good sign anyway... ;o) If you would like me to track this down, then I need your IP address, or at least enough of it to localize your requests *and* I need the time (UTC +/- as I do not know what T08:13Z means) so that I can track down the error. I'm busy, I do this for free and during my free time, and I am trying to help you. Since you seem to be the only one complaining as of this moment, I'm not going to spend a lot of my time on this if you don't want to cooperate. Feel free to send it to me off-list if your privacy is so important to you and you believe that disclosing your local IP is of *any* interest to *anyone* reading this. > > I join a transcript of the HTTP requests/answer of the latest attempt. > You'll notice that the favicon is perfectly served and that content type > for main content is correctly set. > > If I can be of any further help... -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From luky-37 at hotmail.com Tue Mar 25 21:22:25 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 25 Mar 2014 22:22:25 +0100 Subject: moving servers... https woes. In-Reply-To: <1395780457.31659.173.camel@steve-new> References: <1395708690.31659.142.camel@steve-new>,, ,, <1395715289.31659.167.camel@steve-new>, , <1395780457.31659.173.camel@steve-new> Message-ID: Hi, > Mainly because I can't seem to get it to work - nginx, apache or > iptables. > > I'm sure someone can come forward with technical reasons why... In this thread you asked about how this could be done, you didn't say that you already tried something and that it didn't work. So you are hoping that someone may be able to provide the technical reason for a failure you didn't even mention in the first place (let alone some details)? As for your original question, I would configure the old server like this, to pass the requests to the new server:

server {
    listen       80;
    location / {
        proxy_pass       http://:80;
        proxy_set_header Host      $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen       443 ssl;
    location / {
        proxy_pass       https://:443;
        proxy_set_header Host      $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Regards, Lukas From contact at jpluscplusm.com Tue Mar 25 22:50:23 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 25 Mar 2014 22:50:23 +0000 Subject: Nginx forum returns 502 (2014-03-25T08:13Z) In-Reply-To: <5331F24F.2050700@ohlste.in> References: <4223F5A8-42E2-4368-9AAF-D3AA346D71F0@nginx.com> <5331A199.8080209@ohlste.in> <5331F24F.2050700@ohlste.in> Message-ID: On 25 March 2014 21:17, Jim Ohlstein wrote: > [what does] that "T08:13Z" mean The Z suffix indicates UTC, as per section 2 of http://www.ietf.org/rfc/rfc3339.txt. HTH, Jonathan -- Jonathan Matthews Oxford, London, UK http://www.jpluscplusm.com/contact.html From alex.thegreat at ambix.net Wed Mar 26 00:30:21 2014 From: alex.thegreat at ambix.net (Alex) Date: Wed, 26 Mar 2014 00:30:21 -0000 Subject: Location directive not working Message-ID: <3502280ee8d310193652b9e4d148c098.squirrel@promomerch1.servername.com> I set up two upstream servers (server 1 and server 2) where I geotarget people from specific countries: people from country A go to server 1 and the others go to server 2. There are some URIs that must be served by server 1 when the person is in country A. So I have created this directive. It works only when I include one item, when I do (videos|events) it does not work

location ^~ /(videos|events) {
    proxy_pass http://$server1$request_uri;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

I do it this way because there are URLs that contain videos and events that must be served by server 1. For example, domain.com/videos/playing-with-my-dog etc. When I reload this directive nginx does not give me any errors, but when I try to access it from country B, it does not work. Thanks for all your help! Alex From nginx-forum at nginx.us Wed Mar 26 01:14:42 2014 From: nginx-forum at nginx.us (VernJensen) Date: Tue, 25 Mar 2014 21:14:42 -0400 Subject: Setting up nginx as Visual Studio 2010 project In-Reply-To: <30c947514885130584670a9e176c8c3e.NginxMailingListEnglish@forum.nginx.org> References: <1979d5146e9ab6c19013841c788b3bb1.NginxMailingListEnglish@forum.nginx.org> <30c947514885130584670a9e176c8c3e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <03656a5be4a574e118a59a8371fb4525.NginxMailingListEnglish@forum.nginx.org> This is almost a year later, but I've written up additional details on how to create a Visual Studio project for nginx. http://stackoverflow.com/questions/21486482/compile-nginx-with-visual-studio/22649559#22649559 Given that I'm not an advanced unix user, I had to figure a number of things out while doing this, and those things are all documented here. Enjoy! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227198,248703#msg-248703 From alex.thegreat at ambix.net Wed Mar 26 02:25:21 2014 From: alex.thegreat at ambix.net (Alex) Date: Wed, 26 Mar 2014 02:25:21 -0000 Subject: Issue with Upstream servers & GeoIP Message-ID: I am diverting traffic from one specific country (XX) to a specific APP server (Server for country XX) using an upstream definition and geoip. See definition below. Everything works well except when people who live in country XX try to access information that is hosted in the other server - this is because they may get the URI by doing searching on Google or other search engine.
So we are trying to set Nginx so that if we get a URI that is hosted in the other server, the user from country XX can still access it. See the "FUN" location definition. The problem comes when the person located in country XX sees the domain.com/fun uri. Then, he/she gets all the text, but any references to css or jpegs are served by the server for country XX, not by the standard server that contains the "fun" information. How can I set it so that whatever is being accessed is served entirely by the appropriate server that hosts the information? Here is a partial definition of my nginx.conf file:

##
# GeoIP Settings
##
geoip_country /pathto/GeoIP.dat;

# CREATE A MAP FOR GEOTARGET SESSIONS WITH GEOIP COUNTRY CODE
map $geoip_country_code $backend {
    default default;
    XX      XX;
}

# SERVER DEFINITIONS
# UPSTREAM SERVERS, ONE FOR STANDARD SITE & OTHER FOR SOCIAL NETWORK
upstream default.backend {
    server 192.192.192.192:80;  # Standard Server for all countries
}

upstream XX.backend {
    server 193.193.193.193:80;  # Server for country XX.
}

server {
    listen      80;
    server_name domain.com;

    # FOR PEOPLE WHO LIVE IN COUNTRY XX: WHEN THEY ACCESS anything with fun in the URI
    # we should let the default server handle the requests.
    location ^~ /fun/ {
        proxy_pass http://default.backend$request_uri;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://$backend.backend;
    }
}

From francis at daoine.org Wed Mar 26 09:03:24 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 26 Mar 2014 09:03:24 +0000 Subject: Location directive not working In-Reply-To: <3502280ee8d310193652b9e4d148c098.squirrel@promomerch1.servername.com> References: <3502280ee8d310193652b9e4d148c098.squirrel@promomerch1.servername.com> Message-ID: <20140326090324.GE29880@craic.sysops.org> On Wed, Mar 26, 2014 at 12:30:21AM -0000, Alex wrote: Hi there, > It works only when I include > one item, when I do (videos|events) it does not work > > location ^~ /(videos|events) { That is not a regex location. A request for /videos/ will not match it.
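To make the difference concrete, here is a minimal sketch; "backend1" is an illustrative upstream name, not something taken from this thread. With the "^~" modifier the location is a literal prefix match, so "(", "|" and ")" are compared as ordinary characters; matching several alternatives needs a regex location introduced by "~" or "~*":

    # prefix match: only matches URIs that literally start with "/(videos|events)"
    location ^~ /(videos|events) {
        proxy_pass http://backend1;
    }

    # case-insensitive regex match: matches /videos/... and /events/...
    location ~* ^/(videos|events) {
        proxy_pass http://backend1;
    }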
>> > > > > >> > > > > On Tue, Mar 18, 2014 at 03:26:10PM +0400, kyprizel wrote: >> > > > > >> > > > > > Hi, >> > > > > > currently SSL session lifetime and SSL ticket lifetime are >> equal in >> > > > > nginx. >> > > > > > >> > > > > > If we use session tickets with big enough lifetime (12hrs), we >> get a >> > > lot >> > > > > of >> > > > > > error log messages while allocating new sessions in shared >> memory: >> > > > > > >> > > > > > 2014/03/18 13:36:08 [crit] 18730#0: ngx_slab_alloc() failed: no >> > > memory in >> > > > > > SSL session shared cache "SSL" >> > > > > > >> > > > > > We don't want to increase session cache size b/c working with >> it is a >> > > > > > blocking operation and I believe it doesn't work good enought >> in our >> > > > > > network scheme. >> > > > > >> > > > > Just a side note: I don't think that size matters from performance >> > > > > point of view. The only real downside is memory used. >> > > > > >> > > > > > As I can see - those messages are generated by >> ngx_slab_alloc_pages() >> > > > > even >> > > > > > if session was added to the cache after expiration of some old >> ones. >> > > > > > >> > > > > > So, what do you think if we add one more config parameter to >> split >> > > > > session >> > > > > > cache and session ticket lifetimes? >> > > > > >> > > > > May be better approach will be to just avoid such messages? >> > > > > >> > > > > -- >> > > > > Maxim Dounin >> > > > > http://nginx.org/ >> > > > > >> > > > > _______________________________________________ >> > > > > nginx mailing list >> > > > > nginx at nginx.org >> > > > > http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > >> > > >> > > > _______________________________________________ >> > > > nginx mailing list >> > > > nginx at nginx.org >> > > > http://mailman.nginx.org/mailman/listinfo/nginx >> > > >> > > >> > > -- >> > > Maxim Dounin >> > > http://nginx.org/ >> > > >> > > _______________________________________________ >> > > nginx mailing list >> > > nginx at nginx.org >> > > http://mailman.nginx.org/mailman/listinfo/nginx >> > > >> >> >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Mar 26 11:04:16 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 26 Mar 2014 15:04:16 +0400 Subject: Issue with Upstream servers & GeoIP In-Reply-To: References: Message-ID: <20140326110416.GH34696@mdounin.ru> Hello! On Wed, Mar 26, 2014 at 02:25:21AM -0000, Alex wrote: > I am diverting traffic from one specific country (XX) to a specific APP > server (Server for country XX) using an upstream definition and geoip. See > definition below. > > Everything works well except when people who live in country XX try to > access information that is hosted in the other server - this is because > they may get the URI by doing searching on Google or other search engine. > So we are trying to set Nginx so that if we get a URI that is hosted in > the other server, the user from country XX can still access it. See the > "FUN" location definition. > > The problem comes when the person located in country XX sees the > domain.com/fun uri. 
Then, he/she gets all the text, but any reference of > css, jpegs are served by server from country XX, not by the standard > server that contains the :fun" information. How can I set it so that is > being accessed be served entirely by the appropriate server that hosts the > information? An obvious solution would be to return appropriate refernces to css/jpegs/etc from your "standard server". -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Mar 26 12:14:58 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 26 Mar 2014 16:14:58 +0400 Subject: Nginx POST handler module caching response In-Reply-To: <7783a2975a8f7030aeb26066077468bb@ruby-forum.com> References: <7783a2975a8f7030aeb26066077468bb@ruby-forum.com> Message-ID: <20140326121458.GJ34696@mdounin.ru> Hello! On Tue, Mar 25, 2014 at 09:03:22PM +0100, Mapper Uno wrote: > Hi, > > I have written a small nginx module that processes POST request and > sends back in response the same data that is received in POST request. I > am testing this with curl utility and closely monitoring nginx log which > is set to debug level. > > (My module's server is listening on port 9000) > > # curl --data "This is nginx helper forum" http://localhost:9000 > This is nginx helper forum > > # curl --data "Ruby forum" http://localhost:9000 > This is ng > > # curl --data "ngx" http://localhost:9000 > Thi > > However, if I reload the nginx config (nginx -s reload), I see the > correct string > > # curl --data "Ruby forum" http://localhost:9000 > Ruby forum > > When I look at the nginx log file, I can see the correct incoming data > string every time in my handler. I also do ngx_pcalloc for my output > buffer string. > > However, why is the cached/stale buffer string returned by nginx in > response. I tried to look up relevant config directive but failed to > correlate to this behaviour. Looks like you did something wrong. It's highly unlikely though that somebody will be able to say what exactly without seeing your code. -- Maxim Dounin http://nginx.org/ From alex.thegreat at ambix.net Wed Mar 26 12:31:28 2014 From: alex.thegreat at ambix.net (Alex) Date: Wed, 26 Mar 2014 12:31:28 -0000 Subject: Location directive not working In-Reply-To: <20140326090324.GE29880@craic.sysops.org> References: <3502280ee8d310193652b9e4d148c098.squirrel@promomerch1.servername.com> <20140326090324.GE29880@craic.sysops.org> Message-ID: Francis - Thank you for your reply! This OR regex : location ~* \.(gif|jpg|jpeg)$ works on jogs and gifs, why can not I use the same syntax for the URIs as I am doing it? Does it mean then that I have to define each location block one by one with different URIs even though all the information inside the location block is the same? Is there a regex to match the OR condition just as you do with jogs? Thanks for your help! > On Wed, Mar 26, 2014 at 12:30:21AM -0000, Alex wrote: > > Hi there, > >> It works only when I include >> one item, when I do (videos|events) it does not work >> >> location ^~ /(videos|events) { > > That is not a regex location. A request for /videos/ will not match it. 
> > http://nginx.org/r/location > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From alex.thegreat at ambix.net Wed Mar 26 12:31:30 2014 From: alex.thegreat at ambix.net (Alex) Date: Wed, 26 Mar 2014 12:31:30 -0000 Subject: Location directive not working In-Reply-To: <20140326090324.GE29880@craic.sysops.org> References: <3502280ee8d310193652b9e4d148c098.squirrel@promomerch1.servername.com> <20140326090324.GE29880@craic.sysops.org> Message-ID: <84625527a76a7094ae740a4ddfa82998.squirrel@promomerch1.servername.com> Francis - Thank you for your reply! This OR regex : location ~* \.(gif|jpg|jpeg)$ works on jogs and gifs, why can not I use the same syntax for the URIs as I am doing it? Does it mean then that I have to define each location block one by one with different URIs even though all the information inside the location block is the same? Is there a regex to match the OR condition just as you do with jogs? Thanks for your help! > On Wed, Mar 26, 2014 at 12:30:21AM -0000, Alex wrote: > > Hi there, > >> It works only when I include >> one item, when I do (videos|events) it does not work >> >> location ^~ /(videos|events) { > > That is not a regex location. A request for /videos/ will not match it. > > http://nginx.org/r/location > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at nginx.us Wed Mar 26 13:03:34 2014 From: nginx-forum at nginx.us (stremovsky) Date: Wed, 26 Mar 2014 09:03:34 -0400 Subject: nginx ssl certificate via variable Message-ID: <17e0175bc143c7899cc3bfbdc0fee2fd.NginxMailingListEnglish@forum.nginx.org> Hello ! When it will be possible to use variables with ssl_certificate in nginx configuration ? It has been discussed several times already in the passed. For example here: http://forum.nginx.org/read.php?29,235397,235408 http://serverfault.com/questions/505015/nginx-use-server-name-on-ssl-certificate-path Thanks Yuli Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248715,248715#msg-248715 From jim at ohlste.in Wed Mar 26 13:23:34 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Wed, 26 Mar 2014 09:23:34 -0400 Subject: Nginx forum returns 502 (2014-03-25T08:13Z) In-Reply-To: References: <4223F5A8-42E2-4368-9AAF-D3AA346D71F0@nginx.com> <5331A199.8080209@ohlste.in> <5331F24F.2050700@ohlste.in> Message-ID: <5332D4D6.9020105@ohlste.in> Hello, On 3/25/14, 6:50 PM, Jonathan Matthews wrote: > On 25 March 2014 21:17, Jim Ohlstein wrote: >> [what does] that "T08:13Z" mean > > The Z suffix indicates UTC, as per section 2 of > http://www.ietf.org/rfc/rfc3339.txt. > > HTH, > Jonathan > Thank you for educating me. It's a busy server and I turn over logs frequently so unfortunately those requests are gone. However, it seems that there are about half a dozen 502's this morning and I'm at a bit of a loss to explain them other than they all are coming from IPv6 addresses. A little background perhaps, and maybe someone can offer assistance. I've run a variety of PHP apps over the years. My preferences is to use FPM, but for some reason lately on this server, it has had several episodes where it just stops accepting new requests. It's running, it just doesn't seem to "see" the requests. 
I hadn't made any changes to the configuration, and use more or less the same configuration on a bunch of other machines with no issue. About three or four days ago I switched the forum to use an apache22 backend for PHP requests, with RPAF enabled while I looked into the FPM issue. Of course that has gone swimmingly, except that IPv6 requests seem to be generating 502's. Looking at the apache logs I don't even see the requests. Of course, FPM also hasn't experienced the same issue either in that time. I believe it has something to do RPAF handles IPv6 addresses. For now I have put the forum back on a PHP-FPM backend while I try to sort this out. If I continue to have this problem I'll move the forum to another server if need be. -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From francis at daoine.org Wed Mar 26 13:34:59 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 26 Mar 2014 13:34:59 +0000 Subject: Location directive not working In-Reply-To: <84625527a76a7094ae740a4ddfa82998.squirrel@promomerch1.servername.com> References: <3502280ee8d310193652b9e4d148c098.squirrel@promomerch1.servername.com> <20140326090324.GE29880@craic.sysops.org> <84625527a76a7094ae740a4ddfa82998.squirrel@promomerch1.servername.com> Message-ID: <20140326133459.GA14908@daoine.org> On Wed, Mar 26, 2014 at 12:31:30PM -0000, Alex wrote: Hi there, > This OR regex : location ~* \.(gif|jpg|jpeg)$ Which part of that line says that it is a regex? (The answer is in the documentation.) > works on jogs and gifs, why can not I use the same syntax for the URIs as > I am doing it? If you used the same syntax, it would probably work. > >> location ^~ /(videos|events) { is not the same syntax. Good luck with it, f -- Francis Daly francis at daoine.org From reallfqq-nginx at yahoo.fr Wed Mar 26 13:56:09 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 26 Mar 2014 14:56:09 +0100 Subject: Nginx forum returns 502 (2014-03-25T08:13Z) In-Reply-To: <5332D4D6.9020105@ohlste.in> References: <4223F5A8-42E2-4368-9AAF-D3AA346D71F0@nginx.com> <5331A199.8080209@ohlste.in> <5331F24F.2050700@ohlste.in> <5332D4D6.9020105@ohlste.in> Message-ID: Thanks Jonathan for the pointer. I was going to throw some WIkipedia page about it but RFC is definitely the best! I am glad you discovered I was not alone Jim, thought I am a little sad that you were that harsh welcoming my input as, if I may, I also did the reporting on my free time, and without it I am unsure whether you would have investigated anything? On a side note, I know what working for free in whatever structure is. I just rarely used that as an excuse to take the right of sending people of good faith away. I do not know if those 502 are directly tied to some IPv6 request, since I have both v4/v6 connectivity. I do not know how switch/fallback between versions occur during normal browsing. I recovered access to the URL, but following your explanation the 502 were due to the temporary apache setup. - I know little about Apache and especially RPAF module, but Googling a little around I found that: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=726529 I do not know if that would help, but who knows? If you are using Debian, it seems there was an update at the end of January. - For PHP stopping accepting requests, since it is not crashing and you confirm the traffic volume is high, I would bet on the exhaustion of threads being able to accept new requests. Looks like the usual symptoms. 
Ironically, I found some answers on the Nginx ML archive that would help improving threads pool and PHP jobs execution time limit: http://forum.nginx.org/read.php?2,108162 I hope you will find your way out of there, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Mar 26 14:02:50 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 26 Mar 2014 10:02:50 -0400 Subject: Nginx forum returns 502 (2014-03-25T08:13Z) In-Reply-To: <5332D4D6.9020105@ohlste.in> References: <5332D4D6.9020105@ohlste.in> Message-ID: <34e2a292d4092c3af4b3eb26d6f95369.NginxMailingListEnglish@forum.nginx.org> Maybe you need to switch over to nginx :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248660,248719#msg-248719 From nginx-forum at nginx.us Wed Mar 26 14:09:42 2014 From: nginx-forum at nginx.us (stremovsky) Date: Wed, 26 Mar 2014 10:09:42 -0400 Subject: nginx SSL/SNI phase In-Reply-To: <532694C4.4050909@kearsley.me> References: <532694C4.4050909@kearsley.me> Message-ID: <55cde0a8acab19dcce1620956258f3c1.NginxMailingListEnglish@forum.nginx.org> Hello I think it can be a great feature for big production environments ! Yuli Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248429,248722#msg-248722 From bruno.premont at restena.lu Wed Mar 26 14:10:03 2014 From: bruno.premont at restena.lu (Bruno =?UTF-8?B?UHLDqW1vbnQ=?=) Date: Wed, 26 Mar 2014 15:10:03 +0100 Subject: Nginx forum returns 502 (2014-03-25T08:13Z) In-Reply-To: References: <4223F5A8-42E2-4368-9AAF-D3AA346D71F0@nginx.com> <5331A199.8080209@ohlste.in> <5331F24F.2050700@ohlste.in> <5332D4D6.9020105@ohlste.in> Message-ID: <20140326151003.730da37e@pluto> On Wed, 26 Mar 2014 14:56:09 +0100 B.R. wrote: > Thanks Jonathan for the pointer. I was going to throw some WIkipedia page > about it but RFC is definitely the best! > > I am glad you discovered I was not alone Jim, thought I am a little sad > that you were that harsh welcoming my input as, if I may, I also did the > reporting on my free time, and without it I am unsure whether you would > have investigated anything? > On a side note, I know what working for free in whatever structure is. I > just rarely used that as an excuse to take the right of sending people of > good faith away. > > I do not know if those 502 are directly tied to some IPv6 request, since I > have both v4/v6 connectivity. I do not know how switch/fallback between > versions occur during normal browsing. > I recovered access to the URL, but following your explanation the 502 were > due to the temporary apache setup. > > - I know little about Apache and especially RPAF module, but Googling a > little around I found that: > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=726529 > I do not know if that would help, but who knows? If you are using Debian, > it seems there was an update at the end of January. Yeah, vanilla mod_rpaf-0.6 does not handle IPv6 addresses well. Be careful with the patch you choose, some fix the textual representation of REMOTE_ADDR but still break on Apache-side access control (e.g. on mis-match between proxy connection address family and header-passed address family). The patch I'm using successfully here is inlined below. Bruno > - For PHP stopping accepting requests, since it is not crashing and you > confirm the traffic volume is high, I would bet on the exhaustion of > threads being able to accept new requests. Looks like the usual symptoms. 
> Ironically, I found some answers on the Nginx ML archive that would help > improving threads pool and PHP jobs execution time limit: > http://forum.nginx.org/read.php?2,108162 > > I hope you will find your way out of there, --- diff -NurpP a/mod_rpaf.c b/mod_rpaf.c --- a/mod_rpaf.c 2014-02-17 09:21:08.278411786 +0100 +++ b/mod_rpaf.c 2014-02-17 10:20:18.083940819 +0100 @@ -173,6 +173,7 @@ static int change_remote_ip(request_rec } if (fwdvalue) { + apr_sockaddr_t *tmpsa; rpaf_cleanup_rec *rcr = (rpaf_cleanup_rec *)apr_pcalloc(r->pool, sizeof(rpaf_cleanup_rec)); apr_array_header_t *arr = apr_array_make(r->pool, 0, sizeof(char*)); while (*fwdvalue && (val = ap_get_token(r->pool, &fwdvalue, 1))) { @@ -184,7 +185,8 @@ static int change_remote_ip(request_rec rcr->r = r; apr_pool_cleanup_register(r->pool, (void *)rcr, rpaf_cleanup, apr_pool_cleanup_null); r->connection->remote_ip = apr_pstrdup(r->connection->pool, ((char **)arr->elts)[((arr->nelts)-1)]); - r->connection->remote_addr->sa.sin.sin_addr.s_addr = apr_inet_addr(r->connection->remote_ip); + if (apr_sockaddr_info_get(&tmpsa, r->connection->remote_ip, APR_UNSPEC, r->connection->remote_addr->port, 0, r->connection->remote_addr->pool) == APR_SUCCESS) + memcpy(r->connection->remote_addr, tmpsa, sizeof(apr_sockaddr_t)); if (cfg->sethostname) { const char *hostvalue; if ((hostvalue = apr_table_get(r->headers_in, "X-Forwarded-Host"))) { -------------- next part -------------- A non-text attachment was scrubbed... Name: mod_rpaf-0.6-ipv6.patch Type: text/x-patch Size: 1371 bytes Desc: not available URL: From jim at ohlste.in Wed Mar 26 14:11:57 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Wed, 26 Mar 2014 10:11:57 -0400 Subject: Nginx forum returns 502 (2014-03-25T08:13Z) In-Reply-To: References: <4223F5A8-42E2-4368-9AAF-D3AA346D71F0@nginx.com> <5331A199.8080209@ohlste.in> <5331F24F.2050700@ohlste.in> <5332D4D6.9020105@ohlste.in> Message-ID: <5332E02D.1000601@ohlste.in> Hello, On 3/26/14, 9:56 AM, B.R. wrote: > > I do not know if those 502 are directly tied to some IPv6 request, since > I have both v4/v6 connectivity. I do not know how switch/fallback > between versions occur during normal browsing. > I recovered access to the URL, but following your explanation the 502 > were due to the temporary apache setup. > > - I know little about Apache and especially RPAF module, but Googling a > little around I found that: > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=726529 > I do not know if that would help, but who knows? If you are using > Debian, it seems there was an update at the end of January. I saw that and others like it. Not using Debian but I suspect that RPAF is the culprit. I could test it, but then I have to scour the logs to find the IP's of spammers and the like. They'd all show up as 137.0.0.1. I could simply have Igor delete the AAAA record and that would solve the problem for people with dual IPv4/IPv6 connectivity, but I'm not excited to do that. I've been serving the forum on IPv6 for years. It is, however an option, at least for now. > > - For PHP stopping accepting requests, since it is not crashing and you > confirm the traffic volume is high, I would bet on the exhaustion of > threads being able to accept new requests. Looks like the usual symptoms. > Ironically, I found some answers on the Nginx ML archive that would help > improving threads pool and PHP jobs execution time limit: > http://forum.nginx.org/read.php?2,108162 That's not the case. Not even close to out of resources. 
The FPM children are there, they're "listening", they're just not "hearing". -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From jim at ohlste.in Wed Mar 26 14:15:01 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Wed, 26 Mar 2014 10:15:01 -0400 Subject: Nginx forum returns 502 (2014-03-25T08:13Z) In-Reply-To: <20140326151003.730da37e@pluto> References: <4223F5A8-42E2-4368-9AAF-D3AA346D71F0@nginx.com> <5331A199.8080209@ohlste.in> <5331F24F.2050700@ohlste.in> <5332D4D6.9020105@ohlste.in> <20140326151003.730da37e@pluto> Message-ID: <5332E0E5.4070408@ohlste.in> Hello, On 3/26/14, 10:10 AM, Bruno Pr?mont wrote: > > Yeah, vanilla mod_rpaf-0.6 does not handle IPv6 addresses well. > > Be careful with the patch you choose, some fix the textual > representation of REMOTE_ADDR but still break on Apache-side access > control (e.g. on mis-match between proxy connection address family and > header-passed address family). > > The patch I'm using successfully here is inlined below. > > Bruno > > > --- > diff -NurpP a/mod_rpaf.c b/mod_rpaf.c > --- a/mod_rpaf.c 2014-02-17 09:21:08.278411786 +0100 > +++ b/mod_rpaf.c 2014-02-17 10:20:18.083940819 +0100 > @@ -173,6 +173,7 @@ static int change_remote_ip(request_rec > } > > if (fwdvalue) { > + apr_sockaddr_t *tmpsa; > rpaf_cleanup_rec *rcr = (rpaf_cleanup_rec *)apr_pcalloc(r->pool, sizeof(rpaf_cleanup_rec)); > apr_array_header_t *arr = apr_array_make(r->pool, 0, sizeof(char*)); > while (*fwdvalue && (val = ap_get_token(r->pool, &fwdvalue, 1))) { > @@ -184,7 +185,8 @@ static int change_remote_ip(request_rec > rcr->r = r; > apr_pool_cleanup_register(r->pool, (void *)rcr, rpaf_cleanup, apr_pool_cleanup_null); > r->connection->remote_ip = apr_pstrdup(r->connection->pool, ((char **)arr->elts)[((arr->nelts)-1)]); > - r->connection->remote_addr->sa.sin.sin_addr.s_addr = apr_inet_addr(r->connection->remote_ip); > + if (apr_sockaddr_info_get(&tmpsa, r->connection->remote_ip, APR_UNSPEC, r->connection->remote_addr->port, 0, r->connection->remote_addr->pool) == APR_SUCCESS) > + memcpy(r->connection->remote_addr, tmpsa, sizeof(apr_sockaddr_t)); > if (cfg->sethostname) { > const char *hostvalue; > if ((hostvalue = apr_table_get(r->headers_in, "X-Forwarded-Host"))) { > > Thank you Bruno! I will try this a bit later, when things have settled down here. -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From jim at ohlste.in Wed Mar 26 14:18:02 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Wed, 26 Mar 2014 10:18:02 -0400 Subject: Nginx forum returns 502 (2014-03-25T08:13Z) In-Reply-To: <34e2a292d4092c3af4b3eb26d6f95369.NginxMailingListEnglish@forum.nginx.org> References: <5332D4D6.9020105@ohlste.in> <34e2a292d4092c3af4b3eb26d6f95369.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5332E19A.8000807@ohlste.in> Hello, On 3/26/14, 10:02 AM, itpp2012 wrote: > Maybe you need to switch over to nginx :) > Haha. Did I miss the announcement that someone wrote a PHP handler for nginx? ;) Seriously, perhaps you misunderstood, or perhaps, as I suspect, this is a joke. I am using nginx (1.5.12). -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From nginx-forum at nginx.us Wed Mar 26 16:09:27 2014 From: nginx-forum at nginx.us (skyice) Date: Wed, 26 Mar 2014 12:09:27 -0400 Subject: How can I rewrite an url with GET parameters ? 
Message-ID: <80cdf49b5d4ae94297aefa2ab8b521c8.NginxMailingListEnglish@forum.nginx.org> Hello, How can I have this : http://example.org/en/somepage.php For : http://example.org/somepage.php?locale=en ( locale=en is always present) Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248727,248727#msg-248727 From vbart at nginx.com Wed Mar 26 16:24:16 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 26 Mar 2014 20:24:16 +0400 Subject: How can I rewrite an url with GET parameters ? In-Reply-To: <80cdf49b5d4ae94297aefa2ab8b521c8.NginxMailingListEnglish@forum.nginx.org> References: <80cdf49b5d4ae94297aefa2ab8b521c8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1527265.bVKOKCfBRz@vbart-laptop> On Wednesday 26 March 2014 12:09:27 skyice wrote: > Hello, > > How can I have this : http://example.org/en/somepage.php > > For : http://example.org/somepage.php?locale=en > > ( locale=en is always present) > > Thanks. > location =/somepage.php { rewrite ^ /$arg_locale/somepage.php? last; } -- wbr, Valentin V. Bartenev From alex.thegreat at ambix.net Wed Mar 26 16:44:49 2014 From: alex.thegreat at ambix.net (Alex) Date: Wed, 26 Mar 2014 16:44:49 -0000 Subject: Location directive not working In-Reply-To: <20140326133459.GA14908@daoine.org> References: <3502280ee8d310193652b9e4d148c098.squirrel@promomerch1.servername.com> <20140326090324.GE29880@craic.sysops.org> <84625527a76a7094ae740a4ddfa82998.squirrel@promomerch1.servername.com> <20140326133459.GA14908@daoine.org> Message-ID: Francis - What I meant was that I was using an OR statement as it is used in the jpg example. If I do this: location ^~ /videos/ { # it works location ^~ /(videos|events) { #it does not work. So I assume is the OR that does not work. Any thoughts? Thanks, Alex > On Wed, Mar 26, 2014 at 12:31:30PM -0000, Alex wrote: > > Hi there, > >> This OR regex : location ~* \.(gif|jpg|jpeg)$ > > Which part of that line says that it is a regex? > > (The answer is in the documentation.) > >> works on jogs and gifs, why can not I use the same syntax for the URIs >> as >> I am doing it? > > If you used the same syntax, it would probably work. > >> >> location ^~ /(videos|events) { > > is not the same syntax. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From francis at daoine.org Wed Mar 26 17:14:56 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 26 Mar 2014 17:14:56 +0000 Subject: Location directive not working In-Reply-To: References: <3502280ee8d310193652b9e4d148c098.squirrel@promomerch1.servername.com> <20140326090324.GE29880@craic.sysops.org> <84625527a76a7094ae740a4ddfa82998.squirrel@promomerch1.servername.com> <20140326133459.GA14908@daoine.org> Message-ID: <20140326171456.GA16942@daoine.org> On Wed, Mar 26, 2014 at 04:44:49PM -0000, Alex wrote: Hi there, > location ^~ /(videos|events) { #it does not work. So I assume is the OR > that does not work. > > Any thoughts? The documentation is at http://nginx.org/r/location. It's all useful, but the fourth sentence is particularly relevant. > >> This OR regex : location ~* \.(gif|jpg|jpeg)$ > > > > Which part of that line says that it is a regex? When you know the answer to that question, you'll probably see that the various squiggles after the word "location" aren't just random. 
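[Editor's note: the point being hinted at here is that the "^~" modifier marks a literal prefix match, so in "location ^~ /(videos|events)" the parentheses and "|" are taken as literal characters rather than a regex alternation; only the "~" and "~*" modifiers make nginx treat the location as a regular expression. A minimal sketch of both forms, using the /videos/ and /events/ paths from this thread; the root path is only a placeholder:

    # literal prefix match: nothing here is interpreted as a regex,
    # and a match on this prefix also stops the search for regex locations
    location ^~ /videos/ {
        root /var/www/html;
    }

    # case-insensitive regex match: the (videos|events) alternation
    # only works with the ~ or ~* modifiers
    location ~* ^/(videos|events)/ {
        root /var/www/html;
    }

The behaviour of each modifier is described at http://nginx.org/r/location.]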
f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Mar 26 17:16:54 2014 From: nginx-forum at nginx.us (skyice) Date: Wed, 26 Mar 2014 13:16:54 -0400 Subject: How can I rewrite an url with GET parameters ? In-Reply-To: <1527265.bVKOKCfBRz@vbart-laptop> References: <1527265.bVKOKCfBRz@vbart-laptop> Message-ID: <19be6b9429078793625cb9cf66e67634.NginxMailingListEnglish@forum.nginx.org> Ok thanks but somepage.php is just an examplle and I want this for all pages of my website :/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248727,248732#msg-248732 From reallfqq-nginx at yahoo.fr Wed Mar 26 17:36:52 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 26 Mar 2014 18:36:52 +0100 Subject: Location directive not working In-Reply-To: <20140326171456.GA16942@daoine.org> References: <3502280ee8d310193652b9e4d148c098.squirrel@promomerch1.servername.com> <20140326090324.GE29880@craic.sysops.org> <84625527a76a7094ae740a4ddfa82998.squirrel@promomerch1.servername.com> <20140326133459.GA14908@daoine.org> <20140326171456.GA16942@daoine.org> Message-ID: Wow Francis. You are a patient man. I love the way you teach things. I mean it all. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.thegreat at ambix.net Wed Mar 26 21:38:32 2014 From: alex.thegreat at ambix.net (Alex) Date: Wed, 26 Mar 2014 21:38:32 -0000 Subject: Location directive not working In-Reply-To: <20140326171456.GA16942@daoine.org> References: <3502280ee8d310193652b9e4d148c098.squirrel@promomerch1.servername.com> <20140326090324.GE29880@craic.sysops.org> <84625527a76a7094ae740a4ddfa82998.squirrel@promomerch1.servername.com> <20140326133459.GA14908@daoine.org> <20140326171456.GA16942@daoine.org> Message-ID: <7018e5977cc06ce59b4642ac8065e6aa.squirrel@promomerch1.servername.com> Thank you for your answer! I read the paragraph. I changed the preceding modifier to ~* and it works now. I made a mistake addressing the OR condition and calling it regex sorry. Thank you for your help! > On Wed, Mar 26, 2014 at 04:44:49PM -0000, Alex wrote: > > Hi there, > >> location ^~ /(videos|events) { #it does not work. So I assume is the OR >> that does not work. >> >> Any thoughts? > > The documentation is at http://nginx.org/r/location. > > It's all useful, but the fourth sentence is particularly relevant. > >> >> This OR regex : location ~* \.(gif|jpg|jpeg)$ >> > >> > Which part of that line says that it is a regex? > > When you know the answer to that question, you'll probably see that the > various squiggles after the word "location" aren't just random. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From alex.thegreat at ambix.net Wed Mar 26 21:55:33 2014 From: alex.thegreat at ambix.net (Alex) Date: Wed, 26 Mar 2014 21:55:33 -0000 Subject: Issue with Upstream servers & GeoIP In-Reply-To: <20140326110416.GH34696@mdounin.ru> References: <20140326110416.GH34696@mdounin.ru> Message-ID: <8939988f113fa17cbf834ed7945598ab.squirrel@promomerch1.servername.com> Maxinm - Thank you for your answer! It is not a root location issue because if I am not in the country that is being geotargetted then I can see css, etc. everything without any problems. The issue comes when the user is in country XX and sees that page. 
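[Editor's note: for readers following this thread, a minimal sketch of the kind of split being described -- an upstream chosen per country with the geoip module, plus a location that pins static assets to the standard backend so the country-specific server never needs a copy of them. The upstream names, addresses, country code and extension list are illustrative assumptions, not the poster's actual configuration, and geoip_country is assumed to be configured in the http block:

    # choose a backend from the visitor's country code
    map $geoip_country_code $app_backend {
        default standard_app;
        XX      country_xx_app;
    }

    upstream standard_app   { server 10.0.0.1:8080; }
    upstream country_xx_app { server 10.0.0.2:8080; }

    server {
        listen 80;

        # dynamic pages follow the per-country split
        location / {
            proxy_pass http://$app_backend;
        }

        # css/js/images are always served through the standard backend,
        # so they still resolve when the page itself came from country XX
        location ~* \.(css|js|jpe?g|gif|png)$ {
            proxy_pass http://standard_app;
        }
    }
]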
If I look at the error messages on the page I get a lot of 404s for images and css and js because they can not be found since all css are being served by the server in country XX and that server does not have it. I tested this by creating a location with a string that all the images references contain and point it to be served by the server outside of the country XX and all images display perfectly. How can I circumvent this problem? Alex > Hello! > > On Wed, Mar 26, 2014 at 02:25:21AM -0000, Alex wrote: > >> I am diverting traffic from one specific country (XX) to a specific APP >> server (Server for country XX) using an upstream definition and geoip. >> See >> definition below. >> >> Everything works well except when people who live in country XX try to >> access information that is hosted in the other server - this is because >> they may get the URI by doing searching on Google or other search >> engine. >> So we are trying to set Nginx so that if we get a URI that is hosted in >> the other server, the user from country XX can still access it. See the >> "FUN" location definition. >> >> The problem comes when the person located in country XX sees the >> domain.com/fun uri. Then, he/she gets all the text, but any reference >> of >> css, jpegs are served by server from country XX, not by the standard >> server that contains the :fun" information. How can I set it so that is >> being accessed be served entirely by the appropriate server that hosts >> the >> information? > > An obvious solution would be to return appropriate refernces to > css/jpegs/etc from your "standard server". > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From steve at greengecko.co.nz Wed Mar 26 23:14:09 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Thu, 27 Mar 2014 12:14:09 +1300 Subject: install on Amazon EC2 Message-ID: <1395875649.31659.268.camel@steve-new> Any idea when we can install the standard 1.4.7 package from the nginx repo on Amazon EC2? Error: Package: nginx-1.4.7-1.el6.ngx.x86_64 (nginx) Requires: libcrypto.so.10(OPENSSL_1.0.1_EC)(64bit) Cheers, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From r at roze.lv Thu Mar 27 00:46:25 2014 From: r at roze.lv (Reinis Rozitis) Date: Thu, 27 Mar 2014 02:46:25 +0200 Subject: install on Amazon EC2 In-Reply-To: <1395875649.31659.268.camel@steve-new> References: <1395875649.31659.268.camel@steve-new> Message-ID: > Any idea when we can install the standard 1.4.7 package from the nginx > repo on Amazon EC2? > Error: Package: nginx-1.4.7-1.el6.ngx.x86_64 (nginx) > Requires: libcrypto.so.10(OPENSSL_1.0.1_EC)(64bit) So why not install the required openssl package? http://rpm.pbone.net/index.php3/stat/4/idpl/25012482/dir/centos_6/com/openssl-1.0.1e-16.el6_5.x86_64.rpm.html rr From kunalvjti at gmail.com Thu Mar 27 01:33:56 2014 From: kunalvjti at gmail.com (Kunal Pariani) Date: Wed, 26 Mar 2014 18:33:56 -0700 Subject: Build nginx on Mac OS X mavericks Message-ID: Hello, Has anyone tried building nginx code on Mac ? I get the following error while building. 
Have installed pcre and other needed libraries *src/mail/ngx_mail_handler.c:1152:30: **error: **use of undeclared identifier 'sasl_callback_ft'; did you mean* * 'sasl_callback_t'?* callbacks[i].proc = (sasl_callback_ft)&ngx_mail_sasl_log; * ^* */usr/include/sasl/sasl.h:349:3: note: *'sasl_callback_t' declared here } sasl_callback_t; * ^* *src/mail/ngx_mail_handler.c:1157:30: **error: **use of undeclared identifier 'sasl_callback_ft'; did you mean* * 'sasl_callback_t'?* callbacks[i].proc = (sasl_callback_ft)&ngx_mail_sasl_pauthorize; * ^* */usr/include/sasl/sasl.h:349:3: note: *'sasl_callback_t' declared here } sasl_callback_t; * ^* 2 errors generated. make[2]: *** [objs/src/mail/ngx_mail_handler.o] Error 1 make[1]: *** [install] Error 2 make: *** [build] Error 2 Thanks -Kunal -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Mar 27 02:37:41 2014 From: nginx-forum at nginx.us (emclab) Date: Wed, 26 Mar 2014 22:37:41 -0400 Subject: Nginx not detecting the root location from config for Rails 3.2 app Message-ID: Here is the error.log of nginx server running on ubuntu 12.04. 2014/03/17 12:47:17 [error] 7939#0: *1 open() "/opt/nginx/html/mkl/authentify/signin" failed (2: No such file or directory), client: xxx.xxx.228.66, server: xxx.xxx.109.181, request: "GET /mkl/authentify/signin HTTP/1.1", host: "xxx.xxx.109.181" In /opt/nginx/conf/nginx.conf, it is configured as following: server_name xxx.xxx.109.181; root /ebs/www/; passenger_enabled on; rails_env production; passenger_base_uri /mkl; The root of nginx server is pointing to /ebs/www/. However the nginx is accessing the /opt/nginx and throws out no such file error. What causes the problem? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248743,248743#msg-248743 From nginx-forum at nginx.us Thu Mar 27 05:18:40 2014 From: nginx-forum at nginx.us (SupaIrish) Date: Thu, 27 Mar 2014 01:18:40 -0400 Subject: Nginx not detecting the root location from config for Rails 3.2 app In-Reply-To: References: Message-ID: "root" needs to point to the /public directory of your Rails app. And I don't think you need to use passenger_base_uri My nginx using passenger config looks like: server { listen 80; server_name XXXXXX; root /home/deploy/apps/wtca/current/public; passenger_enabled on; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248743,248744#msg-248744 From lists at ruby-forum.com Thu Mar 27 06:10:17 2014 From: lists at ruby-forum.com (Haward Backmanh) Date: Thu, 27 Mar 2014 07:10:17 +0100 Subject: Medical Billing Software Automates the Process of Billing In-Reply-To: <8dfbbb54d74df6a9dd0d2d168b6bbb04.NginxMailingListEnglish@forum.nginx.org> References: <8dfbbb54d74df6a9dd0d2d168b6bbb04.NginxMailingListEnglish@forum.nginx.org> Message-ID: <217a2f621e2d1e9abd7a9a85cbbd80f2@ruby-forum.com> You cannot just say that medical billing is beneficial only for poor people. The basic use of Medical Billing and Coding is to gain maximum reimbursement for the diagnosis and treatments undertaken. The main job involved here is to avoid denials by the insurance companies and collect the payments faster. This proves to be beneficial to both the patient as well as the healthcare service provider. -- Posted via http://www.ruby-forum.com/. 
From lists at ruby-forum.com Thu Mar 27 07:00:27 2014 From: lists at ruby-forum.com (Haward Backmanh) Date: Thu, 27 Mar 2014 08:00:27 +0100 Subject: Home Based Medical Services Vs Medical Billing Companies Message-ID: With an increase in demand of medical billing and coding professionals, many professionals have taken up the job of providing these services from home. Though these services may seem to be quite cheap, there are a lot of difficulties that a healthcare service provider might face with these home based services. The Following link will lead you to a post that has well explained how services from a medical billing company will prove to be more beneficial than those from a home based set up. http://medicodingsolutions.wordpress.com/2014/03/06/the-risks-involved-in-hiring-a-home-based-medical-billing-service/ -- Posted via http://www.ruby-forum.com/. From shahzaib.cb at gmail.com Thu Mar 27 07:37:29 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 27 Mar 2014 12:37:29 +0500 Subject: Core dump messages in /var/log/messages !! Message-ID: Mar 27 08:05:44 DNTX014 abrt[10150]: Saved core dump of pid 5803 (/usr/local/sbin/nginx) to /var/spool/abrt/ccpp-2014-03-27-12:05:44-5803 (60538880 bytes) Could someone tell me what is that ? Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From makailol7 at gmail.com Thu Mar 27 09:31:22 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Thu, 27 Mar 2014 15:01:22 +0530 Subject: How to send proxy cache status to backend server? In-Reply-To: <20140320132636.GE34696@mdounin.ru> References: <20140314154142.GO34696@mdounin.ru> <20140317025022.GQ34696@mdounin.ru> <20140319141532.GY34696@mdounin.ru> <20140320132636.GE34696@mdounin.ru> Message-ID: Hi Maxim, Apart from passing cache status to backend, would it be possible to send some other headers which are stored in cache? For example, If backed sets header "Foo : Bar" , which is stored in cache. Now when cache is expired , request will be sent to backend. At that time can we send the value of Foo header stored in cache to upstream backend? I tried to achieve this with below code but it could not work. proxy_set_header Foo $upstream_http_Foo; Would you suggest me how to achieve this or what am I doing wrong here. Thanks, Makailol On Thu, Mar 20, 2014 at 6:56 PM, Maxim Dounin wrote: > Hello! > > On Thu, Mar 20, 2014 at 09:38:40AM +0530, Makailol Charls wrote: > > > Hi, > > > > Is there some way to achieve this? I want to pass requests to backend > based > > on cache status condition. > > This is not something easily possible, as cache status is only > known after we started processing proxy_pass and already know > which backend will be used. (Note that by default proxy_cache_key > uses $proxy_host, which wouldn't be known otherwise.) > > If you want to check BYPASS as in your previous message, I would > recommend checking relevant conditions from proxy_cache_bypass > separately. As a more generic though less effective aproach, an > additional proxy layer may be used. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Mar 27 11:11:23 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 27 Mar 2014 15:11:23 +0400 Subject: Core dump messages in /var/log/messages !! 
In-Reply-To: References: Message-ID: <20140327111123.GR34696@mdounin.ru> Hello! On Thu, Mar 27, 2014 at 12:37:29PM +0500, shahzaib shahzaib wrote: > Mar 27 08:05:44 DNTX014 abrt[10150]: Saved core dump of pid 5803 > (/usr/local/sbin/nginx) to /var/spool/abrt/ccpp-2014-03-27-12:05:44-5803 > (60538880 bytes) > > > Could someone tell me what is that ? http://wiki.nginx.org/Debugging -- Maxim Dounin http://nginx.org/ From jonathan at jpluscplusm.com Thu Mar 27 11:12:12 2014 From: jonathan at jpluscplusm.com (Jonathan Matthews) Date: Thu, 27 Mar 2014 11:12:12 +0000 Subject: nginx ssl certificate via variable In-Reply-To: <17e0175bc143c7899cc3bfbdc0fee2fd.NginxMailingListEnglish@forum.nginx.org> References: <17e0175bc143c7899cc3bfbdc0fee2fd.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 26 March 2014 13:03, stremovsky wrote: > Hello ! > > When it will be possible to use variables with ssl_certificate in nginx > configuration ? > > It has been discussed several times already in the passed. For example > here: > > http://forum.nginx.org/read.php?29,235397,235408 > http://serverfault.com/questions/505015/nginx-use-server-name-on-ssl-certificate-path Are you asking from the perspective of SNI, or just HTTPS? From nginx-forum at nginx.us Thu Mar 27 11:12:34 2014 From: nginx-forum at nginx.us (Dougadan) Date: Thu, 27 Mar 2014 07:12:34 -0400 Subject: 404 on Prestashop 1.5 under nginx In-Reply-To: <6b0a84278b782f269feb995650e60fb9.NginxMailingListEnglish@forum.nginx.org> References: <11dd6835df4553fc33fe979e715af0b6.NginxMailingListEnglish@forum.nginx.org> <8e2815f8643f4e8d18847b06d2c8b809.NginxMailingListEnglish@forum.nginx.org> <6b0a84278b782f269feb995650e60fb9.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi tonimarmol, can you elaborate. I'm having this issue on fresh install of 1.6. Under SEO & URL the Order Confirmation is going to a default Prestashop page, order-confirmation. Apparently there is already a default page where customers should be redirected after purchase. Where would I change? Thanks much Daniel Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239630,248751#msg-248751 From luky-37 at hotmail.com Thu Mar 27 11:15:21 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 27 Mar 2014 12:15:21 +0100 Subject: Core dump messages in /var/log/messages !! In-Reply-To: References: Message-ID: Hi, > Mar 27 08:05:44 DNTX014 abrt[10150]: Saved core dump of pid 5803? > (/usr/local/sbin/nginx) to? > /var/spool/abrt/ccpp-2014-03-27-12:05:44-5803 (60538880 bytes)? >? >? > Could someone tell me what is that ?? Its a crash. Provide output of "/usr/local/sbin/nginx -V" and check: http://nginx.org/en/docs/debugging_log.html Regards, Lukas From shahzaib.cb at gmail.com Thu Mar 27 11:16:47 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 27 Mar 2014 16:16:47 +0500 Subject: Core dump messages in /var/log/messages !! In-Reply-To: References: Message-ID: nginx -V nginx version: nginx/1.2.8 I have found not a single issue with 1.2.1 but all other versions have some kind of issue. Regards. Shahzaib On Thu, Mar 27, 2014 at 4:15 PM, Lukas Tribus wrote: > Hi, > > > > Mar 27 08:05:44 DNTX014 abrt[10150]: Saved core dump of pid 5803 > > (/usr/local/sbin/nginx) to > > /var/spool/abrt/ccpp-2014-03-27-12:05:44-5803 (60538880 bytes) > > > > > > Could someone tell me what is that ? > > Its a crash. 
> > > Provide output of "/usr/local/sbin/nginx -V" and check: > http://nginx.org/en/docs/debugging_log.html > > > Regards, > > Lukas > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Mar 27 11:17:55 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 27 Mar 2014 15:17:55 +0400 Subject: Build nginx on Mac OS X mavericks In-Reply-To: References: Message-ID: <20140327111755.GS34696@mdounin.ru> Hello! On Wed, Mar 26, 2014 at 06:33:56PM -0700, Kunal Pariani wrote: > Hello, > Has anyone tried building nginx code on Mac ? I get the following error > while building. Have installed pcre and other needed libraries > > *src/mail/ngx_mail_handler.c:1152:30: **error: **use of undeclared > identifier 'sasl_callback_ft'; did you mean* > > * 'sasl_callback_t'?* > > callbacks[i].proc = (sasl_callback_ft)&ngx_mail_sasl_log; It doesn't looks like nginx source code. Likely you are using something with local or 3rd party patches. Unmodified version is available from http://nginx.org/en/download.html. -- Maxim Dounin http://nginx.org/ From shahzaib.cb at gmail.com Thu Mar 27 11:18:58 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 27 Mar 2014 16:18:58 +0500 Subject: Core dump messages in /var/log/messages !! In-Reply-To: References: Message-ID: sorry the same issue have found with nginx-1.2.1 :( On Thu, Mar 27, 2014 at 4:16 PM, shahzaib shahzaib wrote: > nginx -V > nginx version: nginx/1.2.8 > > > I have found not a single issue with 1.2.1 but all other versions have > some kind of issue. > > Regards. > Shahzaib > > > On Thu, Mar 27, 2014 at 4:15 PM, Lukas Tribus wrote: > >> Hi, >> >> >> > Mar 27 08:05:44 DNTX014 abrt[10150]: Saved core dump of pid 5803 >> > (/usr/local/sbin/nginx) to >> > /var/spool/abrt/ccpp-2014-03-27-12:05:44-5803 (60538880 bytes) >> > >> > >> > Could someone tell me what is that ? >> >> Its a crash. >> >> >> Provide output of "/usr/local/sbin/nginx -V" and check: >> http://nginx.org/en/docs/debugging_log.html >> >> >> Regards, >> >> Lukas >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Mar 27 11:22:19 2014 From: nginx-forum at nginx.us (tonimarmol) Date: Thu, 27 Mar 2014 07:22:19 -0400 Subject: 404 on Prestashop 1.5 under nginx In-Reply-To: References: <11dd6835df4553fc33fe979e715af0b6.NginxMailingListEnglish@forum.nginx.org> <8e2815f8643f4e8d18847b06d2c8b809.NginxMailingListEnglish@forum.nginx.org> <6b0a84278b782f269feb995650e60fb9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1d115fbd3bdd05c94c50d2f9e892aad2.NginxMailingListEnglish@forum.nginx.org> You must go to "SEO & URLS" under "Preferences" tab, and add all pages/sections/modules that don't have a friendly url. (Press the add button, and it will show you the pages without a friendly url) Maybe you need to add the SEO URL of a payment module. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239630,248757#msg-248757 From shahzaib.cb at gmail.com Thu Mar 27 11:24:14 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 27 Mar 2014 16:24:14 +0500 Subject: Core dump messages in /var/log/messages !! 
In-Reply-To: References: Message-ID: I enabled debugging and following is the error_log : 2014/03/27 15:26:09 [debug] 8185#0: *2118538 http write filter: l:0 f:1 s:141632 2014/03/27 15:26:09 [debug] 8185#0: *2118538 http write filter limit 131072 2014/03/27 15:26:09 [debug] 8185#0: *2118538 writev: 131072 2014/03/27 15:26:09 [debug] 8185#0: *2118538 http write filter 000000000130A8B0 2014/03/27 15:26:09 [debug] 8185#0: *2118538 event timer del: 27: 1395916149293 2014/03/27 15:26:09 [debug] 8185#0: *2118538 event timer add: 27: 213:1395915969928 2014/03/27 15:26:09 [debug] 8185#0: *2118538 http copy filter: -2 "/files/videos/2014/03/24/1395674643a4202-240.mp4?" 2014/03/27 15:26:09 [debug] 8185#0: *2118538 http writer output filter: -2, "/files/videos/2014/03/24/1395674643a4202-240.mp4?" 2014/03/27 15:26:09 [debug] 8178#0: *2118703 event timer del: 90: 1395915969743 2014/03/27 15:26:09 [debug] 8178#0: *2118703 http run request: "/files/videos/2014/03/25/13957610474dea7-240.mp4?" 2014/03/27 15:26:09 [debug] 8178#0: *2118703 http writer handler: "/files/videos/2014/03/25/13957610474dea7-240.mp4?" 2014/03/27 15:26:09 [debug] 8178#0: *2118703 event timer add: 90: 180000:1395916149744 2014/03/27 15:26:09 [debug] 8178#0: *2118688 event timer del: 79: 1395915969760 2014/03/27 15:26:09 [debug] 8178#0: *2118688 http run request: "/files/videos/2014/03/23/1395581722c99fc-240.mp4?" 2014/03/27 15:26:09 [debug] 8178#0: *2118688 http writer handler: "/files/videos/2014/03/23/1395581722c99fc-240.mp4?" 2014/03/27 15:26:09 [debug] 8178#0: *2118688 http output filter "/files/videos/2014/03/23/1395581722c99fc-240.mp4?" 2014/03/27 15:26:09 [debug] 8178#0: *2118688 http copy filter: "/files/videos/2014/03/23/1395581722c99fc-240.mp4?" 2014/03/27 15:26:09 [debug] 8178#0: *2118688 http postpone filter "/files/videos/2014/03/23/1395581722c99fc-240.mp4?" 0000000000000000 2014/03/27 15:26:09 [debug] 8178#0: *2118688 write old buf t:1 f:0 0000000001E92650, pos 0000000001EF2650, size: 131072 file: 0, size: 0 2014/03/27 15:26:09 [debug] 8178#0: *2118688 http write filter: l:0 f:1 s:131072 2014/03/27 15:26:09 [debug] 8178#0: *2118688 http write filter limit 131072 2014/03/27 15:26:09 [debug] 8178#0: *2118688 writev: 131072 2014/03/27 15:26:09 [debug] 8178#0: *2118688 http write filter 0000000000000000 2014/03/27 15:26:09 [debug] 8178#0: *2118688 event timer add: 79: 213:1395915969973 2014/03/27 15:26:09 [debug] 8178#0: *2118688 read: 80, 0000000001E92650, 524288, 3306231 2014/03/27 15:26:09 [debug] 8178#0: *2118688 http postpone filter "/files/videos/2014/03/23/1395581722c99fc-240.mp4?" 0000000001493110 2014/03/27 15:26:09 [debug] 8178#0: *2118688 write new buf t:1 f:0 0000000001E92650, pos 0000000001E92650, size: 524288 file: 0, size: 0 2014/03/27 15:26:09 [debug] 8178#0: *2118688 http write filter: l:0 f:1 s:524288 2014/03/27 15:26:09 [debug] 8178#0: *2118688 http copy filter: -2 "/files/videos/2014/03/23/1395581722c99fc-240.mp4?" 2014/03/27 15:26:09 [debug] 8178#0: *2118688 http writer output filter: -2, "/files/videos/2014/03/23/1395581722c99fc-240.mp4?" Regards. Shahzaib On Thu, Mar 27, 2014 at 4:18 PM, shahzaib shahzaib wrote: > sorry the same issue have found with nginx-1.2.1 :( > > > On Thu, Mar 27, 2014 at 4:16 PM, shahzaib shahzaib wrote: > >> nginx -V >> nginx version: nginx/1.2.8 >> >> >> I have found not a single issue with 1.2.1 but all other versions have >> some kind of issue. >> >> Regards. 
>> Shahzaib >> >> >> On Thu, Mar 27, 2014 at 4:15 PM, Lukas Tribus wrote: >> >>> Hi, >>> >>> >>> > Mar 27 08:05:44 DNTX014 abrt[10150]: Saved core dump of pid 5803 >>> > (/usr/local/sbin/nginx) to >>> > /var/spool/abrt/ccpp-2014-03-27-12:05:44-5803 (60538880 bytes) >>> > >>> > >>> > Could someone tell me what is that ? >>> >>> Its a crash. >>> >>> >>> Provide output of "/usr/local/sbin/nginx -V" and check: >>> http://nginx.org/en/docs/debugging_log.html >>> >>> >>> Regards, >>> >>> Lukas >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Mar 27 11:32:58 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 27 Mar 2014 15:32:58 +0400 Subject: How to send proxy cache status to backend server? In-Reply-To: References: <20140314154142.GO34696@mdounin.ru> <20140317025022.GQ34696@mdounin.ru> <20140319141532.GY34696@mdounin.ru> <20140320132636.GE34696@mdounin.ru> Message-ID: <20140327113257.GT34696@mdounin.ru> Hello! On Thu, Mar 27, 2014 at 03:01:22PM +0530, Makailol Charls wrote: > Hi Maxim, > > Apart from passing cache status to backend, would it be possible to send > some other headers which are stored in cache? > > For example, If backed sets header "Foo : Bar" , which is stored in cache. > Now when cache is expired , request will be sent to backend. At that time > can we send the value of Foo header stored in cache to upstream backend? > > I tried to achieve this with below code but it could not work. > proxy_set_header Foo $upstream_http_Foo; > > Would you suggest me how to achieve this or what am I doing wrong here. This is not something possible. > > Thanks, > Makailol > > > > > On Thu, Mar 20, 2014 at 6:56 PM, Maxim Dounin wrote: > > > Hello! > > > > On Thu, Mar 20, 2014 at 09:38:40AM +0530, Makailol Charls wrote: > > > > > Hi, > > > > > > Is there some way to achieve this? I want to pass requests to backend > > based > > > on cache status condition. > > > > This is not something easily possible, as cache status is only > > known after we started processing proxy_pass and already know > > which backend will be used. (Note that by default proxy_cache_key > > uses $proxy_host, which wouldn't be known otherwise.) > > > > If you want to check BYPASS as in your previous message, I would > > recommend checking relevant conditions from proxy_cache_bypass > > separately. As a more generic though less effective aproach, an > > additional proxy layer may be used. > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/ From luky-37 at hotmail.com Thu Mar 27 11:36:46 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 27 Mar 2014 12:36:46 +0100 Subject: Core dump messages in /var/log/messages !! In-Reply-To: References: , , Message-ID: > nginx -V > nginx version: nginx/1.2.8 Thats not the complete output of -V (capital letter). Either you truncated the output by yourself or you sent us the output of -v. Please send the complete output of "nginx -V", where the V is a capital letter. Also compile nginx without third party modules. 
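[Editor's note: returning to the cache-status question answered above -- since $upstream_cache_status only exists once proxy_pass is already running, the workable pattern is the one quoted elsewhere in this thread: evaluate the bypass conditions separately, before proxying, and reuse that value. A minimal sketch, where the cookie name, cache zone and backend address are assumptions, not anything from the original setup:

    # http context: compute the bypass flag from data known up front
    map $cookie_nocache $cache_bypass {
        default 0;
        1       1;
    }

    proxy_cache_path /var/cache/nginx keys_zone=my_zone:10m;
    upstream backend { server 127.0.0.1:8080; }

    server {
        listen 80;

        location / {
            proxy_cache        my_zone;
            proxy_cache_bypass $cache_bypass;
            # the same pre-computed flag can be sent to the backend, or
            # used to pick a different location/upstream, instead of
            # trying to forward $upstream_cache_status itself
            proxy_set_header   X-Cache-Bypass $cache_bypass;
            proxy_pass         http://backend;
        }
    }
]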
Thanks, Lukas From luky-37 at hotmail.com Thu Mar 27 11:38:10 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 27 Mar 2014 12:38:10 +0100 Subject: Core dump messages in /var/log/messages !! In-Reply-To: References: , , , Message-ID: > Thats not the complete output of -V (capital letter). Either you > truncated the output by yourself or you sent us the output of -v. > > Please send the complete output of "nginx -V", where the V is a > capital letter. > > > Also compile nginx without third party modules. And, also important, upgrade to latest stable?1.4.7. From shahzaib.cb at gmail.com Thu Mar 27 11:38:15 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 27 Mar 2014 16:38:15 +0500 Subject: Core dump messages in /var/log/messages !! In-Reply-To: References: Message-ID: [root at DNTX002 nginx-1.2.1]# nginx -V nginx version: nginx/1.2.1 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) configure arguments: --add-module=/root/nginx_mod_h264_streaming-2.2.7 --with-http_flv_module --with-file-aio --sbin-path=/usr/local/sbin --with-debug On Thu, Mar 27, 2014 at 4:36 PM, Lukas Tribus wrote: > > nginx -V > > nginx version: nginx/1.2.8 > > Thats not the complete output of -V (capital letter). Either you > truncated the output by yourself or you sent us the output of -v. > > Please send the complete output of "nginx -V", where the V is a > capital letter. > > > Also compile nginx without third party modules. > > > > Thanks, > > Lukas > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From makailol7 at gmail.com Thu Mar 27 11:38:31 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Thu, 27 Mar 2014 17:08:31 +0530 Subject: How to send proxy cache status to backend server? In-Reply-To: <20140327113257.GT34696@mdounin.ru> References: <20140314154142.GO34696@mdounin.ru> <20140317025022.GQ34696@mdounin.ru> <20140319141532.GY34696@mdounin.ru> <20140320132636.GE34696@mdounin.ru> <20140327113257.GT34696@mdounin.ru> Message-ID: Hi, Would it be possible to add this as new feature? Is there some other alternative ? Actually based on this header value I want to select named based location. Thanks, Makailol On Thu, Mar 27, 2014 at 5:02 PM, Maxim Dounin wrote: > Hello! > > On Thu, Mar 27, 2014 at 03:01:22PM +0530, Makailol Charls wrote: > > > Hi Maxim, > > > > Apart from passing cache status to backend, would it be possible to send > > some other headers which are stored in cache? > > > > For example, If backed sets header "Foo : Bar" , which is stored in > cache. > > Now when cache is expired , request will be sent to backend. At that time > > can we send the value of Foo header stored in cache to upstream backend? > > > > I tried to achieve this with below code but it could not work. > > proxy_set_header Foo $upstream_http_Foo; > > > > Would you suggest me how to achieve this or what am I doing wrong here. > > This is not something possible. > > > > > Thanks, > > Makailol > > > > > > > > > > On Thu, Mar 20, 2014 at 6:56 PM, Maxim Dounin > wrote: > > > > > Hello! > > > > > > On Thu, Mar 20, 2014 at 09:38:40AM +0530, Makailol Charls wrote: > > > > > > > Hi, > > > > > > > > Is there some way to achieve this? I want to pass requests to backend > > > based > > > > on cache status condition. 
> > > > > > This is not something easily possible, as cache status is only > > > known after we started processing proxy_pass and already know > > > which backend will be used. (Note that by default proxy_cache_key > > > uses $proxy_host, which wouldn't be known otherwise.) > > > > > > If you want to check BYPASS as in your previous message, I would > > > recommend checking relevant conditions from proxy_cache_bypass > > > separately. As a more generic though less effective aproach, an > > > additional proxy layer may be used. > > > > > > -- > > > Maxim Dounin > > > http://nginx.org/ > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu Mar 27 11:40:19 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 27 Mar 2014 16:40:19 +0500 Subject: Core dump messages in /var/log/messages !! In-Reply-To: References: Message-ID: we're using nginx for streaming videos like youtube.com. Usually it is 5000 concurrent connections on the server. Should we upgrde to 1.4.7 . It is stable ? On Thu, Mar 27, 2014 at 4:38 PM, Lukas Tribus wrote: > > Thats not the complete output of -V (capital letter). Either you > > truncated the output by yourself or you sent us the output of -v. > > > > Please send the complete output of "nginx -V", where the V is a > > capital letter. > > > > > > Also compile nginx without third party modules. > > > And, also important, upgrade to latest stable 1.4.7. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Mar 27 11:57:29 2014 From: nginx-forum at nginx.us (Dougadan) Date: Thu, 27 Mar 2014 07:57:29 -0400 Subject: 404 on Prestashop 1.5 under nginx In-Reply-To: <1d115fbd3bdd05c94c50d2f9e892aad2.NginxMailingListEnglish@forum.nginx.org> References: <11dd6835df4553fc33fe979e715af0b6.NginxMailingListEnglish@forum.nginx.org> <8e2815f8643f4e8d18847b06d2c8b809.NginxMailingListEnglish@forum.nginx.org> <6b0a84278b782f269feb995650e60fb9.NginxMailingListEnglish@forum.nginx.org> <1d115fbd3bdd05c94c50d2f9e892aad2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <136ae3c9184cd38a9bf5b4f6ab2de1b3.NginxMailingListEnglish@forum.nginx.org> Thanks. Yes, under SEO & URLS, the Order Confirmation has a friendly URL, and PayPal module does not have any page for Order confirmation without a friendly URL. Under SEO & URL, Order Confirmation is going to a default Prestashop page, order-confirmation. After payment is completed in PayPal, when clicking the link to return to the site, I get to the site, but a 404 page not available for the order confirmation. Here is what the link shows: [mysite]/order-confirmation.php?id_cart=20&id_module=73&key=[key] Shopping cart ID is correct for the purchase. Under SEO & URL, Order Confirmation is going to a default Prestashop page, order-confirmation. 
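[Editor's note: the 404 described here happens because the payment module sends the buyer back to the legacy /order-confirmation.php URL. Besides the fixes discussed below, an nginx-side workaround is to rewrite that legacy URL onto the shop's front controller. This is only a sketch: the controller name and the idea that index.php dispatches it are assumptions about PrestaShop, not something confirmed in this thread, and the original query string (id_cart, id_module, key) is appended automatically by the rewrite:

    location = /order-confirmation.php {
        # hand the legacy PayPal return URL to the front controller;
        # nginx appends the original ?id_cart=...&key=... arguments
        rewrite ^ /index.php?controller=order-confirmation last;
    }
]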
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239630,248765#msg-248765 From luky-37 at hotmail.com Thu Mar 27 12:02:39 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 27 Mar 2014 13:02:39 +0100 Subject: Core dump messages in /var/log/messages !! In-Reply-To: References: , , , , Message-ID: Hi, > [root at DNTX002 nginx-1.2.1]# nginx -V? > nginx version: nginx/1.2.1? > built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)? > configure arguments: --add-module=/root/nginx_mod_h264_streaming-2.2.7? > --with-http_flv_module --with-file-aio --sbin-path=/usr/local/sbin? > --with-debug nginx_mod_h264_streaming is a unsupported third party module and probably causing your crash. There is generally no support for those modules on the mailing list. Drop nginx_mod_h264_streaming and migrate the official ngx_http_mp4_module: http://nginx.org/en/docs/http/ngx_http_mp4_module.html Please be aware that this also means changing configurations. And yes, nginx-1.4.7 is stable. Regards, Lukas From shahzaib.cb at gmail.com Thu Mar 27 12:04:45 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 27 Mar 2014 17:04:45 +0500 Subject: Core dump messages in /var/log/messages !! In-Reply-To: References: Message-ID: Thanks Lukas, now i saw these errors in /var/log/messages. Also due to 3rd party module ? Mar 27 12:04:47 DNTX002 kernel: nginx[8190]: segfault at 40174353c ip 000000000046a482 sp 00007fff1fd93810 error 4 in nginx[400000+92000] Mar 27 12:04:47 DNTX002 xinetd[6599]: START: nrpe pid=9442 from=209.59.234.13 Mar 27 12:04:47 DNTX002 abrtd: Directory 'ccpp-2014-03-27-16:04:47-8190' creation detected Mar 27 12:04:47 DNTX002 abrt[9441]: Saved core dump of pid 8190 (/usr/local/sbin/nginx) to /var/spool/abrt/ccpp-2014-03-27-16:04:47-8190 (67911680 bytes) Mar 27 12:04:47 DNTX002 xinetd[6599]: EXIT: nrpe status=0 pid=9442 duration=0(sec) Mar 27 12:04:48 DNTX002 abrtd: Executable '/usr/local/sbin/nginx' doesn't belong to any package Mar 27 12:04:48 DNTX002 abrtd: 'post-create' on '/var/spool/abrt/ccpp-2014-03-27-16:04:47-8190' exited with 1 Mar 27 12:04:48 DNTX002 abrtd: Corrupted or bad directory '/var/spool/abrt/ccpp-2014-03-27-16:04:47-8190', deleting Regards. Shahzaib On Thu, Mar 27, 2014 at 5:02 PM, Lukas Tribus wrote: > Hi, > > > > [root at DNTX002 nginx-1.2.1]# nginx -V > > nginx version: nginx/1.2.1 > > built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) > > configure arguments: --add-module=/root/nginx_mod_h264_streaming-2.2.7 > > --with-http_flv_module --with-file-aio --sbin-path=/usr/local/sbin > > --with-debug > > > nginx_mod_h264_streaming is a unsupported third party module and probably > causing your crash. There is generally no support for those modules on > the mailing list. > > Drop nginx_mod_h264_streaming and migrate the official ngx_http_mp4_module: > http://nginx.org/en/docs/http/ngx_http_mp4_module.html > > Please be aware that this also means changing configurations. > > > And yes, nginx-1.4.7 is stable. > > > > Regards, > > Lukas > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
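[Editor's note: a minimal sketch of what the configuration change mentioned above looks like once the third-party nginx_mod_h264_streaming module is dropped: pseudo-streaming for .mp4 files comes from the stock mp4 directive (nginx built with --with-http_mp4_module), and .flv files keep using the stock flv module. Paths and cache lifetimes are placeholders, not the poster's real values:

    location ~ \.mp4$ {
        mp4;                    # ?start=/?end= seeking handled by ngx_http_mp4_module
        root    /var/www/html;
        expires 7d;
    }

    location ~ \.flv$ {
        flv;                    # ?start= byte-offset seeking for FLV
        root    /var/www/html;
        expires 7d;
    }
]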
URL: From nginx-forum at nginx.us Thu Mar 27 12:15:36 2014 From: nginx-forum at nginx.us (tonimarmol) Date: Thu, 27 Mar 2014 08:15:36 -0400 Subject: 404 on Prestashop 1.5 under nginx In-Reply-To: <136ae3c9184cd38a9bf5b4f6ab2de1b3.NginxMailingListEnglish@forum.nginx.org> References: <11dd6835df4553fc33fe979e715af0b6.NginxMailingListEnglish@forum.nginx.org> <8e2815f8643f4e8d18847b06d2c8b809.NginxMailingListEnglish@forum.nginx.org> <6b0a84278b782f269feb995650e60fb9.NginxMailingListEnglish@forum.nginx.org> <1d115fbd3bdd05c94c50d2f9e892aad2.NginxMailingListEnglish@forum.nginx.org> <136ae3c9184cd38a9bf5b4f6ab2de1b3.NginxMailingListEnglish@forum.nginx.org> Message-ID: That paypal module I think it not compatible with Prestashop 1.6 because it's using order-confirmation.php file and this file is deleted on PS 1.6. I think the module will be updated soon... Anyways, have a look on Prestashop 1.5 and create the same file: order-confirmation.php References: Message-ID: One quick question, i've updated nginx to 1.4.7 with http_mp4_module. What if i go with the same config as before ? i.e server { listen 80; server_name storage10.domain.com storage10.gear3rd.com storage10.gear3rd.net; client_max_body_size 800m; # limit_rate 250k; # access_log /websites/theos.in/logs/access.log main; add_header "X-Content-Type-Options" "nosniff"; add_header Content-Type text/plain; add_header X-Download-Options noopen ; location / { root /var/www/html/domain; index index.html index.htm index.php; autoindex off; } location ~ \.(flv|jpg|jpeg)$ { flv; root /var/www/html/domain; # aio on; # directio 512; # output_buffers 1 2m; expires 7d; valid_referers none blocked tune.pk *.tune.pk *.facebook.com*. twitter.com *.domain.com *.gear3rd.net tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; if ($invalid_referer) { return 403; } } location ~ -720\.(mp4)$ { mp4; expires 7d; # limit_rate 1000k; root /var/www/html/domain; valid_referers none blocked tune.pk *.tune.pk *. facebook.com *.twitter.com *.domain.com *.gear3rd.net tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; if ($invalid_referer) { return 403; } } location ~ -480\.(mp4)$ { mp4; expires 7d; limit_rate 250k; root /var/www/html/domain; valid_referers none blocked tune.pk *.tune.pk *. facebook.com *.twitter.com *.domain.com *.gear3rd.net tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; if ($invalid_referer) { return 403; } } location ~ \.(mp4)$ { mp4; expires 7d; # add_header "X-Content-Type-Options" "nosniff"; # add_header Content-Type text/plain; root /var/www/html/domain; valid_referers none blocked tune.pk *.tune.pk *. facebook.com *.twitter.com *.domain.com *.gear3rd.net tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; if ($invalid_referer) { return 403; } } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ \.php$ { root /var/www/html/domain; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_read_timeout 10000; } location ~ /\.ht { deny all; } } Regards. Shahzaib On Thu, Mar 27, 2014 at 5:04 PM, shahzaib shahzaib wrote: > Thanks Lukas, now i saw these errors in /var/log/messages. Also due to 3rd > party module ? 
> > Mar 27 12:04:47 DNTX002 kernel: nginx[8190]: segfault at 40174353c ip > 000000000046a482 sp 00007fff1fd93810 error 4 in nginx[400000+92000] > Mar 27 12:04:47 DNTX002 xinetd[6599]: START: nrpe pid=9442 from= > 209.59.234.13 > Mar 27 12:04:47 DNTX002 abrtd: Directory 'ccpp-2014-03-27-16:04:47-8190' > creation detected > Mar 27 12:04:47 DNTX002 abrt[9441]: Saved core dump of pid 8190 > (/usr/local/sbin/nginx) to /var/spool/abrt/ccpp-2014-03-27-16:04:47-8190 > (67911680 bytes) > Mar 27 12:04:47 DNTX002 xinetd[6599]: EXIT: nrpe status=0 pid=9442 > duration=0(sec) > Mar 27 12:04:48 DNTX002 abrtd: Executable '/usr/local/sbin/nginx' doesn't > belong to any package > Mar 27 12:04:48 DNTX002 abrtd: 'post-create' on '/var/spool/abrt/ccpp- > 2014-03-27-16:04:47-8190' exited with 1 > Mar 27 12:04:48 DNTX002 abrtd: Corrupted or bad directory > '/var/spool/abrt/ccpp-2014-03-27-16:04:47-8190', deleting > > > Regards. > Shahzaib > > > On Thu, Mar 27, 2014 at 5:02 PM, Lukas Tribus wrote: > >> Hi, >> >> >> > [root at DNTX002 nginx-1.2.1]# nginx -V >> > nginx version: nginx/1.2.1 >> > built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) >> > configure arguments: --add-module=/root/nginx_mod_h264_streaming-2.2.7 >> > --with-http_flv_module --with-file-aio --sbin-path=/usr/local/sbin >> > --with-debug >> >> >> nginx_mod_h264_streaming is a unsupported third party module and probably >> causing your crash. There is generally no support for those modules on >> the mailing list. >> >> Drop nginx_mod_h264_streaming and migrate the official >> ngx_http_mp4_module: >> http://nginx.org/en/docs/http/ngx_http_mp4_module.html >> >> Please be aware that this also means changing configurations. >> >> >> And yes, nginx-1.4.7 is stable. >> >> >> >> Regards, >> >> Lukas >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu Mar 27 12:32:08 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 27 Mar 2014 17:32:08 +0500 Subject: Core dump messages in /var/log/messages !! 
In-Reply-To: References: Message-ID: You can check the following version now : [root at DNTX002 nginx-1.4.7]# nginx -V nginx version: nginx/1.4.7 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) configure arguments: --with-http_mp4_module --with-http_flv_module --with-file-aio --sbin-path=/usr/local/sbin --with-debug Still the /var/log/messages showing the following errors : Mar 27 12:32:12 DNTX002 kernel: nginx[12215]: segfault at 40172f51c ip 000000000046a482 sp 00007fff1fd93810 error 4 in nginx (deleted)[400000+92000] Mar 27 12:32:12 DNTX002 abrt[12401]: File '/usr/local/sbin/nginx' seems to be deleted Mar 27 12:32:12 DNTX002 abrt[12401]: Saved core dump of pid 12215 (/usr/local/sbin/nginx) to /var/spool/abrt/ccpp-2014-03-27-16:32:12-12215 (38105088 bytes) Mar 27 12:32:12 DNTX002 abrtd: Directory 'ccpp-2014-03-27-16:32:12-12215' creation detected Mar 27 12:32:12 DNTX002 abrtd: Executable '/usr/local/sbin/nginx' doesn't belong to any package Mar 27 12:32:12 DNTX002 abrtd: 'post-create' on '/var/spool/abrt/ccpp-2014-03-27-16:32:12-12215' exited with 1 Mar 27 12:32:12 DNTX002 abrtd: Corrupted or bad directory '/var/spool/abrt/ccpp-2014-03-27-16:32:12-12215', deleting Mar 27 12:32:29 DNTX002 xinetd[6599]: START: nrpe pid=12414 from=209.59.234.13 Mar 27 12:32:30 DNTX002 xinetd[6599]: EXIT: nrpe status=0 pid=12414 duration=1(sec) Mar 27 12:32:36 DNTX002 kernel: nginx[12226]: segfault at 401598e6c ip 000000000046a482 sp 00007fff1fd93810 error 4 in nginx (deleted)[400000+92000] Mar 27 12:32:36 DNTX002 abrt[12421]: File '/usr/local/sbin/nginx' seems to be deleted Mar 27 12:32:36 DNTX002 abrt[12421]: Saved core dump of pid 12226 (/usr/local/sbin/nginx) to /var/spool/abrt/ccpp-2014-03-27-16:32:36-12226 (54018048 bytes) Mar 27 12:32:36 DNTX002 abrtd: Directory 'ccpp-2014-03-27-16:32:36-12226' creation detected Mar 27 12:32:36 DNTX002 abrtd: Executable '/usr/local/sbin/nginx' doesn't belong to any package Mar 27 12:32:36 DNTX002 abrtd: 'post-create' on '/var/spool/abrt/ccpp-2014-03-27-16:32:36-12226' exited with 1 Mar 27 12:32:36 DNTX002 abrtd: Corrupted or bad directory '/var/spool/abrt/ccpp-2014-03-27-16:32:36-12226', deleting Mar 27 12:32:42 DNTX002 kernel: nginx[12220]: segfault at 400ffafac ip 000000000046a482 sp 00007fff1fd93810 error 4 in nginx (deleted)[400000+92000] Mar 27 12:32:42 DNTX002 abrt[12426]: File '/usr/local/sbin/nginx' seems to be deleted Mar 27 12:32:42 DNTX002 abrt[12426]: Not saving repeating crash in '/usr/local/sbin/nginx' Mar 27 12:32:49 DNTX002 kernel: nginx[12212]: segfault at 400ffb0fc ip 000000000046a482 sp 00007fff1fd93810 error 4 in nginx (deleted)[400000+92000] Mar 27 12:32:49 DNTX002 abrt[12431]: File '/usr/local/sbin/nginx' seems to be deleted Mar 27 12:32:49 DNTX002 abrt[12431]: Not saving repeating crash in '/usr/local/sbin/nginx' Mar 27 12:32:54 DNTX002 kernel: nginx[12219]: segfault at 400ff740c ip 000000000046a482 sp 00007fff1fd93810 error 4 in nginx (deleted)[400000+92000] Mar 27 12:32:54 DNTX002 abrt[12434]: File '/usr/local/sbin/nginx' seems to be deleted Mar 27 12:32:54 DNTX002 abrt[12434]: Not saving repeating crash in '/usr/local/sbin/nginx' On Thu, Mar 27, 2014 at 5:18 PM, shahzaib shahzaib wrote: > One quick question, i've updated nginx to 1.4.7 with http_mp4_module. What > if i go with the same config as before ? 
i.e > > server { > listen 80; > server_name storage10.domain.com storage10.gear3rd.com > storage10.gear3rd.net; > client_max_body_size 800m; > # limit_rate 250k; > # access_log /websites/theos.in/logs/access.log main; > add_header "X-Content-Type-Options" "nosniff"; > add_header Content-Type text/plain; > add_header X-Download-Options noopen ; > location / { > root /var/www/html/domain; > index index.html index.htm index.php; > autoindex off; > } > location ~ \.(flv|jpg|jpeg)$ { > flv; > root /var/www/html/domain; > # aio on; > # directio 512; > # output_buffers 1 2m; > expires 7d; > valid_referers none blocked tune.pk *.tune.pk *. > facebook.com *.twitter.com *.domain.com *.gear3rd.net tunemedia.tv > www.tunemedia.tv embed.tunemedia.tv; > if ($invalid_referer) { > return 403; > } > } > location ~ -720\.(mp4)$ { > mp4; > expires 7d; > # limit_rate 1000k; > root /var/www/html/domain; > valid_referers none blocked tune.pk *.tune.pk *. > facebook.com *.twitter.com *.domain.com *.gear3rd.net tunemedia.tv > www.tunemedia.tv embed.tunemedia.tv; > if ($invalid_referer) { > return 403; > } > } > location ~ -480\.(mp4)$ { > mp4; > expires 7d; > limit_rate 250k; > root /var/www/html/domain; > valid_referers none blocked tune.pk *.tune.pk *. > facebook.com *.twitter.com *.domain.com *.gear3rd.net tunemedia.tv > www.tunemedia.tv embed.tunemedia.tv; > if ($invalid_referer) { > return 403; > } > } > location ~ \.(mp4)$ { > mp4; > expires 7d; > # add_header "X-Content-Type-Options" "nosniff"; > # add_header Content-Type text/plain; > root /var/www/html/domain; > valid_referers none blocked tune.pk *.tune.pk *. > facebook.com *.twitter.com *.domain.com *.gear3rd.net tunemedia.tv > www.tunemedia.tv embed.tunemedia.tv; > if ($invalid_referer) { > return 403; > } > } > > # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 > location ~ \.php$ { > root /var/www/html/domain; > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > include fastcgi_params; > fastcgi_read_timeout 10000; > } > > location ~ /\.ht { > deny all; > } > } > > Regards. > Shahzaib > > > On Thu, Mar 27, 2014 at 5:04 PM, shahzaib shahzaib wrote: > >> Thanks Lukas, now i saw these errors in /var/log/messages. Also due to >> 3rd party module ? >> >> Mar 27 12:04:47 DNTX002 kernel: nginx[8190]: segfault at 40174353c ip >> 000000000046a482 sp 00007fff1fd93810 error 4 in nginx[400000+92000] >> Mar 27 12:04:47 DNTX002 xinetd[6599]: START: nrpe pid=9442 from= >> 209.59.234.13 >> Mar 27 12:04:47 DNTX002 abrtd: Directory 'ccpp-2014-03-27-16:04:47-8190' >> creation detected >> Mar 27 12:04:47 DNTX002 abrt[9441]: Saved core dump of pid 8190 >> (/usr/local/sbin/nginx) to /var/spool/abrt/ccpp-2014-03-27-16:04:47-8190 >> (67911680 bytes) >> Mar 27 12:04:47 DNTX002 xinetd[6599]: EXIT: nrpe status=0 pid=9442 >> duration=0(sec) >> Mar 27 12:04:48 DNTX002 abrtd: Executable '/usr/local/sbin/nginx' doesn't >> belong to any package >> Mar 27 12:04:48 DNTX002 abrtd: 'post-create' on '/var/spool/abrt/ccpp- >> 2014-03-27-16:04:47-8190' exited with 1 >> Mar 27 12:04:48 DNTX002 abrtd: Corrupted or bad directory >> '/var/spool/abrt/ccpp-2014-03-27-16:04:47-8190', deleting >> >> >> Regards. 
>> Shahzaib >> >> >> On Thu, Mar 27, 2014 at 5:02 PM, Lukas Tribus wrote: >> >>> Hi, >>> >>> >>> > [root at DNTX002 nginx-1.2.1]# nginx -V >>> > nginx version: nginx/1.2.1 >>> > built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) >>> > configure arguments: --add-module=/root/nginx_mod_h264_streaming-2.2.7 >>> > --with-http_flv_module --with-file-aio --sbin-path=/usr/local/sbin >>> > --with-debug >>> >>> >>> nginx_mod_h264_streaming is a unsupported third party module and probably >>> causing your crash. There is generally no support for those modules on >>> the mailing list. >>> >>> Drop nginx_mod_h264_streaming and migrate the official >>> ngx_http_mp4_module: >>> http://nginx.org/en/docs/http/ngx_http_mp4_module.html >>> >>> Please be aware that this also means changing configurations. >>> >>> >>> And yes, nginx-1.4.7 is stable. >>> >>> >>> >>> Regards, >>> >>> Lukas >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Mar 27 12:49:48 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 27 Mar 2014 16:49:48 +0400 Subject: How to send proxy cache status to backend server? In-Reply-To: References: <20140317025022.GQ34696@mdounin.ru> <20140319141532.GY34696@mdounin.ru> <20140320132636.GE34696@mdounin.ru> <20140327113257.GT34696@mdounin.ru> Message-ID: <20140327124947.GW34696@mdounin.ru> Hello! On Thu, Mar 27, 2014 at 05:08:31PM +0530, Makailol Charls wrote: > Hi, > > Would it be possible to add this as new feature? > > Is there some other alternative ? Actually based on this header value I > want to select named based location. Response headers of expires cached responses are not read from a cache file. If you really want this to happen, you may try to implement this, but I don't it's looks like a generally usable feature. In most if not all cases it will be just a waste of resources. > > > Thanks, > Makailol > > > On Thu, Mar 27, 2014 at 5:02 PM, Maxim Dounin wrote: > > > Hello! > > > > On Thu, Mar 27, 2014 at 03:01:22PM +0530, Makailol Charls wrote: > > > > > Hi Maxim, > > > > > > Apart from passing cache status to backend, would it be possible to send > > > some other headers which are stored in cache? > > > > > > For example, If backed sets header "Foo : Bar" , which is stored in > > cache. > > > Now when cache is expired , request will be sent to backend. At that time > > > can we send the value of Foo header stored in cache to upstream backend? > > > > > > I tried to achieve this with below code but it could not work. > > > proxy_set_header Foo $upstream_http_Foo; > > > > > > Would you suggest me how to achieve this or what am I doing wrong here. > > > > This is not something possible. > > > > > > > > Thanks, > > > Makailol > > > > > > > > > > > > > > > On Thu, Mar 20, 2014 at 6:56 PM, Maxim Dounin > > wrote: > > > > > > > Hello! > > > > > > > > On Thu, Mar 20, 2014 at 09:38:40AM +0530, Makailol Charls wrote: > > > > > > > > > Hi, > > > > > > > > > > Is there some way to achieve this? I want to pass requests to backend > > > > based > > > > > on cache status condition. > > > > > > > > This is not something easily possible, as cache status is only > > > > known after we started processing proxy_pass and already know > > > > which backend will be used. (Note that by default proxy_cache_key > > > > uses $proxy_host, which wouldn't be known otherwise.) 
> > > > > > > > If you want to check BYPASS as in your previous message, I would > > > > recommend checking relevant conditions from proxy_cache_bypass > > > > separately. As a more generic though less effective aproach, an > > > > additional proxy layer may be used. > > > > > > > > -- > > > > Maxim Dounin > > > > http://nginx.org/ > > > > > > > > _______________________________________________ > > > > nginx mailing list > > > > nginx at nginx.org > > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Mar 27 13:03:57 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 27 Mar 2014 17:03:57 +0400 Subject: Core dump messages in /var/log/messages !! In-Reply-To: References: Message-ID: <20140327130357.GY34696@mdounin.ru> Hello! On Thu, Mar 27, 2014 at 05:32:08PM +0500, shahzaib shahzaib wrote: > You can check the following version now : > > [root at DNTX002 nginx-1.4.7]# nginx -V > nginx version: nginx/1.4.7 > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) > configure arguments: --with-http_mp4_module --with-http_flv_module > --with-file-aio --sbin-path=/usr/local/sbin --with-debug > > > Still the /var/log/messages showing the following errors : > > Mar 27 12:32:12 DNTX002 kernel: nginx[12215]: segfault at 40172f51c ip > 000000000046a482 sp 00007fff1fd93810 error 4 in nginx > (deleted)[400000+92000] > Mar 27 12:32:12 DNTX002 abrt[12401]: File '/usr/local/sbin/nginx' seems to > be deleted According to the logs, it's previous version which segfaults (note that the file was deleted, hence the version on disk, if any, is different one). Likley you've forgot to upgrade the binary running. -- Maxim Dounin http://nginx.org/ From shahzaib.cb at gmail.com Thu Mar 27 13:08:44 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 27 Mar 2014 18:08:44 +0500 Subject: Core dump messages in /var/log/messages !! In-Reply-To: <20140327130357.GY34696@mdounin.ru> References: <20140327130357.GY34696@mdounin.ru> Message-ID: I upgraded nginx to nginx-1.4.7 and then issued the command nginx -s reload. Maybe nginx -s reload didn't actually used the latest binary. Well, now i killed all other nginx using following command : killall -9 nginx nginx (to start new binary) On Thu, Mar 27, 2014 at 6:03 PM, Maxim Dounin wrote: > Hello! 
> > On Thu, Mar 27, 2014 at 05:32:08PM +0500, shahzaib shahzaib wrote: > > > You can check the following version now : > > > > [root at DNTX002 nginx-1.4.7]# nginx -V > > nginx version: nginx/1.4.7 > > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) > > configure arguments: --with-http_mp4_module --with-http_flv_module > > --with-file-aio --sbin-path=/usr/local/sbin --with-debug > > > > > > Still the /var/log/messages showing the following errors : > > > > Mar 27 12:32:12 DNTX002 kernel: nginx[12215]: segfault at 40172f51c ip > > 000000000046a482 sp 00007fff1fd93810 error 4 in nginx > > (deleted)[400000+92000] > > Mar 27 12:32:12 DNTX002 abrt[12401]: File '/usr/local/sbin/nginx' seems > to > > be deleted > > According to the logs, it's previous version which segfaults (note > that the file was deleted, hence the version on disk, if any, is > different one). > > Likley you've forgot to upgrade the binary running. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Mar 27 13:13:55 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 27 Mar 2014 17:13:55 +0400 Subject: Core dump messages in /var/log/messages !! In-Reply-To: References: <20140327130357.GY34696@mdounin.ru> Message-ID: <20140327131355.GA34696@mdounin.ru> Hello! On Thu, Mar 27, 2014 at 06:08:44PM +0500, shahzaib shahzaib wrote: > I upgraded nginx to nginx-1.4.7 and then issued the command nginx -s > reload. Maybe nginx -s reload didn't actually used the latest binary. Well, > now i killed all other nginx using following command : > > killall -9 nginx > nginx (to start new binary) The "nginx -s reload" is a configuration reload command. It won't even try to change a binary, it will just sent a SIGHUP to a running master process. See here for a documentation on how to control nginx properly: http://nginx.org/en/docs/control.html -- Maxim Dounin http://nginx.org/ From shahzaib.cb at gmail.com Thu Mar 27 13:15:47 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 27 Mar 2014 18:15:47 +0500 Subject: Core dump messages in /var/log/messages !! In-Reply-To: <20140327131355.GA34696@mdounin.ru> References: <20140327130357.GY34696@mdounin.ru> <20140327131355.GA34696@mdounin.ru> Message-ID: Ohhh. THANKS a lot for explaining me that. :-) Sorry a silly job from my end . I'll monitor logs for while and let you know about the progress. I had a quick question in previous reply. Could you please check that one ? Regards. Shahzaib On Thu, Mar 27, 2014 at 6:13 PM, Maxim Dounin wrote: > Hello! > > On Thu, Mar 27, 2014 at 06:08:44PM +0500, shahzaib shahzaib wrote: > > > I upgraded nginx to nginx-1.4.7 and then issued the command nginx -s > > reload. Maybe nginx -s reload didn't actually used the latest binary. > Well, > > now i killed all other nginx using following command : > > > > killall -9 nginx > > nginx (to start new binary) > > The "nginx -s reload" is a configuration reload command. It won't > even try to change a binary, it will just sent a SIGHUP to a > running master process. 
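To make the distinction concrete, here is a sketch of the documented on-the-fly binary upgrade sequence, as opposed to a plain configuration reload; the pid file path is the common default and may differ on your installation:

    # start a new master process running the newly installed binary
    kill -USR2 `cat /var/run/nginx.pid`

    # the old master renames its pid file to nginx.pid.oldbin;
    # once the new workers are serving traffic, stop the old master gracefully
    kill -QUIT `cat /var/run/nginx.pid.oldbin`
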
> > See here for a documentation on how to control nginx properly: > http://nginx.org/en/docs/control.html > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu Mar 27 13:57:28 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 27 Mar 2014 18:57:28 +0500 Subject: Core dump messages in /var/log/messages !! In-Reply-To: References: <20140327130357.GY34696@mdounin.ru> <20140327131355.GA34696@mdounin.ru> Message-ID: Hello, I monitored the kernel logs (/var/log/messages) for while and not found a single error now. Psuedo streaming is also working with the same config. Thanks for quick help :-) But still waiting for the QUICK QUESTION's answer. Regards. Shahzaib On Thu, Mar 27, 2014 at 6:15 PM, shahzaib shahzaib wrote: > Ohhh. THANKS a lot for explaining me that. :-) > > Sorry a silly job from my end . > > I'll monitor logs for while and let you know about the progress. I had a > quick question in previous reply. Could you please check that one ? > > Regards. > Shahzaib > > > On Thu, Mar 27, 2014 at 6:13 PM, Maxim Dounin wrote: > >> Hello! >> >> On Thu, Mar 27, 2014 at 06:08:44PM +0500, shahzaib shahzaib wrote: >> >> > I upgraded nginx to nginx-1.4.7 and then issued the command nginx -s >> > reload. Maybe nginx -s reload didn't actually used the latest binary. >> Well, >> > now i killed all other nginx using following command : >> > >> > killall -9 nginx >> > nginx (to start new binary) >> >> The "nginx -s reload" is a configuration reload command. It won't >> even try to change a binary, it will just sent a SIGHUP to a >> running master process. >> >> See here for a documentation on how to control nginx properly: >> http://nginx.org/en/docs/control.html >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Mar 27 14:08:15 2014 From: nginx-forum at nginx.us (Dougadan) Date: Thu, 27 Mar 2014 10:08:15 -0400 Subject: 404 on Prestashop 1.5 under nginx In-Reply-To: References: <11dd6835df4553fc33fe979e715af0b6.NginxMailingListEnglish@forum.nginx.org> <8e2815f8643f4e8d18847b06d2c8b809.NginxMailingListEnglish@forum.nginx.org> <6b0a84278b782f269feb995650e60fb9.NginxMailingListEnglish@forum.nginx.org> <1d115fbd3bdd05c94c50d2f9e892aad2.NginxMailingListEnglish@forum.nginx.org> <136ae3c9184cd38a9bf5b4f6ab2de1b3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0d8f00812a5d5afd0394063f1834610f.NginxMailingListEnglish@forum.nginx.org> Thanks very much. I have been working on this all morning. After your message, there was a load of updates in 1.6, more than 65 modules, including PayPal. I was hoping, but it didn't fix the error. You were directly on target, becuase it called up the order-confirmation.php. So I uploaded these from my PS 1.5, all the order... php files in the root. Now, when returning to my site via the link in PayPal site, it now goes to my site with a list of the customer's orders. It's a work around, but done for now. 
Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239630,248780#msg-248780 From kworthington at gmail.com Thu Mar 27 14:36:38 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Thu, 27 Mar 2014 10:36:38 -0400 Subject: Build nginx on Mac OS X mavericks In-Reply-To: <20140327111755.GS34696@mdounin.ru> References: <20140327111755.GS34696@mdounin.ru> Message-ID: I wrote up a how to that might help as well: http://kevinworthington.com/nginx-for-mac-os-x-mavericks-in-2-minutes/ Best regards, Kevin -- Kevin Worthington kworthington at gmail.com http://kevinworthington.com/ http://twitter.com/kworthington On Thu, Mar 27, 2014 at 7:17 AM, Maxim Dounin wrote: > Hello! > > On Wed, Mar 26, 2014 at 06:33:56PM -0700, Kunal Pariani wrote: > > > Hello, > > Has anyone tried building nginx code on Mac ? I get the following error > > while building. Have installed pcre and other needed libraries > > > > *src/mail/ngx_mail_handler.c:1152:30: **error: **use of undeclared > > identifier 'sasl_callback_ft'; did you mean* > > > > * 'sasl_callback_t'?* > > > > callbacks[i].proc = (sasl_callback_ft)&ngx_mail_sasl_log; > > It doesn't looks like nginx source code. Likely you are using > something with local or 3rd party patches. Unmodified version is > available from http://nginx.org/en/download.html. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Mar 27 16:23:15 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 27 Mar 2014 20:23:15 +0400 Subject: SSL session cache lifetime vs session ticket lifetime In-Reply-To: References: <20140318113311.GL34696@mdounin.ru> <20140318160053.GX34696@mdounin.ru> <20140324115634.GI34696@mdounin.ru> Message-ID: <20140327162315.GC34696@mdounin.ru> Hello! On Wed, Mar 26, 2014 at 01:34:19PM +0400, kyprizel wrote: > will be "log_alloc_failures" better? I think something like "log_nomem" will be good enough. Patch: # HG changeset patch # User Maxim Dounin # Date 1395937285 -14400 # Thu Mar 27 20:21:25 2014 +0400 # Node ID 2cc8b9fc7efbf6a98ce29f3f860782a1ebd7e6cf # Parent 734f0babfc133c2dc532f2794deadcf9d90245f7 Core: slab log_nomem flag. The flag allows to suppress "ngx_slab_alloc() failed: no memory" messages from a slab allocator, e.g., if an LRU expiration is used by a consumer and allocation failures aren't fatal. The flag is now set in the SSL session cache code, and in the limit_req module. 
diff --git a/src/core/ngx_slab.c b/src/core/ngx_slab.c --- a/src/core/ngx_slab.c +++ b/src/core/ngx_slab.c @@ -129,6 +129,7 @@ ngx_slab_init(ngx_slab_pool_t *pool) pool->pages->slab = pages; } + pool->log_nomem = 1; pool->log_ctx = &pool->zero; pool->zero = '\0'; } @@ -658,7 +659,10 @@ ngx_slab_alloc_pages(ngx_slab_pool_t *po } } - ngx_slab_error(pool, NGX_LOG_CRIT, "ngx_slab_alloc() failed: no memory"); + if (pool->log_nomem) { + ngx_slab_error(pool, NGX_LOG_CRIT, + "ngx_slab_alloc() failed: no memory"); + } return NULL; } diff --git a/src/core/ngx_slab.h b/src/core/ngx_slab.h --- a/src/core/ngx_slab.h +++ b/src/core/ngx_slab.h @@ -39,6 +39,8 @@ typedef struct { u_char *log_ctx; u_char zero; + unsigned log_nomem:1; + void *data; void *addr; } ngx_slab_pool_t; diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -1834,6 +1834,8 @@ ngx_ssl_session_cache_init(ngx_shm_zone_ ngx_sprintf(shpool->log_ctx, " in SSL session shared cache \"%V\"%Z", &shm_zone->shm.name); + shpool->log_nomem = 0; + return NGX_OK; } @@ -1986,7 +1988,7 @@ failed: ngx_shmtx_unlock(&shpool->mutex); ngx_log_error(NGX_LOG_ALERT, c->log, 0, - "could not add new SSL session to the session cache"); + "could not allocate new session%s", shpool->log_ctx); return 0; } diff --git a/src/http/modules/ngx_http_limit_req_module.c b/src/http/modules/ngx_http_limit_req_module.c --- a/src/http/modules/ngx_http_limit_req_module.c +++ b/src/http/modules/ngx_http_limit_req_module.c @@ -451,6 +451,8 @@ ngx_http_limit_req_lookup(ngx_http_limit node = ngx_slab_alloc_locked(ctx->shpool, size); if (node == NULL) { + ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, 0, + "could not allocate node%s", ctx->shpool->log_ctx); return NGX_ERROR; } } @@ -674,6 +676,8 @@ ngx_http_limit_req_init_zone(ngx_shm_zon ngx_sprintf(ctx->shpool->log_ctx, " in limit_req zone \"%V\"%Z", &shm_zone->shm.name); + ctx->shpool->log_nomem = 0; + return NGX_OK; } -- Maxim Dounin http://nginx.org/ From kunalvjti at gmail.com Thu Mar 27 16:45:21 2014 From: kunalvjti at gmail.com (Kunal Pariani) Date: Thu, 27 Mar 2014 09:45:21 -0700 Subject: Build nginx on Mac OS X mavericks In-Reply-To: References: <20140327111755.GS34696@mdounin.ru> Message-ID: Yep, we do have some local and 3rd party patches. But still not sure why it doesn't build on mac and throws that error. Same code with patches builds fine on ubuntu linux. @Kevin: Thats what i followed for building the latest nginx and that worked well. Great post. Thanks -Kunal On Thu, Mar 27, 2014 at 7:36 AM, Kevin Worthington wrote: > I wrote up a how to that might help as well: > http://kevinworthington.com/nginx-for-mac-os-x-mavericks-in-2-minutes/ > > > Best regards, > Kevin > -- > Kevin Worthington > kworthington at gmail.com > http://kevinworthington.com/ > http://twitter.com/kworthington > > > On Thu, Mar 27, 2014 at 7:17 AM, Maxim Dounin wrote: > >> Hello! >> >> On Wed, Mar 26, 2014 at 06:33:56PM -0700, Kunal Pariani wrote: >> >> > Hello, >> > Has anyone tried building nginx code on Mac ? I get the following error >> > while building. Have installed pcre and other needed libraries >> > >> > *src/mail/ngx_mail_handler.c:1152:30: **error: **use of undeclared >> > identifier 'sasl_callback_ft'; did you mean* >> > >> > * 'sasl_callback_t'?* >> > >> > callbacks[i].proc = (sasl_callback_ft)&ngx_mail_sasl_log; >> >> It doesn't looks like nginx source code. Likely you are using >> something with local or 3rd party patches. 
Unmodified version is >> available from http://nginx.org/en/download.html. >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Mar 27 17:30:43 2014 From: nginx-forum at nginx.us (emclab) Date: Thu, 27 Mar 2014 13:30:43 -0400 Subject: Nginx not detecting the root location from config for Rails 3.2 app In-Reply-To: References: Message-ID: We use sub uri and symlink. That's why passenger_base_uri is needed. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248743,248790#msg-248790 From kworthington at gmail.com Thu Mar 27 18:25:36 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Thu, 27 Mar 2014 14:25:36 -0400 Subject: Build nginx on Mac OS X mavericks In-Reply-To: References: <20140327111755.GS34696@mdounin.ru> Message-ID: Great, I'm glad it helped. Best regards, Kevin -- Kevin Worthington kworthington at gmail.com http://kevinworthington.com/ http://twitter.com/kworthington On Thu, Mar 27, 2014 at 12:45 PM, Kunal Pariani wrote: > Yep, we do have some local and 3rd party patches. But still not sure why > it doesn't build on mac and throws that error. Same code with patches > builds fine on ubuntu linux. > @Kevin: Thats what i followed for building the latest nginx and that > worked well. Great post. > > Thanks > -Kunal > > > On Thu, Mar 27, 2014 at 7:36 AM, Kevin Worthington > wrote: > >> I wrote up a how to that might help as well: >> http://kevinworthington.com/nginx-for-mac-os-x-mavericks-in-2-minutes/ >> >> >> Best regards, >> Kevin >> -- >> Kevin Worthington >> kworthington at gmail.com >> http://kevinworthington.com/ >> http://twitter.com/kworthington >> >> >> On Thu, Mar 27, 2014 at 7:17 AM, Maxim Dounin wrote: >> >>> Hello! >>> >>> On Wed, Mar 26, 2014 at 06:33:56PM -0700, Kunal Pariani wrote: >>> >>> > Hello, >>> > Has anyone tried building nginx code on Mac ? I get the following error >>> > while building. Have installed pcre and other needed libraries >>> > >>> > *src/mail/ngx_mail_handler.c:1152:30: **error: **use of undeclared >>> > identifier 'sasl_callback_ft'; did you mean* >>> > >>> > * 'sasl_callback_t'?* >>> > >>> > callbacks[i].proc = (sasl_callback_ft)&ngx_mail_sasl_log; >>> >>> It doesn't looks like nginx source code. Likely you are using >>> something with local or 3rd party patches. Unmodified version is >>> available from http://nginx.org/en/download.html. >>> >>> -- >>> Maxim Dounin >>> http://nginx.org/ >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Thu Mar 27 19:04:48 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 27 Mar 2014 20:04:48 +0100 Subject: Core dump messages in /var/log/messages !! 
In-Reply-To: References: , , , , , , , Message-ID: Hi, > One quick question, i've updated nginx to 1.4.7 with http_mp4_module. > What if i go with the same config as before ? i.e That should work, as all you really need to do is to use the "mp4;" keyword. (I was not aware that config keyword is actually exactly the same between the third party module and the official module). Regards, Lukas From shahzaib.cb at gmail.com Thu Mar 27 19:09:13 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 27 Mar 2014 12:09:13 -0700 Subject: Core dump messages in /var/log/messages !! In-Reply-To: References: Message-ID: Thanks for help guyz. :-) Regards. Shahzaib On Thu, Mar 27, 2014 at 12:04 PM, Lukas Tribus wrote: > Hi, > > > > One quick question, i've updated nginx to 1.4.7 with http_mp4_module. > > What if i go with the same config as before ? i.e > > That should work, as all you really need to do is to use the "mp4;" > keyword. > > (I was not aware that config keyword is actually exactly the same between > the third party module and the official module). > > > Regards, > > Lukas > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Thu Mar 27 23:04:03 2014 From: lists at ruby-forum.com (Mapper Uno) Date: Fri, 28 Mar 2014 00:04:03 +0100 Subject: Accessing HTTP request headers in nginx module Message-ID: <66541aa62d7dd3d123637da70212d406@ruby-forum.com> Hi, I am writing a small nginx module that needs to get/parse HTTP request header and depending on it's value, needs to be do something. ex. curl -X POST -H "OPERATION: add" http://localhost:80/calc to add 2 numbers ex. curl -X POST -H "OPERATION: divide" http://localhost:80/calc to divide 2 numbers etc.. I know in Mongoose web server's callback routine, you can easily access custom HTTP header with an API like const char *value = mg_get_header(connection, "OPERATION") I could not find such an API in nginx. How can I get this functionality in nginx module in handler routine ? Any help would be great Thanks -- Posted via http://www.ruby-forum.com/. From reallfqq-nginx at yahoo.fr Thu Mar 27 23:16:42 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 28 Mar 2014 00:16:42 +0100 Subject: Accessing HTTP request headers in nginx module In-Reply-To: <66541aa62d7dd3d123637da70212d406@ruby-forum.com> References: <66541aa62d7dd3d123637da70212d406@ruby-forum.com> Message-ID: In nginx, you have the http_
embedded variable in the core module to access HTTP headers. You can build the logic the way you want (through mapor directives from the rewrite module ). Using all that you can redirect to specific URL or backend with whatever arguments you wish to do the work for you. It is the UNIX way of doing things to separate components in individual processes, so I suggest you delegate the job to a third-party one which will listen for incoming requests. I would do that this way and keep nginx as close as possible from the lean genuine version to ensure updates are straightforward. My 2 cents, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Fri Mar 28 00:06:00 2014 From: lists at ruby-forum.com (Mapper Uno) Date: Fri, 28 Mar 2014 01:06:00 +0100 Subject: Accessing HTTP request headers in nginx module In-Reply-To: References: <66541aa62d7dd3d123637da70212d406@ruby-forum.com> Message-ID: <19b977753b5ddd0b3ce3821f0bf470c1@ruby-forum.com> Thanks for reply. However, I am still not clear how to access the "custom" headers in module handler. Pls see my comments inline. B.R. wrote in post #1141281: > In nginx, you have the http_
embedded variable in the core > module > to > access HTTP headers. I can see that variables can be accessed by http_
, however the link says "names matching the Apache Server variables", which to me indicates that these are not "custom" headers. With reference to my above example, how can I access my custom header "OPERATION" in module handler ? > You can build the logic the way you want (through > mapor > directives from the > rewrite module > > ). > > Using all that you can redirect to specific URL or backend with whatever > arguments you wish to do the work for you. > Again, here I am not interested in forwarding any URL to any backend server based on header. I simply to access and possibly iterate through list of HTTP headers in module handler where I have reference to ngx_http_request_t *r > It is the UNIX way of doing things to separate components in individual > processes, so I suggest you delegate the job to a third-party one which > will listen for incoming requests. > I would do that this way and keep nginx as close as possible from the > lean > genuine version to ensure updates are straightforward. > > My 2 cents, > --- > *B. R.* -- Posted via http://www.ruby-forum.com/. From chencw1982 at gmail.com Fri Mar 28 09:19:16 2014 From: chencw1982 at gmail.com (Chuanwen Chen) Date: Fri, 28 Mar 2014 17:19:16 +0800 Subject: [ANNOUNCE] Tengine-2.0.2 is released Message-ID: Hi folks, Tengine-2.0.2 (development version) has been released. You can either checkout the source code from GitHub: https://github.com/alibaba/tengine or download the tarball directly: http://tengine.taobao.org/download/tengine-2.0.2.tar.gz This update has merged the recent SPDY bugfixes, full changelog is as follows: *) Bugfix: send output queue after processing of read event in SPDY. (chobits) *) Bugfix: CVE-2014-0133 and CVE-2014-0088. (chobits) For those who don't know Tengine, it is a free and open source distribution of Nginx with some advanced features. See our website for more details: http://tengine.taobao.org Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From kyprizel at gmail.com Fri Mar 28 09:33:28 2014 From: kyprizel at gmail.com (kyprizel) Date: Fri, 28 Mar 2014 13:33:28 +0400 Subject: SSL session cache lifetime vs session ticket lifetime In-Reply-To: <20140327162315.GC34696@mdounin.ru> References: <20140318113311.GL34696@mdounin.ru> <20140318160053.GX34696@mdounin.ru> <20140324115634.GI34696@mdounin.ru> <20140327162315.GC34696@mdounin.ru> Message-ID: Will this patch be applied to mainline? On Thu, Mar 27, 2014 at 8:23 PM, Maxim Dounin wrote: > Hello! > > On Wed, Mar 26, 2014 at 01:34:19PM +0400, kyprizel wrote: > > > will be "log_alloc_failures" better? > > I think something like "log_nomem" will be good enough. > Patch: > > # HG changeset patch > # User Maxim Dounin > # Date 1395937285 -14400 > # Thu Mar 27 20:21:25 2014 +0400 > # Node ID 2cc8b9fc7efbf6a98ce29f3f860782a1ebd7e6cf > # Parent 734f0babfc133c2dc532f2794deadcf9d90245f7 > Core: slab log_nomem flag. > > The flag allows to suppress "ngx_slab_alloc() failed: no memory" messages > from a slab allocator, e.g., if an LRU expiration is used by a consumer > and allocation failures aren't fatal. > > The flag is now set in the SSL session cache code, and in the limit_req > module. 
> > diff --git a/src/core/ngx_slab.c b/src/core/ngx_slab.c > --- a/src/core/ngx_slab.c > +++ b/src/core/ngx_slab.c > @@ -129,6 +129,7 @@ ngx_slab_init(ngx_slab_pool_t *pool) > pool->pages->slab = pages; > } > > + pool->log_nomem = 1; > pool->log_ctx = &pool->zero; > pool->zero = '\0'; > } > @@ -658,7 +659,10 @@ ngx_slab_alloc_pages(ngx_slab_pool_t *po > } > } > > - ngx_slab_error(pool, NGX_LOG_CRIT, "ngx_slab_alloc() failed: no > memory"); > + if (pool->log_nomem) { > + ngx_slab_error(pool, NGX_LOG_CRIT, > + "ngx_slab_alloc() failed: no memory"); > + } > > return NULL; > } > diff --git a/src/core/ngx_slab.h b/src/core/ngx_slab.h > --- a/src/core/ngx_slab.h > +++ b/src/core/ngx_slab.h > @@ -39,6 +39,8 @@ typedef struct { > u_char *log_ctx; > u_char zero; > > + unsigned log_nomem:1; > + > void *data; > void *addr; > } ngx_slab_pool_t; > diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c > --- a/src/event/ngx_event_openssl.c > +++ b/src/event/ngx_event_openssl.c > @@ -1834,6 +1834,8 @@ ngx_ssl_session_cache_init(ngx_shm_zone_ > ngx_sprintf(shpool->log_ctx, " in SSL session shared cache \"%V\"%Z", > &shm_zone->shm.name); > > + shpool->log_nomem = 0; > + > return NGX_OK; > } > > @@ -1986,7 +1988,7 @@ failed: > ngx_shmtx_unlock(&shpool->mutex); > > ngx_log_error(NGX_LOG_ALERT, c->log, 0, > - "could not add new SSL session to the session cache"); > + "could not allocate new session%s", shpool->log_ctx); > > return 0; > } > diff --git a/src/http/modules/ngx_http_limit_req_module.c > b/src/http/modules/ngx_http_limit_req_module.c > --- a/src/http/modules/ngx_http_limit_req_module.c > +++ b/src/http/modules/ngx_http_limit_req_module.c > @@ -451,6 +451,8 @@ ngx_http_limit_req_lookup(ngx_http_limit > > node = ngx_slab_alloc_locked(ctx->shpool, size); > if (node == NULL) { > + ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, 0, > + "could not allocate node%s", > ctx->shpool->log_ctx); > return NGX_ERROR; > } > } > @@ -674,6 +676,8 @@ ngx_http_limit_req_init_zone(ngx_shm_zon > ngx_sprintf(ctx->shpool->log_ctx, " in limit_req zone \"%V\"%Z", > &shm_zone->shm.name); > > + ctx->shpool->log_nomem = 0; > + > return NGX_OK; > } > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From frank.bonnet at esiee.fr Fri Mar 28 09:51:56 2014 From: frank.bonnet at esiee.fr (BONNET, Frank) Date: Fri, 28 Mar 2014 10:51:56 +0100 Subject: One webdav per user to their home directories ? Message-ID: hello I would like to setup the following configuration , run one nginx instance per user to let them access to their home directories thru the webdav protocol + LDAP AUTH like the following https://myserver.domain.tld/user1 https://myserver.domain.tld/user2 and so on ( I have approx 4000 users , mainly students and professors ) The main problem is each "user instance" have to run under the identity of the existing home directory's owner Is it possible to do so with NGINX ? thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Fri Mar 28 10:42:06 2014 From: lists at ruby-forum.com (Igor Anishchuk) Date: Fri, 28 Mar 2014 11:42:06 +0100 Subject: nginx 1.4.1 - slow transfers / connection resets In-Reply-To: References: <20130820112647.GD19334@mdounin.ru> Message-ID: We hit the same problem. 
First off, to work around this problem add the following to your nginx configuration: send_timeout 3600; This will increase the time out to one hour which is long enough to empty the buffer with a slow connection. As to what's happening during these 60 seconds I can answer. The traffic from the server continues to flow non-stop to the client during the 60 seconds and after that, until the client downloads so much data as is reported in the access.log. I am able to reproduce this problem connecting using "curl --limit-rate 5k" to localhost. I hope this helps someone in the future. /Igor -- Posted via http://www.ruby-forum.com/. From mdounin at mdounin.ru Fri Mar 28 11:45:46 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 28 Mar 2014 15:45:46 +0400 Subject: SSL session cache lifetime vs session ticket lifetime In-Reply-To: References: <20140318113311.GL34696@mdounin.ru> <20140318160053.GX34696@mdounin.ru> <20140324115634.GI34696@mdounin.ru> <20140327162315.GC34696@mdounin.ru> Message-ID: <20140328114546.GI34696@mdounin.ru> Hello! On Fri, Mar 28, 2014 at 01:33:28PM +0400, kyprizel wrote: > Will this patch be applied to mainline? Most likely it will, but testing and review are appreciated, as usual. > > > > On Thu, Mar 27, 2014 at 8:23 PM, Maxim Dounin wrote: > > > Hello! > > > > On Wed, Mar 26, 2014 at 01:34:19PM +0400, kyprizel wrote: > > > > > will be "log_alloc_failures" better? > > > > I think something like "log_nomem" will be good enough. > > Patch: > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1395937285 -14400 > > # Thu Mar 27 20:21:25 2014 +0400 > > # Node ID 2cc8b9fc7efbf6a98ce29f3f860782a1ebd7e6cf > > # Parent 734f0babfc133c2dc532f2794deadcf9d90245f7 > > Core: slab log_nomem flag. > > > > The flag allows to suppress "ngx_slab_alloc() failed: no memory" messages > > from a slab allocator, e.g., if an LRU expiration is used by a consumer > > and allocation failures aren't fatal. > > > > The flag is now set in the SSL session cache code, and in the limit_req > > module. 
> > > > diff --git a/src/core/ngx_slab.c b/src/core/ngx_slab.c > > --- a/src/core/ngx_slab.c > > +++ b/src/core/ngx_slab.c > > @@ -129,6 +129,7 @@ ngx_slab_init(ngx_slab_pool_t *pool) > > pool->pages->slab = pages; > > } > > > > + pool->log_nomem = 1; > > pool->log_ctx = &pool->zero; > > pool->zero = '\0'; > > } > > @@ -658,7 +659,10 @@ ngx_slab_alloc_pages(ngx_slab_pool_t *po > > } > > } > > > > - ngx_slab_error(pool, NGX_LOG_CRIT, "ngx_slab_alloc() failed: no > > memory"); > > + if (pool->log_nomem) { > > + ngx_slab_error(pool, NGX_LOG_CRIT, > > + "ngx_slab_alloc() failed: no memory"); > > + } > > > > return NULL; > > } > > diff --git a/src/core/ngx_slab.h b/src/core/ngx_slab.h > > --- a/src/core/ngx_slab.h > > +++ b/src/core/ngx_slab.h > > @@ -39,6 +39,8 @@ typedef struct { > > u_char *log_ctx; > > u_char zero; > > > > + unsigned log_nomem:1; > > + > > void *data; > > void *addr; > > } ngx_slab_pool_t; > > diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c > > --- a/src/event/ngx_event_openssl.c > > +++ b/src/event/ngx_event_openssl.c > > @@ -1834,6 +1834,8 @@ ngx_ssl_session_cache_init(ngx_shm_zone_ > > ngx_sprintf(shpool->log_ctx, " in SSL session shared cache \"%V\"%Z", > > &shm_zone->shm.name); > > > > + shpool->log_nomem = 0; > > + > > return NGX_OK; > > } > > > > @@ -1986,7 +1988,7 @@ failed: > > ngx_shmtx_unlock(&shpool->mutex); > > > > ngx_log_error(NGX_LOG_ALERT, c->log, 0, > > - "could not add new SSL session to the session cache"); > > + "could not allocate new session%s", shpool->log_ctx); > > > > return 0; > > } > > diff --git a/src/http/modules/ngx_http_limit_req_module.c > > b/src/http/modules/ngx_http_limit_req_module.c > > --- a/src/http/modules/ngx_http_limit_req_module.c > > +++ b/src/http/modules/ngx_http_limit_req_module.c > > @@ -451,6 +451,8 @@ ngx_http_limit_req_lookup(ngx_http_limit > > > > node = ngx_slab_alloc_locked(ctx->shpool, size); > > if (node == NULL) { > > + ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, 0, > > + "could not allocate node%s", > > ctx->shpool->log_ctx); > > return NGX_ERROR; > > } > > } > > @@ -674,6 +676,8 @@ ngx_http_limit_req_init_zone(ngx_shm_zon > > ngx_sprintf(ctx->shpool->log_ctx, " in limit_req zone \"%V\"%Z", > > &shm_zone->shm.name); > > > > + ctx->shpool->log_nomem = 0; > > + > > return NGX_OK; > > } > > > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Mar 28 13:07:10 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 28 Mar 2014 17:07:10 +0400 Subject: Accessing HTTP request headers in nginx module In-Reply-To: <19b977753b5ddd0b3ce3821f0bf470c1@ruby-forum.com> References: <66541aa62d7dd3d123637da70212d406@ruby-forum.com> <19b977753b5ddd0b3ce3821f0bf470c1@ruby-forum.com> Message-ID: <20140328130710.GM34696@mdounin.ru> Hello! On Fri, Mar 28, 2014 at 01:06:00AM +0100, Mapper Uno wrote: > Thanks for reply. However, I am still not clear how to access the > "custom" headers in module handler. Pls see my comments inline. > > B.R. wrote in post #1141281: > > In nginx, you have the http_
embedded variable in the core > > module > > to > > access HTTP headers. > I can see that variables can be accessed by http_
, however > the link says "names matching the Apache Server variables", which to me > indicates that these are not "custom" headers. With reference to my > above example, how can I access my custom header "OPERATION" in module > handler ? Please make sure to read not only the first sentence. Note the "Also there are other variables" in the same paragraph. The $http_* variables provide access to all headers, including any custom ones. And it is documented as: $http_name arbitrary request header field; the last part of a variable name is the field name converted to lower case with dashes replaced by underscores Therefore, you may either use the $http_operation variable to access the header you are looking for. Or you may take a look at the source code to find out how it's implemented. Take a look at the src/http/ngx_http_variables.c, functions ngx_http_variable_unknown_header_in() and ngx_http_variable_unknown_header() (first one says the header should be searched in r->headers_in.headers list, second one does actual search). -- Maxim Dounin http://nginx.org/ From ben at indietorrent.org Fri Mar 28 14:31:55 2014 From: ben at indietorrent.org (Ben Johnson) Date: Fri, 28 Mar 2014 10:31:55 -0400 Subject: Defining a default server for when vhost does not exist for requested hostname (including blank hostname), for http and https Message-ID: <533587DB.6050805@indietorrent.org> Hello, We run multiple vhosts in nginx. Occasionally, a vhost will be mis-configured or disabled (via the website management software that we use), and public requests for the domain will fall-back to nginx's default vhost, which can have very unintended consequences (e.g., an incorrect and completely unrelated website is displayed). The nginx documentation suggests doing something like this to combat this type of problem: server { listen *:80 default_server; server_name ""; return 444; } server { listen *:443 default_server ssl; ssl_certificate /var/www/clients/client1/web1/ssl/localhost.com.crt; ssl_certificate_key /var/www/clients/client1/web1/ssl/localhost.com.key; server_name ""; return 444; } I've placed this snippet at the top of nginx's "default" vhost configuration file and it does exactly what I want. But I'm wondering if this is the "correct" and "best" approach to the problem I describe. Also, I noticed that this doesn't seem to work for SSL when an SSL certificate and key are not specified, with the following appearing in nginx's error log: no "ssl_certificate" is defined in server listening on SSL port while SSL handshaking, client: 10.0.1.57, server: 0.0.0.0:443 That's fine; I just want to ensure that the certificate I've specified in order to make this work will never be transmitted nor presented to the user-agent. When I test this in a web browser, the browser never seems to display or mention the certificate (no mismatch or anything; just the 444 response). However, when I test this with cURL, it does seem to be privy to the certificate (disregard the fact that the cert verification fails; it's self-signed): $ curl https://10.0.1.50 curl: (60) SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed More details here: http://curl.haxx.se/docs/sslcerts.html Is there any way to avoid this certificate being presented, but still return the 444 response under the conditions I've described? Thanks for any tips here! 
-Ben From contact at jpluscplusm.com Fri Mar 28 14:53:18 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 28 Mar 2014 14:53:18 +0000 Subject: Defining a default server for when vhost does not exist for requested hostname (including blank hostname), for http and https In-Reply-To: <533587DB.6050805@indietorrent.org> References: <533587DB.6050805@indietorrent.org> Message-ID: On 28 March 2014 14:31, Ben Johnson wrote: > Is there any way to av,oid this certificate being presented, but still > return the 444 response under the conditions I've described? I'd /suspect/ not, as the 444 response can't be "delivered" (i.e. the connection closed) until sufficient information has been passed over the already-SSL-secured connection. In other words, the cert *has* to be used to secure the channel over which the HTTP request will be made, and only after its been made can the correct server{} block be chosen and the response delivered - even if the response is simply to close the connection. J From mdounin at mdounin.ru Fri Mar 28 15:45:47 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 28 Mar 2014 19:45:47 +0400 Subject: Defining a default server for when vhost does not exist for requested hostname (including blank hostname), for http and https In-Reply-To: References: <533587DB.6050805@indietorrent.org> Message-ID: <20140328154547.GQ34696@mdounin.ru> Hello! On Fri, Mar 28, 2014 at 02:53:18PM +0000, Jonathan Matthews wrote: > On 28 March 2014 14:31, Ben Johnson wrote: > > Is there any way to av,oid this certificate being presented, but still > > return the 444 response under the conditions I've described? > > I'd /suspect/ not, as the 444 response can't be "delivered" (i.e. the > connection closed) until sufficient information has been passed over > the already-SSL-secured connection. In other words, the cert *has* to > be used to secure the channel over which the HTTP request will be > made, and only after its been made can the correct server{} block be > chosen and the response delivered - even if the response is simply to > close the connection. If SNI is used, it's in theory possible to close a connection early (during an SSL handshake, after ClientHello but before sending enything). The following tickets in trac are related: http://trac.nginx.org/nginx/ticket/195 http://trac.nginx.org/nginx/ticket/214 -- Maxim Dounin http://nginx.org/ From francis at daoine.org Fri Mar 28 16:04:08 2014 From: francis at daoine.org (Francis Daly) Date: Fri, 28 Mar 2014 16:04:08 +0000 Subject: One webdav per user to their home directories ? In-Reply-To: References: Message-ID: <20140328160408.GB16942@daoine.org> On Fri, Mar 28, 2014 at 10:51:56AM +0100, BONNET, Frank wrote: Hi there, > I would like to setup the following configuration , run one nginx instance > per user to let them access to their home directories thru the webdav > protocol + LDAP AUTH like the following > > https://myserver.domain.tld/user1 > > https://myserver.domain.tld/user2 That sounds like it should be doable. It also sounds vaguely familiar. http://forum.nginx.org/read.php?2,225643,225654 Are there specific problems you are seeing? 
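For reference, a rough single-instance sketch of per-user WebDAV paths; it does not run anything under each user's own identity, which is the harder part of the original request, and the /home layout, the auth endpoint and the required modules (--with-http_dav_module, auth_request) are all assumptions:

    location ~ ^/(?<user>[a-z0-9]+)/ {
        alias /home/$user/;

        dav_methods PUT DELETE MKCOL COPY MOVE;
        create_full_put_path on;

        # hypothetical subrequest endpoint that checks credentials against LDAP
        auth_request /ldap-auth;
    }
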
f -- Francis Daly francis at daoine.org From ben at indietorrent.org Fri Mar 28 16:51:17 2014 From: ben at indietorrent.org (Ben Johnson) Date: Fri, 28 Mar 2014 12:51:17 -0400 Subject: Defining a default server for when vhost does not exist for requested hostname (including blank hostname), for http and https In-Reply-To: <20140328154547.GQ34696@mdounin.ru> References: <533587DB.6050805@indietorrent.org> <20140328154547.GQ34696@mdounin.ru> Message-ID: <5335A885.1080303@indietorrent.org> On 3/28/2014 11:45 AM, Maxim Dounin wrote: > Hello! > > On Fri, Mar 28, 2014 at 02:53:18PM +0000, Jonathan Matthews wrote: > >> On 28 March 2014 14:31, Ben Johnson wrote: >>> Is there any way to av,oid this certificate being presented, but still >>> return the 444 response under the conditions I've described? >> >> I'd /suspect/ not, as the 444 response can't be "delivered" (i.e. the >> connection closed) until sufficient information has been passed over >> the already-SSL-secured connection. In other words, the cert *has* to >> be used to secure the channel over which the HTTP request will be >> made, and only after its been made can the correct server{} block be >> chosen and the response delivered - even if the response is simply to >> close the connection. > > If SNI is used, it's in theory possible to close a connection > early (during an SSL handshake, after ClientHello but > before sending enything). The following tickets in trac are > related: > > http://trac.nginx.org/nginx/ticket/195 > http://trac.nginx.org/nginx/ticket/214 > Thanks for the input, Jonathan and Maxim. Maxim, when you say, "If SNI is used, it's in theory possible to close a connection early," do you mean to imply that while possible, this capability has not yet been implemented in nginx (the tickets are still open after almost two years)? Thanks again, -Ben From contact at jpluscplusm.com Fri Mar 28 17:03:43 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 28 Mar 2014 17:03:43 +0000 Subject: Defining a default server for when vhost does not exist for requested hostname (including blank hostname), for http and https In-Reply-To: <5335A885.1080303@indietorrent.org> References: <533587DB.6050805@indietorrent.org> <20140328154547.GQ34696@mdounin.ru> <5335A885.1080303@indietorrent.org> Message-ID: On 28 March 2014 16:51, Ben Johnson wrote: > Maxim, when you say, "If SNI is used, it's in theory possible to close a > connection early," do you mean to imply that while possible, this > capability has not yet been implemented in nginx (the tickets are still > open after almost two years)? I'd suggest that, across all software (including nginx!), SNI bugs/features like this will get exponentially more attention after Windows XP goes EOL this April, as (I believe) it's the last major platform not to support SNI. J From hari at cpacket.com Fri Mar 28 17:16:07 2014 From: hari at cpacket.com (Hari Miriyala) Date: Fri, 28 Mar 2014 10:16:07 -0700 Subject: Radius and TACACS+ based authentication Message-ID: Hi Everyone, Currently I am using nginx (version 1.2.7) and would like to have authentication support using external security servers such as Radius and TACACS+. Are there are any extensions/plugins available to support this functionality? if not, any thoughts on how it can be achieved? Best Regards, Hari -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Fri Mar 28 17:20:23 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 28 Mar 2014 13:20:23 -0400 Subject: Defining a default server for when vhost does not exist for requested hostname (including blank hostname), for http and https In-Reply-To: References: Message-ID: <68ec9bb55d8e2f19b7e7f27ffc5b70fa.NginxMailingListEnglish@forum.nginx.org> Jonathan Matthews Wrote: ------------------------------------------------------- > bugs/features like this will get exponentially more attention after > Windows XP goes EOL this April, as (I believe) it's the last major > platform not to support SNI. Which is a moot case since there are at least 5 other browsers for XP who support SNI. "SSL support with SNI on shared IP addresses requires that the user's browser supports SNI. Most modern web browsers support it (e.g., IE 7 and above, Firefox, Opera, and Chrome). However, there are a few outlier exceptions: Any Internet Explorer browser on Windows XP Chrome 5 and older on Windows XP Blackberry web browser Windows Mobile phones up to version 6.5 Android mobile phone default browser on Android OS 2.x" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248808,248822#msg-248822 From nginx-forum at nginx.us Fri Mar 28 17:22:17 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 28 Mar 2014 13:22:17 -0400 Subject: Radius and TACACS+ based authentication In-Reply-To: References: Message-ID: AFAIK there is only a ldap module for nginx. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248820,248823#msg-248823 From mdounin at mdounin.ru Fri Mar 28 17:58:47 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 28 Mar 2014 21:58:47 +0400 Subject: Defining a default server for when vhost does not exist for requested hostname (including blank hostname), for http and https In-Reply-To: <5335A885.1080303@indietorrent.org> References: <533587DB.6050805@indietorrent.org> <20140328154547.GQ34696@mdounin.ru> <5335A885.1080303@indietorrent.org> Message-ID: <20140328175847.GU34696@mdounin.ru> Hello! On Fri, Mar 28, 2014 at 12:51:17PM -0400, Ben Johnson wrote: > > > On 3/28/2014 11:45 AM, Maxim Dounin wrote: > > Hello! > > > > On Fri, Mar 28, 2014 at 02:53:18PM +0000, Jonathan Matthews wrote: > > > >> On 28 March 2014 14:31, Ben Johnson wrote: > >>> Is there any way to av,oid this certificate being presented, but still > >>> return the 444 response under the conditions I've described? > >> > >> I'd /suspect/ not, as the 444 response can't be "delivered" (i.e. the > >> connection closed) until sufficient information has been passed over > >> the already-SSL-secured connection. In other words, the cert *has* to > >> be used to secure the channel over which the HTTP request will be > >> made, and only after its been made can the correct server{} block be > >> chosen and the response delivered - even if the response is simply to > >> close the connection. > > > > If SNI is used, it's in theory possible to close a connection > > early (during an SSL handshake, after ClientHello but > > before sending enything). The following tickets in trac are > > related: > > > > http://trac.nginx.org/nginx/ticket/195 > > http://trac.nginx.org/nginx/ticket/214 > > > > Thanks for the input, Jonathan and Maxim. > > Maxim, when you say, "If SNI is used, it's in theory possible to close a > connection early," do you mean to imply that while possible, this > capability has not yet been implemented in nginx (the tickets are still > open after almost two years)? 
Nobody care enough to submit a patch. Likely due to the fact that SNI isn't considered to be an option for serious SSL-enabled sites anyway due to still limited client-side support, see here for details: http://en.wikipedia.org/wiki/Server_Name_Indication#Client_side -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Fri Mar 28 19:38:53 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 28 Mar 2014 23:38:53 +0400 Subject: SSL session cache lifetime vs session ticket lifetime In-Reply-To: <20140327162315.GC34696@mdounin.ru> References: <20140327162315.GC34696@mdounin.ru> Message-ID: <5251638.bMuClGqdKs@vbart-laptop> On Thursday 27 March 2014 20:23:15 Maxim Dounin wrote: > Hello! > > On Wed, Mar 26, 2014 at 01:34:19PM +0400, kyprizel wrote: > > > will be "log_alloc_failures" better? > > I think something like "log_nomem" will be good enough. > Patch: > > # HG changeset patch > # User Maxim Dounin > # Date 1395937285 -14400 > # Thu Mar 27 20:21:25 2014 +0400 > # Node ID 2cc8b9fc7efbf6a98ce29f3f860782a1ebd7e6cf > # Parent 734f0babfc133c2dc532f2794deadcf9d90245f7 > Core: slab log_nomem flag. > > The flag allows to suppress "ngx_slab_alloc() failed: no memory" messages > from a slab allocator, e.g., if an LRU expiration is used by a consumer > and allocation failures aren't fatal. > > The flag is now set in the SSL session cache code, and in the limit_req > module. > > diff --git a/src/core/ngx_slab.c b/src/core/ngx_slab.c > --- a/src/core/ngx_slab.c > +++ b/src/core/ngx_slab.c > @@ -129,6 +129,7 @@ ngx_slab_init(ngx_slab_pool_t *pool) > pool->pages->slab = pages; > } > > + pool->log_nomem = 1; > pool->log_ctx = &pool->zero; > pool->zero = '\0'; > } Just a quick nitpicking. I'd suggest to put this log_nomem assignment in the last line as to follow order of elements in the structure definition. wbr, Valentin V. Bartenev [..] > diff --git a/src/core/ngx_slab.h b/src/core/ngx_slab.h > --- a/src/core/ngx_slab.h > +++ b/src/core/ngx_slab.h > @@ -39,6 +39,8 @@ typedef struct { > u_char *log_ctx; > u_char zero; > > + unsigned log_nomem:1; > + > void *data; > void *addr; > } ngx_slab_pool_t; [..] From lists at ruby-forum.com Fri Mar 28 20:07:38 2014 From: lists at ruby-forum.com (Mapper Uno) Date: Fri, 28 Mar 2014 21:07:38 +0100 Subject: Accessing HTTP request headers in nginx module In-Reply-To: <20140328130710.GM34696@mdounin.ru> References: <66541aa62d7dd3d123637da70212d406@ruby-forum.com> <19b977753b5ddd0b3ce3821f0bf470c1@ruby-forum.com> <20140328130710.GM34696@mdounin.ru> Message-ID: <2fb7eec30ec5bd2d55c012d70ed8b7cc@ruby-forum.com> Thanks Maxim for your reply. Since I am newbie, please excuse my questions. I am still unable to retrieve the variable. All I have in the handler routine is: ngx_http_request_t *r I can see that r->headers_in.headers is a list, but then when you say $http_operation, it is confusing me. Could you please explain Maxim Dounin wrote in post #1141328: > Hello! > > On Fri, Mar 28, 2014 at 01:06:00AM +0100, Mapper Uno wrote: > >> indicates that these are not "custom" headers. With reference to my >> above example, how can I access my custom header "OPERATION" in module >> handler ? > > Please make sure to read not only the first sentence. Note the > "Also there are other variables" in the same paragraph. The > $http_* variables provide access to all headers, including any > custom ones. 
And it is documented as: > > $http_name > arbitrary request header field; the last part of a variable name > is the field name converted to lower case with dashes replaced by > underscores > > Therefore, you may either use the $http_operation variable to > access the header you are looking for. Or you may take a look at > the source code to find out how it's implemented. Take a look at > the src/http/ngx_http_variables.c, functions > ngx_http_variable_unknown_header_in() and > ngx_http_variable_unknown_header() (first one says the header > should be searched in r->headers_in.headers list, second one does > actual search). > > -- > Maxim Dounin > http://nginx.org/ -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Fri Mar 28 20:49:29 2014 From: nginx-forum at nginx.us (tonyschwartz) Date: Fri, 28 Mar 2014 16:49:29 -0400 Subject: Transforming nginx for Windows In-Reply-To: <7bb5e1c41a64ef81e91fdc361619bed3.NginxMailingListEnglish@forum.nginx.org> References: <7bb5e1c41a64ef81e91fdc361619bed3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <85af071dad2e21906caca711cf295868.NginxMailingListEnglish@forum.nginx.org> Has anyone else experienced nginx.exe 1.5.9.1. Cheshire crashing on Windows server 2003? Intermittently, it is crashing on me with this message in the NT event logs: Event Type: Information Event Source: Application Error Event Category: (100) Event ID: 1004 Date: 3/28/2014 Time: 4:06:50 PM User: N/A Computer: qwerty101 Description: Reporting queued error: faulting application nginx.exe, version 0.0.0.0, faulting module nginx.exe, version 0.0.0.0, fault address 0x001a54ff. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. Data: 0000: 41 70 70 6c 69 63 61 74 Applicat 0008: 69 6f 6e 20 46 61 69 6c ion Fail 0010: 75 72 65 20 20 6e 67 69 ure ngi 0018: 6e 78 2e 65 78 65 20 30 nx.exe 0 0020: 2e 30 2e 30 2e 30 20 69 .0.0.0 i 0028: 6e 20 6e 67 69 6e 78 2e n nginx. 0030: 65 78 65 20 30 2e 30 2e exe 0.0. 0038: 30 2e 30 20 61 74 20 6f 0.0 at o 0040: 66 66 73 65 74 20 30 30 ffset 00 0048: 31 61 35 34 66 66 1a54ff Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,248833#msg-248833 From nginx-forum at nginx.us Fri Mar 28 22:03:12 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 28 Mar 2014 18:03:12 -0400 Subject: Transforming nginx for Windows In-Reply-To: <85af071dad2e21906caca711cf295868.NginxMailingListEnglish@forum.nginx.org> References: <7bb5e1c41a64ef81e91fdc361619bed3.NginxMailingListEnglish@forum.nginx.org> <85af071dad2e21906caca711cf295868.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5113f36cab1dda06ed73fdae2bc1e038.NginxMailingListEnglish@forum.nginx.org> tonyschwartz Wrote: ------------------------------------------------------- > Has anyone else experienced nginx.exe 1.5.9.1. Cheshire crashing on > Windows server 2003? Intermittently, it is crashing on me with this > message in the NT event logs: > 0.0.0.0, faulting module nginx.exe, version 0.0.0.0, fault address > 0x001a54ff. Have you tried 1.5.12.2 ? contents nginx.conf ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,248835#msg-248835 From kunalvjti at gmail.com Sat Mar 29 01:04:04 2014 From: kunalvjti at gmail.com (Kunal Pariani) Date: Fri, 28 Mar 2014 18:04:04 -0700 Subject: debug logs not getting generated Message-ID: Hello, I followed this http://nginx.org/en/docs/debugging_log.html. 
Have nginx built with --with-debug and set the the error_log in the following 2 files (nginx.conf.main & inside http { } in nginx.conf.web). But still i don't see debug level logging getting generated in the files specified. error_log /log/nginx.log debug; Tried with the rewrite_flag on too. Am i missing something ? Also does anyone know why does nginx has these 2 types of template files ?(i.e .template & .default.template) for eg. nginx.conf.web.http.template & nginx.conf.web.http.default.template I see them to be exactly same. Thanks -Kunal -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Mar 29 07:24:07 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 29 Mar 2014 11:24:07 +0400 Subject: debug logs not getting generated In-Reply-To: References: Message-ID: <20140329072407.GA34696@mdounin.ru> Hello! On Fri, Mar 28, 2014 at 06:04:04PM -0700, Kunal Pariani wrote: > Hello, > I followed this http://nginx.org/en/docs/debugging_log.html. Have nginx > built with --with-debug and set the the error_log in the following 2 files > (nginx.conf.main & inside http { } in nginx.conf.web). But still i don't > see debug level logging getting generated in the files specified. > > error_log /log/nginx.log debug; > > Tried with the rewrite_flag on too. Am i missing something ? Yes. Some things to check, in no particular order: 1) Is nginx installed is actually compiled with --with-debug (check the "nginx -V" output)? 2) Are you running nginx binary you are checking? Make sure the binary you are running isn't some other binary, e.g., run from a different directory. 3) Make sure you've restarted nginx (or upgraded running binary). 4) Make sure the config you are editing is right one (and again, you've reloaded a configuration after editing it; note that you have to check logs after a configuration reload to make sure it was successful). > Also does anyone know why does nginx has these 2 types of template files > ?(i.e .template & .default.template) > for eg. nginx.conf.web.http.template & nginx.conf.web.http.default.template > I see them to be exactly same. There are no such files in nginx itself. Likely it's something provided by your OS package. -- Maxim Dounin http://nginx.org/ From icantthinkofone at charter.net Sat Mar 29 16:50:09 2014 From: icantthinkofone at charter.net (Doc) Date: Sat, 29 Mar 2014 12:50:09 -0400 (EDT) Subject: Don't understand why this directive causes a 404 in mp3 only Message-ID: <2484c16.19b65d.1450ebf9b47.Webtop.46@charter.net> On my site, music can be played by going to http://mysite.com/music/billyjoel/song.mp3 (for example). You can download the same song by going to http://mysite.com/music/billyjoel/download/song.mp3. The path including "download" is aliased and the default mime type changed as shown here: location /music/billyjoel/play/ { types{ audio/mp3 mp3; } } location /music/billyjoel/download/ { types{} default_type application/octet-stream; alias /(location of root goes here)/music/billyjoel/play/; } This works as intended. However, I set expires and access_log directives for css/jpg/woff, etc. like this and the mp3 will no longer download and I get a 404 with a "file not found" error in the error logs: location ~* ^.+\.(mp3|jpg|css)$ { expires modified +30d; access_log off; } -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nurahmadie at gmail.com Sat Mar 29 17:12:15 2014 From: nurahmadie at gmail.com (Adie Nurahmadie) Date: Sun, 30 Mar 2014 00:12:15 +0700 Subject: Don't understand why this directive causes a 404 in mp3 only In-Reply-To: <2484c16.19b65d.1450ebf9b47.Webtop.46@charter.net> References: <2484c16.19b65d.1450ebf9b47.Webtop.46@charter.net> Message-ID: On Sat, Mar 29, 2014 at 11:50 PM, Doc wrote: > On my site, music can be played by going to > http://mysite.com/music/billyjoel/song.mp3 (for example). > You can download the same song by going to > http://mysite.com/music/billyjoel/download/song.mp3. > > The path including "download" is aliased and the default mime type changed > as shown here: > > location /music/billyjoel/play/ { > types{ > audio/mp3 mp3; > } > } > > location /music/billyjoel/download/ { > types{} > default_type application/octet-stream; > alias /(location of root goes here)/music/billyjoel/play/; > } > > This works as intended. However, I set expires and access_log directives > for css/jpg/woff, etc. like this and the mp3 will no longer download and I > get a 404 with a "file not found" error in the error logs: > > location ~* ^.+\.(mp3|jpg|css)$ { > expires modified +30d; > access_log off; > } > > Location with regex will get matched first, Looks like the download request for mp3 files get served by: `location ~* ^.+\.(mp3|jpg|css)$` and since you didn't set any alias or location over there, it will returns 404. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- regards, Nurahmadie -- On Sat, Mar 29, 2014 at 11:50 PM, Doc wrote: > On my site, music can be played by going to > http://mysite.com/music/billyjoel/song.mp3 (for example). > You can download the same song by going to > http://mysite.com/music/billyjoel/download/song.mp3. > > The path including "download" is aliased and the default mime type changed > as shown here: > > location /music/billyjoel/play/ { > types{ > audio/mp3 mp3; > } > } > > location /music/billyjoel/download/ { > types{} > default_type application/octet-stream; > alias /(location of root goes here)/music/billyjoel/play/; > } > > This works as intended. However, I set expires and access_log directives > for css/jpg/woff, etc. like this and the mp3 will no longer download and I > get a 404 with a "file not found" error in the error logs: > > location ~* ^.+\.(mp3|jpg|css)$ { > expires modified +30d; > access_log off; > } > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- regards, Nurahmadie -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From icantthinkofone at charter.net Sat Mar 29 18:23:08 2014 From: icantthinkofone at charter.net (Doc) Date: Sat, 29 Mar 2014 14:23:08 -0400 (EDT) Subject: Don't understand why this directive causes a 404 in mp3 only Message-ID: <4f060a9c.19be90.1450f14bb7f.Webtop.46@charter.net> On Sat, Mar 29, 2014 at 12:12 PM, Adie Nurahmadie wrote: On Sat, Mar 29, 2014 at 11:50 PM, Doc? < icantthinkofone at charter.net > ?wrote: On my site, music can be played by going to? http://mysite.com/music/billyjoel/song.mp3 ?(for example). You can download the same song by going to? http://mysite.com/music/billyjoel/download/song.mp3 . The path including "download" is aliased and the default mime type changed as shown here: location /music/billyjoel/play/ { ??? types{ ??????? audio/mp3 mp3; ??? 
From icantthinkofone at charter.net Sat Mar 29 18:23:08 2014
From: icantthinkofone at charter.net (Doc)
Date: Sat, 29 Mar 2014 14:23:08 -0400 (EDT)
Subject: Don't understand why this directive causes a 404 in mp3 only
Message-ID: <4f060a9c.19be90.1450f14bb7f.Webtop.46@charter.net>

On Sat, Mar 29, 2014 at 12:12 PM, Adie Nurahmadie wrote:

> A location with a regex gets matched first.

Ack! Yes. Exactly. I keep forgetting that. Thank you.

> It looks like the download request for mp3 files gets served by
> `location ~* ^.+\.(mp3|jpg|css)$`, and since you didn't set any alias or
> root there, it returns 404.

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kunalvjti at gmail.com Sat Mar 29 22:36:20 2014
From: kunalvjti at gmail.com (Kunal Pariani)
Date: Sat, 29 Mar 2014 15:36:20 -0700
Subject: debug logs not getting generated
In-Reply-To: <20140329072407.GA34696@mdounin.ru>
References: <20140329072407.GA34696@mdounin.ru>
Message-ID: 

> 3) Make sure you've restarted nginx (or upgraded the running binary).
>
> 4) Make sure the config you are editing is the right one (and again,
> that you've reloaded the configuration after editing it; note that you
> have to check logs after a configuration reload to make sure it
> was successful).

I noticed that the config file (conf/nginx/nginx.conf.main) I am changing
loses the changes after restarting nginx. Not sure why this should happen,
as config files need to be persistent across reloads.

On Sat, Mar 29, 2014 at 12:24 AM, Maxim Dounin wrote:

> Hello!
>
> Yes. Some things to check, in no particular order:
>
> 1) Is the nginx binary you installed actually compiled with --with-debug
> (check the "nginx -V" output)?
>
> 2) Are you running the nginx binary you are checking? Make sure the
> binary you are running isn't some other binary, e.g., run from a
> different directory.
>
> 3) Make sure you've restarted nginx (or upgraded the running binary).
>
> 4) Make sure the config you are editing is the right one (and again,
> that you've reloaded the configuration after editing it; note that you
> have to check logs after a configuration reload to make sure it
> was successful).
>
>> Also, does anyone know why nginx has these 2 types of template files
>> (i.e. .template & .default.template)?
>> For example, nginx.conf.web.http.template & nginx.conf.web.http.default.template
>> look exactly the same to me.
>
> There are no such files in nginx itself. Likely it's something
> provided by your OS package.
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
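As a quick sketch of the checklist above (file paths are placeholders): debug
logging needs both a binary built with --with-debug and the debug level set on
error_log, followed by a reload or restart.

# verify the running binary has debug support
nginx -V 2>&1 | grep -o with-debug

# in nginx.conf (placeholder path), then reload and re-check the log
error_log /var/log/nginx/debug.log debug;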
From nginx-forum at nginx.us Sun Mar 30 03:45:05 2014
From: nginx-forum at nginx.us (sellOfDrops)
Date: Sat, 29 Mar 2014 23:45:05 -0400
Subject: [Arch User] Changing the document root and problems with permissions
Message-ID: <86d1431b5d51e815efe7a2edc81ecfcc.NginxMailingListEnglish@forum.nginx.org>

Hi,

I'm trying to run nginx out of my home folder. What do I have to do?
Please look at my files:

/etc/nginx/nginx.conf   http://p.ngx.cc/94
/etc/php/php-fpm.conf   http://p.ngx.cc/1e3b

I ran:

setfacl -m u:http:x /home/myuser
setfacl -m u:http:x /home/myuser/workspace
setfacl -m u:http:rwx /home/myuser/workspace/php

That configuration works, but I have problems with write permissions, and
scripts break. I don't want to chmod 777 all the files. I want to know the
best way, the right nginx way, to change the document root. I use nginx,
php-fpm and mysql. Can you help me?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248847,248847#msg-248847

From lists at ruby-forum.com Sun Mar 30 08:53:33 2014
From: lists at ruby-forum.com (Mapper Uno)
Date: Sun, 30 Mar 2014 10:53:33 +0200
Subject: Unable to get large POST data in post handler nginx module
Message-ID: <7f290b793b506218f0ee8ff1349a5337@ruby-forum.com>

Hi,

I am writing an nginx module that processes POST requests. For small
data typically less than 1000 bytes, the module successfully receives
data. However, for larger data > 1000 bytes or so, I get NULL.

ngx_http_request_t *r;
ngx_chain_t *ch = r->request_body->bufs;

// ----> For larger data ch->buf->pos is NULL
u_char *incoming_data = ch->buf->pos;

Any help would be highly appreciated.

Thanks in advance

-- 
Posted via http://www.ruby-forum.com/.
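When a request body does not fit into client_body_buffer_size, nginx spills it
to a temporary file, and the corresponding buffers in r->request_body->bufs are
file-backed, so b->pos is not meaningful for them. A minimal sketch of copying
both kinds of buffers, assuming the body has already been fully read
(my_copy_body is a hypothetical helper, not an nginx API; error handling trimmed):

/* Sketch only: copy a fully-read request body, whether it is held in
 * memory or spilled to a temporary file. */
static ngx_int_t
my_copy_body(ngx_http_request_t *r, u_char *dst, size_t dst_size)
{
    size_t        copied = 0, n;
    ngx_chain_t  *cl;
    ngx_buf_t    *b;

    for (cl = r->request_body->bufs; cl; cl = cl->next) {
        b = cl->buf;

        if (ngx_buf_in_memory(b)) {
            n = b->last - b->pos;
            if (copied + n > dst_size) {
                return NGX_ERROR;
            }
            ngx_memcpy(dst + copied, b->pos, n);

        } else if (b->in_file) {
            /* this part of the body lives in a temporary file */
            n = (size_t) (b->file_last - b->file_pos);
            if (copied + n > dst_size) {
                return NGX_ERROR;
            }
            if (ngx_read_file(b->file, dst + copied, n, b->file_pos)
                != (ssize_t) n)
            {
                return NGX_ERROR;
            }

        } else {
            continue;
        }

        copied += n;
    }

    return NGX_OK;
}

Raising client_body_buffer_size is the other option, but it only moves the
threshold, so a module usually has to handle the file-backed case anyway.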
From lists at ruby-forum.com Sun Mar 30 10:20:30 2014
From: lists at ruby-forum.com (Mapper Uno)
Date: Sun, 30 Mar 2014 12:20:30 +0200
Subject: Handling large POST data in nginx module
Message-ID: <12f0e705865b9e1466a907dc950618d4@ruby-forum.com>

Hi,

I wrote an nginx module that handles POST data and returns it back
(something similar to echo_read_request_body in the echo module); however,
in the echo module the input buffer is passed as-is to the output buffer.
In my case, I need to copy the incoming buffer, process it, and then send
it back to the client.

What I am seeing in my POST handler routine is that the incoming request
size is 1377344 bytes. However, the file I am sending via the curl command
is 1420448 bytes. Where could I be losing the rest of the bytes?
Moreover, the curl command on the client hangs for a while and eventually
returns error code 18.

Here is my POST handler code:
-------------------------

static void ngx_my_post_handler(ngx_http_request_t *r)
{
    ngx_chain_t *cl, *in = r->request_body->bufs;
    ngx_buf_t   *b;

    for (cl = in; cl; count++, cl = cl->next) {
        b = cl->buf;
        if (!ngx_buf_in_memory(b)) {
            if (b->in_file) {
                ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                               " What to do here ??????");
                return;
            }
        } else {
            in_count++;
            in_sz += b->last - b->pos;
        }
    }

    u_char *out_data = (unsigned char *) ngx_pcalloc(r->pool, in_sz);

    // Copy incoming data to outgoing data buffer
    for (cl = in; cl; count++, cl = cl->next) {
        b = cl->buf;
        if (ngx_buf_in_memory(b)) {
            out_data = ngx_copy(out_data, b->pos, b->last - b->pos);
        }
    }

    // Allocate response buffer and fill it out
    b = (ngx_buf_t *) ngx_pcalloc(r->pool, sizeof(ngx_buf_t));
    if (b == NULL)
        return;

    b->pos = out_data;
    b->last = out_data + out_len;
    b->memory = 1;
    b->last_buf = 1;
    b->last_in_chain = 1;

    out_chain.buf = b;
    out_chain.next = NULL;

    // Setup HTTP response headers
    r->headers_out.status = NGX_HTTP_OK;
    r->headers_out.content_length_n = out_len;
    r->headers_out.content_type.len = sizeof("text/html") - 1;
    r->headers_out.content_type.data = (u_char *) "text/html";

    ngx_http_send_header(r);
    ngx_http_output_filter(r, &out_chain);
}

-- 
Posted via http://www.ruby-forum.com/.

From lists at ruby-forum.com Sun Mar 30 11:11:55 2014
From: lists at ruby-forum.com (Kubo Tatsuhiko)
Date: Sun, 30 Mar 2014 13:11:55 +0200
Subject: Handling large POST data in nginx module
In-Reply-To: <12f0e705865b9e1466a907dc950618d4@ruby-forum.com>
References: <12f0e705865b9e1466a907dc950618d4@ruby-forum.com>
Message-ID: <28148ffce9b5e813a61b80bd3c6ed865@ruby-forum.com>

Hello.

> What I am seeing in my POST handler routine is that the incoming request
> size is 1377344 bytes. However, the file I am sending via the curl command
> is 1420448 bytes. Where could I be losing the rest of the bytes?

The incoming request body has not always been read completely by the time a
module's handler is called. You may look at the function ngx_http_image_read
in nginx/src/http/modules/ngx_http_image_filter_module.c as a reference.

http://hg.nginx.org/nginx/file/a24f88eff684/src/http/modules/ngx_http_image_filter_module.c#l451

The key point is using NGX_AGAIN.

-- 
Posted via http://www.ruby-forum.com/.

From kunalvjti at gmail.com Sun Mar 30 17:35:26 2014
From: kunalvjti at gmail.com (Kunal Pariani)
Date: Sun, 30 Mar 2014 10:35:26 -0700
Subject: debug logs not getting generated
In-Reply-To: 
References: <20140329072407.GA34696@mdounin.ru>
Message-ID: 

Never mind. Got it working. Had a script which was overwriting the log level
value in the conf file after restarting nginx. Thanks for the help.

On Sat, Mar 29, 2014 at 3:36 PM, Kunal Pariani wrote:

> 3) Make sure you've restarted nginx (or upgraded the running binary).
>
> 4) Make sure the config you are editing is the right one (and again,
> that you've reloaded the configuration after editing it; note that you
> have to check logs after a configuration reload to make sure it
> was successful).
>
> I noticed that the config file (conf/nginx/nginx.conf.main) I am changing
> loses the changes after restarting nginx. Not sure why this should happen,
> as config files need to be persistent across reloads.
>
> On Sat, Mar 29, 2014 at 12:24 AM, Maxim Dounin wrote:
>
>> Hello!
>> >> On Fri, Mar 28, 2014 at 06:04:04PM -0700, Kunal Pariani wrote: >> >> > Hello, >> > I followed this http://nginx.org/en/docs/debugging_log.html. Have nginx >> > built with --with-debug and set the the error_log in the following 2 >> files >> > (nginx.conf.main & inside http { } in nginx.conf.web). But still i don't >> > see debug level logging getting generated in the files specified. >> > >> > error_log /log/nginx.log debug; >> > >> > Tried with the rewrite_flag on too. Am i missing something ? >> >> Yes. Some things to check, in no particular order: >> >> 1) Is nginx installed is actually compiled with --with-debug >> (check the "nginx -V" output)? >> >> 2) Are you running nginx binary you are checking? Make sure the >> binary you are running isn't some other binary, e.g., run from a >> different directory. >> >> 3) Make sure you've restarted nginx (or upgraded running binary). >> >> 4) Make sure the config you are editing is right one (and again, >> you've reloaded a configuration after editing it; note that you >> have to check logs after a configuration reload to make sure it >> was successful). >> >> > Also does anyone know why does nginx has these 2 types of template files >> > ?(i.e .template & .default.template) >> > for eg. nginx.conf.web.http.template & >> nginx.conf.web.http.default.template >> > I see them to be exactly same. >> >> There are no such files in nginx itself. Likely it's something >> provided by your OS package. >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Mon Mar 31 05:25:49 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sun, 30 Mar 2014 22:25:49 -0700 Subject: [ANN] OpenResty 1.5.11.1 released Message-ID: Hello! I am happy to announce the new 1.5.11.1 release of the OpenResty bundle: http://openresty.org/#Download Special thanks go to all our contributors for making this happen! Below is the complete change log for this release, as compared to the last formal release, 1.5.8.1: * upgraded LuaJIT to v2.1-20140330. * feature: included Mike Pall's new "trace stitching" feature that can compile around most of the NYI items. thanks CloudFlare Inc. for sponsoring the development. This helps compiling more Lua code. For example, it gives 10% ~ 40% speedup in simple test cases of LuaRestyMySQLLibrary out of the box. * bugfix: included all the new bug fixes from Mike Pall, most of which are very obscure bugs in the JIT compiler hidden for years. * relaxed the hard-coded heuristic limit further to 100 for loopunroll. * feature: applied John Marino's patch for compiling LuaJIT on DragonFlyBSD. thanks lhmwzy for proposing the patch. * upgraded the Nginx core to 1.5.11. * see the changes here: * bugfix: applied the patch to the NGINX core for the latest SPDY security vulnerability (CVE-2014-0133). * feature: added support for DragonFlyBSD to "./configure". thanks lhmwzy for the patch. * bugfix: disabled the -Werror option for clang because it caused build failures at least in recent Mac OS X systems. thanks Hamish Forbes for the report. * feature: bundled new component LuaRestyUpstreamHealthcheckLibrary 0.01. * see the documentation for details: https://github.com/agentzh/lua-resty-upstream-healthcheck#re adme * feature: bundled new component LuaUpstreamNginxModule 0.01. 
* see the documentation for details: https://github.com/agentzh/lua-upstream-nginx-module#readme * upgraded LuaNginxModule to 0.9.6. * feature: added new configuration directives, init_worker_by_lua and init_worker_by_lua_file, to run Lua code upon every nginx worker process's startup. * feature: added new API function ngx.config.nginx_configure() to return the NGINX "./configure" arguments string to the Lua land. thanks Tatsuhiko Kubo for the patch. * feature: added new API function ngx.resp.get_headers() for fetching all the response headers. thanks Tatsuhiko Kubo for the patch. * feature: added new API function ngx.worker.pid() for retrieving the current nginx worker process's pid. * feature: explicitly check Lua langauge version mismatch; we only accept the Lua 5.1 language (for now). * bugfix: accessing a cosocket object from a request which does not create it could lead to segmentation faults. now we throw out a Lua error "bad request" properly in this case. * change: it is now the user's responsibility to clear the captures table for ngx.re.match(). * bugfix: we should prefix our chunk names for from-string lua source (which also leads to nicer error messages). thanks Mike Pall for the catch. * bugfix: subrequests initiated by ngx.location.capture* with the HEAD method did not result in responses without response bodies. thanks Daniel for the report. * bugfix: segfault might happen in the FFI API for destroying compiled PCRE regexes, which affects libraries like LuaRestyCoreLibrary. thanks Dane Kneche. * bugfix: fixes for small string buffer arguments in the C API for FFI-based implementations of shdict:get(). * bugfix: fixed the error message buffer overwrite in the C API for FFI-based ngx.re implementations. * bugfix: use of the public C API in other nginx C modules (extending LuaNginxModule) lead to compilation errors and warnings when the Microsoft C compiler is used. thanks Edwin Cleton for the report. * bugfix: segmentation faults might happen when multiple "light threads" in the same request manipuate a stream cosocket object in turn. thanks Aviram Cohen for the report. * bugfix: timers created by ngx.timer.at() might not be aborted prematurely upon nginx worker exit. thanks Hamish Forbes for the report. * bugfix: the return value sizes of the C functions "ngx_http_lua_init_by_inline" and "ngx_http_lua_init_by_file" were wrong. * optimize: coroutine status string look-up is now a bit more efficient by specifying the string lengths explicitly. thanks Tatsuhiko Kubo for the patch. * various code refactoring. * upgraded LuaRestyCoreLibrary to 0.0.5. * change: now it is the user's responsibility to clear the input result table. * feature: resty.core.regex: added new function "set_buf_grow_ratio" to control the buffer grow ratio (default 2.0). * bugfix: segmentation fault might happen due to assignments to ngx.header.HEADER because we did not anchor the memory buffer properly which might get collected prematurely. * bugfix: ngx.req.get_headers: we need to anchor the string buffer being casted otherwise it might be accidentally garbage collected when we still hold a C pointer to it. this bug might lead to segmentation faults. * optimize: cache the match captures table for ngx.re.gsub() when a function-typed "replace" argument is specified. this gives a remarkable speedup. * optimize: resty.core.regex: forked the original shared code paths to multiple specialized versions, which helps the JIT compiler. 
* optimize: resty.core.regex: cache the parsing results for the regex option strings. thanks Mike Pall for the suggestion. * upgraded LuaRestyRedisLibrary to 0.20. * feature: added new redis 2.8.0 commands: "scan", "sscan", "hscan", and "zscan". thanks Dragonoid for the patch. * feature: the read_reply() method can now be re-tried immediately after a "timeout" error is returned. * bugfix: the "unsubscribe"/"subscribe" commands could not be called after read_reply() returned "timeout". thanks doujiang for the patch. * bugfix: we incorrectly allowed reusing redis connections in the "subscribed" state. thanks doujiang for the patch. * upgraded LuaCjsonLibrary to 2.1.0.1. * rebased on lua-cjson 2.1.0: the most notable new feature is the "cjson.safe" module. * feature: applied Jiale Zhi's patch to add the new config function "encode_empty_table_as_object" so that we can encode empty Lua tables into empty JSON arrays. * upgraded SrcacheNginxModule to 0.26. * bugfix: HEAD requests might result in response bodies. * upgraded EchoNginxModule to 0.52. * bugfix: HEAD subrequests could still result in non-empty response bodies. The HTML version of the change log with lots of helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1005011 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Enjoy! -agentzh From mdounin at mdounin.ru Mon Mar 31 10:57:37 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 31 Mar 2014 14:57:37 +0400 Subject: SSL session cache lifetime vs session ticket lifetime In-Reply-To: <5251638.bMuClGqdKs@vbart-laptop> References: <20140327162315.GC34696@mdounin.ru> <5251638.bMuClGqdKs@vbart-laptop> Message-ID: <20140331105737.GB34696@mdounin.ru> Hello! On Fri, Mar 28, 2014 at 11:38:53PM +0400, Valentin V. Bartenev wrote: > On Thursday 27 March 2014 20:23:15 Maxim Dounin wrote: > > Hello! > > > > On Wed, Mar 26, 2014 at 01:34:19PM +0400, kyprizel wrote: > > > > > will be "log_alloc_failures" better? > > > > I think something like "log_nomem" will be good enough. > > Patch: > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1395937285 -14400 > > # Thu Mar 27 20:21:25 2014 +0400 > > # Node ID 2cc8b9fc7efbf6a98ce29f3f860782a1ebd7e6cf > > # Parent 734f0babfc133c2dc532f2794deadcf9d90245f7 > > Core: slab log_nomem flag. > > > > The flag allows to suppress "ngx_slab_alloc() failed: no memory" messages > > from a slab allocator, e.g., if an LRU expiration is used by a consumer > > and allocation failures aren't fatal. > > > > The flag is now set in the SSL session cache code, and in the limit_req > > module. > > > > diff --git a/src/core/ngx_slab.c b/src/core/ngx_slab.c > > --- a/src/core/ngx_slab.c > > +++ b/src/core/ngx_slab.c > > @@ -129,6 +129,7 @@ ngx_slab_init(ngx_slab_pool_t *pool) > > pool->pages->slab = pages; > > } > > > > + pool->log_nomem = 1; > > pool->log_ctx = &pool->zero; > > pool->zero = '\0'; > > } > > Just a quick nitpicking. > > I'd suggest to put this log_nomem assignment in the last line as to follow > order of elements in the structure definition. 
IMHO, it looks silly this way, and that's why it was placed just
before the log_ctx assignment.

Note well that the order of elements in the structure is more about
memory efficiency, and following the order isn't something required.

-- 
Maxim Dounin
http://nginx.org/

From kirilk at cloudxcel.com Mon Mar 31 10:57:56 2014
From: kirilk at cloudxcel.com (Kiril Kalchev)
Date: Mon, 31 Mar 2014 13:57:56 +0300
Subject: Custom error page on SSL negotiation failure
In-Reply-To: 
References: <7DC1DDBC-3963-4CA7-9058-C967ADAFBE3E@cloudxcel.com>
Message-ID: 

Hi,

Thank you very much for the fast response. It turns out that I had somehow
missed it. The ssl_protocol variable is what I need.

Regards,
Kiril

On Mar 17, 2014, at 5:32 PM, Lukas Tribus wrote:

> Embedded Variables

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3053 bytes
Desc: not available
URL: 

From mdounin at mdounin.ru Mon Mar 31 12:38:11 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 31 Mar 2014 16:38:11 +0400
Subject: Unable to get large POST data in post handler nginx module
In-Reply-To: <7f290b793b506218f0ee8ff1349a5337@ruby-forum.com>
References: <7f290b793b506218f0ee8ff1349a5337@ruby-forum.com>
Message-ID: <20140331123810.GJ34696@mdounin.ru>

Hello!

On Sun, Mar 30, 2014 at 10:53:33AM +0200, Mapper Uno wrote:

> Hi,
>
> I am writing an nginx module that processes POST requests. For small
> data typically less than 1000 bytes, the module successfully receives
> data. However, for larger data > 1000 bytes or so, I get NULL.
>
> ngx_http_request_t *r;
> ngx_chain_t *ch = r->request_body->bufs;
>
> // ----> For larger data ch->buf->pos is NULL
> u_char *incoming_data = ch->buf->pos;
>
> Any help would be highly appreciated.

If a request body isn't small enough to fit into client_body_buffer_size,
it's saved into a file, and the buffers in r->request_body->bufs will be
file-backed buffers. See http://nginx.org/r/client_body_buffer_size for
the user-level documentation.

-- 
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Mon Mar 31 12:59:14 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 31 Mar 2014 16:59:14 +0400
Subject: Handling large POST data in nginx module
In-Reply-To: <12f0e705865b9e1466a907dc950618d4@ruby-forum.com>
References: <12f0e705865b9e1466a907dc950618d4@ruby-forum.com>
Message-ID: <20140331125914.GK34696@mdounin.ru>

Hello!

On Sun, Mar 30, 2014 at 12:20:30PM +0200, Mapper Uno wrote:

> Hi,
>
> I wrote an nginx module that handles POST data and returns it back
> (something similar to echo_read_request_body in the echo module); however,
> in the echo module the input buffer is passed as-is to the output buffer.
> In my case, I need to copy the incoming buffer, process it, and then send
> it back to the client.
>
> What I am seeing in my POST handler routine is that the incoming request
> size is 1377344 bytes. However, the file I am sending via the curl command
> is 1420448 bytes. Where could I be losing the rest of the bytes?
> Moreover, the curl command on the client hangs for a while and eventually
> returns error code 18.

First of all, make sure to use curl properly. In particular, curl --data
is identical to --data-ascii, while for binary data --data-binary must
be used.
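For example (file name and URL are placeholders): when reading the body from a
file, --data strips carriage returns and newlines, which is enough to explain a
body that arrives smaller than the file on disk, while --data-binary sends the
file byte-for-byte:

# --data (same as --data-ascii) strips CR/LF read from the file
curl --data @upload.bin http://localhost:8080/upload

# --data-binary posts the file exactly as it is on disk
curl --data-binary @upload.bin http://localhost:8080/upload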
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Mar 31 14:31:11 2014 From: nginx-forum at nginx.us (tonyschwartz) Date: Mon, 31 Mar 2014 10:31:11 -0400 Subject: Transforming nginx for Windows In-Reply-To: <5113f36cab1dda06ed73fdae2bc1e038.NginxMailingListEnglish@forum.nginx.org> References: <7bb5e1c41a64ef81e91fdc361619bed3.NginxMailingListEnglish@forum.nginx.org> <85af071dad2e21906caca711cf295868.NginxMailingListEnglish@forum.nginx.org> <5113f36cab1dda06ed73fdae2bc1e038.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2b10663b4e075b264f05f5add81b6994.NginxMailingListEnglish@forum.nginx.org> Trying now, will let you know how it goes. Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,248876#msg-248876