From nginx-forum at nginx.us Fri Jun 1 07:58:59 2012 From: nginx-forum at nginx.us (youzhengchuan) Date: Fri, 1 Jun 2012 03:58:59 -0400 (EDT) Subject: =?UTF-8?B?UmU6IE5naW54LVVwc3RyZWFtLXByb3h5IG5leHQgdXBzdHJlYW0t5oOK5aSp5aSn?= =?UTF-8?B?QnVn?= In-Reply-To: <20120531151205.GB31671@mdounin.ru> References: <20120531151205.GB31671@mdounin.ru> Message-ID: Accordance with the above configuration, When the domain "flvstorage.ppserver.org.cn," nginx nslookup results is just a one IP address, this backend upstream can't be used. my apology for my bad english. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227075,227089#msg-227089 From brian at akins.org Fri Jun 1 11:43:31 2012 From: brian at akins.org (Brian Akins) Date: Fri, 1 Jun 2012 07:43:31 -0400 Subject: =?UTF-8?B?UmU6IE5naW54LVVwc3RyZWFtLXByb3h5IG5leHQgdXBzdHJlYW0t5oOK5aSp5aSn?= =?UTF-8?B?QnVn?= In-Reply-To: References: <20120531151205.GB31671@mdounin.ru> Message-ID: <26A7300E-17B7-4302-8708-B01F8E7FD61C@akins.org> Do something like this: you need to define a resolver: resolver 8.8.8.8; # or your dns serevrs location / { set $myupstream flvdownload.ppserver.org.cn; proxy_pass http://$myupstream; } if you need it to use multiple ip addresses. From ganaiwali at gmail.com Fri Jun 1 14:17:17 2012 From: ganaiwali at gmail.com (tariq wali) Date: Fri, 1 Jun 2012 14:17:17 +0000 Subject: ssl/tls https with red cross In-Reply-To: References: Message-ID: can anyone please tell why this error on my nginx instance with ssl/tls 2012/06/01 10:06:12 [emerg] 20286#0: SSL_CTX_use_PrivateKey_file("/usr/local/nginx/conf/login.jobsgulf.com.key") failed (SSL: error:0906406D:PEM routines:PEM_def_callback:problems getting password error:0906A068:PEM routines:PEM_do_header:bad password read error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib) 2012/06/01 10:06:20 [emerg] 866#0: SSL_CTX_use_PrivateKey_file("/usr/local/nginx/conf/login.jobsgulf.com.key") failed (SSL: error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt error:0906A065:PEM routines:PEM_do_header:bad decrypt error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib) On Wed, May 30, 2012 at 3:44 PM, tariq wali wrote: > Hi, > > Looking to get some help from the group . > > We are running nginx/0.7.62 and notice that https with red-cross (either > the connection is not encrypted or the page has some non https content and > in my case it is no encrypted connection ) this is how thw config looks > > > server { > listen 443; > server_name login.jobsgulf.com; > access_log on; > ssl on; > ssl_certificate login.jobsgulf.com.crt; > ssl_certificate_key login.jobsgulf.com.key; > ssl_protocols SSLv3 TLSv1 ; > # ssl_ciphers HIGH:!aNULL:!MD5; > ssl_ciphers > ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; > keepalive_timeout 60; > ssl_session_cache shared:SSL:10m; > ssl_session_timeout 10m; > > I want to know if we really have to explicitly specify ssl_protocols and > ssl_ciphers in the config in order to be fully https for the said directive > ?? > > Also does it make sense to enable ssl/tls support on apache also ? in my > case i have nginx in front of the apache . > > > > > -- > *Tariq Wali.* > > -- *Tariq Wali.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ne at vbart.ru Fri Jun 1 14:42:25 2012 From: ne at vbart.ru (Valentin V. 
Bartenev) Date: Fri, 1 Jun 2012 18:42:25 +0400 Subject: ssl/tls https with red cross In-Reply-To: References: Message-ID: <201206011842.25662.ne@vbart.ru> On Friday 01 June 2012 18:17:17 tariq wali wrote: > can anyone please tell why this error on my nginx instance with ssl/tls > > 2012/06/01 10:06:12 [emerg] 20286#0: > SSL_CTX_use_PrivateKey_file("/usr/local/nginx/conf/login.jobsgulf.com.key") > failed (SSL: error:0906406D:PEM routines:PEM_def_callback:problems getting > password error:0906A068:PEM routines:PEM_do_header:bad password read > error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib) > > 2012/06/01 10:06:20 [emerg] 866#0: > SSL_CTX_use_PrivateKey_file("/usr/local/nginx/conf/login.jobsgulf.com.key") > failed (SSL: error:06065064:digital envelope > routines:EVP_DecryptFinal_ex:bad decrypt error:0906A065:PEM > routines:PEM_do_header:bad decrypt error:140B0009:SSL > routines:SSL_CTX_use_PrivateKey_file:PEM lib) > Nginx doesn't know passphrase for your private key file. You need to remove it. wbr, Valentin V. Bartenev From sammyraul1 at gmail.com Mon Jun 4 02:01:34 2012 From: sammyraul1 at gmail.com (Sammy Raul) Date: Mon, 4 Jun 2012 11:01:34 +0900 Subject: Video Streaming using non http backend, Ref ngx_drizzle Message-ID: Hi All, I am trying to stream video it can be mp4, flv anything using nginx. The video streams in the form of 1024 size will be available from the backend non-http server. For achieveing this I followed the ngx_http_drizzle source. I wrote an upstream handler and followed most of the source code from ngx_http_drizzle. I have few questions or to be more precise I did not understood how the output from drizzle is being streamed to the client. 1) In ngx_http_drizzle_output.c the function ngx_http_drizzle_submit_mem is the place where it is setting the output filter, Is it also sending the response i.e the stream to the client at this point, or it is some other function? 2) What I need to do to send my video contents to the client, I followed the drizzle example but setting output and sending stream to the client, how I can achieve this. I have 1024B avaialble at one point and I want to send this to the client till the backend server has no stream to send and the client should be able to play the content. 3) Is it possible to send the video stream to the client with the browser. Can someone who knows about this, please explain how it works. What changes I need to make. It would be highly appreciated if anyone explains this. Thanks, Raul -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Jun 4 06:31:20 2012 From: nginx-forum at nginx.us (zestsh) Date: Mon, 4 Jun 2012 02:31:20 -0400 (EDT) Subject: Would like to implement WebSocket support In-Reply-To: References: Message-ID: is there some discussing about the future websocket implementation? from the roadmap, we couldn't get to know any new information. Thank you. ??? Wrote: ------------------------------------------------------- > This feature will be implement in the 1.3 branch, > you can see the > roadmap here: http://trac.nginx.org/nginx/roadmap > > Or you can use my tcp proxy module as an > alternative temporarily : > https://github.com/yaoweibin/nginx_tcp_proxy_modul > e > > Thanks. > > 2012/5/18 Alexandr Gomoliako : > >> I want to use websockets in my appliaction > server. My provider has > >> always in front of the application server an > nginx-server. > >> And since nginx currently doesn't support > websockets I have a problem. 
> >> So I just wanted to ask, how is the progress > about proxiing websocket > >> communications? > >> I would be very great and I could imagine that > other users may ask for > >> that, too in the near future. > > > > I've been playing with websockets for awhile now > and I don't think it > > can make a difference for your provider. Real > time web application are > > really expensive to handle, each frame costs > almost as much as > > keepalive request, but you don't usually expect > hundreds of requests > > from each client every second. It's like > streaming video by 100 bytes > > at a time. > > > > So, it has to be some kind of frame multiplexing > over a single > > connection with backend and even then it's still > a lot to handle. > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221884,227134#msg-227134 From sammyraul1 at gmail.com Mon Jun 4 08:15:34 2012 From: sammyraul1 at gmail.com (sammy_raul) Date: Mon, 4 Jun 2012 01:15:34 -0700 (PDT) Subject: Video Streaming using non http backend, Ref ngx_drizzle In-Reply-To: References: Message-ID: <1338797734133-7580237.post@n2.nabble.com> Anything on this, just a small hint on how I can configure the output filter would be highly appreciated. Thanks, Raul -- View this message in context: http://nginx.2469901.n2.nabble.com/Video-Streaming-using-non-http-backend-Ref-ngx-drizzle-tp7580235p7580237.html Sent from the nginx mailing list archive at Nabble.com. From nginx-forum at nginx.us Mon Jun 4 08:18:34 2012 From: nginx-forum at nginx.us (youzhengchuan) Date: Mon, 4 Jun 2012 04:18:34 -0400 (EDT) Subject: =?UTF-8?B?UmU6IE5naW54LVVwc3RyZWFtLXByb3h5IG5leHQgdXBzdHJlYW0t5oOK5aSp5aSn?= =?UTF-8?B?QnVn?= In-Reply-To: <26A7300E-17B7-4302-8708-B01F8E7FD61C@akins.org> References: <26A7300E-17B7-4302-8708-B01F8E7FD61C@akins.org> Message-ID: thanks Brian Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227075,227139#msg-227139 From nginx-forum at nginx.us Mon Jun 4 08:33:19 2012 From: nginx-forum at nginx.us (zestsh) Date: Mon, 4 Jun 2012 04:33:19 -0400 (EDT) Subject: Would like to implement WebSocket support In-Reply-To: References: Message-ID: will the websocket implementation in the nginx 1.3 branch work same with the tcp_proxy_module? if not, what will be it look like? I hope nginx developer geeks give some clues about functionality or related api provided. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221884,227140#msg-227140 From agentzh at gmail.com Mon Jun 4 14:13:14 2012 From: agentzh at gmail.com (agentzh) Date: Mon, 4 Jun 2012 22:13:14 +0800 Subject: Video Streaming using non http backend, Ref ngx_drizzle In-Reply-To: References: Message-ID: Hello! On Mon, Jun 4, 2012 at 10:01 AM, Sammy Raul wrote: > I am trying to stream video it can be mp4, flv anything using nginx. > > The video streams in the form of 1024 size will be available from the > backend non-http server. > I think this can be done trivially via ngx_lua module while still achieving good performance. 
Here is a small example that demonstrates how to meet your requirements with a little Lua: location /api { content_by_lua ' local sock, err = ngx.socket.tcp() if not sock then ngx.log(ngx.ERR, "failed to get socket: ", err) ngx.exit(500) end sock:settimeout(1000) -- 1 sec local ok, err = sock:connect("some.backend.host", 12345) if not ok then ngx.log(ngx.ERR, "failed to connect to upstream: ", err) ngx.exit(502) end local bytes, err = sock:send("some query") if not bytes then ngx.log(ngx.ERR, "failed to send query: ", err) ngx.exit(502) end while true do local data, err, partial = sock:receive(1024) if not data then if err == "closed" then if partial then ngx.print(partial) ngx.eof() ngx.exit(ngx.OK) end else ngx.log(ngx.ERR, "error reading data: ", err) ngx.exit(502) end else ngx.print(data) ngx.flush(true) end end '; } See the documentation for details: http://wiki.nginx.org/HttpLuaModule > For achieveing this I followed the ngx_http_drizzle source. > > I wrote an upstream handler and followed most of the source code from > ngx_http_drizzle. > As the author of ngx_drizzle, I suggest you start from trying out ngx_lua. Customizing ngx_drizzle for your needs requires a *lot* of work. The C approach should only be attempted when Lua is indeed too slow for your purpose, which is not very likely for many applications though. Also, please note that ngx_drizzle does not support strict non-buffered data output. So, for downstream connections that are slow to write, data will still accumulate in RAM without control. On the other hand, the ngx_lua sample given above does not suffer from this issue. > I have few questions or to be more precise I did not understood how the > output from drizzle is being streamed to the client. > > 1) In ngx_http_drizzle_output.c the function ngx_http_drizzle_submit_mem is > the place where it is setting the output filter, Is it also sending the > response i.e the stream to the client at this point, or it is some other > function? > Nope. Sending output buffers to the output filter chain is done by the ngx_http_drizzle_output_bufs function. > 2) What I need to do to send my video contents to the client, I followed the > drizzle example but setting output and sending stream to the client, how I > can achieve this. I have 1024B avaialble at one point and I want to send > this to the client till the backend server has no stream to send and the > client should be able to play the content. > Basically, you can call the ngx_http_output_filter function, just as other nginx upstream modules. > 3) Is it possible to send the video stream to the client with the browser. > I do not quite follow this question. Best regards, -agentzh From nginx-forum at nginx.us Mon Jun 4 23:32:32 2012 From: nginx-forum at nginx.us (ptiseo) Date: Mon, 4 Jun 2012 19:32:32 -0400 (EDT) Subject: Upgrade From Fedora 15 to 17: nginx Doesn't Work Message-ID: <58edd092bc513c870ea369bf566e0331.NginxMailingListEnglish@forum.nginx.org> Had nginx 1.0.5 running on a Fedora 15 system just fine for months. Upgraded the server from F15 to F17. At first, all seems well, but over time, I keep getting 500 errors on proxied sites. Logs say: "socket() failed (24: Too many open files) while connecting to upstream". Has anyone else had this experience? If so, what's the root cause? Had to revert server back to a backup to get sites functional. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227162,227162#msg-227162 From reallfqq-nginx at yahoo.fr Tue Jun 5 00:15:07 2012 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Mon, 4 Jun 2012 20:15:07 -0400 Subject: Upgrade From Fedora 15 to 17: nginx Doesn't Work In-Reply-To: <58edd092bc513c870ea369bf566e0331.NginxMailingListEnglish@forum.nginx.org> References: <58edd092bc513c870ea369bf566e0331.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hmmm... Apparently there seem to be some official packages of Nginx in Fedora repositories. However, there have been some updates of the 1.0.x branch there. 1.0.5 seems to be far outdated. For example the latest seems to be a 1.0.15-4 version of the package: http://lists.fedoraproject.org/pipermail/package-announce/2012-May/081214.html I can't check much, since I don't have Fedora. I just did little online research on the Fedora-Announce mailing-list . Hope my 2 cents helped, --- *B. R.* On Mon, Jun 4, 2012 at 7:32 PM, ptiseo wrote: > Had nginx 1.0.5 running on a Fedora 15 system just fine for months. > Upgraded the server from F15 to F17. At first, all seems well, but over > time, I keep getting 500 errors on proxied sites. Logs say: "socket() > failed (24: Too many open files) while connecting to upstream". Has > anyone else had this experience? If so, what's the root cause? > > Had to revert server back to a backup to get sites functional. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,227162,227162#msg-227162 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steeeeeveee at gmx.net Tue Jun 5 01:16:06 2012 From: steeeeeveee at gmx.net (Steve) Date: Tue, 05 Jun 2012 03:16:06 +0200 Subject: Upgrade From Fedora 15 to 17: nginx Doesn't Work In-Reply-To: <58edd092bc513c870ea369bf566e0331.NginxMailingListEnglish@forum.nginx.org> References: <58edd092bc513c870ea369bf566e0331.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120605011606.192860@gmx.net> -------- Original-Nachricht -------- > Datum: Mon, 4 Jun 2012 19:32:32 -0400 (EDT) > Von: "ptiseo" > An: nginx at nginx.org > Betreff: Upgrade From Fedora 15 to 17: nginx Doesn\'t Work > Had nginx 1.0.5 running on a Fedora 15 system just fine for months. > Months? MONTHS? So you are not a new *nix user? > Upgraded the server from F15 to F17. At first, all seems well, but over > time, I keep getting 500 errors on proxied sites. Logs say: "socket() > failed (24: Too many open files) while connecting to upstream". Has > anyone else had this experience? If so, what's the root cause? > The root cause you ask? You must be joking. I mean... how hard is it to interpret "Too many open files"? > Had to revert server back to a backup to get sites functional. > Ohhh boy. All you need to do is increase the open file limit in /etc/sysctl.conf and /etc/security/limits.conf. In my installation I currently have... ... in /etc/sysctl.conf: fs.file-max = 5049800 ... in /etc/security/limits.conf: * - nofile 101062 I assume you used the same nginx.conf like in the old install? So no need for me to mention worker_rlimit_nofile. Right? Setting/getting file limits is really basic linux system admin knowledge. I don't want to be harsh but not knowing that and going back from a fresh installed Fedora 17 to a backup of Fedora 15 because of the above error is crazy. You need to spend some time educating yourself in how to maintain a *nix system. And while at it... 
please take your time to learn how to use Google: http://www.google.com/search?q=Too+many+open+files&ie=utf-8&oe=utf-8&aq=t If you don't find the solution in the first 10 or 20 links then I am going to eat xx xxxxx! -- Empfehlen Sie GMX DSL Ihren Freunden und Bekannten und wir belohnen Sie mit bis zu 50,- Euro! https://freundschaftswerbung.gmx.de From nginx-forum at nginx.us Tue Jun 5 01:45:36 2012 From: nginx-forum at nginx.us (ptiseo) Date: Mon, 4 Jun 2012 21:45:36 -0400 (EDT) Subject: Upgrade From Fedora 15 to 17: nginx Doesn't Work In-Reply-To: <20120605011606.192860@gmx.net> References: <20120605011606.192860@gmx.net> Message-ID: @steve: nginx seems to attract the hostile and malpadapted? I've seen more arrogant a**es in this forum than most other places. You guys need to relax; it's just software. No need for you to go nuts like that. It just shows badly on you. The reason I restore from backup is because I needed that proxy online for development. And, I have used Linux for a while that can be counted in more than months. Do you know what they say about "assume"? I did Google. I saw that worked for some and not for others. I tried it, it didn't work for me. My file-max setting was already some 200K. So, let me ask this, why would I need to increase open file limit anyways? This is a low traffic proxy. @BR: Thanks for not being as bad as steve. I did notice that Fedora does not have an up-to-date package. For now, I will stay with the backup and spin up another virtual machine to see if I can test further. If anyone has any other ideas than the first 20 Google hits, I'd love to hear of them. Thx. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227162,227165#msg-227165 From steeeeeveee at gmx.net Tue Jun 5 02:03:25 2012 From: steeeeeveee at gmx.net (Steve) Date: Tue, 05 Jun 2012 04:03:25 +0200 Subject: Upgrade From Fedora 15 to 17: nginx Doesn't Work In-Reply-To: References: <20120605011606.192860@gmx.net> Message-ID: <20120605020325.192850@gmx.net> -------- Original-Nachricht -------- > Datum: Mon, 4 Jun 2012 21:45:36 -0400 (EDT) > Von: "ptiseo" > An: nginx at nginx.org > Betreff: Re: Upgrade From Fedora 15 to 17: nginx Doesn\'t Work > @steve: nginx seems to attract the hostile and malpadapted? I've seen > more arrogant a**es in this forum than most other places. You guys need > to relax; it's just software. No need for you to go nuts like that. It > just shows badly on you. > LOL. It's almost 4am over here in Europe and I was sitting here reading stuff (fighting with sleep) and from time to time looking at the nginx mailing list and then saw your post and was falling almost off my chair. Could not resist and had to post. ;) > The reason I restore from backup is because I needed that proxy online > for development. And, I have used Linux for a while that can be counted > in more than months. Do you know what they say about "assume"? > > I did Google. I saw that worked for some and not for others. I tried it, > it didn't work for me. My file-max setting was already some 200K. > In sysctl.conf? Or /etc/security/limits.conf? Does your system use PAM? > So, let me ask this, why would I need to increase open file limit > anyways? This is a low traffic proxy. > Well.... you have obviously the need else nginx would not complain about a low open file descriptor limit. Looks like you configured nginx to use a lot of descriptors. But how can I tell without having seen your nginx configuration (I left my crystal ball in the office)? 
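For reference, the nginx side of that limit is normally just one directive next to worker_connections. A minimal sketch (the numbers here are purely illustrative, not a recommendation for your box):

worker_processes      4;

# per worker process: must cover worker_connections plus whatever
# other files/sockets a worker opens (proxied connections, logs, ...)
worker_rlimit_nofile  8192;

events {
    worker_connections  4096;
}

Without worker_rlimit_nofile the workers are stuck with whatever limit they inherit from the environment that started the master, which is often as low as 1024.
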
If you want real good help then post your nginx.conf, the output of "ulimit -a", the content of /etc/sysctl.conf, the content of /etc/security/limits.conf, the output of "ls -lah /etc/security/limits.d/*" and the content of files found in /etc/security/limits.d/ > @BR: Thanks for not being as bad as steve. I did notice that Fedora does > not have an up-to-date package. For now, I will stay with the backup and > spin up another virtual machine to see if I can test further. > > If anyone has any other ideas than the first 20 Google hits, I'd love to > hear of them. Thx. > http://www.cyberciti.biz/faq/linux-unix-nginx-too-many-open-files/ http://blog.unixy.net/2010/11/nginx-accept-failed-24-too-many-open-files-while-accepting-new-connection/ http://forum.nginx.org/read.php?2,187416 http://forum.nginx.org/read.php?2,61252 http://forum.nginx.org/read.php?2,13111 etc... > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,227162,227165#msg-227165 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- NEU: FreePhone 3-fach-Flat mit kostenlosem Smartphone! Jetzt informieren: http://mobile.1und1.de/?ac=OM.PW.PW003K20328T7073a From jim at ohlste.in Tue Jun 5 02:08:14 2012 From: jim at ohlste.in (Jim Ohlstein) Date: Mon, 4 Jun 2012 22:08:14 -0400 Subject: Upgrade From Fedora 15 to 17: nginx Doesn't Work In-Reply-To: References: <20120605011606.192860@gmx.net> Message-ID: On Jun 4, 2012 9:45 PM, "ptiseo" wrote: > > @steve: nginx seems to attract the hostile and malpadapted? I've seen > more arrogant a**es in this forum than most other places. You guys need > to relax; it's just software. No need for you to go nuts like that. It > just shows badly on you. Hardly the case. This is a pretty well mannered mailing list compared to some to which I subscribe. But, to be constructive, please do not top post. It's very confusing when trying to follow a threaded discussion. So, please answer the question asked -do you have an entry in your nginx.conf for "worker_rlimit_nofile"? Posting your full nginx.conf might help. > > The reason I restore from backup is because I needed that proxy online > for development. And, I have used Linux for a while that can be counted > in more than months. Do you know what they say about "assume"? > > I did Google. I saw that worked for some and not for others. I tried it, > it didn't work for me. My file-max setting was already some 200K. To which settings(s) are you referring? > > So, let me ask this, why would I need to increase open file limit > anyways? This is a low traffic proxy. Maybe an issue with how you've configured *your* system which has nothing to do with nginx? Not to be one of those hostile, maladaped, arrogant people to whom you referred, but this isn't a Fedora mailing list. Perhaps you can find help there in determining what process(es) is/are using all of those file descriptors. Maybe one of them will hold your hand and not hurt your feelings in the process. Calling people names is certainly *not* a good way to get people to help you. > > @BR: Thanks for not being as bad as steve. I did notice that Fedora does > not have an up-to-date package. For now, I will stay with the backup and > spin up another virtual machine to see if I can test further. > > If anyone has any other ideas than the first 20 Google hits, I'd love to > hear of them. Thx. > Jim Ohlstein -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sammyraul1 at gmail.com Tue Jun 5 04:06:23 2012 From: sammyraul1 at gmail.com (sammy_raul) Date: Mon, 4 Jun 2012 21:06:23 -0700 (PDT) Subject: Video Streaming using non http backend, Ref ngx_drizzle In-Reply-To: References: Message-ID: <1338869183160-7580247.post@n2.nabble.com> Thanks agentzh for explaining so well. When I am connected to the backend server I am getting buffer which I am sending to the client like this: ngx_int_t ngx_http_ccn_send_output_bufs(ngx_http_request_t *r, ngx_http_upstream_ccn_peer_data_t *dp, const unsigned char *data, size_t data_size) { ngx_http_upstream_t *u = r->upstream; ngx_int_t rc; ngx_buf_t *b; ngx_chain_t out; /* allocate a buffer for your response body */ b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t)); if (b == NULL) { return NGX_HTTP_INTERNAL_SERVER_ERROR; } /* attach this buffer to the buffer chain */ out.buf = b; out.next = NULL; /* adjust the pointers of the buffer */ b->pos = (u_char *) data; b->last = b->pos + data_size - 1; b->memory = 1; /* this buffer is in memory */ b->last_buf = 1; /* this is the last buffer in the buffer chain */ if ( ! u->header_sent ) { fprintf(stdout, "ngx_http_ccn_send_output_bufs u->header_sent\n"); r->headers_out.status = NGX_HTTP_OK; /* set the Content-Type header */ r->headers_out.content_type.data = (u_char *) "application/octet-stream"; r->headers_out.content_type.len = sizeof("application/octet-stream") - 1; r->headers_out.content_type_len = sizeof("application/octet-stream") - 1; rc = ngx_http_send_header(r); if (rc == NGX_ERROR || rc >= NGX_HTTP_SPECIAL_RESPONSE) { fprintf(stdout, "ngx_http_ccn_send_output_bufs header sent error\n"); return rc; } u->header_sent = 1; fprintf(stdout, "ngx_http_ccn_send_output_bufs u->header_sent=%d\n",u->header_sent); } rc = ngx_http_output_filter(r, &out); if (rc == NGX_ERROR || rc >= NGX_HTTP_SPECIAL_RESPONSE) { return rc; } this function I am calling everytime I am receiving data (1024) from the backend when it is end of stream I am calling ngx_http_finalize_request(r, rc); but it is not working as expected which is like playing the video file in the browser I didn't follow the lua module yet, but will look into it Is there anything I am doing wrong while setting the output buffer, do I need to change this line b->last_buf = 1; or something else. Thanks, Raul -- View this message in context: http://nginx.2469901.n2.nabble.com/Video-Streaming-using-non-http-backend-Ref-ngx-drizzle-tp7580235p7580247.html Sent from the nginx mailing list archive at Nabble.com. From agentzh at gmail.com Tue Jun 5 04:16:57 2012 From: agentzh at gmail.com (agentzh) Date: Tue, 5 Jun 2012 12:16:57 +0800 Subject: Video Streaming using non http backend, Ref ngx_drizzle In-Reply-To: <1338869183160-7580247.post@n2.nabble.com> References: <1338869183160-7580247.post@n2.nabble.com> Message-ID: Hello! On Tue, Jun 5, 2012 at 12:06 PM, sammy_raul wrote: > > ? ?/* adjust the pointers of the buffer */ > ? ?b->pos = (u_char *) data; > ? ?b->last = b->pos + data_size - 1; > ? ?b->memory = 1; ? ?/* this buffer is in memory */ > ? ?b->last_buf = 1; ?/* this is the last buffer in the buffer chain */ > Setting b->last_buf to 1 means the current buf is the last buf in the whole response body stream in this context (actually it is the indicator for the end of the output data stream). So you must not set this for every single buf. Also, you should never set this flag in case you're in a subrequest or things will break. 
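For each chunk you read from the backend, the buffer setup could look roughly like this (just a sketch: ngx_http_ccn_send_chunk is a made-up helper name, error handling is trimmed, and the usual module includes ngx_config.h / ngx_core.h / ngx_http.h are assumed):

static ngx_int_t
ngx_http_ccn_send_chunk(ngx_http_request_t *r, u_char *data, size_t size,
    ngx_uint_t last)
{
    ngx_buf_t    *b;
    ngx_chain_t   out;

    b = ngx_calloc_buf(r->pool);
    if (b == NULL) {
        return NGX_ERROR;
    }

    b->pos = data;
    b->last = data + size;   /* points one byte past the data */
    b->memory = 1;           /* the data lives in memory, read-only */
    b->flush = 1;            /* push this chunk downstream now */

    if (last) {
        /* only for the very last chunk of the whole response;
           in a subrequest set last_in_chain instead of last_buf */
        b->last_buf = (r == r->main);
        b->last_in_chain = 1;
    }

    out.buf = b;
    out.next = NULL;

    return ngx_http_output_filter(r, &out);
}

That way last_buf is raised exactly once, at the true end of the stream, instead of on every buffer.
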
> > this function I am calling everytime I am receiving data (1024) from the > backend > when it is end of stream I am calling > ngx_http_finalize_request(r, rc); > Call ngx_http_send_header once and call ngx_http_output_filter multiple times (as needed). If you need strict non-buffered output behaivor, you have to *wait* for the downstream to flush out *all* the data before continuing reading data from upstream. You can check out how the ngx_http_upstream (in non-buffered mode) and ngx_lua modules do this. > > I didn't follow the lua module yet, but will look into it > I strongly recommend it because it should save you a *lot* of time (I guess) :) > Is there anything I am doing wrong while setting the output buffer, do I > need to change this line b->last_buf = 1; > or something else. > See above. Regards, -agentzh P.S. C programming is hard; let's go scripting! :D From nginx-forum at nginx.us Tue Jun 5 07:33:24 2012 From: nginx-forum at nginx.us (speedfirst) Date: Tue, 5 Jun 2012 03:33:24 -0400 (EDT) Subject: Can't upload big files via nginx as reverse proxy Message-ID: <30fe20f1fb79eb7d74ba300eac01c6c5.NginxMailingListEnglish@forum.nginx.org> Hey. In my env, the layout is: client <--> nginx <--> jetty In the client, there is a control. I tried to upload a file with size of 3.7MB. In the client request, the content type is "multipart/form-data", and there is an "Expect: 100-continue" header. Through tcpdump, I could see nginx immediately return an "HTTP/1.1 100 Continue" response, and started to read data. After buffering the uploaded data, nginx then started to send them to jetty. However in this time, no "Expect: 100-continue" header was proxied because HTTP/1.0 is used. After sending part of data, nginx stopped continuing to proxy the rest of data, but the connection is kept. After 30s, jetty reports time out exception and returned an response. Nginx finally proxied this response back to client. I simply merged all the tcp segments which was sent from nginx to jetty, and found only 400K bytes are proxied. My nginx config is quite simple, just server { listen 80; location / { proxy_pass http://upstream; } } All proxy buffer config was not explicitly set so the default values were applied. I tried to "proxy_buffering off;" and re-do the experiment above and find the result was same. I also tried to observe the temp file written by nginx but it's automatically removed when everything is done. Any way to keep it? Therefore, I'm wondering is this expected? Did I make mistakes for configuring proxy buffers? Do I have to use the third party "upload" module (http://www.grid.net.ru/nginx/upload.en.html) to make it work? Many thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227175,227175#msg-227175 From nginx-forum at nginx.us Tue Jun 5 07:39:09 2012 From: nginx-forum at nginx.us (speedfirst) Date: Tue, 5 Jun 2012 03:39:09 -0400 (EDT) Subject: Can't upload big files via nginx as reverse proxy In-Reply-To: <30fe20f1fb79eb7d74ba300eac01c6c5.NginxMailingListEnglish@forum.nginx.org> References: <30fe20f1fb79eb7d74ba300eac01c6c5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5dd8083cc275240e55cc760035b5d482.NginxMailingListEnglish@forum.nginx.org> by the way, my client request is a POST request. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227175,227176#msg-227176 From mdounin at mdounin.ru Tue Jun 5 07:54:17 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Jun 2012 11:54:17 +0400 Subject: Can't upload big files via nginx as reverse proxy In-Reply-To: <30fe20f1fb79eb7d74ba300eac01c6c5.NginxMailingListEnglish@forum.nginx.org> References: <30fe20f1fb79eb7d74ba300eac01c6c5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120605075416.GS31671@mdounin.ru> Hello! On Tue, Jun 05, 2012 at 03:33:24AM -0400, speedfirst wrote: > Hey. > > In my env, the layout is: > > client <--> nginx <--> jetty > > In the client, there is a control. I tried to upload a > file with size of 3.7MB. In the client request, the content type is > "multipart/form-data", and there is an "Expect: 100-continue" header. > > Through tcpdump, I could see nginx immediately return an "HTTP/1.1 100 > Continue" response, and started to read data. After buffering the > uploaded data, nginx then started to send them to jetty. However in this > time, no "Expect: 100-continue" header was proxied because HTTP/1.0 is > used. So far this is expected behaviour. > After sending part of data, nginx stopped continuing to proxy the rest > of data, but the connection is kept. After 30s, jetty reports time out > exception and returned an response. Nginx finally proxied this response > back to client. > > I simply merged all the tcp segments which was sent from nginx to jetty, > and found only 400K bytes are proxied. This is obviously not expected. Anything in error log? Could you please provide tcpdump and debug log? It would be also cool to see which version of nginx you are using, i.e. please provide "nginx -V" output, and a full config. > > > My nginx config is quite simple, just > > server { > listen 80; > location / { > proxy_pass http://upstream; > } > } This misses at least "client_max_body_size" as by default 3.5MB upload will be just rejected. > > All proxy buffer config was not explicitly set so the default values > were applied. I tried to "proxy_buffering off;" and re-do the experiment > above and find the result was same. Proxy buffers, as well as proxy_buffering, doesn't matter, as it only affects sending response from an upstream to a client. > I also tried to observe the temp file written by nginx but it's > automatically removed when everything is done. Any way to keep it? client_body_in_file_only on; See here for details: http://nginx.org/r/client_body_in_file_only Maxim Dounin From nginx-forum at nginx.us Tue Jun 5 09:24:48 2012 From: nginx-forum at nginx.us (speedfirst) Date: Tue, 5 Jun 2012 05:24:48 -0400 (EDT) Subject: Can't upload big files via nginx as reverse proxy In-Reply-To: <20120605075416.GS31671@mdounin.ru> References: <20120605075416.GS31671@mdounin.ru> Message-ID: Thanks for your quick response. In my config, client_max_body_size is set to 0. Does it mean "unlimited"? I made this test in two version, 0.9.3 and 1.2.0. Both have the same problem. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227175,227180#msg-227180 From nginx-forum at nginx.us Tue Jun 5 10:08:30 2012 From: nginx-forum at nginx.us (speedfirst) Date: Tue, 5 Jun 2012 06:08:30 -0400 (EDT) Subject: Can't upload big files via nginx as reverse proxy In-Reply-To: References: <20120605075416.GS31671@mdounin.ru> Message-ID: <4cd2ded1222dfbb80b803d6cb9d3f801.NginxMailingListEnglish@forum.nginx.org> retry the test while client_max_body_size=0; The size of tmp file is as expected, about 3.7M : root at zm-dev03:/opt/data/tmp/nginx/client# ll 0000000001 -rw------- 1 speedfirst speedfirst 3914486 2012-06-06 01:27 0000000001 Here is the client script from curl: curl -v -u admin at dev03.eng.test.com:test123 -F "file=@test.tgz;filename=test.tgz;type=application/x-compressed-tar" "http://dev03.eng.test.com/service/home/admin at dev03.eng.test.com/?fmt=tgz&resolve=reset" Here is the debug_http nginx log. 2012/06/06 01:27:35 [debug] 15621#0: *5 http process request line 2012/06/06 01:27:35 [debug] 15621#0: *5 http request line: "POST /service/home/admin at dev03.eng.test.com/?fmt=tgz&resolve=reset HTTP/1.1" 2012/06/06 01:27:35 [debug] 15621#0: *5 http uri: "/service/home/admin at dev03.eng.test.com/" 2012/06/06 01:27:35 [debug] 15621#0: *5 http args: "fmt=tgz&resolve=reset" 2012/06/06 01:27:35 [debug] 15621#0: *5 http exten: "" 2012/06/06 01:27:35 [debug] 15621#0: *5 http process request header line 2012/06/06 01:27:35 [debug] 15621#0: *5 http header: "Authorization: Basic YWRtaW5Aem0tZGV2MDMuZW5nLnZtd2FyZS5jb206dGVzdDEyMw==" 2012/06/06 01:27:35 [debug] 15621#0: *5 http header: "User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3" 2012/06/06 01:27:35 [debug] 15621#0: *5 http header: "Host: dev03.eng.vmware.com" 2012/06/06 01:27:35 [debug] 15621#0: *5 http header: "Accept: */*" 2012/06/06 01:27:35 [debug] 15621#0: *5 http header: "Content-Length: 3914486" 2012/06/06 01:27:35 [debug] 15621#0: *5 http header: "Expect: 100-continue" 2012/06/06 01:27:35 [debug] 15621#0: *5 http header: "Content-Type: multipart/form-data; boundary=----------------------------f9dbdf4f72b4" 2012/06/06 01:27:35 [debug] 15621#0: *5 http header done 2012/06/06 01:27:35 [debug] 15621#0: *5 rewrite phase: 0 2012/06/06 01:27:35 [debug] 15621#0: *5 test location: "/" 2012/06/06 01:27:35 [debug] 15621#0: *5 using configuration "/" 2012/06/06 01:27:35 [debug] 15621#0: *5 generic phase: 4 2012/06/06 01:27:35 [debug] 15621#0: *5 generic phase: 5 2012/06/06 01:27:35 [debug] 15621#0: *5 access phase: 6 2012/06/06 01:27:35 [debug] 15621#0: *5 access phase: 7 2012/06/06 01:27:35 [debug] 15621#0: *5 post access phase: 8 2012/06/06 01:27:35 [debug] 15621#0: *5 send 100 Continue 2012/06/06 01:27:35 [debug] 15621#0: *5 http read client request body 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body recv -2 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body rest 3914486 2012/06/06 01:27:35 [debug] 15621#0: *5 http finalize request: -4, "/service/home/admin at dev03.eng.test.com/?fmt=tgz&resolve=reset" a:1, c:2 2012/06/06 01:27:35 [debug] 15621#0: *5 http request count:2 blk:0 2012/06/06 01:27:35 [debug] 15621#0: *5 http run request: "/service/home/admin at dev03.eng.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:27:35 [debug] 15621#0: *5 http read client request body 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body recv 174 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body rest 3914312 2012/06/06 
01:27:35 [debug] 15621#0: *5 http client request body recv -2 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body rest 3914312 2012/06/06 01:27:35 [debug] 15621#0: *5 http run request: "/service/home/admin at dev03.eng.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:27:35 [debug] 15621#0: *5 http read client request body 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body recv 1360 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body rest 3912952 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body recv -2 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body rest 3912952 2012/06/06 01:27:35 [debug] 15621#0: *5 http run request: "/service/home/admin at dev03.eng.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:27:35 [debug] 15621#0: *5 http read client request body 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body recv 1360 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body rest 3911592 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body recv 1360 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body rest 3910232 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body recv -2 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body rest 3910232 2012/06/06 01:27:35 [debug] 15621#0: *5 http run request: "/service/home/admin at dev03.eng.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:27:35 [debug] 15621#0: *5 http read client request body 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body recv 1360 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body rest 3908872 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body recv -2 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body rest 3908872 2012/06/06 01:27:35 [debug] 15621#0: *5 http run request: "/service/home/admin at dev03.eng.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:27:35 [debug] 15621#0: *5 http read client request body 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body recv 1360 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body rest 3907512 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body recv -2 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body rest 3907512 2012/06/06 01:27:35 [debug] 15621#0: *5 http run request: "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:27:35 [debug] 15621#0: *5 http read client request body 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body recv 1218 2012/06/06 01:27:35 [notice] 15621#0: *5 a client request body is buffered to a temporary file /opt/zimbra/data/tmp/nginx/client/0000000001, client: 10.112.117.117, server: , request: "POST /service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset HTTP/1.1", host: "dev03.test.com" 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body recv 1502 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body rest 3904792 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body recv -2 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body rest 3904792 2012/06/06 01:27:35 [debug] 15621#0: *5 http run request: "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:27:35 [debug] 15621#0: *5 http read client request body 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body recv 1360 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body rest 3903432 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body 
recv -2 2012/06/06 01:27:35 [debug] 15621#0: *5 http client request body rest 3903432 2012/06/06 01:27:35 [debug] 15621#0: *5 http run request: "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" ... ... ... <-- tens of similar log entries 2012/06/06 01:27:37 [debug] 15621#0: *5 http read client request body 2012/06/06 01:27:37 [debug] 15621#0: *5 http client request body recv 1360 2012/06/06 01:27:37 [debug] 15621#0: *5 http client request body rest 1528 2012/06/06 01:27:37 [debug] 15621#0: *5 http client request body recv -2 2012/06/06 01:27:37 [debug] 15621#0: *5 http client request body rest 1528 2012/06/06 01:27:37 [debug] 15621#0: *5 http run request: "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:27:37 [debug] 15621#0: *5 http read client request body 2012/06/06 01:27:37 [debug] 15621#0: *5 http client request body recv 1528 2012/06/06 01:27:37 [debug] 15621#0: *5 http client request body rest 0 2012/06/06 01:27:37 [debug] 15621#0: *5 http init upstream, client timer: 0 2012/06/06 01:27:37 [debug] 15621#0: *5 http script copy: "X-Forwarded-For: " 2012/06/06 01:27:37 [debug] 15621#0: *5 http script var: "10.112.117.117" 2012/06/06 01:27:37 [debug] 15621#0: *5 http script copy: " " 2012/06/06 01:27:37 [debug] 15621#0: *5 http script copy: "Host: " 2012/06/06 01:27:37 [debug] 15621#0: *5 http script var: "dev03.test.com" 2012/06/06 01:27:37 [debug] 15621#0: *5 http script copy: " " 2012/06/06 01:27:37 [debug] 15621#0: *5 http script copy: "Connection: close " 2012/06/06 01:27:37 [debug] 15621#0: *5 http proxy header: "Authorization: Basic YWRtaW5Aem0tZGV2MDMuZW5nLnZtd2FyZS5jb206dGVzdDEyMw==" 2012/06/06 01:27:37 [debug] 15621#0: *5 http proxy header: "User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3" 2012/06/06 01:27:37 [debug] 15621#0: *5 http proxy header: "Accept: */*" 2012/06/06 01:27:37 [debug] 15621#0: *5 http proxy header: "Content-Length: 3914486" 2012/06/06 01:27:37 [debug] 15621#0: *5 http proxy header: "Content-Type: multipart/form-data; boundary=----------------------------f9dbdf4f72b4" 2012/06/06 01:27:37 [debug] 15621#0: *5 http proxy header: "POST /service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset HTTP/1.0 X-Forwarded-For: 10.112.117.117 Host: dev03.test.com Connection: close Authorization: Basic YWRtaW5Aem0tZGV2MDMuZW5nLnZtd2FyZS5jb206dGVzdDEyMw== User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3 Accept: */* Content-Length: 3914486 Content-Type: multipart/form-data; boundary=----------------------------f9dbdf4f72b4 " 2012/06/06 01:27:37 [debug] 15621#0: *5 http cleanup add: 0000000001D28F58 2012/06/06 01:27:37 [debug] 15621#0: *5 zmauth: prepare route for proxy ... 
..<-- choose the upstream route 2012/06/06 01:27:37 [debug] 15621#0: *5 zmauth: prepare upstream connection, try: 1 2012/06/06 01:27:37 [debug] 15621#0: *5 http upstream connect: -2 2012/06/06 01:27:37 [debug] 15621#0: *5 http upstream request: "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:27:37 [debug] 15621#0: *5 http upstream send request handler 2012/06/06 01:27:37 [debug] 15621#0: *5 http upstream send request 2012/06/06 01:27:37 [debug] 15621#0: *5 http upstream request: "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:27:37 [debug] 15621#0: *5 http upstream send request handler 2012/06/06 01:27:37 [debug] 15621#0: *5 http upstream send request 2012/06/06 01:27:40 [debug] 15621#0: *5 http upstream request: "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:27:40 [debug] 15621#0: *5 http upstream send request handler 2012/06/06 01:27:40 [debug] 15621#0: *5 http upstream send request 2012/06/06 01:27:44 [debug] 15621#0: *5 http upstream request: "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" ... ... <--- tens of similar log entires 2012/06/06 01:28:19 [debug] 15621#0: *5 http upstream send request handler 2012/06/06 01:28:19 [debug] 15621#0: *5 http upstream send request 2012/06/06 01:28:22 [debug] 15621#0: *5 http upstream request: "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:28:22 [debug] 15621#0: *5 http upstream send request handler 2012/06/06 01:28:22 [debug] 15621#0: *5 http upstream send request 2012/06/06 01:28:22 [debug] 15621#0: *5 http upstream request: "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:28:22 [debug] 15621#0: *5 http upstream process header 2012/06/06 01:28:22 [debug] 15621#0: *5 http proxy status 200 "200 OK" 2012/06/06 01:28:22 [debug] 15621#0: *5 http proxy header: "Date: Tue, 05 Jun 2012 17:27:37 GMT" 2012/06/06 01:28:22 [debug] 15621#0: *5 http proxy header: "Content-Type: text/html; charset=utf-8" 2012/06/06 01:28:22 [debug] 15621#0: *5 http proxy header: "Connection: close" 2012/06/06 01:28:22 [debug] 15621#0: *5 http proxy header done 2012/06/06 01:28:22 [debug] 15621#0: *5 HTTP/1.1 200 OK Server: nginx Date: Tue, 05 Jun 2012 17:28:22 GMT Content-Type: text/html; charset=utf-8 Transfer-Encoding: chunked Connection: keep-alive 2012/06/06 01:28:22 [debug] 15621#0: *5 http write filter: l:0 f:0 s:163 2012/06/06 01:28:22 [debug] 15621#0: *5 http cacheable: 0 2012/06/06 01:28:22 [debug] 15621#0: *5 http upstream process upstream 2012/06/06 01:28:23 [debug] 15621#0: *5 http upstream request: "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:28:23 [debug] 15621#0: *5 http upstream send request handler 2012/06/06 01:28:57 [debug] 15621#0: *5 http upstream request: "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:28:57 [debug] 15621#0: *5 http upstream process upstream 2012/06/06 01:28:57 [debug] 15621#0: *5 http output filter "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:28:57 [debug] 15621#0: *5 http copy filter: "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:28:57 [debug] 15621#0: *5 http postpone filter "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 0000000001D2A388 2012/06/06 01:28:57 [debug] 15621#0: *5 http chunk: 22 2012/06/06 01:28:57 [debug] 15621#0: *5 http write filter: l:0 f:0 s:191 2012/06/06 01:28:57 [debug] 15621#0: *5 http copy filter: 0 
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:28:57 [debug] 15621#0: *5 http upstream exit: 0000000000000000 2012/06/06 01:28:57 [debug] 15621#0: *5 finalize http upstream request: 0 2012/06/06 01:28:57 [debug] 15621#0: *5 finalize http proxy request 2012/06/06 01:28:57 [debug] 15621#0: *5 free rr peer 1 0 2012/06/06 01:28:57 [debug] 15621#0: *5 close http upstream connection: 20 2012/06/06 01:28:57 [debug] 15621#0: *5 http upstream temp fd: -1 2012/06/06 01:28:57 [debug] 15621#0: *5 http output filter "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:28:57 [debug] 15621#0: *5 http copy filter: "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:28:57 [debug] 15621#0: *5 http postpone filter "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 00007FFF51B162A0 2012/06/06 01:28:57 [debug] 15621#0: *5 http chunk: 0 2012/06/06 01:28:57 [debug] 15621#0: *5 http write filter: l:1 f:0 s:196 2012/06/06 01:28:57 [debug] 15621#0: *5 http write filter limit 0 2012/06/06 01:28:57 [debug] 15621#0: *5 http write filter 0000000000000000 2012/06/06 01:28:57 [debug] 15621#0: *5 http copy filter: 0 "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" 2012/06/06 01:28:57 [debug] 15621#0: *5 http finalize request: 0, "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" a:1, c:1 2012/06/06 01:28:57 [debug] 15621#0: *5 set http keepalive handler 2012/06/06 01:28:57 [debug] 15621#0: *5 http close request 2012/06/06 01:28:57 [debug] 15621#0: *5 http log handler 2012/06/06 01:28:57 [debug] 15621#0: *5 hc free: 0000000000000000 0 2012/06/06 01:28:57 [debug] 15621#0: *5 hc busy: 0000000000000000 0 2012/06/06 01:28:57 [debug] 15621#0: *5 tcp_nodelay 2012/06/06 01:28:57 [debug] 15621#0: *5 http keepalive handler 2012/06/06 01:28:57 [debug] 15621#0: *5 http keepalive handler 2012/06/06 01:28:57 [info] 15621#0: *5 client 10.112.117.117 closed keepalive connection 2012/06/06 01:28:57 [debug] 15621#0: *5 close http connection: 14 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227175,227182#msg-227182 From mdounin at mdounin.ru Tue Jun 5 10:47:51 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Jun 2012 14:47:51 +0400 Subject: Can't upload big files via nginx as reverse proxy In-Reply-To: <4cd2ded1222dfbb80b803d6cb9d3f801.NginxMailingListEnglish@forum.nginx.org> References: <20120605075416.GS31671@mdounin.ru> <4cd2ded1222dfbb80b803d6cb9d3f801.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120605104751.GT31671@mdounin.ru> Hello! On Tue, Jun 05, 2012 at 06:08:30AM -0400, speedfirst wrote: [...] > 2012/06/06 01:27:37 [debug] 15621#0: *5 http read client request body > 2012/06/06 01:27:37 [debug] 15621#0: *5 http client request body recv > 1528 > 2012/06/06 01:27:37 [debug] 15621#0: *5 http client request body rest 0 Ok, so the request body is read from a client without any problems. [...] > 2012/06/06 01:27:37 [debug] 15621#0: *5 zmauth: prepare route for proxy > ... ..<-- choose the upstream route > 2012/06/06 01:27:37 [debug] 15621#0: *5 zmauth: prepare upstream > connection, try: 1 Are you able to reproduce the problem without 3rd party modules/patches? (Unlikely it's related in this particular case, but just to make sure.) 
> 2012/06/06 01:27:37 [debug] 15621#0: *5 http upstream connect: -2 > 2012/06/06 01:27:37 [debug] 15621#0: *5 http upstream request: > "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" > 2012/06/06 01:27:37 [debug] 15621#0: *5 http upstream send request > handler > 2012/06/06 01:27:37 [debug] 15621#0: *5 http upstream send request > 2012/06/06 01:27:37 [debug] 15621#0: *5 http upstream request: > "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" > 2012/06/06 01:27:37 [debug] 15621#0: *5 http upstream send request > handler > 2012/06/06 01:27:37 [debug] 15621#0: *5 http upstream send request > 2012/06/06 01:27:40 [debug] 15621#0: *5 http upstream request: > "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" > 2012/06/06 01:27:40 [debug] 15621#0: *5 http upstream send request > handler > 2012/06/06 01:27:40 [debug] 15621#0: *5 http upstream send request > 2012/06/06 01:27:44 [debug] 15621#0: *5 http upstream request: > "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" > ... ... <--- tens of similar log entires It's sad you skipped them all, and only did debug_http log. With full debug log it would be clearly visible sending request goes on (i.e. how many bytes are sent). > 2012/06/06 01:28:19 [debug] 15621#0: *5 http upstream send request > handler > 2012/06/06 01:28:19 [debug] 15621#0: *5 http upstream send request > 2012/06/06 01:28:22 [debug] 15621#0: *5 http upstream request: > "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" > 2012/06/06 01:28:22 [debug] 15621#0: *5 http upstream send request > handler > 2012/06/06 01:28:22 [debug] 15621#0: *5 http upstream send request > 2012/06/06 01:28:22 [debug] 15621#0: *5 http upstream request: > "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" > 2012/06/06 01:28:22 [debug] 15621#0: *5 http upstream process header > 2012/06/06 01:28:22 [debug] 15621#0: *5 http proxy status 200 "200 OK" On the other hand, it looks like sending of the request is still in progress, and upstream server replies before the request was completely sent. It might indicate it just doesn't wait long enough, and the problem is in the backend (and slow connectivity to the backend). I don't see any pause in request sending you've claimed in your initial message. (and see below) > 2012/06/06 01:28:22 [debug] 15621#0: *5 http proxy header: "Date: Tue, > 05 Jun 2012 17:27:37 GMT" > 2012/06/06 01:28:22 [debug] 15621#0: *5 http proxy header: > "Content-Type: text/html; charset=utf-8" > 2012/06/06 01:28:22 [debug] 15621#0: *5 http proxy header: "Connection: > close" > 2012/06/06 01:28:22 [debug] 15621#0: *5 http proxy header done > 2012/06/06 01:28:22 [debug] 15621#0: *5 HTTP/1.1 200 OK > > Server: nginx > > Date: Tue, 05 Jun 2012 17:28:22 GMT > > Content-Type: text/html; charset=utf-8 > > Transfer-Encoding: chunked > > Connection: keep-alive > > > 2012/06/06 01:28:22 [debug] 15621#0: *5 http write filter: l:0 f:0 > s:163 > 2012/06/06 01:28:22 [debug] 15621#0: *5 http cacheable: 0 > 2012/06/06 01:28:22 [debug] 15621#0: *5 http upstream process upstream > 2012/06/06 01:28:23 [debug] 15621#0: *5 http upstream request: > "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" > 2012/06/06 01:28:23 [debug] 15621#0: *5 http upstream send request > handler > 2012/06/06 01:28:57 [debug] 15621#0: *5 http upstream request: > "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" On the other hand, here is ~ 30s pause you've probably talked about. 
It might indicate that upstream tries to send headers before "receiving and interpreting a request message" (as per HTTP RFC2616 it should do it "after"), which confuses nginx and makes it to think further body bytes aren't needed. You may want to dig further into what goes on on the backend to understand the real problem. Maxim Dounin From nginx-forum at nginx.us Tue Jun 5 11:26:49 2012 From: nginx-forum at nginx.us (speedfirst) Date: Tue, 5 Jun 2012 07:26:49 -0400 (EDT) Subject: Can't upload big files via nginx as reverse proxy In-Reply-To: <20120605104751.GT31671@mdounin.ru> References: <20120605104751.GT31671@mdounin.ru> Message-ID: <7e4b3f5fba2e91657ed3d6677632fc7c.NginxMailingListEnglish@forum.nginx.org> >On the other hand, it looks like sending of the request is still >in progress, and upstream server replies before the request was >completely sent. It might indicate it just doesn't wait long >enough, and the problem is in the backend (and slow connectivity >to the backend). >I don't see any pause in request sending you've claimed in your >initial message. >On the other hand, here is ~ 30s pause you've probably talked >about. It might indicate that upstream tries to send headers >before "receiving and interpreting a request message" (as per HTTP >RFC2616 it should do it "after"), which confuses nginx and makes >it to think further body bytes aren't needed. >understand the real problem. Yes, I agree and also notice where the real problem is. I just created a fake backend (which simply receives the uploaded data and writes into disk), nginx correctly pass all the data to it. Let me hack the backend code to see what's wrong. Will update if I found something new. Thanks for your inspired comments :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227175,227190#msg-227190 From nginx-forum at nginx.us Tue Jun 5 12:43:44 2012 From: nginx-forum at nginx.us (colorando) Date: Tue, 5 Jun 2012 08:43:44 -0400 (EDT) Subject: Setting up nginx as Visual Studio 2010 project Message-ID: <1979d5146e9ab6c19013841c788b3bb1.NginxMailingListEnglish@forum.nginx.org> Hi! I'd like to make a Visual Studio project from the nginx source and then build it. Has anyone already done this experience and can tell me how to start? Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227198,227198#msg-227198 From mdounin at mdounin.ru Tue Jun 5 14:30:50 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Jun 2012 18:30:50 +0400 Subject: nginx-1.3.1 Message-ID: <20120605143050.GW31671@mdounin.ru> Changes with nginx 1.3.1 05 Jun 2012 *) Security: now nginx/Windows ignores trailing dot in URI path component, and does not allow URIs with ":$" in it. Thanks to Vladimir Kochetkov, Positive Research Center. *) Feature: the "proxy_pass", "fastcgi_pass", "scgi_pass", "uwsgi_pass" directives, and the "server" directive inside the "upstream" block, now support IPv6 addresses. *) Feature: the "resolver" directive now support IPv6 addresses and an optional port specification. *) Feature: the "least_conn" directive inside the "upstream" block. *) Feature: it is now possible to specify a weight for servers while using the "ip_hash" directive. *) Bugfix: a segmentation fault might occur in a worker process if the "image_filter" directive was used; the bug had appeared in 1.3.0. *) Bugfix: nginx could not be built with ngx_cpp_test_module; the bug had appeared in 1.1.12. *) Bugfix: access to variables from SSI and embedded perl module might not work after reconfiguration. Thanks to Yichun Zhang. 
*) Bugfix: in the ngx_http_xslt_filter_module. Thanks to Kuramoto Eiji. *) Bugfix: memory leak if $geoip_org variable was used. Thanks to Denis F. Latypoff. *) Bugfix: in the "proxy_cookie_domain" and "proxy_cookie_path" directives. Maxim Dounin From mdounin at mdounin.ru Tue Jun 5 14:31:21 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Jun 2012 18:31:21 +0400 Subject: nginx-1.2.1 Message-ID: <20120605143121.GA31671@mdounin.ru> Changes with nginx 1.2.1 05 Jun 2012 *) Security: now nginx/Windows ignores trailing dot in URI path component, and does not allow URIs with ":$" in it. Thanks to Vladimir Kochetkov, Positive Research Center. *) Feature: the "debug_connection" directive now supports IPv6 addresses and the "unix:" parameter. *) Feature: the "set_real_ip_from" directive and the "proxy" parameter of the "geo" directive now support IPv6 addresses. *) Feature: the "real_ip_recursive", "geoip_proxy", and "geoip_proxy_recursive" directives. *) Feature: the "proxy_recursive" parameter of the "geo" directive. *) Bugfix: a segmentation fault might occur in a worker process if the "resolver" directive was used. *) Bugfix: a segmentation fault might occur in a worker process if the "fastcgi_pass", "scgi_pass", or "uwsgi_pass" directives were used and backend returned incorrect response. *) Bugfix: a segmentation fault might occur in a worker process if the "rewrite" directive was used and new request arguments in a replacement used variables. *) Bugfix: nginx might hog CPU if the open file resource limit was reached. *) Bugfix: nginx might loop infinitely over backends if the "proxy_next_upstream" directive with the "http_404" parameter was used and there were backup servers specified in an upstream block. *) Bugfix: adding the "down" parameter of the "server" directive might cause unneeded client redistribution among backend servers if the "ip_hash" directive was used. *) Bugfix: socket leak. Thanks to Yichun Zhang. *) Bugfix: in the ngx_http_fastcgi_module. Maxim Dounin From mdounin at mdounin.ru Tue Jun 5 14:31:59 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Jun 2012 18:31:59 +0400 Subject: security advisory Message-ID: <20120605143159.GE31671@mdounin.ru> Hello! Vladimir Kochetkov, Positive Research Center, discovered a security problem in nginx/Windows, which might allow security restrictions bypass (CVE-2011-4963). There are many ways to access the same file when working under Windows, and nginx failed to account for all of them. As a result, it was possible to bypass security restrictions like location /directory/ { deny all; } by requesting a file as "/directory::$index_allocation/file", or "/directory:$i30:$index_allocation/file", or "/directory./file". The problem is fixed in nginx/Windows 1.3.1, 1.2.1. For older versions the following configuration can be used as a workaround: location ~ "(\./|:\$)" { deny all; } Maxim Dounin From reallfqq-nginx at yahoo.fr Tue Jun 5 15:37:43 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 5 Jun 2012 11:37:43 -0400 Subject: Upgrade From Fedora 15 to 17: nginx Doesn't Work In-Reply-To: References: <20120605011606.192860@gmx.net> Message-ID: Hello, > @BR: Thanks for not being as bad as steve. 'As bad'? I was really trying to help. To my opinion, it's always better to have the least outdated version, so 1.0.15 is way better than 1.0.5. If I appeared arrogant to you, well, nvm... > Calling people names is certainly *not* a good way to get people to help you. I totally agree. I won't lose anymore time on your case. 
--- *B. R.* On Mon, Jun 4, 2012 at 10:08 PM, Jim Ohlstein wrote: > On Jun 4, 2012 9:45 PM, "ptiseo" wrote: > > > > @steve: nginx seems to attract the hostile and malpadapted? I've seen > > more arrogant a**es in this forum than most other places. You guys need > > to relax; it's just software. No need for you to go nuts like that. It > > just shows badly on you. > > Hardly the case. This is a pretty well mannered mailing list compared to > some to which I subscribe. > > But, to be constructive, please do not top post. It's very confusing when > trying to follow a threaded discussion. > > So, please answer the question asked -do you have an entry in your > nginx.conf for "worker_rlimit_nofile"? > > Posting your full nginx.conf might help. > > > > > The reason I restore from backup is because I needed that proxy online > > for development. And, I have used Linux for a while that can be counted > > in more than months. Do you know what they say about "assume"? > > > > I did Google. I saw that worked for some and not for others. I tried it, > > it didn't work for me. My file-max setting was already some 200K. > > To which settings(s) are you referring? > > > > > So, let me ask this, why would I need to increase open file limit > > anyways? This is a low traffic proxy. > > Maybe an issue with how you've configured *your* system which has nothing > to do with nginx? Not to be one of those hostile, maladaped, arrogant > people to whom you referred, but this isn't a Fedora mailing list. Perhaps > you can find help there in determining what process(es) is/are using all of > those file descriptors. Maybe one of them will hold your hand and not hurt > your feelings in the process. Calling people names is certainly *not* a > good way to get people to help you. > > > > > @BR: Thanks for not being as bad as steve. I did notice that Fedora does > > not have an up-to-date package. For now, I will stay with the backup and > > spin up another virtual machine to see if I can test further. > > > > If anyone has any other ideas than the first 20 Google hits, I'd love to > > hear of them. Thx. > > > > Jim Ohlstein > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From manens at grossac.org Tue Jun 5 16:04:02 2012 From: manens at grossac.org (Florent Manens) Date: Tue, 5 Jun 2012 18:04:02 +0200 (CEST) Subject: Client Certificate verification for mail In-Reply-To: <471353081.15980.1338911623663.JavaMail.root@grossac.org> Message-ID: <60240686.15988.1338912242553.JavaMail.root@grossac.org> Hi NGINX team, I can read here : http://mailman.nginx.org/pipermail/nginx/2007-March/000825.html and in this thread : http://mailman.nginx.org/pipermail/nginx-ru/2009-July/026304.html that the client certificate verification is not supported by NGINX (and that there is no RFE for it). We want to implement client certificate verification for IMAP and POP connection and we plan to rely on NGINX for scalability. I think that it is possible to implement client certificate verification in NGINX but I still need to know : * if it is a trivial task * if I can do it only with addons * why it isn't already in NGINX core ? I will apreciate if someone can give me directions on that subject. Best regards, Florent -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Tue Jun 5 16:12:48 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Jun 2012 20:12:48 +0400 Subject: Client Certificate verification for mail In-Reply-To: <60240686.15988.1338912242553.JavaMail.root@grossac.org> References: <471353081.15980.1338911623663.JavaMail.root@grossac.org> <60240686.15988.1338912242553.JavaMail.root@grossac.org> Message-ID: <20120605161248.GK31671@mdounin.ru> Hello! On Tue, Jun 05, 2012 at 06:04:02PM +0200, Florent Manens wrote: > > > Hi NGINX team, > > > I can read here : > http://mailman.nginx.org/pipermail/nginx/2007-March/000825.html > > > and in this thread : > http://mailman.nginx.org/pipermail/nginx-ru/2009-July/026304.html > > > that the client certificate verification is not supported by NGINX (and that there is no RFE for it). > > > We want to implement client certificate verification for IMAP and POP connection and we plan to rely on NGINX for scalability. > > > I think that it is possible to implement client certificate verification in NGINX but I still need to know : > * if it is a trivial task More or less. > * if I can do it only with addons No. > * why it isn't already in NGINX core ? The second link (or, rather, Igor's reply to it) explains the reason. It's more or less useless for large scale installations where nginx mail proxy is generally used. Maxim Dounin From tdgh2323 at hotmail.com Tue Jun 5 17:01:34 2012 From: tdgh2323 at hotmail.com (Joseph Cabezas) Date: Tue, 5 Jun 2012 17:01:34 +0000 Subject: client_max_body_size for a location {} ? Possible? Message-ID: Is it possible to specify a client_max_body_size and client_body_buffer_size specifically for a location? If so how? I need to allow higher buffers for a section that hosts an application. Thanks Joseph -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdgh2323 at hotmail.com Tue Jun 5 17:26:09 2012 From: tdgh2323 at hotmail.com (Joseph Cabezas) Date: Tue, 5 Jun 2012 17:26:09 +0000 Subject: Graph nginx by error codes and requests per second? Cacti? or some other? Message-ID: Does anybody have a monitoring system in place by nginx error code... 500, 200, 404, 444.... and did you do this with cacti or php4nagios? Regards, Joseph -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdgh2323 at hotmail.com Tue Jun 5 17:31:25 2012 From: tdgh2323 at hotmail.com (Joseph Cabezas) Date: Tue, 5 Jun 2012 17:31:25 +0000 Subject: client_max_body_size for a location {} ? Possible? In-Reply-To: References: Message-ID: Answering myself partially location /wordpress/wp-admin { client_max_body_size 1m; } <-- does that apply for evert sub directory such as /wordpress/wp-admin/dir1 /wordpress/wp-admin/dir2/app.php etc Joseph From: tdgh2323 at hotmail.com To: nginx at nginx.org Subject: client_max_body_size for a location {} ? Possible? Date: Tue, 5 Jun 2012 17:01:34 +0000 Is it possible to specify a client_max_body_size and client_body_buffer_size specifically for a location? If so how? I need to allow higher buffers for a section that hosts an application. Thanks Joseph _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jim at ohlste.in Tue Jun 5 17:57:17 2012 From: jim at ohlste.in (Jim Ohlstein) Date: Tue, 05 Jun 2012 13:57:17 -0400 Subject: Netflix Open Connect Appliance Software Message-ID: <4FCE487D.2090903@ohlste.in> This is a cross posting from freebsd-stable. I thought it worth giving Igor et al a shout out: >From https://signup.netflix.com/openconnect/software : Open Source Software Open Connect Appliance Software Netflix delivers streaming content using a combination of intelligent clients, a central control system, and a network of Open Connect appliances. When designing the Open Connect Appliance Software, we focused on these fundamental design goals: Use of Open Source software Ability to efficiently read from disk and write to network sockets High-performance HTTP delivery Ability to gather routing information via BGP Operating System For the operating system, we use FreeBSD version 9.0. This was selected for its balance of stability and features, a strong development community and staff expertise. We will contribute changes we make as part of our project to the community through the FreeBSD committers on our team. Web server We use the nginx web server for its proven scalability and performance. Netflix audio and video is served via HTTP. Routing intelligence proxy We use the BIRD Internet routing daemon to enable the transfer of network topology from ISP networks to the Netflix control system that directs clients to sources of content. Acknowledgements We would would like to express our thanks to the FreeBSD community, the nginx community, and Ondrej and the BIRD team for providing excellent open source software. We also work directly with Igor, Maxim, Andrew, Sergey, Ruslan and the rest of the team at nginx.com, who provide superb development support for our project. Questions Contact the Open Connect team at openconnectappliance at netflix.com. If you are interested in joining the Content Delivery or another team at Netflix, apply at www.netflix.com/jobs -- Jim Ohlstein From kworthington at gmail.com Tue Jun 5 18:14:57 2012 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 5 Jun 2012 14:14:57 -0400 Subject: nginx-1.3.1 In-Reply-To: <20120605143050.GW31671@mdounin.ru> References: <20120605143050.GW31671@mdounin.ru> Message-ID: Hello Nginx Users, Now available: Nginx 1.3.1 For Windows http://goo.gl/Xvccu (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Jun 5, 2012 at 10:30 AM, Maxim Dounin wrote: > Changes with nginx 1.3.1 ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? 05 Jun 2012 > > ? ?*) Security: now nginx/Windows ignores trailing dot in URI path > ? ? ? component, and does not allow URIs with ":$" in it. > ? ? ? Thanks to Vladimir Kochetkov, Positive Research Center. > > ? ?*) Feature: the "proxy_pass", "fastcgi_pass", "scgi_pass", "uwsgi_pass" > ? ? ? directives, and the "server" directive inside the "upstream" block, > ? ? ? now support IPv6 addresses. > > ? ?*) Feature: the "resolver" directive now support IPv6 addresses and an > ? ? ? optional port specification. > > ? ?*) Feature: the "least_conn" directive inside the "upstream" block. > > ? ?*) Feature: it is now possible to specify a weight for servers while > ? ? ? using the "ip_hash" directive. > > ? 
?*) Bugfix: a segmentation fault might occur in a worker process if the > ? ? ? "image_filter" directive was used; the bug had appeared in 1.3.0. > > ? ?*) Bugfix: nginx could not be built with ngx_cpp_test_module; the bug > ? ? ? had appeared in 1.1.12. > > ? ?*) Bugfix: access to variables from SSI and embedded perl module might > ? ? ? not work after reconfiguration. > ? ? ? Thanks to Yichun Zhang. > > ? ?*) Bugfix: in the ngx_http_xslt_filter_module. > ? ? ? Thanks to Kuramoto Eiji. > > ? ?*) Bugfix: memory leak if $geoip_org variable was used. > ? ? ? Thanks to Denis F. Latypoff. > > ? ?*) Bugfix: in the "proxy_cookie_domain" and "proxy_cookie_path" > ? ? ? directives. > > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From matthieu.tourne at gmail.com Tue Jun 5 18:26:51 2012 From: matthieu.tourne at gmail.com (Matthieu Tourne) Date: Tue, 5 Jun 2012 11:26:51 -0700 Subject: Graph nginx by error codes and requests per second? Cacti? or some other? In-Reply-To: References: Message-ID: On Tue, Jun 5, 2012 at 10:26 AM, Joseph Cabezas wrote: > Does anybody have a monitoring system in place by nginx error code... 500, > 200, 404, 444.... and did you do this with cacti or php4nagios? > Hi, You can take a look at the nginx-lua module (on the logby branch) : https://github.com/chaoslawful/lua-nginx-module/tree/logby There is an example in the README : https://github.com/chaoslawful/lua-nginx-module/blob/logby/README Look for log_by_lua, and log_by_lua_file. You can use it to aggregate values, and use another location to report aggregated data (using content_by_lua) and feed it in your own system. We use OpenTSDB (http://opentsdb.net/) to keep aggregating data in time series. Hope that helps, Matthieu. From kworthington at gmail.com Tue Jun 5 18:34:24 2012 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 5 Jun 2012 14:34:24 -0400 Subject: nginx-1.2.1 In-Reply-To: <20120605143121.GA31671@mdounin.ru> References: <20120605143121.GA31671@mdounin.ru> Message-ID: Hello Nginx Users, Now available: Nginx 1.2.1 For Windows http://goo.gl/QlrVs (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Jun 5, 2012 at 10:31 AM, Maxim Dounin wrote: > Changes with nginx 1.2.1 ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? 05 Jun 2012 > > ? ?*) Security: now nginx/Windows ignores trailing dot in URI path > ? ? ? component, and does not allow URIs with ":$" in it. > ? ? ? Thanks to Vladimir Kochetkov, Positive Research Center. > > ? ?*) Feature: the "debug_connection" directive now supports IPv6 addresses > ? ? ? and the "unix:" parameter. > > ? ?*) Feature: the "set_real_ip_from" directive and the "proxy" parameter > ? ? ? of the "geo" directive now support IPv6 addresses. > > ? ?*) Feature: the "real_ip_recursive", "geoip_proxy", and > ? ? ? "geoip_proxy_recursive" directives. > > ? ?*) Feature: the "proxy_recursive" parameter of the "geo" directive. > > ? ?*) Bugfix: a segmentation fault might occur in a worker process if the > ? ? ? "resolver" directive was used. > > ? ?*) Bugfix: a segmentation fault might occur in a worker process if the > ? ? ? "fastcgi_pass", "scgi_pass", or "uwsgi_pass" directives were used and > ? ? ? 
backend returned incorrect response. > > ? ?*) Bugfix: a segmentation fault might occur in a worker process if the > ? ? ? "rewrite" directive was used and new request arguments in a > ? ? ? replacement used variables. > > ? ?*) Bugfix: nginx might hog CPU if the open file resource limit was > ? ? ? reached. > > ? ?*) Bugfix: nginx might loop infinitely over backends if the > ? ? ? "proxy_next_upstream" directive with the "http_404" parameter was > ? ? ? used and there were backup servers specified in an upstream block. > > ? ?*) Bugfix: adding the "down" parameter of the "server" directive might > ? ? ? cause unneeded client redistribution among backend servers if the > ? ? ? "ip_hash" directive was used. > > ? ?*) Bugfix: socket leak. > ? ? ? Thanks to Yichun Zhang. > > ? ?*) Bugfix: in the ngx_http_fastcgi_module. > > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Tue Jun 5 18:40:55 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Jun 2012 22:40:55 +0400 Subject: client_max_body_size for a location {} ? Possible? In-Reply-To: References: Message-ID: <20120605184055.GP31671@mdounin.ru> Hello! On Tue, Jun 05, 2012 at 05:31:25PM +0000, Joseph Cabezas wrote: > > Answering myself partially > > location /wordpress/wp-admin { client_max_body_size 1m; } <-- does that apply for evert sub directory such as /wordpress/wp-admin/dir1 /wordpress/wp-admin/dir2/app.php etc See here for details explanation on how locations work: http://nginx.org/r/location And here for an answer to your original question: http://nginx.org/r/client_max_body_size http://nginx.org/r/client_body_buffer_size The "context" in the directive description gives a list of places where the directive is allowed. Maxim Dounin From tdgh2323 at hotmail.com Tue Jun 5 19:32:36 2012 From: tdgh2323 at hotmail.com (Joseph Cabezas) Date: Tue, 5 Jun 2012 19:32:36 +0000 Subject: client_max_body_size for a location {} ? Possible? In-Reply-To: <20120605184055.GP31671@mdounin.ru> References: , , <20120605184055.GP31671@mdounin.ru> Message-ID: Hello Matt, Thank you very much. Is there anyway to match this location ONLY for a specific IP? location /wordpress { client_body_buffer_size 1M; client_max_body_size 2M; proxy_pass http://backend1; } In other words I only one IP 1.1.1.1 to be able to use those special buffers for that URL.. everybody else should use the buffers stated at the httpd section. Regards, Joseph > Date: Tue, 5 Jun 2012 22:40:55 +0400 > From: mdounin at mdounin.ru > To: nginx at nginx.org > Subject: Re: client_max_body_size for a location {} ? Possible? > > Hello! > > On Tue, Jun 05, 2012 at 05:31:25PM +0000, Joseph Cabezas wrote: > > > > > Answering myself partially > > > > location /wordpress/wp-admin { client_max_body_size 1m; } <-- does that apply for evert sub directory such as /wordpress/wp-admin/dir1 /wordpress/wp-admin/dir2/app.php etc > > See here for details explanation on how locations work: > > http://nginx.org/r/location > > And here for an answer to your original question: > > http://nginx.org/r/client_max_body_size > http://nginx.org/r/client_body_buffer_size > > The "context" in the directive description gives a list of places > where the directive is allowed. 
> > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdgh2323 at hotmail.com Tue Jun 5 20:30:26 2012 From: tdgh2323 at hotmail.com (Joseph Cabezas) Date: Tue, 5 Jun 2012 20:30:26 +0000 Subject: Anyway to limit amount of requests to no more then 20 requests per hour for each file? Message-ID: Hello, Iam trying to avoid a seperate location {} for each file, instead just limit the amount of reqeusts per second based on each object. Iam sure there must be a way. Thanks Joseph -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdgh2323 at hotmail.com Tue Jun 5 20:51:46 2012 From: tdgh2323 at hotmail.com (Joseph Cabezas) Date: Tue, 5 Jun 2012 20:51:46 +0000 Subject: Graph nginx by error codes and requests per second? Cacti? or some other? In-Reply-To: References: , Message-ID: matthieu, Thanks! If you are using it to graph nginx by error code.. would you have any screenshot example without any sensitive data that you can share of the final outcome of this? Iam no way familiar with the things you quoted. Anybody know an alternative? Regards, Joseph > From: matthieu.tourne at gmail.com > Date: Tue, 5 Jun 2012 11:26:51 -0700 > Subject: Re: Graph nginx by error codes and requests per second? Cacti? or some other? > To: nginx at nginx.org > CC: tsunanet at gmail.com > > On Tue, Jun 5, 2012 at 10:26 AM, Joseph Cabezas wrote: > > Does anybody have a monitoring system in place by nginx error code... 500, > > 200, 404, 444.... and did you do this with cacti or php4nagios? > > > > Hi, > > You can take a look at the nginx-lua module (on the logby branch) : > https://github.com/chaoslawful/lua-nginx-module/tree/logby > > There is an example in the README : > https://github.com/chaoslawful/lua-nginx-module/blob/logby/README > Look for log_by_lua, and log_by_lua_file. > > You can use it to aggregate values, and use another location to report > aggregated data (using content_by_lua) and feed it in your own system. > > We use OpenTSDB (http://opentsdb.net/) to keep aggregating data in time series. > > Hope that helps, > > Matthieu. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From ne at vbart.ru Tue Jun 5 20:54:48 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 6 Jun 2012 00:54:48 +0400 Subject: Anyway to limit amount of requests to no more then 20 requests per hour for each file? In-Reply-To: References: Message-ID: <201206060054.48407.ne@vbart.ru> On Wednesday 06 June 2012 00:30:26 Joseph Cabezas wrote: > Hello, > > Iam trying to avoid a seperate location {} for each file, instead just > limit the amount of reqeusts per second based on each object. > If you're happy with 60 requests per hour, then you can take advantage of the http_limit_req module. http://nginx.org/en/docs/http/ngx_http_limit_req_module.html limit_req_zone $uri zone=one:32m rate=1r/m; wbr, Valentin V. Bartenev From tdgh2323 at hotmail.com Tue Jun 5 22:18:46 2012 From: tdgh2323 at hotmail.com (Joseph Cabezas) Date: Tue, 5 Jun 2012 22:18:46 +0000 Subject: Anyway to limit amount of requests to no more then 20 requests per hour for each file? 
In-Reply-To: <201206060054.48407.ne@vbart.ru> References: , <201206060054.48407.ne@vbart.ru> Message-ID: Great.... So all I need is that single line in the httpd section, or server section? limit_req_zone $uri zone=one:32m rate=1r/m; Iam thinking I need some other line to actually apply the limit such as this? location / { limit_req zone=one; proxy_pass http://backend1; } Or something like this? Joseph > From: ne at vbart.ru > To: nginx at nginx.org > Subject: Re: Anyway to limit amount of requests to no more then 20 requests per hour for each file? > Date: Wed, 6 Jun 2012 00:54:48 +0400 > > On Wednesday 06 June 2012 00:30:26 Joseph Cabezas wrote: > > Hello, > > > > Iam trying to avoid a seperate location {} for each file, instead just > > limit the amount of reqeusts per second based on each object. > > > > If you're happy with 60 requests per hour, then you can take advantage of > the http_limit_req module. > > http://nginx.org/en/docs/http/ngx_http_limit_req_module.html > > limit_req_zone $uri zone=one:32m rate=1r/m; > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From ne at vbart.ru Tue Jun 5 22:24:41 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 6 Jun 2012 02:24:41 +0400 Subject: Anyway to limit amount of requests to no more then 20 requests per hour for each file? In-Reply-To: References: <201206060054.48407.ne@vbart.ru> Message-ID: <201206060224.41691.ne@vbart.ru> On Wednesday 06 June 2012 02:18:46 Joseph Cabezas wrote: [...] > Iam thinking I need some other line to actually apply the limit such as > this? > > location / { limit_req zone=one; proxy_pass http://backend1; } > > > Or something like this? Yes, that's right. You also need limit_req directive in the place where do you want to apply the limit. wbr, Valentin V. Bartenev From tdgh2323 at hotmail.com Tue Jun 5 22:33:09 2012 From: tdgh2323 at hotmail.com (Joseph Cabezas) Date: Tue, 5 Jun 2012 22:33:09 +0000 Subject: How to limit_req depending if the requests has a REFERER or not. Message-ID: I have something like this... I need to be able to apply three different limit_req depending: a.) If the referer to click.php is domain.com ... apply zone1 b.) If there is some other referer apply zone2 on click.php c.) If there is no referer apply zone3 on click.php location /click.php { limit_req zone=one; proxy_pass http://backend; } Thanks guys -------------- next part -------------- An HTML attachment was scrubbed... URL: From ne at vbart.ru Tue Jun 5 23:31:03 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 6 Jun 2012 03:31:03 +0400 Subject: How to limit_req depending if the requests has a REFERER or not. In-Reply-To: References: Message-ID: <201206060331.03636.ne@vbart.ru> On Wednesday 06 June 2012 02:33:09 Joseph Cabezas wrote: > I have something like this... I need to be able to apply three different > limit_req depending: > > a.) If the referer to click.php is domain.com ... apply zone1 > b.) If there is some other referer apply zone2 on click.php > c.) If there is no referer apply zone3 on click.php > > > location /click.php { limit_req zone=one; proxy_pass http://backend; } > Probably, something like this will work: http { map $http_referer $is_referer { default ''; ~^. 
1; } map $is_referer $no_referer { default 1; 1 ''; } map $invalid_referer $zone1 { 0 1; 1 ''; } map $invalid_referer $zone2 { 0 ''; 1 $is_referer; } map $invalid_referer $zone3 { 0 ''; 1 $no_referer; } limit_req_zone $zone1 zone=zone1:128k rate=50r/s; limit_req_zone $zone2 zone=zone2:128k rate=10r/s; limit_req_zone $zone3 zone=zone3:128k rate=3r/s; server { valid_referers domain.com; location /click.php { limit_req zone=zone3 burst=12; limit_req zone=zone2 burst=10 nodelay; limit_req zone=zone1 burst=100 nodelay; ... wbr, Valentin V. Bartenev From nginx-forum at nginx.us Wed Jun 6 02:51:51 2012 From: nginx-forum at nginx.us (jessusniww) Date: Tue, 5 Jun 2012 22:51:51 -0400 (EDT) Subject: how to deploy multiple application in nginx? Message-ID: <43428cf050601c546d4749d6ca60cd8c.NginxMailingListEnglish@forum.nginx.org> hi: now,i want to use nginx to run php,but now ,i only run one php application in nginx.For example,my web root is D:\PHPWeb\,in this directory i have two php applications 'example1' and 'example2' respectively.In each application there exists a .htaccess file,i don't know how do i to run this two application? thank you Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227251,227251#msg-227251 From agentzh at gmail.com Wed Jun 6 03:23:10 2012 From: agentzh at gmail.com (agentzh) Date: Wed, 6 Jun 2012 11:23:10 +0800 Subject: Graph nginx by error codes and requests per second? Cacti? or some other? In-Reply-To: References: Message-ID: Hello! On Wed, Jun 6, 2012 at 2:26 AM, Matthieu Tourne > > You can take a look at the nginx-lua module (on the logby branch) : > https://github.com/chaoslawful/lua-nginx-module/tree/logby > BTW, The "logby" branch is going to be merged into "master" in the next few days or so (after more aggressive testing) :) Special thanks go to Matthieu Tourne for the work on this branch :) Best regards, -agentzh From omega13a at fedtrek.com Wed Jun 6 03:26:34 2012 From: omega13a at fedtrek.com (Brandon Amaro) Date: Tue, 05 Jun 2012 20:26:34 -0700 Subject: nginx 1.3.1 not PHP-FPM friendly In-Reply-To: <20120605143050.GW31671@mdounin.ru> References: <20120605143050.GW31671@mdounin.ru> Message-ID: <4FCECDEA.8080607@fedtrek.com> After I upgraded to nginx 1.3.1, I've been having a problem with PHP-FPM. It would work fine for a few minutes then I would start getting 500 Internal Server Errors on all the pages that use PHP. I have to keep restaring the PHP-FPM service in order to navigate my website. Everything was running smoothly before upgrading nginx and I've haven't made any recent changes in the config files for both nginx and anything PHP related. I'm running Fedora 14 (can't upgrade to anything more recent) and compiled nginx myself as I've always done in the past. Any help would be greatly appreciated. -- omega13a Owner and Founder of UFT http://www.fedtrek.com From reallfqq-nginx at yahoo.fr Wed Jun 6 04:28:01 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 6 Jun 2012 00:28:01 -0400 Subject: nginx 1.3.1 not PHP-FPM friendly In-Reply-To: <4FCECDEA.8080607@fedtrek.com> References: <20120605143050.GW31671@mdounin.ru> <4FCECDEA.8080607@fedtrek.com> Message-ID: Wild guess: maybe some file openings limit reached? Most important thing to start: what does the error log says? --- *B. R.* On Tue, Jun 5, 2012 at 11:26 PM, Brandon Amaro wrote: > After I upgraded to nginx 1.3.1, I've been having a problem with PHP-FPM. > It would work fine for a few minutes then I would start getting 500 > Internal Server Errors on all the pages that use PHP. 
I have to keep > restaring the PHP-FPM service in order to navigate my website. Everything > was running smoothly before upgrading nginx and I've haven't made any > recent changes in the config files for both nginx and anything PHP related. > I'm running Fedora 14 (can't upgrade to anything more recent) and compiled > nginx myself as I've always done in the past. Any help would be greatly > appreciated. > > -- > omega13a > Owner and Founder of UFT > http://www.fedtrek.com > > ______________________________**_________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/**mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jun 6 04:30:05 2012 From: nginx-forum at nginx.us (chenmin7249) Date: Wed, 6 Jun 2012 00:30:05 -0400 (EDT) Subject: nginx1.0.15 stable, cut off a 16k js static file into 2.03k Message-ID: <100f32aa2c3af3dde56eb57106145bea.NginxMailingListEnglish@forum.nginx.org> nginx version 1.0.15 stable im running into a problem the a 16k static file was cut off into a 2.03k incomplete file occasionally, 7 times of 10 will get the incomplete file. i use the nginx server for pure static file serving sendfile on, tcp_nopush on, tcp_nodelay on, keepalive timeout 0, gzip off open_file_cache on using epoll no other configurations, what could be the problem? client body size? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227254,227254#msg-227254 From omega13a at fedtrek.com Wed Jun 6 04:37:49 2012 From: omega13a at fedtrek.com (Brandon Amaro) Date: Tue, 05 Jun 2012 21:37:49 -0700 Subject: nginx 1.3.1 not PHP-FPM friendly In-Reply-To: References: <20120605143050.GW31671@mdounin.ru> <4FCECDEA.8080607@fedtrek.com> Message-ID: <4FCEDE9D.8080107@fedtrek.com> I'm assuming this is the error as that the error long is filled with similar messages since around the time I upgraded: 2012/06/05 21:35:23 [error] 5976#0: *43558 connect() to unix:/var/lib/php5-fpm/web1.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 72.253.115.223, server: fedtrek.com, request: "POST /Forums-file-ajax_online_update-mypage-minus200.html HTTP/1.1", upstream: "fastcgi://unix:/var/lib/php5-fpm/web1.sock:", host: "www.fedtrek.com", referrer: "http://www.fedtrek.com/Borg_Species_Designations.html" On 06/05/2012 09:28 PM, B.R. wrote: > Wild guess: maybe some file openings limit reached? > > Most important thing to start: what does the error log says? > --- > *B. R.* > > > On Tue, Jun 5, 2012 at 11:26 PM, Brandon Amaro > wrote: > > After I upgraded to nginx 1.3.1, I've been having a problem with > PHP-FPM. It would work fine for a few minutes then I would start > getting 500 Internal Server Errors on all the pages that use PHP. > I have to keep restaring the PHP-FPM service in order to navigate > my website. Everything was running smoothly before upgrading nginx > and I've haven't made any recent changes in the config files for > both nginx and anything PHP related. I'm running Fedora 14 (can't > upgrade to anything more recent) and compiled nginx myself as I've > always done in the past. Any help would be greatly appreciated. 
> > -- > omega13a > Owner and Founder of UFT > http://www.fedtrek.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- omega13a Owner and Founder of UFT http://www.fedtrek.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Wed Jun 6 04:44:16 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 6 Jun 2012 00:44:16 -0400 Subject: nginx1.0.15 stable, cut off a 16k js static file into 2.03k In-Reply-To: <100f32aa2c3af3dde56eb57106145bea.NginxMailingListEnglish@forum.nginx.org> References: <100f32aa2c3af3dde56eb57106145bea.NginxMailingListEnglish@forum.nginx.org> Message-ID: A few suggestions: 1) tcp_nodelay on; is useless, as the default value is 'on' already 2) Is tcp_nopush really a necessary option? Forcing things on packets is non-natural (http://wiki.nginx.org/HttpCoreModule#tcp_nopush). Maybe the problems come from that 3) I'd suggest testing with the latest stable (1.2.1 since today). Maybe it won't solve the problem, but the latest binary is always a good thing to have Wild guesses, I am no expert :o\ --- *B. R.* On Wed, Jun 6, 2012 at 12:30 AM, chenmin7249 wrote: > nginx version 1.0.15 stable > im running into a problem the a 16k static file was cut off into a 2.03k > incomplete file occasionally, 7 times of 10 will get the incomplete > file. > i use the nginx server for pure static file serving > sendfile on, > tcp_nopush on, > tcp_nodelay on, > keepalive timeout 0, > gzip off > open_file_cache on > using epoll > no other configurations, what could be the problem? client body size? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,227254,227254#msg-227254 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jun 6 04:55:23 2012 From: nginx-forum at nginx.us (chenmin7249) Date: Wed, 6 Jun 2012 00:55:23 -0400 (EDT) Subject: nginx1.0.15 stable, cut off a 16k js static file into 2.03k In-Reply-To: <100f32aa2c3af3dde56eb57106145bea.NginxMailingListEnglish@forum.nginx.org> References: <100f32aa2c3af3dde56eb57106145bea.NginxMailingListEnglish@forum.nginx.org> Message-ID: thx B.R, i will try to put nopush off to verify the problem Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227254,227257#msg-227257 From nginx-forum at nginx.us Wed Jun 6 05:01:27 2012 From: nginx-forum at nginx.us (chenmin7249) Date: Wed, 6 Jun 2012 01:01:27 -0400 (EDT) Subject: nginx1.0.15 stable, cut off a 16k js static file into 2.03k In-Reply-To: References: <100f32aa2c3af3dde56eb57106145bea.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7f79640676de36a3dceaf32e6fb5496c.NginxMailingListEnglish@forum.nginx.org> Problem solved: a new file (16k) replaced the old one (2.03k), and the cache refreshed the file content but not the cached file size, so as long as the cache entry has not expired a client may get the new content served with the old size, which causes the file to be truncated.
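In case it is useful to others hitting the same thing: the cached entry (including the cached size) is only re-checked after open_file_cache_valid expires, so either dropping the cache or shortening that interval should close the stale-size window. A rough sketch, with the numbers only as examples:

    # either disable the cache entirely:
    open_file_cache off;

    # or keep it, but revalidate entries (and their cached size) more often:
    open_file_cache       max=10000 inactive=30s;
    open_file_cache_valid 5s;

The second variant keeps most of the benefit of the cache while limiting how long a replaced file can be served with its old metadata.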
thx B.R, again Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227254,227258#msg-227258 From zhuzhaoyuan at gmail.com Wed Jun 6 05:04:55 2012 From: zhuzhaoyuan at gmail.com (Joshua Zhu) Date: Wed, 6 Jun 2012 13:04:55 +0800 Subject: nginx1.0.15 stable, cut off a 16k js static file into 2.03k In-Reply-To: <100f32aa2c3af3dde56eb57106145bea.NginxMailingListEnglish@forum.nginx.org> References: <100f32aa2c3af3dde56eb57106145bea.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, On Wed, Jun 6, 2012 at 12:30 PM, chenmin7249 wrote: > nginx version 1.0.15 stable > im running into a problem the a 16k static file was cut off into a 2.03k > incomplete file occasionally, 7 times of 10 will get the incomplete > file. > i use the nginx server for pure static file serving > sendfile on, > tcp_nopush on, > tcp_nodelay on, > keepalive timeout 0, > gzip off > open_file_cache on > using epoll > no other configurations, what could be the problem? client body size? > You could try turning off the 'open_file_cache' directive. The file size info might not be updated if the file is in file cache then being changed. Regards, -- Joshua Zhu Senior Software Engineer Server Platforms Team at Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From ne at vbart.ru Wed Jun 6 07:14:25 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 6 Jun 2012 11:14:25 +0400 Subject: How to limit_req depending if the requests has a REFERER or not. In-Reply-To: <201206060331.03636.ne@vbart.ru> References: <201206060331.03636.ne@vbart.ru> Message-ID: <201206061114.25708.ne@vbart.ru> On Wednesday 06 June 2012 03:31:03 Valentin V. Bartenev wrote: > On Wednesday 06 June 2012 02:33:09 Joseph Cabezas wrote: > > I have something like this... I need to be able to apply three different > > limit_req depending: > > > > a.) If the referer to click.php is domain.com ... apply zone1 > > b.) If there is some other referer apply zone2 on click.php > > c.) If there is no referer apply zone3 on click.php > > > > location /click.php { limit_req zone=one; proxy_pass http://backend; } [...] > > map $invalid_referer $zone1 { > 0 1; > 1 ''; > } > > map $invalid_referer $zone2 { > 0 ''; > 1 $is_referer; > } > > map $invalid_referer $zone3 { > 0 ''; > 1 $no_referer; > } [...] Oops, Ruslan Ermilov pointed out to me that the "false" value of $invalid_referer is an empty string, not "0". http://nginx.org/r/valid_referers Well, then the correct config example is as follows: http { map $http_referer $zone3 { default 1; ~^. ''; } map $zone3 $zone2 { default $invalid_referer; 1 ''; } map $invalid_referer $zone1 { default 1; 1 ''; } limit_req_zone $zone1 zone=zone1:128k rate=50r/s; limit_req_zone $zone2 zone=zone2:128k rate=10r/s; limit_req_zone $zone3 zone=zone3:128k rate=3r/s; server { valid_referers domain.com; location /click.php { limit_req zone=zone3 burst=12; limit_req zone=zone2 burst=10 nodelay; limit_req zone=zone1 burst=100 nodelay; ... wbr, Valentin V. Bartenev From nginx-forum at nginx.us Wed Jun 6 08:46:23 2012 From: nginx-forum at nginx.us (jessusniww) Date: Wed, 6 Jun 2012 04:46:23 -0400 (EDT) Subject: how to deploy multiple application in nginx? In-Reply-To: <43428cf050601c546d4749d6ca60cd8c.NginxMailingListEnglish@forum.nginx.org> References: <43428cf050601c546d4749d6ca60cd8c.NginxMailingListEnglish@forum.nginx.org> Message-ID: This problem had been solved by Nginx rewrite. 
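A minimal sketch of what such a two-application setup can look like (illustrative only: the exact rewrite rules depend on what each application's .htaccess contained, and the 127.0.0.1:9000 FastCGI backend is just an assumption):

    server {
        listen      80;
        root        D:/PHPWeb;
        index       index.php;

        # one location per application; the .htaccess rules are
        # translated into try_files / rewrite here
        location /example1/ {
            try_files $uri $uri/ /example1/index.php?$args;
        }

        location /example2/ {
            try_files $uri $uri/ /example2/index.php?$args;
        }

        location ~ \.php$ {
            fastcgi_pass  127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include       fastcgi_params;
        }
    }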
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227251,227266#msg-227266 From nginx-forum at nginx.us Wed Jun 6 12:05:06 2012 From: nginx-forum at nginx.us (rohit) Date: Wed, 6 Jun 2012 08:05:06 -0400 (EDT) Subject: how configure httpd drizzle with nginx Message-ID: <5fd1f43cda9d0ff2175e180c733fac4b.NginxMailingListEnglish@forum.nginx.org> `i configure nginx download nginx.1.0.8 cd /nginx.1.0.8 ./configure --prefix=/etc/nginx --add-module=/home/rohittyagi/Downloads/chaoslawful-drizzle-nginx-module-74682b1/ --add-module=/home/rohittyagi/Downloads/agentzh-rds-json-nginx-module-74c21b3/ create virtual upstrem like upstream backend { drizzled_server 127.0.0.1:3306 dbname=vhost_dbi password=root user=root protocol=mysql; } its give error [emerg]: unknown directive "drizzled_server" in /etc/nginx/nginx.conf:133 can any one help me to configure HttpDrizzleModule with nginx` Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227274,227274#msg-227274 From nginx-forum at nginx.us Wed Jun 6 12:08:19 2012 From: nginx-forum at nginx.us (rohit) Date: Wed, 6 Jun 2012 08:08:19 -0400 (EDT) Subject: how create mass dynamic virtual hosting in nginx with Mysql database Message-ID: how make mass dynamic hosting like mod_vhost_dbi in apachi how we can execute mysql query in nginx.conf with out drizzle Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227275,227275#msg-227275 From christian.boenning at gmail.com Wed Jun 6 12:20:06 2012 From: christian.boenning at gmail.com (=?ISO-8859-1?Q?Christian_B=F6nning?=) Date: Wed, 6 Jun 2012 14:20:06 +0200 Subject: how configure httpd drizzle with nginx In-Reply-To: <5fd1f43cda9d0ff2175e180c733fac4b.NginxMailingListEnglish@forum.nginx.org> References: <5fd1f43cda9d0ff2175e180c733fac4b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, that 'unknown directive' says it all. You need to check your backend definition. The drizzled_server key simply does not exist. See the README of this module: https://github.com/chaoslawful/drizzle-nginx-module#readme That directive is called 'drizzle_server'. Should be working good good once you've corrected that typo. Regards, Chris 2012/6/6 rohit : > `i configure nginx > download nginx.1.0.8 > cd ?/nginx.1.0.8 > > ./configure --prefix=/etc/nginx > --add-module=/home/rohittyagi/Downloads/chaoslawful-drizzle-nginx-module-74682b1/ > ?--add-module=/home/rohittyagi/Downloads/agentzh-rds-json-nginx-module-74c21b3/ > > create virtual upstrem like > > upstream backend { > ? ? ? ? ? ?drizzled_server 127.0.0.1:3306 dbname=vhost_dbi > ? ? ? ? ? ? ? ? password=root user=root protocol=mysql; > ? ? ? ?} > > its give error > [emerg]: unknown directive "drizzled_server" in > /etc/nginx/nginx.conf:133 > ?can any one help me to configure HttpDrizzleModule with nginx` > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227274,227274#msg-227274 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From markus.jelsma at openindex.io Wed Jun 6 13:05:54 2012 From: markus.jelsma at openindex.io (=?utf-8?Q?Markus_Jelsma?=) Date: Wed, 6 Jun 2012 13:05:54 +0000 Subject: healthcheck_nginx_upstreams patch fails for Nginx 1.2.1 Message-ID: Hello, We tried to test the new Nginx 1.2.1 release but we're using the healthcheck_nginx_upstreams module [1]. With Nginx 1.1.x and 1.2.0 we could succesfully apply the Healthcheck module patch but hunks fail for Nginx 1.2.1. Any hints on what we can do about it? 
# patch --dry-run -p1 < nginx.patch patching file src/http/ngx_http_upstream.c Hunk #1 succeeded at 4403 (offset 110 lines). patching file src/http/ngx_http_upstream.h Hunk #1 succeeded at 110 (offset 1 line). patching file src/http/ngx_http_upstream_round_robin.c Hunk #1 succeeded at 5 (offset 1 line). Hunk #2 FAILED at 14. Hunk #3 succeeded at 35 (offset 1 line). Hunk #4 succeeded at 77 (offset 8 lines). Hunk #5 succeeded at 399 (offset 11 lines). Hunk #6 FAILED at 434. Hunk #7 FAILED at 495. Hunk #8 FAILED at 600. Hunk #9 FAILED at 625. Hunk #10 FAILED at 645. 6 out of 10 hunks FAILED -- saving rejects to file src/http/ngx_http_upstream_round_robin.c.rej patching file src/http/ngx_http_upstream_round_robin.h Hunk #1 succeeded at 29 (offset 3 lines) [1]: https://github.com/cep21/healthcheck_nginx_upstreams Thanks, Markus From johannes_graumann at web.de Wed Jun 6 13:18:10 2012 From: johannes_graumann at web.de (Johannes Graumann) Date: Wed, 06 Jun 2012 16:18:10 +0300 Subject: Conversion of PHP single machine setup to nginx/php residing on different boxes Message-ID: Hello, I am trying to implement http://www.howtoforge.com/installing-nginx-with- php5-and-mysql-support-on-debian-squeeze , where the nginx server config looks like so: > server { > listen 80; ## listen for ipv4 > listen [::]:80 default ipv6only=on; ## listen for ipv6 > server_name _; > access_log /var/log/nginx/localhost.access.log; > location / { > root /var/www; > index index.php index.html index.htm; > } > location /doc { > root /usr/share; > autoindex on; > allow 127.0.0.1; > deny all; > } > location /images { > root /usr/share; > autoindex on; > } > #error_page 404 /404.html; > # redirect server error pages to the static page /50x.html > # > #error_page 500 502 503 504 /50x.html; > #location = /50x.html { > # root /var/www/nginx-default; > #} > # proxy the PHP scripts to Apache listening on 127.0.0.1:80 > # > #location ~ \.php$ { > #proxy_pass http://127.0.0.1; > #} > # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 > # > location ~ \.php$ { > try_files $uri =404; > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name; > include fastcgi_params; > } > # deny access to .htaccess files, if Apache's document root > # concurs with nginx's one > # > location ~ /\.ht { > deny all; > } > } My setup looks slightly different from the one used for that howto, as I have a host/firewall/nginx box that houses client applications in lxc- containers. I am having trouble in setting up the ~ \.php$ bit in that context. My config lookis like this: > server { > listen 443; > server_name XXX; > client_max_body_size 40M; > # SSL is using CACert credentials > ssl on; > ssl_certificate /etc/ssl/private/cacert.XXX.org.pem; > ssl_certificate_key /etc/ssl/private/cacert.XXX.org_privatkey.pem; > ssl_session_timeout 5m; > ssl_protocols SSLv3 TLSv1; > ssl_ciphers ALL:!ADH:!EXPORT56:!LOW:RC4+RSA:+HIGH:+MEDIUM:+SSLv3: +EXP; > ssl_prefer_server_ciphers on; > # Proxy the "feng-container" lxc container > location / { > proxy_pass http://10.10.10.3:80/; > } > location ~ \.php$ { > try_files $uri =404; > fastcgi_pass 10.10.10.3:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME http://10.10.10.3:80/$fastcgi_script_name; > include fastcgi_params; > } > } The "/" bit works, but php location gives me "404" despite ports 80/9000 being open ... Can anyone nudge me into the right direction on how to proceed here? 
Thanks, Joh From nginx-forum at nginx.us Wed Jun 6 13:22:50 2012 From: nginx-forum at nginx.us (torajx) Date: Wed, 6 Jun 2012 09:22:50 -0400 (EDT) Subject: block access to a file !! Message-ID: <82732c268abadd4f9f4c2569849989eb.NginxMailingListEnglish@forum.nginx.org> Hi, it must be a piece of cake but i can not find a soloution... the problem is " location admin.php". you can check it below... i just want to allow access to admin.php to some ip address and deny others. if i remove allow line from it it deny all; but when I add allow line it dont work .. please help me... I tried too many syntax but no success. i even add root and index to this location but browser prompt to download my php file.. please help here 's my server section of config...very simple server { listen 80; server_tokens off; server_name www.mysite.com; access_log /var/log/nginx/mysite.log; error_page 404 /404.html; error_page 500 502 503 504 /50x.html; location / { root /usr/share/nginx/www; index index.php; client_max_body_size 8m; client_body_buffer_size 256k; } location = /50x.html { root /usr/share/nginx/html; } location = /404.html { root /usr/share/nginx/html; } location ~ /\.ht { deny all; } location ~ \.php$ { root /usr/share/nginx/www; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; set $ssl off; if ($ssl_protocol != "" ) { set $ssl on; } fastcgi_param HTTPS $ssl; } location ~ admin.php { allow 217.66.196.193; deny all; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227281,227281#msg-227281 From francis at daoine.org Wed Jun 6 13:34:24 2012 From: francis at daoine.org (Francis Daly) Date: Wed, 6 Jun 2012 14:34:24 +0100 Subject: Conversion of PHP single machine setup to nginx/php residing on different boxes In-Reply-To: References: Message-ID: <20120606133424.GE4719@craic.sysops.org> On Wed, Jun 06, 2012 at 04:18:10PM +0300, Johannes Graumann wrote: Hi there, > > location ~ \.php$ { > > try_files $uri =404; try_files refers to local files only. http://nginx.org/r/try_files You probably don't want it here at all. > > fastcgi_pass 10.10.10.3:9000; > > fastcgi_index index.php; > > fastcgi_param SCRIPT_FILENAME > http://10.10.10.3:80/$fastcgi_script_name; SCRIPT_FILENAME is a parameter that the fastcgi server will read, and it will expect it to refer to a file on its filesystem that it should load and process. So: if the client requests the url /one/two.php, what file on the 10.10.10.3 server do you want that to correspond to? (From the perspective of the fasctcgi server, in case it is chroot'ed.) Make SCRIPT_FILENAME be that filename. All the best, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Jun 6 13:41:08 2012 From: francis at daoine.org (Francis Daly) Date: Wed, 6 Jun 2012 14:41:08 +0100 Subject: block access to a file !! In-Reply-To: <82732c268abadd4f9f4c2569849989eb.NginxMailingListEnglish@forum.nginx.org> References: <82732c268abadd4f9f4c2569849989eb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120606134108.GF4719@craic.sysops.org> On Wed, Jun 06, 2012 at 09:22:50AM -0400, torajx wrote: Hi there, > it must be a piece of cake but i can not find a soloution... > > the problem is " location admin.php". you can check it below... No, the problem is "location ~ admin.php" coming after "location ~ \.php$". The exact characters matter. http://nginx.org/r/location > i just want to allow access to admin.php to some ip address and deny > others. 
Either use something like "location = /admin.php", or maybe reorder to regex locations (with ~) so they match in the order you want. You will (probably) want to repeat your fastcgi_pass and other fastcgi_* config lines inside the new location block. One request is handled by one location. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Jun 6 13:50:57 2012 From: nginx-forum at nginx.us (torajx) Date: Wed, 6 Jun 2012 09:50:57 -0400 (EDT) Subject: block access to a file !! In-Reply-To: <82732c268abadd4f9f4c2569849989eb.NginxMailingListEnglish@forum.nginx.org> References: <82732c268abadd4f9f4c2569849989eb.NginxMailingListEnglish@forum.nginx.org> Message-ID: thank you for reply.. i change it to "location = /admin.php " and move it above the "location ~ \.php$ " it blocks other IPs but give file not found error for allowed Ips.. the last time I added root to admin.php location the browser promot me to download admin.php.. what can I do now ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227281,227284#msg-227284 From francis at daoine.org Wed Jun 6 14:31:53 2012 From: francis at daoine.org (Francis Daly) Date: Wed, 6 Jun 2012 15:31:53 +0100 Subject: block access to a file !! In-Reply-To: References: <82732c268abadd4f9f4c2569849989eb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120606143153.GG4719@craic.sysops.org> On Wed, Jun 06, 2012 at 09:50:57AM -0400, torajx wrote: > thank you for reply.. > i change it to "location = /admin.php " > and move it above the "location ~ \.php$ " """ Either use something like "location = /admin.php", or maybe reorder to regex locations (with ~) so they match in the order you want. """ You've done both. That's ok, but unnecessary. > it blocks other IPs but give file not found error for allowed Ips.. > > the last time I added root to admin.php location the browser promot me > to download admin.php.. > > what can I do now ? """ You will (probably) want to repeat your fastcgi_pass and other fastcgi_* config lines inside the new location block. One request is handled by one location. """ Do that. f -- Francis Daly francis at daoine.org From christian.boenning at gmail.com Wed Jun 6 15:00:35 2012 From: christian.boenning at gmail.com (=?ISO-8859-1?Q?Christian_B=F6nning?=) Date: Wed, 6 Jun 2012 17:00:35 +0200 Subject: is 'try_files' or 'rewrite' aware of 'gzip_static'? Message-ID: Hi, I'm running a quite huge TYPO3 installation on nginx (several other installations to follow on that host). In this case I'm using (or want to use) nc_staticfilecache as an extension to speed things up a little. This extension writes the html document which is sent to the client down to disk as a single uncompressed file and - depending on its configuration - another one which is gzipped for static delivery for subsequent requests to the same document. Now the question is if my rewrite or try_files directives will try to deliver the gzipped version of the page. My try_files directive looks like this: try_files /typo3temp/tx_ncstaticfilecache/cache/$host${request_uri}index.html @fetch_from_typo3; The index.html.gz within the same directory does exist and 'gzip_static' is set to 'on'. 
Best, Chris From johannes_graumann at web.de Wed Jun 6 20:10:37 2012 From: johannes_graumann at web.de (Johannes Graumann) Date: Wed, 06 Jun 2012 23:10:37 +0300 Subject: Conversion of PHP single machine setup to nginx/php residing on different boxes References: <20120606133424.GE4719@craic.sysops.org> Message-ID: Francis Daly wrote: > On Wed, Jun 06, 2012 at 04:18:10PM +0300, Johannes Graumann wrote: > > Hi there, > >> > location ~ \.php$ { >> > try_files $uri =404; > > try_files refers to local files only. http://nginx.org/r/try_files > > You probably don't want it here at all. > >> > fastcgi_pass 10.10.10.3:9000; >> > fastcgi_index index.php; >> > fastcgi_param SCRIPT_FILENAME >> http://10.10.10.3:80/$fastcgi_script_name; > > SCRIPT_FILENAME is a parameter that the fastcgi server will read, and > it will expect it to refer to a file on its filesystem that it should > load and process. > > So: if the client requests the url /one/two.php, what file on the > 10.10.10.3 server do you want that to correspond to? (From the perspective > of the fasctcgi server, in case it is chroot'ed.) > > Make SCRIPT_FILENAME be that filename. Thanks a lot for your pointers, which nudged me in the right direction. After also finding http://nginxlibrary.com/resolving-no-input-file- specified-error/ I came up with the following, which is working (a "info.php" script at the documentroot containing a call to "phpinfo()" is rendered properly and the output has "Server API: CGI/FastCGI"): > server { > listen 443; > server_name XXX.org XXX; > client_max_body_size 40M; > # SSL is using CACert credentials > ssl on; > ssl_certificate /etc/ssl/private/cacert.XXX.org.pem; > ssl_certificate_key /etc/ssl/private/cacert.XXX.org_privatkey.pem; > ssl_session_timeout 5m; > ssl_protocols SSLv3 TLSv1; > ssl_ciphers ALL:!ADH:!EXPORT56:!LOW:RC4+RSA:+HIGH:+MEDIUM: +SSLv3:+EXP; > ssl_prefer_server_ciphers on; > # Proxy the "feng-container" lxc container > root /var/www/; > location / { > proxy_pass http://10.10.10.3:80/; > index index.php index.html index.htm; > } > location ~ \.php$ { > fastcgi_pass 10.10.10.3:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > include fastcgi_params; > } > } Thank you. Joh From francis at daoine.org Wed Jun 6 20:44:54 2012 From: francis at daoine.org (Francis Daly) Date: Wed, 6 Jun 2012 21:44:54 +0100 Subject: is 'try_files' or 'rewrite' aware of 'gzip_static'? In-Reply-To: References: Message-ID: <20120606204454.GH4719@craic.sysops.org> On Wed, Jun 06, 2012 at 05:00:35PM +0200, Christian B?nning wrote: Hi there, > This extension writes the html document which is sent to the > client down to disk as a single uncompressed file and - depending on > its configuration - another one which is gzipped for static delivery > for subsequent requests to the same document. > > Now the question is if my rewrite or try_files directives will try to > deliver the gzipped version of the page. You've shown a "try_files" directive, but no rewrite directive. Quick testing indicates that for try_files, the answer is "yes and no", with a fuller explanation below. > My try_files directive looks like this: > try_files /typo3temp/tx_ncstaticfilecache/cache/$host${request_uri}index.html > @fetch_from_typo3; http://nginx.org/r/try_files indicates that it means "take these urls in turn, convert them into filenames by prefixing the relevant 'alias' or 'root' value, and for the first such file that exists, process the corresponding url normally". 
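As a concrete illustration (the hostname and paths are only examples, and it assumes no root/alias is set in that location, so the built-in default of /usr/local/nginx/html applies):

    # root defaults to /usr/local/nginx/html in this example
    try_files /typo3temp/tx_ncstaticfilecache/cache/$host${request_uri}index.html
              @fetch_from_typo3;

    # a request for http://example.com/foo/ makes nginx test this file on disk:
    #   /usr/local/nginx/html/typo3temp/tx_ncstaticfilecache/cache/example.com/foo/index.html
    # and only if it exists is that url processed normally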
So with the above line, unless you have a "typo3temp" directory inside /usr/local/nginx/html, it is going to fall back to the @location every time. Once that is fixed, then if the index.html file does not exist, try_files will fall back to the @location (so "it does not heed gzip_static"). But if index.html does exist, then processing continues of that url -- which will serve index.html.gz if appropriate (so "it does heed gzip_static"). Overall: try_files is not aware of gzip_static; but nginx will still honour it if both the non-gz and .gz files exist. This differs from the "normal" gzip_static handling which will serve the .gz version if appropriate, whether or not non-gz exists. > The index.html.gz within the same directory does exist and > 'gzip_static' is set to 'on'. So long as both index.html and index.html.gz are there, try_files should Just Work. I tested this on a 1.2.0 build, in case that matters. rewrite can be tested separately, if you have a sample directive. f -- Francis Daly francis at daoine.org From contact at jpluscplusm.com Wed Jun 6 22:39:30 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 6 Jun 2012 23:39:30 +0100 Subject: Status of Nginx centralised logging Message-ID: Hi all - As many of us will have discovered over the course of our careers, centralised/non-local logging is an important part of any non-trivial infrastructure. I'm aware of different patches that add syslogging capabilities to different Nginx versions, but I've yet to see an official description of how we should achieve non-local logging. Preferably syslog, personally speaking, but anything scalable, supportable, debug-able and sane would, I feel, be acceptable to the wider community. I'm aware of at least the following options, but I feel they're all lacking to some degree: * log to local disk and syslog/logstash/rsync them off: undesirable due to the management overhead of the additional logging process/logic + the wasted disk I/O when creating the per-request logs * use post_action to hit a logging endpoint after each request: would add overhead to both configuration and network traffic; post_action has been described as "a bit of a hack", IIRC * use syslog patches: not "official", hence troublesome to debug on-list; rightly or wrongly, the usual response of "please replicate problem without 3rd party patches" would cause problems when debugging production systems * use another 3rd-party logging protocol, e.g. statsd, redis: as similarly unsupportable as syslog patches I wonder if someone could officially comment on the potential for resolving this gap in Nginx's feature set? Or perhaps I'm missing something ... :-) Many thanks, all Jonathan -- Jonathan Matthews Oxford, London, UK http://www.jpluscplusm.com/contact.html From omega13a at fedtrek.com Wed Jun 6 22:49:38 2012 From: omega13a at fedtrek.com (Brandon Amaro) Date: Wed, 06 Jun 2012 15:49:38 -0700 Subject: nginx 1.3.1 not PHP-FPM friendly In-Reply-To: <4FCEDE9D.8080107@fedtrek.com> References: <20120605143050.GW31671@mdounin.ru> <4FCECDEA.8080607@fedtrek.com> <4FCEDE9D.8080107@fedtrek.com> Message-ID: <4FCFDE82.4030303@fedtrek.com> I'm still having problems. I forgot to mention that I'm running PHP 5.4.3. 
-- omega13a Owner and Founder of UFT http://www.fedtrek.com On 06/05/2012 09:37 PM, Brandon Amaro wrote: > I'm assuming this is the error as that the error long is filled with > similar messages since around the time I upgraded: > > > 2012/06/05 21:35:23 [error] 5976#0: *43558 connect() to > unix:/var/lib/php5-fpm/web1.sock failed (11: Resource temporarily > unavailable) while connecting to upstream, client: 72.253.115.223, > server: fedtrek.com, request: "POST > /Forums-file-ajax_online_update-mypage-minus200.html HTTP/1.1", > upstream: "fastcgi://unix:/var/lib/php5-fpm/web1.sock:", host: > "www.fedtrek.com", referrer: > "http://www.fedtrek.com/Borg_Species_Designations.html" > > On 06/05/2012 09:28 PM, B.R. wrote: >> Wild guess: maybe some file openings limit reached? >> >> Most important thing to start: what does the error log says? >> --- >> *B. R.* >> >> >> On Tue, Jun 5, 2012 at 11:26 PM, Brandon Amaro > > wrote: >> >> After I upgraded to nginx 1.3.1, I've been having a problem with >> PHP-FPM. It would work fine for a few minutes then I would start >> getting 500 Internal Server Errors on all the pages that use PHP. >> I have to keep restaring the PHP-FPM service in order to >> navigate my website. Everything was running smoothly before >> upgrading nginx and I've haven't made any recent changes in the >> config files for both nginx and anything PHP related. I'm running >> Fedora 14 (can't upgrade to anything more recent) and compiled >> nginx myself as I've always done in the past. Any help would be >> greatly appreciated. >> >> -- >> omega13a >> Owner and Founder of UFT >> http://www.fedtrek.com >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > omega13a > Owner and Founder of UFT > http://www.fedtrek.com > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.tourne at gmail.com Wed Jun 6 23:03:03 2012 From: matthieu.tourne at gmail.com (Matthieu Tourne) Date: Wed, 6 Jun 2012 16:03:03 -0700 Subject: Status of Nginx centralised logging In-Reply-To: References: Message-ID: On Wed, Jun 6, 2012 at 3:39 PM, Jonathan Matthews wrote: > Hi all - > > As many of us will have discovered over the course of our careers, > centralised/non-local logging is an important part of any non-trivial > infrastructure. > > I'm aware of different patches that add syslogging capabilities to > different Nginx versions, but I've yet to see an official description > of how we should achieve non-local logging. Preferably syslog, > personally speaking, but anything scalable, supportable, debug-able > and sane would, I feel, be acceptable to the wider community. 
> > I'm aware of at least the following options, but I feel they're all > lacking to some degree: > > * log to local disk and syslog/logstash/rsync them off: undesirable > due to the management overhead of the additional logging process/logic > + the wasted disk I/O when creating the per-request logs > > * use post_action to hit a logging endpoint after each request: would > add overhead to both configuration and network traffic; post_action > has been described as "a bit of a hack", IIRC > > * use syslog patches: not "official", hence troublesome to debug > on-list; rightly or wrongly, the usual response of "please replicate > problem without 3rd party patches" would cause problems when debugging > production systems > > * use another 3rd-party logging protocol, e.g. statsd, redis: as > similarly unsupportable as syslog patches > > I wonder if someone could officially comment on the potential for > resolving this gap in Nginx's feature set? Or perhaps I'm missing > something ... :-) > Hi! I think this thread from a few days ago might answer some of your questions : http://forum.nginx.org/read.php?2,227234 We use log_by_lua to aggregate data, and then fetch the aggregated data from another location. In turns this feeds into a bigger centralized system (OpenTSDB). This way we don't have to output one log line, or hit one logging endpoint for each request that goes through. One interesting thing we've noticed while developing this is that even an error in our logging code really doesn't affect serving pages, so this might be the safest use case for 3rd party modules you could imagine. Currently your only other official alternative to get data, besides the access_log is the stub_status module : http://wiki.nginx.org/HttpStubStatusModule Hope this helps, Matthieu From contact at jpluscplusm.com Wed Jun 6 23:40:20 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 7 Jun 2012 00:40:20 +0100 Subject: Status of Nginx centralised logging In-Reply-To: References: Message-ID: On 7 June 2012 00:03, Matthieu Tourne wrote: > I think this thread from a few days ago might answer some of your questions : > http://forum.nginx.org/read.php?2,227234 Yes, it was glancing over this thread that made me think I needed to raise this issue kinda sorta formally, and ask for some official recognition of this relatively important issue. > We use log_by_lua to aggregate data, and then fetch the aggregated > data from another location. > In turns this feeds into a bigger centralized system (OpenTSDB). > This way we don't have to output one log line, or hit one logging > endpoint for each request that goes through. > > One interesting thing we've noticed while developing this is that even > an error in our logging code really doesn't affect serving pages, so > this might be the safest use case for 3rd party modules you could > imagine. Interesting, thank you. [ I've so far avoided enabling lua in my production Nginx instances as it's almost /too/ powerful. If our systems team (or devs!) get the idea we can do /anything/ we like in the reverse proxy layer in front of our app servers, I'm concerned we'll start to add too much logic in there that should actually live in the apps :-) ] I'll definitely take a look at log_by_lua, however I still think we should get the centralised logging issue recognised officially as a area where Nginx is deficient, along with suggestions for bringing it up to spec. Nginx is a vital part of many people's infrastructures now: it's a shame it's behind the (devops'y!) 
times in this area. > Currently your only other official alternative to get data, besides > the access_log is the stub_status module : > http://wiki.nginx.org/HttpStubStatusModule Appreciated, but not being per-request, it isn't really in the same space as the other things we're discussing. Cheers, Jonathan -- Jonathan Matthews Oxford, London, UK http://www.jpluscplusm.com/contact.html From reallfqq-nginx at yahoo.fr Wed Jun 6 23:50:32 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 6 Jun 2012 19:50:32 -0400 Subject: nginx 1.3.1 not PHP-FPM friendly In-Reply-To: <4FCFDE82.4030303@fedtrek.com> References: <20120605143050.GW31671@mdounin.ru> <4FCECDEA.8080607@fedtrek.com> <4FCEDE9D.8080107@fedtrek.com> <4FCFDE82.4030303@fedtrek.com> Message-ID: Hello, Well, you may: 1) Search the ML archive to see if anyone has already encountered your problem 2) Since you said problems started to appear after an upgrade: Use the stable releases of Nginx (latest is 1.2.1) rather than the development version if you don't especially need it *Or* Revert to a previous development version to check if it could solve the problem 3) Use the Fedora Nginx package. Other people seem to have problems with Nginx on Fedora. Packages are made for Fedora starting with v15, but maybe you can give it a try anyway? 4) Check your system to see where the problem might come from: too many files open, too many Nginx processes spawned, anything not normal. Can't help much more, though... --- *B. R.* On Wed, Jun 6, 2012 at 6:49 PM, Brandon Amaro wrote: > I'm still having problems. I forgot to mention that I'm running PHP 5.4.3. > > > -- > omega13a > Owner and Founder of UFT http://www.fedtrek.com > > > > On 06/05/2012 09:37 PM, Brandon Amaro wrote: > > I'm assuming this is the error as that the error long is filled with > similar messages since around the time I upgraded: > > > 2012/06/05 21:35:23 [error] 5976#0: *43558 connect() to > unix:/var/lib/php5-fpm/web1.sock failed (11: Resource temporarily > unavailable) while connecting to upstream, client: 72.253.115.223, server: > fedtrek.com, request: "POST > /Forums-file-ajax_online_update-mypage-minus200.html HTTP/1.1", upstream: > "fastcgi://unix:/var/lib/php5-fpm/web1.sock:", host: "www.fedtrek.com", > referrer: "http://www.fedtrek.com/Borg_Species_Designations.html" > > On 06/05/2012 09:28 PM, B.R. wrote: > > Wild guess: maybe some file openings limit reached? > > Most important thing to start: what does the error log says? > --- > *B. R.* > > > On Tue, Jun 5, 2012 at 11:26 PM, Brandon Amaro wrote: > >> After I upgraded to nginx 1.3.1, I've been having a problem with PHP-FPM. >> It would work fine for a few minutes then I would start getting 500 >> Internal Server Errors on all the pages that use PHP. I have to keep >> restaring the PHP-FPM service in order to navigate my website. Everything >> was running smoothly before upgrading nginx and I've haven't made any >> recent changes in the config files for both nginx and anything PHP related. >> I'm running Fedora 14 (can't upgrade to anything more recent) and compiled >> nginx myself as I've always done in the past. Any help would be greatly >> appreciated.
>> >> -- >> omega13a >> Owner and Founder of UFT >> http://www.fedtrek.com >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > omega13a > Owner and Founder of UFThttp://www.fedtrek.com > > > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moseleymark at gmail.com Thu Jun 7 00:02:14 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Wed, 6 Jun 2012 17:02:14 -0700 Subject: Status of Nginx centralised logging In-Reply-To: References: Message-ID: > * use another 3rd-party logging protocol, e.g. statsd, redis: as > similarly unsupportable as syslog patches There's also the nginx sflow module, http://code.google.com/p/nginx-sflow-module/ I realize that this is another 3rd party module, so falls into the same 'not officially supported' category, but I figured I'd mention it as something to add to the centralized logging bag of tricks. I've used it and it's pretty nice, though obviously you need an sflow server somewhere to send to. From psi at y0ru.net Thu Jun 7 02:48:27 2012 From: psi at y0ru.net (JD Harrington) Date: Wed, 6 Jun 2012 22:48:27 -0400 Subject: exposing request start time as a variable to use in a custom header Message-ID: Hi, I'm using New Relic RPM for performance profiling and they have an optional metric for time spent in request queueing. In order to compute this metric, they require that the frontend server add an X-QUEUE-START request header with a timestamp in microseconds before handing the request off to the application. There are a couple of patches floating around that add this ability to various versions of nginx. I'm wondering if this or something similar might be considered for inclusion in a future official release of nginx. Details about the New Relic feature: https://newrelic.com/docs/features/tracking-front-end-time The currently available patches: https://gist.github.com/318681 Thanks! JD From tdgh2323 at hotmail.com Thu Jun 7 06:15:46 2012 From: tdgh2323 at hotmail.com (Joseph Cabezas) Date: Thu, 7 Jun 2012 06:15:46 +0000 Subject: Possible to limit_req based on requests coming from a Class C (/24 subnet) instead of per IP (/32) ? Message-ID: Hello, Is it Possible to limit_req based on requests coming from a Class C (/24 subnet) instead of per IP (/32) ? If so can anybody please provide an example. Regards, Joseph -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Jun 7 08:33:47 2012 From: francis at daoine.org (Francis Daly) Date: Thu, 7 Jun 2012 09:33:47 +0100 Subject: Possible to limit_req based on requests coming from a Class C (/24 subnet) instead of per IP (/32) ? In-Reply-To: References: Message-ID: <20120607083347.GI4719@craic.sysops.org> On Thu, Jun 07, 2012 at 06:15:46AM +0000, Joseph Cabezas wrote: Hi there, > Is it Possible to limit_req based on requests coming from a Class C (/24 subnet) instead of per IP (/32) ? If so can anybody please provide an example. 
Totally untested, but: Use exactly the same method as in the responses to your other limit_req questions? limit_req_zone (http://nginx.org/r/limit_req_zone) using a new variable "$the_class_c". limit_req (http://nginx.org/r/limit_req) to do the limiting. map (http://nginx.org/r/map) to set the variable "$the_class_c" to empty, or to some identifier for the class C that should be limited. Note that those docs for "map" don't currently mention the "~ means regex match" or the "you can refer back to matched parts from the pattern, in the value", which are shown on http://wiki.nginx.org/HttpMapModule, and which will likely be useful here. In your map, you could test $remote_addr for "everything up to the final .digits"; or possibly you could try taking "three bytes of $binary_remote_addr". Test it and see. Usually the debug log will include useful information about what nginx thinks is going on, in case it is unclear. Good luck with it, f -- Francis Daly francis at daoine.org From steeeeeveee at gmx.net Thu Jun 7 10:01:37 2012 From: steeeeeveee at gmx.net (Steve) Date: Thu, 07 Jun 2012 12:01:37 +0200 Subject: nginx 1.3.1 not PHP-FPM friendly In-Reply-To: <4FCFDE82.4030303@fedtrek.com> References: <20120605143050.GW31671@mdounin.ru> <4FCECDEA.8080607@fedtrek.com> <4FCEDE9D.8080107@fedtrek.com> <4FCFDE82.4030303@fedtrek.com> Message-ID: <20120607100137.211840@gmx.net> -------- Original-Nachricht -------- > Datum: Wed, 06 Jun 2012 15:49:38 -0700 > Von: Brandon Amaro > An: nginx at nginx.org > Betreff: Re: nginx 1.3.1 not PHP-FPM friendly > I'm still having problems. I forgot to mention that I'm running PHP 5.4.3. > I run nginx 1.3.1 and PHP 5.4.3 too and have no issues at all. > -- > omega13a > Owner and Founder of UFT > http://www.fedtrek.com > > > > On 06/05/2012 09:37 PM, Brandon Amaro wrote: > > I'm assuming this is the error as that the error long is filled with > > similar messages since around the time I upgraded: > > > > > > 2012/06/05 21:35:23 [error] 5976#0: *43558 connect() to > > unix:/var/lib/php5-fpm/web1.sock failed (11: Resource temporarily > > unavailable) while connecting to upstream, client: 72.253.115.223, > > server: fedtrek.com, request: "POST > > /Forums-file-ajax_online_update-mypage-minus200.html HTTP/1.1", > > upstream: "fastcgi://unix:/var/lib/php5-fpm/web1.sock:", host: > > "www.fedtrek.com", referrer: > > "http://www.fedtrek.com/Borg_Species_Designations.html" > > > > On 06/05/2012 09:28 PM, B.R. wrote: > >> Wild guess: maybe some file openings limit reached? > >> > >> Most important thing to start: what does the error log says? > >> --- > >> *B. R.* > >> > >> > >> On Tue, Jun 5, 2012 at 11:26 PM, Brandon Amaro >> > wrote: > >> > >> After I upgraded to nginx 1.3.1, I've been having a problem with > >> PHP-FPM. It would work fine for a few minutes then I would start > >> getting 500 Internal Server Errors on all the pages that use PHP. > >> I have to keep restaring the PHP-FPM service in order to > >> navigate my website. Everything was running smoothly before > >> upgrading nginx and I've haven't made any recent changes in the > >> config files for both nginx and anything PHP related. I'm running > >> Fedora 14 (can't upgrade to anything more recent) and compiled > >> nginx myself as I've always done in the past. Any help would be > >> greatly appreciated. 
> >> > >> -- > >> omega13a > >> Owner and Founder of UFT > >> http://www.fedtrek.com > >> > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > >> > >> > >> > >> > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > > omega13a > > Owner and Founder of UFT > > http://www.fedtrek.com > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx -- Empfehlen Sie GMX DSL Ihren Freunden und Bekannten und wir belohnen Sie mit bis zu 50,- Euro! https://freundschaftswerbung.gmx.de From mdounin at mdounin.ru Thu Jun 7 10:09:57 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 7 Jun 2012 14:09:57 +0400 Subject: Status of Nginx centralised logging In-Reply-To: References: Message-ID: <20120607100957.GZ31671@mdounin.ru> Hello! On Wed, Jun 06, 2012 at 11:39:30PM +0100, Jonathan Matthews wrote: > Hi all - > > As many of us will have discovered over the course of our careers, > centralised/non-local logging is an important part of any non-trivial > infrastructure. > > I'm aware of different patches that add syslogging capabilities to > different Nginx versions, but I've yet to see an official description > of how we should achieve non-local logging. Preferably syslog, > personally speaking, but anything scalable, supportable, debug-able > and sane would, I feel, be acceptable to the wider community. > > I'm aware of at least the following options, but I feel they're all > lacking to some degree: > > * log to local disk and syslog/logstash/rsync them off: undesirable > due to the management overhead of the additional logging process/logic > + the wasted disk I/O when creating the per-request logs This is, actually, recommended solution. It's main advantage is that nginx will continue to serve requests even if something really bad will happen with your "non-trivial infrastructure". Basically it will work till the server is alive. [...] > * use another 3rd-party logging protocol, e.g. statsd, redis: as > similarly unsupportable as syslog patches You may also take a look at Valery Kholodkov's udplog module, http://grid.net.ru/nginx/udplog.en.html I haven't actually used it (as I prefer local logging solution, see above), but I tend to like the basic concept behind the module. At least it resolves the main problem with usual syslog interfaces (i.e. blocking). Maxim Dounin From agentzh at gmail.com Thu Jun 7 13:22:50 2012 From: agentzh at gmail.com (agentzh) Date: Thu, 7 Jun 2012 21:22:50 +0800 Subject: [ANN] ngx_openresty devel version 1.0.15.9 released In-Reply-To: References: Message-ID: Hello! After more than one week's active development, the new development version of ngx_openresty, 1.0.15.9, is now released: ? ?http://openresty.org/#Download Below is the change log for this release: * upgraded LuaNginxModule to 0.5.0rc30. * feature: new Lua API, ngx.sleep(), for doing non-blocking sleep in Lua. thanks jinglong for the patch. * feature: ngx.log() now checks if the log level number is in the valid range (0 ~ 8). thanks Xiaoyu Chen (smallfish) for suggesting this improvement. * bugfix: cosocket:receiveuntil could leak memory, especially for long pattern string arguments. this bug was caught by Test::Nginx::Socket when setting the environment "TEST_NGINX_CHECK_LEAK=1". 
* bugfix: ngx.re.sub() could leak memory when the "replace" template argument string is not well-formed and the "o" regex option is also specified. this issue was caught by Test::Nginx::Socket when setting environment "TEST_NGINX_CHECK_LEAK=1". * bugfix: ngx.re.gmatch leaked memory when the "o" option was not specified. this bug was caught by Test::Nginx::Socket when setting the environment "TEST_NGINX_CHECK_LEAK=1". * bugfix: the Lua "_G" special table did not get cleared when lua_code_cache is turned off. thanks Moven for reporting this issue. * bugfix: cosocket:connect() might hang on socket creation errors or segfault later due to left-over state flags. * bugfix: refactored on-demand handler registration. the old approach rewrites to static (global) variables at config-time, which could have potential problems with nginx config reloading via the "HUP" signal. * optimize: now we no longer call "ngx_http_post_request" to wake up the request associated with the current cosocket upstream from within the cosocket upstream event handlers, but rather call "r->write_event_handler" directly. this change can also make backtraces more meaningful because we preserve the original calling stack. * docs: massive wording improvements from the Nginx Wiki site. thanks Dayo. * upgraded RdsJsonNginxModule to 0.12rc10. * bugfix: refactored on-demand handler registration. the old approach rewrites to static (global) variables at config-time, which could have potential problems with nginx config reloading via the "HUP" signal. * bugfix: the (optional) no-pool patch might leak memory. now we have updated the no-pool patch to the latest version that is a thorough rewrite. * bugfix: applied poll_del_event_at_exit patch that fixed a segmentation fault in the nginx core when the poll event type is used: * bugfix: applied the resolver_debug_log patch that fixed reads of uninitialized memory in the nginx core: Special thanks go to all our contributors and users for helping make this happen :) OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules, as well? as most of their external dependencies. See OpenResty's homepage for more details: ? http://openresty.org/ Have fun! -agentzh From omega13a at fedtrek.com Thu Jun 7 16:00:05 2012 From: omega13a at fedtrek.com (Brandon Amaro) Date: Thu, 07 Jun 2012 09:00:05 -0700 Subject: nginx 1.3.1 not PHP-FPM friendly In-Reply-To: <20120607100137.211840@gmx.net> References: <20120605143050.GW31671@mdounin.ru> <4FCECDEA.8080607@fedtrek.com> <4FCEDE9D.8080107@fedtrek.com> <4FCFDE82.4030303@fedtrek.com> <20120607100137.211840@gmx.net> Message-ID: <4FD0D005.3010603@fedtrek.com> My investigation of things on my server have just gotten a bit weirder. My server has ISPConfig installed and the web based control panel is not effected at all by this slowness. Yet everything on my site is taking an insane amount of time to load if it loads at all. Even a simple php file that contains only phpinfo(); takes forever... I'm completely baffled by this. -- omega13a Owner and Founder of UFT http://www.fedtrek.com On 06/07/2012 03:01 AM, Steve wrote: > -------- Original-Nachricht -------- >> Datum: Wed, 06 Jun 2012 15:49:38 -0700 >> Von: Brandon Amaro >> An: nginx at nginx.org >> Betreff: Re: nginx 1.3.1 not PHP-FPM friendly >> I'm still having problems. I forgot to mention that I'm running PHP 5.4.3. >> > I run nginx 1.3.1 and PHP 5.4.3 too and have no issues at all. 
> > >> -- >> omega13a >> Owner and Founder of UFT >> http://www.fedtrek.com >> >> >> >> On 06/05/2012 09:37 PM, Brandon Amaro wrote: >>> I'm assuming this is the error as that the error long is filled with >>> similar messages since around the time I upgraded: >>> >>> >>> 2012/06/05 21:35:23 [error] 5976#0: *43558 connect() to >>> unix:/var/lib/php5-fpm/web1.sock failed (11: Resource temporarily >>> unavailable) while connecting to upstream, client: 72.253.115.223, >>> server: fedtrek.com, request: "POST >>> /Forums-file-ajax_online_update-mypage-minus200.html HTTP/1.1", >>> upstream: "fastcgi://unix:/var/lib/php5-fpm/web1.sock:", host: >>> "www.fedtrek.com", referrer: >>> "http://www.fedtrek.com/Borg_Species_Designations.html" >>> >>> On 06/05/2012 09:28 PM, B.R. wrote: >>>> Wild guess: maybe some file openings limit reached? >>>> >>>> Most important thing to start: what does the error log says? >>>> --- >>>> *B. R.* >>>> >>>> >>>> On Tue, Jun 5, 2012 at 11:26 PM, Brandon Amaro>>> > wrote: >>>> >>>> After I upgraded to nginx 1.3.1, I've been having a problem with >>>> PHP-FPM. It would work fine for a few minutes then I would start >>>> getting 500 Internal Server Errors on all the pages that use PHP. >>>> I have to keep restaring the PHP-FPM service in order to >>>> navigate my website. Everything was running smoothly before >>>> upgrading nginx and I've haven't made any recent changes in the >>>> config files for both nginx and anything PHP related. I'm running >>>> Fedora 14 (can't upgrade to anything more recent) and compiled >>>> nginx myself as I've always done in the past. Any help would be >>>> greatly appreciated. >>>> >>>> -- >>>> omega13a >>>> Owner and Founder of UFT >>>> http://www.fedtrek.com >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> -- >>> omega13a >>> Owner and Founder of UFT >>> http://www.fedtrek.com >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx From lists at ruby-forum.com Thu Jun 7 17:33:12 2012 From: lists at ruby-forum.com (Peter P.) Date: Thu, 07 Jun 2012 19:33:12 +0200 Subject: Status of Nginx centralised logging In-Reply-To: References: Message-ID: I just wanted to add a couple of comments on using sFlow as the logging protocol - there are also sFlow plugins for Apache and Tomcat, allowing you to get the same data from all the HTTP server pools. On the analyzer side, the HTTP sFlow export also includes performance counters that can be trended with Graphite or Ganglia. You can also use sflowtool to convert the binary sFlow datagrams to ASCII combined logfile format that can be fed into any log analyzer. http://blog.sflow.com/2011/04/nginx.html Mark Moseley wrote in post #1063450: > >> * use another 3rd-party logging protocol, e.g. statsd, redis: as >> similarly unsupportable as syslog patches > > > There's also the nginx sflow module, > http://code.google.com/p/nginx-sflow-module/ > > I realize that this is another 3rd party module, so falls into the > same 'not officially supported' category, but I figured I'd mention it > as something to add to the centralized logging bag of tricks. 
I've > used it and it's pretty nice, though obviously you need an sflow > server somewhere to send to. -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Thu Jun 7 17:57:05 2012 From: nginx-forum at nginx.us (parttis) Date: Thu, 7 Jun 2012 13:57:05 -0400 (EDT) Subject: Code coverage on nginx using lcov (gcov) Message-ID: <7c9e6c714adc2c44c6bcbb6a748af0f1.NginxMailingListEnglish@forum.nginx.org> Hi I am trying to measure code coverage on nginx with lcov (gcov) tool. So far I am able to get code coverage results, but they are always the same. Here is a short recap on what I've done. Downloaded source code. $ cd /opt $ wget http://nginx.org/download/nginx-1.2.1.tar.gz $ tar xzvf nginx-1.2.1.tar.gz $ cd nginx-1.2.1 $ chmod -R 777 . Compiled source code with following compiler and linker flags. $ ./configure --with-cc-opt=--coverage --with-ld-opt=-lgcov $ make $ make install I have done code coverage measurements in following order. $ lcov --directory /opt/nginx-1.2.1 --zerocounters $ /usr/local/nginx/sbin/nginx Made an HTTP request to nginx server. $ lynx http://localhost:80 $ /usr/local/nginx/sbin/nginx/nginx -s quit $ lcov --directory /opt/nginx-1.2.1 --capture --output-file app.info --base-directory /opt/nginx-1.2.1 $ genhtml -o /opt/report1 app.info Here are the code coverage results. lines......: 15.1% (4159 of 27467 lines) functions..: 22.6% (262 of 1158 functions) branches...: 11.7% (2231 of 19121 branches) My problem is that I get exactly the same code coverage result when I don't even make the HTTP request. I think the code coverage in this case should be smaller than when an HTTP request has been made. And I get exactly the same code coverage result even if I use several different request methods (GET, HEAD, PUT, DELETE, POST and TRACE). I think the code coverage in this case should be bigger than when only one HTTP request has been made. Could someone please give an advice on this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227329,227329#msg-227329 From reallfqq-nginx at yahoo.fr Thu Jun 7 18:48:10 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 7 Jun 2012 14:48:10 -0400 Subject: nginx 1.3.1 not PHP-FPM friendly In-Reply-To: <4FD0D005.3010603@fedtrek.com> References: <20120605143050.GW31671@mdounin.ru> <4FCECDEA.8080607@fedtrek.com> <4FCEDE9D.8080107@fedtrek.com> <4FCFDE82.4030303@fedtrek.com> <20120607100137.211840@gmx.net> <4FD0D005.3010603@fedtrek.com> Message-ID: If ISPConfig is running without trouble, it seems the problem doesn't come from Nginx or PHP-FPM themselves. Check configuration, intermediates, conflicts, etc. Maybe purge or start from scratch on a separate small installation of Nginx + PHP-FPM? Try to clean-up things and remove unecessary features to get back to a very basic service. That's what I do when strange or apparently illogical problems arouse, --- *B. R.* On Thu, Jun 7, 2012 at 12:00 PM, Brandon Amaro wrote: > My investigation of things on my server have just gotten a bit weirder. My > server has ISPConfig installed and the web based control panel is not > effected at all by this slowness. Yet everything on my site is taking an > insane amount of time to load if it loads at all. Even a simple php file > that contains only phpinfo(); takes forever... I'm completely baffled by > this. 
> > > -- > omega13a > Owner and Founder of UFT > http://www.fedtrek.com > > > > On 06/07/2012 03:01 AM, Steve wrote: > >> -------- Original-Nachricht -------- >> >>> Datum: Wed, 06 Jun 2012 15:49:38 -0700 >>> Von: Brandon Amaro >>> An: nginx at nginx.org >>> Betreff: Re: nginx 1.3.1 not PHP-FPM friendly >>> I'm still having problems. I forgot to mention that I'm running PHP >>> 5.4.3. >>> >>> I run nginx 1.3.1 and PHP 5.4.3 too and have no issues at all. >> >> >> -- >>> omega13a >>> Owner and Founder of UFT >>> http://www.fedtrek.com >>> >>> >>> >>> On 06/05/2012 09:37 PM, Brandon Amaro wrote: >>> >>>> I'm assuming this is the error as that the error long is filled with >>>> similar messages since around the time I upgraded: >>>> >>>> >>>> 2012/06/05 21:35:23 [error] 5976#0: *43558 connect() to >>>> unix:/var/lib/php5-fpm/web1.**sock failed (11: Resource temporarily >>>> unavailable) while connecting to upstream, client: 72.253.115.223, >>>> server: fedtrek.com, request: "POST >>>> /Forums-file-ajax_online_**update-mypage-minus200.html HTTP/1.1", >>>> upstream: "fastcgi://unix:/var/lib/php5-**fpm/web1.sock:", host: >>>> "www.fedtrek.com", referrer: >>>> "http://www.fedtrek.com/Borg_**Species_Designations.html >>>> " >>>> >>>> On 06/05/2012 09:28 PM, B.R. wrote: >>>> >>>>> Wild guess: maybe some file openings limit reached? >>>>> >>>>> Most important thing to start: what does the error log says? >>>>> --- >>>>> *B. R.* >>>>> >>>>> >>>>> On Tue, Jun 5, 2012 at 11:26 PM, Brandon Amaro>>>> > wrote: >>>>> >>>>> After I upgraded to nginx 1.3.1, I've been having a problem with >>>>> PHP-FPM. It would work fine for a few minutes then I would start >>>>> getting 500 Internal Server Errors on all the pages that use PHP. >>>>> I have to keep restaring the PHP-FPM service in order to >>>>> navigate my website. Everything was running smoothly before >>>>> upgrading nginx and I've haven't made any recent changes in the >>>>> config files for both nginx and anything PHP related. I'm running >>>>> Fedora 14 (can't upgrade to anything more recent) and compiled >>>>> nginx myself as I've always done in the past. Any help would be >>>>> greatly appreciated. >>>>> >>>>> -- >>>>> omega13a >>>>> Owner and Founder of UFT >>>>> http://www.fedtrek.com >>>>> >>>>> ______________________________**_________________ >>>>> nginx mailing list >>>>> nginx at nginx.org> >>>>> http://mailman.nginx.org/**mailman/listinfo/nginx >>>>> >>>>> >>>>> >>>>> >>>>> ______________________________**_________________ >>>>> nginx mailing list >>>>> nginx at nginx.org >>>>> http://mailman.nginx.org/**mailman/listinfo/nginx >>>>> >>>> >>>> -- >>>> omega13a >>>> Owner and Founder of UFT >>>> http://www.fedtrek.com >>>> >>>> >>>> ______________________________**_________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/**mailman/listinfo/nginx >>>> >>> > ______________________________**_________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/**mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From steeeeeveee at gmx.net Thu Jun 7 20:24:09 2012 From: steeeeeveee at gmx.net (Steve) Date: Thu, 07 Jun 2012 22:24:09 +0200 Subject: nginx 1.3.1 not PHP-FPM friendly In-Reply-To: <4FD0D005.3010603@fedtrek.com> References: <20120605143050.GW31671@mdounin.ru> <4FCECDEA.8080607@fedtrek.com> <4FCEDE9D.8080107@fedtrek.com> <4FCFDE82.4030303@fedtrek.com> <20120607100137.211840@gmx.net> <4FD0D005.3010603@fedtrek.com> Message-ID: <20120607202409.232830@gmx.net> -------- Original-Nachricht -------- > Datum: Thu, 07 Jun 2012 09:00:05 -0700 > Von: Brandon Amaro > An: nginx at nginx.org > Betreff: Re: nginx 1.3.1 not PHP-FPM friendly > My investigation of things on my server have just gotten a bit weirder. > My server has ISPConfig installed and the web based control panel is not > effected at all by this slowness. Yet everything on my site is taking an > insane amount of time to load if it loads at all. Even a simple php file > that contains only phpinfo(); takes forever... I'm completely baffled by > this. > But all of this has nothing to do with nginx. Does it take the same amount of time if you call with php cli? Probably yes. Right? > -- > omega13a > Owner and Founder of UFT > http://www.fedtrek.com > > > > On 06/07/2012 03:01 AM, Steve wrote: > > -------- Original-Nachricht -------- > >> Datum: Wed, 06 Jun 2012 15:49:38 -0700 > >> Von: Brandon Amaro > >> An: nginx at nginx.org > >> Betreff: Re: nginx 1.3.1 not PHP-FPM friendly > >> I'm still having problems. I forgot to mention that I'm running PHP > 5.4.3. > >> > > I run nginx 1.3.1 and PHP 5.4.3 too and have no issues at all. > > > > > >> -- > >> omega13a > >> Owner and Founder of UFT > >> http://www.fedtrek.com > >> > >> > >> > >> On 06/05/2012 09:37 PM, Brandon Amaro wrote: > >>> I'm assuming this is the error as that the error long is filled with > >>> similar messages since around the time I upgraded: > >>> > >>> > >>> 2012/06/05 21:35:23 [error] 5976#0: *43558 connect() to > >>> unix:/var/lib/php5-fpm/web1.sock failed (11: Resource temporarily > >>> unavailable) while connecting to upstream, client: 72.253.115.223, > >>> server: fedtrek.com, request: "POST > >>> /Forums-file-ajax_online_update-mypage-minus200.html HTTP/1.1", > >>> upstream: "fastcgi://unix:/var/lib/php5-fpm/web1.sock:", host: > >>> "www.fedtrek.com", referrer: > >>> "http://www.fedtrek.com/Borg_Species_Designations.html" > >>> > >>> On 06/05/2012 09:28 PM, B.R. wrote: > >>>> Wild guess: maybe some file openings limit reached? > >>>> > >>>> Most important thing to start: what does the error log says? > >>>> --- > >>>> *B. R.* > >>>> > >>>> > >>>> On Tue, Jun 5, 2012 at 11:26 PM, Brandon Amaro >>>> > wrote: > >>>> > >>>> After I upgraded to nginx 1.3.1, I've been having a problem with > >>>> PHP-FPM. It would work fine for a few minutes then I would start > >>>> getting 500 Internal Server Errors on all the pages that use > PHP. > >>>> I have to keep restaring the PHP-FPM service in order to > >>>> navigate my website. Everything was running smoothly before > >>>> upgrading nginx and I've haven't made any recent changes in the > >>>> config files for both nginx and anything PHP related. I'm > running > >>>> Fedora 14 (can't upgrade to anything more recent) and compiled > >>>> nginx myself as I've always done in the past. Any help would be > >>>> greatly appreciated. 
> >>>> > >>>> -- > >>>> omega13a > >>>> Owner and Founder of UFT > >>>> http://www.fedtrek.com > >>>> > >>>> _______________________________________________ > >>>> nginx mailing list > >>>> nginx at nginx.org > >>>> http://mailman.nginx.org/mailman/listinfo/nginx > >>>> > >>>> > >>>> > >>>> > >>>> _______________________________________________ > >>>> nginx mailing list > >>>> nginx at nginx.org > >>>> http://mailman.nginx.org/mailman/listinfo/nginx > >>> > >>> -- > >>> omega13a > >>> Owner and Founder of UFT > >>> http://www.fedtrek.com > >>> > >>> > >>> _______________________________________________ > >>> nginx mailing list > >>> nginx at nginx.org > >>> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Empfehlen Sie GMX DSL Ihren Freunden und Bekannten und wir belohnen Sie mit bis zu 50,- Euro! https://freundschaftswerbung.gmx.de From ne at vbart.ru Thu Jun 7 20:59:57 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Fri, 8 Jun 2012 00:59:57 +0400 Subject: Code coverage on nginx using lcov (gcov) In-Reply-To: <7c9e6c714adc2c44c6bcbb6a748af0f1.NginxMailingListEnglish@forum.nginx.org> References: <7c9e6c714adc2c44c6bcbb6a748af0f1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201206080059.57528.ne@vbart.ru> On Thursday 07 June 2012 21:57:05 parttis wrote: > Hi > > I am trying to measure code coverage on nginx with lcov (gcov) tool. So > far I am able to get code coverage results, but they are always the > same. Here is a short recap on what I've done. > > Downloaded source code. > > $ cd /opt > $ wget http://nginx.org/download/nginx-1.2.1.tar.gz > $ tar xzvf nginx-1.2.1.tar.gz > $ cd nginx-1.2.1 > $ chmod -R 777 . > > Compiled source code with following compiler and linker flags. > > $ ./configure --with-cc-opt=--coverage --with-ld-opt=-lgcov > $ make > $ make install > > I have done code coverage measurements in following order. > > $ lcov --directory /opt/nginx-1.2.1 --zerocounters > $ /usr/local/nginx/sbin/nginx > > Made an HTTP request to nginx server. > $ lynx http://localhost:80 > > $ /usr/local/nginx/sbin/nginx/nginx -s quit > $ lcov --directory /opt/nginx-1.2.1 --capture --output-file app.info > --base-directory /opt/nginx-1.2.1 > $ genhtml -o /opt/report1 app.info > > Here are the code coverage results. > lines......: 15.1% (4159 of 27467 lines) > functions..: 22.6% (262 of 1158 functions) > branches...: 11.7% (2231 of 19121 branches) > > My problem is that I get exactly the same code coverage result when I > don't even make the HTTP request. I think the code coverage in this case > should be smaller than when an HTTP request has been made. > > And I get exactly the same code coverage result even if I use several > different request methods (GET, HEAD, PUT, DELETE, POST and TRACE). I > think the code coverage in this case should be bigger than when only one > HTTP request has been made. > > Could someone please give an advice on this? > Looks like it measures only the master process, which does not process requests. I don't know much about gcov, probably it can measure multi-process applications, but also you can set the "master_process" directive to "off". http://nginx.org/r/master_process wbr, Valentin V. 
Bartenev From nginx-forum at nginx.us Fri Jun 8 02:25:47 2012 From: nginx-forum at nginx.us (alyn) Date: Thu, 7 Jun 2012 22:25:47 -0400 (EDT) Subject: Converse Comme des Garcons Play Online Message-ID: <64e37a70170f5de70713f17d29c85e17.NginxMailingListEnglish@forum.nginx.org> Material used for the [b][url=http://www.conversefeatured.com/converse-comme-des-garcons-play-c-128.html]Converse Comme des Garcons Play Online[/url][/b] are mostly corduroy, canvas and woolrich. This is not it! You can also pick various styles like [b][url=http://www.conversefeatured.com/converse-comme-des-garcons-play-c-128.html]converse comme des garcons play shoes[/url][/b] on's.We would like to share the names of our best sellers in Converse shoes so you can all check out this great collection. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227335,227335#msg-227335 From simohayha.bobo at gmail.com Fri Jun 8 07:29:21 2012 From: simohayha.bobo at gmail.com (Simon Liu) Date: Fri, 8 Jun 2012 15:29:21 +0800 Subject: nginx configure error when use zsh Message-ID: Hello. nginx configure error when I use zsh. This is my environment: operating system: Linux(3.3.7-1-ARCH) zsh: 4.3.17 (i686-pc-linux-gnu) Nginx: 1.3.1 This is configure message: checking for OS + Linux 3.3.7-1-ARCH i686 checking for C compiler ... found + using GNU C compiler + gcc version: 4.7.0 20120505 (prerelease) (GCC) ......................................................................... ............................................................... checking for Linux specific features *auto/os/linux:154: parse error near `version=$((`uname -r...'* ................................................................................................... *auto/modules:396: read-only variable: modules* ................................................................................... Thanks! -- do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jun 8 07:56:00 2012 From: nginx-forum at nginx.us (lima) Date: Fri, 8 Jun 2012 03:56:00 -0400 (EDT) Subject: Error while connecting to apache from nginx running on same machine Message-ID: Hi, I have an LB setup with nginx for an ssl enabled website. It will be listening at 443 port. For some requests, i need to proxy them to apache running at 7443 port in the same machine. but when i send the request, it is trying to forward it to apache and is getting 500 error. I checked the logs in apache, where there is nothing logged in ssl-error_log (which logs all the errors happening while transferring https requests) but the error_log (which logs all the errors happening while transferring http requests) was showing the message like [error] Hello But if I pass it to apache running in 7443 in some other machine, it is working fine. So, I think there is some problem while handshaking between nginx and apache running on different ports in the same machine. Can some one please assist me how to resolve this..Thanks in advance... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227341,227341#msg-227341 From sb at waeme.net Fri Jun 8 08:56:30 2012 From: sb at waeme.net (Sergey Budnevitch) Date: Fri, 8 Jun 2012 12:56:30 +0400 Subject: nginx configure error when use zsh In-Reply-To: References: Message-ID: <843A9285-0DA9-4475-BFA0-50DCDC43E945@waeme.net> On 08.06.2012, at 11:29, Simon Liu wrote: > Hello. > > nginx configure error when I use zsh. 
Please show me result of ls -l `which sh` > > This is my environment: > > operating system: Linux(3.3.7-1-ARCH) > > zsh: 4.3.17 (i686-pc-linux-gnu) > > Nginx: 1.3.1 > > This is configure message: > > checking for OS > + Linux 3.3.7-1-ARCH i686 > checking for C compiler ... found > + using GNU C compiler > + gcc version: 4.7.0 20120505 (prerelease) (GCC) > ......................................................................... > ............................................................... > checking for Linux specific features > auto/os/linux:154: parse error near `version=$((`uname -r...' > ................................................................................................... > auto/modules:396: read-only variable: modules > ................................................................................... > > > Thanks! > > -- > do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Fri Jun 8 11:55:11 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 8 Jun 2012 15:55:11 +0400 Subject: nginx configure error when use zsh In-Reply-To: References: Message-ID: <20120608115511.GD31671@mdounin.ru> Hello! On Fri, Jun 08, 2012 at 03:29:21PM +0800, Simon Liu wrote: > Hello. > > nginx configure error when I use zsh. > > This is my environment: > > operating system: Linux(3.3.7-1-ARCH) > > zsh: 4.3.17 (i686-pc-linux-gnu) You mean using zsh instead of POSIX-compliant shell for /bin/sh? Looks like bad idea. Zsh docs at most promise it will "try to emulate sh", nothing more. Maxim Dounin From bruno.premont at restena.lu Fri Jun 8 12:40:52 2012 From: bruno.premont at restena.lu (Bruno =?UTF-8?B?UHLDqW1vbnQ=?=) Date: Fri, 8 Jun 2012 14:40:52 +0200 Subject: nginx worker segfault, NULL pool Message-ID: <20120608144052.79d06009@pluto.restena.lu> Hi, Running nginx on ARM I'm having it segfault at about any request (those known not to crash are /status/nginx and /status/php-fpm). Attaching it with GDB I get the following trace: Program received signal SIGSEGV, Segmentation fault. ngx_alloc_chain_link (pool=0x0) at src/core/ngx_buf.c:52 52 src/core/ngx_buf.c: No such file or directory. 
in src/core/ngx_buf.c (gdb) backtrace #0 ngx_alloc_chain_link (pool=0x0) at src/core/ngx_buf.c:52 #1 0x00012290 in ngx_chain_writer (data=0x525b24, in=) at src/core/ngx_output_chain.c:626 #2 0x0001202c in ngx_output_chain (ctx=0x525ae4, in=0x5260a4) at src/core/ngx_output_chain.c:66 #3 0x0004a6d0 in ngx_http_upstream_send_request (r=0x524c18, u=0x525a9c) at src/http/ngx_http_upstream.c:1394 #4 0x0004aeec in ngx_http_upstream_init_request (r=0x524c18) at src/http/ngx_http_upstream.c:645 #5 ngx_http_upstream_init (r=0x524c18) at src/http/ngx_http_upstream.c:446 #6 0x000427a4 in ngx_http_read_client_request_body (r=0x524c18, post_handler=0x4ac80 ) at src/http/ngx_http_request_body.c:59 #7 0x000612e0 in ngx_http_fastcgi_handler (r=0x524c18) at src/http/modules/ngx_http_fastcgi_module.c:636 #8 0x00036d18 in ngx_http_core_content_phase (r=0x524c18, ph=0x54ce08) at src/http/ngx_http_core_module.c:1396 #9 0x00032458 in ngx_http_core_run_phases (r=0x524c18) at src/http/ngx_http_core_module.c:877 #10 0x00037848 in ngx_http_internal_redirect (r=0x524c18, uri=, args=) at src/http/ngx_http_core_module.c:2545 #11 0x0004dad0 in ngx_http_index_handler (r=0x524c18) at src/http/modules/ngx_http_index_module.c:277 #12 0x00036d38 in ngx_http_core_content_phase (r=0x524c18, ph=0x54ce08) at src/http/ngx_http_core_module.c:1403 #13 0x00032458 in ngx_http_core_run_phases (r=0x524c18) at src/http/ngx_http_core_module.c:877 #14 0x0003bccc in ngx_http_process_request (r=0x524c18) at src/http/ngx_http_request.c:1688 #15 0x0003c6e0 in ngx_http_process_request_line (rev=0x40a5b10c) at src/http/ngx_http_request.c:932 #16 0x000397b8 in ngx_http_init_request (rev=0x40a5b10c) at src/http/ngx_http_request.c:519 #17 0x0002bf70 in ngx_epoll_process_events (cycle=, timer=, flags=) at src/event/modules/ngx_epoll_module.c:679 #18 0x00023d0c in ngx_process_events_and_timers (cycle=0x51ec18) at src/event/ngx_event.c:247 #19 0x0002a278 in ngx_worker_process_cycle (cycle=, data=) at src/os/unix/ngx_process_cycle.c:806 #20 0x00028920 in ngx_spawn_process (cycle=0x51ec18, proc=0, data=0x40096918, name=0x69d00 "worker process", respawn=-3) at src/os/unix/ngx_process.c:198 #21 0x0002a6f0 in ngx_start_worker_processes (cycle=0x51ec18, n=1, type=-3) at src/os/unix/ngx_process_cycle.c:365 #22 0x0002acb0 in ngx_master_process_cycle (cycle=0x51ec18) at src/os/unix/ngx_process_cycle.c:137 #23 0x0000eb64 in main (argc=, argv=) at src/core/nginx.c:410 System is Gentoo on ARM (armv5tel), nginx -V (applied patch: forward-ported ipv6-geoip support patch as attached): nginx version: nginx/1.2.1 TLS SNI support enabled configure arguments: --prefix=/usr --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --with-cc-opt=-I/usr/include --with-ld-opt=-L/usr/lib --http-log-path=/var/log/nginx/access_log --http-client-body-temp-path=/var/tmp/nginx/client --http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi --http-scgi-temp-path=/var/tmp/nginx/scgi --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi --with-file-aio --with-aio_module --with-ipv6 --with-pcre --without-http_browser_module --without-http_charset_module --without-http_empty_gif_module --without-http_memcached_module --without-http_proxy_module --without-http_referer_module --without-http_scgi_module --without-http_split_clients_module --without-http_userid_module --without-http_uwsgi_module --with-http_geoip_module --with-http_stub_status_module --with-http_xslt_module 
--with-http_realip_module --add-module=/var/tmp/portage/www-servers/nginx-1.2.1/work/agentzh-headers-more-nginx-module-3580526 --without-http-cache --with-http_ssl_module --without-mail_imap_module --without-mail_pop3_module --without-mail_smtp_module --user=nginx --group=nginx Having a look at the code it means that ngx_http_upstream_t->output->filter_ctx->pool is NULL but still being dereferenced... I have seen equivalent crash behavior for nginx-1.2.0 (no analysed or check exact cause with gdb and debug symbols) on the same host but have not seen crashes on an x86 system with 1.2.0. Note, config might help trigger the issue, quoted below: ############ nginx.conf ############### user nginx nginx; daemon off; worker_processes 1; worker_cpu_affinity 0001; worker_rlimit_nofile 65535; error_log /var/log/nginx/error_log info; events { accept_mutex off; worker_connections 10240; use epoll; } http { include /etc/nginx/mime.types; #default_type application/octet-stream; server_names_hash_bucket_size 64; geoip_country /usr/share/GeoIP/GeoIPv6.dat; log_format main '$remote_addr $host $remote_user [$time_local] ' '"$request" $status $bytes_sent ' '"$http_referer" "$http_user_agent" $request_time "$gzip_ratio" -'; log_format main_ssl '$remote_addr $host $remote_user [$time_local] ' '"$request" $status $bytes_sent ' '"$http_referer" "$http_user_agent" $request_time "$gzip_ratio" $ssl_protocol'; client_header_timeout 10m; client_body_timeout 10m; send_timeout 10m; connection_pool_size 256; client_header_buffer_size 1k; large_client_header_buffers 4 2k; request_pool_size 4k; gzip on; gzip_min_length 1100; gzip_buffers 4 8k; gzip_types text/plain application/xhtml+xml text/css application/javascript application/xml application/json; output_buffers 1 32k; postpone_output 1460; sendfile off; tcp_nopush on; tcp_nodelay on; keepalive_timeout 75 20; ignore_invalid_headers on; index index.html; # HTTP catch-all server { listen :80 default_server; listen [:80; listen [; # replaced subnet with placeholder allow ; allow ; allow ; deny all; root /home/www/htdocs; index index.php index.html; error_page 400 /error400.php; error_page 403 /error403.php; error_page 404 /error404.php; error_page 500 /error500.php; error_page 502 /error502.php; error_page 503 /error503.php; rewrite ^/$ /status.html redirect; # Status monitoring pages location ~ ^/status/php-fpm$ { include fastcgi_params; fastcgi_buffer_size 8k; fastcgi_buffers 16 4k; fastcgi_param SCRIPT_FILENAME /dev/null; fastcgi_param REDIRECT_STATUS 200; fastcgi_pass unix:/run/php-fpm/fpm.socket; } location = /status/nginx { stub_status on; } # Remaining pages location ~ ^/(?.*)\.html$ { # Rewrite non-html pages to php if (-f $request_filename) { break; } if (-f $document_root/$page.php ) { rewrite ^ /$page.php last; } } rewrite ^/rrdgraph.(png|svg|pdf|eps)$ /rrdgraph.php last; location ~ ^/(?.*/)?error(?[0-9]+)\.php$ { # Handle error pages if (!-f $document_root/$path/error.php) { rewrite ^ /error404.txt last; } if ($ecode !~ [0-9]+) { set $ecode 200; } include fastcgi_params; fastcgi_buffer_size 8k; fastcgi_buffers 16 4k; fastcgi_param SCRIPT_FILENAME $document_root/$path/error.php; fastcgi_param REDIRECT_STATUS $ecode; fastcgi_pass unix:/var/run/php-fpm/fpm.socket; } location ~ \.php$ { # Handle PHP pages if (!-f $request_filename) { rewrite ^ /error404.php last; } include fastcgi_params; fastcgi_buffer_size 8k; fastcgi_buffers 16 4k; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param REDIRECT_STATUS 200; fastcgi_pass unix:/var/run/php-fpm/fpm.socket; 
} location /img/ { expires 1h; } location /css/ { expires 1h; } location /js/ { expires 1h; } ########## include-2-end ############### } ########## include-1-end ############### } -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-1.2.1-geoip-ipv6.patch Type: text/x-patch Size: 10076 bytes Desc: not available URL: From mdounin at mdounin.ru Fri Jun 8 15:31:39 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 8 Jun 2012 19:31:39 +0400 Subject: nginx worker segfault, NULL pool In-Reply-To: <20120608144052.79d06009@pluto.restena.lu> References: <20120608144052.79d06009@pluto.restena.lu> Message-ID: <20120608153139.GF31671@mdounin.ru> Hello! On Fri, Jun 08, 2012 at 02:40:52PM +0200, Bruno Pr?mont wrote: > Running nginx on ARM I'm having it segfault at about any request (those > known not to crash are /status/nginx and /status/php-fpm). > Attaching it with GDB I get the following trace: [...] > geoip_country /usr/share/GeoIP/GeoIPv6.dat; Is it works for you if you don't use GeoIP? [...] Maxim Dounin From supairish at gmail.com Fri Jun 8 15:48:44 2012 From: supairish at gmail.com (Chris Irish) Date: Fri, 8 Jun 2012 08:48:44 -0700 Subject: Expires Headers under https Message-ID: Hey all, I'm having an Nginx problem with my Rails app. When I try setting Expires Headers for Static Assets in an https server block, they 404. I've tried a couple different ways but can't seem to get it to work. Here's a gist of what I've tried. Any suggestions? https://gist.github.com/2896294 Much appreciated -- Chris Irish Burst Software Rails Web Development w: www.christopherirish.com e: supairish at gmail.com c: 623-523-2221 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Jun 8 16:39:49 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 8 Jun 2012 20:39:49 +0400 Subject: Expires Headers under https In-Reply-To: References: Message-ID: <20120608163949.GI31671@mdounin.ru> Hello! On Fri, Jun 08, 2012 at 08:48:44AM -0700, Chris Irish wrote: > Hey all, > I'm having an Nginx problem with my Rails app. When I try setting > Expires Headers for Static Assets in an https server block, they 404. I've > tried a couple different ways but can't seem to get it to work. Here's a > gist of what I've tried. Any suggestions? > > https://gist.github.com/2896294 Most likely you are hitting this "problem": http://wiki.nginx.org/Pitfalls#Root_inside_Location_Block Can't say for sure as you've "ommitted the non relevant config parts for clarity". Maxim Dounin From cordmacleod at gmail.com Fri Jun 8 17:38:25 2012 From: cordmacleod at gmail.com (Cord MacLeod) Date: Fri, 8 Jun 2012 10:38:25 -0700 Subject: filtering access logs Message-ID: <4D535C8C-A5ED-4A39-8E3A-19A4F17FF36F@gmail.com> I have an nginx server that terminates SSL and subsequently logs usernames and passwords sent in the URL string in the logs. There is a particular compliance regulation that requires me to scrub these out with xxxx or some set of arbitrary characters before they are written to disk. Is there a facility or way of doing this with nginx or some add on?
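For the record, the closest I have come up with so far is an untested sketch that builds a scrubbed copy of the query string with the rewrite module and logs that instead of the raw $request; the parameter name "password" and the log format below are only examples:

    # in the http{} block: a log format that uses the scrubbed variable
    log_format scrubbed '$remote_addr - $remote_user [$time_local] '
                        '"$request_method $uri?$loggable_args $server_protocol" '
                        '$status $body_bytes_sent "$http_referer" "$http_user_agent"';

    # in the server{} block: mask the value of a "password" argument, if present
    set $loggable_args $args;
    if ($args ~ "^(.*)password=[^&]*(.*)$") {
        set $loggable_args "$1password=xxxx$2";
    }

    access_log /var/log/nginx/access.log scrubbed;

It is a sketch only (requests without a query string would log a dangling "?"), but it keeps everything inside stock nginx.
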
From bruno.premont at restena.lu Fri Jun 8 19:43:44 2012 From: bruno.premont at restena.lu (Bruno =?UTF-8?B?UHLDqW1vbnQ=?=) Date: Fri, 8 Jun 2012 21:43:44 +0200 Subject: nginx worker segfault, NULL pool In-Reply-To: <20120608153139.GF31671@mdounin.ru> References: <20120608144052.79d06009@pluto.restena.lu> <20120608153139.GF31671@mdounin.ru> Message-ID: <20120608214344.10a8a7a9@neptune.home> Hello Maxim, On Fri, 08 June 2012 Maxim Dounin wrote: > On Fri, Jun 08, 2012 at 02:40:52PM +0200, Bruno Pr?mont wrote: > > Running nginx on ARM I'm having it segfault at about any request (those > > known not to crash are /status/nginx and /status/php-fpm). > > Attaching it with GDB I get the following trace: > > [...] > > > geoip_country /usr/share/GeoIP/GeoIPv6.dat; > > Is it works for you if you don't use GeoIP? Just disabling it config side makes no difference. I will try disabling it at configure time and see if it changes anything, though I doubt it will. Oh, and in case it is important, my fastcgi_params include some extra options: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param HTTPS $https if_not_empty; fastcgi_param SSL_CIPHER $ssl_cipher if_not_empty; fastcgi_param SSL_PROTOCOL $ssl_protocol if_not_empty; fastcgi_param SSL_SESSION_ID $ssl_session_id if_not_empty; fastcgi_param SSL_CLIENT_SERIAL $ssl_client_serial if_not_empty; fastcgi_param SSL_CLIENT_S_DN $ssl_client_s_dn if_not_empty; fastcgi_param SSL_CLIENT_I_DN $ssl_client_i_dn if_not_empty; fastcgi_param SSL_CLIENT_CERT $ssl_client_cert if_not_empty; fastcgi_param SSL_CLIENT_RAW_CERT $ssl_client_raw_cert if_not_empty; fastcgi_param SSL_CLIENT_VERIFY $ssl_client_verify if_not_empty; # disabled as requested #fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code; #fastcgi_param GEOIP_COUNTRY_NAME $geoip_country_name; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; > [...] > > Maxim Dounin Bruno From nginx-forum at nginx.us Fri Jun 8 20:31:17 2012 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 8 Jun 2012 16:31:17 -0400 (EDT) Subject: Error while connecting to apache from nginx running on same machine In-Reply-To: References: Message-ID: <7218c6792a667f92b837417745b7c8fe.NginxMailingListEnglish@forum.nginx.org> Show us the relative config parts, we're just guessing without it. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227341,227371#msg-227371 From bruno.premont at restena.lu Fri Jun 8 21:40:46 2012 From: bruno.premont at restena.lu (Bruno =?UTF-8?B?UHLDqW1vbnQ=?=) Date: Fri, 8 Jun 2012 23:40:46 +0200 Subject: nginx worker segfault, NULL pool In-Reply-To: <20120608214344.10a8a7a9@neptune.home> References: <20120608144052.79d06009@pluto.restena.lu> <20120608153139.GF31671@mdounin.ru> <20120608214344.10a8a7a9@neptune.home> Message-ID: <20120608234046.48857ccb@neptune.home> Hello Maxim, > On Fri, 08 June 2012 Maxim Dounin wrote: > > On Fri, Jun 08, 2012 at 02:40:52PM +0200, Bruno Prémont wrote: > > > Running nginx on ARM I'm having it segfault at about any request (those > > > known not to crash are /status/nginx and /status/php-fpm). > > > Attaching it with GDB I get the following trace: > > > > [...] > > > > > geoip_country /usr/share/GeoIP/GeoIPv6.dat; > > > > Is it works for you if you don't use GeoIP? > > Just disabling it config side makes no difference. > > I will try disabling it at configure time and see if it changes > anything, though I doubt it will. Exact same result when geoip support is not built at all. Looking more closely at the URLs I tested, static files like images don't crash the worker; only those that get handled by the php-fpm upstream do (e.g. /collectd/, which implies /collectd/index.php). For the static files the result on the browser side looks the same (connection closed before getting any content), but this time around nginx logs something to the error log: 2012/06/08 23:30:42 [alert] 20638#0: *2 pread() read only 400 of 32768 from "/home/www/htdocs/collectd/add.png" while sending response to client, client: 123.123.123.123, server: arm.tld, request: "GET /collectd/add.png HTTP/1.1", host: "arm.tld" Though stat on that file returns: File: `/home/www/htdocs/collectd/add.png' Size: 400 Blocks: 8 IO Block: 4096 regular file Device: b302h/45826d Inode: 33 Links: 1 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2010-02-26 17:31:32.899120476 +0100 Modify: 2010-02-26 17:31:32.899120476 +0100 Change: 2011-05-07 01:19:21.440005750 +0200 Birth: - Toggling sendfile back to "on" seems to get file access to work again (a few days ago with 1.2.0, turning it off made it work for a day or so)??? Bruno From nginx-forum at nginx.us Sat Jun 9 06:44:23 2012 From: nginx-forum at nginx.us (torajx) Date: Sat, 9 Jun 2012 02:44:23 -0400 (EDT) Subject: block access to a file !! In-Reply-To: <20120606143153.GG4719@craic.sysops.org> References: <20120606143153.GG4719@craic.sysops.org> Message-ID: Thank you again; I will test and inform you here... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227281,227376#msg-227376 From nginx-forum at nginx.us Sat Jun 9 08:41:23 2012 From: nginx-forum at nginx.us (valinor) Date: Sat, 9 Jun 2012 04:41:23 -0400 (EDT) Subject: nginx default server not used In-Reply-To: References: Message-ID: <97221a02eacadcfdd878d65ba48ebf34.NginxMailingListEnglish@forum.nginx.org> Hello, I have found another possible cause for nginx to ignore the default server. I encountered it just now, and it is not your case, but maybe it will be helpful for someone. For example: server { listen 80 default_server; server_name default.domain.dom; ................ } server { listen :80; server_name some.server.domain.dom; .................. } In this case, the listening schema for the default server is _not_the_same_ as for the other server.
Considering the fact that we have described a server with the particular ip address, our default server for that ip (although it is listening on all IPs) would be ignored, and all queries to that particular ip, even with "Host: default.domain.dom", would be directed to "some.server.domain.dom" as it becomes a default (first-described) server for this listening schema. We have to explicitly describe "listen :80" on the default server to enable it for this schema. WBW, valinor Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222679,227377#msg-227377 From nginx-forum at nginx.us Sat Jun 9 10:25:55 2012 From: nginx-forum at nginx.us (zgen) Date: Sat, 9 Jun 2012 06:25:55 -0400 (EDT) Subject: Freebsd/jail: nginx ignores IP in listen directive Message-ID: FreeBSD 8.3/amd64 nginx 1.2.1 jail default config, but listen is: server { listen ip_addr_of_jail:80; ... } but $ telnet 127.0.0.1 80 gets answer from nginx. If I remark this listen directive - nginx isn't answer. Why nginx listens localhost? I found something similar to this here: http://mailman.nginx.org/pipermail/nginx/2009-February/009947.html but server { listen 80; allow ip_addr_of_jail; deny all; ... } isn't works too. Thanks for help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227381,227381#msg-227381 From siefke_listen at web.de Sat Jun 9 10:58:14 2012 From: siefke_listen at web.de (Silvio Siefke) Date: Sat, 9 Jun 2012 12:58:14 +0200 Subject: Unable to start php-fpm In-Reply-To: <349b4899bb3c8449ab7f5b01e79b637f.NginxMailingListEnglish@forum.nginx.org> References: <349b4899bb3c8449ab7f5b01e79b637f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120609125814.8db62a67.siefke_listen@web.de> Hello, On Thu, 26 Apr 2012 03:23:30 -0400 (EDT) wrote rite2subodh: > Unable to start php-fpm. > > [root at ip-10-28-197-14 html]# service php-fpm start > Starting php-fpm: [ OK ] > [26-Apr-2012 07:22:32] ERROR: [pool www] cannot get uid for user > 'apache' > [26-Apr-2012 07:22:32] ERROR: FPM initialization failed is apache user present on system= Regards Silvio From mdounin at mdounin.ru Sat Jun 9 13:48:03 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 9 Jun 2012 17:48:03 +0400 Subject: nginx worker segfault, NULL pool In-Reply-To: <20120608234046.48857ccb@neptune.home> References: <20120608144052.79d06009@pluto.restena.lu> <20120608153139.GF31671@mdounin.ru> <20120608214344.10a8a7a9@neptune.home> <20120608234046.48857ccb@neptune.home> Message-ID: <20120609134803.GK31671@mdounin.ru> Hello! On Fri, Jun 08, 2012 at 11:40:46PM +0200, Bruno Pr?mont wrote: > Hello Maxim, > > > On Fri, 08 June 2012 Maxim Dounin wrote: > > > On Fri, Jun 08, 2012 at 02:40:52PM +0200, Bruno Pr?mont wrote: > > > > Running nginx on ARM I'm having it segfault at about any request (those > > > > known not to crash are /status/nginx and /status/php-fpm). > > > > Attaching it with GDB I get the following trace: > > > > > > [...] > > > > > > > geoip_country /usr/share/GeoIP/GeoIPv6.dat; > > > > > > Is it works for you if you don't use GeoIP? > > > > Just disabling it config side makes no difference. > > > > I will try disabling it at configure time and see if it changes > > anything, though I doubt it will. > > Exact same result when geoip support is not built at all. > > > Looking more exactly at the URLs I tested, static file like images > don't crash the worker, just those that get handled by php-fpm upstream > do (e.g. /collectd/ which implies /collectd/index.php). You've claimed above "/status/php-fpm" works ok too. Is it was mistake? 
Anyway, please make sure you have aligment problems properly reported by a kernel. It looks like the linux kernel has an unfortunate default to silently ignore alignment problems on arm, which results in data corruption on unaligned accesses instead of immediate exit on SIGBUS when unaligned access happens. You may get proper behaviour with echo 4 > /proc/cpu/alignment This should allow to trace a root of your problems. See http://lecs.cs.ucla.edu/wiki/index.php/XScale_alignment for more details. Maxim Dounin From bruno.premont at restena.lu Sat Jun 9 14:54:01 2012 From: bruno.premont at restena.lu (Bruno =?UTF-8?B?UHLDqW1vbnQ=?=) Date: Sat, 9 Jun 2012 16:54:01 +0200 Subject: nginx worker segfault, NULL pool In-Reply-To: <20120609134803.GK31671@mdounin.ru> References: <20120608144052.79d06009@pluto.restena.lu> <20120608153139.GF31671@mdounin.ru> <20120608214344.10a8a7a9@neptune.home> <20120608234046.48857ccb@neptune.home> <20120609134803.GK31671@mdounin.ru> Message-ID: <20120609165401.6a64e975@neptune.home> Hallo Maxim, On Sat, 09 June 2012 Maxim Dounin wrote: > On Fri, Jun 08, 2012 at 11:40:46PM +0200, Bruno Pr?mont wrote: > > > On Fri, 08 June 2012 Maxim Dounin wrote: > > > > On Fri, Jun 08, 2012 at 02:40:52PM +0200, Bruno Pr?mont wrote: > > > > > Running nginx on ARM I'm having it segfault at about any request (those > > > > > known not to crash are /status/nginx and /status/php-fpm). > > > > > Attaching it with GDB I get the following trace: > > > > > > > > [...] > > > > > > > > > geoip_country /usr/share/GeoIP/GeoIPv6.dat; > > > > > > > > Is it works for you if you don't use GeoIP? > > > > > > Just disabling it config side makes no difference. > > > > > > I will try disabling it at configure time and see if it changes > > > anything, though I doubt it will. > > > > Exact same result when geoip support is not built at all. > > > > > > Looking more exactly at the URLs I tested, static file like images > > don't crash the worker, just those that get handled by php-fpm upstream > > do (e.g. /collectd/ which implies /collectd/index.php). > > You've claimed above "/status/php-fpm" works ok too. Is it was > mistake? > > Anyway, please make sure you have aligment problems properly > reported by a kernel. It looks like the linux kernel has an > unfortunate default to silently ignore alignment problems on arm, > which results in data corruption on unaligned accesses instead of > immediate exit on SIGBUS when unaligned access happens. You may > get proper behaviour with > > echo 4 > /proc/cpu/alignment > > This should allow to trace a root of your problems. > > See http://lecs.cs.ucla.edu/wiki/index.php/XScale_alignment for > more details. Thanks for the pointer, will read trough it! Seems to be that one, after echoing 4 to /proc/cpu/alignment nginx does not even start anymore (and `nginx -t` fails as well), each time with SIGBUS. e.g. for `nginx -t` the first SIGBUS happens at #0 0x0000d64c in ngx_set_cpu_affinity (cf=0xbe892358, cmd=, conf=) at src/core/nginx.c:1275 #1 0x0001cafc in ngx_conf_handler (last=13909340, cf=0xbe892358) at src/core/ngx_conf_file.c:394 #2 ngx_conf_parse (cf=0xbe892358, filename=0xd43d70) at src/core/ngx_conf_file.c:244 #3 0x0001aba4 in ngx_init_cycle (old_cycle=0xbe8923c0) at src/core/ngx_cycle.c:268 #4 0x0000e29c in main (argc=, argv=) at src/core/nginx.c:331 as backtraced with gdb. 
Thanks, Bruno From mdounin at mdounin.ru Sat Jun 9 15:19:33 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 9 Jun 2012 19:19:33 +0400 Subject: Freebsd/jail: nginx ignores IP in listen directive In-Reply-To: References: Message-ID: <20120609151933.GL31671@mdounin.ru> Hello! On Sat, Jun 09, 2012 at 06:25:55AM -0400, zgen wrote: > FreeBSD 8.3/amd64 > nginx 1.2.1 > jail > > default config, but listen is: > > server { > listen ip_addr_of_jail:80; > ... > } > > but > > $ telnet 127.0.0.1 80 > gets answer from nginx. If I remark this listen directive - nginx isn't > answer. > > Why nginx listens localhost? Because there is no localhost in a jail, it's instead emulated by the kernel by using jail's ip instead of localhost. This is how jails work, nothing to do with nginx. > I found something similar to this here: > > http://mailman.nginx.org/pipermail/nginx/2009-February/009947.html > > but > server { > listen 80; > allow ip_addr_of_jail; > deny all; > ... > } > > isn't works too. This config is expected to resolve the opposite issue: as there is no localhost in a jail one can't listen on it as well (it will listen on jail's ip instead). So to allow only local connections it's not enough to use "listen 127.0.0.1", the allow/deny directives are needed as an additional protection. Maxim Dounin From nginx-forum at nginx.us Sat Jun 9 17:56:15 2012 From: nginx-forum at nginx.us (nfn) Date: Sat, 9 Jun 2012 13:56:15 -0400 (EDT) Subject: fastcgi_cache php5-fpm and ssi problem Message-ID: <0c74899a93cd22f73d29c0bad7fe6d40.NginxMailingListEnglish@forum.nginx.org> Hi, I'm try to use ssi with fastcgi_cache (fpm) with but the ssi isn't executed. When I view the source, the include command is listed and nothing appends: Is there any incompatibility about running ssi and fastcgi_cache? Thanks Nuno Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227393,227393#msg-227393 From nginx-forum at nginx.us Sat Jun 9 18:04:05 2012 From: nginx-forum at nginx.us (nfn) Date: Sat, 9 Jun 2012 14:04:05 -0400 (EDT) Subject: fastcgi_cache php5-fpm and ssi problem In-Reply-To: <0c74899a93cd22f73d29c0bad7fe6d40.NginxMailingListEnglish@forum.nginx.org> References: <0c74899a93cd22f73d29c0bad7fe6d40.NginxMailingListEnglish@forum.nginx.org> Message-ID: Just a quick note. I'm testing this with IPB and the request uri (/forum/topic/1-welcome/) is redirect internally using try_files to /index.php: 2012/06/09 03:20:53 [debug] 2811#0: *1 http script var: "/forum/topic/1-welcome/" 2012/06/09 03:20:53 [debug] 2811#0: *1 trying to use file: "/forum/topic/1-welcome/" "/usr/share/nginx/www/forum/topic/1-welcome/" 2012/06/09 03:20:53 [debug] 2811#0: *1 http script var: "/forum/topic/1-welcome/" 2012/06/09 03:20:53 [debug] 2811#0: *1 trying to use dir: "/forum/topic/1-welcome/" "/usr/share/nginx/www/forum/topic/1-welcome/" 2012/06/09 03:20:53 [debug] 2811#0: *1 trying to use file: "/forum/index.php" "/usr/share/nginx/www/forum/index.php" 2012/06/09 03:20:53 [debug] 2811#0: *1 internal redirect: "/forum/index.php?" In the logs, using debug I see a line like this referring ssi: 2012/06/09 03:20:53 [debug] 2811#0: *1 http ssi filter "/forum/index.php?" 
Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227393,227394#msg-227394 From mdounin at mdounin.ru Sun Jun 10 00:03:18 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 10 Jun 2012 04:03:18 +0400 Subject: nginx worker segfault, NULL pool In-Reply-To: <20120609165401.6a64e975@neptune.home> References: <20120608144052.79d06009@pluto.restena.lu> <20120608153139.GF31671@mdounin.ru> <20120608214344.10a8a7a9@neptune.home> <20120608234046.48857ccb@neptune.home> <20120609134803.GK31671@mdounin.ru> <20120609165401.6a64e975@neptune.home> Message-ID: <20120610000318.GM31671@mdounin.ru> Hello! On Sat, Jun 09, 2012 at 04:54:01PM +0200, Bruno Pr?mont wrote: > Hallo Maxim, > > On Sat, 09 June 2012 Maxim Dounin wrote: > > On Fri, Jun 08, 2012 at 11:40:46PM +0200, Bruno Pr?mont wrote: > > > > On Fri, 08 June 2012 Maxim Dounin wrote: > > > > > On Fri, Jun 08, 2012 at 02:40:52PM +0200, Bruno Pr?mont wrote: > > > > > > Running nginx on ARM I'm having it segfault at about any request (those > > > > > > known not to crash are /status/nginx and /status/php-fpm). > > > > > > Attaching it with GDB I get the following trace: > > > > > > > > > > [...] > > > > > > > > > > > geoip_country /usr/share/GeoIP/GeoIPv6.dat; > > > > > > > > > > Is it works for you if you don't use GeoIP? > > > > > > > > Just disabling it config side makes no difference. > > > > > > > > I will try disabling it at configure time and see if it changes > > > > anything, though I doubt it will. > > > > > > Exact same result when geoip support is not built at all. > > > > > > > > > Looking more exactly at the URLs I tested, static file like images > > > don't crash the worker, just those that get handled by php-fpm upstream > > > do (e.g. /collectd/ which implies /collectd/index.php). > > > > You've claimed above "/status/php-fpm" works ok too. Is it was > > mistake? > > > > Anyway, please make sure you have aligment problems properly > > reported by a kernel. It looks like the linux kernel has an > > unfortunate default to silently ignore alignment problems on arm, > > which results in data corruption on unaligned accesses instead of > > immediate exit on SIGBUS when unaligned access happens. You may > > get proper behaviour with > > > > echo 4 > /proc/cpu/alignment > > > > This should allow to trace a root of your problems. > > > > See http://lecs.cs.ucla.edu/wiki/index.php/XScale_alignment for > > more details. > > Thanks for the pointer, will read trough it! > > Seems to be that one, after echoing 4 to /proc/cpu/alignment nginx > does not even start anymore (and `nginx -t` fails as well), each time with > SIGBUS. > > e.g. for `nginx -t` the first SIGBUS happens at > > #0 0x0000d64c in ngx_set_cpu_affinity (cf=0xbe892358, cmd=, conf=) at src/core/nginx.c:1275 > #1 0x0001cafc in ngx_conf_handler (last=13909340, cf=0xbe892358) at src/core/ngx_conf_file.c:394 > #2 ngx_conf_parse (cf=0xbe892358, filename=0xd43d70) at src/core/ngx_conf_file.c:244 > #3 0x0001aba4 in ngx_init_cycle (old_cycle=0xbe8923c0) at src/core/ngx_cycle.c:268 > #4 0x0000e29c in main (argc=, argv=) at src/core/nginx.c:331 > > as backtraced with gdb. Ok, this looks sensisble. Could you please provide ./configure output and test if the following patch fixes things for you? diff --git a/auto/os/conf b/auto/os/conf --- a/auto/os/conf +++ b/auto/os/conf @@ -93,6 +93,7 @@ case "$NGX_MACHINE" in ;; *) + have=NGX_ALIGNMENT value=16 . 
auto/define NGX_MACH_CACHE_LINE=32 ;; Maxim Dounin From nginx-forum at nginx.us Sun Jun 10 08:56:53 2012 From: nginx-forum at nginx.us (andytse) Date: Sun, 10 Jun 2012 04:56:53 -0400 (EDT) Subject: how can i rewrite this? Message-ID: i want to rewrite: http://a.com/abcde.jpg to http://a.com/ab/cde.jpg here is my nginx config: rewrite "^/([a-z]{2})([a-z]{3})\.jpg$" /$1/$2.jpg last; but it not works Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227397,227397#msg-227397 From bruno.premont at restena.lu Sun Jun 10 10:49:36 2012 From: bruno.premont at restena.lu (Bruno =?UTF-8?B?UHLDqW1vbnQ=?=) Date: Sun, 10 Jun 2012 12:49:36 +0200 Subject: nginx worker segfault, NULL pool In-Reply-To: <20120610000318.GM31671@mdounin.ru> References: <20120608144052.79d06009@pluto.restena.lu> <20120608153139.GF31671@mdounin.ru> <20120608214344.10a8a7a9@neptune.home> <20120608234046.48857ccb@neptune.home> <20120609134803.GK31671@mdounin.ru> <20120609165401.6a64e975@neptune.home> <20120610000318.GM31671@mdounin.ru> Message-ID: <20120610124936.5f5310ba@neptune.home> Hello Maxim, On Sun, 10 June 2012 Maxim Dounin wrote: > On Sat, Jun 09, 2012 at 04:54:01PM +0200, Bruno Pr?mont wrote: > > On Sat, 09 June 2012 Maxim Dounin wrote: > > > Anyway, please make sure you have aligment problems properly > > > reported by a kernel. It looks like the linux kernel has an > > > unfortunate default to silently ignore alignment problems on arm, > > > which results in data corruption on unaligned accesses instead of > > > immediate exit on SIGBUS when unaligned access happens. You may > > > get proper behaviour with > > > > > > echo 4 > /proc/cpu/alignment > > > > > > This should allow to trace a root of your problems. > > > > > > See http://lecs.cs.ucla.edu/wiki/index.php/XScale_alignment for > > > more details. > > > > Thanks for the pointer, will read trough it! > > > > Seems to be that one, after echoing 4 to /proc/cpu/alignment nginx > > does not even start anymore (and `nginx -t` fails as well), each time with > > SIGBUS. > > > > e.g. for `nginx -t` the first SIGBUS happens at > > > > #0 0x0000d64c in ngx_set_cpu_affinity (cf=0xbe892358, cmd=, conf=) at src/core/nginx.c:1275 > > #1 0x0001cafc in ngx_conf_handler (last=13909340, cf=0xbe892358) at src/core/ngx_conf_file.c:394 > > #2 ngx_conf_parse (cf=0xbe892358, filename=0xd43d70) at src/core/ngx_conf_file.c:244 > > #3 0x0001aba4 in ngx_init_cycle (old_cycle=0xbe8923c0) at src/core/ngx_cycle.c:268 > > #4 0x0000e29c in main (argc=, argv=) at src/core/nginx.c:331 > > > > as backtraced with gdb. > > Ok, this looks sensisble. > > Could you please provide ./configure output and test if the > following patch fixes things for you? > > diff --git a/auto/os/conf b/auto/os/conf > --- a/auto/os/conf > +++ b/auto/os/conf > @@ -93,6 +93,7 @@ case "$NGX_MACHINE" in > ;; > > *) > + have=NGX_ALIGNMENT value=16 . auto/define > NGX_MACH_CACHE_LINE=32 > ;; > The patch seems to fix things, `nginx -t` does not die on SIGBUS anymore, it also runs properly for the requests that made it fail ( /proc/cpu/alignment does not account any new alignment traps). Thanks! Bruno Full configure output (as well as first few lines of make which shows used CFLAGS -- compiler does not generate any warnings): checking for OS + Linux 2.6.37-00003-g924cf4c armv5tel checking for C compiler ... found + using GNU C compiler checking for --with-ld-opt="-L/usr/lib" ... found checking for gcc builtin atomic operations ... found checking for C99 variadic macros ... found checking for gcc variadic macros ... 
found checking for unistd.h ... found checking for inttypes.h ... found checking for limits.h ... found checking for sys/filio.h ... not found checking for sys/param.h ... found checking for sys/mount.h ... found checking for sys/statvfs.h ... found checking for crypt.h ... found checking for Linux specific features checking for epoll ... found checking for sendfile() ... found checking for sendfile64() ... found checking for sys/prctl.h ... found checking for prctl(PR_SET_DUMPABLE) ... found checking for sched_setaffinity() ... found checking for crypt_r() ... found checking for sys/vfs.h ... found checking for poll() ... found checking for /dev/poll ... not found checking for kqueue ... not found checking for crypt() ... not found checking for crypt() in libcrypt ... found checking for F_READAHEAD ... not found checking for posix_fadvise() ... found checking for O_DIRECT ... found checking for F_NOCACHE ... not found checking for directio() ... not found checking for statfs() ... found checking for statvfs() ... found checking for dlopen() ... not found checking for dlopen() in libdl ... found checking for sched_yield() ... found checking for SO_SETFIB ... not found checking for SO_ACCEPTFILTER ... not found checking for TCP_DEFER_ACCEPT ... found checking for TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT ... found checking for TCP_INFO ... found checking for accept4() ... found checking for kqueue AIO support ... not found checking for Linux AIO support ... found checking for int size ... 4 bytes checking for long size ... 4 bytes checking for long long size ... 8 bytes checking for void * size ... 4 bytes checking for uint64_t ... found checking for sig_atomic_t ... found checking for sig_atomic_t size ... 4 bytes checking for socklen_t ... found checking for in_addr_t ... found checking for in_port_t ... found checking for rlim_t ... found checking for uintptr_t ... uintptr_t found checking for system endianess ... little endianess checking for size_t size ... 4 bytes checking for off_t size ... 8 bytes checking for time_t size ... 4 bytes checking for AF_INET6 ... found checking for setproctitle() ... not found checking for pread() ... found checking for pwrite() ... found checking for sys_nerr ... found checking for localtime_r() ... found checking for posix_memalign() ... found checking for memalign() ... found checking for mmap(MAP_ANON|MAP_SHARED) ... found checking for mmap("/dev/zero", MAP_SHARED) ... found checking for System V shared memory ... found checking for POSIX semaphores ... not found checking for POSIX semaphores in libpthread ... found checking for struct msghdr.msg_control ... found checking for ioctl(FIONBIO) ... found checking for struct tm.tm_gmtoff ... found checking for struct dirent.d_namlen ... not found checking for struct dirent.d_type ... found checking for sysconf(_SC_NPROCESSORS_ONLN) ... found checking for openat(), fstatat() ... found configuring additional modules adding module in /var/tmp/portage/www-servers/nginx-1.2.1/work/agentzh-headers-more-nginx-module-3580526 + ngx_http_headers_more_filter_module was configured checking for PCRE library ... found checking for PCRE JIT support ... found checking for OpenSSL library ... found checking for zlib library ... found checking for libxslt ... found checking for libexslt ... 
found creating objs/Makefile Configuration summary + using system PCRE library + using system OpenSSL library + md5: using OpenSSL library + sha1: using OpenSSL library + using system zlib library nginx path prefix: "/usr" nginx binary file: "/usr/sbin/nginx" nginx configuration prefix: "/etc/nginx" nginx configuration file: "/etc/nginx/nginx.conf" nginx pid file: "/var/run/nginx.pid" nginx error log file: "/var/log/nginx/error_log" nginx http access log file: "/var/log/nginx/access_log" nginx http client request body temporary files: "/var/tmp/nginx/client" nginx http fastcgi temporary files: "/var/tmp/nginx/fastcgi" make -j2 'LINK=armv5tel-softfloat-linux-gnueabi-gcc -Wl,-O1 -Wl,--as-needed' 'OTHERLDFLAGS=-Wl,-O1 -Wl,--as-needed' make -f objs/Makefile make[1]: Entering directory `/var/tmp/portage/www-servers/nginx-1.2.1/work/nginx-1.2.1' armv5tel-softfloat-linux-gnueabi-gcc -c -O2 -march=armv5te -mtune=xscale -pipe -Wall -ggdb -I/usr/include -I src/core -I src/event -I src/event/modules -I src/os/unix -I /usr/include/libxml2 -I objs \ -o objs/src/core/nginx.o \ src/core/nginx.c ... From nginx-forum at nginx.us Sun Jun 10 10:57:13 2012 From: nginx-forum at nginx.us (locojohn) Date: Sun, 10 Jun 2012 06:57:13 -0400 (EDT) Subject: how can i rewrite this? In-Reply-To: References: Message-ID: <72acf7b6506383c25ff1004e1849527c.NginxMailingListEnglish@forum.nginx.org> Hi, Avoid rewrite's: location ~ ^/([a-z][a-z])([a-z][a-z][a-z])\.jpg$ { try_files /$1/$2.jpg $uri; } Andrejs Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227397,227399#msg-227399 From nginx-forum at nginx.us Sun Jun 10 14:53:30 2012 From: nginx-forum at nginx.us (torajx) Date: Sun, 10 Jun 2012 10:53:30 -0400 (EDT) Subject: block access to a file !! In-Reply-To: <20120606143153.GG4719@craic.sysops.org> References: <20120606143153.GG4719@craic.sysops.org> Message-ID: <2bb6fd5af48da44689e20486306bcfb7.NginxMailingListEnglish@forum.nginx.org> Thank you ! it worked great... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227281,227400#msg-227400 From nginx-forum at nginx.us Sun Jun 10 17:40:12 2012 From: nginx-forum at nginx.us (karlseguin) Date: Sun, 10 Jun 2012 13:40:12 -0400 (EDT) Subject: http_log_module filter by status Message-ID: I was interested in having nginx log 404s to their own file. Essentially, i _hate_ 404s, so I like to parse access logs find and report all 404s. However, as-is, parsing large access logs can be quite inefficient since 404s represent such a small % of the entire file. I was thinking nginx could filter it out at write-time: access_log not_found.log combined buffer=16K 404; Apologies for the lameness of the code, but this is what I came up with: https://gist.github.com/2906701 I certainly don't recommend anyone uses it, I'm mostly just looking for feedback. Is this better off in its own module? (there's so much code in the http_log_module that I want to leverage though). There's much more filtering that could go on that perhaps a new directive is a better approach: access_log_filter $status /(40\d)/ access_log_filter $method GET (which is certainly beyond my capabilities). Thoughts? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227401,227401#msg-227401 From nginx-forum at nginx.us Mon Jun 11 02:19:26 2012 From: nginx-forum at nginx.us (xiaofen) Date: Sun, 10 Jun 2012 22:19:26 -0400 (EDT) Subject: What's the difference between fashion jeans and normal jeans? 
Message-ID: <60f0df5c40fb5610d4283a19b961271f.NginxMailingListEnglish@forum.nginx.org> Hey, I have a huge interest in clothes but I don't get why people spend hundreds of pounds on jeans, when any good shop will have a pair that fit and look decent. I once when I was younger ignorantly bought some ?210 jeans from a charity shop for ?5, and they lasted a lot shorter than any other pair of jeans I bought, and tattered surprisingly quickly, (I also wonder about people who by plainest clothes for fashion labels) what's the difference between fashion jeans and normal jeans? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227407,227407#msg-227407 From reallfqq-nginx at yahoo.fr Mon Jun 11 02:23:08 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 10 Jun 2012 22:23:08 -0400 Subject: What's the difference between fashion jeans and normal jeans? In-Reply-To: <60f0df5c40fb5610d4283a19b961271f.NginxMailingListEnglish@forum.nginx.org> References: <60f0df5c40fb5610d4283a19b961271f.NginxMailingListEnglish@forum.nginx.org> Message-ID: 2 spam messages in a few days, both coming from the same email address... Probably spoofed address, though. Couldn't the sending machine IP range be blocked? I won't stay on a spammed mailing-list. --- *B. R.* On Sun, Jun 10, 2012 at 10:19 PM, xiaofen wrote: > Hey, I have a huge interest in clothes but I don't get why people spend > hundreds of pounds on jeans, when any good shop will have a pair that > fit and look decent. > > I once when I was younger ignorantly bought some ?210 jeans from a > charity shop for ?5, and they lasted a lot shorter than any other pair > of jeans I bought, and tattered surprisingly quickly, (I also wonder > about people who by plainest clothes for fashion labels) > > what's the difference between fashion jeans and normal jeans? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,227407,227407#msg-227407 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From simohayha.bobo at gmail.com Mon Jun 11 02:24:40 2012 From: simohayha.bobo at gmail.com (Simon Liu) Date: Mon, 11 Jun 2012 10:24:40 +0800 Subject: nginx configure error when use zsh In-Reply-To: <20120608115511.GD31671@mdounin.ru> References: <20120608115511.GD31671@mdounin.ru> Message-ID: Hello! On Fri, Jun 8, 2012 at 7:55 PM, Maxim Dounin wrote: > Hello! > > On Fri, Jun 08, 2012 at 03:29:21PM +0800, Simon Liu wrote: > > > Hello. > > > > nginx configure error when I use zsh. > > > > This is my environment: > > > > operating system: Linux(3.3.7-1-ARCH) > > > > zsh: 4.3.17 (i686-pc-linux-gnu) > > You mean using zsh instead of POSIX-compliant shell for /bin/sh? > Looks like bad idea. Zsh docs at most promise it will "try to > emulate sh", nothing more. > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Yes, i use zsh instead of /bin/sh, and thanks for your reply. -- do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at ohlste.in Mon Jun 11 02:34:57 2012 From: jim at ohlste.in (Jim Ohlstein) Date: Sun, 10 Jun 2012 22:34:57 -0400 Subject: What's the difference between fashion jeans and normal jeans? 
In-Reply-To: References: <60f0df5c40fb5610d4283a19b961271f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4FD55951.1040102@ohlste.in> On 6/10/12 10:23 PM, B.R. wrote: > 2 spam messages in a few days, both coming from the same email address... > > Probably spoofed address, though. Couldn't the sending machine IP range > be blocked? Not at all. It's the "officially sanctioned" nginx forum - I say that only because it uses a subdomain of nginx.org that Igor has graciously provided and put it in quotes because I run it. Unfortunately, determined human spammers will get through once in a while. I have filters in place which block 2-3 messages per day, but one or two a month get through. If that's such a bother to your sensibilities, then take your threatened action. > > I won't stay on a spammed mailing-list. B'bye! See ya! > --- > *B. R.* -- Jim Ohlstein From jim at ohlste.in Mon Jun 11 02:39:47 2012 From: jim at ohlste.in (Jim Ohlstein) Date: Sun, 10 Jun 2012 22:39:47 -0400 Subject: What's the difference between fashion jeans and normal jeans? In-Reply-To: <4FD55951.1040102@ohlste.in> References: <60f0df5c40fb5610d4283a19b961271f.NginxMailingListEnglish@forum.nginx.org> <4FD55951.1040102@ohlste.in> Message-ID: <4FD55A73.5010801@ohlste.in> On 6/10/12 10:34 PM, Jim Ohlstein wrote: > > On 6/10/12 10:23 PM, B.R. wrote: >> 2 spam messages in a few days, both coming from the same email address... >> >> Probably spoofed address, though. Couldn't the sending machine IP range >> be blocked? > > Not at all. It's the "officially sanctioned" nginx forum - I say that > only because it uses a subdomain of nginx.org that Igor has graciously > provided and put it in quotes because I run it. > > Unfortunately, determined human spammers will get through once in a > while. I have filters in place which block 2-3 messages per day, but one > or two a month get through. If that's such a bother to your > sensibilities, then take your threatened action. > >> >> I won't stay on a spammed mailing-list. > > B'bye! See ya! > >> --- >> *B. R.* > BTW, the system blocked his other entries which contained links. It doesn't work real well for simple text without typical "spam words". This same thing could happen in any mailing list. I see spam come through with much greater frequency on FreeBSD lists. I guess the difference is that there people have the good sense to ignore it. Anyway, au revoir! > -- Jim Ohlstein From nginx-forum at nginx.us Mon Jun 11 04:29:58 2012 From: nginx-forum at nginx.us (gigo1980) Date: Mon, 11 Jun 2012 00:29:58 -0400 (EDT) Subject: Is this Usecase posible? Message-ID: <27f17326427128f1344a1551b769db07.NginxMailingListEnglish@forum.nginx.org> Hi all, i have the following use case. I need an http/s Server that can do the following. Parse GET Request from the client -> call an internal module that do something (that have an response) -> use the response and modifie the GET REQUEST -> than call an other webserver that should handle the request -> output the response to the client. regards gigo Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227414,227414#msg-227414 From admin at yqed.com Mon Jun 11 05:32:39 2012 From: admin at yqed.com (Floren Munteanu) Date: Mon, 11 Jun 2012 01:32:39 -0400 Subject: PROPFIND and OPTIONS support in Nginx? Message-ID: I was wondering why the Nginx devs do not add WebDAV support for PROPFIND and OPTIONS. 
There is a module available in Github but still, I would love to have this officially supported by Nginx: https://github.com/arut/nginx-dav-ext-module Are there any plans to have full WebDAV support implemented? This would be a nice addition into 1.3.0 branch. Regards, Floren Munteanu From Richard.Kearsley at m247.com Mon Jun 11 06:46:30 2012 From: Richard.Kearsley at m247.com (Richard Kearsley) Date: Mon, 11 Jun 2012 06:46:30 +0000 Subject: disable logging for certain response Message-ID: Hi I am using nginx lua to return a 302 for certain requests - probably about 50% of total requests e.g. (lua code) if (res.status == ngx.HTTP_MOVED_TEMPORARILY) then local loc = res.header["X-Location"] if (ngx.var.is_args == '?') then loc = loc .. '?' .. ngx.var.args end return ngx.redirect(loc) The other 50% are processed normally and a full reponse is returned I do not need to log the 302 responses, and indeed it is causing a much waste of processor time when I come to analyse/process the logs (this is 100s of gigabytes of logs per day - so is a HUGE waste) Is there any way to set a var prior to returning the 302 which will cause nginx not to log it? Richard Kearsley Systems Developer | M247 Limited Internal Dial 2210 | Mobile +44 7970 621236 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Jun 11 08:25:58 2012 From: nginx-forum at nginx.us (lima) Date: Mon, 11 Jun 2012 04:25:58 -0400 (EDT) Subject: Error while connecting to apache from nginx running on same machine In-Reply-To: References: Message-ID: <378a2333c87be2b9200d8c70b364a1c4.NginxMailingListEnglish@forum.nginx.org> Hi, This is our nginx configuration setup. -------------------------------------------------------------------------------------- http { include mime.types; gzip on; gzip_http_version 1.1; gzip_proxied expired no-cache no-store private auth; gzip_types text/plain application/xml text/css application/x-javascript text/xml application/javascript text/javascript; gzip_disable "MSIE [1-6]\."; autoindex off; ssi off; server_tokens off; log_format main '$remote_addr [$time_local] - "$request" - ' '$status - $body_bytes_sent - "$http_referer"'; log_format lb_log '$remote_addr [$time_local] - "$request" - $status - ' 'worker_addr $upstream_addr - ' 'worker_status $upstream_status - ' 'worker_response_time $upstream_response_time - ' 'total_processing_time $request_time - ' 'content_type $upstream_http_content_type'; log_format doc_log '$remote_addr [$time_local] - "$request" - $status - ' 'worker_addr $upstream_addr - ' 'worker_status $upstream_status - ' 'worker_response_time $upstream_response_time - ' 'total_processing_time $request_time - ' 'content_type $upstream_http_content_type'; access_log logs/access.log main; error_log logs/error.log; sendfile on; keepalive_timeout 60; proxy_ssl_session_reuse on; upstream loadbalancer { server server1-ip:443 weight=1 max_fails=5 fail_timeout=3m; server server2-ip:443 weight=1 max_fails=5 fail_timeout=3m; } upstream docproxy { server 127.0.0.1:7443; } server { listen 443 ssl; server_name lb.abcd.net; location ~ ^/documents/(.*)(jpg|jpeg|gif|png|txt|pdf|html|htm){ root /home; access_log logs/doc_access.log doc_log; } location ~* ^.+.(jpg|jpeg|gif|png|ico|css|txt|js)$ { expires 24h; add_header Cache-Control public; root media; } ssl_certificate /root/Apache_New_SSL_Keys/lendingstream.co.uk.crt; ssl_certificate_key /root/Apache_New_SSL_Keys/lendingstream.key.nopass; ssl_session_timeout 3m; ssl_protocols SSLv3; ssl_ciphers 
HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; proxy_redirect / /; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_read_timeout 360s; location ~ ^/documents/ { proxy_pass https://docproxy; access_log logs/doc_access.log doc_log; } location / { proxy_pass https://loadbalancer; access_log logs/lb_access.log lb_log; } error_page 403 /403.html; error_page 404 /404.html; error_page 500 502 503 504 /500.html; location ~ ^/(403.html|404.html|500.html)$ { root html; } } } -------------------------------------------------------------------------------------------- Here, we will forward all the requests except documents to LB, which in turn send to either server1 or server2. The document related requests will be proxy forwarded to apache running in the same machine at 7443 port. But, here comes the problem that when it is sending any request to apache it is giving 500 error. In apache logs, it's been logged as [error] Hello. The apache configurations are: httpd.conf is, ------------------------------------------------------------------------------------------ ServerRoot "/usr/local/apache2" PidFile logs/httpd.pid Listen 80 ServerTokens ProductOnly ServerSignature Off ###### Loaded all modules which are required LoadModule *****.so ###### Loaded all modules which are required User USER Group GROUP DocumentRoot "/usr/local/apache2/htdocs" Options -Indexes +FollowSymLinks AllowOverride None Order deny,allow Deny from all Options -Indexes +FollowSymLinks AllowOverride None Order allow,deny Allow from all ErrorLog "logs/error_log" LogLevel notice SSLRandomSeed startup builtin SSLRandomSeed connect builtin Alias /documents /home/documents Order deny,allow Allow from all WSGIScriptAlias / apache/django.wsgi Order allow,deny Allow from all --------------------------------------------------------------------------------- and the httpd-ssl.conf is, --------------------------------------------------------------------------------- LoadModule ssl_module modules/mod_ssl.so LoadModule wsgi_module modules/mod_wsgi.so Listen 7443 AddType application/x-x509-ca-cert .crt AddType application/x-pkcs7-crl .crl SSLPassPhraseDialog builtin SSLSessionCache "shmcb:/usr/local/apache2/logs/ssl_scache(512000)" SSLSessionCacheTimeout 15 SSLMutex "file:/usr/local/apache2/logs/ssl_mutex" ------------------------------------------------------------------------------- Please help me in resolving this as this is very crucial and urgent for us. Thanks for replying.... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227341,227420#msg-227420 From nginx-forum at nginx.us Mon Jun 11 09:08:38 2012 From: nginx-forum at nginx.us (wwwyq2003) Date: Mon, 11 Jun 2012 05:08:38 -0400 (EDT) Subject: HP-UX (IA64) sendmsg() failed Message-ID: <5865478c4f3d86dc67fb605b6227c946.NginxMailingListEnglish@forum.nginx.org> nginx nginx-1.2.1 OS HP-UX B.11.23 U ia64 built by gcc 4.2.3 When I start the nginx server,it throws some errors. 
The error.log contains 2012/06/11 16:46:07 [notice] 6820#0: using the "/dev/poll" event method 2012/06/11 16:46:07 [notice] 6820#0: nginx/1.2.1 2012/06/11 16:46:07 [notice] 6820#0: built by gcc 4.2.3 2012/06/11 16:46:07 [notice] 6820#0: getrlimit(RLIMIT_NOFILE): 10240:10240 2012/06/11 16:46:07 [notice] 6821#0: start worker processes 2012/06/11 16:46:07 [notice] 6821#0: start worker process 6822 2012/06/11 16:46:07 [notice] 6821#0: start worker process 6823 2012/06/11 16:46:07 [alert] 6821#0: sendmsg() failed (9: Bad file number) 2012/06/11 16:46:07 [notice] 6821#0: start worker process 6824 2012/06/11 16:46:07 [alert] 6821#0: sendmsg() failed (9: Bad file number) 2012/06/11 16:46:07 [alert] 6821#0: sendmsg() failed (9: Bad file number) 2012/06/11 16:46:07 [notice] 6821#0: start worker process 6825 2012/06/11 16:46:07 [alert] 6821#0: sendmsg() failed (9: Bad file number) 2012/06/11 16:46:07 [alert] 6821#0: sendmsg() failed (9: Bad file number) 2012/06/11 16:46:07 [alert] 6821#0: sendmsg() failed (9: Bad file number) Someone help me. Thank you very much! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227421,227421#msg-227421 From nginx-forum at nginx.us Mon Jun 11 10:54:28 2012 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 11 Jun 2012 06:54:28 -0400 (EDT) Subject: Is this Usecase posible? In-Reply-To: <27f17326427128f1344a1551b769db07.NginxMailingListEnglish@forum.nginx.org> References: <27f17326427128f1344a1551b769db07.NginxMailingListEnglish@forum.nginx.org> Message-ID: Get the source, add the code handling and compile. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227414,227423#msg-227423 From nginx-forum at nginx.us Mon Jun 11 10:59:20 2012 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 11 Jun 2012 06:59:20 -0400 (EDT) Subject: Error while connecting to apache from nginx running on same machine In-Reply-To: <378a2333c87be2b9200d8c70b364a1c4.NginxMailingListEnglish@forum.nginx.org> References: <378a2333c87be2b9200d8c70b364a1c4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9f2e40966b997088dce7adc7d1f8c9d9.NginxMailingListEnglish@forum.nginx.org> -- proxy_pass https://docproxy; Points to 443, Yet the upstream wants 7443... -- upstream docproxy { -- server 127.0.0.1:7443; -- } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227341,227424#msg-227424 From nginx-forum at nginx.us Mon Jun 11 11:03:56 2012 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 11 Jun 2012 07:03:56 -0400 (EDT) Subject: HP-UX (IA64) sendmsg() failed In-Reply-To: <5865478c4f3d86dc67fb605b6227c946.NginxMailingListEnglish@forum.nginx.org> References: <5865478c4f3d86dc67fb605b6227c946.NginxMailingListEnglish@forum.nginx.org> Message-ID: Maybe this one: http://mailman.nginx.org/pipermail/nginx/2007-January/000588.html Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227421,227425#msg-227425 From nginx-forum at nginx.us Mon Jun 11 11:08:43 2012 From: nginx-forum at nginx.us (gigo1980) Date: Mon, 11 Jun 2012 07:08:43 -0400 (EDT) Subject: Is this Usecase posible? In-Reply-To: References: <27f17326427128f1344a1551b769db07.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2c2d75996644705b710689f10990efca.NginxMailingListEnglish@forum.nginx.org> http://tinypic.com/r/ezhf6v/6 <-- ok so i wll do this. but whith which module should i handle the code ? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227414,227426#msg-227426 From nginx-forum at nginx.us Mon Jun 11 11:27:39 2012 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 11 Jun 2012 07:27:39 -0400 (EDT) Subject: Is this Usecase posible? In-Reply-To: <2c2d75996644705b710689f10990efca.NginxMailingListEnglish@forum.nginx.org> References: <27f17326427128f1344a1551b769db07.NginxMailingListEnglish@forum.nginx.org> <2c2d75996644705b710689f10990efca.NginxMailingListEnglish@forum.nginx.org> Message-ID: <385a1b9c0a831a1cf84a521d4492653d.NginxMailingListEnglish@forum.nginx.org> No idea, a developer should answer that one. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227414,227428#msg-227428 From nginx-forum at nginx.us Mon Jun 11 12:59:38 2012 From: nginx-forum at nginx.us (dakrer) Date: Mon, 11 Jun 2012 08:59:38 -0400 (EDT) Subject: IPv6 fails for set_real_ip_from in 1.2.1 Message-ID: <1e450d9b4d59c035dd324d4ea742891a.NginxMailingListEnglish@forum.nginx.org> I noticed in the 1.2.1 changelog that set_real_ip_from is supposed to support IPv6 addresses. Unfortunaly I get a configuration error when I try it. This works fine: set_real_ip_from 127.0.0.1; real_ip_header X-Forwarded-For; This fails: set_real_ip_from 127.0.0.1; set_real_ip_from ::1; real_ip_header X-Forwarded-For; $ nginx -t nginx: [emerg] invalid parameter "::1" in nginx.conf:22 nginx: configuration file nginx.conf test failed Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227430,227430#msg-227430 From pinakee at vvidiacom.com Mon Jun 11 13:34:12 2012 From: pinakee at vvidiacom.com (Biswas, Pinakee) Date: Mon, 11 Jun 2012 19:04:12 +0530 Subject: Nginx and uwsgi Message-ID: <000301cd47d6$ef99f430$cecddc90$@vvidiacom.com> Hi, We are building a Media Content Management and Delivery Platform based on Python (and python based web framework like Pylons/Pyramid). We are planning to use nginx as the web server. We are new to nginx (have prior experience with Apache) and have downloaded 1.2.1. The OS is CentOS. We are not sure how uwsgi works with nginx: 1. Do we have to start uwsgi as a separate process? 2. There is no option/directives for loading modules in nginx (as it is there in Apache). 3. I couldn't find a good documentation on the uwsgi based directives. The one in nginx wiki is confusing. Since nginx I think works as a reverse proxy where it forwards HTTP requests to the uwsgi process, how about using something like Cherrypy or PasteHTTPserver? Would there be any difference? We would really appreciate your support regarding the above queries. Looking forward to your response. Thanks, Pinakee Biswas Director & CTO Description: Description: vvidialogo.jpg Just watch it ! 7E- Mail: pinakee at vvidiacom.com I 8Web: http://www.vvidiacom.com P Please don't print this e-mail unless you really need to, this will preserve trees on planet earth. ----------------------------Disclaimer-------------------------------------- ---------------------------------------------------------------------------- ---------------------------------------------------------------------------- --------- The information contained in this message (including any attachments) is confidential and may be privileged. If you have received it by mistake please notify the sender by return e-mail and permanently delete this message and any attachments from your system. Please note that e-mails are susceptible to change and malwares. VVIDIA COMMUNICATIONS PVT LTD. 
(including its group companies) shall not be liable for the improper or incomplete transmission of the information contained in this communication nor for any delay in its receipt or damage to your system. ---------------------------------------------------------------------------- ---------------------------------------------Disclaimer--------------------- ---------------------------------------------------------------------------- --------- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1901 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 168 bytes Desc: not available URL: From roberto at unbit.it Mon Jun 11 13:48:17 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Mon, 11 Jun 2012 15:48:17 +0200 Subject: Nginx and uwsgi In-Reply-To: <000301cd47d6$ef99f430$cecddc90$@vvidiacom.com> References: <000301cd47d6$ef99f430$cecddc90$@vvidiacom.com> Message-ID: <1f66f279ed8918d0025d0fd3f93a7d78.squirrel@manage.unbit.it> > Hi, > > > > We are building a Media Content Management and Delivery Platform based on > Python (and python based web framework like Pylons/Pyramid). We are > planning > to use nginx as the web server. > > > > We are new to nginx (have prior experience with Apache) and have > downloaded > 1.2.1. The OS is CentOS. > > > > We are not sure how uwsgi works with nginx: > > 1. Do we have to start uwsgi as a separate process? Yes, but remember: uwsgi is a communication protocol (like http or fastcgi), uWSGI is the application server. You need to start uWSGI, and configure nginx to speak with it with the protocol of choice (uwsgi, http or fastcgi) > > 2. There is no option/directives for loading modules in nginx (as it > is there in Apache). nginx does not work in that way, but it should be not a problem for you as upstream modules (http, fastcgi, scgi, uwsgi) are compiled in by default. > > 3. I couldn't find a good documentation on the uwsgi based > directives. > The one in nginx wiki is confusing. you need nothing particular: include uwsgi_params; uwsgi_pass
; all the other options are for fine tuning. I suggest you to start from here: http://projects.unbit.it/uwsgi/wiki/Quickstart > > Since nginx I think works as a reverse proxy where it forwards HTTP > requests > to the uwsgi process, how about using something like Cherrypy or > PasteHTTPserver? Would there be any difference? you can proxy nginx to whatever you want/need if the backend speaks one of the supported protocol (http, scgi, fastcgi, uwsgi) -- Roberto De Ioris http://unbit.it From pinakee at vvidiacom.com Mon Jun 11 14:01:26 2012 From: pinakee at vvidiacom.com (Biswas, Pinakee) Date: Mon, 11 Jun 2012 19:31:26 +0530 Subject: Nginx and uwsgi In-Reply-To: <1f66f279ed8918d0025d0fd3f93a7d78.squirrel@manage.unbit.it> References: <000301cd47d6$ef99f430$cecddc90$@vvidiacom.com> <1f66f279ed8918d0025d0fd3f93a7d78.squirrel@manage.unbit.it> Message-ID: <000f01cd47da$bb9e78a0$32db69e0$@vvidiacom.com> Hi Roberto, Thanks for the prompt response. That really helps. Our major consideration for going with Nginx is performance. If we have another process running (a wsgi server with python application) and nginx working as a proxy translating HTTP requests to another protocol (uwsgi, http or fastcgi), won't that be an overhead? I was under the impression that the Python application can be embedded in Nginx using Wsgi. We would really appreciate your thoughts/suggestions on the above. Looking forward to your response... Thanks, Pinakee Biswas Director & CTO Just watch it ! FE- Mail: pinakee at vvidiacom.com I IWeb: http://www.vvidiacom.com ? Please don't print this e-mail unless you really need to, this will preserve trees on planet earth. ----------------------------Disclaimer------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- The information contained in this message (including any attachments) is confidential and may be privileged. If you have received it by mistake please notify the sender by return e-mail and permanently delete this message and any attachments from your system. Please note that e-mails are susceptible to change and malwares. VVIDIA COMMUNICATIONS PVT LTD. (including its group companies) shall not be liable for the improper or incomplete transmission of the information contained in this communication nor for any delay in its receipt or damage to your system. -------------------------------------------------------------------------------------------------------------------------Disclaimer---------------------------------------------------------------------------------------------------------- -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Roberto De Ioris Sent: 11 June 2012 19:18 To: nginx at nginx.org Subject: Re: Nginx and uwsgi > Hi, > > > > We are building a Media Content Management and Delivery Platform based > on Python (and python based web framework like Pylons/Pyramid). We are > planning to use nginx as the web server. > > > > We are new to nginx (have prior experience with Apache) and have > downloaded 1.2.1. The OS is CentOS. > > > > We are not sure how uwsgi works with nginx: > > 1. Do we have to start uwsgi as a separate process? Yes, but remember: uwsgi is a communication protocol (like http or fastcgi), uWSGI is the application server. You need to start uWSGI, and configure nginx to speak with it with the protocol of choice (uwsgi, http or fastcgi) > > 2. 
There is no option/directives for loading modules in nginx (as it > is there in Apache). nginx does not work in that way, but it should be not a problem for you as upstream modules (http, fastcgi, scgi, uwsgi) are compiled in by default. > > 3. I couldn't find a good documentation on the uwsgi based > directives. > The one in nginx wiki is confusing. you need nothing particular: include uwsgi_params; uwsgi_pass
; all the other options are for fine tuning. I suggest you to start from here: http://projects.unbit.it/uwsgi/wiki/Quickstart > > Since nginx I think works as a reverse proxy where it forwards HTTP > requests to the uwsgi process, how about using something like Cherrypy > or PasteHTTPserver? Would there be any difference? you can proxy nginx to whatever you want/need if the backend speaks one of the supported protocol (http, scgi, fastcgi, uwsgi) -- Roberto De Ioris http://unbit.it _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Jun 11 14:21:23 2012 From: nginx-forum at nginx.us (torajx) Date: Mon, 11 Jun 2012 10:21:23 -0400 (EDT) Subject: nginx + php-fpm and cookie Message-ID: <82573a2a314199aeb5c00c0b899764af.NginxMailingListEnglish@forum.nginx.org> HI, I dont know if it is nginx related or not, but i give a try here. we have a setup of nginx+php-fpm for cs cart shopping software. everything is works fine but one big problem. users can not login in all browsers except firefox. and the problem is because of some cookies dont save in browsers. the only browser that save cookie is firefox and other browsers dont save needed cookie. is there any thing related to nginx ?? or it is php-fpm problem ? by the way we have cscart installed on IIS6 and also IIS7 and everything is OK and works fine in all browsers. regards Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227434,227434#msg-227434 From nginx-forum at nginx.us Mon Jun 11 14:21:55 2012 From: nginx-forum at nginx.us (torajx) Date: Mon, 11 Jun 2012 10:21:55 -0400 (EDT) Subject: nginx + php-fpm and cookie In-Reply-To: <82573a2a314199aeb5c00c0b899764af.NginxMailingListEnglish@forum.nginx.org> References: <82573a2a314199aeb5c00c0b899764af.NginxMailingListEnglish@forum.nginx.org> Message-ID: I can send any need configuration too... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227434,227435#msg-227435 From soracchi at netbuilder.it Mon Jun 11 14:23:15 2012 From: soracchi at netbuilder.it (Andrea Soracchi) Date: Mon, 11 Jun 2012 16:23:15 +0200 (CEST) Subject: Reverse proxy with Route Lookup Handler In-Reply-To: <22298479.385.1339424100323.JavaMail.sorry@sorry> Message-ID: <9095834.414.1339424915433.JavaMail.sorry@sorry> Hi, I need to configure Nginx as a reverse proxy. This is the scenario: server1.example.com (webserver) server2.example.com (webserver) proxy.example.com (reverse proxy) The connection from users whose data lives on server1.example.com will be proxied to server1.example.com by the proxy running on the proxy.example.com. The connection from users whose data lives on server2.example.com will be proxied to server2.example.com by the proxy running on the proxy.example.com. How can I set a Route Lookup Handler in proxy.example.com? This is similar to Zimbra multiserver' scenario... Can you help me? Thanks in advanced, Andrea -- Andrea Soracchi - Netbuilder S.r.l. Multidialogo : La storia e' fatta da chi sa comunicare System Engineer // t. +39 0521 247791 // f. 
+39 0521 7431140 // www.netbuilder.it From nginx-forum at nginx.us Mon Jun 11 14:35:38 2012 From: nginx-forum at nginx.us (lima) Date: Mon, 11 Jun 2012 10:35:38 -0400 (EDT) Subject: Error while connecting to apache from nginx running on same machine In-Reply-To: <9f2e40966b997088dce7adc7d1f8c9d9.NginxMailingListEnglish@forum.nginx.org> References: <378a2333c87be2b9200d8c70b364a1c4.NginxMailingListEnglish@forum.nginx.org> <9f2e40966b997088dce7adc7d1f8c9d9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <049816b1096625b0b1ffb254724fb0dc.NginxMailingListEnglish@forum.nginx.org> Hey,I found the problem..anyway thank u very much... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227341,227437#msg-227437 From roberto at unbit.it Mon Jun 11 18:15:25 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Mon, 11 Jun 2012 20:15:25 +0200 Subject: Nginx and uwsgi In-Reply-To: <000f01cd47da$bb9e78a0$32db69e0$@vvidiacom.com> References: <000301cd47d6$ef99f430$cecddc90$@vvidiacom.com> <1f66f279ed8918d0025d0fd3f93a7d78.squirrel@manage.unbit.it> <000f01cd47da$bb9e78a0$32db69e0$@vvidiacom.com> Message-ID: > Hi Roberto, > > Thanks for the prompt response. That really helps. > > Our major consideration for going with Nginx is performance. > If we have another process running (a wsgi server with python application) > and nginx working as a proxy translating HTTP requests to another protocol > (uwsgi, http or fastcgi), won't that be an overhead? Yes, there is an overhead, but it is practically irrelevant in the big scheme. Your bootleneck will hardly be the webserver or the application server. Running custom/non-deterministic apps directly in the webserver is a really old-style approach and afaik, only the php world still pushes this kind of setups. Take in account (as you came from apache), mod_wsgi preferred setup is in daemon mode, that is a beatiful abstraction of a proxied setup. > I was under the impression that the Python application can be embedded in > Nginx using Wsgi. the only third-party-module allowing you to do so, is Manlio Perillo's mod_wsgi/mod_python for nginx. It is unmaintained, and its preferred usage is in having another nginx in front of it proxying requests. Nginx is a non-blocking server, putting blocking code in it (as 90% of the webapps are), is the key to hell :) If you have got experience with apache+mod_wsgi, you can simply use nginx for serving static files and using apache+mod_wsgi (in embedded mode) as your application server for python/wsgi. This is now a very common setup. -- Roberto De Ioris http://unbit.it From nginx-forum at nginx.us Mon Jun 11 18:26:12 2012 From: nginx-forum at nginx.us (s1r0n) Date: Mon, 11 Jun 2012 14:26:12 -0400 (EDT) Subject: display problem Message-ID: <0421a41e194006bad275064426174626.NginxMailingListEnglish@forum.nginx.org> Hi all, i have just recently installed nginx. it works great but periodically the pages get all messed up. like now on this page http://www.sociology.org i posted an article and then tried to view but it's all messed up after that. nginx seems to be working other than this weird display problem can anybody help? 
i've tried everytying including rebooting Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227440,227440#msg-227440 From nginx-forum at nginx.us Mon Jun 11 18:48:00 2012 From: nginx-forum at nginx.us (s1r0n) Date: Mon, 11 Jun 2012 14:48:00 -0400 (EDT) Subject: display problem In-Reply-To: <0421a41e194006bad275064426174626.NginxMailingListEnglish@forum.nginx.org> References: <0421a41e194006bad275064426174626.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9feb5ba2ec5f082aacae9d052ea2a6aa.NginxMailingListEnglish@forum.nginx.org> here is an update. I am getting this error in the rror log 2012/06/11 14:46:37 [error] 3554#0: *70 open() "/home/httpd/sociology.org/wp-smooth/images/banner125.gif" failed (2: No such file or directory), client: 87.151.182.62, server: www.sociology.org, request: "GET /wp-content/themes/wp-smooth/images/banner125.gif HTTP/1.1", host: "www.sociology.org", referrer: "http://www.sociology.org/" but the files that are in the path exist, and a readable. if i reload apache the webspace displays just fine Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227440,227441#msg-227441 From nginx-forum at nginx.us Mon Jun 11 18:57:00 2012 From: nginx-forum at nginx.us (s1r0n) Date: Mon, 11 Jun 2012 14:57:00 -0400 (EDT) Subject: display problem In-Reply-To: <9feb5ba2ec5f082aacae9d052ea2a6aa.NginxMailingListEnglish@forum.nginx.org> References: <0421a41e194006bad275064426174626.NginxMailingListEnglish@forum.nginx.org> <9feb5ba2ec5f082aacae9d052ea2a6aa.NginxMailingListEnglish@forum.nginx.org> Message-ID: <77b8b91912cb7e01fa7f0471e56ad6ff.NginxMailingListEnglish@forum.nginx.org> Well, as it turns out the problem is this. I have the following rewrites for the server but these break the installation. location ~* ^.+\.(html|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ { rewrite ^/.*(/wp-.*/.*\.(html|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js))$ $1 last; rewrite ^.*/files/(.*(html|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js))$ /wp-includes/ms-files.php?file=$1 last; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; break; } can somebody tell me how to include these rewrites so wordpress images work properly Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227440,227442#msg-227442 From ru at nginx.com Mon Jun 11 19:30:02 2012 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 11 Jun 2012 23:30:02 +0400 Subject: IPv6 fails for set_real_ip_from in 1.2.1 In-Reply-To: <1e450d9b4d59c035dd324d4ea742891a.NginxMailingListEnglish@forum.nginx.org> References: <1e450d9b4d59c035dd324d4ea742891a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120611193002.GA41437@lo0.su> On Mon, Jun 11, 2012 at 08:59:38AM -0400, dakrer wrote: > I noticed in the 1.2.1 changelog that set_real_ip_from is supposed to > support IPv6 addresses. Unfortunaly I get a configuration error when I > try it. > > This works fine: > set_real_ip_from 127.0.0.1; > real_ip_header X-Forwarded-For; > > This fails: > set_real_ip_from 127.0.0.1; > set_real_ip_from ::1; > real_ip_header X-Forwarded-For; > > > $ nginx -t > nginx: [emerg] invalid parameter "::1" in nginx.conf:22 > nginx: configuration file nginx.conf test failed For IPv6 to work, nginx should be built with --with-ipv6. 
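For reference, a minimal sketch of the rebuild described above; the only required change is adding --with-ipv6 to the configure options already in use (the current binary's options can be listed with "nginx -V"), everything else here is illustrative:

    # rebuild with IPv6 support; keep your existing configure options
    ./configure --with-ipv6 --with-http_realip_module
    make && make install

    # nginx.conf -- after the rebuild both addresses are accepted
    set_real_ip_from 127.0.0.1;
    set_real_ip_from ::1;
    real_ip_header   X-Forwarded-For;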
From nginx-forum at nginx.us Tue Jun 12 04:02:13 2012 From: nginx-forum at nginx.us (Daniel15) Date: Tue, 12 Jun 2012 00:02:13 -0400 (EDT) Subject: Sharing config between two sites Message-ID: I've got two sites, example.com and beta.example.com. The configuration for these two sites is almost identical, except for the root directory (/var/www/sitename/live/public/ for the first domain, and /var/www/sitename/beta/public/ for the second domain) and a few environment variables (database name, etc.) used by the code. What's the best way to share as much of the Nginx configuration as possible between the two sites? Should I pull the common settings into an include file and include it in both of the server blocks, or is there a better way? Thanks! - Daniel Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227445,227445#msg-227445 From nginx-forum at nginx.us Tue Jun 12 04:04:19 2012 From: nginx-forum at nginx.us (Daniel15) Date: Tue, 12 Jun 2012 00:04:19 -0400 (EDT) Subject: Setting up nginx as Visual Studio 2010 project In-Reply-To: <1979d5146e9ab6c19013841c788b3bb1.NginxMailingListEnglish@forum.nginx.org> References: <1979d5146e9ab6c19013841c788b3bb1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4956ca9e54ce22daff5fa933236d53ef.NginxMailingListEnglish@forum.nginx.org> You should be able to just create a new project and add all the source files from the Windows version of nginx (http://nginx.org/en/docs/windows.html) - Although I'm not too sure if it can actually be compiled with the Microsoft compiler or whether it depends on MinGW. The Windows port of nginx looks very incomplete. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227198,227446#msg-227446 From sammyraul1 at gmail.com Tue Jun 12 07:01:04 2012 From: sammyraul1 at gmail.com (sammy_raul) Date: Tue, 12 Jun 2012 00:01:04 -0700 (PDT) Subject: Video Streaming using non http backend, Ref ngx_drizzle In-Reply-To: References: Message-ID: <1339484464485-7580380.post@n2.nabble.com> Hi, I am trying to understand lua module. Using the above script in the conf file I am able to connect to my upstream. I have few questions regarding the Lua module. 1)How I can send some data i.e I have to send a message to my backend probably which is more than a simple string. I have to construct it and encode it. Probably I need to add to the c function I can see ngx_http_lua_socket_tcp_send is used to send data over Nginx Socket, but I could not figure out how I can modify this function and which buffers I need to put my own data. 2)Before sock:receive I need to decrypt the data before sending to the client. I think I can decode in the print function in lua_output.c where I receive the data from upstream. Is that correct. Thanks, Raul Raul -- View this message in context: http://nginx.2469901.n2.nabble.com/Video-Streaming-using-non-http-backend-Ref-ngx-drizzle-tp7580235p7580380.html Sent from the nginx mailing list archive at Nabble.com. From agentzh at gmail.com Tue Jun 12 11:43:02 2012 From: agentzh at gmail.com (agentzh) Date: Tue, 12 Jun 2012 19:43:02 +0800 Subject: Video Streaming using non http backend, Ref ngx_drizzle In-Reply-To: <1339484464485-7580380.post@n2.nabble.com> References: <1339484464485-7580380.post@n2.nabble.com> Message-ID: Hello! On Tue, Jun 12, 2012 at 3:01 PM, sammy_raul wrote: > I am trying to understand lua module. > Using the above script in the conf file I am able to connect to my upstream. > I have few questions regarding the Lua module. 
> > 1)How I can send some data i.e I have to send a message to my backend > probably which is more than a simple string. I have to construct it and > encode it. Probably I need to add to the c function I can see > ngx_http_lua_socket_tcp_send is used to send data over Nginx Socket, but I > could not figure out how I can modify this function and which buffers I need > to put my own data. > > 2)Before sock:receive I need to decrypt the data before sending to the > client. I think I can decode in the print function in lua_output.c where I > receive the data from upstream. Is that correct. > You can just try doing encrypting and decrypting on the Lua level. It's a scripting language anyway and you're free to use the classic Lua/C API or LuaJIT FFI to extend your Lua script with C code if desired. It's not recommended, however, to patch ngx_lua cosocket's C implementation directly. Regards, -agentzh From manlio.perillo at gmail.com Tue Jun 12 13:18:24 2012 From: manlio.perillo at gmail.com (Manlio Perillo) Date: Tue, 12 Jun 2012 15:18:24 +0200 Subject: Nginx and uwsgi In-Reply-To: References: <000301cd47d6$ef99f430$cecddc90$@vvidiacom.com> <1f66f279ed8918d0025d0fd3f93a7d78.squirrel@manage.unbit.it> <000f01cd47da$bb9e78a0$32db69e0$@vvidiacom.com> Message-ID: <4FD741A0.3080801@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Il 11/06/2012 20:15, Roberto De Ioris ha scritto: > [...] > >> I was under the impression that the Python application can be embedded in >> Nginx using Wsgi. > > > the only third-party-module allowing you to do so, is Manlio Perillo's > mod_wsgi/mod_python for nginx. It is unmaintained, Well, recently I have started to work on it again; at least now it works with recent Nginx versions ;-). I have not yet pushed the latest changes to the public repository at: https://bitbucket.org/mperillo/ngx_http_wsgi_module. As Roberto pointed out, ngx_http_wsgi_module is not a "general" Python web application server; but it works rather well for "carefully" written applications, and memory usare is low. > [...] Regards Manlio Perillo -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk/XQaAACgkQscQJ24LbaUR9dgCfQWI7IltdxyvI49QwT3EN+VGT BMUAnRIvcNZohU59rDGszLI9Toupivq6 =tu4T -----END PGP SIGNATURE----- From nginx-forum at nginx.us Tue Jun 12 13:57:20 2012 From: nginx-forum at nginx.us (oreaseca) Date: Tue, 12 Jun 2012 09:57:20 -0400 (EDT) Subject: duplicated directive on reverse proxy Message-ID: <4cb8a881f7b98cf627558ca148faf00d.NginxMailingListEnglish@forum.nginx.org> Hi, everyone, I'm going through some trouble with a nginx reverse proxy, and I'm kinda newbie to this tool. Fact is, I used to have 2 reverse proxies for addressing requests from Internet on port 80 to their respective servers on intranet, and one of them crashed irreversibly, so I'm setting up a new one with similar settings to the other one. My configuration consists on all subdomain configuration files contained on /etc/nginx/sites-available and symbolically linked to sites-enabled, and included on ngxin.conf. 
Subdomain files are like this: error_log /var/log/nginx/www.DOMAIN.gov.br-error.log; server { listen 80; server_name www.DOMAIN.gov.br; access_log /var/log/nginx/www.DOMAIN.gov.br-access.log; # Main location location / { proxy_pass http://192.168.9.45/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } My nginx.conf: user www-data; worker_processes 10; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 20000; use epoll; } worker_rlimit_nofile 25000; http { include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; sendfile on; tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; tcp_nodelay on; gzip on; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } But when I'm trying to set up such configs to the new server, when starting up the service, it goes: root at proxy01:/etc/init.d# /etc/init.d/nginx start Starting nginx: [emerg]: "error_log" directive is duplicate in /etc/nginx/sites-enabled/www.DOMAIN.gov.br:1 configuration file /etc/nginx/nginx.conf test failed It only starts when just 1 domain is configured. If another one is set, this error comes up. The other configured domain starts with "s", so nginx reads it and goes to the next letter, then we get this. Appreciate any help! Cheers, Silvio Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227462,227462#msg-227462 From nginx-forum at nginx.us Tue Jun 12 14:47:10 2012 From: nginx-forum at nginx.us (caleboconnell) Date: Tue, 12 Jun 2012 10:47:10 -0400 (EDT) Subject: rewrite url segment staging site and live site Message-ID: I have a staging site that I test anything relating to the website before we deploy. I have an nginx config that I also test with for this staging site. I wanted to rewrite a uri segment where a section of the site changed names. example.com/old/page1 /old/page2 example.com/new/page1 /new/page2 I added the following rewrite, which works perfectly on the staging site. location /old { rewrite ^/old/? /new/$1 permanent; } when I added this to the live nginx config, the rewrite works, but only sort of: what I want: example.com/old/page1 --> example.com/new/page1 what I get: example.com/old/page1 --> example.com/new It's fine for now, but I don't know why the exact config would work different. The live nginx config is different than the staging, but only in that it includes SSL info. This may be the problem, but I'm not sure why. Thank you in advance for any suggestions and/or answers. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227464,227464#msg-227464 From nginx-forum at nginx.us Tue Jun 12 15:48:17 2012 From: nginx-forum at nginx.us (tcbarrett) Date: Tue, 12 Jun 2012 11:48:17 -0400 (EDT) Subject: WordPress multisite/network domain mapping with multiple server blocks Message-ID: I'm thinking of setting up multiple server blocks with the same document root (all one WP mutisite), rather than one server block with a long list of aliases in the server_name. Trivial set up would be that they are all identical, except for the server_name directive. Is this a bad idea? Does it matter? 
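On the sharing question (both this thread and the earlier one about example.com / beta.example.com), a minimal sketch of the include-based layout being considered; file names, socket path and the PHP handler are illustrative, and anything site-specific (extra fastcgi_param lines, environment values) stays in the per-site block:

    # /etc/nginx/wp-common.conf -- settings shared by all the sites
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }

    # per-site server blocks keep only what actually differs
    server {
        listen      80;
        server_name example.com;
        root        /var/www/sitename/live/public;
        include     /etc/nginx/wp-common.conf;
    }

    server {
        listen      80;
        server_name beta.example.com;
        root        /var/www/sitename/beta/public;
        include     /etc/nginx/wp-common.conf;
    }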
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227467,227467#msg-227467 From francis at daoine.org Tue Jun 12 17:11:53 2012 From: francis at daoine.org (Francis Daly) Date: Tue, 12 Jun 2012 18:11:53 +0100 Subject: duplicated directive on reverse proxy In-Reply-To: <4cb8a881f7b98cf627558ca148faf00d.NginxMailingListEnglish@forum.nginx.org> References: <4cb8a881f7b98cf627558ca148faf00d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120612171153.GA6210@craic.sysops.org> On Tue, Jun 12, 2012 at 09:57:20AM -0400, oreaseca wrote: Hi there, > error_log /var/log/nginx/www.DOMAIN.gov.br-error.log; > server { Quite possibly, just swapping those two lines will make things do what you want. > root at proxy01:/etc/init.d# /etc/init.d/nginx start > Starting nginx: [emerg]: "error_log" directive is duplicate in > /etc/nginx/sites-enabled/www.DOMAIN.gov.br:1 > configuration file /etc/nginx/nginx.conf test failed error_log only takes a single value at one level. Your "include" is at http level, so that's where your error_log currently is. As soon as you put a second error_log at http level, you'll see the error message. Put the error_log line at server level, and it should just Work. f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Jun 12 17:23:16 2012 From: francis at daoine.org (Francis Daly) Date: Tue, 12 Jun 2012 18:23:16 +0100 Subject: rewrite url segment staging site and live site In-Reply-To: References: Message-ID: <20120612172316.GB6210@craic.sysops.org> On Tue, Jun 12, 2012 at 10:47:10AM -0400, caleboconnell wrote: Hi there, > location /old { > rewrite ^/old/? /new/$1 permanent; > } What do you think $1 is set to here? (Usually, it is "the last thing matched". So usually, it is worth making sure that you match something immediately before using it.) > what I want: > example.com/old/page1 --> example.com/new/page1 > > what I get: > example.com/old/page1 --> example.com/new http://nginx.org/r/rewrite "regex" can include things inside parentheses, which are then available as ${number} in "replacement". location /old/ { rewrite ^/old/(.*) /new/$1 permanent; } will probably do what you want; but read about the question mark, in case it matters. > It's fine for now, but I don't know why the exact config would work > different. Usually, the same config works the same way. If you've found a case where that isn't the case, it may be worth investigating. But it is potentially the *whole* config that matters here, where you use $1 without showing where it is set. f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Tue Jun 12 20:08:44 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Jun 2012 00:08:44 +0400 Subject: http_log_module filter by status In-Reply-To: References: Message-ID: <20120612200844.GO31671@mdounin.ru> Hello! On Sun, Jun 10, 2012 at 01:40:12PM -0400, karlseguin wrote: > I was interested in having nginx log 404s to their own file. > Essentially, i _hate_ 404s, so I like to parse access logs find and > report all 404s. However, as-is, parsing large access logs can be quite > inefficient since 404s represent such a small % of the entire file. I > was thinking nginx could filter it out at write-time: > > access_log not_found.log combined buffer=16K 404; > > > Apologies for the lameness of the code, but this is what I came up > with: > https://gist.github.com/2906701 > > I certainly don't recommend anyone uses it, I'm mostly just looking for > feedback. Is this better off in its own module? 
(there's so much code in > the http_log_module that I want to leverage though). There's much more > filtering that could go on that perhaps a new directive is a better > approach: > > access_log_filter $status /(40\d)/ > access_log_filter $method GET > > (which is certainly beyond my capabilities). > > Thoughts? error_page 404 /404.html; location = /404.html { access_log /path/to/log; } Maxim Dounin From nginx-forum at nginx.us Tue Jun 12 20:30:51 2012 From: nginx-forum at nginx.us (paphillon) Date: Tue, 12 Jun 2012 16:30:51 -0400 (EDT) Subject: Nginx proxy overhead with / out keep alive requests Message-ID: I have a test setup to measure the nginx overhead when plugged in front of a Jboss tomcat server. In tomcat I have deployed a test.jsp and use ab to measure the performance with the following scenarios With out keep alive ab option ab --> (ab -n 5000 -c 5 http://jboss.tomcat.url/test.jsp) ab --> --> (ab -n 5000 -c 5 http://nginx.proxy.url/test.jsp) With keep alive ab option (-k) ab --> (ab -n 5000 -c 5 -k http://jboss.tomcat.url/test.jsp) ab --> --> (ab -n 5000 -c 5 -k http://nginx.proxy.url/test.jsp) The performance numbers WITHOUT keep alive is almost same BUT WITH keep alive option the performance numbers are very different and nginx takes about 5 secs more than the page accessed directly via tomcat. Tomcat takes only 0.503430 on an average Why should there be so much of deviation with Keep alive? Is there anything I am missing? -------------Tomcat per result using ab keep alive (ab -n 5000 -c 5 -k http://jboss.tomcat.url/test.jsp)----------------------- Server Software: Apache-Coyote/1.1 Server Hostname: jboss.tomcat.host Server Port: 9080 Document Path: /test.jsp Document Length: 301 bytes Concurrency Level: 5 Time taken for tests: 0.503430 seconds Complete requests: 5000 Failed requests: 0 Write errors: 0 Keep-Alive requests: 4952 Total transferred: 2944760 bytes HTML transferred: 1505000 bytes Requests per second: 9931.87 [#/sec] (mean) Time per request: 0.503 [ms] (mean) Time per request: 0.101 [ms] (mean, across all concurrent requests) Transfer rate: 5710.82 [Kbytes/sec] received -------------Nginx perf result using ab keep alive (ab -n 5000 -c 5 -k http://nginx.proxy.url/test.jsp) -------------------------- Server Software: nginx Server Hostname: nginx.proxy.url Server Port: 80 Document Path: /test.jsp Document Length: 301 bytes Concurrency Level: 5 Time taken for tests: 5.440499 seconds Complete requests: 5000 Failed requests: 0 Write errors: 0 Keep-Alive requests: 4952 Total transferred: 2654760 bytes HTML transferred: 1505000 bytes Requests per second: 919.03 [#/sec] (mean) Time per request: 5.440 [ms] (mean) Time per request: 1.088 [ms] (mean, across all concurrent requests) Transfer rate: 476.43 [Kbytes/sec] received -------------Tomcat per result WITHOUT ab keep alive (ab -n 5000 -c 5 http://jboss.tomcat.url/test.jsp)----------------------- Server Software: Apache-Coyote/1.1 Server Hostname: jboss.tomcat.host Server Port: 9080 Document Path: /test.jsp Document Length: 301 bytes Concurrency Level: 5 Time taken for tests: 4.658429 seconds Complete requests: 5000 Failed requests: 0 Write errors: 0 Total transferred: 2920000 bytes HTML transferred: 1505000 bytes Requests per second: 1073.32 [#/sec] (mean) Time per request: 4.658 [ms] (mean) Time per request: 0.932 [ms] (mean, across all concurrent requests) Transfer rate: 612.01 [Kbytes/sec] received -------------Nginx perf result WITHOUT ab keep alive (ab -n 5000 -c 5 http://nginx.proxy.url/test.jsp) 
-------------------------- Server Software: nginx Server Hostname: nginx.proxy.url Server Port: 80 Document Path: /test.jsp Document Length: 301 bytes Concurrency Level: 5 Time taken for tests: 4.916966 seconds Complete requests: 5000 Failed requests: 0 Write errors: 0 Total transferred: 2630000 bytes HTML transferred: 1505000 bytes Requests per second: 1016.89 [#/sec] (mean) Time per request: 4.917 [ms] (mean) Time per request: 0.983 [ms] (mean, across all concurrent requests) Transfer rate: 522.27 [Kbytes/sec] received =========Configuration================= Tomcat maxKeepAlive connector is set to 100 Nginx keepalive_requests 100; worker_processes 4; worker_connections 4098; use epoll; multi_accept on; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227472,227472#msg-227472 From reallfqq-nginx at yahoo.fr Tue Jun 12 22:08:28 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 12 Jun 2012 18:08:28 -0400 Subject: http_log_module filter by status In-Reply-To: <20120612200844.GO31671@mdounin.ru> References: <20120612200844.GO31671@mdounin.ru> Message-ID: The documentation also says that if you don't want to redirect to another page, you can use a named redirection : location / { error_page 404 @404; } location @404 { access_log /path/to/log; } The wiki syntax seems to be wrong though, since it is using brackets and not braces. --- *B. R.* On Tue, Jun 12, 2012 at 4:08 PM, Maxim Dounin wrote: > Hello! > > On Sun, Jun 10, 2012 at 01:40:12PM -0400, karlseguin wrote: > > > I was interested in having nginx log 404s to their own file. > > Essentially, i _hate_ 404s, so I like to parse access logs find and > > report all 404s. However, as-is, parsing large access logs can be quite > > inefficient since 404s represent such a small % of the entire file. I > > was thinking nginx could filter it out at write-time: > > > > access_log not_found.log combined buffer=16K 404; > > > > > > Apologies for the lameness of the code, but this is what I came up > > with: > > https://gist.github.com/2906701 > > > > I certainly don't recommend anyone uses it, I'm mostly just looking for > > feedback. Is this better off in its own module? (there's so much code in > > the http_log_module that I want to leverage though). There's much more > > filtering that could go on that perhaps a new directive is a better > > approach: > > > > access_log_filter $status /(40\d)/ > > access_log_filter $method GET > > > > (which is certainly beyond my capabilities). > > > > Thoughts? > > error_page 404 /404.html; > > location = /404.html { > access_log /path/to/log; > } > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jun 13 00:34:03 2012 From: nginx-forum at nginx.us (karlseguin) Date: Tue, 12 Jun 2012 20:34:03 -0400 (EDT) Subject: http_log_module filter by status In-Reply-To: References: Message-ID: <012f5fe8db69f72c4d45c9bec88e7573.NginxMailingListEnglish@forum.nginx.org> heh, thanks :) much better! 
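Putting the suggestion above together, a minimal sketch, assuming a 404.html file exists under the document root and an illustrative log path; since the combined format records the original request line, the separate log still shows the URL that actually returned 404:

    error_page 404 /404.html;

    location = /404.html {
        # 404s are internally redirected here and written to their own log
        access_log /var/log/nginx/notfound.log combined;
    }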
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227401,227474#msg-227474 From nginx-forum at nginx.us Wed Jun 13 06:04:56 2012 From: nginx-forum at nginx.us (chenmin7249) Date: Wed, 13 Jun 2012 02:04:56 -0400 (EDT) Subject: nginx clear static file cache without restarting Message-ID: <611c94f4afe2d70105155c2d98e8b1e8.NginxMailingListEnglish@forum.nginx.org> i'm using nginx1.0.15 stable to serve static files, and with following configurations: sendfile on; tcp_nopush on; tcp_nodelay on; open_file_cache max=1048000 inactive=604800s; open_file_cache_min_uses 1; open_file_cache_valid 3600s; here is my problem, how can i clear the static file cache without restarting nginx without 'killall -HUP nginx'? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227476,227476#msg-227476 From ru at nginx.com Wed Jun 13 06:12:45 2012 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 13 Jun 2012 10:12:45 +0400 Subject: nginx default server not used In-Reply-To: <97221a02eacadcfdd878d65ba48ebf34.NginxMailingListEnglish@forum.nginx.org> References: <97221a02eacadcfdd878d65ba48ebf34.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120613061245.GA82810@lo0.su> On Sat, Jun 09, 2012 at 04:41:23AM -0400, valinor wrote: > Hello > > I have found another possible cause for nginx to ignore default server > I have encountered it just now, and it is not your case, but maybe it > would be helpful for someone > > At example: > > server { > listen 80 default_server; > server_name default.domain.dom; > ................ > } > > server { > listen :80; > server_name some.server.domain.dom; > .................. > } > > In this case, listening schema for the default server is _not_the_same_ > as for another server. > Considering the fact that we have described a server with the particular > ip address, our default server for that ip (although it is listening on > all IPs) would be ignored, and all queries to that particular ip, even > with "Host: default.domain.dom", would be directed to > "some.server.domain.dom" as it becomes a default (first-described) > server for this listening schema. > We have to explicitly describe "listen :80" on the > default server to enable it for this schema. > > WBW, valinor Here, you have two different listening addresses, 0.0.0.0:80 and :80. Each of them may have its own "default server". See here for details: http://nginx.org/en/docs/http/request_processing.html#mixed_name_ip_based_servers From mdounin at mdounin.ru Wed Jun 13 07:36:56 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Jun 2012 11:36:56 +0400 Subject: http_log_module filter by status In-Reply-To: References: <20120612200844.GO31671@mdounin.ru> Message-ID: <20120613073656.GP31671@mdounin.ru> Hello! On Tue, Jun 12, 2012 at 06:08:28PM -0400, B.R. wrote: > The documentation also > says that if you don't want to redirect to another page, you can use a > named redirection : > location / { > error_page 404 @404; > } > > location @404 { > access_log /path/to/log; > } This will cause another 404 as written. Note well that named location are really needed only if you don't want *internal* redirect to happen, i.e. want to preserve original uri (e.g. for an additional processing as in various fallback schemes) and/or really can't touch uri namespace for some reason. Maxim Dounin > > The wiki syntax seems to be wrong though, since it is using brackets and > not braces. > --- > *B. R.* > > > On Tue, Jun 12, 2012 at 4:08 PM, Maxim Dounin wrote: > > > Hello! 
> > > > On Sun, Jun 10, 2012 at 01:40:12PM -0400, karlseguin wrote: > > > > > I was interested in having nginx log 404s to their own file. > > > Essentially, i _hate_ 404s, so I like to parse access logs find and > > > report all 404s. However, as-is, parsing large access logs can be quite > > > inefficient since 404s represent such a small % of the entire file. I > > > was thinking nginx could filter it out at write-time: > > > > > > access_log not_found.log combined buffer=16K 404; > > > > > > > > > Apologies for the lameness of the code, but this is what I came up > > > with: > > > https://gist.github.com/2906701 > > > > > > I certainly don't recommend anyone uses it, I'm mostly just looking for > > > feedback. Is this better off in its own module? (there's so much code in > > > the http_log_module that I want to leverage though). There's much more > > > filtering that could go on that perhaps a new directive is a better > > > approach: > > > > > > access_log_filter $status /(40\d)/ > > > access_log_filter $method GET > > > > > > (which is certainly beyond my capabilities). > > > > > > Thoughts? > > > > error_page 404 /404.html; > > > > location = /404.html { > > access_log /path/to/log; > > } > > > > Maxim Dounin > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Wed Jun 13 10:13:54 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Jun 2012 14:13:54 +0400 Subject: Nginx proxy overhead with / out keep alive requests In-Reply-To: References: Message-ID: <20120613101354.GQ31671@mdounin.ru> Hello! On Tue, Jun 12, 2012 at 04:30:51PM -0400, paphillon wrote: > I have a test setup to measure the nginx overhead when plugged in front > of a Jboss tomcat server. > In tomcat I have deployed a test.jsp and use ab to measure the > performance with the following scenarios > > With out keep alive ab option > ab --> (ab -n 5000 -c 5 > http://jboss.tomcat.url/test.jsp) > ab --> --> (ab -n 5000 -c 5 > http://nginx.proxy.url/test.jsp) > > With keep alive ab option (-k) > ab --> (ab -n 5000 -c 5 -k > http://jboss.tomcat.url/test.jsp) > ab --> --> (ab -n 5000 -c 5 -k > http://nginx.proxy.url/test.jsp) > > The performance numbers WITHOUT keep alive is almost same BUT WITH keep > alive option the performance numbers are very different and nginx takes > about 5 secs more than the page accessed directly via tomcat. Tomcat > takes only 0.503430 on an average > > Why should there be so much of deviation with Keep alive? Is there > anything I am missing? Unless you've configured keepalive to upstreams (see http://nginx.org/r/keepalive) nginx will not use keepalive connections to tomcat. And the expected result is: nginx is in par with tomcat used directly without keepalive. Nnumbers extracted from your data for clarity: tomcat + keepalive: 9931.87 r/s tomcat w/o keepalive: 1073.32 r/s nginx + keepalive: 919.03 r/s nginx w/o keepalvie: 1016.89 r/s All nginx numbers are about tomcat's one without keepalive, as expected (see above). The limiting factor is clearly tomcat's connection establishment cost, which drops performance from 10k r/s to 1k r/s. You may want to configure upstream keepalive to cope with it if it's really matters in you case (i.e. 
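A minimal sketch of the upstream keepalive configuration referred to here (address, port and connection count are illustrative):

    upstream tomcat_backend {
        server 127.0.0.1:9080;
        keepalive 32;    # idle connections kept open to the backend, per worker
    }

    server {
        listen 80;

        location / {
            proxy_pass http://tomcat_backend;
            # needed so nginx speaks HTTP/1.1 to the upstream and
            # does not pass "Connection: close" along
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }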
if real requests are as fast as test one you've used; usually real request processing takes much more than connection establishment). See the link above for details. Maxim Dounin From mdounin at mdounin.ru Wed Jun 13 10:37:14 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Jun 2012 14:37:14 +0400 Subject: nginx clear static file cache without restarting In-Reply-To: <611c94f4afe2d70105155c2d98e8b1e8.NginxMailingListEnglish@forum.nginx.org> References: <611c94f4afe2d70105155c2d98e8b1e8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120613103713.GS31671@mdounin.ru> Hello! On Wed, Jun 13, 2012 at 02:04:56AM -0400, chenmin7249 wrote: > i'm using nginx1.0.15 stable to serve static files, and with following > configurations: > > sendfile on; > tcp_nopush on; > tcp_nodelay on; > open_file_cache max=1048000 inactive=604800s; > open_file_cache_min_uses 1; > open_file_cache_valid 3600s; > > here is my problem, how can i clear the static file cache without > restarting nginx without 'killall -HUP nginx'? No, the only way to reset open file cache is to restart worker processes. Sending SIGHUP to let nginx start new worker processes and gracefully shutdown old ones is the easiest way to do this. And just to make sure it's clear: restarting worker processes via SIGHUP doesn't imply any downtime. All connections will be accepted and all requests will be handled, no animals will be harmed and so on. Maxim Dounin From agentzh at gmail.com Wed Jun 13 10:50:17 2012 From: agentzh at gmail.com (agentzh) Date: Wed, 13 Jun 2012 18:50:17 +0800 Subject: [ANN] ngx_openresty stable version 1.0.15.10 released Message-ID: Hello, folks! I'm happy to announce that the new stable version of ngx_openresty, 1.0.15.10, has just been released: ? ?http://openresty.org/#Download This release is essentially equivalent to the devel version 1.0.15.9 except excluding all the vim backup files *~ from the source code distribution. thanks Xiaoyu Chen. Components bundled: * LuaJIT-2.0.0-beta10 * array-var-nginx-module-0.03rc1 * auth-request-nginx-module-0.2 * drizzle-nginx-module-0.1.2rc7 * echo-nginx-module-0.38rc2 * encrypted-session-nginx-module-0.02 * form-input-nginx-module-0.07rc5 * headers-more-nginx-module-0.17rc1 * iconv-nginx-module-0.10rc7 * lua-5.1.4 * lua-cjson-1.0.3 * lua-rds-parser-0.05 * lua-redis-parser-0.09 * lua-resty-memcached-0.07 * lua-resty-mysql-0.07 * lua-resty-redis-0.09 * lua-resty-string-0.06 * lua-resty-upload-0.03 * memc-nginx-module-0.13rc3 * nginx-1.0.15 * ngx_coolkit-0.2rc1 * ngx_devel_kit-0.2.17 * ngx_lua-0.5.0rc30 * ngx_postgres-0.9 * rds-csv-nginx-module-0.05rc2 * rds-json-nginx-module-0.12rc10 * redis-nginx-module-0.3.6 * redis2-nginx-module-0.08rc4 * set-misc-nginx-module-0.22rc8 * srcache-nginx-module-0.13rc8 * upstream-keepalive-nginx-module-0.7 * xss-nginx-module-0.03rc9 You can check out the complete change log here for comparison with the last stable version 1.0.11.28: ??? http://openresty.org/#ChangeLog1000015 Special thanks go to all our contributors and users for making this release happen :) OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules, as well? as most of their external dependencies. See OpenResty's homepage for more details: ? http://openresty.org/ Have fun! -agentzh From mdounin at mdounin.ru Wed Jun 13 11:06:00 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Jun 2012 15:06:00 +0400 Subject: PROPFIND and OPTIONS support in Nginx? 
In-Reply-To: References: Message-ID: <20120613110559.GU31671@mdounin.ru> Hello! On Mon, Jun 11, 2012 at 01:32:39AM -0400, Floren Munteanu wrote: > I was wondering why the Nginx devs do not add WebDAV support for > PROPFIND and OPTIONS. There is a module available in Github but > still, I would love to have this officially supported by Nginx: > https://github.com/arut/nginx-dav-ext-module > > Are there any plans to have full WebDAV support implemented? This > would be a nice addition into 1.3.0 branch. It's in our TODO, though no ETA yet. Roman's implementation is likely ok if you can't wait, but it can't be imported as is due to number of reasons (the one recently reported on russian list: it doesn't compile on Windows). Maxim Dounin From gabor.farkas at gmail.com Wed Jun 13 12:09:32 2012 From: gabor.farkas at gmail.com (=?ISO-8859-1?Q?G=E1bor_Farkas?=) Date: Wed, 13 Jun 2012 14:09:32 +0200 Subject: proxy_next_upstream, only "connect" timeout? Message-ID: hi, is there any way to tell nginx to fallback to the next upstream in case of timeout, but only if the timeout occured during connection (for example because the upstream's backlog is full), and not when the upstream is already processing the request? basically the case specified by the proxy_connect_timeout. it seems if i do: "proxy_next_upstream timeout", then this can happen: 1. nginx sends the request to upstream1 2. upstream1 begins processing the request, stores data in the db, etc 3. while upstream1 creates the response, the timeout happens, and nginx sends the request to upstream2 4. upstream2 begins processing the request, stores data in the db, etc at this point the request was processed twice, data were written in the database twice etc. i would like to avoid it. but, on the other hand, if i say "proxy_next_upstream off", then this can happen: 1. nginx sends the request to upstream1 2. upstream1's socket-backlog is full, so nginx returns a http504 to the user, even if there is upstream2, that theoretically could have served the request. any ideas how to handle this situation? thanks, gabor From nginx-forum at nginx.us Wed Jun 13 14:31:02 2012 From: nginx-forum at nginx.us (caleboconnell) Date: Wed, 13 Jun 2012 10:31:02 -0400 (EDT) Subject: rewrite url segment staging site and live site In-Reply-To: <20120612172316.GB6210@craic.sysops.org> References: <20120612172316.GB6210@craic.sysops.org> Message-ID: That's exactly what I thought, but when I used (.*) at the end and used the $1 I kept getting 404. When I made it the way it is now, it worked on my staging site as expected but not on my live site. I can confirm that on neither site, the following does not work (404 error): location /old { rewrite ^/old/(.*)$ /new/$1 permanent; } the following will redirect anything from old to the landing page for the new section location /old { rewrite ^/old? /new permanent; } here is the current config, with prior rewrites before this location block: location / { index index.php; try_files $uri $uri/ @ee; } location @ee { rewrite ^(.*) /index.php?/$1 last; } location /old { rewrite ^/old? /new permanent; } I tired to use (.*) and $2 in hopes that the prior $1 wasn't breaking it. Still no luck. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227464,227496#msg-227496 From reallfqq-nginx at yahoo.fr Wed Jun 13 16:00:21 2012 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Wed, 13 Jun 2012 12:00:21 -0400 Subject: rewrite url segment staging site and live site In-Reply-To: References: <20120612172316.GB6210@craic.sysops.org> Message-ID: Try to use your regex in the location path too. location '/old' only catches '/old', not even '/old/' and of course nothing lie '/old/....' Then, inside, you rewrite only URI which start with '/old/', so *in fine*, nothing will be ever redirected. The machine does precisely what you asked it to do. --- *B. R.* On Wed, Jun 13, 2012 at 10:31 AM, caleboconnell wrote: > That's exactly what I thought, but when I used (.*) at the end and used > the $1 I kept getting 404. When I made it the way it is now, it worked > on my staging site as expected but not on my live site. > > I can confirm that on neither site, the following does not work (404 > error): > > location /old { > rewrite ^/old/(.*)$ /new/$1 permanent; > } > > the following will redirect anything from old to the landing page for > the new section > > location /old { > rewrite ^/old? /new permanent; > } > > > here is the current config, with prior rewrites before this location > block: > > location / { > index index.php; > try_files $uri $uri/ @ee; > } > > location @ee { > rewrite ^(.*) /index.php?/$1 last; > } > > location /old { > rewrite ^/old? /new permanent; > } > > I tired to use (.*) and $2 in hopes that the prior $1 wasn't breaking > it. Still no luck. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,227464,227496#msg-227496 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jun 13 16:31:35 2012 From: nginx-forum at nginx.us (caleboconnell) Date: Wed, 13 Jun 2012 12:31:35 -0400 (EDT) Subject: rewrite url segment staging site and live site In-Reply-To: References: Message-ID: The following worked. Thanks for helping me with this. Hopefully it answers another newbie's question someday. location ^~ /old { rewrite ^/old(.*)$ /new$1 permanent; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227464,227499#msg-227499 From nginx-forum at nginx.us Wed Jun 13 17:52:32 2012 From: nginx-forum at nginx.us (paphillon) Date: Wed, 13 Jun 2012 13:52:32 -0400 (EDT) Subject: Nginx proxy overhead with / out keep alive requests In-Reply-To: <20120613101354.GQ31671@mdounin.ru> References: <20120613101354.GQ31671@mdounin.ru> Message-ID: <5cf1ee2b092ea99b590a53fa9fe416d0.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, Thanks for the insight! I had a hunch on the keepalive setting from nginx to tomcat, but did not really find anything in the document about that. I will give it a try and let you know the results. I am not sure if I will be able to use this keepalive in production as it requires upstream config and differs from the way we have architect Nginx configuration. I am using nginx as a dynamic proxy which routes the clients calls (primarily web service calls) to different servers depending on the key the client passes via cookie or http headers. The url's are in a map in the below form map $http_void $header_based_url{ default "no_http_header"; key1 server_instance_1_url; key2 server_instance_2_url; ...... 
} location /xxx { proxy_pass $header_based_url; } And to make it more simple to add / maintain the server url's, the URL's are stored in a flat file I am not sure how I can use the keepalive over here other than the upstream or how do I make this upstream config compatible. Regards Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227472,227503#msg-227503 From nginx-forum at nginx.us Wed Jun 13 18:29:23 2012 From: nginx-forum at nginx.us (parttis) Date: Wed, 13 Jun 2012 14:29:23 -0400 (EDT) Subject: Code coverage on nginx using lcov (gcov) In-Reply-To: <201206080059.57528.ne@vbart.ru> References: <201206080059.57528.ne@vbart.ru> Message-ID: Valentin V. Bartenev Wrote: > Looks like it measures only the master process, > which does not > process requests. I don't know much about gcov, > probably it can > measure multi-process applications, but also you > can set the > "master_process" directive to "off". > > http://nginx.org/r/master_process > > wbr, Valentin V. Bartenev Thank you Valentin for your kind help. Now I am able to do code coverage measurements. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227329,227504#msg-227504 From nginx-forum at nginx.us Wed Jun 13 20:50:08 2012 From: nginx-forum at nginx.us (paphillon) Date: Wed, 13 Jun 2012 16:50:08 -0400 (EDT) Subject: Nginx proxy overhead with / out keep alive requests In-Reply-To: <5cf1ee2b092ea99b590a53fa9fe416d0.NginxMailingListEnglish@forum.nginx.org> References: <20120613101354.GQ31671@mdounin.ru> <5cf1ee2b092ea99b590a53fa9fe416d0.NginxMailingListEnglish@forum.nginx.org> Message-ID: upstream with keepalive has the results almost comparable with tomcat, so yes keepalive between nginx and tomcat really does the trick. :) Unfortunately we cannot use the upstream as explained in my previous post, unless upstream can offer something like below map $http_void $header_based_url{ default "no_http_header"; #These Key => URL's are currently stored in a flat file key1 server_instance_1_url_host; key2 server_instance_2_url_host; ...... } upstream http_backend { server $header_based_url_host; keepalive 100; } location /xxx { proxy_pass http_backend; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227472,227506#msg-227506 From p at state-of-mind.de Wed Jun 13 22:43:00 2012 From: p at state-of-mind.de (Patrick Ben Koetter) Date: Thu, 14 Jun 2012 00:43:00 +0200 Subject: Running mailman within a domain Message-ID: <20120613224300.GP18826@state-of-mind.de> Greetings, this is my first take at nginx. I try to add /cgi-bin/mailman/... to an existing server instance (mail.sys4.de). At the moment I can call scripts directly e.g. works. What I fail to accieve is getting access to lists e.g. . The Browser receives a 403 and the fast_cgi wrapper reports: Cannot chdir to script directory (/usr/lib/cgi-bin/mailman/listinfo)" while reading response header from upstream What is it I am doing wrong? 
This is my nginx configuration to include mailman into the website: location /cgi-bin/mailman { root /usr/lib/; fastcgi_split_path_info (^/cgi-bin/mailman/[^/]*)(.*)$; include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; fastcgi_intercept_errors off; fastcgi_pass unix:/var/run/fcgiwrap.socket; } location /images/mailman { alias /usr/share/images/mailman; } location /pipermail { alias /var/lib/mailman/archives/public; autoindex on; } /etc/nginx/fastcgi_params contains these settings: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $https; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; My error log gives this output: 2012/06/14 00:39:48 [debug] 27873#0: *13 post event 0000000002292AB0 2012/06/14 00:39:48 [debug] 27873#0: *13 post event 00000000022A62C0 2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event 00000000022A62C0 2012/06/14 00:39:48 [debug] 27873#0: *13 http empty handler 2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event 0000000002292AB0 2012/06/14 00:39:48 [debug] 27873#0: *13 http keepalive handler 2012/06/14 00:39:48 [debug] 27873#0: *13 malloc: 00000000022C7AE0:1024 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL_read: 1 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL_read: 404 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL_read: -1 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL_get_error: 2 2012/06/14 00:39:48 [debug] 27873#0: *13 reusable connection: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 malloc: 000000000224D310:1296 2012/06/14 00:39:48 [debug] 27873#0: *13 posix_memalign: 000000000228B240:4096 @16 2012/06/14 00:39:48 [debug] 27873#0: *13 http process request line 2012/06/14 00:39:48 [debug] 27873#0: *13 http request line: "GET /cgi-bin/mailman/listinfo/users HTTP/1.1" 2012/06/14 00:39:48 [debug] 27873#0: *13 http uri: "/cgi-bin/mailman/listinfo/users" 2012/06/14 00:39:48 [debug] 27873#0: *13 http args: "" 2012/06/14 00:39:48 [debug] 27873#0: *13 http exten: "" 2012/06/14 00:39:48 [debug] 27873#0: *13 http process request header line 2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "Host: mail.sys4.de" 2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:13.0) Gecko/20100101 Firefox/13.0" 2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" 2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3" 2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "Accept-Encoding: gzip, deflate" 2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "DNT: 1" 2012/06/14 00:39:48 
[debug] 27873#0: *13 http header: "Connection: keep-alive" 2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "Referer: https://mail.sys4.de/cgi-bin/mailman/listinfo" 2012/06/14 00:39:48 [debug] 27873#0: *13 http header done 2012/06/14 00:39:48 [debug] 27873#0: *13 event timer del: 10: 1339627243545 2012/06/14 00:39:48 [debug] 27873#0: *13 generic phase: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 rewrite phase: 1 2012/06/14 00:39:48 [debug] 27873#0: *13 http script regex: "^/$" 2012/06/14 00:39:48 [notice] 27873#0: *13 "^/$" does not match "/cgi-bin/mailman/listinfo/users", client: 178.27.33.0, server: mail.sys4.de, request: "GET /cgi-bin/mailman/listinfo/users HTTP/1.1", host: "mail.sys4.de", referrer: "https://mail.sys4.de/cgi-bin/mailman/listinfo" 2012/06/14 00:39:48 [debug] 27873#0: *13 test location: "/images/mailman" 2012/06/14 00:39:48 [debug] 27873#0: *13 test location: "/cgi-bin/mailman" 2012/06/14 00:39:48 [debug] 27873#0: *13 using configuration "/cgi-bin/mailman" 2012/06/14 00:39:48 [debug] 27873#0: *13 http cl:-1 max:1048576 2012/06/14 00:39:48 [debug] 27873#0: *13 rewrite phase: 3 2012/06/14 00:39:48 [debug] 27873#0: *13 post rewrite phase: 4 2012/06/14 00:39:48 [debug] 27873#0: *13 generic phase: 5 2012/06/14 00:39:48 [debug] 27873#0: *13 generic phase: 6 2012/06/14 00:39:48 [debug] 27873#0: *13 generic phase: 7 2012/06/14 00:39:48 [debug] 27873#0: *13 access phase: 8 2012/06/14 00:39:48 [debug] 27873#0: *13 access phase: 9 2012/06/14 00:39:48 [debug] 27873#0: *13 access phase: 10 2012/06/14 00:39:48 [debug] 27873#0: *13 post access phase: 11 2012/06/14 00:39:48 [debug] 27873#0: *13 posix_memalign: 00000000022D15A0:4096 @16 2012/06/14 00:39:48 [debug] 27873#0: *13 http init upstream, client timer: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "QUERY_STRING" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "QUERY_STRING: " 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "REQUEST_METHOD" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "GET" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "REQUEST_METHOD: GET" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "CONTENT_TYPE" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "CONTENT_TYPE: " 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "CONTENT_LENGTH" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "CONTENT_LENGTH: " 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SCRIPT_FILENAME" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/usr/lib/cgi-bin/mailman/listinfo/users" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SCRIPT_FILENAME: /usr/lib/cgi-bin/mailman/listinfo/users" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SCRIPT_NAME" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/cgi-bin/mailman/listinfo" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SCRIPT_NAME: /cgi-bin/mailman/listinfo" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "REQUEST_URI" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/cgi-bin/mailman/listinfo/users" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "REQUEST_URI: /cgi-bin/mailman/listinfo/users" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "DOCUMENT_URI" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/cgi-bin/mailman/listinfo/users" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "DOCUMENT_URI: /cgi-bin/mailman/listinfo/users" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: 
"DOCUMENT_ROOT" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/usr/lib" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "DOCUMENT_ROOT: /usr/lib" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SERVER_PROTOCOL" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "HTTP/1.1" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SERVER_PROTOCOL: HTTP/1.1" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "GATEWAY_INTERFACE" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "CGI/1.1" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "GATEWAY_INTERFACE: CGI/1.1" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SERVER_SOFTWARE" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "nginx/" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "1.1.19" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SERVER_SOFTWARE: nginx/1.1.19" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "REMOTE_ADDR" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "178.27.33.0" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "REMOTE_ADDR: 178.27.33.0" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "REMOTE_PORT" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "35670" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "REMOTE_PORT: 35670" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SERVER_ADDR" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "194.126.158.57" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SERVER_ADDR: 194.126.158.57" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SERVER_PORT" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "443" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SERVER_PORT: 443" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SERVER_NAME" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "mail.sys4.de" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SERVER_NAME: mail.sys4.de" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "HTTPS" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "on" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTPS: on" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "REDIRECT_STATUS" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "200" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "REDIRECT_STATUS: 200" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SCRIPT_FILENAME" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/usr/lib" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/cgi-bin/mailman/listinfo" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SCRIPT_FILENAME: /usr/lib/cgi-bin/mailman/listinfo" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "PATH_INFO" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/users" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "PATH_INFO: /users" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "PATH_TRANSLATED" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/usr/lib" 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/users" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "PATH_TRANSLATED: /usr/lib/users" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTP_HOST: mail.sys4.de" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTP_USER_AGENT: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:13.0) Gecko/20100101 
Firefox/13.0" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTP_ACCEPT: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTP_ACCEPT_LANGUAGE: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTP_ACCEPT_ENCODING: gzip, deflate" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTP_DNT: 1" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTP_CONNECTION: keep-alive" 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTP_REFERER: https://mail.sys4.de/cgi-bin/mailman/listinfo" 2012/06/14 00:39:48 [debug] 27873#0: *13 http cleanup add: 00000000022D1C08 2012/06/14 00:39:48 [debug] 27873#0: *13 get rr peer, try: 1 2012/06/14 00:39:48 [debug] 27873#0: *13 socket 12 2012/06/14 00:39:48 [debug] 27873#0: *13 epoll add connection: fd:12 ev:80000005 2012/06/14 00:39:48 [debug] 27873#0: *13 connect to unix:/var/run/fcgiwrap.socket, fd:12 #15 2012/06/14 00:39:48 [debug] 27873#0: *13 connected 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream connect: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 posix_memalign: 00000000022C7130:128 @16 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream send request 2012/06/14 00:39:48 [debug] 27873#0: *13 chain writer buf fl:0 s:1008 2012/06/14 00:39:48 [debug] 27873#0: *13 chain writer in: 00000000022D1C40 2012/06/14 00:39:48 [debug] 27873#0: *13 writev: 1008 2012/06/14 00:39:48 [debug] 27873#0: *13 chain writer out: 0000000000000000 2012/06/14 00:39:48 [debug] 27873#0: *13 event timer add: 12: 60000:1339627248804 2012/06/14 00:39:48 [debug] 27873#0: *13 http finalize request: -4, "/cgi-bin/mailman/listinfo/users?" a:1, c:2 2012/06/14 00:39:48 [debug] 27873#0: *13 http request count:2 blk:0 2012/06/14 00:39:48 [debug] 27873#0: *13 post event 00000000022A6328 2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event 00000000022A6328 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream request: "/cgi-bin/mailman/listinfo/users?" 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream dummy handler 2012/06/14 00:39:48 [debug] 27873#0: *13 post event 00000000022A6328 2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event 00000000022A6328 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream request: "/cgi-bin/mailman/listinfo/users?" 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream dummy handler 2012/06/14 00:39:48 [debug] 27873#0: *13 post event 0000000002292B18 2012/06/14 00:39:48 [debug] 27873#0: *13 post event 00000000022A6328 2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event 00000000022A6328 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream request: "/cgi-bin/mailman/listinfo/users?" 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream dummy handler 2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event 0000000002292B18 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream request: "/cgi-bin/mailman/listinfo/users?" 
2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream process header 2012/06/14 00:39:48 [debug] 27873#0: *13 malloc: 00000000022525C0:4096 2012/06/14 00:39:48 [debug] 27873#0: *13 recv: fd:12 176 of 4096 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 07 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 45 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 03 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record length: 69 2012/06/14 00:39:48 [error] 27873#0: *13 FastCGI sent in stderr: "Cannot chdir to script directory (/usr/lib/cgi-bin/mailman/listinfo)" while reading response header from upstream, client: 178.27.33.0, server: mail.sys4.de, request: "GET /cgi-bin/mailman/listinfo/users HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "mail.sys4.de", referrer: "https://mail.sys4.de/cgi-bin/mailman/listinfo" 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 07 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record length: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 06 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 33 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 05 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record length: 51 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi parser: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi header: "Status: 403 Forbidden" 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi parser: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi header: "Content-type: text/plain" 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi parser: 1 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi header done 2012/06/14 00:39:48 [debug] 27873#0: *13 xslt filter header 2012/06/14 00:39:48 [debug] 27873#0: *13 HTTP/1.1 403 Forbidden Server: nginx/1.1.19 Date: Wed, 13 Jun 2012 22:39:48 GMT Content-Type: text/plain Transfer-Encoding: chunked Connection: keep-alive 2012/06/14 00:39:48 [debug] 27873#0: *13 write new buf t:1 f:0 00000000022D1E90, pos 00000000022D1E90, size: 163 file: 0, size: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 http write filter: l:0 f:0 s:163 2012/06/14 00:39:48 [debug] 27873#0: *13 http cacheable: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream process upstream 2012/06/14 
00:39:48 [debug] 27873#0: *13 pipe read upstream: 1 2012/06/14 00:39:48 [debug] 27873#0: *13 pipe preread: 32 2012/06/14 00:39:48 [debug] 27873#0: *13 readv: 1:3920 2012/06/14 00:39:48 [debug] 27873#0: *13 pipe recv chain: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 pipe buf free s:0 t:1 f:0 00000000022525C0, pos 0000000002252650, size: 32 file: 0, size: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 pipe length: -1 2012/06/14 00:39:48 [debug] 27873#0: *13 input buf #0 0000000002252650 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 06 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record length: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi closed stdout 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 03 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 08 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record length: 8 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi sent end request 2012/06/14 00:39:48 [debug] 27873#0: *13 input buf 0000000002252650 3 2012/06/14 00:39:48 [debug] 27873#0: *13 pipe write downstream: 1 2012/06/14 00:39:48 [debug] 27873#0: *13 pipe write downstream flush in 2012/06/14 00:39:48 [debug] 27873#0: *13 http output filter "/cgi-bin/mailman/listinfo/users?" 2012/06/14 00:39:48 [debug] 27873#0: *13 http copy filter: "/cgi-bin/mailman/listinfo/users?" 2012/06/14 00:39:48 [debug] 27873#0: *13 image filter 2012/06/14 00:39:48 [debug] 27873#0: *13 xslt filter body 2012/06/14 00:39:48 [debug] 27873#0: *13 http postpone filter "/cgi-bin/mailman/listinfo/users?" 00000000022D20A0 2012/06/14 00:39:48 [debug] 27873#0: *13 http chunk: 3 2012/06/14 00:39:48 [debug] 27873#0: *13 write old buf t:1 f:0 00000000022D1E90, pos 00000000022D1E90, size: 163 file: 0, size: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 write new buf t:1 f:0 00000000022D2188, pos 00000000022D2188, size: 3 file: 0, size: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 write new buf t:1 f:0 00000000022525C0, pos 0000000002252650, size: 3 file: 0, size: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 write new buf t:0 f:0 0000000000000000, pos 000000000049DD7D, size: 2 file: 0, size: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 http write filter: l:0 f:0 s:171 2012/06/14 00:39:48 [debug] 27873#0: *13 http copy filter: 0 "/cgi-bin/mailman/listinfo/users?" 
2012/06/14 00:39:48 [debug] 27873#0: *13 pipe write downstream done 2012/06/14 00:39:48 [debug] 27873#0: *13 event timer: 12, old: 1339627248804, new: 1339627248805 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream exit: 0000000000000000 2012/06/14 00:39:48 [debug] 27873#0: *13 finalize http upstream request: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 finalize http fastcgi request 2012/06/14 00:39:48 [debug] 27873#0: *13 free rr peer 1 0 2012/06/14 00:39:48 [debug] 27873#0: *13 close http upstream connection: 12 2012/06/14 00:39:48 [debug] 27873#0: *13 free: 00000000022C7130, unused: 48 2012/06/14 00:39:48 [debug] 27873#0: *13 event timer del: 12: 1339627248804 2012/06/14 00:39:48 [debug] 27873#0: *13 reusable connection: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream temp fd: -1 2012/06/14 00:39:48 [debug] 27873#0: *13 http output filter "/cgi-bin/mailman/listinfo/users?" 2012/06/14 00:39:48 [debug] 27873#0: *13 http copy filter: "/cgi-bin/mailman/listinfo/users?" 2012/06/14 00:39:48 [debug] 27873#0: *13 image filter 2012/06/14 00:39:48 [debug] 27873#0: *13 xslt filter body 2012/06/14 00:39:48 [debug] 27873#0: *13 http postpone filter "/cgi-bin/mailman/listinfo/users?" 00007FFF43D71E70 2012/06/14 00:39:48 [debug] 27873#0: *13 http chunk: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 write old buf t:1 f:0 00000000022D1E90, pos 00000000022D1E90, size: 163 file: 0, size: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 write old buf t:1 f:0 00000000022D2188, pos 00000000022D2188, size: 3 file: 0, size: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 write old buf t:1 f:0 00000000022525C0, pos 0000000002252650, size: 3 file: 0, size: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 write old buf t:0 f:0 0000000000000000, pos 000000000049DD7D, size: 2 file: 0, size: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 write new buf t:0 f:0 0000000000000000, pos 000000000049DD7A, size: 5 file: 0, size: 0 2012/06/14 00:39:48 [debug] 27873#0: *13 http write filter: l:1 f:0 s:176 2012/06/14 00:39:48 [debug] 27873#0: *13 http write filter limit 0 2012/06/14 00:39:48 [debug] 27873#0: *13 malloc: 00000000022B9930:16384 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL buf copy: 163 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL buf copy: 3 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL buf copy: 3 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL buf copy: 2 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL buf copy: 5 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL to write: 176 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL_write: 176 2012/06/14 00:39:48 [debug] 27873#0: *13 http write filter 0000000000000000 2012/06/14 00:39:48 [debug] 27873#0: *13 http copy filter: 0 "/cgi-bin/mailman/listinfo/users?" 2012/06/14 00:39:48 [debug] 27873#0: *13 http finalize request: 0, "/cgi-bin/mailman/listinfo/users?" 
a:1, c:1 2012/06/14 00:39:48 [debug] 27873#0: *13 set http keepalive handler 2012/06/14 00:39:48 [debug] 27873#0: *13 http close request 2012/06/14 00:39:48 [debug] 27873#0: *13 http log handler 2012/06/14 00:39:48 [debug] 27873#0: *13 free: 00000000022525C0 2012/06/14 00:39:48 [debug] 27873#0: *13 free: 000000000228B240, unused: 6 2012/06/14 00:39:48 [debug] 27873#0: *13 free: 00000000022D15A0, unused: 444 2012/06/14 00:39:48 [debug] 27873#0: *13 event timer add: 10: 65000:1339627253805 2012/06/14 00:39:48 [debug] 27873#0: *13 free: 000000000224D310 2012/06/14 00:39:48 [debug] 27873#0: *13 free: 00000000022C7AE0 2012/06/14 00:39:48 [debug] 27873#0: *13 hc free: 0000000000000000 0 2012/06/14 00:39:48 [debug] 27873#0: *13 hc busy: 0000000000000000 0 2012/06/14 00:39:48 [debug] 27873#0: *13 free: 00000000022B9930 2012/06/14 00:39:48 [debug] 27873#0: *13 reusable connection: 1

Thanks

p at rick

--
state of mind ()

http://www.state-of-mind.de

Franziskanerstraße 15     Telefon  +49 89 3090 4664
81669 München             Telefax  +49 89 3090 4666

Amtsgericht München       Partnerschaftsregister PR 563

From adrianhayter at gmail.com Thu Jun 14 06:26:20 2012
From: adrianhayter at gmail.com (Adrian Hayter)
Date: Thu, 14 Jun 2012 07:26:20 +0100
Subject: 500 Internal Server Error only on wp-admin/ directory...
Message-ID: <4C382FFF-14C1-4431-8476-E8CFB921A49C@gmail.com>

I upgraded my WordPress install (via svn), and now trying to access the wp-admin/ directory results in a 500 Internal Server Error, despite the fact that my config hasn't changed at all. The most annoying part is that no error messages seem to have been logged by either nginx or php-fpm. The front-end of the site appears to be working, which just adds to the strangeness.

I've looked at the permissions; they are all fine. nginx should have access, and like I said, the front-end works.

So my question(s): does anyone know what might be causing this, and are there any helpful tutorials on increasing the amount of information that nginx (and other applications like PHP, MySQL, etc.) writes to its error logs?

Cheers,
- Adrian

From al-nginx at none.at Thu Jun 14 08:53:11 2012
From: al-nginx at none.at (Aleksandar Lazic)
Date: Thu, 14 Jun 2012 10:53:11 +0200
Subject: Running mailman within a domain
In-Reply-To: <20120613224300.GP18826@state-of-mind.de>
References: <20120613224300.GP18826@state-of-mind.de>
Message-ID: 

Hi Patrick,

On 14-06-2012 00:43, Patrick Ben Koetter wrote:

> Greetings,
>
> this is my first take at nginx. I try to add /cgi-bin/mailman/... to an
> existing server instance (mail.sys4.de). At the moment I can call scripts
> directly e.g. works.
>
> What I fail to achieve is getting access to lists e.g.
> . The browser receives a
> 403 and the fast_cgi wrapper reports:
>
>     Cannot chdir to script directory (/usr/lib/cgi-bin/mailman/listinfo)" while
>     reading response header from upstream
>
> What is it I am doing wrong?

I don't know too much about Mailman, so I'll try to guess: /usr/lib/cgi-bin/mailman is the script which should be executed, right? And /listinfo & /listinfo/users should be the path info, right?
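
For reference: fastcgi_split_path_info assigns the first capture of its regex to $fastcgi_script_name and the second to $fastcgi_path_info, so the choice of regex decides which path fcgiwrap is handed as SCRIPT_FILENAME. Below is a minimal sketch (not taken from the original mails) of how the current regex and the regex suggested further down split GET /cgi-bin/mailman/listinfo/users, assuming "root /usr/lib/;" and the SCRIPT_FILENAME override from the configuration quoted below. The first set of values matches the debug log above; the second follows from the pcretest output shown later.

    # current regex (from the quoted config)
    fastcgi_split_path_info (^/cgi-bin/mailman/[^/]*)(.*)$;
    #   $fastcgi_script_name = /cgi-bin/mailman/listinfo
    #   $fastcgi_path_info   = /users
    #   SCRIPT_FILENAME      = /usr/lib/cgi-bin/mailman/listinfo

    # suggested regex (see below)
    fastcgi_split_path_info (^/cgi-bin/mailman)/(.*)$;
    #   $fastcgi_script_name = /cgi-bin/mailman
    #   $fastcgi_path_info   = listinfo/users
    #   SCRIPT_FILENAME      = /usr/lib/cgi-bin/mailman

Whether fcgiwrap accepts either SCRIPT_FILENAME depends on where the Mailman CGI binaries actually live on the system; the sketch only illustrates how the split behaves, not which variant avoids the 403.
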
> This is my nginx configuration to include mailman into the website:
>
>     location /cgi-bin/mailman {
>         root /usr/lib/;

my suggestion:

-   fastcgi_split_path_info (^/cgi-bin/mailman/[^/]*)(.*)$;
+   fastcgi_split_path_info (^/cgi-bin/mailman)/(.*)$;

>         include /etc/nginx/fastcgi_params;
>         fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
>         fastcgi_param PATH_INFO $fastcgi_path_info;
>         fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
>         fastcgi_intercept_errors off;
>         fastcgi_pass unix:/var/run/fcgiwrap.socket;
>     }
>     location /images/mailman {
>         alias /usr/share/images/mailman;
>     }
>     location /pipermail {
>         alias /var/lib/mailman/archives/public;
>         autoindex on;
>     }

output of pcretest:

###
  re> |(^/cgi-bin/mailman)/(.*)$|
data> /cgi-bin/mailman/listinfo
 0: /cgi-bin/mailman/listinfo
 1: /cgi-bin/mailman
 2: listinfo
data> /cgi-bin/mailman/listinfo/users
 0: /cgi-bin/mailman/listinfo/users
 1: /cgi-bin/mailman
 2: listinfo/users
###

> /etc/nginx/fastcgi_params contains these settings:
[snipp]
> My error log gives this output:
[snipp]
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http uri: "/cgi-bin/mailman/listinfo/users"
[snipp]
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SCRIPT_FILENAME"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/usr/lib/cgi-bin/mailman/listinfo/users"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SCRIPT_FILENAME: /usr/lib/cgi-bin/mailman/listinfo/users"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SCRIPT_NAME"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/cgi-bin/mailman/listinfo"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SCRIPT_NAME: /cgi-bin/mailman/listinfo"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "REQUEST_URI"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/cgi-bin/mailman/listinfo/users"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "REQUEST_URI: /cgi-bin/mailman/listinfo/users"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "DOCUMENT_URI"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/cgi-bin/mailman/listinfo/users"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "DOCUMENT_URI: /cgi-bin/mailman/listinfo/users"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "DOCUMENT_ROOT"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/usr/lib"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "DOCUMENT_ROOT: /usr/lib"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SERVER_PROTOCOL"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "HTTP/1.1"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SERVER_PROTOCOL: HTTP/1.1"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "GATEWAY_INTERFACE"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "CGI/1.1"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "GATEWAY_INTERFACE: CGI/1.1"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SERVER_SOFTWARE"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "nginx/"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "1.1.19"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SERVER_SOFTWARE: nginx/1.1.19"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "REMOTE_ADDR"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "178.27.33.0"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "REMOTE_ADDR: 178.27.33.0"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "REMOTE_PORT"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "35670"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "REMOTE_PORT: 35670"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SERVER_ADDR"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "194.126.158.57"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SERVER_ADDR: 194.126.158.57"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SERVER_PORT"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "443"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SERVER_PORT: 443"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SERVER_NAME"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "mail.sys4.de"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SERVER_NAME: mail.sys4.de"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "HTTPS"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "on"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTPS: on"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "REDIRECT_STATUS"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "200"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "REDIRECT_STATUS: 200"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SCRIPT_FILENAME"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/usr/lib"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/cgi-bin/mailman/listinfo"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SCRIPT_FILENAME: /usr/lib/cgi-bin/mailman/listinfo"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "PATH_INFO"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/users"
> 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "PATH_INFO: /users"
[snipp]
> Thanks
>
> p at rick

I'm sure you have read this already, but I post it here as a reminder ;-)

http://www.nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_split_path_info

Hth
Aleks

From ft at falkotimme.com Thu Jun 14 09:16:57 2012
From: ft at falkotimme.com (Falko Timme)
Date: Thu, 14 Jun 2012 11:16:57 +0200
Subject: Running mailman within a domain
References: <20120613224300.GP18826@state-of-mind.de>
Message-ID: 

Hi,

if you use Debian or Ubuntu, you can get nginx to work with Mailman as follows:

http://www.howtoforge.com/running-mailman-on-nginx-lemp-on-debian-squeeze-ubuntu-11.04-11.10

Best Regards,

Falko Timme

Ovelgönner Weg 43
21335 Lüneburg
Germany

Email: ft at falkotimme.com
URL: http://www.falkotimme.com

Projects:
Timme Hosting: https://timmehosting.de
ISPConfig: http://www.ispconfig.org
HowtoForge: http://www.howtoforge.com

----- Original Message -----
From: "Patrick Ben Koetter"

To: Sent: Thursday, June 14, 2012 12:43 AM Subject: Running mailman within a domain > Greetings, > > this is my first take at nginx. I try to add /cgi-bin/mailman/... to an > existing server instance (mail.sys4.de). At the moment I can call scripts > directly e.g. works. > > What I fail to accieve is getting access to lists e.g. > . The Browser > receives a > 403 and the fast_cgi wrapper reports: > > Cannot chdir to script directory (/usr/lib/cgi-bin/mailman/listinfo)" > while > reading response header from upstream > > > What is it I am doing wrong? > > > This is my nginx configuration to include mailman into the website: > > location /cgi-bin/mailman { > root /usr/lib/; > fastcgi_split_path_info (^/cgi-bin/mailman/[^/]*)(.*)$; > include /etc/nginx/fastcgi_params; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > fastcgi_param PATH_INFO $fastcgi_path_info; > fastcgi_param PATH_TRANSLATED > $document_root$fastcgi_path_info; > fastcgi_intercept_errors off; > fastcgi_pass unix:/var/run/fcgiwrap.socket; > } > location /images/mailman { > alias /usr/share/images/mailman; > } > location /pipermail { > alias /var/lib/mailman/archives/public; > autoindex on; > } > > /etc/nginx/fastcgi_params contains these settings: > > fastcgi_param QUERY_STRING $query_string; > fastcgi_param REQUEST_METHOD $request_method; > fastcgi_param CONTENT_TYPE $content_type; > fastcgi_param CONTENT_LENGTH $content_length; > > fastcgi_param SCRIPT_FILENAME $request_filename; > fastcgi_param SCRIPT_NAME $fastcgi_script_name; > fastcgi_param REQUEST_URI $request_uri; > fastcgi_param DOCUMENT_URI $document_uri; > fastcgi_param DOCUMENT_ROOT $document_root; > fastcgi_param SERVER_PROTOCOL $server_protocol; > > fastcgi_param GATEWAY_INTERFACE CGI/1.1; > fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; > > fastcgi_param REMOTE_ADDR $remote_addr; > fastcgi_param REMOTE_PORT $remote_port; > fastcgi_param SERVER_ADDR $server_addr; > fastcgi_param SERVER_PORT $server_port; > fastcgi_param SERVER_NAME $server_name; > > fastcgi_param HTTPS $https; > > # PHP only, required if PHP was built with --enable-force-cgi-redirect > fastcgi_param REDIRECT_STATUS 200; > > > > My error log gives this output: > > 2012/06/14 00:39:48 [debug] 27873#0: *13 post event 0000000002292AB0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 post event 00000000022A62C0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event > 00000000022A62C0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http empty handler > 2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event > 0000000002292AB0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http keepalive handler > 2012/06/14 00:39:48 [debug] 27873#0: *13 malloc: 00000000022C7AE0:1024 > 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL_read: 1 > 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL_read: 404 > 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL_read: -1 > 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL_get_error: 2 > 2012/06/14 00:39:48 [debug] 27873#0: *13 reusable connection: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 malloc: 000000000224D310:1296 > 2012/06/14 00:39:48 [debug] 27873#0: *13 posix_memalign: > 000000000228B240:4096 @16 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http process request line > 2012/06/14 00:39:48 [debug] 27873#0: *13 http request line: "GET > /cgi-bin/mailman/listinfo/users HTTP/1.1" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http uri: > "/cgi-bin/mailman/listinfo/users" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http args: "" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http 
exten: "" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http process request header line > 2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "Host: mail.sys4.de" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "User-Agent: > Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:13.0) Gecko/20100101 > Firefox/13.0" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "Accept: > text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "Accept-Language: > de-de,de;q=0.8,en-us;q=0.5,en;q=0.3" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "Accept-Encoding: > gzip, deflate" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "DNT: 1" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "Connection: > keep-alive" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "Referer: > https://mail.sys4.de/cgi-bin/mailman/listinfo" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http header done > 2012/06/14 00:39:48 [debug] 27873#0: *13 event timer del: 10: > 1339627243545 > 2012/06/14 00:39:48 [debug] 27873#0: *13 generic phase: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 rewrite phase: 1 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script regex: "^/$" > 2012/06/14 00:39:48 [notice] 27873#0: *13 "^/$" does not match > "/cgi-bin/mailman/listinfo/users", client: 178.27.33.0, server: > mail.sys4.de, request: "GET /cgi-bin/mailman/listinfo/users HTTP/1.1", > host: "mail.sys4.de", referrer: > "https://mail.sys4.de/cgi-bin/mailman/listinfo" > 2012/06/14 00:39:48 [debug] 27873#0: *13 test location: "/images/mailman" > 2012/06/14 00:39:48 [debug] 27873#0: *13 test location: "/cgi-bin/mailman" > 2012/06/14 00:39:48 [debug] 27873#0: *13 using configuration > "/cgi-bin/mailman" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http cl:-1 max:1048576 > 2012/06/14 00:39:48 [debug] 27873#0: *13 rewrite phase: 3 > 2012/06/14 00:39:48 [debug] 27873#0: *13 post rewrite phase: 4 > 2012/06/14 00:39:48 [debug] 27873#0: *13 generic phase: 5 > 2012/06/14 00:39:48 [debug] 27873#0: *13 generic phase: 6 > 2012/06/14 00:39:48 [debug] 27873#0: *13 generic phase: 7 > 2012/06/14 00:39:48 [debug] 27873#0: *13 access phase: 8 > 2012/06/14 00:39:48 [debug] 27873#0: *13 access phase: 9 > 2012/06/14 00:39:48 [debug] 27873#0: *13 access phase: 10 > 2012/06/14 00:39:48 [debug] 27873#0: *13 post access phase: 11 > 2012/06/14 00:39:48 [debug] 27873#0: *13 posix_memalign: > 00000000022D15A0:4096 @16 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http init upstream, client timer: > 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "QUERY_STRING" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "QUERY_STRING: " > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > "REQUEST_METHOD" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "GET" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "REQUEST_METHOD: > GET" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "CONTENT_TYPE" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "CONTENT_TYPE: " > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > "CONTENT_LENGTH" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "CONTENT_LENGTH: " > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > "SCRIPT_FILENAME" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: > "/usr/lib/cgi-bin/mailman/listinfo/users" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SCRIPT_FILENAME: > /usr/lib/cgi-bin/mailman/listinfo/users" > 2012/06/14 
00:39:48 [debug] 27873#0: *13 http script copy: "SCRIPT_NAME" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: > "/cgi-bin/mailman/listinfo" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SCRIPT_NAME: > /cgi-bin/mailman/listinfo" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "REQUEST_URI" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: > "/cgi-bin/mailman/listinfo/users" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "REQUEST_URI: > /cgi-bin/mailman/listinfo/users" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "DOCUMENT_URI" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: > "/cgi-bin/mailman/listinfo/users" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "DOCUMENT_URI: > /cgi-bin/mailman/listinfo/users" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "DOCUMENT_ROOT" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/usr/lib" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "DOCUMENT_ROOT: > /usr/lib" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > "SERVER_PROTOCOL" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "HTTP/1.1" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SERVER_PROTOCOL: > HTTP/1.1" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > "GATEWAY_INTERFACE" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "CGI/1.1" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > "GATEWAY_INTERFACE: CGI/1.1" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > "SERVER_SOFTWARE" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "nginx/" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "1.1.19" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SERVER_SOFTWARE: > nginx/1.1.19" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "REMOTE_ADDR" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "178.27.33.0" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "REMOTE_ADDR: > 178.27.33.0" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "REMOTE_PORT" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "35670" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "REMOTE_PORT: > 35670" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SERVER_ADDR" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "194.126.158.57" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SERVER_ADDR: > 194.126.158.57" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SERVER_PORT" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "443" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SERVER_PORT: 443" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SERVER_NAME" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "mail.sys4.de" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SERVER_NAME: > mail.sys4.de" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "HTTPS" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "on" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTPS: on" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > "REDIRECT_STATUS" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "200" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "REDIRECT_STATUS: > 200" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > "SCRIPT_FILENAME" > 2012/06/14 00:39:48 [debug] 27873#0: 
*13 http script var: "/usr/lib" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: > "/cgi-bin/mailman/listinfo" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SCRIPT_FILENAME: > /usr/lib/cgi-bin/mailman/listinfo" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "PATH_INFO" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/users" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "PATH_INFO: > /users" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > "PATH_TRANSLATED" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/usr/lib" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/users" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "PATH_TRANSLATED: > /usr/lib/users" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTP_HOST: > mail.sys4.de" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTP_USER_AGENT: > Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:13.0) Gecko/20100101 > Firefox/13.0" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTP_ACCEPT: > text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > "HTTP_ACCEPT_LANGUAGE: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > "HTTP_ACCEPT_ENCODING: gzip, deflate" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTP_DNT: 1" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTP_CONNECTION: > keep-alive" > 2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTP_REFERER: > https://mail.sys4.de/cgi-bin/mailman/listinfo" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http cleanup add: > 00000000022D1C08 > 2012/06/14 00:39:48 [debug] 27873#0: *13 get rr peer, try: 1 > 2012/06/14 00:39:48 [debug] 27873#0: *13 socket 12 > 2012/06/14 00:39:48 [debug] 27873#0: *13 epoll add connection: fd:12 > ev:80000005 > 2012/06/14 00:39:48 [debug] 27873#0: *13 connect to > unix:/var/run/fcgiwrap.socket, fd:12 #15 > 2012/06/14 00:39:48 [debug] 27873#0: *13 connected > 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream connect: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 posix_memalign: > 00000000022C7130:128 @16 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream send request > 2012/06/14 00:39:48 [debug] 27873#0: *13 chain writer buf fl:0 s:1008 > 2012/06/14 00:39:48 [debug] 27873#0: *13 chain writer in: 00000000022D1C40 > 2012/06/14 00:39:48 [debug] 27873#0: *13 writev: 1008 > 2012/06/14 00:39:48 [debug] 27873#0: *13 chain writer out: > 0000000000000000 > 2012/06/14 00:39:48 [debug] 27873#0: *13 event timer add: 12: > 60000:1339627248804 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http finalize request: -4, > "/cgi-bin/mailman/listinfo/users?" a:1, c:2 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http request count:2 blk:0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 post event 00000000022A6328 > 2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event > 00000000022A6328 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream request: > "/cgi-bin/mailman/listinfo/users?" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream dummy handler > 2012/06/14 00:39:48 [debug] 27873#0: *13 post event 00000000022A6328 > 2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event > 00000000022A6328 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream request: > "/cgi-bin/mailman/listinfo/users?" 
> 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream dummy handler > 2012/06/14 00:39:48 [debug] 27873#0: *13 post event 0000000002292B18 > 2012/06/14 00:39:48 [debug] 27873#0: *13 post event 00000000022A6328 > 2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event > 00000000022A6328 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream request: > "/cgi-bin/mailman/listinfo/users?" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream dummy handler > 2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event > 0000000002292B18 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream request: > "/cgi-bin/mailman/listinfo/users?" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream process header > 2012/06/14 00:39:48 [debug] 27873#0: *13 malloc: 00000000022525C0:4096 > 2012/06/14 00:39:48 [debug] 27873#0: *13 recv: fd:12 176 of 4096 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 07 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 45 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 03 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record length: 69 > 2012/06/14 00:39:48 [error] 27873#0: *13 FastCGI sent in stderr: "Cannot > chdir to script directory (/usr/lib/cgi-bin/mailman/listinfo)" while > reading response header from upstream, client: 178.27.33.0, server: > mail.sys4.de, request: "GET /cgi-bin/mailman/listinfo/users HTTP/1.1", > upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: > "mail.sys4.de", referrer: "https://mail.sys4.de/cgi-bin/mailman/listinfo" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 07 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record length: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 06 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 33 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 05 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record length: 51 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi parser: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi header: "Status: 403 > Forbidden" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi parser: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi header: > "Content-type: 
text/plain" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi parser: 1 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi header done > 2012/06/14 00:39:48 [debug] 27873#0: *13 xslt filter header > 2012/06/14 00:39:48 [debug] 27873#0: *13 HTTP/1.1 403 Forbidden > Server: nginx/1.1.19 > Date: Wed, 13 Jun 2012 22:39:48 GMT > Content-Type: text/plain > Transfer-Encoding: chunked > Connection: keep-alive > > 2012/06/14 00:39:48 [debug] 27873#0: *13 write new buf t:1 f:0 > 00000000022D1E90, pos 00000000022D1E90, size: 163 file: 0, size: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http write filter: l:0 f:0 s:163 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http cacheable: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream process upstream > 2012/06/14 00:39:48 [debug] 27873#0: *13 pipe read upstream: 1 > 2012/06/14 00:39:48 [debug] 27873#0: *13 pipe preread: 32 > 2012/06/14 00:39:48 [debug] 27873#0: *13 readv: 1:3920 > 2012/06/14 00:39:48 [debug] 27873#0: *13 pipe recv chain: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 pipe buf free s:0 t:1 f:0 > 00000000022525C0, pos 0000000002252650, size: 32 file: 0, size: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 pipe length: -1 > 2012/06/14 00:39:48 [debug] 27873#0: *13 input buf #0 0000000002252650 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 06 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record length: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi closed stdout > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 03 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 08 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record length: 8 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi sent end request > 2012/06/14 00:39:48 [debug] 27873#0: *13 input buf 0000000002252650 3 > 2012/06/14 00:39:48 [debug] 27873#0: *13 pipe write downstream: 1 > 2012/06/14 00:39:48 [debug] 27873#0: *13 pipe write downstream flush in > 2012/06/14 00:39:48 [debug] 27873#0: *13 http output filter > "/cgi-bin/mailman/listinfo/users?" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http copy filter: > "/cgi-bin/mailman/listinfo/users?" > 2012/06/14 00:39:48 [debug] 27873#0: *13 image filter > 2012/06/14 00:39:48 [debug] 27873#0: *13 xslt filter body > 2012/06/14 00:39:48 [debug] 27873#0: *13 http postpone filter > "/cgi-bin/mailman/listinfo/users?" 
00000000022D20A0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http chunk: 3 > 2012/06/14 00:39:48 [debug] 27873#0: *13 write old buf t:1 f:0 > 00000000022D1E90, pos 00000000022D1E90, size: 163 file: 0, size: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 write new buf t:1 f:0 > 00000000022D2188, pos 00000000022D2188, size: 3 file: 0, size: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 write new buf t:1 f:0 > 00000000022525C0, pos 0000000002252650, size: 3 file: 0, size: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 write new buf t:0 f:0 > 0000000000000000, pos 000000000049DD7D, size: 2 file: 0, size: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http write filter: l:0 f:0 s:171 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http copy filter: 0 > "/cgi-bin/mailman/listinfo/users?" > 2012/06/14 00:39:48 [debug] 27873#0: *13 pipe write downstream done > 2012/06/14 00:39:48 [debug] 27873#0: *13 event timer: 12, old: > 1339627248804, new: 1339627248805 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream exit: > 0000000000000000 > 2012/06/14 00:39:48 [debug] 27873#0: *13 finalize http upstream request: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 finalize http fastcgi request > 2012/06/14 00:39:48 [debug] 27873#0: *13 free rr peer 1 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 close http upstream connection: > 12 > 2012/06/14 00:39:48 [debug] 27873#0: *13 free: 00000000022C7130, unused: > 48 > 2012/06/14 00:39:48 [debug] 27873#0: *13 event timer del: 12: > 1339627248804 > 2012/06/14 00:39:48 [debug] 27873#0: *13 reusable connection: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream temp fd: -1 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http output filter > "/cgi-bin/mailman/listinfo/users?" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http copy filter: > "/cgi-bin/mailman/listinfo/users?" > 2012/06/14 00:39:48 [debug] 27873#0: *13 image filter > 2012/06/14 00:39:48 [debug] 27873#0: *13 xslt filter body > 2012/06/14 00:39:48 [debug] 27873#0: *13 http postpone filter > "/cgi-bin/mailman/listinfo/users?" 
00007FFF43D71E70 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http chunk: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 write old buf t:1 f:0 > 00000000022D1E90, pos 00000000022D1E90, size: 163 file: 0, size: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 write old buf t:1 f:0 > 00000000022D2188, pos 00000000022D2188, size: 3 file: 0, size: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 write old buf t:1 f:0 > 00000000022525C0, pos 0000000002252650, size: 3 file: 0, size: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 write old buf t:0 f:0 > 0000000000000000, pos 000000000049DD7D, size: 2 file: 0, size: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 write new buf t:0 f:0 > 0000000000000000, pos 000000000049DD7A, size: 5 file: 0, size: 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http write filter: l:1 f:0 s:176 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http write filter limit 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 malloc: 00000000022B9930:16384 > 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL buf copy: 163 > 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL buf copy: 3 > 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL buf copy: 3 > 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL buf copy: 2 > 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL buf copy: 5 > 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL to write: 176 > 2012/06/14 00:39:48 [debug] 27873#0: *13 SSL_write: 176 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http write filter > 0000000000000000 > 2012/06/14 00:39:48 [debug] 27873#0: *13 http copy filter: 0 > "/cgi-bin/mailman/listinfo/users?" > 2012/06/14 00:39:48 [debug] 27873#0: *13 http finalize request: 0, > "/cgi-bin/mailman/listinfo/users?" a:1, c:1 > 2012/06/14 00:39:48 [debug] 27873#0: *13 set http keepalive handler > 2012/06/14 00:39:48 [debug] 27873#0: *13 http close request > 2012/06/14 00:39:48 [debug] 27873#0: *13 http log handler > 2012/06/14 00:39:48 [debug] 27873#0: *13 free: 00000000022525C0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 free: 000000000228B240, unused: 6 > 2012/06/14 00:39:48 [debug] 27873#0: *13 free: 00000000022D15A0, unused: > 444 > 2012/06/14 00:39:48 [debug] 27873#0: *13 event timer add: 10: > 65000:1339627253805 > 2012/06/14 00:39:48 [debug] 27873#0: *13 free: 000000000224D310 > 2012/06/14 00:39:48 [debug] 27873#0: *13 free: 00000000022C7AE0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 hc free: 0000000000000000 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 hc busy: 0000000000000000 0 > 2012/06/14 00:39:48 [debug] 27873#0: *13 free: 00000000022B9930 > 2012/06/14 00:39:48 [debug] 27873#0: *13 reusable connection: 1 > > Thanks > > p at rick > > > -- > state of mind () > > http://www.state-of-mind.de > > Franziskanerstra?e 15 Telefon +49 89 3090 4664 > 81669 M?nchen Telefax +49 89 3090 4666 > > Amtsgericht M?nchen Partnerschaftsregister PR 563 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From p at state-of-mind.de Thu Jun 14 10:45:34 2012 From: p at state-of-mind.de (Patrick Ben Koetter) Date: Thu, 14 Jun 2012 12:45:34 +0200 Subject: Running mailman within a domain In-Reply-To: References: <20120613224300.GP18826@state-of-mind.de> Message-ID: <20120614104533.GG2347@state-of-mind.de> Hi Falko, * Falko Timme : > if you use Debian or Ubuntu, you can get nginx to work with Mailman > as follows: > > http://www.howtoforge.com/running-mailman-on-nginx-lemp-on-debian-squeeze-ubuntu-11.04-11.10 I had already followed your HOWTO, but it doesn't seem to work for Ubuntu 12.04 LTS - at least 
not for me. Any ideas how I could debug this better? p at rick > >this is my first take at nginx. I try to add /cgi-bin/mailman/... to an > >existing server instance (mail.sys4.de). At the moment I can call scripts > >directly e.g. works. > > > >What I fail to accieve is getting access to lists e.g. > >. The Browser > >receives a > >403 and the fast_cgi wrapper reports: > > > > Cannot chdir to script directory > >(/usr/lib/cgi-bin/mailman/listinfo)" while > > reading response header from upstream > > > > > >What is it I am doing wrong? > > > > > >This is my nginx configuration to include mailman into the website: > > > > location /cgi-bin/mailman { > > root /usr/lib/; > > fastcgi_split_path_info (^/cgi-bin/mailman/[^/]*)(.*)$; > > include /etc/nginx/fastcgi_params; > > fastcgi_param SCRIPT_FILENAME > >$document_root$fastcgi_script_name; > > fastcgi_param PATH_INFO $fastcgi_path_info; > > fastcgi_param PATH_TRANSLATED > >$document_root$fastcgi_path_info; > > fastcgi_intercept_errors off; > > fastcgi_pass unix:/var/run/fcgiwrap.socket; > > } > > location /images/mailman { > > alias /usr/share/images/mailman; > > } > > location /pipermail { > > alias /var/lib/mailman/archives/public; > > autoindex on; > > } > > > >/etc/nginx/fastcgi_params contains these settings: > > > >fastcgi_param QUERY_STRING $query_string; > >fastcgi_param REQUEST_METHOD $request_method; > >fastcgi_param CONTENT_TYPE $content_type; > >fastcgi_param CONTENT_LENGTH $content_length; > > > >fastcgi_param SCRIPT_FILENAME $request_filename; > >fastcgi_param SCRIPT_NAME $fastcgi_script_name; > >fastcgi_param REQUEST_URI $request_uri; > >fastcgi_param DOCUMENT_URI $document_uri; > >fastcgi_param DOCUMENT_ROOT $document_root; > >fastcgi_param SERVER_PROTOCOL $server_protocol; > > > >fastcgi_param GATEWAY_INTERFACE CGI/1.1; > >fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; > > > >fastcgi_param REMOTE_ADDR $remote_addr; > >fastcgi_param REMOTE_PORT $remote_port; > >fastcgi_param SERVER_ADDR $server_addr; > >fastcgi_param SERVER_PORT $server_port; > >fastcgi_param SERVER_NAME $server_name; > > > >fastcgi_param HTTPS $https; > > > ># PHP only, required if PHP was built with --enable-force-cgi-redirect > >fastcgi_param REDIRECT_STATUS 200; > > > > > > > >My error log gives this output: > > > >2012/06/14 00:39:48 [debug] 27873#0: *13 post event 0000000002292AB0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 post event 00000000022A62C0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event > >00000000022A62C0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http empty handler > >2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event > >0000000002292AB0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http keepalive handler > >2012/06/14 00:39:48 [debug] 27873#0: *13 malloc: 00000000022C7AE0:1024 > >2012/06/14 00:39:48 [debug] 27873#0: *13 SSL_read: 1 > >2012/06/14 00:39:48 [debug] 27873#0: *13 SSL_read: 404 > >2012/06/14 00:39:48 [debug] 27873#0: *13 SSL_read: -1 > >2012/06/14 00:39:48 [debug] 27873#0: *13 SSL_get_error: 2 > >2012/06/14 00:39:48 [debug] 27873#0: *13 reusable connection: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 malloc: 000000000224D310:1296 > >2012/06/14 00:39:48 [debug] 27873#0: *13 posix_memalign: > >000000000228B240:4096 @16 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http process request line > >2012/06/14 00:39:48 [debug] 27873#0: *13 http request line: "GET > >/cgi-bin/mailman/listinfo/users HTTP/1.1" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http uri: > >"/cgi-bin/mailman/listinfo/users" > 
>2012/06/14 00:39:48 [debug] 27873#0: *13 http args: "" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http exten: "" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http process request header line > >2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "Host: mail.sys4.de" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "User-Agent: > >Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:13.0) Gecko/20100101 > >Firefox/13.0" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "Accept: > >text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http header: > >"Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http header: > >"Accept-Encoding: gzip, deflate" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "DNT: 1" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "Connection: > >keep-alive" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http header: "Referer: > >https://mail.sys4.de/cgi-bin/mailman/listinfo" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http header done > >2012/06/14 00:39:48 [debug] 27873#0: *13 event timer del: 10: > >1339627243545 > >2012/06/14 00:39:48 [debug] 27873#0: *13 generic phase: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 rewrite phase: 1 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script regex: "^/$" > >2012/06/14 00:39:48 [notice] 27873#0: *13 "^/$" does not match > >"/cgi-bin/mailman/listinfo/users", client: 178.27.33.0, server: > >mail.sys4.de, request: "GET /cgi-bin/mailman/listinfo/users > >HTTP/1.1", host: "mail.sys4.de", referrer: > >"https://mail.sys4.de/cgi-bin/mailman/listinfo" > >2012/06/14 00:39:48 [debug] 27873#0: *13 test location: "/images/mailman" > >2012/06/14 00:39:48 [debug] 27873#0: *13 test location: "/cgi-bin/mailman" > >2012/06/14 00:39:48 [debug] 27873#0: *13 using configuration > >"/cgi-bin/mailman" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http cl:-1 max:1048576 > >2012/06/14 00:39:48 [debug] 27873#0: *13 rewrite phase: 3 > >2012/06/14 00:39:48 [debug] 27873#0: *13 post rewrite phase: 4 > >2012/06/14 00:39:48 [debug] 27873#0: *13 generic phase: 5 > >2012/06/14 00:39:48 [debug] 27873#0: *13 generic phase: 6 > >2012/06/14 00:39:48 [debug] 27873#0: *13 generic phase: 7 > >2012/06/14 00:39:48 [debug] 27873#0: *13 access phase: 8 > >2012/06/14 00:39:48 [debug] 27873#0: *13 access phase: 9 > >2012/06/14 00:39:48 [debug] 27873#0: *13 access phase: 10 > >2012/06/14 00:39:48 [debug] 27873#0: *13 post access phase: 11 > >2012/06/14 00:39:48 [debug] 27873#0: *13 posix_memalign: > >00000000022D15A0:4096 @16 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http init upstream, > >client timer: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "QUERY_STRING" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "QUERY_STRING: " > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > >"REQUEST_METHOD" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "GET" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"REQUEST_METHOD: GET" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "CONTENT_TYPE" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "CONTENT_TYPE: " > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > >"CONTENT_LENGTH" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "CONTENT_LENGTH: " > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > >"SCRIPT_FILENAME" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: > 
>"/usr/lib/cgi-bin/mailman/listinfo/users" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"SCRIPT_FILENAME: /usr/lib/cgi-bin/mailman/listinfo/users" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SCRIPT_NAME" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: > >"/cgi-bin/mailman/listinfo" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"SCRIPT_NAME: /cgi-bin/mailman/listinfo" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "REQUEST_URI" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: > >"/cgi-bin/mailman/listinfo/users" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"REQUEST_URI: /cgi-bin/mailman/listinfo/users" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "DOCUMENT_URI" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: > >"/cgi-bin/mailman/listinfo/users" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"DOCUMENT_URI: /cgi-bin/mailman/listinfo/users" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "DOCUMENT_ROOT" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/usr/lib" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"DOCUMENT_ROOT: /usr/lib" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > >"SERVER_PROTOCOL" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "HTTP/1.1" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"SERVER_PROTOCOL: HTTP/1.1" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > >"GATEWAY_INTERFACE" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "CGI/1.1" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"GATEWAY_INTERFACE: CGI/1.1" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > >"SERVER_SOFTWARE" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "nginx/" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "1.1.19" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"SERVER_SOFTWARE: nginx/1.1.19" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "REMOTE_ADDR" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "178.27.33.0" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"REMOTE_ADDR: 178.27.33.0" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "REMOTE_PORT" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "35670" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"REMOTE_PORT: 35670" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SERVER_ADDR" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "194.126.158.57" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"SERVER_ADDR: 194.126.158.57" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SERVER_PORT" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "443" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "SERVER_PORT: 443" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "SERVER_NAME" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "mail.sys4.de" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"SERVER_NAME: mail.sys4.de" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "HTTPS" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "on" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTPS: on" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > >"REDIRECT_STATUS" > >2012/06/14 00:39:48 [debug] 
27873#0: *13 http script copy: "200" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"REDIRECT_STATUS: 200" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > >"SCRIPT_FILENAME" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/usr/lib" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: > >"/cgi-bin/mailman/listinfo" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"SCRIPT_FILENAME: /usr/lib/cgi-bin/mailman/listinfo" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: "PATH_INFO" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/users" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"PATH_INFO: /users" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script copy: > >"PATH_TRANSLATED" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/usr/lib" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http script var: "/users" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"PATH_TRANSLATED: /usr/lib/users" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"HTTP_HOST: mail.sys4.de" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"HTTP_USER_AGENT: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:13.0) > >Gecko/20100101 Firefox/13.0" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"HTTP_ACCEPT: > >text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"HTTP_ACCEPT_LANGUAGE: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"HTTP_ACCEPT_ENCODING: gzip, deflate" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: "HTTP_DNT: 1" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"HTTP_CONNECTION: keep-alive" > >2012/06/14 00:39:48 [debug] 27873#0: *13 fastcgi param: > >"HTTP_REFERER: https://mail.sys4.de/cgi-bin/mailman/listinfo" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http cleanup add: > >00000000022D1C08 > >2012/06/14 00:39:48 [debug] 27873#0: *13 get rr peer, try: 1 > >2012/06/14 00:39:48 [debug] 27873#0: *13 socket 12 > >2012/06/14 00:39:48 [debug] 27873#0: *13 epoll add connection: > >fd:12 ev:80000005 > >2012/06/14 00:39:48 [debug] 27873#0: *13 connect to > >unix:/var/run/fcgiwrap.socket, fd:12 #15 > >2012/06/14 00:39:48 [debug] 27873#0: *13 connected > >2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream connect: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 posix_memalign: > >00000000022C7130:128 @16 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream send request > >2012/06/14 00:39:48 [debug] 27873#0: *13 chain writer buf fl:0 s:1008 > >2012/06/14 00:39:48 [debug] 27873#0: *13 chain writer in: 00000000022D1C40 > >2012/06/14 00:39:48 [debug] 27873#0: *13 writev: 1008 > >2012/06/14 00:39:48 [debug] 27873#0: *13 chain writer out: > >0000000000000000 > >2012/06/14 00:39:48 [debug] 27873#0: *13 event timer add: 12: > >60000:1339627248804 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http finalize request: > >-4, "/cgi-bin/mailman/listinfo/users?" a:1, c:2 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http request count:2 blk:0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 post event 00000000022A6328 > >2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event > >00000000022A6328 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream request: > >"/cgi-bin/mailman/listinfo/users?" 
> >2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream dummy handler > >2012/06/14 00:39:48 [debug] 27873#0: *13 post event 00000000022A6328 > >2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event > >00000000022A6328 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream request: > >"/cgi-bin/mailman/listinfo/users?" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream dummy handler > >2012/06/14 00:39:48 [debug] 27873#0: *13 post event 0000000002292B18 > >2012/06/14 00:39:48 [debug] 27873#0: *13 post event 00000000022A6328 > >2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event > >00000000022A6328 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream request: > >"/cgi-bin/mailman/listinfo/users?" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream dummy handler > >2012/06/14 00:39:48 [debug] 27873#0: *13 delete posted event > >0000000002292B18 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream request: > >"/cgi-bin/mailman/listinfo/users?" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream process header > >2012/06/14 00:39:48 [debug] 27873#0: *13 malloc: 00000000022525C0:4096 > >2012/06/14 00:39:48 [debug] 27873#0: *13 recv: fd:12 176 of 4096 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 07 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 45 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 03 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record length: 69 > >2012/06/14 00:39:48 [error] 27873#0: *13 FastCGI sent in stderr: > >"Cannot chdir to script directory > >(/usr/lib/cgi-bin/mailman/listinfo)" while reading response header > >from upstream, client: 178.27.33.0, server: mail.sys4.de, request: > >"GET /cgi-bin/mailman/listinfo/users HTTP/1.1", upstream: > >"fastcgi://unix:/var/run/fcgiwrap.socket:", host: "mail.sys4.de", > >referrer: "https://mail.sys4.de/cgi-bin/mailman/listinfo" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 07 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record length: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 06 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 33 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 05 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > 
>2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record length: 51 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi parser: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi header: > >"Status: 403 Forbidden" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi parser: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi header: > >"Content-type: text/plain" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi parser: 1 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi header done > >2012/06/14 00:39:48 [debug] 27873#0: *13 xslt filter header > >2012/06/14 00:39:48 [debug] 27873#0: *13 HTTP/1.1 403 Forbidden > >Server: nginx/1.1.19 > >Date: Wed, 13 Jun 2012 22:39:48 GMT > >Content-Type: text/plain > >Transfer-Encoding: chunked > >Connection: keep-alive > > > >2012/06/14 00:39:48 [debug] 27873#0: *13 write new buf t:1 f:0 > >00000000022D1E90, pos 00000000022D1E90, size: 163 file: 0, size: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http write filter: l:0 f:0 s:163 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http cacheable: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream process upstream > >2012/06/14 00:39:48 [debug] 27873#0: *13 pipe read upstream: 1 > >2012/06/14 00:39:48 [debug] 27873#0: *13 pipe preread: 32 > >2012/06/14 00:39:48 [debug] 27873#0: *13 readv: 1:3920 > >2012/06/14 00:39:48 [debug] 27873#0: *13 pipe recv chain: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 pipe buf free s:0 t:1 f:0 > >00000000022525C0, pos 0000000002252650, size: 32 file: 0, size: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 pipe length: -1 > >2012/06/14 00:39:48 [debug] 27873#0: *13 input buf #0 0000000002252650 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 06 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record length: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi closed stdout > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 03 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 01 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 08 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record byte: 00 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi record length: 8 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http fastcgi sent end request > >2012/06/14 00:39:48 [debug] 27873#0: *13 input buf 0000000002252650 3 > >2012/06/14 00:39:48 [debug] 27873#0: *13 pipe write downstream: 1 > >2012/06/14 00:39:48 [debug] 27873#0: *13 pipe write downstream flush in > >2012/06/14 00:39:48 [debug] 27873#0: *13 http output filter > >"/cgi-bin/mailman/listinfo/users?" 
> >2012/06/14 00:39:48 [debug] 27873#0: *13 http copy filter: > >"/cgi-bin/mailman/listinfo/users?" > >2012/06/14 00:39:48 [debug] 27873#0: *13 image filter > >2012/06/14 00:39:48 [debug] 27873#0: *13 xslt filter body > >2012/06/14 00:39:48 [debug] 27873#0: *13 http postpone filter > >"/cgi-bin/mailman/listinfo/users?" 00000000022D20A0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http chunk: 3 > >2012/06/14 00:39:48 [debug] 27873#0: *13 write old buf t:1 f:0 > >00000000022D1E90, pos 00000000022D1E90, size: 163 file: 0, size: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 write new buf t:1 f:0 > >00000000022D2188, pos 00000000022D2188, size: 3 file: 0, size: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 write new buf t:1 f:0 > >00000000022525C0, pos 0000000002252650, size: 3 file: 0, size: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 write new buf t:0 f:0 > >0000000000000000, pos 000000000049DD7D, size: 2 file: 0, size: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http write filter: l:0 f:0 s:171 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http copy filter: 0 > >"/cgi-bin/mailman/listinfo/users?" > >2012/06/14 00:39:48 [debug] 27873#0: *13 pipe write downstream done > >2012/06/14 00:39:48 [debug] 27873#0: *13 event timer: 12, old: > >1339627248804, new: 1339627248805 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream exit: > >0000000000000000 > >2012/06/14 00:39:48 [debug] 27873#0: *13 finalize http upstream request: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 finalize http fastcgi request > >2012/06/14 00:39:48 [debug] 27873#0: *13 free rr peer 1 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 close http upstream > >connection: 12 > >2012/06/14 00:39:48 [debug] 27873#0: *13 free: 00000000022C7130, > >unused: 48 > >2012/06/14 00:39:48 [debug] 27873#0: *13 event timer del: 12: > >1339627248804 > >2012/06/14 00:39:48 [debug] 27873#0: *13 reusable connection: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http upstream temp fd: -1 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http output filter > >"/cgi-bin/mailman/listinfo/users?" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http copy filter: > >"/cgi-bin/mailman/listinfo/users?" > >2012/06/14 00:39:48 [debug] 27873#0: *13 image filter > >2012/06/14 00:39:48 [debug] 27873#0: *13 xslt filter body > >2012/06/14 00:39:48 [debug] 27873#0: *13 http postpone filter > >"/cgi-bin/mailman/listinfo/users?" 
00007FFF43D71E70 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http chunk: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 write old buf t:1 f:0 > >00000000022D1E90, pos 00000000022D1E90, size: 163 file: 0, size: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 write old buf t:1 f:0 > >00000000022D2188, pos 00000000022D2188, size: 3 file: 0, size: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 write old buf t:1 f:0 > >00000000022525C0, pos 0000000002252650, size: 3 file: 0, size: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 write old buf t:0 f:0 > >0000000000000000, pos 000000000049DD7D, size: 2 file: 0, size: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 write new buf t:0 f:0 > >0000000000000000, pos 000000000049DD7A, size: 5 file: 0, size: 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http write filter: l:1 f:0 s:176 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http write filter limit 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 malloc: 00000000022B9930:16384 > >2012/06/14 00:39:48 [debug] 27873#0: *13 SSL buf copy: 163 > >2012/06/14 00:39:48 [debug] 27873#0: *13 SSL buf copy: 3 > >2012/06/14 00:39:48 [debug] 27873#0: *13 SSL buf copy: 3 > >2012/06/14 00:39:48 [debug] 27873#0: *13 SSL buf copy: 2 > >2012/06/14 00:39:48 [debug] 27873#0: *13 SSL buf copy: 5 > >2012/06/14 00:39:48 [debug] 27873#0: *13 SSL to write: 176 > >2012/06/14 00:39:48 [debug] 27873#0: *13 SSL_write: 176 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http write filter > >0000000000000000 > >2012/06/14 00:39:48 [debug] 27873#0: *13 http copy filter: 0 > >"/cgi-bin/mailman/listinfo/users?" > >2012/06/14 00:39:48 [debug] 27873#0: *13 http finalize request: 0, > >"/cgi-bin/mailman/listinfo/users?" a:1, c:1 > >2012/06/14 00:39:48 [debug] 27873#0: *13 set http keepalive handler > >2012/06/14 00:39:48 [debug] 27873#0: *13 http close request > >2012/06/14 00:39:48 [debug] 27873#0: *13 http log handler > >2012/06/14 00:39:48 [debug] 27873#0: *13 free: 00000000022525C0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 free: 000000000228B240, unused: 6 > >2012/06/14 00:39:48 [debug] 27873#0: *13 free: 00000000022D15A0, > >unused: 444 > >2012/06/14 00:39:48 [debug] 27873#0: *13 event timer add: 10: > >65000:1339627253805 > >2012/06/14 00:39:48 [debug] 27873#0: *13 free: 000000000224D310 > >2012/06/14 00:39:48 [debug] 27873#0: *13 free: 00000000022C7AE0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 hc free: 0000000000000000 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 hc busy: 0000000000000000 0 > >2012/06/14 00:39:48 [debug] 27873#0: *13 free: 00000000022B9930 > >2012/06/14 00:39:48 [debug] 27873#0: *13 reusable connection: 1 > > > >Thanks > > > >p at rick > > > > > >-- > >state of mind () > > > >http://www.state-of-mind.de > > > >Franziskanerstra?e 15 Telefon +49 89 3090 4664 > >81669 M?nchen Telefax +49 89 3090 4666 > > > >Amtsgericht M?nchen Partnerschaftsregister PR 563 > > > >_______________________________________________ > >nginx mailing list > >nginx at nginx.org > >http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- state of mind () http://www.state-of-mind.de Franziskanerstra?e 15 Telefon +49 89 3090 4664 81669 M?nchen Telefax +49 89 3090 4666 Amtsgericht M?nchen Partnerschaftsregister PR 563 From p at state-of-mind.de Thu Jun 14 11:18:05 2012 From: p at state-of-mind.de (Patrick Ben Koetter) Date: Thu, 14 Jun 2012 13:18:05 +0200 Subject: Running mailman within a domain In-Reply-To: References: 
<20120613224300.GP18826@state-of-mind.de>
Message-ID: <20120614111805.GI2347@state-of-mind.de>

Hi Aleksandar,

* Aleksandar Lazic :
> Hi Patrick, we seem to meet on the same lists, do we? :)
> On 14-06-2012 00:43, Patrick Ben Koetter wrote:
> >this is my first take at nginx. I try to add /cgi-bin/mailman/... to an
> >existing server instance (mail.sys4.de). At the moment I can call scripts
> >directly e.g. works.
> >
> >What I fail to achieve is getting access to lists e.g.
> >. The browser receives
> >a 403 and the FastCGI wrapper reports:
> >
> > Cannot chdir to script directory
> >(/usr/lib/cgi-bin/mailman/listinfo)" while reading response header from
> >upstream
> >
> >
> >What is it I am doing wrong?
>
> I don't know too much about mailman, so I try to guess.
>
> The
>
> /usr/lib/cgi-bin/mailman
>
> is the script which should be executed, right?

Nope. It would be this: /usr/lib/cgi-bin/mailman/
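For what it's worth, the debug log above shows SCRIPT_FILENAME being sent to fcgiwrap twice, once as /usr/lib/cgi-bin/mailman/listinfo/users and once as /usr/lib/cgi-bin/mailman/listinfo, and the "Cannot chdir to script directory" error names the directory part of the first value, which is presumably the listinfo wrapper binary rather than a directory. A minimal sketch of a location block that keeps SCRIPT_FILENAME limited to the wrapper and moves the list name into PATH_INFO might look like the following. The location, the /usr/lib document root and the fcgiwrap socket are taken from the log; the fastcgi_split_path_info regex and the exact param layout are assumptions about this particular Mailman layout, so treat it as a starting point rather than a verified fix:

    # Sketch only, untested against the setup in this thread.
    # Assumes the Mailman CGI wrappers (listinfo, admin, ...) live directly
    # under /usr/lib/cgi-bin/mailman and fcgiwrap listens on the socket below.
    location /cgi-bin/mailman {
        root /usr/lib;   # so $document_root$fastcgi_script_name points at the wrapper

        # first capture -> $fastcgi_script_name, second -> $fastcgi_path_info,
        # e.g. /cgi-bin/mailman/listinfo and /users for the request in the log
        fastcgi_split_path_info ^(/cgi-bin/mailman/[^/]+)(/.*)$;

        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO       $fastcgi_path_info;
        fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;

        fastcgi_pass unix:/var/run/fcgiwrap.socket;
    }

The point of the split is that fcgiwrap should then be able to chdir into /usr/lib/cgi-bin/mailman and execute the listinfo script, while Mailman still receives the list name through PATH_INFO. Whatever the final config looks like, make sure SCRIPT_FILENAME is defined only once; if fastcgi.conf or another include already sets it, drop the duplicate, since two different values for that param are exactly what the log shows.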