From nginx-forum at nginx.us Fri Mar 1 04:06:05 2013 From: nginx-forum at nginx.us (michael.heuberger) Date: Thu, 28 Feb 2013 23:06:05 -0500 Subject: Optimal nginx settings for websockets sending images In-Reply-To: <20130226123146.GN81985@mdounin.ru> References: <20130226123146.GN81985@mdounin.ru> Message-ID: Thanks man :) > > proxy_buffers 8 2m; > > proxy_buffer_size 10m; > > proxy_busy_buffers_size 10m; > > Buffers used looks huge, make sure you have enough memory. Mmmhhh, do you think I should remove these and trust nginx's default values for these buffer? > > proxy_cache one; > > proxy_cache_key "$request_uri|$request_body"; > > Usuing request body as a cache key isn't really a good idea unless > all request bodies are known to be small. Ok, I changed that to: proxy_cache_key "$scheme$host$request_uri"; I also made few additions under location/: proxy_cache_valid 200 302 304 10m; proxy_cache_valid 301 1h; proxy_cache_valid any 1m; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_404; Do you think these are good and justified? Unfortunately I'm seeing these warnings now: "an upstream response is buffered to a temporary file" Any hints why? Help is much appreciated Cheers Michael Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236601,236752#msg-236752 From nginx-forum at nginx.us Fri Mar 1 04:08:30 2013 From: nginx-forum at nginx.us (michael.heuberger) Date: Thu, 28 Feb 2013 23:08:30 -0500 Subject: Optimal nginx settings for websockets sending images In-Reply-To: References: <20130226123146.GN81985@mdounin.ru> Message-ID: <1548f9e56cb55f81d3dbc67c551d5402.NginxMailingListEnglish@forum.nginx.org> PS: happy to send the whole code by email if that's better? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236601,236753#msg-236753 From nick_ik at mail.ru Fri Mar 1 07:06:54 2013 From: nick_ik at mail.ru (=?UTF-8?B?TmljaG9sYXMgS29zdGlyeWE=?=) Date: Fri, 01 Mar 2013 11:06:54 +0400 Subject: proxy_cache and internal redirect In-Reply-To: <20130228163541.GN81985@mdounin.ru> References: <1362052219.499794677@f54.mail.ru> <20130228163541.GN81985@mdounin.ru> Message-ID: <1362121614.306780107@f113.mail.ru> But, I want use both redirect and save response with 'X-Accel-Redirect' in cache to prevent multiple requests to backend. NIck ???????, 28 ??????? 2013, 20:35 +04:00 ?? Maxim Dounin : >Hello! > >On Thu, Feb 28, 2013 at 03:50:19PM +0400, Nick wrote: > >> Hello. >> Can you please tell how to enable caching of responses with 'X-Accel-Redirect' headers. >> Nick. > >You may do so by using > >????proxy_ignore_headers X-Accel-Redirect; > >This will prevent nginx from doing a redirect based on >X-Accel-Redirect though. > >See http://nginx.org/r/proxy_ignore_headers for details. > >-- >Maxim Dounin >http://nginx.org/en/donation.html > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Mar 1 08:11:15 2013 From: nginx-forum at nginx.us (deepak-kumar) Date: Fri, 01 Mar 2013 03:11:15 -0500 Subject: How to set up SPDY Protocol over Nginx? Message-ID: <449d93a16661b312f23967459cc2a81b.NginxMailingListEnglish@forum.nginx.org> Hi Guys I am having problem setting up the SPDY protocol for my rails app over nginx. Here is the complete detail of the setup on stacjoverflow. 
http://stackoverflow.com/questions/15152775/how-to-set-up-spdy-protocol-over-nginx Posting it here so that it can have a bigger audience. I can copy paste the entire question here if it in inconvenient to follow stackoverflow link. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236756,236756#msg-236756 From nginx-forum at nginx.us Fri Mar 1 08:11:19 2013 From: nginx-forum at nginx.us (nhatlt) Date: Fri, 01 Mar 2013 03:11:19 -0500 Subject: Reverse proxy and context path Message-ID: I have a Jetty application works at: https://my-server-ip (root context) Now I want to use https://my-domain.com/some/path for my app. Is it possible with Nginx? I'm a newbie. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236757,236757#msg-236757 From nginx+phil at spodhuis.org Fri Mar 1 08:22:51 2013 From: nginx+phil at spodhuis.org (Phil Pennock) Date: Fri, 1 Mar 2013 03:22:51 -0500 Subject: nginx/KQUEUE breaks proxy_ignore_client_abort Message-ID: <20130301082251.GA97216.take2@redoubt.spodhuis.org> Folks, If nginx is built on FreeBSD, "proxy_ignore_client_abort on;" has no/little effect, because TCP half-closes cause a connection drop even if not speaking to a proxy backend. Situation: PGP clients talk to PGP keyservers using the HKP protocol, which is a very light layer over HTTP. In GnuPG, if the cURL library was not available at build time, a mock-curl "curl-shim" implementation is used instead. In GnuPG 2.0.x, this code uses a TCP half-close to indicate when the sender has finished sending. This was a mistake and has been fixed for the next release, but people running PGP keyservers need to deal with the existing installed userbase. For various (good) reasons, the common PGP keyserver software is run with a reverse proxy in front of it, and nginx is a popular choice. nginx will default to drop connections on the shutdown, for reasons previously explained on this list. Enabling proxy_ignore_client_abort is, as far as I understand matters, supposed to allow these shutdowns to not be considered an abort. Temporarily turning on an error log for the :11371 server block (that's the HKP default port) gives: 2013/02/28 09:11:54 [info] 34110#0: *51 kevent() reported that client closed connection while waiting for request, client: 2a02:898:0:20::57:1, server: [2a02:898:31:0:48:4558:73:6b73]:11371 "proxy_ignore_client_abort on" avoids enabling logic in src/http/ngx_http_upstream.c which would log: kevent() reported that client prematurely closed connection, so upstream connection is closed too That's _not_ the error message here; instead, what we get comes from src/http/ngx_http_request.c in ngx_http_keepalive_handler(). As far as I can tell, as long as NGX_HAVE_KQUEUE is defined, it is impossible to avoid this handling. nginx folks: is this something you're likely to fix, or is this far enough outside of supportable behaviour that you consider the current situation a feature, not a bug? I'm not sufficiently familiar with the nginx code base to find a fix for this and don't currently have time to get familiar, sorry. :( Thanks, -Phil PS: nginx mail-server configuration is broken; it's checking SMTP Envelope Sender against the subscription list, not the RFC5322.From: header, so breaks on things such as PRVS. Posting via manual injection to your mail-server. :( From lists at ruby-forum.com Fri Mar 1 09:28:59 2013 From: lists at ruby-forum.com (Ali m.) 
Date: Fri, 01 Mar 2013 10:28:59 +0100 Subject: Left 4 Dead 2 free download full version pc game Message-ID: <26046c624e813785d765d7fca1a1991d@ruby-forum.com> Left 4 Dead 2 PC full game multiplayer SP 2.1.1.2 free downloadSystem requirements size: 7.16 GB Games : Windows : Full Game : English here http://www.freedownloadfullversioncrack.com/left-4-dead-2-free-download-full-version-pc-game/ -- Posted via http://www.ruby-forum.com/. From vbart at nginx.com Fri Mar 1 09:33:34 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 1 Mar 2013 13:33:34 +0400 Subject: How to set up SPDY Protocol over Nginx? In-Reply-To: <449d93a16661b312f23967459cc2a81b.NginxMailingListEnglish@forum.nginx.org> References: <449d93a16661b312f23967459cc2a81b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201303011333.34282.vbart@nginx.com> On Friday 01 March 2013 12:11:15 deepak-kumar wrote: > Hi Guys > > I am having problem setting up the SPDY protocol for my rails app over > nginx. Here is the complete detail of the setup on stacjoverflow. > > http://stackoverflow.com/questions/15152775/how-to-set-up-spdy-protocol-ove > r-nginx > > Posting it here so that it can have a bigger audience. > > I can copy paste the entire question here if it in inconvenient to follow > stackoverflow link. > Most likely you are trying to run a different version of nginx, not one that you have built with spdy. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From nginx-forum at nginx.us Fri Mar 1 09:39:04 2013 From: nginx-forum at nginx.us (deepak-kumar) Date: Fri, 01 Mar 2013 04:39:04 -0500 Subject: How to set up SPDY Protocol over Nginx? In-Reply-To: <201303011333.34282.vbart@nginx.com> References: <201303011333.34282.vbart@nginx.com> Message-ID: Well thats possible. But Nginx -V gives me correct version and shows that SPDY module is enabled and configured ... When I do which nginx on terminal it returns /usr/local/sbin/nginx Can you tell me how can I check which binary is being used for the app? $ nginx -V nginx version: nginx/1.3.13 built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) TLS SNI support enabled configure arguments: --sbin-path=/usr/local/sbin/nginx --prefix=/etc/nginx --conf- path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-debug --with-http_addition_module --with-http_dav_module --with-http_gzip_static_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-http_spdy_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/software/ngx_cache_purge-1.6 --with-openssl=/software/openssl-1.0.1e Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236756,236763#msg-236763 From vbart at nginx.com Fri Mar 1 11:46:30 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 1 Mar 2013 15:46:30 +0400 Subject: How to set up SPDY Protocol over Nginx? In-Reply-To: References: <201303011333.34282.vbart@nginx.com> Message-ID: <201303011546.30921.vbart@nginx.com> On Friday 01 March 2013 13:39:04 deepak-kumar wrote: > Well thats possible. 
But Nginx -V gives me correct version and shows that > SPDY module is enabled and configured ... > > When I do which nginx on terminal it returns /usr/local/sbin/nginx > > Can you tell me how can I check which binary is being used for the app? > How do you start it? wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html > $ nginx -V > > nginx version: nginx/1.3.13 > built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) > TLS SNI support enabled > configure arguments: --sbin-path=/usr/local/sbin/nginx --prefix=/etc/nginx > --conf- path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-client-body-temp-path=/var/lib/nginx/body > --http-fastcgi-temp-path=/var/lib/nginx/fastcgi > --http-log-path=/var/log/nginx/access.log > --http-proxy-temp-path=/var/lib/nginx/proxy > --http-scgi-temp-path=/var/lib/nginx/scgi > --http-uwsgi-temp-path=/var/lib/nginx/uwsgi > --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid > --with-debug --with-http_addition_module --with-http_dav_module > --with-http_gzip_static_module > --with-http_realip_module --with-http_stub_status_module > --with-http_ssl_module --with-http_sub_module --with-http_xslt_module > --with-http_spdy_module --with-ipv6 --with-sha1=/usr/include/openssl > --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module > --add-module=/software/ngx_cache_purge-1.6 > --with-openssl=/software/openssl-1.0.1e > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,236756,236763#msg-236763 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Fri Mar 1 11:56:40 2013 From: nginx-forum at nginx.us (deepak-kumar) Date: Fri, 01 Mar 2013 06:56:40 -0500 Subject: How to set up SPDY Protocol over Nginx? In-Reply-To: <201303011546.30921.vbart@nginx.com> References: <201303011546.30921.vbart@nginx.com> Message-ID: Yeah. I get your point. I ll post my findings here. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236756,236772#msg-236772 From nginx-forum at nginx.us Fri Mar 1 12:51:36 2013 From: nginx-forum at nginx.us (deepak-kumar) Date: Fri, 01 Mar 2013 07:51:36 -0500 Subject: How to set up SPDY Protocol over Nginx? In-Reply-To: <201303011333.34282.vbart@nginx.com> References: <201303011333.34282.vbart@nginx.com> Message-ID: Thanks for the hint. I had to change my init.d script. Modified to DAEMON=/usr/local/sbin/nginx earlier it was #DAEMON=/usr/sbin/nginx CLOSED. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236756,236777#msg-236777 From mdounin at mdounin.ru Fri Mar 1 12:59:01 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 1 Mar 2013 16:59:01 +0400 Subject: Optimal nginx settings for websockets sending images In-Reply-To: References: <20130226123146.GN81985@mdounin.ru> Message-ID: <20130301125901.GC94127@mdounin.ru> Hello! On Thu, Feb 28, 2013 at 11:06:05PM -0500, michael.heuberger wrote: > Thanks man :) > > > > proxy_buffers 8 2m; > > > proxy_buffer_size 10m; > > > proxy_busy_buffers_size 10m; > > > > Buffers used looks huge, make sure you have enough memory. > > Mmmhhh, do you think I should remove these and trust nginx's default values > for these buffer? You should make sure you have enough memory for the buffers configured. If your system will start swapping - there will obvious performance degradation compared to smaller buffers. 
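For a sense of scale (the figures below are purely illustrative, not a 
recommendation): with the values quoted above each proxied connection may 
hold up to 8 * 2m + 10m = 26m of buffer memory, so a thousand simultaneous 
proxied connections could in the worst case approach 26G. Something in this 
range keeps the per-connection footprint small:

    proxy_buffers 8 256k;
    proxy_buffer_size 256k;
    proxy_busy_buffers_size 512k;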
Default buffers indeed might be better unless you have good reasons for the buffers sizes set, or you might start with default sizes and tune them till you are happy with the result. Exact optimal sizes depend on a particular use case. > > > proxy_cache one; > > > proxy_cache_key "$request_uri|$request_body"; > > > > Usuing request body as a cache key isn't really a good idea unless > > all request bodies are known to be small. > > Ok, I changed that to: > proxy_cache_key "$scheme$host$request_uri"; > > I also made few additions under location/: > > proxy_cache_valid 200 302 304 10m; > proxy_cache_valid 301 1h; > proxy_cache_valid any 1m; > > proxy_next_upstream error timeout invalid_header http_500 http_502 > http_503 http_504 http_404; > > Do you think these are good and justified? This depends on what and how long you want to cache and how you would like to handle upstream errors. > Unfortunately I'm seeing these warnings now: > "an upstream response is buffered to a temporary file" > > Any hints why? Help is much appreciated The message indicate that an (uncacheable) response was buffered to a temporary file. It doesn't indicate the problem per se, but might be useful to track sources of I/O problems. It also might appear as a side effect from other problems - e.g. if you have network issues and clients just can't download files requested fast enough. If you see such messages it might be just ok if they are rare enough. Or might indicate that you should configure bigger buffers (if you have enough memory), or consider disabling disk buffering. Try reading here for more information: http://nginx.org/r/proxy_buffering -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Fri Mar 1 13:12:16 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 1 Mar 2013 17:12:16 +0400 Subject: nginx/KQUEUE breaks proxy_ignore_client_abort In-Reply-To: <20130301082251.GA97216.take2@redoubt.spodhuis.org> References: <20130301082251.GA97216.take2@redoubt.spodhuis.org> Message-ID: <20130301131216.GE94127@mdounin.ru> Hello! On Fri, Mar 01, 2013 at 03:22:51AM -0500, Phil Pennock wrote: > Folks, > > If nginx is built on FreeBSD, "proxy_ignore_client_abort on;" has > no/little effect, because TCP half-closes cause a connection drop > even if not speaking to a proxy backend. > > Situation: PGP clients talk to PGP keyservers using the HKP protocol, > which is a very light layer over HTTP. In GnuPG, if the cURL library > was not available at build time, a mock-curl "curl-shim" implementation > is used instead. In GnuPG 2.0.x, this code uses a TCP half-close to > indicate when the sender has finished sending. This was a mistake and > has been fixed for the next release, but people running PGP keyservers > need to deal with the existing installed userbase. > > For various (good) reasons, the common PGP keyserver software is run > with a reverse proxy in front of it, and nginx is a popular choice. > nginx will default to drop connections on the shutdown, for reasons > previously explained on this list. Enabling proxy_ignore_client_abort > is, as far as I understand matters, supposed to allow these shutdowns to > not be considered an abort. 
> > Temporarily turning on an error log for the :11371 server block (that's > the HKP default port) gives: > > 2013/02/28 09:11:54 [info] 34110#0: *51 > kevent() reported that client closed connection while waiting for request, > client: 2a02:898:0:20::57:1, server: [2a02:898:31:0:48:4558:73:6b73]:11371 > > "proxy_ignore_client_abort on" avoids enabling logic in > src/http/ngx_http_upstream.c which would log: > > kevent() reported that client prematurely closed > connection, so upstream connection is closed too > > That's _not_ the error message here; instead, what we get comes from > src/http/ngx_http_request.c in ngx_http_keepalive_handler(). > > As far as I can tell, as long as NGX_HAVE_KQUEUE is defined, it is > impossible to avoid this handling. > > nginx folks: is this something you're likely to fix, or is this far > enough outside of supportable behaviour that you consider the current > situation a feature, not a bug? > > I'm not sufficiently familiar with the nginx code base to find a fix for > this and don't currently have time to get familiar, sorry. :( It looks like you are running nginx with experimental SPDY patch, and it broke things here. Try recompiling nginx without SPDY patch to see if it helps. > PS: nginx mail-server configuration is broken; it's checking SMTP Envelope > Sender against the subscription list, not the RFC5322.From: header, so > breaks on things such as PRVS. Posting via manual injection to your > mail-server. :( Unfortunately, there is no way to properly reject messages at SMTP level (i.e. to avoid sending bounces) and doing checks based on message headers at the same time. If you use different envelope from and message from addresses and have problems with posting - just subscribe your envelope from address to the mailing list with mail delivery disabled. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Fri Mar 1 13:20:40 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 1 Mar 2013 17:20:40 +0400 Subject: proxy_cache and internal redirect In-Reply-To: <1362121614.306780107@f113.mail.ru> References: <1362052219.499794677@f54.mail.ru> <20130228163541.GN81985@mdounin.ru> <1362121614.306780107@f113.mail.ru> Message-ID: <20130301132040.GG94127@mdounin.ru> Hello! On Fri, Mar 01, 2013 at 11:06:54AM +0400, Nicholas Kostirya wrote: > But, I want use both redirect and save response with > 'X-Accel-Redirect' in cache to prevent multiple requests to > backend. You have to select just one, either X-Accel-Redirect handling or cache. As a workaround you may try doing another proxy layer with cache and X-Accel-Redirect ignored. > > NIck > > ???????, 28 ??????? 2013, 20:35 +04:00 ?? Maxim Dounin : > >Hello! > > > >On Thu, Feb 28, 2013 at 03:50:19PM +0400, Nick wrote: > > > >> Hello. > >> Can you please tell how to enable caching of responses with 'X-Accel-Redirect' headers. > >> Nick. > > > >You may do so by using > > > >????proxy_ignore_headers X-Accel-Redirect; > > > >This will prevent nginx from doing a redirect based on > >X-Accel-Redirect though. > > > >See http://nginx.org/r/proxy_ignore_headers for details. 
> > > >-- > >Maxim Dounin > >http://nginx.org/en/donation.html > > > >_______________________________________________ > >nginx mailing list > >nginx at nginx.org > >http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/en/donation.html From pasik at iki.fi Fri Mar 1 13:22:08 2013 From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=) Date: Fri, 1 Mar 2013 15:22:08 +0200 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: References: <0b978a9f636d364e49e79bf5e91418bb.NginxMailingListEnglish@forum.nginx.org> <88C197B9-A5A4-4201-999B-4995243FE204@co.sapo.pt> <201302282251.03866.vbart@nginx.com> Message-ID: <20130301132208.GC8912@reaktio.net> On Thu, Feb 28, 2013 at 07:02:46PM +0000, Andr? Cruz wrote: > On Feb 28, 2013, at 6:51 PM, Valentin V. Bartenev wrote: > > > On Thursday 28 February 2013 21:36:23 Andr? Cruz wrote: > >> I'm also very interested in being able to configure nginx to NOT proxy the > >> entire request. > >> > > [...] > > > > Actually, you can. > > > > http://nginx.org/r/proxy_set_body > > http://nginx.org/r/proxy_pass_request_body > > I've probably explained myself wrong. What I want is for nginx to buffer only chunks of the request body and pass these chunks to the upstream server as they arrive. > Yes, the problem is nginx saves the request body to the local disk on the nginx server, filling the disk and making the upload slower. People want nginx to *not* save the http PUT/POST request to disk when using nginx as a reverse proxy. -- Pasi From WBrown at e1b.org Fri Mar 1 13:55:00 2013 From: WBrown at e1b.org (WBrown at e1b.org) Date: Fri, 1 Mar 2013 08:55:00 -0500 Subject: 502 bad gateway error with php5-fpm on Debian 7 In-Reply-To: References: <376c0daec1cc52c1eeee17d88e83c8d8.NginxMailingListEnglish@forum.nginx.org> Message-ID: > > Dump question, but why did you put the vhost-files into "conf.d"? > > Normally > > they are stored in "sites-available" and symlinked in "sites-enabled". > > nginx > > (as apache) uses this directory to read all information about the vhosts. > > Are there any templates in "sites-enabled"? How do they look like? > > To be honest I don' know. When I've setup this configuration (more than 1 > year ago I think) I've probably take 2 or 3 days on #nginx IRC channel and > when it was working I've never modified the configuration. If your install is anything like, mine there was no sites-available and sites-enabled directories. I used the directions on the download page to install the stable version. ( http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm followed by "yum install nginx") /etc/nginx.conf had an include for conf.d and in that directory, they had a sample >From the context, I take it the sites-available directory had the various site definition files, and for those that you want to use, you create the link in sites-enabled, much like rc3.d links to init.d? Confidentiality Notice: This electronic message and any attachments may contain confidential or privileged information, and is intended only for the individual or entity identified above as the addressee. 
If you are not the addressee (or the employee or agent responsible to deliver it to the addressee), or if this message has been addressed to you in error, you are hereby notified that you may not copy, forward, disclose or use any part of this message or any attachments. Please notify the sender immediately by return e-mail or telephone and delete this message from your system. From lists at ruby-forum.com Fri Mar 1 14:54:52 2013 From: lists at ruby-forum.com (iqbal h.) Date: Fri, 01 Mar 2013 15:54:52 +0100 Subject: Call of duty black Ops 2 Free Download Pc Game Message-ID: <8c17803c78409bff608afe49ff8b84d8@ruby-forum.com> Call of duty black ops free download proudly presentsCall of Duty: Black Ops II (c) Activision12-11-2012......Release Date <-> Protection.............Steam+CEGFPS................Game Type < -> Disk(s).................1 BLURAY Size: 4.08 GB download link RELEASE NOTES Forcing the limitations of what lovers have come to anticipate from the record-setting enjoyment series, Contact of Duty: Dark Ops II propels gamers into a near upcoming, Twenty-first Millennium Chilly War, where technology and weaponry have incorporated to make a new creation of warfare. Deluxe Version features: Nuketown 2025 Reward... more info: and free download here http://media-download-free.blogspot.com/2013/01/call-of-duty-black-ops-2-pc-game-free.html -- Posted via http://www.ruby-forum.com/. From reallfqq-nginx at yahoo.fr Fri Mar 1 14:59:30 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 1 Mar 2013 09:59:30 -0500 Subject: Call of duty black Ops 2 Free Download Pc Game In-Reply-To: <8c17803c78409bff608afe49ff8b84d8@ruby-forum.com> References: <8c17803c78409bff608afe49ff8b84d8@ruby-forum.com> Message-ID: Someone needs removal for spam it seems... --- *B. R.* On Fri, Mar 1, 2013 at 9:54 AM, iqbal h. wrote: > ?SPAM > -- > Posted via http://www.ruby-forum.com/. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Mar 1 15:17:31 2013 From: nginx-forum at nginx.us (double) Date: Fri, 01 Mar 2013 10:17:31 -0500 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: <20130301132208.GC8912@reaktio.net> References: <20130301132208.GC8912@reaktio.net> Message-ID: > Pasi K?rkk?inen > People want nginx to *not* save the http PUT/POST request to disk > when using nginx as a reverse proxy. Have the same problem. I use "haproxy" in front of "nginx" - but this more a bad hack than a workaround. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234926,236793#msg-236793 From nginx at 2xlp.com Fri Mar 1 20:20:10 2013 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Fri, 1 Mar 2013 15:20:10 -0500 Subject: WildCard domains : how to treat IP Address and Specific Domains differently from Failover/Wildcard Domains ? Message-ID: forgive me if this has been asked before -- I couldn't find this exact question in my mailing list archives back to 2007 I am trying to deal with wildcard domains in a setup. The intended result is to do this : Requests for example.com Serve Site A All IP Address Requests : Serve Site A All other domains ( wildcard / failover ) Serve Site B I've tried several combinations of listen + server name, but I can't get this right. I end up sending everything to site A or site B. 
From nginx+phil at spodhuis.org Fri Mar 1 20:56:06 2013 From: nginx+phil at spodhuis.org (Phil Pennock) Date: Fri, 1 Mar 2013 15:56:06 -0500 Subject: nginx/KQUEUE+SPDY breaks proxy_ignore_client_abort In-Reply-To: <20130301131216.GE94127@mdounin.ru> References: <20130301082251.GA97216.take2@redoubt.spodhuis.org> <20130301131216.GE94127@mdounin.ru> Message-ID: <20130301205606.GA15343@redoubt.spodhuis.org> [fixed Subject: to help others with issue track it] On 2013-03-01 at 17:12 +0400, Maxim Dounin wrote: > It looks like you are running nginx with experimental SPDY patch, > and it broke things here. Try recompiling nginx without SPDY > patch to see if it helps. That fixed things, thank you. So, nginx+KQUEUE+SPDY breaks clients which shutdown on the write side, without the ability to disable treating this as a client abort. I'll sacrifice SPDY for now, to have correctness for existing clients. Do you think that the SPDY patch will change to include something like proxy_ignore_client_abort or will write-shutdowns just be treated as unacceptable? Given that SPDY requires SSL which inherently requires bi-directional connections at all times, the current behaviour with the SPDY patch is absolutely correct, if SPDY is enabled for that server. In this case, it's a cleartext server so SPDY wasn't enabled at all. [I'll reply to the list administrivia issue in a second email.] Thanks, -Phil From nginx+phil at spodhuis.org Fri Mar 1 20:59:59 2013 From: nginx+phil at spodhuis.org (Phil Pennock) Date: Fri, 1 Mar 2013 15:59:59 -0500 Subject: nginx mailing-list and sender filtering (vs BATV) In-Reply-To: <20130301131216.GE94127@mdounin.ru> References: <20130301082251.GA97216.take2@redoubt.spodhuis.org> <20130301131216.GE94127@mdounin.ru> Message-ID: <20130301205959.GB15343@redoubt.spodhuis.org> On 2013-03-01 at 17:12 +0400, Maxim Dounin wrote: > On Fri, Mar 01, 2013 at 03:22:51AM -0500, Phil Pennock wrote: > > PS: nginx mail-server configuration is broken; it's checking SMTP Envelope > > Sender against the subscription list, not the RFC5322.From: header, so > > breaks on things such as PRVS. Posting via manual injection to your > > mail-server. :( > > Unfortunately, there is no way to properly reject messages at SMTP > level (i.e. to avoid sending bounces) and doing checks based on > message headers at the same time. > > If you use different envelope from and message from addresses and > have problems with posting - just subscribe your envelope from > address to the mailing list with mail delivery disabled. I understand the problem you're fighting here, and why you're doing this at SMTP RCPT time, since Mailman doesn't have content scanning hooks to check if the message should be allowed based on the message headers. You can do these checks safely enough, but it requires more caution. When violating normal SMTP expectations by making a RCPT appear to only exist for certain MAIL FROM senders, it's important to understand variations in senders at SMTP time: the checks you're doing are not the same as the membership tests done by Mailman itself, which looks at the headers. I did the same thing as you, for expediency and to avoid forking yet more extra processes for scanning, but I made sure that the form of the address being checked for membership has had VERP and BATV variations stripped out first, to check a _normalized_ address against the Mailman membership roster. 
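A rough sketch of that normalisation step, covering just the BATV case 
(purely illustrative):

    # prvs=7062ab12de=user@example.com  ->  user@example.com
    printf '%s\n' "$sender" | sed -E 's/^prvs=[^=]+=//'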
BATV changes the SMTP Envelope Sender, with a crypto-hash embedded in the address, and a secret and a daily timestamp going into the hash inputs, so that if all messages _from_ a domain are sent with BATV, then bounces inherently *must* be to BATV targets if they're legitimate. This is the only tool that prevents joe-job backscatter from flooding mailboxes. So that's a non-standard address-existence test breaking when exposed to an address variation that does have an Internic draft, albeit expired. I've sucked it up and configured up an exception mechanism, adding this mailing-list to that, accepting that any time I enable the backscatter filter, I'll lose bounce messages from this list to me, with rejections dropping into a blackhole. That's got a lower risk of being triggered than a joe-job (unfortunately) (and this varies depending on your involvement with email infrastructure and how much spammers dislike you). Next time you're touching your mailserver setup, could you please take a look at adding a canonicalisation step to the addresses being checked against list membership? Thanks, -Phil From vbart at nginx.com Fri Mar 1 22:09:46 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 2 Mar 2013 02:09:46 +0400 Subject: nginx/KQUEUE+SPDY breaks proxy_ignore_client_abort In-Reply-To: <20130301205606.GA15343@redoubt.spodhuis.org> References: <20130301082251.GA97216.take2@redoubt.spodhuis.org> <20130301131216.GE94127@mdounin.ru> <20130301205606.GA15343@redoubt.spodhuis.org> Message-ID: <201303020209.46947.vbart@nginx.com> On Saturday 02 March 2013 00:56:06 Phil Pennock wrote: > [fixed Subject: to help others with issue track it] > > On 2013-03-01 at 17:12 +0400, Maxim Dounin wrote: > > It looks like you are running nginx with experimental SPDY patch, > > and it broke things here. Try recompiling nginx without SPDY > > patch to see if it helps. > > That fixed things, thank you. > > So, nginx+KQUEUE+SPDY breaks clients which shutdown on the write side, > without the ability to disable treating this as a client abort. > > I'll sacrifice SPDY for now, to have correctness for existing clients. > > Do you think that the SPDY patch will change to include something like > proxy_ignore_client_abort or will write-shutdowns just be treated as > unacceptable? > > Given that SPDY requires SSL which inherently requires bi-directional > connections at all times, the current behaviour with the SPDY patch is > absolutely correct, if SPDY is enabled for that server. In this case, > it's a cleartext server so SPDY wasn't enabled at all. > SPDY patch also includes many changes for http core of nginx. The one that you see, is the unintended result of these changes. I'm going to fix it in upcoming revision, since it can break some setups as you have mentioned. Thank you for the report. wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html From francis at daoine.org Sat Mar 2 00:43:57 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 2 Mar 2013 00:43:57 +0000 Subject: WildCard domains : how to treat IP Address and Specific Domains differently from Failover/Wildcard Domains ? In-Reply-To: References: Message-ID: <20130302004357.GF32392@craic.sysops.org> On Fri, Mar 01, 2013 at 03:20:10PM -0500, Jonathan Vanasco wrote: Hi there, > Requests for example.com > Serve Site A > > All IP Address Requests : > Serve Site A > > All other domains ( wildcard / failover ) > Serve Site B > > I've tried several combinations of listen + server name, but I can't get this right. 
I end up sending everything to site A or site B. You've seen http://nginx.org/en/docs/http/request_processing.html ? And http://nginx.org/r/listen and http://nginx.org/r/server_name ? You need the same "listen" ip:port in all servers -- simplest is to leave it at the default. The you need the correct "server_name" directives in the correct server{} blocks. B should be the default, so put it first: server { return 200 "site B\n"; } A should match the exact hostname example.com, and anything that is just numbers and dots: server { server_name example.com; server_name ~^[0-9.]*$; return 200 "site A\n"; } Because of the default value of server_name, a request with no "host" will match B. You can make it match A easily enough. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sat Mar 2 04:54:06 2013 From: nginx-forum at nginx.us (michael.heuberger) Date: Fri, 01 Mar 2013 23:54:06 -0500 Subject: Optimal nginx settings for websockets sending images In-Reply-To: <20130301125901.GC94127@mdounin.ru> References: <20130301125901.GC94127@mdounin.ru> Message-ID: thank you so much maxim i have read the documentation at http://nginx.org/en/docs/http/ngx_http_proxy_module.html and am trying to understand all that. it's not easy ... i'm serving video files (mp4 or webm). that's where these "an upstream response is buffered to a temporary file" warnings occur. how can i find out how far i should increase the buffers until these warnings are gone? how would you do this? second problem: when i made changes in the css file, uploaded that, then the nginx server was still serving the older version. because of "proxy_cache_valid 200 302 304 10m;" - how can i tell nginx to refresh cache asap when a new file was uploaded? cheers michael Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236601,236812#msg-236812 From igor at sysoev.ru Sat Mar 2 06:34:04 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Sat, 2 Mar 2013 10:34:04 +0400 Subject: WildCard domains : how to treat IP Address and Specific Domains differently from Failover/Wildcard Domains ? In-Reply-To: References: Message-ID: <1AEBFD2F-025F-4BD5-860C-756C1D9BE1B3@sysoev.ru> On Mar 2, 2013, at 0:20 , Jonathan Vanasco wrote: > forgive me if this has been asked before -- I couldn't find this exact question in my mailing list archives back to 2007 > > I am trying to deal with wildcard domains in a setup. > > The intended result is to do this : > > Requests for example.com > Serve Site A > > All IP Address Requests : > Serve Site A > > All other domains ( wildcard / failover ) > Serve Site B > > I've tried several combinations of listen + server name, but I can't get this right. I end up sending everything to site A or site B. server { listen 80; listen IP:80; server_name example.com; # site A } server { listen 80 default_server; # site B } "listen 80/server_name example.com" route all requests to example.com to site A. "listen IP:80" routes all requests to IP:80 to site A. Anything else is routed to default server of 80 port, i.e. to site B. 
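You can check which server answers for a given name by sending test 
requests with an explicit Host header, for example (the address and names 
here are placeholders):

    curl -s -H "Host: example.com" http://203.0.113.10/
    curl -s -H "Host: 203.0.113.10" http://203.0.113.10/
    curl -s -H "Host: other.example" http://203.0.113.10/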
-- Igor Sysoev http://nginx.com/support.html From list-reader at koshie.fr Sat Mar 2 19:00:47 2013 From: list-reader at koshie.fr (=?utf-8?Q?GASPARD_k=C3=A9vin?=) Date: Sat, 02 Mar 2013 20:00:47 +0100 Subject: 502 bad gateway error with php5-fpm on Debian 7 In-Reply-To: <20130221135041.GP32392@craic.sysops.org> References: <20130220224005.GM32392@craic.sysops.org> <20130220231310.GN32392@craic.sysops.org> <20130220235410.GO32392@craic.sysops.org> <20130221135041.GP32392@craic.sysops.org> Message-ID: I've removed all "127.0.0.1:9000" lines by "fastcgi_pass unix:/var/run/php5-fpm.sock;" and everything works now :-). Thanks for your help ! Le Thu, 21 Feb 2013 14:50:41 +0100, Francis Daly a ?crit: > On Thu, Feb 21, 2013 at 10:26:22AM +0100, GASPARD K?vin wrote: > > Hi there, > >> >So: what is the hostname in the url that you try to get, when you see >> >the 502 error? >> >> Trying to install a Wordpress, used a info.php page here: >> http://blog.koshie.fr/wp-admin/info.php > > Ok - so the one server{} block that is used is either the one that has > server_name blog.koshie.fr, or is the default one. > >> As you can see, there is a 502 Bad Gateway error. > > Yes, and that error log shows that: > >> 2013/02/21 10:21:22 [error] 1097#0: *5 connect() failed (111: Connection >> refused) while connecting to upstream, client: 46.218.152.242, server: >> koshie.fr, request: "GET /wordpress/info.php HTTP/1.1", upstream: >> "fastcgi://127.0.0.1:9000", host: "blog.koshie.fr" > > it is using the server "koshie.fr", not the server > "blog.koshie.fr". Presumably the server "koshie.fr" is the default, > and the server "blog.koshie.fr" does not exist. > > So the configuration that is running, is *not* the configuration that > you are showing here. > >> Logically, this is the vhost configuration file for >> http://blog.koshie.fr/wp-admin/info.php: > > But based on your later mail, this configuration file does not exist. > > If you want to get this configured correctly, your best bet is probably > to simplify the configuration significantly. > > Leave /etc/nginx/nginx.conf as it is. > > Let /etc/nginx/conf.d have exactly one file in it, this one. > > Then run your test and see if it works or fails. > >> >Maybe it is simplest if you rename the conf.d directory, then create >> >a new conf.d directory with just one vhost file. Then reload nginx and >> >re-do your test of a php request and see what it says. >> >> So, above you've the configuration file related to this log error: > > No. > > That configuration file does not result in this error. > >> >If it still fails, then you have a simpler test case to work from. >> >> What is this test case please? > > Your test case is: > > * you run "curl -i http://blog.koshie.fr/wordpress/info.php" > * you expect to see some useful content > * you actually see a 502 error. > > Then do whatever it takes to get the expected output. > > I think that one part of the problem is that you have only half-changed > from an old system to a new system. > > You new system has nothing listening on 127.0.0.1:9000, so any > configuration that mentions that ip:port is broken. It should be removed, > or replaced with the unix socket. > > And your new system does not actually include all of the files that you > want it to. > > When your nginx starts, it reads exactly one configuration file: > /etc/nginx/nginx.conf. > > That file then uses "include" to read some other files. Those other > files do not seem to be the ones you want, for some reason. > > I suggest: stop nginx. 
Make sure it is stopped, and not running, and > has nothing listening on port 80 or port 443. Then look at the files > in /etc/nginx/conf.d, and make sure that they are exactly the ones that > you want. Then start nginx, access the info.php url, and see if it works. > > Good luck, > > f -- Utilisant le logiciel de courrier d'Opera : http://www.opera.com/mail/ From list-reader at koshie.fr Sat Mar 2 19:14:55 2013 From: list-reader at koshie.fr (=?utf-8?Q?GASPARD_k=C3=A9vin?=) Date: Sat, 02 Mar 2013 20:14:55 +0100 Subject: 502 bad gateway error with php5-fpm on Debian 7 In-Reply-To: References: <20130220224005.GM32392@craic.sysops.org> <20130220231310.GN32392@craic.sysops.org> <20130220235410.GO32392@craic.sysops.org> <20130221135041.GP32392@craic.sysops.org> Message-ID: Well, in fact I've a new problem. info.php file works, so php works. But when I'm trying to access to my install.php file for wordpress installation I've "File not found." and in /var/log/nginx/error.log I have: "*1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 89.2.128.79, server: blog.koshie.fr, request: "GET /wp-admin/install.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "blog.koshie.fr"" An idea? I've googled a little and some peoples talk about /etc/nginx/fastcgi_params's SCRIPT_FILE_NAME parameters. But I don't understand what value to put into. PS : Sorry for the noise of my last e-mail. Cordially, Koshie Le Sat, 02 Mar 2013 20:00:47 +0100, GASPARD k?vin a ?crit: > I've removed all "127.0.0.1:9000" lines by "fastcgi_pass > unix:/var/run/php5-fpm.sock;" and everything works now :-). > > Thanks for your help ! > > Le Thu, 21 Feb 2013 14:50:41 +0100, Francis Daly a > ?crit: > >> On Thu, Feb 21, 2013 at 10:26:22AM +0100, GASPARD K?vin wrote: >> >> Hi there, >> >>> >So: what is the hostname in the url that you try to get, when you see >>> >the 502 error? >>> >>> Trying to install a Wordpress, used a info.php page here: >>> http://blog.koshie.fr/wp-admin/info.php >> >> Ok - so the one server{} block that is used is either the one that has >> server_name blog.koshie.fr, or is the default one. >> >>> As you can see, there is a 502 Bad Gateway error. >> >> Yes, and that error log shows that: >> >>> 2013/02/21 10:21:22 [error] 1097#0: *5 connect() failed (111: >>> Connection >>> refused) while connecting to upstream, client: 46.218.152.242, server: >>> koshie.fr, request: "GET /wordpress/info.php HTTP/1.1", upstream: >>> "fastcgi://127.0.0.1:9000", host: "blog.koshie.fr" >> >> it is using the server "koshie.fr", not the server >> "blog.koshie.fr". Presumably the server "koshie.fr" is the default, >> and the server "blog.koshie.fr" does not exist. >> >> So the configuration that is running, is *not* the configuration that >> you are showing here. >> >>> Logically, this is the vhost configuration file for >>> http://blog.koshie.fr/wp-admin/info.php: >> >> But based on your later mail, this configuration file does not exist. >> >> If you want to get this configured correctly, your best bet is probably >> to simplify the configuration significantly. >> >> Leave /etc/nginx/nginx.conf as it is. >> >> Let /etc/nginx/conf.d have exactly one file in it, this one. >> >> Then run your test and see if it works or fails. >> >>> >Maybe it is simplest if you rename the conf.d directory, then create >>> >a new conf.d directory with just one vhost file. Then reload nginx and >>> >re-do your test of a php request and see what it says. 
>>> >>> So, above you've the configuration file related to this log error: >> >> No. >> >> That configuration file does not result in this error. >> >>> >If it still fails, then you have a simpler test case to work from. >>> >>> What is this test case please? >> >> Your test case is: >> >> * you run "curl -i http://blog.koshie.fr/wordpress/info.php" >> * you expect to see some useful content >> * you actually see a 502 error. >> >> Then do whatever it takes to get the expected output. >> >> I think that one part of the problem is that you have only half-changed >> from an old system to a new system. >> >> You new system has nothing listening on 127.0.0.1:9000, so any >> configuration that mentions that ip:port is broken. It should be >> removed, >> or replaced with the unix socket. >> >> And your new system does not actually include all of the files that you >> want it to. >> >> When your nginx starts, it reads exactly one configuration file: >> /etc/nginx/nginx.conf. >> >> That file then uses "include" to read some other files. Those other >> files do not seem to be the ones you want, for some reason. >> >> I suggest: stop nginx. Make sure it is stopped, and not running, and >> has nothing listening on port 80 or port 443. Then look at the files >> in /etc/nginx/conf.d, and make sure that they are exactly the ones that >> you want. Then start nginx, access the info.php url, and see if it >> works. >> >> Good luck, >> >> f > > -- Utilisant le logiciel de courrier d'Opera : http://www.opera.com/mail/ From mdounin at mdounin.ru Sat Mar 2 22:11:25 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 3 Mar 2013 02:11:25 +0400 Subject: nginx mailing-list and sender filtering (vs BATV) In-Reply-To: <20130301205959.GB15343@redoubt.spodhuis.org> References: <20130301082251.GA97216.take2@redoubt.spodhuis.org> <20130301131216.GE94127@mdounin.ru> <20130301205959.GB15343@redoubt.spodhuis.org> Message-ID: <20130302221125.GB15378@mdounin.ru> Hello! On Fri, Mar 01, 2013 at 03:59:59PM -0500, Phil Pennock wrote: > On 2013-03-01 at 17:12 +0400, Maxim Dounin wrote: > > On Fri, Mar 01, 2013 at 03:22:51AM -0500, Phil Pennock wrote: > > > PS: nginx mail-server configuration is broken; it's checking SMTP Envelope > > > Sender against the subscription list, not the RFC5322.From: header, so > > > breaks on things such as PRVS. Posting via manual injection to your > > > mail-server. :( > > > > Unfortunately, there is no way to properly reject messages at SMTP > > level (i.e. to avoid sending bounces) and doing checks based on > > message headers at the same time. > > > > If you use different envelope from and message from addresses and > > have problems with posting - just subscribe your envelope from > > address to the mailing list with mail delivery disabled. > > I understand the problem you're fighting here, and why you're doing this > at SMTP RCPT time, since Mailman doesn't have content scanning hooks to > check if the message should be allowed based on the message headers. > You can do these checks safely enough, but it requires more caution. You probably didn't understand the problem deep enough: content scanning hooks, even if implemented, won't help. To properly reject message at SMTP level you have to check envelope sender, and if you've accepted RCPT TO - it's too late to reject message at DATA stage, as the message might have other valid recipients. So the only way to properly check list membership is to check envelope addresses. 
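For example, the decision has to be made at this point of the session 
(an illustrative dialogue, not actual server output):

    MAIL FROM:<someone@example.org>
    250 2.1.0 Ok
    RCPT TO:<nginx@nginx.org>
    550 5.7.1 <someone@example.org>: sender is not subscribed to this list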
Anything else means sending bounces, which is not acceptable nowadays. > When violating normal SMTP expectations by making a RCPT appear to only > exist for certain MAIL FROM senders, it's important to understand > variations in senders at SMTP time: the checks you're doing are not the > same as the membership tests done by Mailman itself, which looks at the > headers. > > I did the same thing as you, for expediency and to avoid forking yet > more extra processes for scanning, but I made sure that the form of the > address being checked for membership has had VERP and BATV variations > stripped out first, to check a _normalized_ address against the Mailman > membership roster. > > BATV changes the SMTP Envelope Sender, with a crypto-hash embedded in > the address, and a secret and a daily timestamp going into the hash > inputs, so that if all messages _from_ a domain are sent with BATV, then > bounces inherently *must* be to BATV targets if they're legitimate. > > This is the only tool that prevents joe-job backscatter from flooding > mailboxes. > > So that's a non-standard address-existence test breaking when exposed to > an address variation that does have an Internic draft, albeit expired. > > I've sucked it up and configured up an exception mechanism, adding this > mailing-list to that, accepting that any time I enable the backscatter > filter, I'll lose bounce messages from this list to me, with rejections > dropping into a blackhole. That's got a lower risk of being triggered > than a joe-job (unfortunately) (and this varies depending on your > involvement with email infrastructure and how much spammers dislike > you). > > Next time you're touching your mailserver setup, could you please take a > look at adding a canonicalisation step to the addresses being checked > against list membership? I personally think that BATV is awful, but normalization shouldn't make things worse and probably worth implementing. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Sat Mar 2 22:50:41 2013 From: nginx-forum at nginx.us (karlseguin) Date: Sat, 02 Mar 2013 17:50:41 -0500 Subject: proxy pass server based on request method Message-ID: <8343c6d5675daafccfd635f9f50a6639.NginxMailingListEnglish@forum.nginx.org> We want to pick the backend server to proxy_pass to based on the request method. Saw one example that used limit_except. Another idea would be to use lua. Is there a preferred way to achieve this? Karl Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236839,236839#msg-236839 From nginx+phil at spodhuis.org Sun Mar 3 04:28:38 2013 From: nginx+phil at spodhuis.org (Phil Pennock) Date: Sat, 2 Mar 2013 23:28:38 -0500 Subject: nginx mailing-list and sender filtering (vs BATV) In-Reply-To: <20130302221125.GB15378@mdounin.ru> References: <20130301082251.GA97216.take2@redoubt.spodhuis.org> <20130301131216.GE94127@mdounin.ru> <20130301205959.GB15343@redoubt.spodhuis.org> <20130302221125.GB15378@mdounin.ru> Message-ID: <20130303042838.GA41259@redoubt.spodhuis.org> On 2013-03-03 at 02:11 +0400, Maxim Dounin wrote: > You probably didn't understand the problem deep enough: content I'm one of the maintainers of the MTA which runs a plurality of the MTA installs out there. Of course, I have crazy days and moments of pure stupidity, but in general I have a decent understanding of email. > scanning hooks, even if implemented, won't help. 
To properly > reject message at SMTP level you have to check envelope sender, > and if you've accepted RCPT TO - it's too late to reject message > at DATA stage, as the message might have other valid recipients. > > So the only way to properly check list membership is to check > envelope addresses. Anything else means sending bounces, which is > not acceptable nowadays. Er, the only visible email addresses in nginx.org are for mailing-lists, I wasn't aware it had user-accounts. Even so, not a major issue. So if the poster is a member of one mailing-list but not another, then you're not back-scattering to unknown addresses; if content-scanning then deems the message okay, then it's far more acceptable to bounce. And if folks are cross-posting to many lists but only subscribed to one, rejecting all of them with a "fix your sender address or don't spam lists you're not subscribed to" message works well too. :) This even applies for user-accounts: protects from the crazies mailing every address they can think of. But yes, I understand the issue and do use RCPT-time checks myself, after normalisation of the sender address, to work around the fact that I'm doing something a little dodgy that might break legitimate mail. Regards, -Phil From zsp042 at gmail.com Sun Mar 3 05:45:39 2013 From: zsp042 at gmail.com (=?UTF-8?B?5byg5rKI6bmP?=) Date: Sun, 3 Mar 2013 13:45:39 +0800 Subject: How can nginx reverse proxy google app engine via SPDY? In-Reply-To: References: Message-ID: In China , government block google app engine by GFW So , I use a nginx reverse proxy (proxy_pass) to google app engine via https My Site need https to keep security , so I want to know can I let nginx proxy a https google app engine website via SPDY? -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sun Mar 3 06:18:00 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 3 Mar 2013 01:18:00 -0500 Subject: How can nginx reverse proxy google app engine via SPDY? In-Reply-To: References: Message-ID: My answer would not be 'Google is your friend' but the even better: RTFM :oP http://wiki.nginx.org/HttpProxyModule#proxy_pass --- *B. R.* On Sun, Mar 3, 2013 at 12:45 AM, ??? wrote: > > > In China , government block google app engine by GFW > > So , I use a nginx reverse proxy (proxy_pass) to google app engine via > https > > My Site need https to keep security , so I want to know can I let nginx > proxy a https google app engine website via SPDY? > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zsp007 at gmail.com Sun Mar 3 06:58:13 2013 From: zsp007 at gmail.com (=?UTF-8?B?5byg5rKI6bmP?=) Date: Sun, 3 Mar 2013 14:58:13 +0800 Subject: How can nginx reverse proxy google app engine via SPDY? In-Reply-To: References: Message-ID: I write this and it success proxyed , but I want to let the proxy use SPDY there is no how to use SPDY in revese proxy in the document server{ listen 443; server_name 42btc.com; ssl on; ssl_certificate /etc/ssl/certs/42btc.crt; ssl_certificate_key /etc/ssl/private/42btc.key; #enables SSLv3/TLSv1, but not SSLv2 which is weak and should no longer be used. 
ssl_protocols SSLv3 TLSv1; #Disables all weak ciphers ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM; location / { proxy_set_header HOST xxx.appspot.com ; proxy_set_header REMOTE-HOST $remote_addr; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass https://173.194.72.141; } } server { listen 80; server_name www.42btc.com 42btc.com *.42btc.com; charset utf-8; rewrite ^(.*)$ https://42btc.com$1 permanent; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew at nginx.com Sun Mar 3 08:08:42 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Sun, 3 Mar 2013 12:08:42 +0400 Subject: How can nginx reverse proxy google app engine via SPDY? In-Reply-To: References: Message-ID: <7A969A2C-78B5-4981-8966-43BB6EA068FE@nginx.com> On Mar 3, 2013, at 10:58 AM, ??? wrote: > I write this and it success proxyed , but I want to let the proxy use SPDY > there is no how to use SPDY in revese proxy in the document There's not SPDY for upstream servers in nginx at this time, so you can't. > server{ > listen 443; > server_name 42btc.com; > > ssl on; > ssl_certificate /etc/ssl/certs/42btc.crt; > ssl_certificate_key /etc/ssl/private/42btc.key; > #enables SSLv3/TLSv1, but not SSLv2 which is weak and should no longer be used. > ssl_protocols SSLv3 TLSv1; > #Disables all weak ciphers > ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM; > > location / { > > proxy_set_header HOST xxx.appspot.com; > > proxy_set_header REMOTE-HOST $remote_addr; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_pass https://173.194.72.141; > } > } > > server { > listen 80; > server_name www.42btc.com 42btc.com *.42btc.com; > charset utf-8; > rewrite ^(.*)$ https://42btc.com$1 permanent; > } > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Sun Mar 3 12:08:24 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 3 Mar 2013 16:08:24 +0400 Subject: Optimal nginx settings for websockets sending images In-Reply-To: References: <20130301125901.GC94127@mdounin.ru> Message-ID: <20130303120824.GE15378@mdounin.ru> Hello! On Fri, Mar 01, 2013 at 11:54:06PM -0500, michael.heuberger wrote: > thank you so much maxim > > i have read the documentation at > http://nginx.org/en/docs/http/ngx_http_proxy_module.html and am trying to > understand all that. it's not easy ... > > i'm serving video files (mp4 or webm). that's where these "an upstream > response is buffered to a temporary file" warnings occur. > > how can i find out how far i should increase the buffers until these > warnings are gone? how would you do this? As long as files are big - it probably doesn't make sense to even try to eliminate warnings by increasing buffers. Instead, you have to derive buffer sizes from amount of memory you want to use for buffering, keeping in mind that maximum memory will be (proxy_buffer_size + proxy_buffers) / (worker_processes * worker_connections). Note: as long as you have other activity on the host in question, including nginx with cache or even with just disk buffering, you likely want to keep at least part of the memory free - e.g., for VM cache. > second problem: > when i made changes in the css file, uploaded that, then the nginx server > was still serving the older version. 
because of "proxy_cache_valid 200 302 > 304 10m;" - how can i tell nginx to refresh cache asap when a new file was > uploaded? While in theory you may remove/refresh file in nginx cache, it won't really work in real life - as the same file might be cached at various other layers, including client browser cache. Correct aproach to the problem is to use unique links, e.g. with version number in them. Something like "/css/file.css?42", where "42" is a number you bump each time you do significant changes to the file, will do the trick. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Sun Mar 3 12:32:05 2013 From: nginx-forum at nginx.us (Daniel15) Date: Sun, 03 Mar 2013 07:32:05 -0500 Subject: FastCGI cache has stopped working Message-ID: <5498420d857bb7b50b9474c14929cf28.NginxMailingListEnglish@forum.nginx.org> Nginx's FastCGI caching used to work perfectly for me, but recently it stopped working and I can't work out why. This is how the headers look: HTTP/1.1 200 OK Server: nginx/1.2.7 Date: Sun, 03 Mar 2013 12:28:24 GMT Content-Type: text/html; charset=utf-8 Content-Length: 10727 Connection: keep-alive X-AspNetMvc-Version: 4.0 X-AspNet-Version: 4.0.30319 Expires: Sun, 3 Mar 2013 13:28:24 GMT Cache-Control: public, max-age=3600 Last-Modified: Sun, 3 Mar 2013 12:26:20 GMT X-Cache: MISS It appears that the Last-Modified, Expires and Cache-Control headers are set correctly, however the cache status is always "MISS" and the cache directory is empty. Any ideas on why this would be happening, and how to debug this? The site URL is http://dan.cx/, and here is my Nginx configuration for this site: https://github.com/Daniel15/Website/blob/master/nginx.conf Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236852,236852#msg-236852 From list-reader at koshie.fr Sun Mar 3 12:38:04 2013 From: list-reader at koshie.fr (=?utf-8?Q?GASPARD_k=C3=A9vin?=) Date: Sun, 03 Mar 2013 13:38:04 +0100 Subject: 502 bad gateway error with php5-fpm on Debian 7 In-Reply-To: References: <20130220224005.GM32392@craic.sysops.org> <20130220231310.GN32392@craic.sysops.org> <20130220235410.GO32392@craic.sysops.org> <20130221135041.GP32392@craic.sysops.org> Message-ID: Hi, I've solved my issue, I've moved all wordpress file into the document root of blog.koshie.fr, modified it into /etc/nginx/conf.d/blog.koshie.fr.conf according to my new structure and now it works. Again thanks :). Le Sat, 02 Mar 2013 20:14:55 +0100, GASPARD k?vin a ?crit: > Well, in fact I've a new problem. > > info.php file works, so php works. > > But when I'm trying to access to my install.php file for wordpress > installation I've "File not found." and in /var/log/nginx/error.log I > have: "*1 FastCGI sent in stderr: "Primary script unknown" while reading > response header from upstream, client: 89.2.128.79, server: > blog.koshie.fr, request: "GET /wp-admin/install.php HTTP/1.1", upstream: > "fastcgi://unix:/var/run/php5-fpm.sock:", host: "blog.koshie.fr"" > > An idea? I've googled a little and some peoples talk about > /etc/nginx/fastcgi_params's SCRIPT_FILE_NAME parameters. But I don't > understand what value to put into. > > PS : Sorry for the noise of my last e-mail. > > Cordially, Koshie > > Le Sat, 02 Mar 2013 20:00:47 +0100, GASPARD k?vin > a ?crit: > >> I've removed all "127.0.0.1:9000" lines by "fastcgi_pass >> unix:/var/run/php5-fpm.sock;" and everything works now :-). >> >> Thanks for your help ! 
>> >> Le Thu, 21 Feb 2013 14:50:41 +0100, Francis Daly a >> ?crit: >> >>> On Thu, Feb 21, 2013 at 10:26:22AM +0100, GASPARD K?vin wrote: >>> >>> Hi there, >>> >>>> >So: what is the hostname in the url that you try to get, when you see >>>> >the 502 error? >>>> >>>> Trying to install a Wordpress, used a info.php page here: >>>> http://blog.koshie.fr/wp-admin/info.php >>> >>> Ok - so the one server{} block that is used is either the one that has >>> server_name blog.koshie.fr, or is the default one. >>> >>>> As you can see, there is a 502 Bad Gateway error. >>> >>> Yes, and that error log shows that: >>> >>>> 2013/02/21 10:21:22 [error] 1097#0: *5 connect() failed (111: >>>> Connection >>>> refused) while connecting to upstream, client: 46.218.152.242, server: >>>> koshie.fr, request: "GET /wordpress/info.php HTTP/1.1", upstream: >>>> "fastcgi://127.0.0.1:9000", host: "blog.koshie.fr" >>> >>> it is using the server "koshie.fr", not the server >>> "blog.koshie.fr". Presumably the server "koshie.fr" is the default, >>> and the server "blog.koshie.fr" does not exist. >>> >>> So the configuration that is running, is *not* the configuration that >>> you are showing here. >>> >>>> Logically, this is the vhost configuration file for >>>> http://blog.koshie.fr/wp-admin/info.php: >>> >>> But based on your later mail, this configuration file does not exist. >>> >>> If you want to get this configured correctly, your best bet is probably >>> to simplify the configuration significantly. >>> >>> Leave /etc/nginx/nginx.conf as it is. >>> >>> Let /etc/nginx/conf.d have exactly one file in it, this one. >>> >>> Then run your test and see if it works or fails. >>> >>>> >Maybe it is simplest if you rename the conf.d directory, then create >>>> >a new conf.d directory with just one vhost file. Then reload nginx >>>> and >>>> >re-do your test of a php request and see what it says. >>>> >>>> So, above you've the configuration file related to this log error: >>> >>> No. >>> >>> That configuration file does not result in this error. >>> >>>> >If it still fails, then you have a simpler test case to work from. >>>> >>>> What is this test case please? >>> >>> Your test case is: >>> >>> * you run "curl -i http://blog.koshie.fr/wordpress/info.php" >>> * you expect to see some useful content >>> * you actually see a 502 error. >>> >>> Then do whatever it takes to get the expected output. >>> >>> I think that one part of the problem is that you have only half-changed >>> from an old system to a new system. >>> >>> You new system has nothing listening on 127.0.0.1:9000, so any >>> configuration that mentions that ip:port is broken. It should be >>> removed, >>> or replaced with the unix socket. >>> >>> And your new system does not actually include all of the files that you >>> want it to. >>> >>> When your nginx starts, it reads exactly one configuration file: >>> /etc/nginx/nginx.conf. >>> >>> That file then uses "include" to read some other files. Those other >>> files do not seem to be the ones you want, for some reason. >>> >>> I suggest: stop nginx. Make sure it is stopped, and not running, and >>> has nothing listening on port 80 or port 443. Then look at the files >>> in /etc/nginx/conf.d, and make sure that they are exactly the ones that >>> you want. Then start nginx, access the info.php url, and see if it >>> works. 
>>> >>> Good luck, >>> >>> f >> >> > > -- Utilisant le logiciel de courrier d'Opera : http://www.opera.com/mail/ From nginx-forum at nginx.us Sun Mar 3 12:45:21 2013 From: nginx-forum at nginx.us (Daniel15) Date: Sun, 03 Mar 2013 07:45:21 -0500 Subject: FastCGI cache has stopped working In-Reply-To: <5498420d857bb7b50b9474c14929cf28.NginxMailingListEnglish@forum.nginx.org> References: <5498420d857bb7b50b9474c14929cf28.NginxMailingListEnglish@forum.nginx.org> Message-ID: <911df5fb69b8f82be7eb9927f4d52fd2.NginxMailingListEnglish@forum.nginx.org> Figured it out. The Expires header ("Sun, 3 Mar 2013 13:28:24 GMT") is using a single digit for the date. If I change the "3" to "03", Nginx works as expected. RFC 1123 and RFC 822 say that both one- and two-digit numbers are valid, so this is a bug in Nginx's caching. I'll report it to the bug tracker. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236852,236855#msg-236855 From list-reader at koshie.fr Sun Mar 3 12:52:48 2013 From: list-reader at koshie.fr (=?utf-8?Q?GASPARD_k=C3=A9vin?=) Date: Sun, 03 Mar 2013 13:52:48 +0100 Subject: Convert Apache rewrite to NGinx Message-ID: Hi, Using nginx 1.2.1 on Debian Wheezy 64 bits. My wordpress need rewrite, it gave me this: RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] I've tried to convert it with this website: http://winginx.com/htaccess. The result is: # nginx configuration location / { if (!-e $request_filename){ rewrite ^(.*)$ /index.php break; } } I've put it into /etc/nginx/conf.d/doinalefort.fr.conf like this: server { listen 80; listen 443 ssl; # server_name 176.31.122.26; server_name doinalefort.fr www.doinalefort.fr; root /var/www/doinalefort.fr; msie_padding on; ssl_session_timeout 5m; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; error_log /var/log/nginx/error.log; access_log /var/log/nginx/access.log; index index.php; fastcgi_index index.php; client_max_body_size 8M; client_body_buffer_size 256K; location ~ \.php$ { include fastcgi_params; # Assuming php-fastcgi running on localhost port 9000 fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; location / { if (!-e $request_filename){ rewrite ^(.*)$ /index.php break; } } } } And restarted nginx, but it give me this error: Restarting nginx: nginx: [emerg] location "/" is outside location "\.php$" in /etc/nginx/conf.d/doinalefort.fr.conf:44 nginx: configuration file /etc/nginx/nginx.conf test failed And idea? Cordially, Koshie -- Sorry for my english, I'm trying the best in each e-mail writing. Tell me if I'm not clear enough. This mail account is only for list reading, to contact me send an e-mail at kevingaspard at koshie.fr From mdounin at mdounin.ru Sun Mar 3 12:57:02 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 3 Mar 2013 16:57:02 +0400 Subject: FastCGI cache has stopped working In-Reply-To: <5498420d857bb7b50b9474c14929cf28.NginxMailingListEnglish@forum.nginx.org> References: <5498420d857bb7b50b9474c14929cf28.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130303125702.GG15378@mdounin.ru> Hello! 
On Sun, Mar 03, 2013 at 07:32:05AM -0500, Daniel15 wrote: > Nginx's FastCGI caching used to work perfectly for me, but recently it > stopped working and I can't work out why. > > This is how the headers look: > HTTP/1.1 200 OK > Server: nginx/1.2.7 > Date: Sun, 03 Mar 2013 12:28:24 GMT > Content-Type: text/html; charset=utf-8 > Content-Length: 10727 > Connection: keep-alive > X-AspNetMvc-Version: 4.0 > X-AspNet-Version: 4.0.30319 > Expires: Sun, 3 Mar 2013 13:28:24 GMT > Cache-Control: public, max-age=3600 > Last-Modified: Sun, 3 Mar 2013 12:26:20 GMT > X-Cache: MISS > > It appears that the Last-Modified, Expires and Cache-Control headers are set > correctly, however the cache status is always "MISS" and the cache directory > is empty. Any ideas on why this would be happening, and how to debug this? > > The site URL is http://dan.cx/, and here is my Nginx configuration for this > site: https://github.com/Daniel15/Website/blob/master/nginx.conf Generic debugging hints are here: http://wiki.nginx.org/Debugging In ths particular case I would suggest there is another caching layer, which results in cached "X-Cache: MISS" being returned. Note that Expires and Last-Modified are the same, and X-ExecutionTime headers match exactly: HTTP/1.1 200 OK Server: nginx/1.2.7 Date: Sun, 03 Mar 2013 12:49:14 GMT Content-Type: text/html; charset=utf-8 Content-Length: 10727 Connection: close X-AspNetMvc-Version: 4.0 X-ExecutionTime: 00:00:00.1373919 X-ExecutionTime: 00:00:00.1717868 X-AspNet-Version: 4.0.30319 Expires: Sun, 3 Mar 2013 13:22:27 GMT Cache-Control: public, max-age=3600 Last-Modified: Sun, 3 Mar 2013 12:22:27 GMT X-Cache: MISS HTTP/1.1 200 OK Server: nginx/1.2.7 Date: Sun, 03 Mar 2013 12:49:31 GMT Content-Type: text/html; charset=utf-8 Content-Length: 10727 Connection: close X-AspNetMvc-Version: 4.0 X-ExecutionTime: 00:00:00.1373919 X-ExecutionTime: 00:00:00.1717868 X-AspNet-Version: 4.0.30319 Expires: Sun, 3 Mar 2013 13:22:27 GMT Cache-Control: public, max-age=3600 Last-Modified: Sun, 3 Mar 2013 12:22:27 GMT X-Cache: MISS HTTP/1.1 200 OK Server: nginx/1.2.7 Date: Sun, 03 Mar 2013 12:50:25 GMT Content-Type: text/html; charset=utf-8 Content-Length: 10727 Connection: close X-AspNetMvc-Version: 4.0 X-ExecutionTime: 00:00:00.1373919 X-ExecutionTime: 00:00:00.1717868 X-AspNet-Version: 4.0.30319 Expires: Sun, 3 Mar 2013 13:22:27 GMT Cache-Control: public, max-age=3600 Last-Modified: Sun, 3 Mar 2013 12:22:27 GMT X-Cache: MISS -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Sun Mar 3 13:02:45 2013 From: nginx-forum at nginx.us (Daniel15) Date: Sun, 03 Mar 2013 08:02:45 -0500 Subject: FastCGI cache has stopped working In-Reply-To: <20130303125702.GG15378@mdounin.ru> References: <20130303125702.GG15378@mdounin.ru> Message-ID: <4ffd14baf38c80e9660a4e2962f0ad2f.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > In ths particular case I would suggest there is another caching > layer, which results in cached "X-Cache: MISS" being returned. > Note that Expires and Last-Modified are the same, and > X-ExecutionTime headers match exactly: Yeah, for that particular page there's another caching layer in the app itself. The X-Cache header is added by Nginx, however (tested this by renaming it in my Nginx configuration). Try http://dan.cx/bundles/main.css instead - This file does not have another caching layer, and you can see the Last-Modified and Expires headers changing on every request. 
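For anyone following the thread, the kind of setup under discussion looks roughly like this - a sketch only, with a made-up zone name, cache path and backend address rather than the actual values from the config linked above:

    # in http{}:
    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=SITE:10m inactive=60m;

    # in the location{} that passes requests to the FastCGI backend:
    location / {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_cache SITE;
        fastcgi_cache_key $scheme$host$request_uri;
        # only used when the backend sends no Cache-Control/Expires of its own
        fastcgi_cache_valid 200 10m;
        # reports HIT / MISS / EXPIRED etc. for each response
        add_header X-Cache $upstream_cache_status;
    }
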
As far as I can tell, this is a bug in Nginx (looks like it's not parsing the Expires date correctly if the day is a single digit) and I've raised it in Trac: http://trac.nginx.org/nginx/ticket/310 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236852,236858#msg-236858 From mdounin at mdounin.ru Sun Mar 3 13:05:35 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 3 Mar 2013 17:05:35 +0400 Subject: FastCGI cache has stopped working In-Reply-To: <911df5fb69b8f82be7eb9927f4d52fd2.NginxMailingListEnglish@forum.nginx.org> References: <5498420d857bb7b50b9474c14929cf28.NginxMailingListEnglish@forum.nginx.org> <911df5fb69b8f82be7eb9927f4d52fd2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130303130535.GH15378@mdounin.ru> Hello! On Sun, Mar 03, 2013 at 07:45:21AM -0500, Daniel15 wrote: > Figured it out. > > The Expires header ("Sun, 3 Mar 2013 13:28:24 GMT") is using a single digit > for the date. If I change the "3" to "03", Nginx works as expected. RFC > 1123 and RFC 822 say that both one- and two-digit numbers are valid, so this > is a bug in Nginx's caching. I'll report it to the bug tracker. Nice catch. It's not a bug in nginx though, as HTTP only allows fixed-width subset of RFC1123, see here: http://tools.ietf.org/html/rfc2616#section-3.3.1 -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Sun Mar 3 13:07:27 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 3 Mar 2013 17:07:27 +0400 Subject: FastCGI cache has stopped working In-Reply-To: <4ffd14baf38c80e9660a4e2962f0ad2f.NginxMailingListEnglish@forum.nginx.org> References: <20130303125702.GG15378@mdounin.ru> <4ffd14baf38c80e9660a4e2962f0ad2f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130303130726.GI15378@mdounin.ru> Hello! On Sun, Mar 03, 2013 at 08:02:45AM -0500, Daniel15 wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > In ths particular case I would suggest there is another caching > > layer, which results in cached "X-Cache: MISS" being returned. > > Note that Expires and Last-Modified are the same, and > > X-ExecutionTime headers match exactly: > > Yeah, for that particular page there's another caching layer in the app > itself. The X-Cache header is added by Nginx, however (tested this by > renaming it in my Nginx configuration). > > Try http://dan.cx/bundles/main.css instead - This file does not have another > caching layer, and you can see the Last-Modified and Expires headers > changing on every request. > > As far as I can tell, this is a bug in Nginx (looks like it's not parsing > the Expires date correctly if the day is a single digit) and I've raised it > in Trac: http://trac.nginx.org/nginx/ticket/310 Closed this as invalid, see my another message for links. -- Maxim Dounin http://nginx.org/en/donation.html From lists at ruby-forum.com Sun Mar 3 14:38:49 2013 From: lists at ruby-forum.com (iqbal h.) Date: Sun, 03 Mar 2013 15:38:49 +0100 Subject: Call of duty black Ops 2 Free Download Pc Game In-Reply-To: <8c17803c78409bff608afe49ff8b84d8@ruby-forum.com> References: <8c17803c78409bff608afe49ff8b84d8@ruby-forum.com> Message-ID: <108be447e9f0861d7080059ab8ad28fd@ruby-forum.com> thank i for post -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Sun Mar 3 14:40:09 2013 From: lists at ruby-forum.com (iqbal h.) 
Date: Sun, 03 Mar 2013 15:40:09 +0100 Subject: Brutal Legend free download full version pc game In-Reply-To: <8c17803c78409bff608afe49ff8b84d8@ruby-forum.com> References: <8c17803c78409bff608afe49ff8b84d8@ruby-forum.com> Message-ID: <8542da6ba0f92a72b09c46a4adcc5e7a@ruby-forum.com> Brutal Legend free download full version pc game Brutal Legend Brutal Legend (c) Double Fine Prod 02/2013 :?.. RELEASE.DATE .. PROTECTION ??.: Steam 1 :???. DISC(S) .. GAME.TYPE ??..: Action. Total Size: 7.98 GB here download link http://www.freedownloadfullversioncrack.com/brutal-legend-free-download-full-version-pc-game/ -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Sun Mar 3 14:41:41 2013 From: lists at ruby-forum.com (iqbal h.) Date: Sun, 03 Mar 2013 15:41:41 +0100 Subject: Brutal Legend free download full version pc game Message-ID: Brutal Legend Brutal Legend (c) Double Fine Prod 02/2013 :?.. RELEASE.DATE .. PROTECTION ??.: Steam 1 :???. DISC(S) .. GAME.TYPE ??..: Action. Total Size: 7.98 GB here go this link http://www.freedownloadfullversioncrack.com/brutal-legend-free-download-full-version-pc-game/ -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Sun Mar 3 14:42:08 2013 From: lists at ruby-forum.com (iqbal h.) Date: Sun, 03 Mar 2013 15:42:08 +0100 Subject: Brutal Legend free download full version pc game In-Reply-To: References: Message-ID: thank you for posting -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Sun Mar 3 14:42:31 2013 From: lists at ruby-forum.com (iqbal h.) Date: Sun, 03 Mar 2013 15:42:31 +0100 Subject: Brutal Legend free download full version pc game In-Reply-To: References: Message-ID: really good site -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Sun Mar 3 14:52:48 2013 From: lists at ruby-forum.com (iqbal h.) Date: Sun, 03 Mar 2013 15:52:48 +0100 Subject: Brutal Legend free download full version pc game In-Reply-To: References: Message-ID: thank easy download just go this link -- Posted via http://www.ruby-forum.com/. From francis at daoine.org Sun Mar 3 15:01:09 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 3 Mar 2013 15:01:09 +0000 Subject: Convert Apache rewrite to NGinx In-Reply-To: References: Message-ID: <20130303150109.GG32392@craic.sysops.org> On Sun, Mar 03, 2013 at 01:52:48PM +0100, GASPARD k?vin wrote: Hi there, > Using nginx 1.2.1 on Debian Wheezy 64 bits. > > My wordpress need rewrite, it gave me this: > # nginx configuration > > location / { > if (!-e $request_filename){ > rewrite ^(.*)$ /index.php break; > } > } See http://wiki.nginx.org/Pitfalls Particularly the "Front Controller Pattern based packages" section. Probably a single extra try_files line will work for you. f -- Francis Daly francis at daoine.org From bruno.premont at restena.lu Sun Mar 3 15:07:50 2013 From: bruno.premont at restena.lu (Bruno =?UTF-8?B?UHLDqW1vbnQ=?=) Date: Sun, 3 Mar 2013 16:07:50 +0100 Subject: Gzip not compressing response body with status code other than 200, 403, 404 In-Reply-To: <20120901005306.GE40452@mdounin.ru> References: <877016e0578575554305828cedbaa7a7.NginxMailingListEnglish@forum.nginx.org> <20120901005306.GE40452@mdounin.ru> Message-ID: <20130303160750.64026b17@neptune.home> Hello Maxim, On Sat, 01 September 2012 Maxim Dounin wrote: > On Sat, Aug 25, 2012 at 02:32:02AM -0400, soniclee wrote: > > > I met same problem while try to gzip response with other status code. Any > > update for this issue? > > In some cases it's just not possible to compress > response, e.g. 
you can't compress 206 as it doesn't contain full > entity body. Or you can't compress 304 as it has no entity at > all. > > In many cases it's unwise/unsafe to compress responses with some > status codes. I.e. you probably don't want to compress 400 (Bad > Request) response even if client claimed gzip support before it > did some syntax error in the request - as you already know client > did something wrong, and it's better to return easy to understand > response. Or e.g. if you try to return 500 (Internal Server > Error) due to memory allocation error - you probably don't want to > allocate memory for gzip. > > Due to the above reasons gzip compression is done only for certain > common and safe response codes. And from the above explanation it > should be more or less obvious that it just can't be enabled for > all status codes. > > If you think gzip compression should be enabled for some specific > status code - please provide your suggestions with reasoning. One status code that should allow compression is 410 "Gone", it is mostly equivalent to 404 just stronger. (patch attached) Probably most 5xx error pages generated by an upstream should also be compressed if the request as seen by nginx is valid. (e.g. a 503 or 500 error returned by upstream, especially when they are rather large) Bruno -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-1.2.7-gzip-http410.patch Type: text/x-patch Size: 1369 bytes Desc: not available URL: From nginx-forum at nginx.us Sun Mar 3 15:30:17 2013 From: nginx-forum at nginx.us (omolinete) Date: Sun, 03 Mar 2013 10:30:17 -0500 Subject: NginX and Magento very strange problem (?!) Message-ID: <9960140233989dfa1a9e438367cac9b0.NginxMailingListEnglish@forum.nginx.org> Hello All, This is my first post on the forum, so thank you very much in advance for your attention on my topic. Hope anyone could help me :) The problem we are getting with NginX and Magento, is relating to the download process of files such as PDF, which are in this case invoices. Magento seems to work flawlessly under NginX, except when we try to view from the server any PDF file through the Magento backend, or when we try to save the same file on our local disk. The tricky point is that we don't get any problem when we try to download a previously uploaded PDF file to the server using the "scp" command, and accessing to this file using the direct URL. The problem appears when we try to download the PDF file from the Magento's backend; NginX contact our browser but does not start the transfer/download after you perform a restart of the NginX's service. I have recorded a little video of 3:27 minutes from my desktop, where you will see by yourself what is the issue we have with NginX. Here is the URL: http://www.misspepa.com/capturedmovie.ogv . Please, note that there's no audio for the video. We are really convinced that the problem is at the NginX side and not on Magento side, because as you will see on the video, when we restart the NginX service the transfer which still says "Starting" (when it is really frozen) is rapidly accomplished with success. If I do not restart the NginX service, the download transfer process of such files will become stalled for a indefinitely period of time with no success... ... And last but not last, the logs of NginX didn't report any error. So, for that reason I decided to post my issue to the forum, in order to get some help on this subject. Thank you very much in advance!! 
MOLI Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236869,236869#msg-236869 From nginx-forum at nginx.us Sun Mar 3 17:43:57 2013 From: nginx-forum at nginx.us (justin) Date: Sun, 03 Mar 2013 12:43:57 -0500 Subject: Regular expression used in server_name directive Message-ID: <1cfb17081ee3a1ad62ab82ffbef0ca8b.NginxMailingListEnglish@forum.nginx.org> I am using a regular expression with a capture group in my server_name directive. It looks like: server_name (?<account>.+)\.mydomain\.com$ The problem is that I want to expand it slightly and say anything except web3.mydomain.com. i.e. something.mydomain.com matches, but web3.mydomain.com does not match. How is this possible? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236870,236870#msg-236870 From list-reader at koshie.fr Sun Mar 3 18:38:39 2013 From: list-reader at koshie.fr (=?utf-8?Q?GASPARD_k=C3=A9vin?=) Date: Sun, 03 Mar 2013 19:38:39 +0100 Subject: Convert Apache rewrite to NGinx In-Reply-To: <20130303150109.GG32392@craic.sysops.org> References: <20130303150109.GG32392@craic.sysops.org> Message-ID: Thanks for your reply. > On Sun, Mar 03, 2013 at 01:52:48PM +0100, GASPARD kévin wrote: > > Hi there, > >> Using nginx 1.2.1 on Debian Wheezy 64 bits. >> >> My wordpress need rewrite, it gave me this: > >> # nginx configuration >> >> location / { >> if (!-e $request_filename){ >> rewrite ^(.*)$ /index.php break; >> } >> } > > See http://wiki.nginx.org/Pitfalls > > Particularly the "Front Controller Pattern based packages" section. > > Probably a single extra try_files line will work for you. > > f This is my new config file : server { listen 80; listen 443 ssl; # server_name 176.31.122.26; server_name doinalefort.fr www.doinalefort.fr; root /var/www/doinalefort.fr; msie_padding on; # ssl_certificate /etc/nginx/certs/auction-web.crt; # ssl_certificate_key /etc/nginx/certs/auction-web.key; ssl_session_timeout 5m; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; error_log /var/log/nginx/error.log; access_log /var/log/nginx/access.log; index index.php; fastcgi_index index.php; client_max_body_size 8M; client_body_buffer_size 256K; location ~ \.php$ { include fastcgi_params; # Assuming php-fastcgi running on localhost port 9000 fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; try_files $uri $uri/ /index.php?q=$uri&$args; } } By checking the URL you gave me it seems to be the good choice for Wordpress (3.5.1). I've restarted nginx, then I've this kind of permalink in Wordpress now: http://doinalefort.fr/%year%/%postname%/ and with the hello world topic it looks like this : http://doinalefort.fr/2013/hello-world/ As you can see, I've a 404 error, in /var/log/nginx/error.log I've: 2013/03/03 19:34:31 [error] 27463#0: *252 "/var/www/doinalefort.fr/2013/hello-world/index.php" is not found (2: No such file or directory), client: 80.239.242.158, server: doinalefort.fr, request: "GET /2013/hello-world/ HTTP/1.1", host: "doinalefort.fr" And in access.log: 80.239.242.158 - - [03/Mar/2013:19:35:28 +0100] "GET /2013/hello-world/ HTTP/1.1" 404 142 "-" "Opera/9.80 (X11; Linux x86_64; Edition Next) Presto/2.12.388 Version/12.14" An idea?
Also, I've googled a little more and I finally found this blog: http://themesforge.com/featured/high-performance-wordpress-part-3/ But honestly I don't have enough Nginx's knowledge to know if this guy have write something serious. If you can tell me if it's okay it'll maybe solve a lot of problem before they appear. Cordially, Koshie -- Sorry for my english, I'm trying the best in each e-mail writing. Tell me if I'm not clear enough. This mail account is only for list reading, to contact me send an e-mail at kevingaspard at koshie.fr From steve at greengecko.co.nz Sun Mar 3 18:59:17 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 4 Mar 2013 07:59:17 +1300 Subject: Convert Apache rewrite to NGinx In-Reply-To: References: <20130303150109.GG32392@craic.sysops.org> Message-ID: Why not just use the wp config examples in the docs? Both Wordpress and nginx offer them. Steve On 4/03/2013, at 7:38 AM, GASPARD k?vin wrote: > Thanks for your reply. > >> On Sun, Mar 03, 2013 at 01:52:48PM +0100, GASPARD k?vin wrote: >> >> Hi there, >> >>> Using nginx 1.2.1 on Debian Wheezy 64 bits. >>> >>> My wordpress need rewrite, it gave me this: >> >>> # nginx configuration >>> >>> location / { >>> if (!-e $request_filename){ >>> rewrite ^(.*)$ /index.php break; >>> } >>> } >> >> See http://wiki.nginx.org/Pitfalls >> >> Particularly the "Front Controller Pattern based packages" section. >> >> Probably a single extra try_files line will work for you. >> >> f > > This is my new config file : > > server { > listen 80; > listen 443 ssl; > # server_name 176.31.122.26; > server_name doinalefort.fr www.doinalefort.fr; > root /var/www/doinalefort.fr; > > msie_padding on; > > # ssl_certificate /etc/nginx/certs/auction-web.crt; > # ssl_certificate_key /etc/nginx/certs/auction-web.key; > > ssl_session_timeout 5m; > > ssl_protocols SSLv2 SSLv3 TLSv1; > ssl_ciphers HIGH:!aNULL:!MD5; > ssl_prefer_server_ciphers on; > > error_log /var/log/nginx/error.log; > access_log /var/log/nginx/access.log; > > index index.php; > fastcgi_index index.php; > > client_max_body_size 8M; > client_body_buffer_size 256K; > > location ~ \.php$ { > include fastcgi_params; > > # Assuming php-fastcgi running on localhost port 9000 > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > > fastcgi_connect_timeout 60; > fastcgi_send_timeout 180; > fastcgi_read_timeout 180; > fastcgi_buffer_size 128k; > fastcgi_buffers 4 256k; > fastcgi_busy_buffers_size 256k; > fastcgi_temp_file_write_size 256k; > fastcgi_intercept_errors on; > > try_files $uri $uri/ /index.php?q=$uri&$args; > > } > } > > > By checking the URL you gave me it seems to be the good choice for Wordpress (3.5.1). I've restarted nginx, then I've this kind of permalink in Wordpress now: http://doinalefort.fr/%year%/%postname%/ and with the hello world topic it looks like this : http://doinalefort.fr/2013/hello-world/ > > As you can see, I've a 404 error, in /var/log/nginx/error.log I've: 2013/03/03 19:34:31 [error] 27463#0: *252 "/var/www/doinalefort.fr/2013/hello-world/index.php" is not found (2: No such file or directory), client: 80.239.242.158, server: doinalefort.fr, request: "GET /2013/hello-world/ HTTP/1.1", host: "doinalefort.fr" > > And in access.log: > > 80.239.242.158 - - [03/Mar/2013:19:35:28 +0100] "GET /2013/hello-world/ HTTP/1.1" 404 142 "-" "Opera/9.80 (X11; Linux x86_64; Edition Next) Presto/2.12.388 Version/12.14" > > An idea? 
> > Also, I've googled a little more and I finally found this blog: http://themesforge.com/featured/high-performance-wordpress-part-3/ > > But honestly I don't have enough Nginx's knowledge to know if this guy have write something serious. If you can tell me if it's okay it'll maybe solve a lot of problem before they appear. > > Cordially, Koshie > > > -- > Sorry for my english, I'm trying the best in each e-mail writing. Tell me if I'm not clear enough. > This mail account is only for list reading, to contact me send an e-mail at kevingaspard at koshie.fr > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From francis at daoine.org Sun Mar 3 19:09:58 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 3 Mar 2013 19:09:58 +0000 Subject: Convert Apache rewrite to NGinx In-Reply-To: References: <20130303150109.GG32392@craic.sysops.org> Message-ID: <20130303190958.GH32392@craic.sysops.org> On Sun, Mar 03, 2013 at 07:38:39PM +0100, GASPARD k?vin wrote: Hi there, > >Probably a single extra try_files line will work for you. > This is my new config file : > location ~ \.php$ { > try_files $uri $uri/ /index.php?q=$uri&$args; > } You will probably find things much easier when you fully understand what is written at http://nginx.org/r/location > http://doinalefort.fr/2013/hello-world/ One request is handled in one location{}. That request does not match this location, and so will not be handled in this location. The try_files directive should be in a location that does match -- perhaps "location / {}". f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Mar 3 19:12:50 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 3 Mar 2013 19:12:50 +0000 Subject: Regular expression used in server_name directive In-Reply-To: <1cfb17081ee3a1ad62ab82ffbef0ca8b.NginxMailingListEnglish@forum.nginx.org> References: <1cfb17081ee3a1ad62ab82ffbef0ca8b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130303191250.GI32392@craic.sysops.org> On Sun, Mar 03, 2013 at 12:43:57PM -0500, justin wrote: Hi there, > I am using a regular expression with a capture group in my server_name > directive. It looks like: > > server_name (?.+)\.mydomain\.com$ http://nginx.org/r/server_name for details of how the matching server{} block is chosen. > The problem is that I want to expand it slightly and say anything except > web3.mydomain.com. I.E. > > something.mydomain.com matches, but web3.mydomain.com does not match. > > How is this possible? Make a different server{} block that matches web3.mydomain.com. f -- Francis Daly francis at daoine.org From list-reader at koshie.fr Sun Mar 3 19:19:16 2013 From: list-reader at koshie.fr (=?utf-8?Q?GASPARD_k=C3=A9vin?=) Date: Sun, 03 Mar 2013 20:19:16 +0100 Subject: Convert Apache rewrite to NGinx In-Reply-To: References: <20130303150109.GG32392@craic.sysops.org> Message-ID: Le Sun, 03 Mar 2013 19:59:17 +0100, Steve Holdoway a ?crit: > Why not just use the wp config examples in the docs? Both Wordpress and > nginx offer them. Well, I forget to check that... Sorry. Anyway, I've found this: > > > Steve > > On 4/03/2013, at 7:38 AM, GASPARD k?vin wrote: > >> Thanks for your reply. >> >>> On Sun, Mar 03, 2013 at 01:52:48PM +0100, GASPARD k?vin wrote: >>> >>> Hi there, >>> >>>> Using nginx 1.2.1 on Debian Wheezy 64 bits. 
>>>> >>>> My wordpress need rewrite, it gave me this: >>> >>>> # nginx configuration >>>> >>>> location / { >>>> if (!-e $request_filename){ >>>> rewrite ^(.*)$ /index.php break; >>>> } >>>> } >>> >>> See http://wiki.nginx.org/Pitfalls >>> >>> Particularly the "Front Controller Pattern based packages" section. >>> >>> Probably a single extra try_files line will work for you. >>> >>> f >> >> This is my new config file : >> >> server { >> listen 80; >> listen 443 ssl; >> # server_name 176.31.122.26; >> server_name doinalefort.fr www.doinalefort.fr; >> root /var/www/doinalefort.fr; >> >> msie_padding on; >> >> # ssl_certificate /etc/nginx/certs/auction-web.crt; >> # ssl_certificate_key /etc/nginx/certs/auction-web.key; >> >> ssl_session_timeout 5m; >> >> ssl_protocols SSLv2 SSLv3 TLSv1; >> ssl_ciphers HIGH:!aNULL:!MD5; >> ssl_prefer_server_ciphers on; >> >> error_log /var/log/nginx/error.log; >> access_log /var/log/nginx/access.log; >> >> index index.php; >> fastcgi_index index.php; >> >> client_max_body_size 8M; >> client_body_buffer_size 256K; >> >> location ~ \.php$ { >> include fastcgi_params; >> >> # Assuming php-fastcgi running on localhost port 9000 >> fastcgi_pass unix:/var/run/php5-fpm.sock; >> fastcgi_param SCRIPT_FILENAME >> $document_root$fastcgi_script_name; >> >> fastcgi_connect_timeout 60; >> fastcgi_send_timeout 180; >> fastcgi_read_timeout 180; >> fastcgi_buffer_size 128k; >> fastcgi_buffers 4 256k; >> fastcgi_busy_buffers_size 256k; >> fastcgi_temp_file_write_size 256k; >> fastcgi_intercept_errors on; >> >> try_files $uri $uri/ /index.php?q=$uri&$args; >> >> } >> } >> >> >> By checking the URL you gave me it seems to be the good choice for >> Wordpress (3.5.1). I've restarted nginx, then I've this kind of >> permalink in Wordpress now: http://doinalefort.fr/%year%/%postname%/ >> and with the hello world topic it looks like this : >> http://doinalefort.fr/2013/hello-world/ >> >> As you can see, I've a 404 error, in /var/log/nginx/error.log I've: >> 2013/03/03 19:34:31 [error] 27463#0: *252 >> "/var/www/doinalefort.fr/2013/hello-world/index.php" is not found (2: >> No such file or directory), client: 80.239.242.158, server: >> doinalefort.fr, request: "GET /2013/hello-world/ HTTP/1.1", host: >> "doinalefort.fr" >> >> And in access.log: >> >> 80.239.242.158 - - [03/Mar/2013:19:35:28 +0100] "GET /2013/hello-world/ >> HTTP/1.1" 404 142 "-" "Opera/9.80 (X11; Linux x86_64; Edition Next) >> Presto/2.12.388 Version/12.14" >> >> An idea? >> >> Also, I've googled a little more and I finally found this blog: >> http://themesforge.com/featured/high-performance-wordpress-part-3/ >> >> But honestly I don't have enough Nginx's knowledge to know if this guy >> have write something serious. If you can tell me if it's okay it'll >> maybe solve a lot of problem before they appear. >> >> Cordially, Koshie >> >> >> -- >> Sorry for my english, I'm trying the best in each e-mail writing. Tell >> me if I'm not clear enough. >> This mail account is only for list reading, to contact me send an >> e-mail at kevingaspard at koshie.fr >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- -- Sorry for my english, I'm trying the best in each e-mail writing. Tell me if I'm not clear enough. 
This mail account is only for list reading, to contact me send an e-mail at kevingaspard at koshie.fr From nginx-forum at nginx.us Sun Mar 3 19:21:10 2013 From: nginx-forum at nginx.us (justin) Date: Sun, 03 Mar 2013 14:21:10 -0500 Subject: Regular expression used in server_name directive In-Reply-To: <20130303191250.GI32392@craic.sysops.org> References: <20130303191250.GI32392@craic.sysops.org> Message-ID: I actually have a working regular expression which should work now: server_name ^(?!web3\.)(?<account>.+)\.mydomain\.com$; But for some odd reason when I restart nginx, I am getting: nginx: [emerg] unknown "account" variable This should work though, right? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236870,236876#msg-236876 From adrian at navarro.at Sun Mar 3 19:24:49 2013 From: adrian at navarro.at (=?utf-8?B?QWRyacOhbiBOYXZhcnJv?=) Date: Sun, 3 Mar 2013 19:24:49 +0000 Subject: Regular expression used in server_name directive Message-ID: <1651155140-1362338691-cardhu_decombobulator_blackberry.rim.net-283467209-@b15.c1.bise7.blackberry> Just remove the <account> part and it will act as a matching-only regex. ------Original Message------ From: justin Sender: nginx-bounces at nginx.org To: nginx at nginx.org ReplyTo: nginx at nginx.org Subject: Re: Regular expression used in server_name directive Sent: Mar 4, 2013 04:21 I actually have a working regular expression which should work now: server_name ^(?!web3\.)(?<account>.+)\.mydomain\.com$; But for some odd reason when I restart nginx, I am getting: nginx: [emerg] unknown "account" variable This should work though, right? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236870,236876#msg-236876 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx Sent from my BlackBerry From nginx-forum at nginx.us Sun Mar 3 19:32:53 2013 From: nginx-forum at nginx.us (justin) Date: Sun, 03 Mar 2013 14:32:53 -0500 Subject: Regular expression used in server_name directive In-Reply-To: <1651155140-1362338691-cardhu_decombobulator_blackberry.rim.net-283467209-@b15.c1.bise7.blackberry> References: <1651155140-1362338691-cardhu_decombobulator_blackberry.rim.net-283467209-@b15.c1.bise7.blackberry> Message-ID: <803741a5d533166dff311aef0a543405.NginxMailingListEnglish@forum.nginx.org> I need a variable set that I use in the rest of the server {} block. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236870,236878#msg-236878 From nginx at 2xlp.com Sun Mar 3 19:58:25 2013 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Sun, 3 Mar 2013 14:58:25 -0500 Subject: WildCard domains : how to treat IP Address and Specific Domains differently from Failover/Wildcard Domains ? In-Reply-To: <1AEBFD2F-025F-4BD5-860C-756C1D9BE1B3@sysoev.ru> References: <1AEBFD2F-025F-4BD5-860C-756C1D9BE1B3@sysoev.ru> Message-ID: <8E04A5EF-1C71-4DCA-96E6-720EBEEDEBF1@2xlp.com> > server { > listen 80; > listen IP:80; > server_name example.com; > # site A > } > > server { > listen 80 default_server; > # site B > } > > "listen 80/server_name example.com" route all requests to example.com to site A. > "listen IP:80" routes all requests to IP:80 to site A. > Anything else is routed to default server of 80 port, i.e. to site B. Thank you Igor. Unfortunately, that's not what I needed. I don't necessarily know the IP address(es) on these machines. This is part of an automated deployment.
Server A: Specific Domain Name any IPs Server B any domain names Francis- Thank you for this bit -- > server { > server_name example.com; > server_name ~^[0-9.]*$; > return 200 "site A\n"; > } i didn't think of a regex-based server name. that works perfectly. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun Mar 3 20:04:58 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 3 Mar 2013 20:04:58 +0000 Subject: Regular expression used in server_name directive In-Reply-To: References: <20130303191250.GI32392@craic.sysops.org> Message-ID: <20130303200458.GJ32392@craic.sysops.org> On Sun, Mar 03, 2013 at 02:21:10PM -0500, justin wrote: Hi there, > I actually have a working regular expression which should work now: > > server_name ^(?!web3\.)(?<account>.+)\.mydomain\.com$; http://nginx.org/r/server_name Pay particular attention to the line that begins "It is also possible to use regular expressions in server names". > But for some odd reason when I restart nginx, I am getting: > > nginx: [emerg] unknown "account" variable Did you get that message when you tested with your initial version too? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun Mar 3 20:16:09 2013 From: nginx-forum at nginx.us (justin) Date: Sun, 03 Mar 2013 15:16:09 -0500 Subject: Regular expression used in server_name directive In-Reply-To: <20130303200458.GJ32392@craic.sysops.org> References: <20130303200458.GJ32392@craic.sysops.org> Message-ID: No, simply doing server_name ~^(?<account>.+)\.mydomain\.com$; Works. This may be a bug with nginx? The new regular expression is valid and should work. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236870,236881#msg-236881 From francis at daoine.org Sun Mar 3 21:38:54 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 3 Mar 2013 21:38:54 +0000 Subject: Regular expression used in server_name directive In-Reply-To: References: <20130303200458.GJ32392@craic.sysops.org> Message-ID: <20130303213854.GK32392@craic.sysops.org> On Sun, Mar 03, 2013 at 03:16:09PM -0500, justin wrote: Hi there, > server_name ~^(?<account>.+)\.mydomain\.com$; > > Works. Correct. Now compare that line with the server_name lines in the previous mails. > This may be a bug with nginx? No. > The new regular expression is valid and should work. Yes; nginx is happy with the regular expression, provided that it knows that it is intended to be a regular expression. f -- Francis Daly francis at daoine.org From szh.subs at gmail.com Sun Mar 3 21:43:29 2013 From: szh.subs at gmail.com (Sergey Zhemzhitsky) Date: Mon, 4 Mar 2013 01:43:29 +0400 Subject: What's the best way to have some kind of state in a custom module? Message-ID: <554174998.20130304014329@gmail.com> Hello Nginx Gurus, I'm kind of a newbie in nginx plugin development, so trying to understand what's the best way to connect to a proprietary backend system (that provides C API) from within nginx plugin? As far as I understand the best way for such issues is to implement custom upstream proxy module but it's pretty difficult to understand the low level protocol of the backend system, so I need to use the API it provides. Could you suggest where to place the initialization code/connection code to the backend system and where to hold the connection (which must be created only once) to the system itself.
Regards, Sergey From nginx-forum at nginx.us Sun Mar 3 22:37:39 2013 From: nginx-forum at nginx.us (michael.heuberger) Date: Sun, 03 Mar 2013 17:37:39 -0500 Subject: Optimal nginx settings for websockets sending images In-Reply-To: <20130303120824.GE15378@mdounin.ru> References: <20130303120824.GE15378@mdounin.ru> Message-ID: <13f137448cf3aac9b9ce7f995459b266.NginxMailingListEnglish@forum.nginx.org> Great response Maxim, you're absolutely right here. Will do all that. No further questions :) Cheers Michael Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236601,236885#msg-236885 From nginx-forum at nginx.us Sun Mar 3 23:17:14 2013 From: nginx-forum at nginx.us (justin) Date: Sun, 03 Mar 2013 18:17:14 -0500 Subject: Regular expression used in server_name directive In-Reply-To: <20130303213854.GK32392@craic.sysops.org> References: <20130303213854.GK32392@craic.sysops.org> Message-ID: My bad, stupid mistake, forgot the ^. Working fine now. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236870,236887#msg-236887 From lists at ruby-forum.com Mon Mar 4 07:59:12 2013 From: lists at ruby-forum.com (iqbal h.) Date: Mon, 04 Mar 2013 08:59:12 +0100 Subject: Tom Clancys Ghost Recon Future Soldier free download pc game Message-ID: <24e92d47628c690c9ec98f5036f4f444@ruby-forum.com> Tom Clancys Ghost Recon Future Soldier free download pc game Total Size: 1.72 GB proudly presents Tom Clancy?s Ghost Recon: Future Soldier Khyber Strike DLC 03-03-2013??Release Date Protection???..Ubisoft DRM FPS/TPS????Game Type Disk(s)???????.DLC here http://www.freedownloadfullversioncrack.com/tom-clancys-ghost-recon-future-soldier-free-download-pc-game/ -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Mon Mar 4 09:05:57 2013 From: nginx-forum at nginx.us (gmor) Date: Mon, 04 Mar 2013 04:05:57 -0500 Subject: Exchange / Outlook - RPC Method and Error 405 In-Reply-To: <20130228152003.GJ81985@mdounin.ru> References: <20130228152003.GJ81985@mdounin.ru> Message-ID: <95767d411dbb22e16b4aa57668f8d49a.NginxMailingListEnglish@forum.nginx.org> Hi, Thanks for the explanation. It absolutely makes sense now that you've clarified how Nginx works. Following on from this, I've now chosen to use HAProxy for this particular Exchange / Outlook Anywhere use-case. This supports the 'streaming' model of Outlook Anywhere, by simply passing the data to the backend, without waiting for the complete body (with the 'option http-no-delay' directive - http://cbonte.github.com/haproxy-dconv/configuration-1.5.html#4-option%20http-no-delay). I hope to be able to find a case for using Nginx in the future as it still looks like a great product. Thanks again for the support, Graham Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236709,236892#msg-236892 From jdmls at yahoo.com Mon Mar 4 10:25:58 2013 From: jdmls at yahoo.com (John Doe) Date: Mon, 4 Mar 2013 02:25:58 -0800 (PST) Subject: Timing the backend response... Message-ID: <1362392758.9414.YahooMailNeo@web121602.mail.ne1.yahoo.com> Hi, in a reverse-proxy setup, what are the options to time the backend response...? There is no timing info in server-status. Is the only way to get them to continually parse the logs for the $request_time and/or $upstream_response_time (and optionally the $body_bytes_sent to calculate the size/time ratio)? 
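For reference, the log-based approach described above usually means defining a dedicated log format and pointing an extra access_log at it - a sketch only, the format name and log path below are invented:

    # in http{}:
    log_format upstream_timing '$remote_addr "$request" $status '
                               'rt=$request_time urt=$upstream_response_time '
                               'bytes=$body_bytes_sent';

    # in the server{} doing the proxying:
    access_log /var/log/nginx/timing.log upstream_timing;

Each entry then carries the total request time, the time spent waiting on the backend, and the body size, so the size/time ratio can be computed offline.
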
Thx, JD From nginx-forum at nginx.us Mon Mar 4 10:44:12 2013 From: nginx-forum at nginx.us (Kaoz) Date: Mon, 04 Mar 2013 05:44:12 -0500 Subject: [HELP] Can't reach my server Message-ID: <24b06d39961aeb9c5b908d9f1e6e136a.NginxMailingListEnglish@forum.nginx.org> Hi, I followed this: https://www.digitalocean.com/community/articles/how-to-install-linux-nginx-mysql-php-lemp-stack-on-centos-6 // Starting nginx: [ OK ] // ps -ef | grep nginx: root 1739 1714 0 09:40 pts/0 00:00:00 vi /etc/yum.repos/nginx.repo nginx 24053 24052 0 11:28 ? 00:00:00 php-fpm: pool www nginx 24054 24052 0 11:28 ? 00:00:00 php-fpm: pool www nginx 24055 24052 0 11:28 ? 00:00:00 php-fpm: pool www nginx 24056 24052 0 11:28 ? 00:00:00 php-fpm: pool www nginx 24057 24052 0 11:28 ? 00:00:00 php-fpm: pool www root 24093 1 0 11:34 ? 00:00:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf nginx 24095 24093 0 11:34 ? 00:00:00 nginx: worker process nginx 24096 24093 0 11:34 ? 00:00:00 nginx: worker process nginx 24097 24093 0 11:34 ? 00:00:00 nginx: worker process nginx 24098 24093 0 11:34 ? 00:00:00 nginx: worker process root 24105 23964 0 11:36 pts/2 00:00:00 grep nginx // I can ping the server but I can't reach the server through the IP. If someone can help me, it would be great ! :) www.conf: http://pastebin.com/7HmPUwS2 default.conf: http://pastebin.com/7Gpjf3rc nginx.conf: http://pastebin.com/Hq6be91p Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236896,236896#msg-236896 From black.fledermaus at arcor.de Mon Mar 4 11:35:54 2013 From: black.fledermaus at arcor.de (basti) Date: Mon, 04 Mar 2013 12:35:54 +0100 Subject: [HELP] Can't reach my server In-Reply-To: <24b06d39961aeb9c5b908d9f1e6e136a.NginxMailingListEnglish@forum.nginx.org> References: <24b06d39961aeb9c5b908d9f1e6e136a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5134871A.4080808@arcor.de> Have you try localhost on the server (use lynx or somethink else)? I think the default.conf is it: server { listen 80; server_name localhost; ... will be only used if you do something likehttp://localhost So change it to your domain. Basti Am 04.03.2013 11:44, schrieb Kaoz: > Hi, > > I followed this: > https://www.digitalocean.com/community/articles/how-to-install-linux-nginx-mysql-php-lemp-stack-on-centos-6 > > // > > Starting nginx: [ OK ] > > // > ps -ef | grep nginx: > > root 1739 1714 0 09:40 pts/0 00:00:00 vi > /etc/yum.repos/nginx.repo > nginx 24053 24052 0 11:28 ? 00:00:00 php-fpm: pool www > nginx 24054 24052 0 11:28 ? 00:00:00 php-fpm: pool www > nginx 24055 24052 0 11:28 ? 00:00:00 php-fpm: pool www > nginx 24056 24052 0 11:28 ? 00:00:00 php-fpm: pool www > nginx 24057 24052 0 11:28 ? 00:00:00 php-fpm: pool www > root 24093 1 0 11:34 ? 00:00:00 nginx: master process > /usr/sbin/nginx -c /etc/nginx/nginx.conf > nginx 24095 24093 0 11:34 ? 00:00:00 nginx: worker process > > nginx 24096 24093 0 11:34 ? 00:00:00 nginx: worker process > > nginx 24097 24093 0 11:34 ? 00:00:00 nginx: worker process > > nginx 24098 24093 0 11:34 ? 00:00:00 nginx: worker process > > root 24105 23964 0 11:36 pts/2 00:00:00 grep nginx > > // > > I can ping the server but I can't reach the server through the IP. > If someone can help me, it would be great ! 
:) > > www.conf: http://pastebin.com/7HmPUwS2 > default.conf: http://pastebin.com/7Gpjf3rc > nginx.conf: http://pastebin.com/Hq6be91p > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236896,236896#msg-236896 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Mar 4 11:42:44 2013 From: nginx-forum at nginx.us (plop) Date: Mon, 04 Mar 2013 06:42:44 -0500 Subject: Nginx 0.8.4 Trying to bind on port 80 ? In-Reply-To: <8c571d80b822ee23ed3658bce1f1ab04.NginxMailingList@forum.nginx.org> References: <8c571d80b822ee23ed3658bce1f1ab04.NginxMailingList@forum.nginx.org> Message-ID: <3a70fdd546f6b4db58c1872008250128.NginxMailingListEnglish@forum.nginx.org> An other process is using the same address, chances are it's apache, in that case : sudo /etc/init.d/apache2 stop may correct the problem. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,3498,236899#msg-236899 From mdounin at mdounin.ru Mon Mar 4 11:44:37 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 4 Mar 2013 15:44:37 +0400 Subject: NginX and Magento very strange problem (?!) In-Reply-To: <9960140233989dfa1a9e438367cac9b0.NginxMailingListEnglish@forum.nginx.org> References: <9960140233989dfa1a9e438367cac9b0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130304114437.GJ15378@mdounin.ru> Hello! On Sun, Mar 03, 2013 at 10:30:17AM -0500, omolinete wrote: > Hello All, > > This is my first post on the forum, so thank you very much in advance for > your attention on my topic. Hope anyone could help me :) > > The problem we are getting with NginX and Magento, is relating to the > download process of files such as PDF, which are in this case invoices. > > Magento seems to work flawlessly under NginX, except when we try to view > from the server any PDF file through the Magento backend, or when we try to > save the same file on our local disk. > > The tricky point is that we don't get any problem when we try to download a > previously uploaded PDF file to the server using the "scp" command, and > accessing to this file using the direct URL. > > The problem appears when we try to download the PDF file from the Magento's > backend; NginX contact our browser but does not start the transfer/download > after you perform a restart of the NginX's service. > > I have recorded a little video of 3:27 minutes from my desktop, where you > will see by yourself what is the issue we have with NginX. Here is the URL: > http://www.misspepa.com/capturedmovie.ogv . Please, note that there's no > audio for the video. > > We are really convinced that the problem is at the NginX side and not on > Magento side, because as you will see on the video, when we restart the > NginX service the transfer which still says "Starting" (when it is really > frozen) is rapidly accomplished with success. > > If I do not restart the NginX service, the download transfer process of such > files will become stalled for a indefinitely period of time with no > success... > > ... And last but not last, the logs of NginX didn't report any error. So, > for that reason I decided to post my issue to the forum, in order to get > some help on this subject. >From the symptoms described/shown I would suppose Content-Length is reported incorrectly by a backend, which makes browser to assume there will be more data and to wait for it. Restart of the nginx forces close of the connection and convinces browser there is nothing to wait for. 
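One cheap way to check that from the nginx side is to log the length the backend advertises next to what nginx actually sent for those PDF requests - again only a sketch, the format name and log path are invented:

    log_format length_check '"$request" $status '
                            'upstream_len=$upstream_http_content_length '
                            'sent=$body_bytes_sent';
    access_log /var/log/nginx/length_check.log length_check;

If upstream_len is consistently larger than sent for the stalled downloads, the backend is advertising more data than it actually delivers.
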
It's just a guess though, you should use debug log and/or tcpdump to analize what actually goes on. See here for some debugging hints: http://wiki.nginx.org/Debugging -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Mar 4 11:53:05 2013 From: nginx-forum at nginx.us (Daniel15) Date: Mon, 04 Mar 2013 06:53:05 -0500 Subject: FastCGI cache has stopped working In-Reply-To: <20130303130535.GH15378@mdounin.ru> References: <20130303130535.GH15378@mdounin.ru> Message-ID: <140425c23c8d335e30629b41b082f573.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Nice catch. It's not a bug in nginx though, as HTTP only allows > fixed-width subset of RFC1123, see here: > > http://tools.ietf.org/html/rfc2616#section-3.3.1 > Thanks Maxim! Easy fix. Bug and patch submitted to Mono to fix the date format used in the headers: https://bugzilla.xamarin.com/show_bug.cgi?id=10872 Gotta love open source software :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236852,236901#msg-236901 From mdounin at mdounin.ru Mon Mar 4 12:28:05 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 4 Mar 2013 16:28:05 +0400 Subject: Gzip not compressing response body with status code other than 200, 403, 404 In-Reply-To: <20130303160750.64026b17@neptune.home> References: <877016e0578575554305828cedbaa7a7.NginxMailingListEnglish@forum.nginx.org> <20120901005306.GE40452@mdounin.ru> <20130303160750.64026b17@neptune.home> Message-ID: <20130304122805.GL15378@mdounin.ru> Hello! On Sun, Mar 03, 2013 at 04:07:50PM +0100, Bruno Pr?mont wrote: > Hello Maxim, > > On Sat, 01 September 2012 Maxim Dounin wrote: > > On Sat, Aug 25, 2012 at 02:32:02AM -0400, soniclee wrote: > > > > > I met same problem while try to gzip response with other status code. Any > > > update for this issue? > > > > In some cases it's just not possible to compress > > response, e.g. you can't compress 206 as it doesn't contain full > > entity body. Or you can't compress 304 as it has no entity at > > all. > > > > In many cases it's unwise/unsafe to compress responses with some > > status codes. I.e. you probably don't want to compress 400 (Bad > > Request) response even if client claimed gzip support before it > > did some syntax error in the request - as you already know client > > did something wrong, and it's better to return easy to understand > > response. Or e.g. if you try to return 500 (Internal Server > > Error) due to memory allocation error - you probably don't want to > > allocate memory for gzip. > > > > Due to the above reasons gzip compression is done only for certain > > common and safe response codes. And from the above explanation it > > should be more or less obvious that it just can't be enabled for > > all status codes. > > > > If you think gzip compression should be enabled for some specific > > status code - please provide your suggestions with reasoning. > > One status code that should allow compression is 410 "Gone", it > is mostly equivalent to 404 just stronger. (patch attached) Any specific reason to? 410 is rarely used at all, and I don't think we should treat it as a common response code which is safe to compress. > Probably most 5xx error pages generated by an upstream should > also be compressed if the request as seen by nginx is valid. > (e.g. 
a 503 or 500 error returned by an upstream, especially when they > are rather large.) With errors, especially 5xx, you never know what actually went wrong, and what will happen due to the compression applied. It's always safer to respond with what we get from an upstream, with minimal modifications. -- Maxim Dounin http://nginx.org/en/donation.html From sb at waeme.net Mon Mar 4 13:33:42 2013 From: sb at waeme.net (Sergey Budnevitch) Date: Mon, 4 Mar 2013 17:33:42 +0400 Subject: nginx mailing-list and sender filtering (vs BATV) In-Reply-To: <20130303042838.GA41259@redoubt.spodhuis.org> References: <20130301082251.GA97216.take2@redoubt.spodhuis.org> <20130301131216.GE94127@mdounin.ru> <20130301205959.GB15343@redoubt.spodhuis.org> <20130302221125.GB15378@mdounin.ru> <20130303042838.GA41259@redoubt.spodhuis.org> Message-ID: <6591A0EC-214A-40F8-AD77-4841B9A51009@waeme.net> On 3 Mar 2013, at 08:28, Phil Pennock wrote: > > But yes, I understand the issue and do use RCPT-time checks myself, > after normalisation of the sender address, to work around the fact that > I'm doing something a little dodgy that might break legitimate mail. Although I have added sender address normalization, and the new config allows checking BATV-mangled addresses against the address database, it actually breaks what BATV is used for, because a bounce, if any, will be sent to the normalized address. It looks like a catch-22 situation. From mdounin at mdounin.ru Mon Mar 4 14:08:39 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 4 Mar 2013 18:08:39 +0400 Subject: nginx mailing-list and sender filtering (vs BATV) In-Reply-To: <6591A0EC-214A-40F8-AD77-4841B9A51009@waeme.net> References: <20130301082251.GA97216.take2@redoubt.spodhuis.org> <20130301131216.GE94127@mdounin.ru> <20130301205959.GB15343@redoubt.spodhuis.org> <20130302221125.GB15378@mdounin.ru> <20130303042838.GA41259@redoubt.spodhuis.org> <6591A0EC-214A-40F8-AD77-4841B9A51009@waeme.net> Message-ID: <20130304140839.GP15378@mdounin.ru> Hello! On Mon, Mar 04, 2013 at 05:33:42PM +0400, Sergey Budnevitch wrote: > > On 3 Mar 2013, at 08:28, Phil Pennock wrote: > > > > > But yes, I understand the issue and do use RCPT-time checks myself, > > after normalisation of the sender address, to work around the fact that > > I'm doing something a little dodgy that might break legitimate mail. > > Although I have added sender address normalization, and the new config > allows checking BATV-mangled addresses against the address database, > it actually breaks what BATV is used for, because a bounce, if any, will be > sent to the normalized address. It looks like a catch-22 situation. Addresses should be normalized only for lookup in the database, and only _in_addition_ to the normal lookup (to make sure we don't reject a correctly subscribed address which looks like a BATV one and normalizes to something wrong). -- Maxim Dounin http://nginx.org/en/donation.html From anoopalias01 at gmail.com Mon Mar 4 14:47:15 2013 From: anoopalias01 at gmail.com (Anoop Alias) Date: Mon, 4 Mar 2013 20:17:15 +0530 Subject: fastcgi_buffers question Message-ID: Hi, What is the best configuration for fastcgi_buffers: fastcgi_buffers 512 4k; or fastcgi_buffers 4 512k; Both allocate 2 MB of buffer space, but which is better? getconf PAGESIZE on my system returns 4k. Thank you, -- *Anoop P Alias* GNUSYS -------------- next part -------------- An HTML attachment was scrubbed... URL:
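For the fastcgi_buffers question above, a rough sketch of the page-aligned variant, offered only as a starting point; the PHP-FPM socket path is a placeholder, and the right numbers depend on typical response sizes and available memory rather than on any hard rule:

    location ~ \.php$ {
        include             fastcgi_params;
        fastcgi_pass        unix:/var/run/php-fpm.sock;   # placeholder backend address
        fastcgi_buffer_size 4k;       # first buffer, holds the response headers
        fastcgi_buffers     512 4k;   # 2 MB ceiling; buffers are only allocated as the response needs them
    }

Many small page-sized buffers track getconf PAGESIZE and avoid allocating large chunks for small responses; a few big buffers only pay off when most responses are themselves large.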
From meteor8488 at 163.com Mon Mar 4 15:30:17 2013 From: meteor8488 at 163.com (Meteor) Date: Mon, 4 Mar 2013 23:30:17 +0800 (CST) Subject: get alert : sem_init() failed (78: Function not implemented Message-ID: <6f30123e.14caf.13d360653c0.Coremail.meteor8488@163.com> Hi All, I just upgraded FreeBSD from 8.1 to 9.1. And after that, every time I start Nginx, I get the error messages below: 2013/03/04 09:21:27 [alert] 43757#0: sem_init() failed (78: Function not implemented) 2013/03/04 09:21:27 [alert] 43757#0: sem_init() failed (78: Function not implemented) These are the only error messages I got; does anyone know how I can fix this problem? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Mon Mar 4 15:39:31 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 4 Mar 2013 19:39:31 +0400 Subject: get alert : sem_init() failed (78: Function not implemented In-Reply-To: <6f30123e.14caf.13d360653c0.Coremail.meteor8488@163.com> References: <6f30123e.14caf.13d360653c0.Coremail.meteor8488@163.com> Message-ID: On Mar 4, 2013, at 19:30 , Meteor wrote: > Hi All, > > I just upgraded FreeBSD from 8.1 to 9.1. And after that, every time I start Nginx, I get the error messages below: > > 2013/03/04 09:21:27 [alert] 43757#0: sem_init() failed (78: Function not implemented) > 2013/03/04 09:21:27 [alert] 43757#0: sem_init() failed (78: Function not implemented) > > These are the only error messages I got; does anyone know how I can fix this problem? Did you upgrade FreeBSD without rebuilding nginx? You need to rebuild nginx. -- Igor Sysoev http://nginx.com/support.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil.mckee.ca at gmail.com Mon Mar 4 16:42:04 2013 From: neil.mckee.ca at gmail.com (Neil McKee) Date: Mon, 4 Mar 2013 08:42:04 -0800 Subject: nginx stats In-Reply-To: References: <366D9E98-20CC-4891-A76C-4541E0FF1679@gmail.com> Message-ID: <266C5E1D-39D1-4499-A2CA-812760020646@gmail.com> Still here. I'm not quite sure what you mean, though. Does the module not compile cleanly for you, or is it missing fields that are needed for reverse-proxy analysis? Neil On Mar 3, 2013, at 9:54 PM, Hailin Zeng wrote: > Hi, I am looking for the same kind of plugin module. I want to log when a reverse proxy response does not return 200. I checked the nginx-sflow-module page, but it seems that it has stopped being updated. > > On Thursday, March 29, 2012 8:12:48 PM UTC+8, Neil Mckee wrote: >> >> One option is nginx-sflow-module. It reports counters (and transactions) using a standard binary format over UDP - so you can have a whole web-farm reporting to a single port on another server. No log-file-tailing required. If you send the sFlow feed to Ganglia, for example, it will pull out the counters and trend them. If your transaction rate is high you can reduce the load by configuring random sampling on the transaction logging, but the counters will still be 100% accurate. >> http://code.google.com/p/nginx-sflow-module/ >> >> It works in conjunction with hsflowd, so you also get CPU, mem, I/O stats for each server: >> http://host-sflow.sourceforge.net >> >> If anything is missing that should be counted/sampled for a reverse-proxy scenario, please let me know. The sFlow-HTTP spec is here: >> http://sflow.org/sflow_http.txt >> >> Neil >> >> >> On Mar 29, 2012, at 4:52 AM, Samit Pal wrote: >> >> > Folks, >> > >> > I'm interested in nginx stats like per return status numbers, bytes >> > served etc. I run nginx in a reverse proxy mode.
I know parsing logs >> > will give that, but I wanted to know if there is any module which gives >> > these stats. I looked in the stub module, I think these stats are not >> > there. >> > >> > Thanks >> > samit >> > >> > _______________________________________________ >> > nginx mailing list >> > ng... at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> ng... at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander_koch_log at lavabit.com Mon Mar 4 18:38:02 2013 From: alexander_koch_log at lavabit.com (alexander_koch_log) Date: Mon, 04 Mar 2013 19:38:02 +0100 Subject: USE_THREAD nginx? Message-ID: <5134EA0A.5010904@lavabit.com> Hi, I see under auto/options that the macro USE_THREAD is set to no while the corresponding configure options are disabled. Out of curiosity, is it planned to bundle threading with the reactor loop in the future? Or why is threading set as a "potential" option here? Thanks, Alex From trm.nagios at gmail.com Mon Mar 4 19:50:31 2013 From: trm.nagios at gmail.com (trm asn) Date: Tue, 5 Mar 2013 01:20:31 +0530 Subject: Websocket on port 80 Message-ID: Hi List, I am using Nginx-1.3.13 for websocket support. I am proxying socket.io from Nginx. Below is my nginx.conf for websockets:

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    map $http_upgrade $conn_header {
        default upgrade;
        ''      '';
    }

    server {
        listen 80;
        server_name _;
        access_log /var/log/nginx/access.log mylog;
        error_log /var/log/nginx/error.log;
        root /var/www/nginx;

        location /nodeapp {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
            proxy_pass http://10.164.110.11:8888;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $conn_header;
            proxy_read_timeout 120s;
            proxy_set_header Host $host;
        }
    }
}

my-node logs ...

debug: got heartbeat packet
debug: cleared heartbeat timeout for client tZhwv5ng-YYkTREOHsh4
debug: set heartbeat interval for client tZhwv5ng-YYkTREOHsh4
info: stats: "stats key"
info: stats: "Sent gauge sessions.count with value 9"
info: stats: "stats key"
info: stats: "Sent gauge users.registered with value 3"
info: stats: "stats key"
info: stats: "Sent gauge sessions.unique with value 3"
info: transport end (socket end)

But if I configure Nginx in SSL mode, then it does upgrade to websocket (101):

debug: client authorized
info: handshake authorized jthZZKLA1fR1eaHTHsih
debug: setting request GET /dkitserver/ socket.io/1/websocket/jthZZKLA1fR1eaHTHsih
debug: set heartbeat interval for client jthZZKLA1fR1eaHTHsih
debug: client authorized for
debug: websocket writing 1::
info: : "Session started for accessId: HbIzkBis5MYB9I7X"
debug: websocket writing 5:::{"name":"session-marked-as-alive"}

--Thanks, Tarak -------------- next part -------------- An HTML attachment was scrubbed... URL: From agus.262 at gmail.com Mon Mar 4 20:23:22 2013 From: agus.262 at gmail.com (Agus) Date: Mon, 4 Mar 2013 17:23:22 -0300 Subject: Manage Configuration with external script question... Message-ID: Hi guys, I set up nginx as a reverse proxy and did some scripts so hosts could be added automatically just by entering the name and IP. Pretty basic. Now I am interested in creating a script to manage the whole configuration files of nginx so it can be customized from a basic UI.
The first thing that came to mind is to use XML, but before I reinvent the wheel I thought I'd send an email and perhaps get some ideas. My plan was to use XML and create the XML file from the fresh-install conf files that I built. I would add an "enable" tag, for instance, in each of the elements that can be customized, and then re-create the nginx configuration file according to this. Any hints are really appreciated. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From agus.262 at gmail.com Mon Mar 4 20:46:02 2013 From: agus.262 at gmail.com (Agus) Date: Mon, 4 Mar 2013 17:46:02 -0300 Subject: nginx stats In-Reply-To: References: Message-ID: Is this module out there now, Andrew? Cheers! 2012/3/29 Andrew Alexeev > On Mar 29, 2012, at 3:52 PM, Samit Pal wrote: > > > Folks, > > > > I'm interested in nginx stats like per return status numbers, bytes > > served etc. I run nginx in a reverse proxy mode. I know parsing logs > > will give that, but I wanted to know if there is any module which gives > > these stats. I looked in the stub module, I think these stats are not > > there. > > Yes, the stub one is quite limited. We've been working on a much better > version, which is going to appear soon. > > > Thanks > > samit > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Mon Mar 4 23:58:07 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 4 Mar 2013 23:58:07 +0000 Subject: Wanted: suggestions on how to invert proxy_pass return codes Message-ID: Hi all - I have an odd requirement which I'm not quite able to solve - is there anyone out there with a solution? For reasons I won't go into, but which are unfortunately immovable and quite real, I need to come up with a location stanza that exhibits the following behaviour:

--------------------------------------------------------------------------
location /healthcheck/ {
    if (!-f /tmp/flag) {
        return 503;
    }
    proxy_intercept_errors on;
    proxy_pass http://backend.server.fqdn.local/status;
    error_page 200 = @backend_up;
    error_page 404 500 501 502 503 504 = @backend_down;
}
location @backend_up {
    return 503;
}
location @backend_down {
    return 200;
}
--------------------------------------------------------------------------

Yes - you read that correctly :-( Before making the proxy_pass call, check a marker on disk and serve a (real, not translated to =200) 503 if it exists. If I get a 200 from the backend status page, actually serve a 503. If I get a 404/5xx from the backend status page, serve a 200. I'm pretty sure the last time I tried something horrible like this, error_page complained that it could only override response codes >400 - hence this email. Can anyone see a way of achieving the behaviour I need? The behaviour described is *all* that's important - no other part of the config, above, is fixed. I can't introduce any more external services unless there's absolutely no choice, however. Thanks for having a think about this. I'll tell you what it's for if you're able to help - I'm pretty sure you'll be both surprised and a little bit proud that your idea got into this project, if you do ...
:-) TIA all, Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From contact at jpluscplusm.com Tue Mar 5 00:17:03 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 5 Mar 2013 00:17:03 +0000 Subject: Wanted: suggestions on how to invert proxy_pass return codes In-Reply-To: References: Message-ID: A slight thinko crept into my original mail; there's a small difference (which does remove a minor complexity) as I've marked below ... On 4 March 2013 23:58, Jonathan Matthews wrote:
[snip]
> --------------------------------------------------------------------------
> location /healthcheck/ {
>     if (!-f /tmp/flag) {
>         return 503;
>     }

SHOULD BE: " if (-f /tmp/flag) { return 200; } "

[snip]
> --------------------------------------------------------------------------
>
> Before making the proxy_pass call, check a marker on disk and serve a
> (real, not translated to =200) 503 if it exists.

SHOULD BE: "... serve a 200 if it exists."

I don't think this changes the meat of the problem - it just removes one minor niggle. Any thoughts? -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx at 2xlp.com Tue Mar 5 00:56:38 2013 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Mon, 4 Mar 2013 19:56:38 -0500 Subject: Any suggestions for setting up Global/Wildcard Errors ? Message-ID: <100A8F47-AD39-4D12-A6B4-0E50FBB358A2@2xlp.com> I'm surprised this hasn't come up before (I've looked on this archive + Stack Overflow). There doesn't seem to be a way to catch all errors in nginx; they need to all be specified. I'm using nginx with proxy_intercept_errors, so there can be many different codes. I've built out all the codes, but I just wanted to bring this up to the community, in case anyone has a better suggestion (or wants to build in the functionality). I was hoping to use try_files to display custom error pages, then fail over to a 'standard' HTML page if the custom one doesn't work. This idea is possible, but requires every code to be set:

error_page 401 /error/401.html;
error_page 402 /error/402.html;
error_page 403 /error/403.html;
error_page 404 /error/404.html;

location /error {
    internal;
    root /var/www/MySite.com/static;
    try_files $uri /error/standard.html;
}

Another issue is that conditional types of 404s are often useful. A JavaScript or CSS request might best be handled by serving the error as a blank file, or as something with some sort of semantic notion within it.

location /js { error_page 404 /error/404.js; }
location /css { error_page 404 /error/404.css; }

I'm not sure how others handle situations like this, but I am interested. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Mar 5 07:09:55 2013 From: nginx-forum at nginx.us (Ensiferous) Date: Tue, 05 Mar 2013 02:09:55 -0500 Subject: USE_THREAD nginx? In-Reply-To: <5134EA0A.5010904@lavabit.com> References: <5134EA0A.5010904@lavabit.com> Message-ID: <7f1149b023bf906217f1f78a9ba9f277.NginxMailingListEnglish@forum.nginx.org> It's old code left in.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236921,236933#msg-236933 From pluknet at nginx.com Tue Mar 5 07:49:12 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 5 Mar 2013 11:49:12 +0400 Subject: get alert : sem_init() failed (78: Function not implemented In-Reply-To: <6f30123e.14caf.13d360653c0.Coremail.meteor8488@163.com> References: <6f30123e.14caf.13d360653c0.Coremail.meteor8488@163.com> Message-ID: <83194E7A-82E0-493D-88D1-490BC3B29BF7@nginx.com> On Mar 4, 2013, at 7:30 PM, Meteor wrote: > Hi All, > > I just upgraded FreeBSD from 8.1 to 9.1. And after that, every time I start Nginx, I get the error messages below: > > 2013/03/04 09:21:27 [alert] 43757#0: sem_init() failed (78: Function not implemented) > 2013/03/04 09:21:27 [alert] 43757#0: sem_init() failed (78: Function not implemented) > > These are the only error messages I got; does anyone know how I can fix this problem? You are likely running an old binary built against FreeBSD 8.x. The latter uses an in-kernel implementation of semaphores. FreeBSD 9.x switched to a new semaphore implementation based on umtx; there the kernel module is only used to support old pre-9.x binaries. That's why you see the "Function not implemented" error message: in 9.x, sem isn't present in the default GENERIC kernel. You could try to kldload the sem.ko module, or better, upgrade your system canonically, which means rebuilding nginx, as said in another mail. See http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/makeworld.html for the canonical way to update your system. -- Sergey Kandaurov pluknet at nginx.com From nginx-forum at nginx.us Tue Mar 5 08:22:07 2013 From: nginx-forum at nginx.us (Demontager) Date: Tue, 05 Mar 2013 03:22:07 -0500 Subject: Getting HTTP 500 Error Message-ID: I'm trying to set up Nginx + PHP-FPM for the first time (PHP 5.2) and can't get it working. The base system is FreeBSD 9.1. So my main configs: nginx.conf - http://pastebin.com/KwhHr0NK php-fpm.conf - http://pastebin.com/KQBwp3KL vhost config - http://pastebin.com/qyFzc9Xa Nginx starts and works, as I can see the output from phpinfo(), but if I put something complex there, e.g. a CMS like Drupal, WP or Joomla, it generates an HTTP 500 error. Also, I can't see any PHP errors in the logs even though they are defined in php-fpm.conf. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236937,236937#msg-236937 From nginx-forum at nginx.us Tue Mar 5 10:48:03 2013 From: nginx-forum at nginx.us (arty777) Date: Tue, 05 Mar 2013 05:48:03 -0500 Subject: No such file or directory errors, nginx hang after a few hours In-Reply-To: <5c8e9eeee57d57fd841a362ba5548df9.NginxMailingListEnglish@forum.nginx.org> References: <5c8e9eeee57d57fd841a362ba5548df9.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, after I clean the cache I see critical warnings in my log; is this normal?
2013/03/05 12:41:42 [crit] 40771#0: unlink() "/nginx/temp/proxy/3/9b/79fac0c8634496bb89b7d9d9dd3349b3" failed (2: No such file or directory)
2013/03/05 12:41:42 [crit] 40771#0: unlink() "/nginx/temp/proxy/f/0c/f04ebf30ad0cea667ee5938fe09660cf" failed (2: No such file or directory)
2013/03/05 12:41:42 [crit] 40771#0: unlink() "/nginx/temp/proxy/a/0e/2dd596f873e4901c7856634eff8470ea" failed (2: No such file or directory)
2013/03/05 12:41:42 [crit] 40771#0: unlink() "/nginx/temp/proxy/1/6b/16e2a000c02197d069b9d50170ec56b1" failed (2: No such file or directory)
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,28335,236946#msg-236946 From nginx-forum at nginx.us Tue Mar 5 10:50:21 2013 From: nginx-forum at nginx.us (arty777) Date: Tue, 05 Mar 2013 05:50:21 -0500 Subject: No such file or directory errors, nginx hang after a few hours In-Reply-To: References: <5c8e9eeee57d57fd841a362ba5548df9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0ef3c27bf6f6a4ba6ea262929ac3543f.NginxMailingListEnglish@forum.nginx.org> I clear only the files, I don't remove the directories. My script, for example:

#!/bin/bash
CACHE_DIR='/nginx/temp/proxy'
for i in `find ${CACHE_DIR} -type f`; do rm $i ; done

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,28335,236947#msg-236947 From nginx-forum at nginx.us Tue Mar 5 11:24:59 2013 From: nginx-forum at nginx.us (double) Date: Tue, 05 Mar 2013 06:24:59 -0500 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: <20130228181246.GT81985@mdounin.ru> References: <20130228181246.GT81985@mdounin.ru> Message-ID: <539bb9ad2dd1837322e6f415629bf30a.NginxMailingListEnglish@forum.nginx.org> Are there any plans to integrate this feature into NGINX? It would be great. > Maxim Dounin: > As a non-default mode of operation the approach taken is likely > good enough (not looked into details), but the patch won't work > with current nginx versions - at least it needs (likely major) > adjustments to cope with changes introduced during work on chunked > request body support as available in nginx 1.3.9+. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234926,236948#msg-236948 From gokoproject at gmail.com Tue Mar 5 11:58:09 2013 From: gokoproject at gmail.com (John Wong) Date: Tue, 5 Mar 2013 06:58:09 -0500 Subject: Rewrite inside location block before proxy to local port Message-ID: Hi, I asked a somewhat related question here: http://serverfault.com/questions/484573/nginx-proxy-to-a-new-vm-without-affect-old-site-enabled-rules The problem arises with one Java program. This Java program now expects /beta/scm, because it says "/scm not found" in its pretty Java error page, not an nginx 404. So this is what I have:

server {
    rewrite ^/beta(.*)$ $1 last;   # strip out the beta
    location /scm {
        proxy_pass http://localhost:8080;
        proxy_redirect default;
        #proxy_set_header Host $http_host;
        #proxy_set_header X-Real-IP $remote_addr;
        #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

So I am guessing that, internally, when we proxy to 8080, the web app actually receives /scm. That makes sense. Can we rewrite the URL back to /beta/scm inside the location? I tried this: rewrite ^/scm(.*)$ /beta/scm last; It is not working at all.
Nginx just throws a 404.

access log:
10.10.0.57 - - [05/Mar/2013:06:55:29 -0500] "GET /beta/scm/ HTTP/1.0" 404 570 "-" "Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.97 Safari/537.22"

error log:
2013/03/05 06:55:29 [error] 14867#0: *1 "/usr/share/nginx/html/beta/scm/index.html" is not found (2: No such file or directory), client: 10.10.0.57, server: localhost, request: "GET /beta/scm/ HTTP/1.0", host: "134.74.77.21"

Why are we looking at /usr/share..../index.html/ ? Any idea how to resolve this issue? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL:
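For the /beta prefix question above, one alternative worth trying is to let proxy_pass do the prefix mapping itself instead of a server-level rewrite. This is only a sketch built on the port and paths from the post; it assumes the server-level rewrite that strips /beta is removed or adjusted for this path, and the URI part should be changed to whatever prefix the Java application actually expects:

    location /beta/scm/ {
        # With a URI part in proxy_pass, the prefix matched by the location
        # ("/beta/scm/") is replaced by the URI given here before the request
        # is passed upstream, so no separate rewrite is needed. Here it is
        # passed through unchanged, which seems to be what the app wants.
        proxy_pass http://localhost:8080/beta/scm/;
        proxy_set_header Host            $http_host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
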
From mdounin at mdounin.ru Tue Mar 5 13:00:56 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Mar 2013 17:00:56 +0400 Subject: No such file or directory errors, nginx hang after a few hours In-Reply-To: References: <5c8e9eeee57d57fd841a362ba5548df9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130305130056.GX15378@mdounin.ru> Hello! On Tue, Mar 05, 2013 at 05:48:03AM -0500, arty777 wrote:

> Hi, after I clean the cache I see critical warnings in my log; is this normal?
>
> 2013/03/05 12:41:42 [crit] 40771#0: unlink() "/nginx/temp/proxy/3/9b/79fac0c8634496bb89b7d9d9dd3349b3" failed (2: No such file or directory)
> 2013/03/05 12:41:42 [crit] 40771#0: unlink() "/nginx/temp/proxy/f/0c/f04ebf30ad0cea667ee5938fe09660cf" failed (2: No such file or directory)
> 2013/03/05 12:41:42 [crit] 40771#0: unlink() "/nginx/temp/proxy/a/0e/2dd596f873e4901c7856634eff8470ea" failed (2: No such file or directory)
> 2013/03/05 12:41:42 [crit] 40771#0: unlink() "/nginx/temp/proxy/1/6b/16e2a000c02197d069b9d50170ec56b1" failed (2: No such file or directory)

Yes, it's normal - as long as you removed the files in question yourself. -- Maxim Dounin http://nginx.org/en/donation.html

From pasik at iki.fi Tue Mar 5 13:17:41 2013 From: pasik at iki.fi (Pasi Kärkkäinen) Date: Tue, 5 Mar 2013 15:17:41 +0200 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: References: <20130116151511.GS8912@reaktio.net> <20130118083821.GA8912@reaktio.net> <20130221200805.GT8912@reaktio.net> <20130222092524.GV8912@reaktio.net> <20130222105052.GW8912@reaktio.net> <20130225101304.GZ8912@reaktio.net> Message-ID: <20130305131741.GN8912@reaktio.net>

On Tue, Feb 26, 2013 at 10:13:11PM +0800, Weibin Yao wrote:
> It still worked in my box. Can you show me the debug.log (http://wiki.nginx.org/Debugging)? You need to recompile with the --with-debug configure argument and set the debug level in the error_log directive.

Ok, so I've sent you the debug log. Can you see anything obvious in it? I keep getting the "upstream sent invalid header while reading response header from upstream" error when using the no_buffer patch.

Thanks!

-- Pasi

> 2013/2/25 Pasi Kärkkäinen <pasik at iki.fi>
>
> > On Mon, Feb 25, 2013 at 10:13:42AM +0800, Weibin Yao wrote:
> > > Can you show me your configure? It works for me with nginx-1.2.7. Thanks.
> >
> > Hi,
> >
> > I'm using the nginx 1.2.7 el6 src.rpm rebuilt with the "headers more" module added, and your patch.
> > I'm using the following configuration:
> >
> > server {
> >         listen                  public_ip:443 ssl;
> >         server_name             service.domain.tld;
> >
> >         ssl                     on;
> >         keepalive_timeout       70;
> >
> >         access_log              /var/log/nginx/access-service.log;
> >         access_log              /var/log/nginx/access-service-full.log full;
> >         error_log               /var/log/nginx/error-service.log;
> >
> >         client_header_buffer_size 64k;
> >         client_header_timeout   120;
> >
> >         proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
> >         proxy_set_header Host $host;
> >         proxy_set_header X-Real-IP $remote_addr;
> >         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> >         proxy_redirect     off;
> >         proxy_buffering    off;
> >         proxy_cache        off;
> >
> >         add_header Last-Modified "";
> >         if_modified_since   off;
> >
> >         client_max_body_size     262144M;
> >         client_body_buffer_size 1024k;
> >         client_body_timeout     240;
> >
> >         chunked_transfer_encoding off;
> >
> > #       client_body_postpone_sending     64k;
> > #       proxy_request_buffering          off;
> >
> >         location / {
> >                 proxy_pass       https://service-backend;
> >         }
> > }
> >
> > Thanks!
> >
> > -- Pasi
> >
> > > 2013/2/22 Pasi Kärkkäinen <pasik at iki.fi>
> > >
> > > > On Fri, Feb 22, 2013 at 11:25:24AM +0200, Pasi Kärkkäinen wrote:
> > > > > On Fri, Feb 22, 2013 at 10:06:11AM +0800, Weibin Yao wrote:
> > > > > > Use the patch I attached in this mail thread instead, don't use the pull request patch which is for tengine. Thanks.
> > > > >
> > > > > Oh sorry, I missed that attachment. It seems to apply and build OK. I'll start testing it.
> > > >
> > > > I added the patch on top of nginx 1.2.7 and enabled the following options:
> > > >
> > > >     client_body_postpone_sending     64k;
> > > >     proxy_request_buffering          off;
> > > >
> > > > after that connections through the nginx reverse proxy started failing with errors like this:
> > > >
> > > >     [error] 29087#0: *49 upstream prematurely closed connection while reading response header from upstream
> > > >     [error] 29087#0: *60 upstream sent invalid header while reading response header from upstream
> > > >
> > > > And the services are unusable. Commenting out the two config options above makes nginx happy again.
> > > > Any idea what causes that? Any tips how to troubleshoot it? Thanks!
> > > >
> > > > -- Pasi
> > > >
> > > > > On Fri, Jan 18, 2013 at 10:38:21AM +0200, Pasi Kärkkäinen wrote:
> > > > > > Adding the tengine pull request 91 on top of nginx 1.2.7 doesn't work:
> > > > > >
> > > > > >     cc1: warnings being treated as errors
> > > > > >     src/http/ngx_http_request_body.c: In function 'ngx_http_read_non_buffered_client_request_body':
> > > > > >     src/http/ngx_http_request_body.c:506: error: implicit declaration of function 'ngx_http_top_input_body_filter'
> > > > > >     make[1]: *** [objs/src/http/ngx_http_request_body.o] Error 1
> > > > > >     make[1]: Leaving directory `/root/src/nginx/nginx-1.2.7'
> > > > > >     make: *** [build] Error 2
> > > > > >
> > > > > > ngx_http_top_input_body_filter() cannot be found from any .c/.h files.. Which other patches should I apply? Perhaps this?
> > > > > > https://github.com/cfsego/limit_upload_rate/blob/master/for-nginx.patch
> > > > > > Thanks, -- Pasi
> > > > > >
> > > > > > > On Thu, Jan 17, 2013 at 11:15:58AM +0800, Weibin Yao wrote:
> > > > > > > Yes. It should work for any request method.
> > > > > >
> > > > > > Great, thanks, I'll let you know how it works for me. Probably in two weeks or so.
> > > > > >
> > > > > > > This patch should work between nginx-1.2.6 and nginx-1.3.8. The documentation is here:
> > > > > > >
> > > > > > > ## client_body_postpone_sending ##
> > > > > > > Syntax: **client_body_postpone_sending** `size`
> > > > > > > Default: 64k
> > > > > > > Context: `http, server, location`
> > > > > > > If you specify the `proxy_request_buffering` or `fastcgi_request_buffering` to be off, Nginx will send the body to backend when it receives more than `size` data or the whole request body has been received. It could save the connection and reduce the IO number with backend.
> > > > > > >
> > > > > > > ## proxy_request_buffering ##
> > > > > > > Syntax: **proxy_request_buffering** `on | off`
> > > > > > > Default: `on`
> > > > > > > Context: `http, server, location`
> > > > > > > Specify the request body will be buffered to the disk or not. If it's off, the request body will be stored in memory and sent to backend after Nginx receives more than `client_body_postpone_sending` data. It could save the disk IO with large request body.
> > > > > > > Note that, if you specify it to be off, the nginx retry mechanism with unsuccessful response will be broken after you sent part of the request to backend. It will just return 500 when it encounters such unsuccessful response. This directive also breaks these variables: $request_body, $request_body_file. You should not use these variables any more while their values are undefined.
> > > > > >
> > > > > > Hello, this patch sounds exactly like what I need as well! I assume it works for both POST and PUT requests? Thanks, -- Pasi
> > > > > >
> > > > > > > What's the nginx version your patch is based on? Thanks!
> > > > > > >
> > > > > > > > I know the nginx team are working on it. You can wait for it. If you are eager for this feature, you could try my patch: https://github.com/taobao/tengine/pull/91. This patch has been running in our production servers.
> > > > > > > >
> > > > > > > > 2013/1/11 li zJay <zjay1987 at gmail.com>
> > > > > > > > > Hello! Is it possible that nginx will not buffer the client body before handing the request to the upstream? We want to use nginx as a reverse proxy to upload very, very big files to the upstream, but the default behavior of nginx is to save the whole request to the local disk first before handing it to the upstream, which makes it impossible for the upstream to process the file on the fly while the file is uploading, and results in much higher request latency and server-side resource consumption. Thanks!
> > > > > > > >
> > > > > > > > -- Weibin Yao
> > > > > > > > Developer @ Server Platform Team of Taobao
mailto:nginx at nginx.org > 99. http://mailman.nginx.org/mailman/listinfo/nginx > 100. mailto:nginx at nginx.org > 101. http://mailman.nginx.org/mailman/listinfo/nginx > 102. mailto:nginx at nginx.org > 103. http://mailman.nginx.org/mailman/listinfo/nginx > 104. mailto:zjay1987 at gmail.com > 105. https://github.com/taobao/tengine/pull/91 > 106. mailto:yaoweibin at gmail.com > 107. https://github.com/taobao/tengine/pull/91 > 108. mailto:zjay1987 at gmail.com > 109. mailto:nginx at nginx.org > 110. http://mailman.nginx.org/mailman/listinfo/nginx > 111. mailto:nginx at nginx.org > 112. http://mailman.nginx.org/mailman/listinfo/nginx > 113. mailto:nginx at nginx.org > 114. http://mailman.nginx.org/mailman/listinfo/nginx > 115. mailto:nginx at nginx.org > 116. http://mailman.nginx.org/mailman/listinfo/nginx > 117. mailto:nginx at nginx.org > 118. http://mailman.nginx.org/mailman/listinfo/nginx > 119. mailto:nginx at nginx.org > 120. http://mailman.nginx.org/mailman/listinfo/nginx > 121. mailto:nginx at nginx.org > 122. http://mailman.nginx.org/mailman/listinfo/nginx > 123. mailto:nginx at nginx.org > 124. http://mailman.nginx.org/mailman/listinfo/nginx > 125. mailto:nginx at nginx.org > 126. http://mailman.nginx.org/mailman/listinfo/nginx > 127. mailto:nginx at nginx.org > 128. http://mailman.nginx.org/mailman/listinfo/nginx > 129. mailto:nginx at nginx.org > 130. http://mailman.nginx.org/mailman/listinfo/nginx > 131. mailto:pasik at iki.fi > 132. mailto:pasik at iki.fi > 133. https://github.com/cfsego/limit_upload_rate/blob/master/for-nginx.patch > 134. mailto:pasik at iki.fi > 135. https://github.com/taobao/tengine/pull/91 > 136. mailto:yaoweibin at gmail.com > 137. https://github.com/taobao/tengine/pull/91 > 138. mailto:zjay1987 at gmail.com > 139. mailto:nginx at nginx.org > 140. http://mailman.nginx.org/mailman/listinfo/nginx > 141. mailto:nginx at nginx.org > 142. http://mailman.nginx.org/mailman/listinfo/nginx > 143. mailto:nginx at nginx.org > 144. http://mailman.nginx.org/mailman/listinfo/nginx > 145. mailto:zjay1987 at gmail.com > 146. https://github.com/taobao/tengine/pull/91 > 147. mailto:yaoweibin at gmail.com > 148. https://github.com/taobao/tengine/pull/91 > 149. mailto:zjay1987 at gmail.com > 150. mailto:nginx at nginx.org > 151. http://mailman.nginx.org/mailman/listinfo/nginx > 152. mailto:nginx at nginx.org > 153. http://mailman.nginx.org/mailman/listinfo/nginx > 154. mailto:nginx at nginx.org > 155. http://mailman.nginx.org/mailman/listinfo/nginx > 156. mailto:nginx at nginx.org > 157. http://mailman.nginx.org/mailman/listinfo/nginx > 158. mailto:nginx at nginx.org > 159. http://mailman.nginx.org/mailman/listinfo/nginx > 160. mailto:pasik at iki.fi > 161. https://github.com/taobao/tengine/pull/91 > 162. mailto:yaoweibin at gmail.com > 163. https://github.com/taobao/tengine/pull/91 > 164. mailto:zjay1987 at gmail.com > 165. mailto:nginx at nginx.org > 166. http://mailman.nginx.org/mailman/listinfo/nginx > 167. mailto:nginx at nginx.org > 168. http://mailman.nginx.org/mailman/listinfo/nginx > 169. mailto:nginx at nginx.org > 170. http://mailman.nginx.org/mailman/listinfo/nginx > 171. mailto:zjay1987 at gmail.com > 172. https://github.com/taobao/tengine/pull/91 > 173. mailto:yaoweibin at gmail.com > 174. https://github.com/taobao/tengine/pull/91 > 175. mailto:zjay1987 at gmail.com > 176. mailto:nginx at nginx.org > 177. http://mailman.nginx.org/mailman/listinfo/nginx > 178. mailto:nginx at nginx.org > 179. http://mailman.nginx.org/mailman/listinfo/nginx > 180. 
mailto:nginx at nginx.org > 181. http://mailman.nginx.org/mailman/listinfo/nginx > 182. mailto:nginx at nginx.org > 183. http://mailman.nginx.org/mailman/listinfo/nginx > 184. mailto:nginx at nginx.org > 185. http://mailman.nginx.org/mailman/listinfo/nginx > 186. mailto:nginx at nginx.org > 187. http://mailman.nginx.org/mailman/listinfo/nginx > 188. mailto:nginx at nginx.org > 189. http://mailman.nginx.org/mailman/listinfo/nginx > 190. mailto:nginx at nginx.org > 191. http://mailman.nginx.org/mailman/listinfo/nginx > 192. mailto:pasik at iki.fi > 193. https://github.com/cfsego/limit_upload_rate/blob/master/for-nginx.patch > 194. mailto:pasik at iki.fi > 195. https://github.com/taobao/tengine/pull/91 > 196. mailto:yaoweibin at gmail.com > 197. https://github.com/taobao/tengine/pull/91 > 198. mailto:zjay1987 at gmail.com > 199. mailto:nginx at nginx.org > 200. http://mailman.nginx.org/mailman/listinfo/nginx > 201. mailto:nginx at nginx.org > 202. http://mailman.nginx.org/mailman/listinfo/nginx > 203. mailto:nginx at nginx.org > 204. http://mailman.nginx.org/mailman/listinfo/nginx > 205. mailto:zjay1987 at gmail.com > 206. https://github.com/taobao/tengine/pull/91 > 207. mailto:yaoweibin at gmail.com > 208. https://github.com/taobao/tengine/pull/91 > 209. mailto:zjay1987 at gmail.com > 210. mailto:nginx at nginx.org > 211. http://mailman.nginx.org/mailman/listinfo/nginx > 212. mailto:nginx at nginx.org > 213. http://mailman.nginx.org/mailman/listinfo/nginx > 214. mailto:nginx at nginx.org > 215. http://mailman.nginx.org/mailman/listinfo/nginx > 216. mailto:nginx at nginx.org > 217. http://mailman.nginx.org/mailman/listinfo/nginx > 218. mailto:nginx at nginx.org > 219. http://mailman.nginx.org/mailman/listinfo/nginx > 220. mailto:pasik at iki.fi > 221. https://github.com/taobao/tengine/pull/91 > 222. mailto:yaoweibin at gmail.com > 223. https://github.com/taobao/tengine/pull/91 > 224. mailto:zjay1987 at gmail.com > 225. mailto:nginx at nginx.org > 226. http://mailman.nginx.org/mailman/listinfo/nginx > 227. mailto:nginx at nginx.org > 228. http://mailman.nginx.org/mailman/listinfo/nginx > 229. mailto:nginx at nginx.org > 230. http://mailman.nginx.org/mailman/listinfo/nginx > 231. mailto:zjay1987 at gmail.com > 232. https://github.com/taobao/tengine/pull/91 > 233. mailto:yaoweibin at gmail.com > 234. https://github.com/taobao/tengine/pull/91 > 235. mailto:zjay1987 at gmail.com > 236. mailto:nginx at nginx.org > 237. http://mailman.nginx.org/mailman/listinfo/nginx > 238. mailto:nginx at nginx.org > 239. http://mailman.nginx.org/mailman/listinfo/nginx > 240. mailto:nginx at nginx.org > 241. http://mailman.nginx.org/mailman/listinfo/nginx > 242. mailto:nginx at nginx.org > 243. http://mailman.nginx.org/mailman/listinfo/nginx > 244. mailto:nginx at nginx.org > 245. http://mailman.nginx.org/mailman/listinfo/nginx > 246. mailto:nginx at nginx.org > 247. http://mailman.nginx.org/mailman/listinfo/nginx > 248. mailto:nginx at nginx.org > 249. http://mailman.nginx.org/mailman/listinfo/nginx > 250. mailto:nginx at nginx.org > 251. http://mailman.nginx.org/mailman/listinfo/nginx > 252. mailto:nginx at nginx.org > 253. http://mailman.nginx.org/mailman/listinfo/nginx > 254. mailto:nginx at nginx.org > 255. http://mailman.nginx.org/mailman/listinfo/nginx > 256. mailto:nginx at nginx.org > 257. http://mailman.nginx.org/mailman/listinfo/nginx > 258. mailto:nginx at nginx.org > 259. http://mailman.nginx.org/mailman/listinfo/nginx > 260. mailto:nginx at nginx.org > 261. 
http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Tue Mar 5 14:55:44 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Mar 2013 18:55:44 +0400 Subject: nginx-1.3.14 Message-ID: <20130305145544.GB15378@mdounin.ru> Changes with nginx 1.3.14 05 Mar 2013 *) Feature: $connections_active, $connections_reading, and $connections_writing variables in the ngx_http_stub_status_module. *) Feature: support of WebSocket connections in the ngx_http_uwsgi_module and ngx_http_scgi_module. *) Bugfix: in virtual servers handling with SNI. *) Bugfix: new sessions were not always stored if the "ssl_session_cache shared" directive was used and there was no free space in shared memory. Thanks to Piotr Sikora. *) Bugfix: multiple X-Forwarded-For headers were handled incorrectly. Thanks to Neal Poole for sponsoring this work. *) Bugfix: in the ngx_http_mp4_module. Thanks to Gernot Vormayr. -- Maxim Dounin http://nginx.org/en/donation.html From vbart at nginx.com Tue Mar 5 15:33:20 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 5 Mar 2013 19:33:20 +0400 Subject: nginx/KQUEUE+SPDY breaks proxy_ignore_client_abort In-Reply-To: <201303020209.46947.vbart@nginx.com> References: <20130301082251.GA97216.take2@redoubt.spodhuis.org> <20130301205606.GA15343@redoubt.spodhuis.org> <201303020209.46947.vbart@nginx.com> Message-ID: <201303051933.20998.vbart@nginx.com> On Saturday 02 March 2013 02:09:46 Valentin V. Bartenev wrote: > On Saturday 02 March 2013 00:56:06 Phil Pennock wrote: > > [fixed Subject: to help others with issue track it] > > > > On 2013-03-01 at 17:12 +0400, Maxim Dounin wrote: > > > It looks like you are running nginx with experimental SPDY patch, > > > and it broke things here. Try recompiling nginx without SPDY > > > patch to see if it helps. > > > > That fixed things, thank you. > > > > So, nginx+KQUEUE+SPDY breaks clients which shutdown on the write side, > > without the ability to disable treating this as a client abort. > > > > I'll sacrifice SPDY for now, to have correctness for existing clients. > > > > Do you think that the SPDY patch will change to include something like > > proxy_ignore_client_abort or will write-shutdowns just be treated as > > unacceptable? > > > > Given that SPDY requires SSL which inherently requires bi-directional > > connections at all times, the current behaviour with the SPDY patch is > > absolutely correct, if SPDY is enabled for that server. In this case, > > it's a cleartext server so SPDY wasn't enabled at all. > > SPDY patch also includes many changes for http core of nginx. The one that > you see, is the unintended result of these changes. I'm going to fix it in > upcoming revision, since it can break some setups as you have mentioned. > Done. http://nginx.org/patches/spdy/patch.spdy-66_1.3.14.txt wbr, Valentin V. 
Bartenev -- http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Mar 5 17:39:33 2013 From: nginx-forum at nginx.us (onel0ve) Date: Tue, 05 Mar 2013 12:39:33 -0500 Subject: Need help to convert .htaccess to nginx rules Message-ID: [code] #Compress SetOutputFilter DEFLATE Options +FollowSymLinks RewriteEngine on RewriteBase / # please use .php at url for admin RewriteRule ^recent-articles-feed rss.php [NC] RewriteRule ^rss/([^/]+) rss.php?q=$1 [NC] # To ignore htaccess - enlist here RewriteCond $1 !^(index\.php|ftpservice|demo|ftpdb|MDF|phpsysinfo|admin|templates|links|api|forum|articles|imageGallery|language|includes|ajax|resources|fonts|images|parse|directory|uploads|system|rss\.php|robots\.txt|sitemap\.php|Sitemap\.xml|sitemap\.xml|sitemap2\.xml|urllist\.txt) RewriteRule ^(.*) index.php [NC] [/code] This is my htaccess content . Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236960,236960#msg-236960 From nginx-list at puzzled.xs4all.nl Tue Mar 5 18:38:30 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Tue, 05 Mar 2013 19:38:30 +0100 Subject: nginx-1.3.14: alert sched_setaffinity() failed Message-ID: <51363BA6.8080904@puzzled.xs4all.nl> Hi, On an updated CentOS 6.3 x86_64 VM I just installed nginx 1.3.14 with extra modules ngx_cache_purge and nginx-auth-ldap, the SPDY 66 patch and built statically against openssl-1.0.1e. In the error log I see this message: 2013/03/05 19:31:39 [alert] 13363#0: sched_setaffinity() failed (22: Invalid argument) No idea what it means. Maybe it is helpful for the developers. Regards, Patrick From peter_booth at me.com Tue Mar 5 18:46:13 2013 From: peter_booth at me.com (Peter Booth) Date: Tue, 05 Mar 2013 13:46:13 -0500 Subject: QNs about cookies and caching Message-ID: I'm wondering if someone can help with the following? I have a java app where I'm using nginx as a caching reverse proxy. I have a location defined for five distinct JSPs and different cache configurations and custom keys for each. Some locations are using: proxy_ignore_headers Set-Cookie Proxy_pass_header off proxy_hide_header Set-Cookie To ensure that responses that set cookies can be safely cached without returning responses that set cookies. Any of these JSPs could possibly create a backend session if the client doesn't pass a jsessionid cookie. I'd really like to get a different behavior whereby if a response from the backend includes a Set-Cookie then the response is cached without the Set-Cookie but the original caller does see the Set-Cookie header. Is this possible with some lua or other trickery? Peter Sent from my iPhone From mdounin at mdounin.ru Tue Mar 5 19:07:28 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Mar 2013 23:07:28 +0400 Subject: nginx-1.3.14: alert sched_setaffinity() failed In-Reply-To: <51363BA6.8080904@puzzled.xs4all.nl> References: <51363BA6.8080904@puzzled.xs4all.nl> Message-ID: <20130305190728.GK15378@mdounin.ru> Hello! On Tue, Mar 05, 2013 at 07:38:30PM +0100, Patrick Lists wrote: > Hi, > > On an updated CentOS 6.3 x86_64 VM I just installed nginx 1.3.14 > with extra modules ngx_cache_purge and nginx-auth-ldap, the SPDY 66 > patch and built statically against openssl-1.0.1e. In the error log > I see this message: > > 2013/03/05 19:31:39 [alert] 13363#0: sched_setaffinity() failed (22: > Invalid argument) > > No idea what it means. Maybe it is helpful for the developers. Do you have worker_cpu_affinity (http://nginx.org/r/worker_cpu_affinity) in your config? Does it match CPUs available? 
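For illustration, a setting that matches a four-core machine (the core count here is an assumption, not taken from the box in question) binds one worker per CPU, with one bitmask per worker:

    worker_processes     4;
    worker_cpu_affinity  0001 0010 0100 1000;
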
The message indicate that from kernel point of view configured CPU mask is wrong: EINVAL The affinity bit mask mask contains no processors that are currently physically on the system and permitted to the process according to any restrictions that may be imposed by the "cpuset" mechanism described in cpuset(7). An easy way to trigger the message above is to do something like worker_cpu_affinity 10; on a system with one CPU only. -- Maxim Dounin http://nginx.org/en/donation.html From trm.nagios at gmail.com Tue Mar 5 19:54:49 2013 From: trm.nagios at gmail.com (trm asn) Date: Wed, 6 Mar 2013 01:24:49 +0530 Subject: Websocket on port 80 In-Reply-To: References: Message-ID: On Tue, Mar 5, 2013 at 1:20 AM, trm asn wrote: > Hi List, > > I am using Nginx-1.3.13 for this websocket support. I am doing socket.ioproxy from Nginx . Below is my nginx.conf for websocket . > > > http { > include /etc/nginx/mime.types; > default_type application/octet-stream; > > map $http_upgrade $conn_header { > default upgrade; > '' ''; > } > > server { > listen 80; > server_name _ > access_log /var/log/nginx/access.log mylog; > error_log /var/log/nginx/error.log; > root /var/www/nginx; > > location /nodeapp { > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_redirect off; > proxy_pass http://10.164.110.11:8888; > proxy_http_version 1.1; > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Connection $conn_header; > proxy_read_timeout 120s; > proxy_set_header Host $host; > } > } > > my-node logs ... > > debug: got heartbeat packet > debug: cleared heartbeat timeout for client tZhwv5ng-YYkTREOHsh4 > debug: set heartbeat interval for client tZhwv5ng-YYkTREOHsh4 > info: stats: "stats key" > info: stats: "Sent gauge sessions.count with value 9" > info: stats: "stats key" > info: stats: "Sent gauge users.registered with value 3" > info: stats: "stats key" > info: stats: "Sent gauge sessions.unique with value 3" > info: transport end (socket end) > > > > But if configure Nginx on SSL mode then it's upgrading to websocket ( 101 > ) . > > debug: client authorized > info: handshake authorized jthZZKLA1fR1eaHTHsih > debug: setting request GET /nodeapp/ > socket.io/1/websocket/jthZZKLA1fR1eaHTHsih > debug: set heartbeat interval for client jthZZKLA1fR1eaHTHsih > debug: client authorized for > debug: websocket writing 1:: > info: : "Session started for accessId: HbIzkBis5MYB9I7X" > debug: websocket writing 5:::{"name":"session-marked-as-alive"} > > --Thanks, > Tarak > Does anybody has faced this issue with 1.3.13 & socket.io . -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-list at puzzled.xs4all.nl Tue Mar 5 20:36:38 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Tue, 05 Mar 2013 21:36:38 +0100 Subject: nginx-1.3.14: alert sched_setaffinity() failed In-Reply-To: <20130305190728.GK15378@mdounin.ru> References: <51363BA6.8080904@puzzled.xs4all.nl> <20130305190728.GK15378@mdounin.ru> Message-ID: <51365756.7020906@puzzled.xs4all.nl> Hi Maxim, Thank you for your feedback. On 05-03-13 20:07, Maxim Dounin wrote: > Do you have worker_cpu_affinity (http://nginx.org/r/worker_cpu_affinity) > in your config? Does it match CPUs available? My bad, a config was copied which had worker_cpu_affinity set. I removed it and the alert is gone now. Sorry for the noise. Nginx 1.3.14 seems to work fine for me (light testing). Thanks for one great piece of software! 
Regards, Patrick From lists at ruby-forum.com Wed Mar 6 00:23:28 2013 From: lists at ruby-forum.com (Arman Mirk) Date: Wed, 06 Mar 2013 01:23:28 +0100 Subject: Why would nginx 0.7.6 is drop requests randomly Message-ID: <2d2e2e9693ec557a8cd14b2daef17693@ruby-forum.com> I notice our server is rapidly responding 500 errors for random pages. Our setup is pretty basic. We are running Nginx 0.7.6 with Unicorn and a Rails app on a Ubuntu 10.4 server. What ever the problem, it doesn't seem to happen at the application level since Unicorn logs don't contain any errors. Next we looked at Nginx's error logs and there doesn't seem to be any entries. Yes when we monitor the Nginx's access log we notice rapid 500 error responses on random pages. This is really strange since our server isn't under a lot of load. It uses a 4 core Linode instance and we have 4 Nginx worker processes. Could someone people point us to the right direction? Thanks -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Wed Mar 6 00:51:47 2013 From: nginx-forum at nginx.us (dakun) Date: Tue, 05 Mar 2013 19:51:47 -0500 Subject: Forwarding to upstream server at port specified in url query paramter Message-ID: <2112f78cf01bfd21dd5a7a774b7e3395.NginxMailingListEnglish@forum.nginx.org> I am trying to use the HttpProxyModule to forward traffic to an upstream server. The incoming request will be like: http://foobar.com/path?p=1234 where p=1234 indicates the upstream port to use. For this example I would want to forward to http://upstreamserver.com:1234. How can I achieve this? I have experimented with regex and rewrites but was unsuccessful mainly because regex on "location" directive does not apply to url parameters. And rewrites do not allow me to change the outgoing port number. I had also tried to use if ($args ~ "p=(\d+)") { set $port $1; rewrite ^ ?????? } But I ended up with either a browser redirect which is not desirable, or I ended up sending traffic to local file which does not exist. Unfortunately I cannot change the incoming request to include '1234' as part of path, e.g. http://foobar.com/p/1234/. I have no control of that. I am using nginx 1.3.13 with websocket support. Thanks in advance for any help. Dakun Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236969,236969#msg-236969 From contact at jpluscplusm.com Wed Mar 6 01:19:27 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 6 Mar 2013 01:19:27 +0000 Subject: Forwarding to upstream server at port specified in url query paramter In-Reply-To: <2112f78cf01bfd21dd5a7a774b7e3395.NginxMailingListEnglish@forum.nginx.org> References: <2112f78cf01bfd21dd5a7a774b7e3395.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 6 March 2013 00:51, dakun wrote: > I am trying to use the HttpProxyModule to forward traffic to an upstream > server. The incoming request will be like: > > http://foobar.com/path?p=1234 > > where p=1234 indicates the upstream port to use. > > For this example I would want to forward to http://upstreamserver.com:1234. > How can I achieve this? Is http://wiki.nginx.org/HttpCoreModule#.24arg_PARAMETER of any use? Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From lists at ruby-forum.com Wed Mar 6 02:57:01 2013 From: lists at ruby-forum.com (Joseph O.) 
Date: Wed, 06 Mar 2013 03:57:01 +0100 Subject: Problem configuring Nginx to host SpreeCommerce - 502 error Message-ID: I have had a terribly difficult time deploying SpreeCommerce, despite (or perhaps because of) following the Spree documentation. I have eventually gotten to the point where the Capistrano deployment runs through entirely, but I seem to have configured Nginx incorrectly, as I am getting a "502 - Bad Gateway" error. My current nginx.conf and site configuration files are attached; I have tried the recommendations from several different sources but no combination of options seems to have any effect. Attachments: http://www.ruby-forum.com/attachment/8194/nginx.conf http://www.ruby-forum.com/attachment/8195/mjtb-site -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Wed Mar 6 03:50:54 2013 From: nginx-forum at nginx.us (moke110007) Date: Tue, 05 Mar 2013 22:50:54 -0500 Subject: how work ip_hash and weight in nginx 1.2.7 Message-ID: <592de3998b05b3b3570b7da781dca944.NginxMailingListEnglish@forum.nginx.org> sorry,my english is poor. >>> webserver: tomcat 6.0.23 When I tested on four tomcat servers, one server load is higher always.How to set up to make each server load is balanced. >>> Use weight to decide load distribution, iphash to decide the next should be assigned to which server. Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236972,236972#msg-236972 From nginx-forum at nginx.us Wed Mar 6 06:07:05 2013 From: nginx-forum at nginx.us (mex) Date: Wed, 06 Mar 2013 01:07:05 -0500 Subject: Problem configuring Nginx to host SpreeCommerce - 502 error In-Reply-To: References: Message-ID: <54c754257b28d876fd64d4233b644f82.NginxMailingListEnglish@forum.nginx.org> looks like a recursion to me, no? you point your nginx.conf to proxy_pass to http://www.masterjoestoybox.net/ and then you define with your mjtb a config that listens to www.masterjoestoybox.net and proxy_passes to http://www.masterjoestoybox.net/ and you configure a upstream unicorn_server in your nginx.conf but never use it? maybe you can throw away the mjtb - config btw, http://wiki.nginx.org/IfIsEvil http://wiki.nginx.org/Pitfalls regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236971,236973#msg-236973 From nginx-forum at nginx.us Wed Mar 6 06:30:57 2013 From: nginx-forum at nginx.us (mex) Date: Wed, 06 Mar 2013 01:30:57 -0500 Subject: Need help to convert .htaccess to nginx rules In-Reply-To: References: Message-ID: maybe this helps as a start: http://stackoverflow.com/questions/5840497/convert-htaccess-to-nginx and you may want to read http://wiki.nginx.org/HttpCoreModule#location http://wiki.nginx.org/HttpRewriteModule since usage and order-of-procerssing is quite a little different regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236960,236975#msg-236975 From nginx-forum at nginx.us Wed Mar 6 06:37:52 2013 From: nginx-forum at nginx.us (mex) Date: Wed, 06 Mar 2013 01:37:52 -0500 Subject: nginx stats In-Reply-To: References: Message-ID: <2d583b1b2c99f46a13389890d914a34d.NginxMailingListEnglish@forum.nginx.org> > > Yes, the stub one is quite limited. We've been working on a much > better version, which is going to appear soon. > any more info on this topic yet? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,224597,236976#msg-236976 From lists at ruby-forum.com Wed Mar 6 06:43:54 2013 From: lists at ruby-forum.com (Evgeny T.) 
Date: Wed, 06 Mar 2013 07:43:54 +0100
Subject: =?UTF-8?B?TmdpeG4g0L3QtSDQvtGC0LTQsNC10YIg0YTQsNC50LvRiyDQsdC+0LvRjNGI0LUg?= =?UTF-8?B?MSDQvNCx?=
Message-ID: <418275b01b063d5bbb35fbca00f50ac0@ruby-forum.com>

Nginx does not serve files larger than 1 MB.

The site is Rails + Nginx + Unicorn. Everything works, except video files
played through JWPlayer. The problem is that a video of about 1.8 MB never
downloads completely, so it does not play back. The response headers are:

Age 0
Cache-Control public, must-revalidate
Connection close
Content-Length 1851461
Content-Type application/x-shockwave-flash
Date Wed, 06 Mar 2013 06:32:10 GMT
Etag "3eafc4038bc34934cbe666d0c1f91412"
Last-Modified Tue, 05 Mar 2013 14:28:57 GMT
Status 200 OK
X-Content-Digest 4e79850037ee7988db983af4e9098612c3dc70de
X-Rack-Cache miss, store
X-Request-Id 8e2c3414729c4e18d9f86e861edcf256
X-Runtime 0.003221
X-Ua-Compatible IE=Edge

Request Headers
Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding gzip, deflate
Accept-Language en-US,en;q=0.5
Connection keep-alive
Cookie
Host xxx.xxx.xx:9091
User-Agent Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:19.0) Gecko/20100101 Firefox/19.0

In the end I see size: 38b, Transfered: 1.77MB, but no video :(

-- 
Posted via http://www.ruby-forum.com/.

From nginx-forum at nginx.us Wed Mar 6 07:04:20 2013
From: nginx-forum at nginx.us (dakun)
Date: Wed, 06 Mar 2013 02:04:20 -0500
Subject: Forwarding to upstream server at port specified in url query paramter
In-Reply-To: 
References: 
Message-ID: <2484b6b1f94a06c46ad9b41658603704.NginxMailingListEnglish@forum.nginx.org>

Thanks for the advice on $arg_PARAMETER. It allows me to retrieve the parameter. However I am not able to use it as a port number for proxy_pass.

This shows that I can get the parameter and use it in rewrite:
location /test {
    rewrite ^ http://www.google.com/?q=$arg_p;
}

This does not work. Got error "no resolver defined to resolve www.google.com" in log.
location /path {
    proxy_pass http://www.google.com/?q=$arg_p;
}

This shows that I can use the parameter in a rewrite prior to proxy_pass:
location /path {
    rewrite ^(.*)$ /?q=$arg_p break;
    proxy_pass http://www.google.com/;
}

Unfortunately I still can't use the parameter value as an upstream port number:
location /path2 {
    rewrite ^(.*)$ :$arg_p break;
    proxy_pass http://www.google.com;
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236969,236978#msg-236978

From contact at jpluscplusm.com Wed Mar 6 09:54:02 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Wed, 6 Mar 2013 09:54:02 +0000
Subject: Forwarding to upstream server at port specified in url query paramter
In-Reply-To: <2484b6b1f94a06c46ad9b41658603704.NginxMailingListEnglish@forum.nginx.org>
References: <2484b6b1f94a06c46ad9b41658603704.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

On 6 March 2013 07:04, dakun wrote:
> Thanks for the advice on $arg_PARAMETER. It allows me to retrieve the
> parameter. However I am not able to use it as a port number for proxy_pass.
>
> This shows that I can get the parameter and use it in rewrite:
> location /test {
> rewrite ^ http://www.google.com/?q=$arg_p;
> }
>
> This does not work. Got error "no resolver defined to resolve
> www.google.com" in log.
> location /path {
> proxy_pass http://www.google.com/?q=$arg_p;
> }

Come on chap - try a bit harder!
http://bit.ly/12tikWj > This shows that I can use the parameter in a rewrite prior to proxy_pass: > location /path { > rewrite ^(.*)$ /?q=$arg_p break; > proxy_pass http://www.google.com/; > } > > Unfortunately I still can't use the paramrter value as an upstream port > number: > location /path2 { > rewrite ^(.*)$ :$arg_p break; > proxy_pass http://www.google.com; > } No, you just haven't configured it correctly. You really need to read the proxy_pass and rewrite documentation more carefully. Neither works exactly the way you seem to think they do. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Wed Mar 6 10:12:58 2013 From: nginx-forum at nginx.us (onel0ve) Date: Wed, 06 Mar 2013 05:12:58 -0500 Subject: Need help to convert .htaccess to nginx rules In-Reply-To: References: Message-ID: I just need to convert this to nginx rules . # To ignore htaccess - enlist here RewriteCond $1 !^(index\.php|ftpservice|demo|ftpdb|MDF|phpsysinfo|admin|templates|links| api|forum|articles|imageGallery|language|includes|ajax|resources|fonts|images|parse|directory| uploads|system|rss\.php|robots\.txt|sitemap\.php|Sitemap\.xml|sitemap\.xml|sitemap2\.xml|urllist\.txt) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236960,236980#msg-236980 From ru at nginx.com Wed Mar 6 10:18:17 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 6 Mar 2013 14:18:17 +0400 Subject: Wanted: suggestions on how to invert proxy_pass return codes In-Reply-To: References: Message-ID: <20130306101817.GI72670@lo0.su> On Tue, Mar 05, 2013 at 12:17:03AM +0000, Jonathan Matthews wrote: > A slight thinko crept in to my original mail; there's a small > difference (which does remove a minor complexity) as I've marked below > ... > > On 4 March 2013 23:58, Jonathan Matthews wrote: > [snip] > > -------------------------------------------------------------------------- > > location /healthcheck/ { > > if (!-f /tmp/flag) { > > return 503; > > } > > SHOULD BE: > > " > if (-f /tmp/flag) { > return 200; > } > " > > [snip] > > -------------------------------------------------------------------------- > > > > Before making the proxy_pass call, check a marker on disk and serve a > > (real, not translated to =200) 503 if it exists. > > SHOULD BE: "... serve a 200 if it exists." > > I don't think this changes the meat of the problem - it just removes > one minor niggle. > > Any thoughts? Well, if you can add headers on a backend ... : http { : map $connection $fail { : ~[02468]$ 1; : } : : server { : server_name backend; : listen 8001; : add_header X-Accel-Redirect /backend_up; : : if ($fail) { return 503 "real 503\n"; } : return 200 "real 200\n"; : } : : server { : listen 8000; : : location = /test { : proxy_pass http://127.0.0.1:8001; : proxy_intercept_errors on; : error_page 503 = @backend_down; : } : : location @backend_down { : return 200 "wrapped 200\n"; : } : : location = /backend_up { : internal; : return 503 "wrapped 503\n"; : } : } : } $ repeat 10 curl http://127.0.0.1:8000/test wrapped 503 wrapped 200 wrapped 503 wrapped 200 wrapped 503 wrapped 200 wrapped 503 wrapped 200 wrapped 503 wrapped 200 (See http://nginx.org/r/proxy_ignore_headers for the meaning of X-Accel-Redirect.) 
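

On the port-in-query-parameter thread above: when proxy_pass contains variables, nginx resolves the target at request time, so a hostname needs a resolver directive while an IP literal does not (that is where the "no resolver defined" error comes from), and the port itself can come from a variable. A minimal sketch follows; the backend address 192.0.2.10 and the allowed-port list are assumptions for illustration only:

    map $arg_p $backend_port {
        default  "";
        1234     1234;
        8080     8080;
    }

    server {
        listen 80;

        location /path {
            # refuse ports that are not explicitly whitelisted in the map
            if ($backend_port = "") {
                return 403;
            }
            proxy_pass http://192.0.2.10:$backend_port;
        }
    }

The map acts as a whitelist so a client cannot point the proxy at an arbitrary port.
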
From nginx-forum at nginx.us Wed Mar 6 10:28:12 2013
From: nginx-forum at nginx.us (yvlasov)
Date: Wed, 06 Mar 2013 05:28:12 -0500
Subject: Problem request_timeout not working with proxy_next_upstream on proxy_connect_timeout but proxy_read_timeout
Message-ID: <85e6f89cf6d9d89c789ca68b0f4c67c5.NginxMailingListEnglish@forum.nginx.org>

Hello

In our setup we have nginx as a front-end and several back-ends. The problem is our load profile: we have a lot of simple and fast HTTP requests, and very few requests that are very heavy in terms of time and back-end CPU.

So my idea is to use proxy_next_upstream for simple requests as usual, and it works perfectly. For heavy requests, selected by URL, I want to pass them through to the back-end with the lowest CPU load by specifying a small proxy_connect_timeout and using "proxy_next_upstream timeout". But when the whole system is overloaded with heavy requests, I don't want them to travel through all the back-ends, because proxy_read_timeout is about 1 minute.

I was hoping that setting request_timeout to the same value as proxy_read_timeout would prevent heavy requests from travelling through all upstreams, but they do.

I've found a similar topic, but the proposition there was to add two new options to proxy_next_upstream, such as timeout_tcp and timeout_http, or something similar.

Thanks in advance for your advice and comments.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236982,236982#msg-236982

From nginx-forum at nginx.us Wed Mar 6 10:44:30 2013
From: nginx-forum at nginx.us (yvlasov)
Date: Wed, 06 Mar 2013 05:44:30 -0500
Subject: Problem request_timeout not working with proxy_next_upstream on proxy_connect_timeout but proxy_read_timeout
In-Reply-To: <85e6f89cf6d9d89c789ca68b0f4c67c5.NginxMailingListEnglish@forum.nginx.org>
References: <85e6f89cf6d9d89c789ca68b0f4c67c5.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <98dd1f421e0e15d3823fea879592abfe.NginxMailingListEnglish@forum.nginx.org>

For better understanding, here is a snippet of my config:

upstream super_upstream {
    keepalive 128;
    server be1 max_fails=45 fail_timeout=3s;
    server be2 max_fails=45 fail_timeout=3s;
    server be3 max_fails=45 fail_timeout=3s;
}

server {
    server_name pytn.ru;

    location ^~ /simple_requests {
        proxy_read_timeout 2s;
        proxy_send_timeout 2s;
        proxy_connect_timeout 10ms;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_pass http://super_upstream;
    }

    location ^~ /very_heavy_requests {
        send_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        proxy_connect_timeout 5ms;
        proxy_next_upstream timeout;
        proxy_pass http://super_upstream;
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236982,236983#msg-236983

From mdounin at mdounin.ru Wed Mar 6 11:54:44 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 6 Mar 2013 15:54:44 +0400
Subject: Why would nginx 0.7.6 is drop requests randomly
In-Reply-To: <2d2e2e9693ec557a8cd14b2daef17693@ruby-forum.com>
References: <2d2e2e9693ec557a8cd14b2daef17693@ruby-forum.com>
Message-ID: <20130306115444.GN15378@mdounin.ru>

Hello!

On Wed, Mar 06, 2013 at 01:23:28AM +0100, Arman Mirk wrote:

> I notice our server is rapidly responding 500 errors for random pages.
> Our setup is pretty basic. We are running Nginx 0.7.6 with Unicorn and a
> Rails app on a Ubuntu 10.4 server.
>
> What ever the problem, it doesn't seem to happen at the application
> level since Unicorn logs don't contain any errors. Next we looked at
> Nginx's error logs and there doesn't seem to be any entries.
Yes when we > monitor the Nginx's access log we notice rapid 500 error responses on > random pages. > > This is really strange since our server isn't under a lot of load. It > uses a 4 core Linode instance and we have 4 Nginx worker processes. > > Could someone people point us to the right direction? If it indeed happens in nginx, I would suppose it's some configuration problem, something like rewrite loop, or proxy loop, or something similar. It should have reasons logged to error log though. To make sure it's indeed happens in nginx, and not returned by your application, you may try logging $upstream_status variable, see docs here: http://nginx.org/en/docs/http/ngx_http_upstream_module.html You may also use debug log to check what goes wrong in details, see here: http://nginx.org/en/docs/debugging_log.html But in any case 0.7.6 is a 5 years old development version, and you may want to upgrade. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Wed Mar 6 12:02:28 2013 From: nginx-forum at nginx.us (jan5134) Date: Wed, 06 Mar 2013 07:02:28 -0500 Subject: Cache keeps growing despite Max_size limit Message-ID: Hi, I'm having issues with nginx where my cache directory keeps growing until the hdd is full. If anyone can give me any information on how to solve this it will be appreciated. nginx -V: nginx version: nginx/1.2.1 built by gcc 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_stub_status_module --with-http_perl_module --with-mail --with-mail_ssl_module --add-module=/root/nginx-1.2.1/ngx_slowfs_cache-1.9 --add-module=/root/nginx-1.2.1/nginx-sticky-module-1.0 nginx.conf: user apache; worker_processes 4; error_log /var/log/nginx/error.log emerg; pid /var/run/nginx.pid; worker_rlimit_nofile 30000; events { worker_connections 8192; use epoll; multi_accept off; } http { include /etc/nginx/mime.types; sendfile on; gzip on; gzip_min_length 10; gzip_types text/plain text/css image/png image/gif image/jpeg application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript; gzip_vary on; gzip_comp_level 9; gzip_proxied any; gzip_disable msie6; tcp_nodelay off; log_format '$remote_addr - $remote_user [$time_local]' '"$request" $status $bytes_sent ' '"$http_referer" "$http_user_agent" "$gzip_ratio"'; slowfs_cache_path /var/cache/nginx/cache levels=1:2 keys_zone=fastcache:4096m max_size=25g; slowfs_temp_path /var/cache/nginx/temp 1 2; proxy_buffer_size 128k; proxy_buffers 4 256k; proxy_busy_buffers_size 256k; include /etc/nginx/sites-enabled/*; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236987,236987#msg-236987 From mdounin at mdounin.ru Wed Mar 
6 12:13:34 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 6 Mar 2013 16:13:34 +0400 Subject: Cache keeps growing despite Max_size limit In-Reply-To: References: Message-ID: <20130306121334.GP15378@mdounin.ru> Hello! On Wed, Mar 06, 2013 at 07:02:28AM -0500, jan5134 wrote: > Hi, > > I'm having issues with nginx where my cache directory keeps growing until > the hdd is full. > If anyone can give me any information on how to solve this it will be > appreciated. > > nginx -V: > > nginx version: nginx/1.2.1 > built by gcc 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) > TLS SNI support enabled > configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log > --http-client-body-temp-path=/var/lib/nginx/tmp/client_body > --http-proxy-temp-path=/var/lib/nginx/tmp/proxy > --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi > --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi > --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid > --lock-path=/var/lock/subsys/nginx --user=nginx --group=nginx > --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_realip_module > --with-http_addition_module --with-http_sub_module --with-http_dav_module > --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_degradation_module --with-http_stub_status_module > --with-http_perl_module --with-mail --with-mail_ssl_module > --add-module=/root/nginx-1.2.1/ngx_slowfs_cache-1.9 > --add-module=/root/nginx-1.2.1/nginx-sticky-module-1.0 > > nginx.conf: > user apache; > > worker_processes 4; > > error_log /var/log/nginx/error.log emerg; > pid /var/run/nginx.pid; > > worker_rlimit_nofile 30000; > > events { > worker_connections 8192; > use epoll; > multi_accept off; > } > > http { > include /etc/nginx/mime.types; > > sendfile on; > > gzip on; > gzip_min_length 10; > gzip_types text/plain text/css image/png image/gif > image/jpeg application/x-javascript text/xml > application/xml application/xml+rss > text/javascript application/javascript; > gzip_vary on; > gzip_comp_level 9; > gzip_proxied any; > gzip_disable msie6; > > tcp_nodelay off; > > log_format '$remote_addr - $remote_user [$time_local]' > '"$request" $status $bytes_sent ' > '"$http_referer" "$http_user_agent" > "$gzip_ratio"'; > > slowfs_cache_path /var/cache/nginx/cache levels=1:2 > keys_zone=fastcache:4096m max_size=25g; > slowfs_temp_path /var/cache/nginx/temp 1 2; > > proxy_buffer_size 128k; > proxy_buffers 4 256k; > proxy_busy_buffers_size 256k; > > include /etc/nginx/sites-enabled/*; > } As a quick test you may try switching to a proxy + proxy_cache setup instead of slowfs_cache to see if slowfs_cache module problem or something more general. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Wed Mar 6 12:16:39 2013 From: nginx-forum at nginx.us (jan5134) Date: Wed, 06 Mar 2013 07:16:39 -0500 Subject: Cache keeps growing despite Max_size limit In-Reply-To: <20130306121334.GP15378@mdounin.ru> References: <20130306121334.GP15378@mdounin.ru> Message-ID: Thanks for the quick reply. I would like to test that but i'm kind of new to nginx and also our website can't have any downtime. Any other suggestion maybe? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236987,236989#msg-236989

From mdounin at mdounin.ru Wed Mar 6 12:28:09 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 6 Mar 2013 16:28:09 +0400
Subject: Cache keeps growing despite Max_size limit
In-Reply-To: 
References: <20130306121334.GP15378@mdounin.ru>
Message-ID: <20130306122809.GQ15378@mdounin.ru>

Hello!

On Wed, Mar 06, 2013 at 07:16:39AM -0500, jan5134 wrote:

> Thanks for the quick reply.
>
> I would like to test that but i'm kind of new to nginx and also our website
> can't have any downtime.
> Any other suggestion maybe?

Try to reproduce the problem in a sandbox, it should help with your downtime concerns.

I'm also not sure if Piotr Sikora, author of the slowfs cache module, is reading this list on a regular basis, so you may want to CC him once you have some data to work with.

-- 
Maxim Dounin
http://nginx.org/en/donation.html

From andrejaenisch at googlemail.com Wed Mar 6 12:34:39 2013
From: andrejaenisch at googlemail.com (Andre Jaenisch)
Date: Wed, 6 Mar 2013 13:34:39 +0100
Subject: Need help to convert .htaccess to nginx rules
In-Reply-To: 
References: 
Message-ID: 

2013/3/6 onel0ve :
> I just need to convert this to nginx rules .

I guess the point is that you should learn how this can be achieved, so you can do it on your own the next time. If you have already tried, it may be helpful to show your attempts.

Look, it seems that you want others to do your job. As far as I can tell, many people will refuse to do so.

So show some love and see what comes back ;-)

Regards, André

From mdounin at mdounin.ru Wed Mar 6 12:39:34 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 6 Mar 2013 16:39:34 +0400
Subject: Problem request_timeout not working with proxy_next_upstream on proxy_connect_timeout but proxy_read_timeout
In-Reply-To: <85e6f89cf6d9d89c789ca68b0f4c67c5.NginxMailingListEnglish@forum.nginx.org>
References: <85e6f89cf6d9d89c789ca68b0f4c67c5.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20130306123934.GS15378@mdounin.ru>

Hello!

On Wed, Mar 06, 2013 at 05:28:12AM -0500, yvlasov wrote:

> Hello
>
> In our setup we have nginx as a front-end and several back-ends. The
> problem is our load profile: we have a lot of simple and fast HTTP
> requests, and very few requests that are very heavy in terms of time
> and back-end CPU.
>
> So my idea is to use proxy_next_upstream for simple requests as usual,
> and it works perfectly. For heavy requests, selected by URL, I want to
> pass them through to the back-end with the lowest CPU load by
> specifying a small proxy_connect_timeout and using
> "proxy_next_upstream timeout". But when the whole system is overloaded
> with heavy requests, I don't want them to travel through all the
> back-ends, because proxy_read_timeout is about 1 minute.
>
> I was hoping that setting request_timeout to the same value as
> proxy_read_timeout would prevent heavy requests from travelling
> through all upstreams, but they do.
>
> I've found a similar topic, but the proposition there was to add two
> new options to proxy_next_upstream, such as timeout_tcp and
> timeout_http, or something similar.
>
> Thanks in advance for your advice and comments.

I think that some aggregate upstream timeout, which would prevent switching to the next upstream server once it has passed, would be a better solution to this problem.

-- 
Maxim Dounin
http://nginx.org/en/donation.html
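
An aggregate limit of this kind later appeared in nginx as proxy_next_upstream_timeout (together with proxy_next_upstream_tries) in 1.7.5; it was not available at the time of this thread. With it, the intent of the heavy-request location could be written roughly as follows; the values are illustrative, not taken from the original setup:

    location ^~ /very_heavy_requests {
        proxy_connect_timeout        5ms;
        proxy_read_timeout           60s;
        proxy_next_upstream          timeout;
        # stop trying further upstream servers once 60s have passed in total
        proxy_next_upstream_timeout  60s;
        proxy_pass                   http://super_upstream;
    }
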
From mdounin at mdounin.ru Wed Mar 6 12:58:19 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 6 Mar 2013 16:58:19 +0400
Subject: =?UTF-8?B?UmU6IE5naXhuINC90LUg0L7RgtC00LDQtdGCINGE0LDQudC70Ysg0LHQvtC70Yw=?= =?UTF-8?B?0YjQtSAxINC80LE=?=
In-Reply-To: <418275b01b063d5bbb35fbca00f50ac0@ruby-forum.com>
References: <418275b01b063d5bbb35fbca00f50ac0@ruby-forum.com>
Message-ID: <20130306125819.GU15378@mdounin.ru>

Hello!

On Wed, Mar 06, 2013 at 07:43:54AM +0100, Evgeny T. wrote:

> Nginx does not serve files larger than 1 MB.
>
> The site is Rails + Nginx + Unicorn. Everything works, except video files
> played through JWPlayer. The problem is that a video of about 1.8 MB never
> downloads completely, so it does not play back.

Check the error log, most likely the reason is written there. I suspect that when proxying, the response cannot be written to a temporary file under proxy_temp_path for some reason (e.g., permissions).

-- 
Maxim Dounin
http://nginx.org/en/donation.html

From calin.don at gmail.com Wed Mar 6 13:28:13 2013
From: calin.don at gmail.com (Calin Don)
Date: Wed, 6 Mar 2013 15:28:13 +0200
Subject: Limit serving to responses only below certain size
Message-ID: 

Hi,

Is there a way to serve files only below a certain size? E.g. return 403 on files bigger than 5MB?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From list-reader at koshie.fr Wed Mar 6 13:40:55 2013
From: list-reader at koshie.fr (=?utf-8?Q?GASPARD_K=C3=A9vin?=)
Date: Wed, 06 Mar 2013 14:40:55 +0100
Subject: Convert Apache rewrite to NGinx
In-Reply-To: <20130303190958.GH32392@craic.sysops.org>
References: <20130303150109.GG32392@craic.sysops.org> <20130303190958.GH32392@craic.sysops.org>
Message-ID: 

Hi,

> On Sun, Mar 03, 2013 at 07:38:39PM +0100, GASPARD Kévin wrote:
> > Hi there,
> >> >Probably a single extra try_files line will work for you.
> >> This is my new config file :
> >> location ~ \.php$ {
> >>     try_files $uri $uri/ /index.php?q=$uri&$args;
> >> }
>
> You will probably find things much easier when you fully understand what
> is written at http://nginx.org/r/location

I understand a little more how it works now :).

>> http://doinalefort.fr/2013/hello-world/
>
> One request is handled in one location{}. That request does not match
> this location, and so will not be handled in this location.
>
> The try_files directive should be in a location that does match --
> perhaps "location / {}".

I've created a new location as you suggested and put the try_files directive into it, and it works.

Cordially, thanks :) !

From kirilk at cloudxcel.com Wed Mar 6 13:44:37 2013
From: kirilk at cloudxcel.com (Kiril Kalchev)
Date: Wed, 6 Mar 2013 15:44:37 +0200
Subject: Nginx proxy_intercept_errors
Message-ID: 

Hi,

I have noticed that when I set 'proxy_intercept_errors on;' in my nginx config, it kills the TCP connection to the origin server if it returns 4xx or 5xx. This is my example config to reproduce the situation (https://gist.github.com/kirilkalchev/5098882). I am in a situation where my backend server returns only 403 and 404 (it is some kind of home-made authentication system: 403 means go away, 404 means continue) and I need to display different things in these cases, but without keep-alive, connections to the backend are exhausted pretty fast.

Thanks,
Kiril
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 3565 bytes Desc: not available URL: From mdounin at mdounin.ru Wed Mar 6 13:54:06 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 6 Mar 2013 17:54:06 +0400 Subject: Nginx proxy_intercept_errors In-Reply-To: References: Message-ID: <20130306135406.GY15378@mdounin.ru> Hello! On Wed, Mar 06, 2013 at 03:44:37PM +0200, Kiril Kalchev wrote: > I have noticed that when I set 'proxy_intercept_errors on;' in > my nginx config it kills tcp connection to the origin server if > it returns 4xx or 5xx? > This is my example config to reproduce the > situation(https://gist.github.com/kirilkalchev/5098882). I am > in a situation where my backend server returns only 403 and 404 > (it is some kind of home made authentication system 403 means go > away 404 means continue) and I need to display different things > in this cases, but without keep alive connections to the backend > are exhausted pretty fast. With proxy_intercept_errors nginx doesn't read the response body if a response returned is the error, and the upstream connection can't be kept alive due to this (unless there is no body and it's known after reading response headers). -- Maxim Dounin http://nginx.org/en/donation.html From kirilk at cloudxcel.com Wed Mar 6 14:02:30 2013 From: kirilk at cloudxcel.com (Kiril Kalchev) Date: Wed, 6 Mar 2013 16:02:30 +0200 Subject: Nginx proxy_intercept_errors In-Reply-To: <20130306135406.GY15378@mdounin.ru> References: <20130306135406.GY15378@mdounin.ru> Message-ID: Is there any way to force nginx to read request body? I really don't care about this overhead, I hit connection limit much more faster. Thank you for the super fast answer. Regards, Kiril On Mar 6, 2013, at 3:54 PM, Maxim Dounin wrote: > Hello! > > On Wed, Mar 06, 2013 at 03:44:37PM +0200, Kiril Kalchev wrote: > >> I have noticed that when I set 'proxy_intercept_errors on;' in >> my nginx config it kills tcp connection to the origin server if >> it returns 4xx or 5xx? >> This is my example config to reproduce the >> situation(https://gist.github.com/kirilkalchev/5098882). I am >> in a situation where my backend server returns only 403 and 404 >> (it is some kind of home made authentication system 403 means go >> away 404 means continue) and I need to display different things >> in this cases, but without keep alive connections to the backend >> are exhausted pretty fast. > > With proxy_intercept_errors nginx doesn't read the response body > if a response returned is the error, and the upstream connection > can't be kept alive due to this (unless there is no body and it's > known after reading response headers). > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3565 bytes Desc: not available URL: From mdounin at mdounin.ru Wed Mar 6 14:31:59 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 6 Mar 2013 18:31:59 +0400 Subject: Nginx proxy_intercept_errors In-Reply-To: References: <20130306135406.GY15378@mdounin.ru> Message-ID: <20130306143159.GA15378@mdounin.ru> Hello! On Wed, Mar 06, 2013 at 04:02:30PM +0200, Kiril Kalchev wrote: > Is there any way to force nginx to read request body? I really > don't care about this overhead, I hit connection limit much more > faster. 
Thank you for the super fast answer. No, there is no way to force nginx to read response body - errors interception happens right after reading response headers and before the body is read. (Well, you may configure another proxy layer without intercept errors, but this probably doesn't counts as a real solution to what you are trying to do.) On the other hand, if you have only 403/404 responses you want to intercept - you may force your backend to return only headers by using proxy_method HEAD; in your config (see http://nginx.org/r/proxy_method). -- Maxim Dounin http://nginx.org/en/donation.html From kirilk at cloudxcel.com Wed Mar 6 14:45:33 2013 From: kirilk at cloudxcel.com (Kiril Kalchev) Date: Wed, 6 Mar 2013 16:45:33 +0200 Subject: Nginx proxy_intercept_errors In-Reply-To: <20130306143159.GA15378@mdounin.ru> References: <20130306135406.GY15378@mdounin.ru> <20130306143159.GA15378@mdounin.ru> Message-ID: Just for the record, I think I found a kind of solution. It looks good if my backend returns http codes 3xx. I have tried with 333 and 334 and it looks great. I know it is an ugly hack, but my findings may help to other poor souls. I hope this behavior will not change in the next versions. Regards, Kiril On Mar 6, 2013, at 4:31 PM, Maxim Dounin wrote: > Hello! > > On Wed, Mar 06, 2013 at 04:02:30PM +0200, Kiril Kalchev wrote: > >> Is there any way to force nginx to read request body? I really >> don't care about this overhead, I hit connection limit much more >> faster. Thank you for the super fast answer. > > No, there is no way to force nginx to read response body - errors > interception happens right after reading response headers and > before the body is read. (Well, you may configure another proxy > layer without intercept errors, but this probably doesn't counts > as a real solution to what you are trying to do.) > > On the other hand, if you have only 403/404 responses you want to > intercept - you may force your backend to return only headers by > using > > proxy_method HEAD; > > in your config (see http://nginx.org/r/proxy_method). > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3565 bytes Desc: not available URL: From lists at ruby-forum.com Wed Mar 6 14:53:38 2013 From: lists at ruby-forum.com (Joseph O.) Date: Wed, 06 Mar 2013 15:53:38 +0100 Subject: Problem configuring Nginx to host SpreeCommerce - 502 error In-Reply-To: References: Message-ID: <80e159dc6785d87029c109275453fe27@ruby-forum.com> Ah, thank you. Removing the site configuration file did fix the gateway error, but now the server is still pointing at the default Nginx welcome page. Can you (or anyone else here) give me any advice on how to get the server to point to my actual service? BTW, you pointed out that there was an upstream Unicorn server configured, something that I wasn't sure was necessary when I was following the instructions; the Unicorn workers should in fact be on the same host as the web server. I had thought that the way it was defined, using a temporary socket, was correct for the layout I have, but it sounds as if I misunderstood this (and probably a lot of other things). Can you advise me on configuring the Unicorn workers correctly? 
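(For reference, the usual shape of an nginx front end talking to a Unicorn worker over a unix socket looks roughly like the sketch below; the socket path, server name and root directory here are placeholders rather than values taken from this deployment.)

    upstream spree_app {
        # Unicorn master bound to a unix socket; the path must match unicorn.rb
        server unix:/tmp/spree.sock fail_timeout=0;
    }

    server {
        listen 80;
        server_name shop.example.com;            # placeholder
        root /var/www/spree/current/public;      # placeholder Rails "public" dir

        location / {
            # serve static files directly, hand everything else to Unicorn
            try_files $uri @unicorn;
        }

        location @unicorn {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://spree_app;
        }
    }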
If there is any additional information or any other configuration files which would help, please let me know. -- Posted via http://www.ruby-forum.com/. From contact at jpluscplusm.com Wed Mar 6 15:30:07 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 6 Mar 2013 15:30:07 +0000 Subject: Limit serving to responses only below certain size In-Reply-To: References: Message-ID: On 6 March 2013 13:28, Calin Don wrote: > Hi, > > Is there a way to server files only below a certain size? > eg. Return 403 on files bigger than 5MB? Assuming you're talking about local filesystem files, you might try to proxy_pass back round to yourself, and do an if() based on $upstream_http_content_length. If you're already proxy'ing, you could use the same technique but without the double nginx hit. I'd personally look at /how/ too-large files are getting onto disk, and fix that, however. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From contact at jpluscplusm.com Wed Mar 6 15:39:19 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 6 Mar 2013 15:39:19 +0000 Subject: Need help to convert .htaccess to nginx rules In-Reply-To: References: Message-ID: On 6 March 2013 12:34, Andre Jaenisch wrote: > 2013/3/6 onel0ve : >> I just need to convert this to nginx rules . > > I guess, the thing is, that you shall learn, how this can be achieved. > So you can do it on your own the next time. > In case you've tried it yet, it may be helpful to show your attempts. > Look, it seems that you want others to do your job. As far as I can > tell many people will refuse to do so. > > So show some love and see what comes back ;-) +1. Show your working, show where you got stuck or what doesn't work, and you'll find people are much more eager to help you :-) Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From mdounin at mdounin.ru Wed Mar 6 15:44:00 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 6 Mar 2013 19:44:00 +0400 Subject: Nginx proxy_intercept_errors In-Reply-To: References: <20130306135406.GY15378@mdounin.ru> <20130306143159.GA15378@mdounin.ru> Message-ID: <20130306154359.GD15378@mdounin.ru> Hello! On Wed, Mar 06, 2013 at 04:45:33PM +0200, Kiril Kalchev wrote: > Just for the record, I think I found a kind of solution. It > looks good if my backend returns http codes 3xx. I have tried > with 333 and 334 and it looks great. I know it is an ugly hack, > but my findings may help to other poor souls. I hope this > behavior will not change in the next versions. I would suppose it works as these responses are returned with "Content-Length: 0" by your backend. As already mentioned in the very first reply, connections are kept alive if it's known from response headers that there is no body. -- Maxim Dounin http://nginx.org/en/donation.html From kirilk at cloudxcel.com Wed Mar 6 15:55:22 2013 From: kirilk at cloudxcel.com (Kiril Kalchev) Date: Wed, 6 Mar 2013 17:55:22 +0200 Subject: Nginx proxy_intercept_errors In-Reply-To: <20130306154359.GD15378@mdounin.ru> References: <20130306135406.GY15378@mdounin.ru> <20130306143159.GA15378@mdounin.ru> <20130306154359.GD15378@mdounin.ru> Message-ID: Yes you are right. Thank you. On Mar 6, 2013, at 5:44 PM, Maxim Dounin wrote: > Hello! > > On Wed, Mar 06, 2013 at 04:45:33PM +0200, Kiril Kalchev wrote: > >> Just for the record, I think I found a kind of solution. It >> looks good if my backend returns http codes 3xx. 
I have tried >> with 333 and 334 and it looks great. I know it is an ugly hack, >> but my findings may help to other poor souls. I hope this >> behavior will not change in the next versions. > > I would suppose it works as these responses are returned with > "Content-Length: 0" by your backend. As already mentioned in the > very first reply, connections are kept alive if it's known from > response headers that there is no body. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3565 bytes Desc: not available URL: From nginx-forum at nginx.us Wed Mar 6 16:33:20 2013 From: nginx-forum at nginx.us (mex) Date: Wed, 06 Mar 2013 11:33:20 -0500 Subject: Problem configuring Nginx to host SpreeCommerce - 502 error In-Reply-To: <80e159dc6785d87029c109275453fe27@ruby-forum.com> References: <80e159dc6785d87029c109275453fe27@ruby-forum.com> Message-ID: Joseph O. Wrote: ------------------------------------------------------- > Ah, thank you. Removing the site configuration file did fix the > gateway > error, but now the server is still pointing at the default Nginx > welcome > page. Can you (or anyone else here) give me any advice on how to get > the > server to point to my actual service? proxy_pass should do the trick maybe you post your actual config. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236971,237014#msg-237014 From lists at ruby-forum.com Wed Mar 6 16:55:19 2013 From: lists at ruby-forum.com (Joseph O.) Date: Wed, 06 Mar 2013 17:55:19 +0100 Subject: Problem configuring Nginx to host SpreeCommerce - 502 error In-Reply-To: References: Message-ID: I'm not sure what you mean here; Which aspects of the actual configuration? The Capistrano deployment, the Unicorn.rb file, or some other part of the Nginx configuration? I've attached the former two, with the sensitive data masked out of course, but I'm not certain what else you would need. I did correct the socket path (it is now set to /tmp/spree.sock in both the deploy.rb and nginx.conf), but I am unclear about what to do with the proxy_pass settings. Attachments: http://www.ruby-forum.com/attachment/8200/unicorn.rb http://www.ruby-forum.com/attachment/8201/deploy.rb -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Wed Mar 6 18:36:29 2013 From: nginx-forum at nginx.us (dakun) Date: Wed, 06 Mar 2013 13:36:29 -0500 Subject: Forwarding to upstream server at port specified in url query paramter In-Reply-To: References: Message-ID: Thanks! The Resolver did the trick. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236969,237020#msg-237020 From calin.don at gmail.com Wed Mar 6 19:28:13 2013 From: calin.don at gmail.com (Calin Don) Date: Wed, 6 Mar 2013 21:28:13 +0200 Subject: Limit serving to responses only below certain size In-Reply-To: References: Message-ID: Unfortunately the way big files are getting there is beyond my control. On Wed, Mar 6, 2013 at 5:30 PM, Jonathan Matthews wrote: > On 6 March 2013 13:28, Calin Don wrote: > > Hi, > > > > Is there a way to server files only below a certain size? > > eg. Return 403 on files bigger than 5MB? > > Assuming you're talking about local filesystem files, you might try to > proxy_pass back round to yourself, and do an if() based on > $upstream_http_content_length. 
> > If you're already proxy'ing, you could use the same technique but > without the double nginx hit. > > I'd personally look at /how/ too-large files are getting onto disk, > and fix that, however. > > Jonathan > -- > Jonathan Matthews // Oxford, London, UK > http://www.jpluscplusm.com/contact.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Mar 6 20:04:47 2013 From: nginx-forum at nginx.us (mex) Date: Wed, 06 Mar 2013 15:04:47 -0500 Subject: Problem configuring Nginx to host SpreeCommerce - 502 error In-Reply-To: References: Message-ID: your nginx-conf please that points to your upstream References: Message-ID: Are you saying that you do not have administrative control of your system? Jonathan is right - set policies that disallow large file sizes and enforce them. If necessary use chron to check for large files and remove them. On Wed, Mar 6, 2013 at 2:28 PM, Calin Don wrote: > Unfortunately the way big files are getting there is beyond my control. > > > On Wed, Mar 6, 2013 at 5:30 PM, Jonathan Matthews > wrote: > >> On 6 March 2013 13:28, Calin Don wrote: >> > Hi, >> > >> > Is there a way to server files only below a certain size? >> > eg. Return 403 on files bigger than 5MB? >> >> Assuming you're talking about local filesystem files, you might try to >> proxy_pass back round to yourself, and do an if() based on >> $upstream_http_content_length. >> >> If you're already proxy'ing, you could use the same technique but >> without the double nginx hit. >> >> I'd personally look at /how/ too-large files are getting onto disk, >> and fix that, however. >> >> Jonathan >> -- >> Jonathan Matthews // Oxford, London, UK >> http://www.jpluscplusm.com/contact.html >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Wed Mar 6 22:14:41 2013 From: lists at ruby-forum.com (Joseph O.) Date: Wed, 06 Mar 2013 23:14:41 +0100 Subject: Problem configuring Nginx to host SpreeCommerce - 502 error In-Reply-To: References: Message-ID: <95f7dbf2b5c65cceeab0933c1cb00648@ruby-forum.com> *sigh* Clearly I am missing something here, as the only nginx.conf file I have is the one I posted last night. There is no upstream server; the code for that was from the example config file I was trying to modify for my purposes. It seems that this copypasta approach isn't adequate for setting this up properly (I can't say I'm too surprised, but the Spree documentation I was going by implied that it would be), and that I will need to actually understand how Nginx works if I am to get anywhere with this. At this point, a tutorial or other documentation is probably what I need most. I had hoped that the problem was fairly simple, and could be fixed by some tweaking; it now is clear that the Spree documentation isn't work the electrons it is printed on. -- Posted via http://www.ruby-forum.com/. 
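(Picking up the proxy_intercept_errors thread above: Maxim's proxy_method HEAD suggestion could be sketched roughly as follows. The upstream address, ports and named locations are illustrative only, not taken from Kiril's actual setup.)

    upstream auth_backend {
        server 127.0.0.1:8080;        # placeholder backend that answers only 403/404
        keepalive 16;
    }

    server {
        listen 8081;

        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_method HEAD;                 # backend signals via status code, no body needed
            proxy_intercept_errors on;
            proxy_pass http://auth_backend;

            error_page 403 = @denied;
            error_page 404 = @allowed;
        }

        location @denied  { return 403; }
        location @allowed { return 204; }      # stand-in for the "continue" branch
    }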
From piotr.sikora at frickle.com Wed Mar 6 23:19:55 2013 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Thu, 7 Mar 2013 00:19:55 +0100 Subject: Cache keeps growing despite Max_size limit In-Reply-To: <20130306121334.GP15378@mdounin.ru> References: <20130306121334.GP15378@mdounin.ru> Message-ID: Hey, > As a quick test you may try switching to a proxy + proxy_cache > setup instead of slowfs_cache to see if slowfs_cache module > problem or something more general. It's a known issue with my ngx_slowfs_cache module. Best regards, Piotr Sikora < piotr.sikora at frickle.com > From kworthington at gmail.com Thu Mar 7 01:38:40 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 6 Mar 2013 20:38:40 -0500 Subject: nginx-1.3.14 In-Reply-To: <20130305145544.GB15378@mdounin.ru> References: <20130305145544.GB15378@mdounin.ru> Message-ID: Hello Nginx Users, Now available: Nginx 1.3.14 For Windows http://goo.gl/WFaWC (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Mar 5, 2013 at 9:55 AM, Maxim Dounin wrote: > Changes with nginx 1.3.14 05 Mar > 2013 > > *) Feature: $connections_active, $connections_reading, and > $connections_writing variables in the ngx_http_stub_status_module. > > *) Feature: support of WebSocket connections in the > ngx_http_uwsgi_module and ngx_http_scgi_module. > > *) Bugfix: in virtual servers handling with SNI. > > *) Bugfix: new sessions were not always stored if the > "ssl_session_cache > shared" directive was used and there was no free space in shared > memory. > Thanks to Piotr Sikora. > > *) Bugfix: multiple X-Forwarded-For headers were handled incorrectly. > Thanks to Neal Poole for sponsoring this work. > > *) Bugfix: in the ngx_http_mp4_module. > Thanks to Gernot Vormayr. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From duanemulder at rattyshack.ca Thu Mar 7 02:48:57 2013 From: duanemulder at rattyshack.ca (duanemulder at rattyshack.ca) Date: Thu, 07 Mar 2013 02:48:57 Subject: =?UTF-8?B?UmU6IE5naXhuINC90LUg0L7RgtC00LDQtdGCINGE0LDQudC70Ysg0LHQvtC70Yw=?= =?UTF-8?B?0YjQtSAxINC80LE=?= In-Reply-To: <20130306125819.GU15378@mdounin.ru> References: <418275b01b063d5bbb35fbca00f50ac0@ruby-forum.com> <20130306125819.GU15378@mdounin.ru> Message-ID: <20130307024858.4390B33406B@homiemail-a13.g.dreamhost.com> An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Mar 7 05:47:05 2013 From: nginx-forum at nginx.us (jan5134) Date: Thu, 07 Mar 2013 00:47:05 -0500 Subject: Cache keeps growing despite Max_size limit In-Reply-To: References: Message-ID: <3a40bd192ef0e5f8e25bc984f859b08a.NginxMailingListEnglish@forum.nginx.org> Hi, Thanks for your contribution to the nginx community Piotr and your reply. Do you have any suggestions on how to bypass this problem? Any update on the module in the near future that might fix this issue? 
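(For anyone wanting to run the quick cross-check Maxim mentions above -- a plain proxy + proxy_cache setup instead of slowfs_cache -- the cache side would look roughly like this; paths, zone name and sizes are placeholders.)

    # http{} level
    proxy_cache_path /var/cache/nginx/test levels=1:2 keys_zone=testcache:10m
                     max_size=10g inactive=60m;

    server {
        listen 8082;                                 # throwaway test vhost

        location / {
            proxy_cache       testcache;
            proxy_cache_valid 200 302 10m;
            proxy_pass        http://127.0.0.1:8080; # placeholder origin
        }
    }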
Thanks Jan Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236987,237036#msg-237036 From nginx-forum at nginx.us Thu Mar 7 08:26:33 2013 From: nginx-forum at nginx.us (mex) Date: Thu, 07 Mar 2013 03:26:33 -0500 Subject: Problem configuring Nginx to host SpreeCommerce - 502 error In-Reply-To: <95f7dbf2b5c65cceeab0933c1cb00648@ruby-forum.com> References: <95f7dbf2b5c65cceeab0933c1cb00648@ruby-forum.com> Message-ID: <20a69f603ed30ecef53c619def92a0aa.NginxMailingListEnglish@forum.nginx.org> Joseph O. Wrote: ------------------------------------------------------- OK, first make sure your Rails app is loading as expected and working without nginx in front; you can check it from that machine using w3m/lynx or telnet. If this works, we'll check the nginx part. > *sigh* Clearly I am missing something here, as the only nginx.conf > file > I have is the one I posted last night. There is no upstream server; > the > code for that was from the example config file I was trying to modify > for my purposes. You define an upstream {} section but you don't use it later: instead of pointing proxy_pass at that upstream, you configure proxy_pass with a hostname, so there's a mismatch in your config. > > It seems that this copypasta approach isn't adequate for setting this > up properly +1 :) You'll find some howtos and best-practice guides for various applications here: http://wiki.nginx.org/Configuration but understanding what you're doing is key for long-term happiness. Basically, your setup is quite simple and straightforward. regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236971,237037#msg-237037 From nginx at westbrook.com Thu Mar 7 09:17:04 2013 From: nginx at westbrook.com (E. Westbrook) Date: Thu, 07 Mar 2013 02:17:04 -0700 Subject: Want to access UNIX environment variable In-Reply-To: <27f59c792f1cde8d9cfcff61b589886a.NginxMailingListEnglish@forum.nginx.org> References: <27f59c792f1cde8d9cfcff61b589886a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <51385B10.7050309@westbrook.com> Posting a bit late to this thread, but thought I'd contribute. Having found no better way myself, the way I "import" environment variables into my nginx configurations is as follows. At top config level: env MYVAR; At http level: perl_set $myvar 'sub { return $ENV{"MYVAR"}; }'; Then, for example, at server level: root $myvar; Works a treat. Obviously there must be an overhead cost with this approach, but either it's small, or I'm small, because I haven't noticed it. (And even then, if I found it significant, I'd probably get to work on a proper patch, since environment variables are rather central to my general service configuration strategy.) I'd love to know if anyone does it differently and/or better. If a helper is how it must be done, I'd actually prefer to use Lua -- but "perl_set" is documented for use at the http config level, whereas "lua_set" (for whatever reason) is not. $0.02, Eric On 02/27/2013 11:22 PM, amodpandey wrote: > Thank you. > > I am looking for an simpler ( direct ) approach. For now I have put a sed > script in my bounce nginx which does that. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236654,236706#msg-236706 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From tom at miramedia.co.uk Thu Mar 7 09:30:01 2013 From: tom at miramedia.co.uk (Tom Barrett) Date: Thu, 7 Mar 2013 09:30:01 +0000 Subject: 'Primary' domain in server block? 
Message-ID: Hi I'm doing some work with PayPal integration. My server block looks like this: server { server_name dev01.localmap.net dev01.devsite.co.uk; .. } The work is being done in under 'dev01.devsite.co.uk' domain. However, for link backs from PayPal, it is working out that it should be ' dev01.localmap.net'. If I switch the two around in the server block, then PayPal picks that up. How is PayPal doing that? And are there any config settings I can use to change this behaviour? It is not a major issue, mainly it is confusing some testers who are 'losing' cookie data. I'm looking to explain the situation and make sure we have a nice solution to it. Thanks, Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu Mar 7 12:32:05 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 7 Mar 2013 17:32:05 +0500 Subject: I/O error on uploading video upto 4mb Message-ID: Hello, We recentyl changed our ip of the server to upgrade port to 2Gbps but after upgrading port, we are unable to upload .flv video upto 4mb. Everything was working fine before the up-gradation. Following is the nginx.conf file. Please i need a quick help :- user nginx; worker_processes 16; worker_rlimit_nofile 300000; #2 filehandlers for each connection #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 6000; use epoll; } http { include mime.types; default_type application/octet-stream; client_body_buffer_size 128K; sendfile_max_chunk 128k; client_max_body_size 800m; client_header_buffer_size 256k; large_client_header_buffers 4 256k; output_buffers 1 512k; # fastcgi_buffers 512 8k; # proxy_buffers 512 8k; # fastcgi_read_timeout 300s; server_tokens off; #Conceals nginx version #access_log logs/access.log main; access_log off; sendfile off; #tcp_nopush on; # gzip on; # gzip_vary on; # gzip_disable "MSIE [1-6]\."; # gzip_proxied any; # gzip_http_version 1.1; # gzip_min_length 1000; # gzip_comp_level 6; # gzip_buffers 16 8k; # You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU # gzip_types text/plain text/xml text/css application/x-javascript application/xml application/xml+rss text/javascript application/atom+xml; ignore_invalid_headers on; client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; keepalive_timeout 0; reset_timedout_connection on; Best Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From anthony.kerz at gmail.com Thu Mar 7 13:11:21 2013 From: anthony.kerz at gmail.com (anthony kerz) Date: Thu, 7 Mar 2013 08:11:21 -0500 Subject: build from source pointing to installed libs (e.g. pcre) Message-ID: hi, i'm trying to build from source on an ubuntu system which has libpcre3 installed: tony at quantal:~/Downloads/nginx-1.3.14$ dpkg -l | grep pcre ii libpcre3:i386 1:8.30-5ubuntu1 i386 Perl 5 Compatible Regular Expression Library - runtime files but i can't for the life of me get the configure script to resolve to this, libpcre.so.3 is in /lib/i386-linux-gnu, so i've tried: --with-ld-opt="-L/lib/i386-linux-gnu" but still get: ----------------- ... checking for PCRE library ... not found checking for PCRE library in /usr/local/ ... not found checking for PCRE library in /usr/include/pcre/ ... not found checking for PCRE library in /usr/pkg/ ... not found checking for PCRE library in /opt/local/ ... not found ./configure: error: the HTTP rewrite module requires the PCRE library. 
You can either disable the module by using --without-http_rewrite_module option, or install the PCRE library into the system, or build the PCRE library statically from the source with nginx by using --with-pcre= option. ---------------- any guidance appreciated! thx, tony. -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotr.sikora at frickle.com Thu Mar 7 13:17:13 2013 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Thu, 7 Mar 2013 14:17:13 +0100 Subject: [ANNOUNCE] ngx_slowfs_cache-1.10 Message-ID: Version 1.10 is now available at: http://labs.frickle.com/nginx_ngx_slowfs_cache/ GitHub repository is available at: http://github.com/FRiCKLE/ngx_slowfs_cache/ Changes: 2013-03-07 VERSION 1.10 * Fix compatibility with nginx-1.1.12+. Due to the changes in cache index usage accounting (1 per request instead of 1 per access) initial cache insert of "small" file was decreasing usage count, rendering index entry invalid. Because cache manager ignores invalid entries, expired cache files were not being removed from the cache, which forced it to outgrow specified "max_size" value. Best regards, Piotr Sikora < piotr.sikora at frickle.com > From piotr.sikora at frickle.com Thu Mar 7 13:30:26 2013 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Thu, 7 Mar 2013 14:30:26 +0100 Subject: Cache keeps growing despite Max_size limit In-Reply-To: <3a40bd192ef0e5f8e25bc984f859b08a.NginxMailingListEnglish@forum.nginx.org> References: <3a40bd192ef0e5f8e25bc984f859b08a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hey Jan, > Any update on the module in the near future that might fix this issue? Latest release shouldn't have this problem: http://labs.frickle.com/files/ngx_slowfs_cache-1.10.tar.gz Best regards, Piotr Sikora < piotr.sikora at frickle.com > From nginx-forum at nginx.us Thu Mar 7 13:40:11 2013 From: nginx-forum at nginx.us (athukral) Date: Thu, 07 Mar 2013 08:40:11 -0500 Subject: shibboleth and authorizer mode In-Reply-To: References: Message-ID: <498a7b7761509415d9acccd92e7cfa63.NginxMailingListEnglish@forum.nginx.org> Hi, I need to implement shebboleth SP with nginx webserver but got to know that nginx doesn't support autorizer mode. I saw that the above post is dated very old. I would like to check whether this is still the case or has there been any fix provided from nginx for this ? Is there any way this can be achieved ? Regards, Amit Thukral Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218694,237063#msg-237063 From nginx-forum at nginx.us Thu Mar 7 13:47:04 2013 From: nginx-forum at nginx.us (yvlasov) Date: Thu, 07 Mar 2013 08:47:04 -0500 Subject: Problem request_timeout not working with proxy_next_upstream on proxy_connect_timeout but proxy_read_timeout In-Reply-To: <20130306123934.GS15378@mdounin.ru> References: <20130306123934.GS15378@mdounin.ru> Message-ID: <386929792801fa748b60fecc9d04ad2d.NginxMailingListEnglish@forum.nginx.org> Good idea but we have to keep in mind it should depend on location context. THX Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236982,237064#msg-237064 From nginx-forum at nginx.us Thu Mar 7 14:09:37 2013 From: nginx-forum at nginx.us (jan5134) Date: Thu, 07 Mar 2013 09:09:37 -0500 Subject: Cache keeps growing despite Max_size limit In-Reply-To: References: Message-ID: <638d77073311b69db0b39a0a3171b1c5.NginxMailingListEnglish@forum.nginx.org> Thanks for that quick update Piotr. Will start with the testing on a new server that i'll set up and let you know if there is anything else. 
Thanks Jan Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236987,237065#msg-237065 From shahzaib.cb at gmail.com Thu Mar 7 14:40:31 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 7 Mar 2013 19:40:31 +0500 Subject: I/O error on uploading video upto 4mb In-Reply-To: References: Message-ID: The issue has been solved, it was monit tool which was creating issues. Thanks :) Best Regards. Shahzaib On Thu, Mar 7, 2013 at 5:32 PM, shahzaib shahzaib wrote: > Hello, > > We recentyl changed our ip of the server to upgrade port to 2Gbps > but after upgrading port, we are unable to upload .flv video upto 4mb. > Everything was working fine before the up-gradation. Following is the > nginx.conf file. Please i need a quick help :- > > user nginx; > worker_processes 16; > worker_rlimit_nofile 300000; #2 filehandlers for each connection > #error_log logs/error.log; > #error_log logs/error.log notice; > #error_log logs/error.log info; > > #pid logs/nginx.pid; > > > events { > worker_connections 6000; > use epoll; > } > http { > include mime.types; > default_type application/octet-stream; > client_body_buffer_size 128K; > sendfile_max_chunk 128k; > client_max_body_size 800m; > client_header_buffer_size 256k; > large_client_header_buffers 4 256k; > output_buffers 1 512k; > # fastcgi_buffers 512 8k; > # proxy_buffers 512 8k; > # fastcgi_read_timeout 300s; > server_tokens off; #Conceals nginx version > #access_log logs/access.log main; > access_log off; > sendfile off; > #tcp_nopush on; > # gzip on; > # gzip_vary on; > # gzip_disable "MSIE [1-6]\."; > # gzip_proxied any; > # gzip_http_version 1.1; > # gzip_min_length 1000; > # gzip_comp_level 6; > # gzip_buffers 16 8k; > # You can remove image/png image/x-icon image/gif image/jpeg if you have > slow CPU > # gzip_types text/plain text/xml text/css application/x-javascript > application/xml application/xml+rss text/javascript application/atom+xml; > ignore_invalid_headers on; > client_header_timeout 3m; > client_body_timeout 3m; > send_timeout 3m; > keepalive_timeout 0; > reset_timedout_connection on; > > Best Regards. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From edho at myconan.net Thu Mar 7 15:04:34 2013 From: edho at myconan.net (Edho Arief) Date: Fri, 8 Mar 2013 00:04:34 +0900 Subject: build from source pointing to installed libs (e.g. pcre) In-Reply-To: References: Message-ID: On Thu, Mar 7, 2013 at 10:11 PM, anthony kerz wrote: > hi, > > i'm trying to build from source on an ubuntu system which has libpcre3 > installed: > > tony at quantal:~/Downloads/nginx-1.3.14$ dpkg -l | grep pcre > ii libpcre3:i386 1:8.30-5ubuntu1 i386 Perl 5 Compatible Regular Expression > Library - runtime files > > but i can't for the life of me get the configure script to resolve to this, > libpcre.so.3 is in /lib/i386-linux-gnu, so i've tried: > > --with-ld-opt="-L/lib/i386-linux-gnu" > > but still get: > ----------------- > ... > checking for PCRE library ... not found > checking for PCRE library in /usr/local/ ... not found > checking for PCRE library in /usr/include/pcre/ ... not found > checking for PCRE library in /usr/pkg/ ... not found > checking for PCRE library in /opt/local/ ... not found > > ./configure: error: the HTTP rewrite module requires the PCRE library. > You can either disable the module by using --without-http_rewrite_module > option, or install the PCRE library into the system, or build the PCRE > library > statically from the source with nginx by using --with-pcre= option. 
> ---------------- > > any guidance appreciated! > Building stuff from source requires installing the development (-dev/-devel) packages of its dependencies on Ubuntu, RHEL, and some other distros. (read: try installing libpcre3-dev) -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From anatoly at sonru.com Thu Mar 7 15:13:48 2013 From: anatoly at sonru.com (Anatoly Mikhailov) Date: Thu, 7 Mar 2013 15:13:48 +0000 Subject: build from source pointing to installed libs (e.g. pcre) In-Reply-To: References: Message-ID: <3B1B3977-8419-4A2F-8489-8000F6586981@sonru.com> On Mar 7, 2013, at 3:04 PM, Edho Arief wrote: > On Thu, Mar 7, 2013 at 10:11 PM, anthony kerz wrote: >> hi, >> >> i'm trying to build from source on an ubuntu system which has libpcre3 >> installed: >> >> tony at quantal:~/Downloads/nginx-1.3.14$ dpkg -l | grep pcre >> ii libpcre3:i386 1:8.30-5ubuntu1 i386 Perl 5 Compatible Regular Expression >> Library - runtime files >> >> but i can't for the life of me get the configure script to resolve to this, >> libpcre.so.3 is in /lib/i386-linux-gnu, so i've tried: >> >> --with-ld-opt="-L/lib/i386-linux-gnu" >> >> but still get: >> ----------------- >> ... >> checking for PCRE library ... not found >> checking for PCRE library in /usr/local/ ... not found >> checking for PCRE library in /usr/include/pcre/ ... not found >> checking for PCRE library in /usr/pkg/ ... not found >> checking for PCRE library in /opt/local/ ... not found >> >> ./configure: error: the HTTP rewrite module requires the PCRE library. >> You can either disable the module by using --without-http_rewrite_module >> option, or install the PCRE library into the system, or build the PCRE >> library >> statically from the source with nginx by using --with-pcre= option. >> ---------------- >> >> any guidance appreciated! >> > > building stuff from source require installation of development > (-dev/-devel) packages of its dependencies in ubuntu, rhel, and some > other distros. > > (read: try installing libpcre3-dev) cd /usr/src wget http://nginx.org/download/nginx-1.3.14.tar.gz tar xzvf ./nginx-1.3.14.tar.gz && rm -f ./nginx-1.3.14.tar.gz && cd nginx-1.3.14 wget http://nginx.org/patches/spdy/patch.spdy-66_1.3.14.txt && patch -p1 < patch.spdy-66_1.3.14.txt cd /usr/src wget http://zlib.net/zlib127.zip unzip zlib127.zip && rm -f zlib127.zip wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.30.tar.gz tar xzvf pcre-8.30.tar.gz && rm -f ./pcre-8.30.tar.gz wget http://www.openssl.org/source/openssl-1.0.1c.tar.gz tar xzvf openssl-1.0.1c.tar.gz && rm -f openssl-1.0.1c.tar.gz cd /usr/src/nginx-1.3.14 ./configure --prefix=/opt/nginx --with-pcre=/usr/src/pcre-8.30 --with-zlib=/usr/src/zlib-1.2.7 --with-openssl=/usr/src/openssl-1.0.1c --with-http_ssl_module --with-http_spdy_module make && make install > > -- > O< ascii ribbon campaign - stop html mail - www.asciiribbon.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Thu Mar 7 16:33:38 2013 From: nginx-forum at nginx.us (onel0ve) Date: Thu, 07 Mar 2013 11:33:38 -0500 Subject: Need help to convert .htaccess to nginx rules In-Reply-To: References: Message-ID: OK. Can you just tell me how to exclude a directory from the nginx rules? I want to disable mod_rewrite for one subfolder. 
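(A common way to keep one folder out of the site-wide rewrites is a prefix location that takes precedence, roughly as below; the /legacy/ name is only an illustration.)

    # requests under /legacy/ are served as-is and never reach the rewrites below
    location ^~ /legacy/ {
        try_files $uri $uri/ =404;
    }

    location / {
        # the converted .htaccess rewrites / front-controller logic stay here
        try_files $uri $uri/ /index.php?$args;
    }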
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236960,237083#msg-237083 From francis at daoine.org Thu Mar 7 17:06:37 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 7 Mar 2013 17:06:37 +0000 Subject: 'Primary' domain in server block? In-Reply-To: References: Message-ID: <20130307170637.GD10870@craic.sysops.org> On Thu, Mar 07, 2013 at 09:30:01AM +0000, Tom Barrett wrote: Hi there, > I'm doing some work with PayPal integration. How does nginx talk to your PayPal integration system? proxy_pass, fastcgi_pass, something else? > The work is being done in under 'dev01.devsite.co.uk' domain. However, for > link backs from PayPal, it is working out that it should be ' > dev01.localmap.net'. If I switch the two around in the server block, then > PayPal picks that up. In nginx, the variable $server_name is the first entry for the server_name directive in the appropriate server{} block. There are circumstances in which the variable $host will take that value. > How is PayPal doing that? Presumably you're giving the information it to PayPal somehow? > And are there any config settings I can use to > change this behaviour? Probably. What's your current relevant config? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Mar 7 17:25:43 2013 From: nginx-forum at nginx.us (double) Date: Thu, 07 Mar 2013 12:25:43 -0500 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: <20130305131741.GN8912@reaktio.net> References: <20130305131741.GN8912@reaktio.net> Message-ID: > I keep getting the "upstream sent invalid header while reading response header from upstream" > error when using the no_buffer patch.. The patch does not work for you? Thanks Markus Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234926,237090#msg-237090 From pasik at iki.fi Thu Mar 7 17:48:37 2013 From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=) Date: Thu, 7 Mar 2013 19:48:37 +0200 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: References: <20130305131741.GN8912@reaktio.net> Message-ID: <20130307174837.GB8912@reaktio.net> On Thu, Mar 07, 2013 at 12:25:43PM -0500, double wrote: > > I keep getting the "upstream sent invalid header while reading response > header from upstream" > > error when using the no_buffer patch.. > > The patch does not work for you? > Thanks > Markus > Yep, the patch doesn't work for me. -- Pasi From nginx+phil at spodhuis.org Thu Mar 7 19:31:17 2013 From: nginx+phil at spodhuis.org (Phil Pennock) Date: Thu, 7 Mar 2013 14:31:17 -0500 Subject: nginx/KQUEUE+SPDY breaks proxy_ignore_client_abort In-Reply-To: <201303051933.20998.vbart@nginx.com> References: <20130301082251.GA97216.take2@redoubt.spodhuis.org> <20130301205606.GA15343@redoubt.spodhuis.org> <201303020209.46947.vbart@nginx.com> <201303051933.20998.vbart@nginx.com> Message-ID: <20130307193117.GA68431@redoubt.spodhuis.org> On 2013-03-05 at 19:33 +0400, Valentin V. Bartenev wrote: > Done. > > http://nginx.org/patches/spdy/patch.spdy-66_1.3.14.txt Thanks; I deployed this a day and a half ago; I could no longer trigger the conneciton drops from another box on the same network, where I could do so fairly reliably before. Nobody has replied to my request for testing from more places, so I'm going to call this fixed. Much appreciated! -Phil From nginx-forum at nginx.us Thu Mar 7 20:39:30 2013 From: nginx-forum at nginx.us (ubunifu) Date: Thu, 07 Mar 2013 15:39:30 -0500 Subject: Kindly help me convert this htaccess to nginx directives? 
Message-ID: <7c56349843a4e5bcf8ef6e5cc677fc8c.NginxMailingListEnglish@forum.nginx.org> ErrorDocument 404 / ErrorDocument 500 / Options +FollowSymlinks RewriteEngine on RewriteBase / SecFilterEngine Off SecFilterScanPOST Off RewriteCond %{SCRIPT_FILENAME} !-d RewriteCond %{SCRIPT_FILENAME} !-f RewriteRule ^([^\.\/\-]+)$ $1.php [L] RewriteRule ^videos/([^\.\/]+)$ /video-file.php?id=$1 [L] RewriteRule ^audios/([^\.\/]+)$ /audio-file.php?id=$1 [L] RewriteRule ^notes/([^\.\/]+)$ /note-file.php?id=$1 [L] RewriteRule ^exercises/([^\.\/]+)$ /exercise-file.php?id=$1 [L] RewriteRule ^links/([^\.\/]+)$ /link.php?id=$1 [L] RewriteRule ^class/([^\.\/]+)$ /class.php?slug=$1 [L] RewriteRule ^subject/([^\.\/]+)$ /subject.php?slug=$1 [L] RewriteRule ^author/([^\.\/]+)$ /author.php?username=$1 [L] RewriteRule ^search/([^\.]+)$ /search.php?query=$1 [L] Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237098,237098#msg-237098 From emailgrant at gmail.com Thu Mar 7 20:49:58 2013 From: emailgrant at gmail.com (Grant) Date: Thu, 7 Mar 2013 12:49:58 -0800 Subject: nginx for images (apache for pages) Message-ID: I'm serving images and dynamic .html pages via apache on port 80. I'd like to have nginx to serve the images. How can this be done since both apache and nginx need to serve requests on port 80? - Grant From steve at greengecko.co.nz Thu Mar 7 20:54:00 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Fri, 08 Mar 2013 09:54:00 +1300 Subject: nginx for images (apache for pages) In-Reply-To: References: Message-ID: <1362689640.15187.1470.camel@steve-new> On Thu, 2013-03-07 at 12:49 -0800, Grant wrote: > I'm serving images and dynamic .html pages via apache on port 80. I'd > like to have nginx to serve the images. How can this be done since > both apache and nginx need to serve requests on port 80? > > - Grant > Set apache up as a proxy server for dynamic html behind the nginx server. (Or do like I do, and drop apache completely (: ). Cheers, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 6189 bytes Desc: not available URL: From emailgrant at gmail.com Thu Mar 7 21:28:40 2013 From: emailgrant at gmail.com (Grant) Date: Thu, 7 Mar 2013 13:28:40 -0800 Subject: nginx for images (apache for pages) In-Reply-To: <1362689640.15187.1470.camel@steve-new> References: <1362689640.15187.1470.camel@steve-new> Message-ID: >> I'm serving images and dynamic .html pages via apache on port 80. I'd >> like to have nginx to serve the images. How can this be done since >> both apache and nginx need to serve requests on port 80? >> >> - Grant >> > Set apache up as a proxy server for dynamic html behind the nginx > server. Is there a good howto for this? Is it difficult when dealing with an ecommerce site? - Grant From emailgrant at gmail.com Thu Mar 7 21:55:35 2013 From: emailgrant at gmail.com (Grant) Date: Thu, 7 Mar 2013 13:55:35 -0800 Subject: imap: invalid header in response while in http auth state Message-ID: I'm using imapproxy and trying to switch to nginx. courier is listening on port 143. mail { auth_http localhost:143; proxy on; server { listen 144; protocol imap; } } I get: auth http server 127.0.0.1:143 sent invalid header in response while in http auth state, client: 127.0.0.1, server: 0.0.0.0:144 Does anyone know what's wrong? 
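(One thing worth noting on auth_http: it is expected to point at an HTTP service that implements nginx's mail auth protocol and answers with Auth-Status / Auth-Server / Auth-Port headers, not at the IMAP listener itself. A hedged sketch of that shape, with a placeholder auth endpoint:)

    mail {
        # placeholder: a small HTTP service speaking nginx's mail auth protocol
        auth_http 127.0.0.1:9000/mail/auth;

        server {
            listen   144;
            protocol imap;
        }
    }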
- Grant From contact at jpluscplusm.com Thu Mar 7 23:03:12 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 7 Mar 2013 23:03:12 +0000 Subject: Kindly help me convert this htaccess to nginx directives? In-Reply-To: <7c56349843a4e5bcf8ef6e5cc677fc8c.NginxMailingListEnglish@forum.nginx.org> References: <7c56349843a4e5bcf8ef6e5cc677fc8c.NginxMailingListEnglish@forum.nginx.org> Message-ID: Why don't you show us what you've tried; what hasn't worked; and where you got stuck? That way, *you* get to learn about nginx, and *we* don't waste our time a) repeating things you've already managed to do and b) doing your job for you! Cheers, Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From lists at ruby-forum.com Thu Mar 7 23:58:32 2013 From: lists at ruby-forum.com (Joseph O.) Date: Fri, 08 Mar 2013 00:58:32 +0100 Subject: Problem configuring Nginx to host SpreeCommerce - 502 error In-Reply-To: References: Message-ID: <154450734e306d932e099bb3767d0f87@ruby-forum.com> OK, so I've gone through the documentation on the Nginx wiki, and I still only have a minimal grasp on how to configure the it to serve Spree via Unicorn. Things are now back where they started (with a 502 error), but I am making some progress, however, in that the server now points to the custom error pages I made, rather than the default ones. Oh well... Attached is the current version of my nginx.conf file. Attachments: http://www.ruby-forum.com/attachment/8205/nginx.conf -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Fri Mar 8 04:19:47 2013 From: nginx-forum at nginx.us (redleaderuk) Date: Thu, 07 Mar 2013 23:19:47 -0500 Subject: Issue with HttpAuthDigestModule Message-ID: <8b48c4d905232b3491fae1b97e454314.NginxMailingListEnglish@forum.nginx.org> Hello Nginx people! I hope you can help me. I'm having an infuriating problem with auth digest via the HttpAuthDigestModule. The first site I added to Nginx used the same htdigest password file as my Apache webserver (on the same box) uses. This works fine. However, I added a second website that uses an almost identical configuration but for some reason I simply cannot authenticate for this second site. I'm using the same htdigest password file, same username and password. The first website I can authenticate, the second one refuses to authenticate me. I can't figure out why! 
Here is the first website's conf file: ######### # Project 1 configuration upstream project1_backend { server unix:/var/www/project_1/tmp/php.sock; } server { listen 8083; server_name dev.project_1.site.com; root /var/www/project_1/web; access_log /var/www/project_1/logs/nginx_access.log; error_log /var/www/project_1/logs/nginx_error.log; # strip app.php/ prefix if it is present rewrite ^/app\.php/?(.*)$ /$1 permanent; auth_digest_user_file /etc/apache-digest-passwd; location / { auth_digest 'my-realm'; index app.php; try_files $uri @rewriteapp; } location @rewriteapp { rewrite ^(.*)$ /app.php/$1 last; } # pass the PHP scripts to FastCGI server listening on php socket location ~ ^/(app|app_dev|config)\.php(/|$) { fastcgi_pass project1_backend; fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param HTTPS off; } } ######### The second configuration file looks like this: ######### # Project 2 configuration file upstream project2_backend { server unix:/var/www/project_2/tmp/php.sock; } server { listen 8083; server_name project_2.site.com; root /var/www/project_2/web; access_log /var/www/project_2/logs/nginx_access.log; error_log /var/www/project_2/logs/nginx_error.log error; # strip app.php/ prefix if it is present rewrite ^/app\.php/?(.*)$ /$1 permanent; auth_digest_user_file /etc/apache-digest-passwd; location / { auth_digest 'my-realm'; index app.php; try_files $uri @rewriteapp; } location @rewriteapp { rewrite ^(.*)$ /app.php/$1 last; } # pass the PHP scripts to FastCGI server listening on php socket # REMOVE config from choices on PRODUCTION! location ~ ^/(app|app_dev|config)\.php(/|$) { fastcgi_pass project2_backend; fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param HTTPS off; } } ######### I've tried creating a new password file with htdigest and adding a user/password to it, then referencing that new password file in the second website's conf file but I still can't authenticate. Can anyone shed some light on this for me please? Thanks for any help you can offer, Alex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237107,237107#msg-237107 From emailgrant at gmail.com Fri Mar 8 05:16:11 2013 From: emailgrant at gmail.com (Grant) Date: Thu, 7 Mar 2013 21:16:11 -0800 Subject: IMAP: auth_http Message-ID: nginx seems to require being pointed to an HTTP server for imap authentication. Here's the protocol spec: http://wiki.nginx.org/MailCoreModule#Authentication Is the idea to program this server yourself or does a server like this already exist? - Grant From anthony.kerz at gmail.com Fri Mar 8 05:39:24 2013 From: anthony.kerz at gmail.com (anthony kerz) Date: Fri, 8 Mar 2013 00:39:24 -0500 Subject: build from source pointing to installed libs (e.g. pcre) In-Reply-To: References: Message-ID: installing the related 'dev' packages was the ticket: - libpcre3-dev - libssl-dev - zlib1g-dev then acquiring source and static linking can be avoided... thanks @edho! 
On Thu, Mar 7, 2013 at 10:04 AM, Edho Arief wrote: > On Thu, Mar 7, 2013 at 10:11 PM, anthony kerz > wrote: > > hi, > > > > i'm trying to build from source on an ubuntu system which has libpcre3 > > installed: > > > > tony at quantal:~/Downloads/nginx-1.3.14$ dpkg -l | grep pcre > > ii libpcre3:i386 1:8.30-5ubuntu1 i386 Perl 5 Compatible Regular > Expression > > Library - runtime files > > > > but i can't for the life of me get the configure script to resolve to > this, > > libpcre.so.3 is in /lib/i386-linux-gnu, so i've tried: > > > > --with-ld-opt="-L/lib/i386-linux-gnu" > > > > but still get: > > ----------------- > > ... > > checking for PCRE library ... not found > > checking for PCRE library in /usr/local/ ... not found > > checking for PCRE library in /usr/include/pcre/ ... not found > > checking for PCRE library in /usr/pkg/ ... not found > > checking for PCRE library in /opt/local/ ... not found > > > > ./configure: error: the HTTP rewrite module requires the PCRE library. > > You can either disable the module by using --without-http_rewrite_module > > option, or install the PCRE library into the system, or build the PCRE > > library > > statically from the source with nginx by using --with-pcre= option. > > ---------------- > > > > any guidance appreciated! > > > > building stuff from source require installation of development > (-dev/-devel) packages of its dependencies in ubuntu, rhel, and some > other distros. > > (read: try installing libpcre3-dev) > > -- > O< ascii ribbon campaign - stop html mail - www.asciiribbon.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emailgrant at gmail.com Fri Mar 8 06:33:06 2013 From: emailgrant at gmail.com (Grant) Date: Thu, 7 Mar 2013 22:33:06 -0800 Subject: nginx for images (apache for pages) In-Reply-To: References: <1362689640.15187.1470.camel@steve-new> Message-ID: >>> I'm serving images and dynamic .html pages via apache on port 80. I'd >>> like to have nginx to serve the images. How can this be done since >>> both apache and nginx need to serve requests on port 80? >>> >>> - Grant >>> >> Set apache up as a proxy server for dynamic html behind the nginx >> server. > > Is there a good howto for this? Is it difficult when dealing with an > ecommerce site? > > - Grant What a fine little server. This howto was perfect: http://kbeezie.com/apache-with-nginx/ The ecommerce factor was a breeze. Slightly different SSL certificate handling. - Grant From emailgrant at gmail.com Fri Mar 8 07:30:47 2013 From: emailgrant at gmail.com (Grant) Date: Thu, 7 Mar 2013 23:30:47 -0800 Subject: proxy_read_timeout for an apache location? Message-ID: Can I set proxy_read_timeout for only a particular location which is passed to apache? - Grant From tom at miramedia.co.uk Fri Mar 8 08:53:34 2013 From: tom at miramedia.co.uk (Tom Barrett) Date: Fri, 8 Mar 2013 08:53:34 +0000 Subject: 'Primary' domain in server block? In-Reply-To: <20130307170637.GD10870@craic.sysops.org> References: <20130307170637.GD10870@craic.sysops.org> Message-ID: On 7 March 2013 17:06, Francis Daly wrote: > On Thu, Mar 07, 2013 at 09:30:01AM +0000, Tom Barrett wrote: > > Hi there, > Thanks for reading :) > I'm doing some work with PayPal integration. > > How does nginx talk to your PayPal integration system? I think perhaps my use of the word 'integration' was a little grandiose. 
The general server setup is PHP fastcgi with a WordPress installation. The link to the PayPal account is via a simple HTML form with a price and submit button. In nginx, the variable $server_name is the first entry for the server_name > directive in the appropriate server{} block. > > There are circumstances in which the variable $host will take that value. > > > How is PayPal doing that? > > Presumably you're giving the information it to PayPal somehow? > So, however PayPal is grabbing it's information, it gets the first entry in the server_name directive. And that's my only option? > > And are there any config settings I can use to > > change this behaviour? > > Probably. What's your current relevant config? Well, assuming my above statement is correct, then only the server_name directive is relevant? It is a pretty trivial setup: http://pastebin.com/SdaB1ZW4 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Mar 8 09:14:48 2013 From: nginx-forum at nginx.us (Demontager) Date: Fri, 08 Mar 2013 04:14:48 -0500 Subject: Nginx rewrite rules for phpBB Message-ID: <929785a1de2b59f4392cf0c2226ee3ae.NginxMailingListEnglish@forum.nginx.org> I need help with phpBB rewrite rules. Used this tool http://winginx.ru/htaccess to convert some rules, but they are not applied. The syntax is ok, so i don't know the problem. Original .htaccess - [code] # Lines That should already be in your .htacess Order Allow,Deny Deny from All Order Allow,Deny Deny from All # You may need to un-comment the following lines # Options +FollowSymlinks # To make sure that rewritten dir or file (/|.html) will not load dir.php in case it exist # Options -MultiViews # REMEBER YOU ONLY NEED TO STARD MOD REWRITE ONCE RewriteEngine On # Uncomment the statement below if you want to make use of # HTTP authentication and it does not already work. # This could be required if you are for example using PHP via Apache CGI. # RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L] # REWRITE BASE RewriteBase / # HERE IS A GOOD PLACE TO FORCE CANONICAL DOMAIN # RewriteCond %{HTTP_HOST} !^localhost/forum$ [NC] # RewriteRule ^(.*)$ http://localhost/forum/$1 [QSA,L,R=301] # DO NOT GO FURTHER IF THE REQUESTED FILE / DIR DOES EXISTS RewriteCond %{REQUEST_FILENAME} -f RewriteCond %{REQUEST_FILENAME} -d RewriteRule . - [L] ##################################################### # PHPBB SEO REWRITE RULES ALL MODES ##################################################### # AUTHOR : dcz www.phpbb-seo.com # STARTED : 01/2006 ################################# # FORUMS PAGES ############### # FORUM INDEX REWRITERULE WOULD STAND HERE IF USED. 
"forum" REQUIRES TO BE SET AS FORUM INDEX # RewriteRule ^forum\.html$ /index.php [QSA,L,NC] # FORUM ALL MODES RewriteRule ^(forum|[a-z0-9_-]*-f)([0-9]+)(-([0-9]+))?\.html$ /viewforum.php?f=$2&start=$4 [QSA,L,NC] # TOPIC WITH VIRTUAL FOLDER ALL MODES RewriteRule ^(forum|[a-z0-9_-]*-f)([0-9]+)/(topic|[a-z0-9_-]*-t)([0-9]+)(-([0-9]+))?\.html$ /viewtopic.php?f=$2&t=$4&start=$6 [QSA,L,NC] # TOPIC WITHOUT FORUM ID & DELIM ALL MODES RewriteRule ^([a-z0-9_-]*)/?(topic|[a-z0-9_-]*-t)([0-9]+)(-([0-9]+))?\.html$ /viewtopic.php?forum_uri=$1&t=$3&start=$5 [QSA,L,NC] # PHPBB FILES ALL MODES RewriteRule ^resources/[a-z0-9_-]+/(thumb/)?([0-9]+)$ /download/file.php?id=$2&t=$1 [QSA,L,NC] # PROFILES THROUGH USERNAME RewriteRule ^member/([^/]+)/?$ /memberlist.php?mode=viewprofile&un=$1 [QSA,L,NC] # USER MESSAGES THROUGH USERNAME RewriteRule ^member/([^/]+)/(topics|posts)/?(page([0-9]+)\.html)?$ /search.php?author=$1&sr=$2&start=$4 [QSA,L,NC] # GROUPS ALL MODES RewriteRule ^(group|[a-z0-9_-]*-g)([0-9]+)(-([0-9]+))?\.html$ /memberlist.php?mode=group&g=$2&start=$4 [QSA,L,NC] # POST RewriteRule ^post([0-9]+)\.html$ /viewtopic.php?p=$1 [QSA,L,NC] # ACTIVE TOPICS RewriteRule ^active-topics(-([0-9]+))?\.html$ /search.php?search_id=active_topics&start=$2&sr=topics [QSA,L,NC] # UNANSWERED TOPICS RewriteRule ^unanswered(-([0-9]+))?\.html$ /search.php?search_id=unanswered&start=$2&sr=topics [QSA,L,NC] # NEW POSTS RewriteRule ^newposts(-([0-9]+))?\.html$ /search.php?search_id=newposts&start=$2&sr=topics [QSA,L,NC] # UNREAD POSTS RewriteRule ^unreadposts(-([0-9]+))?\.html$ /search.php?search_id=unreadposts&start=$2 [QSA,L,NC] # THE TEAM RewriteRule ^the-team\.html$ /memberlist.php?mode=leaders [QSA,L,NC] # HERE IS A GOOD PLACE TO ADD OTHER PHPBB RELATED REWRITERULES # FORUM WITHOUT ID & DELIM ALL MODES # THESE FOUR LINES MUST BE LOCATED AT THE END OF YOUR HTACCESS TO WORK PROPERLY RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^([a-z0-9_-]+)(-([0-9]+))\.html$ /viewforum.php?forum_uri=$1&start=$3 [QSA,L,NC] RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^([a-z0-9_-]+)\.html$ /viewforum.php?forum_uri=$1 [QSA,L,NC] # FIX RELATIVE PATHS : FILES RewriteRule ^.+/(style\.php|ucp\.php|mcp\.php|faq\.php|download/file.php)$ /$1 [QSA,L,NC,R=301] # FIX RELATIVE PATHS : IMAGES RewriteRule ^.+/(styles/.*|images/.*)/$ /$1 [QSA,L,NC,R=301] # END PHPBB PAGES [/code] And nginx with few rewrites: [code] server { listen 80; server_name www.tangoresults.com; rewrite ^ http://tangoresults.com$request_uri?; error_log /var/log/www/tangoresults.com/nerror.log; } server { listen 80; server_name tangoresults.com; server_name_in_redirect off; root /usr/local/www/tangoresults.com; index index.php index.html index.htm; location ~* ^.+\.(ico|js|gif|jpg|jpeg|png|bmp)$ { expires 30d; } location /post { rewrite ^/post([0-9]+)\.html$ /viewtopic.php?p=$1 break; } location / { try_files $uri $uri/ /index.php; rewrite ^/(forum|[a-z0-9_-]*-f)([0-9]+)(-([0-9]+))?\.html$ /viewforum.php?f=$2&start=$4 break; rewrite ^/(forum|[a-z0-9_-]*-f)([0-9]+)/(topic|[a-z0-9_-]*-t)([0-9]+)(-([0-9]+))?\.html$ /viewtopic.php?f=$2&t=$4&start=$6 break; } location ~ /(config\.php|common\.php|cache|files|images/avatars/upload|includes|store) { deny all; return 403; } location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_pass unix:/tmp/php-fpm.sock; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_param 
QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; } location ~ /\.ht { deny all; } } [/code] Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237113,237113#msg-237113 From nginx-forum at nginx.us Fri Mar 8 10:53:35 2013 From: nginx-forum at nginx.us (mex) Date: Fri, 08 Mar 2013 05:53:35 -0500 Subject: proxy_read_timeout for an apache location? In-Reply-To: References: Message-ID: <3c659e53e8a2065152c305284b8c9867.NginxMailingListEnglish@forum.nginx.org> > Can I set proxy_read_timeout for only a particular location which is > passed to apache? > http://wiki.nginx.org/HttpProxyModule#proxy_read_timeout Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237111,237116#msg-237116 From pasik at iki.fi Fri Mar 8 13:36:29 2013 From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=) Date: Fri, 8 Mar 2013 15:36:29 +0200 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: <20130305131741.GN8912@reaktio.net> References: <20130118083821.GA8912@reaktio.net> <20130221200805.GT8912@reaktio.net> <20130222092524.GV8912@reaktio.net> <20130222105052.GW8912@reaktio.net> <20130225101304.GZ8912@reaktio.net> <20130305131741.GN8912@reaktio.net> Message-ID: <20130308133629.GD8912@reaktio.net> On Tue, Mar 05, 2013 at 03:17:41PM +0200, Pasi K?rkk?inen wrote: > On Tue, Feb 26, 2013 at 10:13:11PM +0800, Weibin Yao wrote: > > It still worked in my box. Can you show me the debug.log > > ([1]http://wiki.nginx.org/Debugging)? You need recompile ? with > > --with-debug configure argument and set debug level in error_log > > directive. > > > > Ok so I've sent you the debug log. > Can you see anything obvious in it? > > I keep getting the "upstream sent invalid header while reading response header from upstream" > error when using the no_buffer patch.. > Is there something you'd want me to try? Adjusting some config options? Did you find anything weird in the log? Thanks! -- Pasi > > > > > 2013/2/25 Pasi K??rkk??inen <[2]pasik at iki.fi> > > > > On Mon, Feb 25, 2013 at 10:13:42AM +0800, Weibin Yao wrote: > > > ? ? Can you show me your configure? It works for me with nginx-1.2.7. > > > ? ? Thanks. > > > > > > > Hi, > > > > I'm using the nginx 1.2.7 el6 src.rpm rebuilt with "headers more" module > > added, > > and your patch. > > > > I'm using the following configuration: > > > > server { > > ? ? ? ? listen ? ? ? ? ? ? ? ? ? public_ip:443 ssl; > > ? ? ? ? server_name ? ? ? ? ? ? service.domain.tld; > > > > ? ? ? ? ssl ? ? ? ? ? ? ? ? ? ? on; > > ? ? ? ? keepalive_timeout ? ? ? 70; > > > > ? ? ? ? access_log ? ? ? ? ? ? > > ? /var/log/nginx/access-service.log; > > ? ? ? ? access_log ? ? ? ? ? ? > > ? /var/log/nginx/access-service-full.log full; > > ? ? ? ? error_log ? ? ? ? ? ? ? > > /var/log/nginx/error-service.log; > > > > ? ? ? ? client_header_buffer_size 64k; > > ? ? ? ? client_header_timeout ? 120; > > > > ? ? ? ? proxy_next_upstream error timeout invalid_header http_500 > > http_502 http_503; > > ? ? ? ? proxy_set_header Host $host; > > ? ? ? ? proxy_set_header X-Real-IP $remote_addr; > > ? ? ? ? proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > ? ? ? ? proxy_redirect ? ? off; > > ? ? ? ? 
proxy_buffering     off;
> >         proxy_cache         off;
> >
> >         add_header Last-Modified "";
> >         if_modified_since   off;
> >
> >         client_max_body_size     262144M;
> >         client_body_buffer_size  1024k;
> >         client_body_timeout      240;
> >
> >         chunked_transfer_encoding off;
> >
> > #       client_body_postpone_sending    64k;
> > #       proxy_request_buffering         off;
> >
> >         location / {
> >                 proxy_pass       https://service-backend;
> >         }
> > }
> >
> > Thanks!
> >
> > -- Pasi
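For reference, a minimal sketch of how the patched directives are meant to be used, based on the documentation quoted earlier in this thread. These directives exist only with the tengine-style request-buffering patch applied (they are not in stock nginx 1.2.7), the location name is illustrative, and "service-backend" is the upstream from the configuration above:

    location /upload/ {
        # start forwarding to the backend once 64k of the body has arrived,
        # instead of spooling the whole request body to disk first
        client_body_postpone_sending    64k;
        proxy_request_buffering         off;
        proxy_pass                      https://service-backend;
    }

According to the same documentation, turning request buffering off breaks retrying the next upstream once part of the body has been sent, and leaves $request_body and $request_body_file undefined.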
From Peter_Booth at s5a.com Fri Mar 8 17:00:16 2013
From: Peter_Booth at s5a.com (Peter Booth)
Date: Fri, 8 Mar 2013 11:00:16 -0600
Subject: Selectively implement something like proxy_ignore_headers Set-Cookie?
Message-ID: 

I'm wondering if anyone has any thoughts about how I might address the following?

I am using nginx as a caching reverse proxy in front of a complex Apache/Weblogic Java application. I have a half-dozen location blocks that have different caching policies with custom keys and different cache lifetimes. I'm using openresty to implement different cache key logic with lua scripts, and until now everything has gone smoothly.

Some of the locations use the combination of proxy_ignore_headers Set-Cookie and proxy_hide_header Set-Cookie to strip cookies from cached responses. Another location needs to allow for session cookies to be created, and so it has the default nginx behavior of not caching responses that contain cookies.

Now I need a different hybrid behavior: if a response contains "Set-Cookie: TLTHID", then strip it from the response, but still cache the response. If the response contains any other Set-Cookie header, then don't cache the response.
I have tried to use more_clear_headers 'Set-Cookie: TLTHID*' to strip the cookie, and I see that it does indeed remove the cookie from the response; however, the debug log shows me that the object is still seen as being not cacheable, presumably because nginx knows that the stripped Set-Cookie was there.

Is there an easy way to work around this? I'd considered running a second nginx process to simply act as a proxy that strips the 'Set-Cookie: TLTHID*' header - it sounds clunky but I expect it would work. Is there a way to achieve this within a single nginx instance?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emailgrant at gmail.com Fri Mar 8 17:49:39 2013
From: emailgrant at gmail.com (Grant)
Date: Fri, 8 Mar 2013 09:49:39 -0800
Subject: proxy_read_timeout for an apache location?
In-Reply-To: <3c659e53e8a2065152c305284b8c9867.NginxMailingListEnglish@forum.nginx.org>
References: <3c659e53e8a2065152c305284b8c9867.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

>> Can I set proxy_read_timeout for only a particular location which is
>> passed to apache?
>>
>
> http://wiki.nginx.org/HttpProxyModule#proxy_read_timeout

This config causes nginx to serve the request instead of apache:

location /for-apache.html {
    proxy_read_timeout 30m;
}

Can I pass for-apache.html to apache and wait 30m for it?

- Grant

From contact at jpluscplusm.com Fri Mar 8 18:44:58 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Fri, 8 Mar 2013 18:44:58 +0000
Subject: Selectively implement something like proxy_ignore_headers Set-Cookie?
In-Reply-To: 
References: 
Message-ID: 

On 8 March 2013 17:00, Peter Booth wrote:
> Some of the locations use the combination of
> proxy_ignore_headers Set-Cookie and proxy_hide_header Set-Cookie to strip
> cookies from cached responses.
>
> Another location needs to allow for session cookies to be created and so it
> has the default nginx behavior of not caching
> responses that contain cookies.
>
> Now I need a different hybrid behavior:
> if a response contains "Set-Cookie: TLTHID", then strip it from the
> response, but still cache the response.
>
> If the response contains any other Set-Cookie header then don't cache the
> response.
>
> Is there a way to achieve this within a single nginx instance?

I don't use nginx as a cache, so I can't speak for how this will work
with the caching subsystem, but I wonder if something like this might work:

-----------------------------
map $sent_http_set_cookie $header_to_drop {
    default NA;
    ~TLTHID Set-Cookie;
}

server {
    proxy_pass http://upstream.fqdn;
    proxy_ignore_headers $header_to_drop;
    proxy_hide_header $header_to_drop;
}
-----------------------------

The map might need $upstream_http_set_cookie instead - I don't recall
the difference off the top of my head.

HTH,
Jonathan
-- 
Jonathan Matthews // Oxford, London, UK
http://www.jpluscplusm.com/contact.html

From francis at daoine.org Fri Mar 8 20:30:02 2013
From: francis at daoine.org (Francis Daly)
Date: Fri, 8 Mar 2013 20:30:02 +0000
Subject: proxy_read_timeout for an apache location?
In-Reply-To: 
References: <3c659e53e8a2065152c305284b8c9867.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20130308203002.GE10870@craic.sysops.org>

On Fri, Mar 08, 2013 at 09:49:39AM -0800, Grant wrote:

Hi there,

> >> Can I set proxy_read_timeout for only a particular location which is
> >> passed to apache?
> >
> > http://wiki.nginx.org/HttpProxyModule#proxy_read_timeout

That's a good resource.
http://nginx.org/r/proxy_read_timeout redirects to the official documentation for the directive, and if it says "context: location", then it can apply to a particular location. > This config causes nginx to serve the request instead of apache: > > location /for-apache.html { > proxy_read_timeout 30m; > } > > Can I pass for-apache.html to apache and wait 30m for it? In nginx, one request is handled in one location. Only the configuration in, or inherited into, that location applies. To send to apache, you need the proxy_pass directive -- which has documentation at http://nginx.org/r/proxy_pass So the answer is "yes you can, but you have to configure it". f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Mar 8 20:42:40 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 8 Mar 2013 20:42:40 +0000 Subject: 'Primary' domain in server block? In-Reply-To: References: <20130307170637.GD10870@craic.sysops.org> Message-ID: <20130308204240.GF10870@craic.sysops.org> On Fri, Mar 08, 2013 at 08:53:34AM +0000, Tom Barrett wrote: > On 7 March 2013 17:06, Francis Daly wrote: > > On Thu, Mar 07, 2013 at 09:30:01AM +0000, Tom Barrett wrote: Hi there, > I think perhaps my use of the word 'integration' was a little > grandiose. The general server setup is PHP fastcgi with a WordPress > installation. So: fastcgi is involved. nginx interacts with the fastcgi server by sending key/value pairs from fastcgi_param directives. That is (pretty much) all it does. The fastcgi server will (typically) use some specific fastcgi_param keys for itself, and then make all of the key/values available to the php script. What happens after that is nothing to do with nginx. > > > How is PayPal doing that? > > > > Presumably you're giving the information it to PayPal somehow? > > So, however PayPal is grabbing it's information, it gets the first entry in > the server_name directive. And that's my only option? No. It sounds like your php script is talking to PayPal, and so your php script is sending the information. If you know what information you want to send, either change your php script to send only that; or else read your php script to see what key names it uses, and change your nginx to send the values you want for those keys. > Well, assuming my above statement is correct, then only the server_name > directive is relevant? That depends on what your php script does. > It is a pretty trivial setup: http://pastebin.com/SdaB1ZW4 My guess is that your php script uses the value of SERVER_NAME when it talks to PayPal. Your config sets SERVER_NAME to whatever $server_name is (in the fastcgi_param directives, in the "include" file). If my guess is correct, then if you send SERVER_NAME with the value of "www.example.com", then that's what PayPal will see, and will use. You have the php script; you don't need to guess. You can set a fixed string, or you can use another variable -- perhaps $host is closer to what you want it to be. Or maybe PayPal will use some other key if SERVER_NAME is not set to anything. It shouldn't be too difficult to set up a test system to learn some of what PayPal is doing. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Mar 8 22:16:23 2013 From: nginx-forum at nginx.us (christopherincanada) Date: Fri, 08 Mar 2013 17:16:23 -0500 Subject: RSA+DSA+ECC bundles In-Reply-To: <006701ce048e$d1b9e090$752da1b0$@slo-tech.com> References: <006701ce048e$d1b9e090$752da1b0$@slo-tech.com> Message-ID: I too am interested in this capability... 
any comments on this topic are appreciated. Chris Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235967,237143#msg-237143 From nginx-forum at nginx.us Fri Mar 8 23:41:46 2013 From: nginx-forum at nginx.us (redleaderuk) Date: Fri, 08 Mar 2013 18:41:46 -0500 Subject: Issue with HttpAuthDigestModule In-Reply-To: <8b48c4d905232b3491fae1b97e454314.NginxMailingListEnglish@forum.nginx.org> References: <8b48c4d905232b3491fae1b97e454314.NginxMailingListEnglish@forum.nginx.org> Message-ID: <55caac39383703dba10046ce38e7b748.NginxMailingListEnglish@forum.nginx.org> I think I've found the actual problem: having a querystring at the end of the URL stops the HttpAuthDigiestModule from working correctly. In fact, simply appending a question mark at the end of a URL is enough to prevent authentication. I just get repeatedly asked to authenticate, round and round I go in a loop. Can anyone shed any light on this please? I hope it's something I can fix via the config file but perhaps it's simply a limitation with the module itself? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237107,237144#msg-237144 From nginx-forum at nginx.us Sat Mar 9 03:51:48 2013 From: nginx-forum at nginx.us (moke110007) Date: Fri, 08 Mar 2013 22:51:48 -0500 Subject: how work ip_hash and weight in nginx 1.2.7 In-Reply-To: <592de3998b05b3b3570b7da781dca944.NginxMailingListEnglish@forum.nginx.org> References: <592de3998b05b3b3570b7da781dca944.NginxMailingListEnglish@forum.nginx.org> Message-ID: <93fe874f5db7a6070208cd2c6f66a70f.NginxMailingListEnglish@forum.nginx.org> Nobody reply. Tested,iphash and weight,support balance. Over. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236972,237147#msg-237147 From steve at greengecko.co.nz Sat Mar 9 03:56:56 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Sat, 09 Mar 2013 16:56:56 +1300 Subject: how work ip_hash and weight in nginx 1.2.7 In-Reply-To: <93fe874f5db7a6070208cd2c6f66a70f.NginxMailingListEnglish@forum.nginx.org> References: <592de3998b05b3b3570b7da781dca944.NginxMailingListEnglish@forum.nginx.org> <93fe874f5db7a6070208cd2c6f66a70f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <513AB308.4040003@greengecko.co.nz> On 09/03/13 16:51, moke110007 wrote: > Nobody reply. > Tested,iphash and weight,support balance. > Over. > > Last time I used it, weight wasn't supported on iphash so I just used multiple entries to weight instead. Steve From nginx-forum at nginx.us Sat Mar 9 08:48:29 2013 From: nginx-forum at nginx.us (mex) Date: Sat, 09 Mar 2013 03:48:29 -0500 Subject: how work ip_hash and weight in nginx 1.2.7 In-Reply-To: <93fe874f5db7a6070208cd2c6f66a70f.NginxMailingListEnglish@forum.nginx.org> References: <592de3998b05b3b3570b7da781dca944.NginxMailingListEnglish@forum.nginx.org> <93fe874f5db7a6070208cd2c6f66a70f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <33dff4739c36d25f17c54a70f098d57c.NginxMailingListEnglish@forum.nginx.org> did you resolved your problems? i must admit, i did not understood what your problems where. moke110007 Wrote: ------------------------------------------------------- > Nobody reply. > Tested,iphash and weight,support balance. > Over. 
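To illustrate the workaround Steve describes above - repeating a server entry so that it gets a larger share of the hash space - the upstream block would look roughly like this (the addresses are placeholders):

upstream backend {
    ip_hash;
    server 192.168.0.10;
    server 192.168.0.10;  # listed twice, so it receives roughly twice as many client IPs
    server 192.168.0.20;
}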
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236972,237150#msg-237150 From nginx-forum at nginx.us Sat Mar 9 09:02:11 2013 From: nginx-forum at nginx.us (mex) Date: Sat, 09 Mar 2013 04:02:11 -0500 Subject: nginx for images (apache for pages) In-Reply-To: References: Message-ID: <871000c7c00a3e2568ea94b7616115c0.NginxMailingListEnglish@forum.nginx.org> nice tutorial! didnt you found anything approbiate here? http://wiki.nginx.org/Configuration regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237099,237151#msg-237151 From nginx-forum at nginx.us Sat Mar 9 09:05:43 2013 From: nginx-forum at nginx.us (mex) Date: Sat, 09 Mar 2013 04:05:43 -0500 Subject: RSA+DSA+ECC bundles In-Reply-To: <006701ce048e$d1b9e090$752da1b0$@slo-tech.com> References: <006701ce048e$d1b9e090$752da1b0$@slo-tech.com> Message-ID: <7c5c4872d8c174ea95b3ec8a0653d703.NginxMailingListEnglish@forum.nginx.org> are you talking about SNI? http://de.wikipedia.org/wiki/Server_Name_Indication nginx can handle this http://nginx.org/en/docs/http/configuring_https_servers.html#sni Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235967,237152#msg-237152 From nginx-forum at nginx.us Sat Mar 9 09:14:46 2013 From: nginx-forum at nginx.us (mex) Date: Sat, 09 Mar 2013 04:14:46 -0500 Subject: Problem configuring Nginx to host SpreeCommerce - 502 error In-Reply-To: <154450734e306d932e099bb3767d0f87@ruby-forum.com> References: <154450734e306d932e099bb3767d0f87@ruby-forum.com> Message-ID: <88ca61b1906359b5fe1b9aeeedc1e3c1.NginxMailingListEnglish@forum.nginx.org> ok,your config is a little confused. in your @ruby - part you should proxy_pass to your unicorn - server proxy_pass http://unicorn_server; (thats why you put your definition there :) ------------------------------------------------------- > OK, so I've gone through the documentation on the Nginx wiki, and I > still only have a minimal grasp on how to configure the it to serve > Spree via Unicorn. Things are now back where they started (with a 502 > error), but I am making some progress, however, in that the server now > > points to the custom error pages I made, rather than the default ones. > > Oh well... > > Attached is the current version of my nginx.conf file. > Attachments: > http://www.ruby-forum.com/attachment/8205/nginx.conf > > > -- > Posted via http://www.ruby-forum.com/. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236971,237153#msg-237153 From zhangji87 at gmail.com Sat Mar 9 14:43:47 2013 From: zhangji87 at gmail.com (Ji Zhang) Date: Sat, 9 Mar 2013 22:43:47 +0800 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? Message-ID: Hi, I'm doing some research on FastCGI recently. As I see from the FastCGI specification, it does support multiplexing through a single connection. But apparently none of the current web servers, like Nginx, Apache, or Lighttpd supports this feature. I found a thread from nginx dev mailing list back to 2009, stating that multiplexing won't make much difference in performance: http://forum.nginx.org/read.php?29,30275,30312 But I also find an interesting article on how great this feature is, back to 2002: http://www.nongnu.org/fastcgi/#multiplexing I don't have the ability to perform a test on this, but another protocol, SPDY, that recently becomes very popular, and its Nginx patch is already usable, also features multiplexing. 
So I'm curious about why spdy's multiplexing is great while fastcgi's is not. One reason I can think of is that tcp connection on the internet is expensive, affecting by RTT, CWND, and other tube warming-up issue. But tcp conneciton within IDC (or unix-domain socket on localhost) is much cheaper. Besides, the application can also go the event-based way, to accept as much connections as it can from the listening socket and perform asynchronously. Does my point make sense? or some other more substantial reasons? Thanks Jerry From tpanagiotis at gmail.com Sat Mar 9 17:17:07 2013 From: tpanagiotis at gmail.com (Panagiotis Theodoropoulos) Date: Sat, 9 Mar 2013 19:17:07 +0200 Subject: nginx forward proxy - 502 bad gateway Message-ID: I have install nginx forward proxy in two Ubuntu 12.04 servers with the following configuration. server { listen 8080; location / { resolver 8.8.8.8; proxy_pass http://$http_host$uri$is_args$args; } } After this I use the two servers as proxy on firefox. The one server works fine. But on the second server I get the message 502 bad gateway. This server is behind a cisco firewall with port 8080 open. Where is the problem? The line: proxy_pass http://$http_host$uri$is_args$args; has problem with firewalls? or it is something else? -- ?????????? ????????????? Panagiotis Theodoropoulos -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Mar 9 17:58:56 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 9 Mar 2013 17:58:56 +0000 Subject: Issue with HttpAuthDigestModule In-Reply-To: <55caac39383703dba10046ce38e7b748.NginxMailingListEnglish@forum.nginx.org> References: <8b48c4d905232b3491fae1b97e454314.NginxMailingListEnglish@forum.nginx.org> <55caac39383703dba10046ce38e7b748.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130309175856.GG10870@craic.sysops.org> On Fri, Mar 08, 2013 at 06:41:46PM -0500, redleaderuk wrote: Hi there, > I think I've found the actual problem: having a querystring at the end of > the URL stops the HttpAuthDigiestModule from working correctly. > Can anyone shed any light on this please? I hope it's something I can fix > via the config file but perhaps it's simply a limitation with the module > itself? It looks to me like a problem with this third-party module. It calculates a hash over r->uri, and compares it with the hash that the browser calculated over its idea of the request. If you replace r->uri with r->unparsed_uri and recompile, it will work for more requests. The "right" fix is probably to use fields->uri instead, and then also make sure that fields->uri and r->unparsed_uri correspond to the same thing. That (in theory) should work for all requests, but would also take longer to do right. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sat Mar 9 18:29:01 2013 From: nginx-forum at nginx.us (mex) Date: Sat, 09 Mar 2013 13:29:01 -0500 Subject: nginx forward proxy - 502 bad gateway In-Reply-To: References: Message-ID: <5e4e3ef30dea3bf6d2a1364c8426c7f8.NginxMailingListEnglish@forum.nginx.org> > > This server is behind a cisco firewall with port 8080 open. Where is > the > problem? > > The line: > > proxy_pass http://$http_host$uri$is_args$args; > > has problem with firewalls? or it is something else? > are you able to connect to $http_host from that second server, maybe via lynx/w3m? 
$ telnet $http_host 8080 and then GET / additionally, can you execute the following on that server to check if you're able to connect to your resolver: $ nslookup $http_host 8.8.8.8 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237162,237164#msg-237164 From emailgrant at gmail.com Sat Mar 9 19:54:28 2013 From: emailgrant at gmail.com (Grant) Date: Sat, 9 Mar 2013 11:54:28 -0800 Subject: nginx for images (apache for pages) In-Reply-To: <871000c7c00a3e2568ea94b7616115c0.NginxMailingListEnglish@forum.nginx.org> References: <871000c7c00a3e2568ea94b7616115c0.NginxMailingListEnglish@forum.nginx.org> Message-ID: > nice tutorial! > > didnt you found anything approbiate here? > http://wiki.nginx.org/Configuration I tried some of those but nothing seemed to match my situation as clearly as the one I used. http://kbeezie.com/apache-with-nginx/ - Grant From nginx-forum at nginx.us Sun Mar 10 00:52:40 2013 From: nginx-forum at nginx.us (michael.heuberger) Date: Sat, 09 Mar 2013 19:52:40 -0500 Subject: error unlink() nginx 1.2.6 In-Reply-To: <20121229150801.GV40452@mdounin.ru> References: <20121229150801.GV40452@mdounin.ru> Message-ID: <9a00e41d844064849ef7ad501d0d904f.NginxMailingListEnglish@forum.nginx.org> Hello guys I'm having this problem too in version 3.1.14 and never do delete these files manually. How can I solve this? Maxim, if you say the message is too scary, then why don't we change the level of this log message from critical to notice? Cheers Michael Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234560,237168#msg-237168 From emailgrant at gmail.com Sun Mar 10 03:54:02 2013 From: emailgrant at gmail.com (Grant) Date: Sat, 9 Mar 2013 19:54:02 -0800 Subject: proxy_read_timeout for an apache location? In-Reply-To: <20130308203002.GE10870@craic.sysops.org> References: <3c659e53e8a2065152c305284b8c9867.NginxMailingListEnglish@forum.nginx.org> <20130308203002.GE10870@craic.sysops.org> Message-ID: >> Can I pass for-apache.html to apache and wait 30m for it? > > In nginx, one request is handled in one location. Only the configuration > in, or inherited into, that location applies. > > To send to apache, you need the proxy_pass directive -- which has > documentation at http://nginx.org/r/proxy_pass > > So the answer is "yes you can, but you have to configure it". Got it, thank you. location /for-apache.html { proxy_read_timeout 30m; proxy_pass http://127.0.0.1:8080; } - Grant From nginx-forum at nginx.us Sun Mar 10 04:02:09 2013 From: nginx-forum at nginx.us (nano) Date: Sat, 09 Mar 2013 23:02:09 -0500 Subject: Headers set in http {} go missing after setting headers in location {} Message-ID: Here is my nginx configuration http://pastie.org/private/4lceuccm9twmuiozdjnzkg My nginx -V is: nginx version: nginx/1.2.7 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled I noticed that when I had headers (add_header) in the http{ } block, those headers were not being displayed when another add_header was placed in a location{ } block. How can I have global headers sent to the client, and send additional headers when the client reaches a location block? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237172,237172#msg-237172 From nginx-forum at nginx.us Sun Mar 10 04:05:40 2013 From: nginx-forum at nginx.us (nano) Date: Sat, 09 Mar 2013 23:05:40 -0500 Subject: Headers set in http {} go missing after setting headers in location {} In-Reply-To: References: Message-ID: Accidentally pasted the headers twice. 
The config should look like this; http://pastie.org/private/lz9zjkmvd3drbo4ezsp3fg Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237172,237173#msg-237173 From emailgrant at gmail.com Sun Mar 10 05:51:02 2013 From: emailgrant at gmail.com (Grant) Date: Sat, 9 Mar 2013 21:51:02 -0800 Subject: HTTPS header missing from single server Message-ID: How can I make nginx set the HTTPS header in a single http/https server? piwik with force_ssl=1 on apache goes into a redirect loop because it doesn't know SSL is on due to the nginx reverse proxy. There is a piwik bug which references a similar problem and blames the HTTPS header: http://dev.piwik.org/trac/ticket/2073 - Grant From emailgrant at gmail.com Sun Mar 10 05:55:13 2013 From: emailgrant at gmail.com (Grant) Date: Sat, 9 Mar 2013 21:55:13 -0800 Subject: "nginx does not suck at ssl" Message-ID: After reading "nginx does not suck at ssl": http://matt.io/entry/ur I'm using: ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!MEDIUM:!LOW:!EXP:!kEDH:RC4+RSA:+HIGH; Is this a good choice? - Grant From reallfqq-nginx at yahoo.fr Sun Mar 10 09:29:18 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 10 Mar 2013 05:29:18 -0400 Subject: Location regex + if + basic auth to restrict directory access In-Reply-To: References: Message-ID: I'll answer to my own question there: Apparently, yes, evaluating something with the 'if' directive doesn't propagate the environment containing the variables from the 'location' directive. All explained on StackOverflow . The *incorrect* way: location ^~ /documents/(\w+) { if ($1 != $remote_user) { return 503; } } *--> $1 variable is unknown* The *correct* way: location ^~ /documents/(\w+) { set $user $1; if ($user != $remote_user) { return 503; } } Although the syntax is now OK and the configuration is able to be reloaded, it doesn't seem to work at all... When connecting with the user 'abc', I am still able to access /documents/def/mydoc.txt. What's wrong with my way of restricting access? Thanks for any help, --- *B. R.* On Thu, Feb 28, 2013 at 5:36 PM, B.R. wrote: > Hello, > > I am using basic auth + $remote_user variable send to the back-end > application to change context depending on the logged-in user. > > The thing is, even if the page rendered by the back-end uses nginx user > authentication, resources from a directory are still allowed for everyone. > > My 'documents' directory is sorted as follows: > documents/ > abc/ --> stores content for user 'abc' > def/ --> stores content for user 'def' > ... > > I tried the following: > location ^~ /documents/(\w+) { > if ($1 != $remote_user) { > return 503; > } > } > > But Nginx refuses to validate configuration: > nginx: [emerg] unknown "1" variable > nginx: configuration file /etc/nginx/nginx.conf test failed > > Does the 'if' directive have an environment isolated for the on of the > 'location' directive? > Am I using wrong syntax? > Is there a 'IfIsEvil' case corresponding to my needs to avoid the use of > the 'if' directive? > > Thanks, > --- > *B. R.* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Mar 10 09:36:08 2013 From: nginx-forum at nginx.us (edbloom) Date: Sun, 10 Mar 2013 05:36:08 -0400 Subject: Deny rules not working - raw php files being served! Message-ID: Hi all, I'm using a pretty simple WordPress nginx config that is documented on the WordPress codex. http://codex.wordpress.org/Nginx All works fine except for 1 critical aspect. 
The config uses a restrictions.conf which has some fairly simple rules for blocking unauthorized access to specific files and file patterns like the following: # Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac). # Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban) location ~ /\. { deny all; } What I've found is rather than actually denying requests, raw php files are being served up via nginx - which is very odd. Any ideas why this would be happening? Ed Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237177,237177#msg-237177 From nginx-forum at nginx.us Sun Mar 10 10:02:50 2013 From: nginx-forum at nginx.us (mex) Date: Sun, 10 Mar 2013 06:02:50 -0400 Subject: "nginx does not suck at ssl" In-Reply-To: References: Message-ID: one quote from that post i can confirm: > nobody has any idea how SSL performance works esp. when it comes to CIPER1 vs CIPHER, compared oin terms of speed and security. what i can suggest to test if your ssl-implementation is stil secure from a cipher-pov is https://www.ssllabs.com/ssltest/ Grant Wrote: ------------------------------------------------------- > After reading "nginx does not suck at ssl": > > http://matt.io/entry/ur > > I'm using: > > ssl_ciphers > ALL:!aNULL:!ADH:!eNULL:!MEDIUM:!LOW:!EXP:!kEDH:RC4+RSA:+HIGH; > > Is this a good choice? > > - Grant > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237175,237179#msg-237179 From francis at daoine.org Sun Mar 10 10:26:40 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 10 Mar 2013 10:26:40 +0000 Subject: Headers set in http {} go missing after setting headers in location {} In-Reply-To: References: Message-ID: <20130310102640.GH10870@craic.sysops.org> On Sat, Mar 09, 2013 at 11:02:09PM -0500, nano wrote: Hi there, > How can I have global headers sent to the client, and send additional > headers when the client reaches a location block? The short version is "you can't". The longer version is "you can, but you have to configure it the nginx way". Which means that in the final parsed nginx.conf, all the add_header directives that you want to apply to a request are at the same inheritance level. You can "include" a file containing global ones wherever you set local ones; or you can use a macro language to do that for you when creating nginx.conf. f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Mar 10 10:40:31 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 10 Mar 2013 10:40:31 +0000 Subject: HTTPS header missing from single server In-Reply-To: References: Message-ID: <20130310104031.GA22716@craic.sysops.org> On Sat, Mar 09, 2013 at 09:51:02PM -0800, Grant wrote: Hi there, > How can I make nginx set the HTTPS header in a single http/https > server? What is "the HTTPS header"? > piwik with force_ssl=1 on apache goes into a redirect loop > because it doesn't know SSL is on due to the nginx reverse proxy. This sounds like one or more fastcgi_param key/value pairs are not set the way your application wants them to be set. http://nginx.org/r/fastcgi_param is how you set them. 
And it includes an example with the $https variable, which is described in http://nginx.org/en/docs/http/ngx_http_core_module.html#variables The usual nginx directive inheritance rules apply, so you'll want to add your fastcgi_param line to the correct place -- possibly just after you include the external file. f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Mar 10 10:46:57 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 10 Mar 2013 10:46:57 +0000 Subject: Location regex + if + basic auth to restrict directory access In-Reply-To: References: Message-ID: <20130310104657.GB22716@craic.sysops.org> On Sun, Mar 10, 2013 at 05:29:18AM -0400, B.R. wrote: Hi there, > The *correct* way: > location ^~ /documents/(\w+) { > set $user $1; > if ($user != $remote_user) { > return 503; > } > } > > Although the syntax is now OK and the configuration is able to be reloaded, > it doesn't seem to work at all... I haven't tested the "if" part; but in this case it's mostly likely that this location{} block is not being used at all. Your configuration is syntactically correct, so nginx can load it. But it is not semantically correct, as in "it does not mean what you want it to mean". http://nginx.org/r/location "^~" does not mean "this is a regex location" f -- Francis Daly francis at daoine.org From thomas at glanzmann.de Sun Mar 10 12:30:22 2013 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Sun, 10 Mar 2013 13:30:22 +0100 Subject: External Redirect when expecting internal redirect Message-ID: <20130310123021.GA17921@glanzmann.de> Hello, I'm running nginx 1.2.1-2.2 on Debian Wheezy (testing). I try to obtain the following: Depending on the subnet accessing either rewrite internally to a cgi script or to a static Website. For the cgi script that works perfectly fine, for the static web site nginx always does a HTTP 301 instead of an internal rewrite. Here is a stripped down configuration demonstrating the issue: (mini) [/etc/nginx] cat nginx.conf user www-data; pid /var/run/nginx.pid; events { worker_connections 1024; } http { access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; geo $site { 127.0.0.0/8 eva; default blank; } server { listen 80; server_name localhost; root /var/www; location /eva { internal; gzip off; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $https; fastcgi_pass unix:/var/run/fcgiwrap.socket; fastcgi_param SCRIPT_FILENAME /home/sithglan/work/scripts/web/eva.pl; } location /blank { internal; autoindex on; autoindex_exact_size off; } location = / { rewrite ^ /$site last; } } } (mini) [/etc/nginx] /etc/init.d/nginx restart Restarting nginx: nginx. 
(mini) [/etc/nginx] curl -I http://192.168.0.7/ HTTP/1.1 301 Moved Permanently Server: nginx/1.2.1 Date: Sun, 10 Mar 2013 12:28:44 GMT Content-Type: text/html Content-Length: 184 Location: http://192.168.0.7/blank/ Connection: keep-alive (mini) [/etc/nginx] curl -I http://192.168.0.7/ HTTP/1.1 301 Moved Permanently Server: nginx/1.2.1 Date: Sun, 10 Mar 2013 12:28:47 GMT Content-Type: text/html Content-Length: 184 Location: http://192.168.0.7/blank/ Connection: keep-alive I would like to acomplish that when 127.0.0.0/8 access the webserver the client is internally redirected to eva.pl, which works, but if another ip address access a website from static files is displayed, which works was well if I drop the 'internal' keyword from location /blank but not using an internal redirect. What do I have to change in order to make the internal rewrite for the files work? Cheers, Thomas From francis at daoine.org Sun Mar 10 12:57:09 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 10 Mar 2013 12:57:09 +0000 Subject: External Redirect when expecting internal redirect In-Reply-To: <20130310123021.GA17921@glanzmann.de> References: <20130310123021.GA17921@glanzmann.de> Message-ID: <20130310125709.GC22716@craic.sysops.org> On Sun, Mar 10, 2013 at 01:30:22PM +0100, Thomas Glanzmann wrote: Hi there, > Depending on the subnet accessing either rewrite internally to a cgi > script or to a static Website. For the cgi script that works perfectly > fine, for the static web site nginx always does a HTTP 301 instead of an > internal rewrite. What do you expect the user to see with the static web site? As in, what content do you wish nginx to return? > location /blank { > internal; > autoindex on; > autoindex_exact_size off; > } > > location = / { > rewrite ^ /$site last; > } So, the request is for "/", nginx does a rewrite (internal) to "/blank", and that is a directory, so nginx does a redirect (external) to "/blank/". That's pretty much what I expect to happen. (Then the browser requests /blank/ and gets rejected because the location{} is marked "internal".) > I would like to acomplish that when 127.0.0.0/8 access the webserver the client > is internally redirected to eva.pl, which works, but if another ip address > access a website from static files is displayed, which works was well if I drop > the 'internal' keyword from location /blank but not using an internal redirect. > > What do I have to change in order to make the internal rewrite for the files > work? You can use an internal rewrite to a file, provided that you actually rewrite to a file. Here, you rewrite to a directory without including the trailing /. I confess I'm not sure what it is that you want to do. Possibly setting the default in the map to "blank/" will help? But you will have to decide what the next step for the user is. f -- Francis Daly francis at daoine.org From thomas at glanzmann.de Sun Mar 10 13:28:15 2013 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Sun, 10 Mar 2013 14:28:15 +0100 Subject: External Redirect when expecting internal redirect In-Reply-To: <20130310125709.GC22716@craic.sysops.org> References: <20130310123021.GA17921@glanzmann.de> <20130310125709.GC22716@craic.sysops.org> Message-ID: <20130310132815.GA19360@glanzmann.de> Hello Francis, * Francis Daly [2013-03-10 13:57]: > You can use an internal rewrite to a file, provided that you actually > rewrite to a file. Here, you rewrite to a directory without including > the trailing /. I wanted to rewrite to a directory. 
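The fix is simply to put the trailing slash into the default value, roughly:

geo $site {
    127.0.0.0/8 eva;
    default     blank/;
}

so that "rewrite ^ /$site last;" lands on /blank/ (the directory) instead of /blank.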
I see my mistake now and it should have been obvious to me, but was not. > Possibly setting the default in the map to "blank/" will help? Yes, it resolved my issue. Now it works perfectly fine. Thank you for helping me out. I was searching for two hours now. Cheers, Thomas From mdounin at mdounin.ru Sun Mar 10 16:30:11 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 10 Mar 2013 20:30:11 +0400 Subject: error unlink() nginx 1.2.6 In-Reply-To: <9a00e41d844064849ef7ad501d0d904f.NginxMailingListEnglish@forum.nginx.org> References: <20121229150801.GV40452@mdounin.ru> <9a00e41d844064849ef7ad501d0d904f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130310163011.GN15378@mdounin.ru> Hello! On Sat, Mar 09, 2013 at 07:52:40PM -0500, michael.heuberger wrote: > Hello guys > > I'm having this problem too in version 3.1.14 and never do delete these > files manually. How can I solve this? Even if you did not delete files manually, the message still indicate files were somehow removed out of nginx control. You may want to dig further to find out how files were removed. > Maxim, if you say the message is too scary, then why don't we change the > level of this log message from critical to notice? The notice is certainly too low, as this isn't something which should happen under normal conditions. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Sun Mar 10 16:32:57 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 10 Mar 2013 20:32:57 +0400 Subject: imap: invalid header in response while in http auth state In-Reply-To: References: Message-ID: <20130310163256.GO15378@mdounin.ru> Hello! On Thu, Mar 07, 2013 at 01:55:35PM -0800, Grant wrote: > I'm using imapproxy and trying to switch to nginx. courier is > listening on port 143. > > mail { > auth_http localhost:143; > proxy on; > server { > listen 144; > protocol imap; > } > } > > I get: > > auth http server 127.0.0.1:143 sent invalid header in response while > in http auth state, client: 127.0.0.1, server: 0.0.0.0:144 > > Does anyone know what's wrong? You are trying to use auth_http with imap port, which is wrong. It is expected to be http service with headers used for communication, see here: http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Sun Mar 10 16:40:31 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 10 Mar 2013 20:40:31 +0400 Subject: IMAP: auth_http In-Reply-To: References: Message-ID: <20130310164031.GP15378@mdounin.ru> Hello! On Thu, Mar 07, 2013 at 09:16:11PM -0800, Grant wrote: > nginx seems to require being pointed to an HTTP server for imap > authentication. Here's the protocol spec: > > http://wiki.nginx.org/MailCoreModule#Authentication > > Is the idea to program this server yourself or does a server like this > already exist? It's usually a script written individualy for a specific system. Some samples may be found on the wiki, e.g. here: http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Sun Mar 10 17:49:38 2013 From: nginx-forum at nginx.us (nano) Date: Sun, 10 Mar 2013 13:49:38 -0400 Subject: Headers set in http {} go missing after setting headers in location {} In-Reply-To: <20130310102640.GH10870@craic.sysops.org> References: <20130310102640.GH10870@craic.sysops.org> Message-ID: Thank you very much Francis. 
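For anyone else who runs into this, the include approach Francis describes ends up looking roughly like the following (the file name and header names are only examples):

# /etc/nginx/common_headers.conf
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;

server {
    include /etc/nginx/common_headers.conf;

    location /special/ {
        # defining any add_header here stops inheritance from the
        # server level, so the "global" ones are included again
        include /etc/nginx/common_headers.conf;
        add_header X-Extra-Header on;
    }
}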
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237172,237192#msg-237192 From reallfqq-nginx at yahoo.fr Sun Mar 10 19:14:55 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 10 Mar 2013 15:14:55 -0400 Subject: Location regex + if + basic auth to restrict directory access In-Reply-To: <20130310104657.GB22716@craic.sysops.org> References: <20130310104657.GB22716@craic.sysops.org> Message-ID: Hello, Thanks for that... I thought the ^~ was meaning 'starting with regex'... My bad! I changed the symbol for some of the ones relly meaning 'regex' (~*) and it works! :o) If there is no better way than sticking with 'if', then it's all good. Thanks again, problem solved, --- *B. R.* On Sun, Mar 10, 2013 at 6:46 AM, Francis Daly wrote: > On Sun, Mar 10, 2013 at 05:29:18AM -0400, B.R. wrote: > > Hi there, > > > The *correct* way: > > location ^~ /documents/(\w+) { > > set $user $1; > > if ($user != $remote_user) { > > return 503; > > } > > } > > > > Although the syntax is now OK and the configuration is able to be > reloaded, > > it doesn't seem to work at all... > > I haven't tested the "if" part; but in this case it's mostly likely that > this location{} block is not being used at all. > > Your configuration is syntactically correct, so nginx can load it. > > But it is not semantically correct, as in "it does not mean what you want > it to mean". > > http://nginx.org/r/location > > "^~" does not mean "this is a regex location" > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Mar 10 20:07:23 2013 From: nginx-forum at nginx.us (mottwsc) Date: Sun, 10 Mar 2013 16:07:23 -0400 Subject: securing access to a folder - 404 error Message-ID: <3f9b42885c146cd20a56a1e69a001f93.NginxMailingListEnglish@forum.nginx.org> I'm trying to secure a directory on a CentOS 6.3 64 server running NGINX 1.2.7. I think I've set this up correctly, but it keeps giving me a 404 Not Found error when I try to access a file in that folder in the browser using domainName/secure/hello2.html. I created an .htpasswd file using printf "MYUSER:$(openssl passwd -1 MYPASSWORD)\n" >> .htpasswd and put that into the /var/www/protected/ folder. I also modified the NGINX config file and included a location/auth block for the /secure/ folder: # protect the "secure" folder ( /var/www/html/secure ) location ^~ /secure/ { auth_basic "Restricted"; auth_basic_user_file /var/www/protected/.htpasswd; } If I comment out this block from the config file and restart NGINX, I can see the file in the browser with no problem. I even moved the .htpasswd file into the /secure/ folder and changed the config file to reflect that change (just to see what would happen), but I still get the 404 Not Found error. Can anyone tell me what I'm missing? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237196,237196#msg-237196 From emailgrant at gmail.com Sun Mar 10 21:41:12 2013 From: emailgrant at gmail.com (Grant) Date: Sun, 10 Mar 2013 14:41:12 -0700 Subject: "nginx does not suck at ssl" In-Reply-To: References: Message-ID: > one quote from that post i can confirm: > >> nobody has any idea how SSL performance works > > esp. when it comes to CIPER1 vs CIPHER, compared > oin terms of speed and security. 
> > what i can suggest to test if your ssl-implementation is stil > secure from a cipher-pov is > https://www.ssllabs.com/ssltest/ All things considered, do you think it's best to leave ssl_ciphers default? - Grant >> After reading "nginx does not suck at ssl": >> >> http://matt.io/entry/ur >> >> I'm using: >> >> ssl_ciphers >> ALL:!aNULL:!ADH:!eNULL:!MEDIUM:!LOW:!EXP:!kEDH:RC4+RSA:+HIGH; >> >> Is this a good choice? >> >> - Grant From emailgrant at gmail.com Sun Mar 10 21:43:11 2013 From: emailgrant at gmail.com (Grant) Date: Sun, 10 Mar 2013 14:43:11 -0700 Subject: IMAP: auth_http In-Reply-To: <20130310164031.GP15378@mdounin.ru> References: <20130310164031.GP15378@mdounin.ru> Message-ID: >> nginx seems to require being pointed to an HTTP server for imap >> authentication. Here's the protocol spec: >> >> http://wiki.nginx.org/MailCoreModule#Authentication >> >> Is the idea to program this server yourself or does a server like this >> already exist? > > It's usually a script written individualy for a specific system. > Some samples may be found on the wiki, e.g. here: > > http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript In that case I request for nginx's imap proxy to function more like imapproxy which is easier to set up. - Grant From emailgrant at gmail.com Sun Mar 10 22:24:46 2013 From: emailgrant at gmail.com (Grant) Date: Sun, 10 Mar 2013 15:24:46 -0700 Subject: HTTPS header missing from single server In-Reply-To: <20130310104031.GA22716@craic.sysops.org> References: <20130310104031.GA22716@craic.sysops.org> Message-ID: >> How can I make nginx set the HTTPS header in a single http/https >> server? > > What is "the HTTPS header"? I meant to say HTTPS environment variable. >> piwik with force_ssl=1 on apache goes into a redirect loop >> because it doesn't know SSL is on due to the nginx reverse proxy. > > This sounds like one or more fastcgi_param key/value pairs are not set > the way your application wants them to be set. > > http://nginx.org/r/fastcgi_param is how you set them. And it > includes an example with the $https variable, which is described in > http://nginx.org/en/docs/http/ngx_http_core_module.html#variables I should have mentioned that I'm using proxy_pass. I was able to get it working like this: proxy_set_header X-Forwarded-Proto $scheme; - Grant From appa at perusio.net Mon Mar 11 00:42:36 2013 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Mon, 11 Mar 2013 01:42:36 +0100 Subject: SPDY patch not applying cleanly. Message-ID: <874ngibwcj.wl%appa@perusio.net> Hello, Apparently the SPDY patch doesn't apply cleanly to 1.3.14. See the results below for the offending files. I suppose that the patch should apply cleanly to the "dev" branch? Thanks, --- appa |# HG changeset patch |# User Valentin Bartenev |# Date 1362681099 -14400 |# Node ID b2981903b9bd996870a870b78a1409dbb9d4f528 |# Parent 3dc1c0a21acbd7c0ed843c25f3cabb1879ae3480 |Respect the new behavior of TCP_DEFER_ACCEPT. | |In Linux 2.6.32, TCP_DEFER_ACCEPT was changed to accept connections |after the deferring period is finished without any data available. |(Reading from the socket returns EAGAIN in this case.) | |Since in nginx TCP_DEFER_ACCEPT is set to "post_accept_timeout", we |do not need to wait longer if deferred accept returns with no data. 
| |diff -r 3dc1c0a21acb -r b2981903b9bd src/http/ngx_http_request.c |--- a/src/http/ngx_http_request.c Thu Mar 07 22:31:30 2013 +0400 |+++ b/src/http/ngx_http_request.c Thu Mar 07 22:31:39 2013 +0400 -------------------------- Patching file src/http/ngx_http_request.c using Plan A... Hunk #1 FAILED at 416. Hunk #2 succeeded at 540 (offset -77 lines). 1 out of 2 hunks FAILED -- saving rejects to file src/http/ngx_http_request.c.rej -------------------------- |# HG changeset patch |# User Valentin Bartenev |# Date 1362681142 -14400 |# Node ID 4fddad5dfe0a97bd6a48412fb4b1e8be0016f900 |# Parent d0a841d5f19a2ef2d9e15c52ccfb76774413c59b |Refactored ngx_http_init_request(). | |Now it can be used as the request object factory with minimal impact on the |connection object. Therefore it was renamed to ngx_http_create_request(). | |diff -r d0a841d5f19a -r 4fddad5dfe0a src/http/ngx_http_request.c |--- a/src/http/ngx_http_request.c Tue Oct 30 16:14:21 2012 +0400 |+++ b/src/http/ngx_http_request.c Thu Mar 07 22:32:22 2013 +0400 -------------------------- Patching file src/http/ngx_http_request.c using Plan A... Hunk #1 FAILED at 11. Hunk #2 FAILED at 466. Hunk #3 FAILED at 483. Hunk #4 succeeded at 392 (offset -101 lines). Hunk #5 succeeded at 434 (offset -91 lines). Hunk #6 FAILED at 540. Hunk #7 succeeded at 492 (offset -90 lines). Hunk #8 succeeded at 2612 (offset -99 lines). Hunk #9 FAILED at 2759. Hunk #10 FAILED at 3008. 6 out of 10 hunks FAILED -- saving rejects to file src/http/ngx_http_request.c.rej -------------------------- |# HG changeset patch |# User Valentin Bartenev |# Date 1362683272 -14400 |# Node ID 5d86db8e27827ad1f2ff4ebc560ecb95372e43e3 |# Parent 69a3b3c0751ce2ed8c3eb2580c37027db7f2817e |Allow to reuse connections that wait their first request. | |This should improve behavior under deficiency of connections. | |diff -r 69a3b3c0751c -r 5d86db8e2782 src/http/ngx_http_request.c |--- a/src/http/ngx_http_request.c Thu Mar 07 22:32:46 2013 +0400 |+++ b/src/http/ngx_http_request.c Thu Mar 07 23:07:52 2013 +0400 -------------------------- Patching file src/http/ngx_http_request.c using Plan A... Hunk #1 succeeded at 353 (offset -1 lines). Hunk #2 succeeded at 388 with fuzz 2 (offset 3 lines). Hunk #3 FAILED at 439. Hunk #4 FAILED at 473. Hunk #5 succeeded at 538 (offset -80 lines). Hunk #6 succeeded at 593 (offset -89 lines). Hunk #7 FAILED at 728. 3 out of 7 hunks FAILED -- saving rejects to file src/http/ngx_http_request.c.rej -------------------------- |diff -r c487eaf83bb7 -r 3e63aa2c08b7 src/http/modules/ngx_http_ssl_module.c |--- a/src/http/modules/ngx_http_ssl_module.c Thu Mar 07 23:08:05 2013 +0400 |+++ b/src/http/modules/ngx_http_ssl_module.c Sun Mar 10 21:23:04 2013 +0400 -------------------------- Patching file src/http/modules/ngx_http_ssl_module.c using Plan A... Hunk #1 FAILED at 275. 1 out of 1 hunk FAILED -- saving rejects to file src/http/modules/ngx_http_ssl_module.c.rej -------------------------- |diff -r c487eaf83bb7 -r 3e63aa2c08b7 src/http/ngx_http_request.c |--- a/src/http/ngx_http_request.c Thu Mar 07 23:08:05 2013 +0400 |+++ b/src/http/ngx_http_request.c Sun Mar 10 21:23:04 2013 +0400 -------------------------- Patching file src/http/ngx_http_request.c using Plan A... Hunk #1 FAILED at 11. Hunk #2 succeeded at 30 (offset -1 lines). Hunk #3 succeeded at 38 (offset -1 lines). Hunk #4 succeeded at 51 (offset -1 lines). Hunk #5 succeeded at 310 with fuzz 1 (offset -1 lines). Hunk #6 FAILED at 487. Hunk #7 FAILED at 726. Hunk #8 succeeded at 1574 (offset -108 lines). 
Hunk #9 succeeded at 1644 (offset -108 lines). Hunk #10 succeeded at 1982 (offset -108 lines). Hunk #11 succeeded at 2321 (offset -108 lines). Hunk #12 succeeded at 2382 (offset -108 lines). Hunk #13 succeeded at 2535 (offset -108 lines). Hunk #14 succeeded at 3157 (offset -134 lines). Hunk #15 succeeded at 3252 (offset -134 lines). Hunk #16 succeeded at 3270 (offset -134 lines). 3 out of 16 hunks FAILED -- saving rejects to file src/http/ngx_http_request.c.rej -------------------------- |diff -r c487eaf83bb7 -r 3e63aa2c08b7 src/http/ngx_http_request.h |--- a/src/http/ngx_http_request.h Thu Mar 07 23:08:05 2013 +0400 |+++ b/src/http/ngx_http_request.h Sun Mar 10 21:23:04 2013 +0400 -------------------------- Patching file src/http/ngx_http_request.h using Plan A... Hunk #1 succeeded at 291. Hunk #2 FAILED at 311. Hunk #3 succeeded at 428 (offset -1 lines). 1 out of 3 hunks FAILED -- saving rejects to file src/http/ngx_http_request.h.rej From vbart at nginx.com Mon Mar 11 00:58:20 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 11 Mar 2013 04:58:20 +0400 Subject: SPDY patch not applying cleanly. In-Reply-To: <874ngibwcj.wl%appa@perusio.net> References: <874ngibwcj.wl%appa@perusio.net> Message-ID: <201303110458.20152.vbart@nginx.com> On Monday 11 March 2013 04:42:36 Ant?nio P. P. Almeida wrote: > Hello, > > Apparently the SPDY patch doesn't apply cleanly to 1.3.14. See the > results below for the offending files. [...] You're probably doing something wrong. I've just checked it myself: % wget -q http://nginx.org/download/nginx-1.3.14.tar.gz % tar xzf nginx-1.3.14.tar.gz % cd nginx-1.3.14 % wget -q http://nginx.org/patches/spdy/patch.spdy.txt % patch -p1 < patch.spdy.txt patching file src/http/ngx_http_request.c patching file src/http/ngx_http_request.c patching file src/http/ngx_http_request.c patching file src/core/ngx_connection.c patching file src/core/ngx_connection.h patching file src/http/ngx_http_request.c patching file src/http/ngx_http_upstream.c patching file src/http/ngx_http_request.c patching file src/http/ngx_http_request.h patching file src/http/modules/ngx_http_ssl_module.c patching file src/http/modules/ngx_http_gzip_filter_module.c patching file src/http/ngx_http_request.c patching file src/core/ngx_connection.c patching file src/event/ngx_event.c patching file src/event/ngx_event.h patching file src/http/modules/ngx_http_stub_status_module.c patching file auto/modules patching file auto/options patching file auto/sources patching file src/http/modules/ngx_http_ssl_module.c patching file src/http/ngx_http.c patching file src/http/ngx_http.h patching file src/http/ngx_http_core_module.c patching file src/http/ngx_http_core_module.h patching file src/http/ngx_http_parse.c patching file src/http/ngx_http_request.c patching file src/http/ngx_http_request.h patching file src/http/ngx_http_request_body.c patching file src/http/ngx_http_spdy.c patching file src/http/ngx_http_spdy.h patching file src/http/ngx_http_spdy_filter_module.c patching file src/http/ngx_http_spdy_module.c patching file src/http/ngx_http_spdy_module.h patching file src/http/ngx_http_upstream.c No errors. wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html > |# HG changeset patch > |# User Valentin Bartenev > |# Date 1362681099 -14400 > |# Node ID b2981903b9bd996870a870b78a1409dbb9d4f528 > |# Parent 3dc1c0a21acbd7c0ed843c25f3cabb1879ae3480 > |Respect the new behavior of TCP_DEFER_ACCEPT. 
> | > |In Linux 2.6.32, TCP_DEFER_ACCEPT was changed to accept connections > |after the deferring period is finished without any data available. > |(Reading from the socket returns EAGAIN in this case.) > | > |Since in nginx TCP_DEFER_ACCEPT is set to "post_accept_timeout", we > |do not need to wait longer if deferred accept returns with no data. > | > |diff -r 3dc1c0a21acb -r b2981903b9bd src/http/ngx_http_request.c > |--- a/src/http/ngx_http_request.c Thu Mar 07 22:31:30 2013 +0400 > |+++ b/src/http/ngx_http_request.c Thu Mar 07 22:31:39 2013 +0400 > > -------------------------- > Patching file src/http/ngx_http_request.c using Plan A... > Hunk #1 FAILED at 416. > Hunk #2 succeeded at 540 (offset -77 lines). > 1 out of 2 hunks FAILED -- saving rejects to file > src/http/ngx_http_request.c.rej > > -------------------------- > > |# HG changeset patch > |# User Valentin Bartenev > |# Date 1362681142 -14400 > |# Node ID 4fddad5dfe0a97bd6a48412fb4b1e8be0016f900 > |# Parent d0a841d5f19a2ef2d9e15c52ccfb76774413c59b > |Refactored ngx_http_init_request(). > | > |Now it can be used as the request object factory with minimal impact on > |the connection object. Therefore it was renamed to > |ngx_http_create_request(). > | > |diff -r d0a841d5f19a -r 4fddad5dfe0a src/http/ngx_http_request.c > |--- a/src/http/ngx_http_request.c Tue Oct 30 16:14:21 2012 +0400 > |+++ b/src/http/ngx_http_request.c Thu Mar 07 22:32:22 2013 +0400 > > -------------------------- > Patching file src/http/ngx_http_request.c using Plan A... > Hunk #1 FAILED at 11. > Hunk #2 FAILED at 466. > Hunk #3 FAILED at 483. > Hunk #4 succeeded at 392 (offset -101 lines). > Hunk #5 succeeded at 434 (offset -91 lines). > Hunk #6 FAILED at 540. > Hunk #7 succeeded at 492 (offset -90 lines). > Hunk #8 succeeded at 2612 (offset -99 lines). > Hunk #9 FAILED at 2759. > Hunk #10 FAILED at 3008. > 6 out of 10 hunks FAILED -- saving rejects to file > src/http/ngx_http_request.c.rej > > -------------------------- > > |# HG changeset patch > |# User Valentin Bartenev > |# Date 1362683272 -14400 > |# Node ID 5d86db8e27827ad1f2ff4ebc560ecb95372e43e3 > |# Parent 69a3b3c0751ce2ed8c3eb2580c37027db7f2817e > |Allow to reuse connections that wait their first request. > | > |This should improve behavior under deficiency of connections. > | > |diff -r 69a3b3c0751c -r 5d86db8e2782 src/http/ngx_http_request.c > |--- a/src/http/ngx_http_request.c Thu Mar 07 22:32:46 2013 +0400 > |+++ b/src/http/ngx_http_request.c Thu Mar 07 23:07:52 2013 +0400 > > -------------------------- > Patching file src/http/ngx_http_request.c using Plan A... > Hunk #1 succeeded at 353 (offset -1 lines). > Hunk #2 succeeded at 388 with fuzz 2 (offset 3 lines). > Hunk #3 FAILED at 439. > Hunk #4 FAILED at 473. > Hunk #5 succeeded at 538 (offset -80 lines). > Hunk #6 succeeded at 593 (offset -89 lines). > Hunk #7 FAILED at 728. > 3 out of 7 hunks FAILED -- saving rejects to file > src/http/ngx_http_request.c.rej > > -------------------------- > > |diff -r c487eaf83bb7 -r 3e63aa2c08b7 > |src/http/modules/ngx_http_ssl_module.c --- > |a/src/http/modules/ngx_http_ssl_module.c Thu Mar 07 23:08:05 2013 +0400 > |+++ b/src/http/modules/ngx_http_ssl_module.c Sun Mar 10 21:23:04 2013 > |+0400 > > -------------------------- > Patching file src/http/modules/ngx_http_ssl_module.c using Plan A... > Hunk #1 FAILED at 275. 
> 1 out of 1 hunk FAILED -- saving rejects to file > src/http/modules/ngx_http_ssl_module.c.rej > > -------------------------- > > |diff -r c487eaf83bb7 -r 3e63aa2c08b7 src/http/ngx_http_request.c > |--- a/src/http/ngx_http_request.c Thu Mar 07 23:08:05 2013 +0400 > |+++ b/src/http/ngx_http_request.c Sun Mar 10 21:23:04 2013 +0400 > > -------------------------- > Patching file src/http/ngx_http_request.c using Plan A... > Hunk #1 FAILED at 11. > Hunk #2 succeeded at 30 (offset -1 lines). > Hunk #3 succeeded at 38 (offset -1 lines). > Hunk #4 succeeded at 51 (offset -1 lines). > Hunk #5 succeeded at 310 with fuzz 1 (offset -1 lines). > Hunk #6 FAILED at 487. > Hunk #7 FAILED at 726. > Hunk #8 succeeded at 1574 (offset -108 lines). > Hunk #9 succeeded at 1644 (offset -108 lines). > Hunk #10 succeeded at 1982 (offset -108 lines). > Hunk #11 succeeded at 2321 (offset -108 lines). > Hunk #12 succeeded at 2382 (offset -108 lines). > Hunk #13 succeeded at 2535 (offset -108 lines). > Hunk #14 succeeded at 3157 (offset -134 lines). > Hunk #15 succeeded at 3252 (offset -134 lines). > Hunk #16 succeeded at 3270 (offset -134 lines). > 3 out of 16 hunks FAILED -- saving rejects to file > src/http/ngx_http_request.c.rej > > -------------------------- > > |diff -r c487eaf83bb7 -r 3e63aa2c08b7 src/http/ngx_http_request.h > |--- a/src/http/ngx_http_request.h Thu Mar 07 23:08:05 2013 +0400 > |+++ b/src/http/ngx_http_request.h Sun Mar 10 21:23:04 2013 +0400 > > -------------------------- > Patching file src/http/ngx_http_request.h using Plan A... > Hunk #1 succeeded at 291. > Hunk #2 FAILED at 311. > Hunk #3 succeeded at 428 (offset -1 lines). > 1 out of 3 hunks FAILED -- saving rejects to file > src/http/ngx_http_request.h.rej > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From appa at perusio.net Mon Mar 11 01:18:44 2013 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Mon, 11 Mar 2013 02:18:44 +0100 Subject: SPDY patch not applying cleanly. In-Reply-To: <201303110458.20152.vbart@nginx.com> References: <874ngibwcj.wl%appa@perusio.net> <201303110458.20152.vbart@nginx.com> Message-ID: <871ubmbuob.wl%appa@perusio.net> On 11 Mar 2013 01h58 CET, vbart at nginx.com wrote: > You're probably doing something wrong. I've just checked it myself: I just reproduced all your commands below with the same result. 
I.e., no clean application, with rejection of some hunks :( --- appa > % wget -q http://nginx.org/download/nginx-1.3.14.tar.gz > % tar xzf nginx-1.3.14.tar.gz > % cd nginx-1.3.14 > % wget -q http://nginx.org/patches/spdy/patch.spdy.txt > % patch -p1 < patch.spdy.txt > patching file src/http/ngx_http_request.c > patching file src/http/ngx_http_request.c > patching file src/http/ngx_http_request.c > patching file src/core/ngx_connection.c > patching file src/core/ngx_connection.h > patching file src/http/ngx_http_request.c > patching file src/http/ngx_http_upstream.c > patching file src/http/ngx_http_request.c > patching file src/http/ngx_http_request.h > patching file src/http/modules/ngx_http_ssl_module.c > patching file src/http/modules/ngx_http_gzip_filter_module.c > patching file src/http/ngx_http_request.c > patching file src/core/ngx_connection.c > patching file src/event/ngx_event.c > patching file src/event/ngx_event.h > patching file src/http/modules/ngx_http_stub_status_module.c > patching file auto/modules > patching file auto/options > patching file auto/sources > patching file src/http/modules/ngx_http_ssl_module.c > patching file src/http/ngx_http.c > patching file src/http/ngx_http.h > patching file src/http/ngx_http_core_module.c > patching file src/http/ngx_http_core_module.h > patching file src/http/ngx_http_parse.c > patching file src/http/ngx_http_request.c > patching file src/http/ngx_http_request.h > patching file src/http/ngx_http_request_body.c > patching file src/http/ngx_http_spdy.c > patching file src/http/ngx_http_spdy.h > patching file src/http/ngx_http_spdy_filter_module.c > patching file src/http/ngx_http_spdy_module.c > patching file src/http/ngx_http_spdy_module.h > patching file src/http/ngx_http_upstream.c > > No errors. > > wbr, Valentin V. Bartenev From vbart at nginx.com Mon Mar 11 02:17:36 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 11 Mar 2013 06:17:36 +0400 Subject: SPDY patch not applying cleanly. In-Reply-To: <871ubmbuob.wl%appa@perusio.net> References: <874ngibwcj.wl%appa@perusio.net> <201303110458.20152.vbart@nginx.com> <871ubmbuob.wl%appa@perusio.net> Message-ID: <201303110617.36534.vbart@nginx.com> On Monday 11 March 2013 05:18:44 Ant?nio P. P. Almeida wrote: > On 11 Mar 2013 01h58 CET, vbart at nginx.com wrote: > > You're probably doing something wrong. I've just checked it myself: > I just reproduced all your commands below with the same result. I.e., > no clean application, with rejection of some hunks :( > Please, verify the patch file: % md5sum patch.spdy.txt a5cb5cb3fc8a8e04efb62b2f8f48a5ac patch.spdy.txt % shasum patch.spdy.txt 1a9ffddffbde0812b67eaca91a22ff0aa17293cc patch.spdy.txt Also, note that "patch --dry-run" (or "patch -C" in BSD world) does not work with multiple patches in one file. wbr, Valentin V. 
Bartenev -- http://nginx.org/en/donation.html > > > % wget -q http://nginx.org/download/nginx-1.3.14.tar.gz > > % tar xzf nginx-1.3.14.tar.gz > > % cd nginx-1.3.14 > > % wget -q http://nginx.org/patches/spdy/patch.spdy.txt > > % patch -p1 < patch.spdy.txt > > patching file src/http/ngx_http_request.c > > patching file src/http/ngx_http_request.c > > patching file src/http/ngx_http_request.c > > patching file src/core/ngx_connection.c > > patching file src/core/ngx_connection.h > > patching file src/http/ngx_http_request.c > > patching file src/http/ngx_http_upstream.c > > patching file src/http/ngx_http_request.c > > patching file src/http/ngx_http_request.h > > patching file src/http/modules/ngx_http_ssl_module.c > > patching file src/http/modules/ngx_http_gzip_filter_module.c > > patching file src/http/ngx_http_request.c > > patching file src/core/ngx_connection.c > > patching file src/event/ngx_event.c > > patching file src/event/ngx_event.h > > patching file src/http/modules/ngx_http_stub_status_module.c > > patching file auto/modules > > patching file auto/options > > patching file auto/sources > > patching file src/http/modules/ngx_http_ssl_module.c > > patching file src/http/ngx_http.c > > patching file src/http/ngx_http.h > > patching file src/http/ngx_http_core_module.c > > patching file src/http/ngx_http_core_module.h > > patching file src/http/ngx_http_parse.c > > patching file src/http/ngx_http_request.c > > patching file src/http/ngx_http_request.h > > patching file src/http/ngx_http_request_body.c > > patching file src/http/ngx_http_spdy.c > > patching file src/http/ngx_http_spdy.h > > patching file src/http/ngx_http_spdy_filter_module.c > > patching file src/http/ngx_http_spdy_module.c > > patching file src/http/ngx_http_spdy_module.h > > patching file src/http/ngx_http_upstream.c > > > > No errors. > > > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From emailgrant at gmail.com Mon Mar 11 04:48:47 2013 From: emailgrant at gmail.com (Grant) Date: Sun, 10 Mar 2013 21:48:47 -0700 Subject: SSL default changes? Message-ID: It looks like these changes from default are required for SSL session resumption and to mitigate the BEAST SSL vulnerability: ssl_session_cache shared:SSL:10m; ssl_ciphers RC4:HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; Should the defaults be changed to these? - Grant From maxim at nginx.com Mon Mar 11 07:31:43 2013 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 11 Mar 2013 11:31:43 +0400 Subject: SPDY patch not applying cleanly. In-Reply-To: <871ubmbuob.wl%appa@perusio.net> References: <874ngibwcj.wl%appa@perusio.net> <201303110458.20152.vbart@nginx.com> <871ubmbuob.wl%appa@perusio.net> Message-ID: <513D885F.4010007@nginx.com> On 3/11/13 5:18 AM, Ant?nio P. P. Almeida wrote: > On 11 Mar 2013 01h58 CET, vbart at nginx.com wrote: > >> You're probably doing something wrong. I've just checked it myself: > > I just reproduced all your commands below with the same result. I.e., > no clean application, with rejection of some hunks :( > > --- appa > >> % wget -q http://nginx.org/download/nginx-1.3.14.tar.gz >> % tar xzf nginx-1.3.14.tar.gz >> % cd nginx-1.3.14 >> % wget -q http://nginx.org/patches/spdy/patch.spdy.txt >> % patch -p1 < patch.spdy.txt Works just fine here. 
-- Maxim Konovalov +7 (910) 4293178 http://nginx.com/services.html From ru at nginx.com Mon Mar 11 07:40:42 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 11 Mar 2013 11:40:42 +0400 Subject: how work ip_hash and weight in nginx 1.2.7 In-Reply-To: <513AB308.4040003@greengecko.co.nz> References: <592de3998b05b3b3570b7da781dca944.NginxMailingListEnglish@forum.nginx.org> <93fe874f5db7a6070208cd2c6f66a70f.NginxMailingListEnglish@forum.nginx.org> <513AB308.4040003@greengecko.co.nz> Message-ID: <20130311074042.GD23048@lo0.su> On Sat, Mar 09, 2013 at 04:56:56PM +1300, Steve Holdoway wrote: > On 09/03/13 16:51, moke110007 wrote: > > Nobody reply. > > Tested,iphash and weight,support balance. > > Over. > > > > > Last time I used it, weight wasn't supported on iphash so I just used > multiple entries to weight instead. Quote from http://nginx.org/r/ip_hash: : Until versions 1.3.1 and 1.2.2 it was not possible to specify a : weight for servers using the ip_hash load balancing method. From kirilk at cloudxcel.com Mon Mar 11 07:58:28 2013 From: kirilk at cloudxcel.com (Kiril Kalchev) Date: Mon, 11 Mar 2013 09:58:28 +0200 Subject: error unlink() nginx 1.2.6 In-Reply-To: <20130310163011.GN15378@mdounin.ru> References: <20121229150801.GV40452@mdounin.ru> <9a00e41d844064849ef7ad501d0d904f.NginxMailingListEnglish@forum.nginx.org> <20130310163011.GN15378@mdounin.ru> Message-ID: <58E3DE39-B2C8-40CB-9DB2-B0EA43DC3144@cloudxcel.com> Hello! After I read the thread, I am wondering what is the recommended way to purge nginx cache? Regards, Kiril On Mar 10, 2013, at 6:30 PM, Maxim Dounin wrote: > Hello! > > On Sat, Mar 09, 2013 at 07:52:40PM -0500, michael.heuberger wrote: > >> Hello guys >> >> I'm having this problem too in version 3.1.14 and never do delete these >> files manually. How can I solve this? > > Even if you did not delete files manually, the message still > indicate files were somehow removed out of nginx control. You > may want to dig further to find out how files were removed. > >> Maxim, if you say the message is too scary, then why don't we change the >> level of this log message from critical to notice? > > The notice is certainly too low, as this isn't something which > should happen under normal conditions. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3565 bytes Desc: not available URL: From m6rkalan at gmail.com Mon Mar 11 08:41:31 2013 From: m6rkalan at gmail.com (Mark Alan) Date: Mon, 11 Mar 2013 08:41:31 +0000 Subject: "nginx does not suck at ssl" In-Reply-To: References: Message-ID: <513d98bd.c462b40a.36eb.ffff9598@mx.google.com> On Sat, 9 Mar 2013 21:55:13 -0800, Grant wrote: > After reading "nginx does not suck at ssl": > > http://matt.io/entry/ur > > I'm using: > > ssl_ciphers > ALL:!aNULL:!ADH:!eNULL:!MEDIUM:!LOW:!EXP:!kEDH:RC4+RSA:+HIGH; Some of us use the following to mitigate BEAST attacks: ssl_ciphers ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:RC4:HIGH:!aNULL:!MD5:!EDH; r. M. From wrr at mixedbit.org Mon Mar 11 09:20:10 2013 From: wrr at mixedbit.org (Jan Wrobel) Date: Mon, 11 Mar 2013 10:20:10 +0100 Subject: auth_request: is it possible to return auth_request result directly to client? 
Message-ID: Hi, Currently I have auth_request module configured to return static pages on 401 and 403 errors. This looks like this: auth_request /auth/api/is-authorized/; error_page 401 /auth/login.html; error_page 403 /auth/not_authorized.html; Instead of this, is it possible to return 401 and 403 responses (along with bodies) from the authentication backend directly to the client? Thanks, Jan From nhadie at gmail.com Mon Mar 11 09:52:20 2013 From: nhadie at gmail.com (ron ramos) Date: Mon, 11 Mar 2013 17:52:20 +0800 Subject: multiple docroot Message-ID: Hi All, Is it possible to have multiple docroot for a single domain? and will load different docroot based on condition? our developers are currently developing our application but based on a new framework. so they would like to be able to have the legacy framework and the new framework to co-exists to ensure that the app will have less error when new framework is launched. one sample condition is mobile, if nginx detects that it is user-agent is from mobile it will use legacy if not use new framework. that is just one condition, we will have other conditions as well. is it possible on nginx? Regards, Ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Mar 11 10:53:24 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 Mar 2013 14:53:24 +0400 Subject: SSL default changes? In-Reply-To: References: Message-ID: <20130311105324.GQ15378@mdounin.ru> Hello! On Sun, Mar 10, 2013 at 09:48:47PM -0700, Grant wrote: > It looks like these changes from default are required for SSL session > resumption and to mitigate the BEAST SSL vulnerability: > > ssl_session_cache shared:SSL:10m; > ssl_ciphers RC4:HIGH:!aNULL:!MD5; > ssl_prefer_server_ciphers on; > > Should the defaults be changed to these? The BEAST attack could be mitigated by various means, including switching to TLS 1.1/1.2 (you probably do not want to due to compatibility reasons) and/or fixing it on a client side (which is considered to be right solution and already implemented by all modern browsers). Use of the RC4 cipher is more a workaround than a permanent solution, and hence there are no plans to make it the default. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Mar 11 10:55:17 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 Mar 2013 14:55:17 +0400 Subject: auth_request: is it possible to return auth_request result directly to client? In-Reply-To: References: Message-ID: <20130311105517.GR15378@mdounin.ru> Hello! On Mon, Mar 11, 2013 at 10:20:10AM +0100, Jan Wrobel wrote: > Hi, > > Currently I have auth_request module configured to return static pages > on 401 and 403 errors. This looks like this: > > auth_request /auth/api/is-authorized/; > error_page 401 /auth/login.html; > error_page 403 /auth/not_authorized.html; > > Instead of this, is it possible to return 401 and 403 responses (along > with bodies) from the authentication backend directly to the client? No. 
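(The closest workaround, for anyone who still wants those bodies produced by the backend, is to keep auth_request as-is and point error_page at URIs that are themselves proxied to that backend, so nginx re-requests the page instead of reusing the subrequest response. A rough sketch -- the upstream name and the /auth/ URIs are invented for illustration:

    auth_request /auth/api/is-authorized/;

    error_page 401 /auth/login/;
    error_page 403 /auth/not-authorized/;

    location /auth/ {
        proxy_pass http://auth_backend;   # assumed upstream serving the login / error pages
    }

The client still sees the 401/403 status produced by auth_request; only the response bodies are fetched from the backend with a second request.)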
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Mar 11 11:17:08 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 Mar 2013 15:17:08 +0400 Subject: error unlink() nginx 1.2.6 In-Reply-To: <58E3DE39-B2C8-40CB-9DB2-B0EA43DC3144@cloudxcel.com> References: <20121229150801.GV40452@mdounin.ru> <9a00e41d844064849ef7ad501d0d904f.NginxMailingListEnglish@forum.nginx.org> <20130310163011.GN15378@mdounin.ru> <58E3DE39-B2C8-40CB-9DB2-B0EA43DC3144@cloudxcel.com> Message-ID: <20130311111708.GT15378@mdounin.ru> Hello! On Mon, Mar 11, 2013 at 09:58:28AM +0200, Kiril Kalchev wrote: > After I read the thread, I am wondering what is the recommended way to purge nginx cache? Recommended way is to assume you can't purge the cache, much like you can't purge caches in client's browsers and/or intermediate forward proxies. A week ago I've explained this to Michael here: http://mailman.nginx.org/pipermail/nginx/2013-March/037848.html -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Mar 11 11:29:48 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 Mar 2013 15:29:48 +0400 Subject: IMAP: auth_http In-Reply-To: References: <20130310164031.GP15378@mdounin.ru> Message-ID: <20130311112948.GU15378@mdounin.ru> Hello! On Sun, Mar 10, 2013 at 02:43:11PM -0700, Grant wrote: > >> nginx seems to require being pointed to an HTTP server for imap > >> authentication. Here's the protocol spec: > >> > >> http://wiki.nginx.org/MailCoreModule#Authentication > >> > >> Is the idea to program this server yourself or does a server like this > >> already exist? > > > > It's usually a script written individualy for a specific system. > > Some samples may be found on the wiki, e.g. here: > > > > http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript > > In that case I request for nginx's imap proxy to function more like > imapproxy which is easier to set up. The goal of nginx imap proxy is to route client's connections to different backends, which is very different from what imapproxy does. It's more like a perdition. If you want nginx to just proxy all connections to a predefined backend server, you may use something like location = /mailauth { add_header Auth-Status OK; add_header Auth-Server 127.0.0.1; add_header Auth-Port 8143; add_header Auth-Wait 1; return 204; } as a dummy auth script. -- Maxim Dounin http://nginx.org/en/donation.html From kirilk at cloudxcel.com Mon Mar 11 11:38:28 2013 From: kirilk at cloudxcel.com (Kiril Kalchev) Date: Mon, 11 Mar 2013 13:38:28 +0200 Subject: error unlink() nginx 1.2.6 In-Reply-To: <20130311111708.GT15378@mdounin.ru> References: <20121229150801.GV40452@mdounin.ru> <9a00e41d844064849ef7ad501d0d904f.NginxMailingListEnglish@forum.nginx.org> <20130310163011.GN15378@mdounin.ru> <58E3DE39-B2C8-40CB-9DB2-B0EA43DC3144@cloudxcel.com> <20130311111708.GT15378@mdounin.ru> Message-ID: <83A76EDD-72B1-403F-AC6D-880599D48F50@cloudxcel.com> Thank you very much, and sorry for the repeated question. I will dig deeper before asking next time. Regards, Kiril On Mar 11, 2013, at 1:17 PM, Maxim Dounin wrote: > Hello! > > On Mon, Mar 11, 2013 at 09:58:28AM +0200, Kiril Kalchev wrote: > >> After I read the thread, I am wondering what is the recommended way to purge nginx cache? > > Recommended way is to assume you can't purge the cache, much like > you can't purge caches in client's browsers and/or intermediate > forward proxies. 
A week ago I've explained this to Michael here: > > http://mailman.nginx.org/pipermail/nginx/2013-March/037848.html > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3565 bytes Desc: not available URL: From mdounin at mdounin.ru Mon Mar 11 12:12:20 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 Mar 2013 16:12:20 +0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: References: Message-ID: <20130311121220.GX15378@mdounin.ru> Hello! On Sat, Mar 09, 2013 at 10:43:47PM +0800, Ji Zhang wrote: > Hi, > > I'm doing some research on FastCGI recently. As I see from the FastCGI > specification, it does support multiplexing through a single > connection. But apparently none of the current web servers, like > Nginx, Apache, or Lighttpd supports this feature. > > I found a thread from nginx dev mailing list back to 2009, stating > that multiplexing won't make much difference in performance: > http://forum.nginx.org/read.php?29,30275,30312 > > But I also find an interesting article on how great this feature is, > back to 2002: > http://www.nongnu.org/fastcgi/#multiplexing This article seems to confuse FastCGI multiplexing with event-based programming. Handling multiple requests in a single process is great - and nginx does so. But you don't need FastCGI multiplexing to do it. > I don't have the ability to perform a test on this, but another > protocol, SPDY, that recently becomes very popular, and its Nginx > patch is already usable, also features multiplexing. So I'm curious > about why spdy's multiplexing is great while fastcgi's is not. > > One reason I can think of is that tcp connection on the internet is > expensive, affecting by RTT, CWND, and other tube warming-up issue. > But tcp conneciton within IDC (or unix-domain socket on localhost) is > much cheaper. Besides, the application can also go the event-based > way, to accept as much connections as it can from the listening socket > and perform asynchronously. > > Does my point make sense? or some other more substantial reasons? You are correct, since FastCGI is used mostly for local communication, multiplexing on application level isn't expected to be beneficial. Another reason is that multiplexing isn't supported (and probably will never be) by the major FastCGI application - PHP. There were several discussions on FastCGI multiplexing here, and general consensus seems to be that FastCGI multiplexing might be useful to reduce costs of multiple long-polling connections to an application, as it will reduce number of sockets OS will have to maintain. It's yet to be demonstrated though. -- Maxim Dounin http://nginx.org/en/donation.html From emailgrant at gmail.com Mon Mar 11 19:37:37 2013 From: emailgrant at gmail.com (Grant) Date: Mon, 11 Mar 2013 12:37:37 -0700 Subject: SSL default changes? In-Reply-To: <20130311105324.GQ15378@mdounin.ru> References: <20130311105324.GQ15378@mdounin.ru> Message-ID: >> It looks like these changes from default are required for SSL session >> resumption and to mitigate the BEAST SSL vulnerability: >> >> ssl_session_cache shared:SSL:10m; >> ssl_ciphers RC4:HIGH:!aNULL:!MD5; >> ssl_prefer_server_ciphers on; >> >> Should the defaults be changed to these? 
> > The BEAST attack could be mitigated by various means, including > switching to TLS 1.1/1.2 (you probably do not want to due to > compatibility reasons) and/or fixing it on a client side (which is > considered to be right solution and already implemented by all > modern browsers). > > Use of the RC4 cipher is more a workaround than a permanent > solution, and hence there are no plans to make it the default. OK, why not enable SSL session resumption by default? ssl_session_cache shared:SSL:10m; - Grant From emailgrant at gmail.com Mon Mar 11 19:45:10 2013 From: emailgrant at gmail.com (Grant) Date: Mon, 11 Mar 2013 12:45:10 -0700 Subject: "nginx does not suck at ssl" In-Reply-To: <513d98bd.c462b40a.36eb.ffff9598@mx.google.com> References: <513d98bd.c462b40a.36eb.ffff9598@mx.google.com> Message-ID: >> After reading "nginx does not suck at ssl": >> >> http://matt.io/entry/ur >> >> I'm using: >> >> ssl_ciphers >> ALL:!aNULL:!ADH:!eNULL:!MEDIUM:!LOW:!EXP:!kEDH:RC4+RSA:+HIGH; > > Some of us use the following to mitigate BEAST attacks: > ssl_ciphers ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:RC4:HIGH:!aNULL:!MD5:!EDH; Thanks Mark, this is supposed to mitigate BEAST as well and it's only slightly different than the default: ssl_ciphers RC4:HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; Here is mex's link again: https://www.ssllabs.com/ssltest/ I use the following for better performance: ssl_ciphers RC4:HIGH:!aNULL:!MD5:!kEDH; Reference: http://www.hybridforge.com/blog/nginx-ssl-ciphers-and-pci-compliance - Grant From nginx-forum at nginx.us Mon Mar 11 20:54:55 2013 From: nginx-forum at nginx.us (kalpesh.patel@glgroup.com) Date: Mon, 11 Mar 2013 16:54:55 -0400 Subject: Subtle differences of restart Message-ID: <9c4136affbb6cb2afa8afd91eb72b210.NginxMailingListEnglish@forum.nginx.org> Hello all: I had a few subtle question on NGINX operation and in particular are of reseading configuration : -- Assuming NGINX processes are running and the configuration is syntacaly valid, what it the difference when '.../nginx -s reload' is executed versus 'kill -HUP ' is executed? Is ther any difference in the end result and if so what are they? -- Assuming NGINX processes are NOT running and the configuration is syntacaly valid, what will '.../nginx -s reload' will do? -- Assuming NGINX processes are running and the configuration is syntacaly valid, what should be used to reread the configuraton from a cron job? Thanks so answers... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237241,237241#msg-237241 From nginx-forum at nginx.us Mon Mar 11 21:39:37 2013 From: nginx-forum at nginx.us (kalpesh.patel@glgroup.com) Date: Mon, 11 Mar 2013 17:39:37 -0400 Subject: Want to access UNIX environment variable In-Reply-To: <428dde917203bebddf25f1b01f33847d.NginxMailingListEnglish@forum.nginx.org> References: <428dde917203bebddf25f1b01f33847d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <12c686210a68fd55255956a03b9a138c.NginxMailingListEnglish@forum.nginx.org> Late to contribute as well but wanted to mention that we reference a single include in the main config that gets linked to the actual file at the deployment time only. 
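To spell that out: the main configuration always includes one fixed path, and deployment just repoints a symlink at the environment-specific file. A sketch of the idea, with made-up paths and contents:

    # nginx.conf, inside the http {} block -- this path never changes
    include /etc/nginx/deployment.conf;

    # /etc/nginx/deployment.conf is a symlink created at deploy time, e.g.
    # deployment.conf -> deployment.production.conf, which holds whatever
    # differs per environment, for instance:
    upstream app_backend {
        server 10.0.0.5:8080;
    }

nginx itself never reads an environment variable; it only ever reads the include.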
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236654,237242#msg-237242 From nginx-forum at nginx.us Mon Mar 11 21:41:04 2013 From: nginx-forum at nginx.us (kalpesh.patel@glgroup.com) Date: Mon, 11 Mar 2013 17:41:04 -0400 Subject: Want to access UNIX environment variable In-Reply-To: <12c686210a68fd55255956a03b9a138c.NginxMailingListEnglish@forum.nginx.org> References: <428dde917203bebddf25f1b01f33847d.NginxMailingListEnglish@forum.nginx.org> <12c686210a68fd55255956a03b9a138c.NginxMailingListEnglish@forum.nginx.org> Message-ID: Left out the fact make file is used to create the link. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236654,237243#msg-237243 From nginx-forum at nginx.us Mon Mar 11 21:44:01 2013 From: nginx-forum at nginx.us (ct2k7) Date: Mon, 11 Mar 2013 17:44:01 -0400 Subject: SPDY giving HTTP 500 Message-ID: I've compiled nginx with the SPDY patch, at the current latest, so nginx 1.3.14. As far as I can tell, the make was fine, no errors. I'm compiling against openSSL of the system: OpenSSL 1.0.1c 10 May 2012, on my CentOS 6.3 server (OVH Kernel). Server has ok amount of RAM, at least 20GB is free. ./configure was with minimal options. However, when a client attempts to connect over SPDY, they are faced with a HTTP 500 error. The errors logs aren't the most useful,. indicating memory issues, and I can't even find the error when the logging is set to debug. Here's the error log snippet pertinent: 2013/03/11 22:32:08 [emerg] 10589#0: *1 malloc(18446744073709551615) failed (12: Cannot allocate memory), client: ****, server: ****, request: "GET / HTTP/1.1", host: **** Any ideas why nginx is trying to allocate so much memory? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237244,237244#msg-237244 From nginx-forum at nginx.us Mon Mar 11 21:54:01 2013 From: nginx-forum at nginx.us (kalpesh.patel@glgroup.com) Date: Mon, 11 Mar 2013 17:54:01 -0400 Subject: How to check the existence of a http-only secure cookie In-Reply-To: <201302211740.56058.vbart@nginx.com> References: <201302211740.56058.vbart@nginx.com> Message-ID: <387d36393d7c2eb9a92d3a98d438ee5c.NginxMailingListEnglish@forum.nginx.org> http-only and secure are directives intended for browser. If the browser doesn't detect HTTP proto for http-only setting and SSL for secure setting then browser will drop the cookie and will never make it to the web server. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236394,237245#msg-237245 From appa at perusio.net Mon Mar 11 22:26:48 2013 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Mon, 11 Mar 2013 23:26:48 +0100 Subject: SPDY patch not applying cleanly. In-Reply-To: <201303110617.36534.vbart@nginx.com> References: <874ngibwcj.wl%appa@perusio.net> <201303110458.20152.vbart@nginx.com> <871ubmbuob.wl%appa@perusio.net> <201303110617.36534.vbart@nginx.com> Message-ID: <87zjy9a7yv.wl%appa@perusio.net> On 11 Mar 2013 03h17 CET, vbart at nginx.com wrote: > Please, verify the patch file: > > % md5sum patch.spdy.txt > a5cb5cb3fc8a8e04efb62b2f8f48a5ac patch.spdy.txt > > % shasum patch.spdy.txt > 1a9ffddffbde0812b67eaca91a22ff0aa17293cc patch.spdy.txt > > Also, note that "patch --dry-run" (or "patch -C" in BSD world) does > not work with multiple patches in one file. That was the issue. It works perfectly without the dry run. Thanks, --- appa From vbart at nginx.com Tue Mar 12 06:51:19 2013 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Tue, 12 Mar 2013 10:51:19 +0400 Subject: SPDY giving HTTP 500 In-Reply-To: References: Message-ID: <201303121051.19941.vbart@nginx.com> On Tuesday 12 March 2013 01:44:01 ct2k7 wrote: > I've compiled nginx with the SPDY patch, at the current latest, so nginx > 1.3.14. As far as I can tell, the make was fine, no errors. I'm compiling > against openSSL of the system: OpenSSL 1.0.1c 10 May 2012, on my CentOS 6.3 > server (OVH Kernel). Server has ok amount of RAM, at least 20GB is free. > > ./configure was with minimal options. > > However, when a client attempts to connect over SPDY, they are faced with a > HTTP 500 error. The errors logs aren't the most useful,. indicating memory > issues, and I can't even find the error when the logging is set to debug. > > Here's the error log snippet pertinent: > > 2013/03/11 22:32:08 [emerg] 10589#0: *1 malloc(18446744073709551615) failed > (12: Cannot allocate memory), client: ****, server: ****, request: "GET / > HTTP/1.1", host: **** > > Any ideas why nginx is trying to allocate so much memory? > Could you show the debug log? wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html From vbart at nginx.com Tue Mar 12 07:01:29 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 12 Mar 2013 11:01:29 +0400 Subject: How to check the existence of a http-only secure cookie In-Reply-To: <387d36393d7c2eb9a92d3a98d438ee5c.NginxMailingListEnglish@forum.nginx.org> References: <201302211740.56058.vbart@nginx.com> <387d36393d7c2eb9a92d3a98d438ee5c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201303121101.29747.vbart@nginx.com> On Tuesday 12 March 2013 01:54:01 kalpesh.patel at glgroup.com wrote: > http-only and secure are directives intended for browser. If the browser > doesn't detect HTTP proto for http-only setting and SSL for secure setting > then browser will drop the cookie and will never make it to the web server. > Thank you, I know what "HttpOnly" and "Secure" are. But, please, note that these attributes are sent via Set-Cookie header from a web-server *response*, while the question was: > to check if a given a cookie is present and it is http-only and secure, > otherwise, reject the request with a 404". There's no way since they do not present in requests. wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Mar 12 09:48:28 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Mar 2013 13:48:28 +0400 Subject: SSL default changes? In-Reply-To: References: <20130311105324.GQ15378@mdounin.ru> Message-ID: <20130312094828.GH15378@mdounin.ru> Hello! On Mon, Mar 11, 2013 at 12:37:37PM -0700, Grant wrote: > >> It looks like these changes from default are required for SSL session > >> resumption and to mitigate the BEAST SSL vulnerability: > >> > >> ssl_session_cache shared:SSL:10m; > >> ssl_ciphers RC4:HIGH:!aNULL:!MD5; > >> ssl_prefer_server_ciphers on; > >> > >> Should the defaults be changed to these? > > > > The BEAST attack could be mitigated by various means, including > > switching to TLS 1.1/1.2 (you probably do not want to due to > > compatibility reasons) and/or fixing it on a client side (which is > > considered to be right solution and already implemented by all > > modern browsers). > > > > Use of the RC4 cipher is more a workaround than a permanent > > solution, and hence there are no plans to make it the default. > > OK, why not enable SSL session resumption by default? > > ssl_session_cache shared:SSL:10m; E.g. because it won't work on some platforms. 
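So where the shared cache is supported, session resumption stays an explicit opt-in. For reference, the usual pair of directives looks like this (the sizes are illustrative; per the directive's documentation, one megabyte of cache holds roughly 4000 sessions):

    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;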
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Mar 12 10:13:44 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Mar 2013 14:13:44 +0400 Subject: Subtle differences of restart In-Reply-To: <9c4136affbb6cb2afa8afd91eb72b210.NginxMailingListEnglish@forum.nginx.org> References: <9c4136affbb6cb2afa8afd91eb72b210.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130312101344.GL15378@mdounin.ru> Hello! On Mon, Mar 11, 2013 at 04:54:55PM -0400, kalpesh.patel at glgroup.com wrote: > Hello all: > > I had a few subtle question on NGINX operation and in particular are of > reseading configuration : > > -- Assuming NGINX processes are running and the configuration is syntacaly > valid, what it the difference when '.../nginx -s reload' is executed versus > 'kill -HUP ' is executed? Is ther any difference in > the end result and if so what are they? The "nginx -s reload" requires (otherwise unneeded) parsing of the configuration file. Otherwise it's just a tricky way to do "kill -HUP ...". It was introduced mostly for win32 where there is no kill. > -- Assuming NGINX processes are NOT running and the configuration is > syntacaly valid, what will '.../nginx -s reload' will do? It will fail as it won't be able to open pid file. > -- Assuming NGINX processes are running and the configuration is syntacaly > valid, what should be used to reread the configuraton from a cron job? I would recommend using kill. (Well, actually I wouldn't recommend reloading configuration by cron, at least without some precautions to prevent situation when there are too many worker processes shutting down. But I assume you understand what you are doing.) -- Maxim Dounin http://nginx.org/en/donation.html From ceooph at gmail.com Tue Mar 12 11:36:51 2013 From: ceooph at gmail.com (ceooph) Date: Tue, 12 Mar 2013 12:36:51 +0100 Subject: Multiples proxy for 1 servername Message-ID: <513F1353.3090004@gmail.com> Hi all, We do reverse proxy to access from internet to some documentation filtering by ip address. These documentations are hosted on commercials portals. For 1 base we are often redirect between 2 or 3 (sometimes 6) servers. (www.exemple.com -> secure1.exemple.com -> secure2.exemple.com -> auth.exemple.com -> go.exemple.com -> www.exemple.com) And we have access to more than 50 bases ! Do you know if with nginx it's possible to do proxy_pass (and proxy_redirect) to these server one 1 virtual name ? I want to write something like this base1.mydomain.com proxy_ www.exemple.com proxy_ secure1.exemple.com .... base2.mydomain.com proxy_ www.exemple2.com .... I try it with different location for each server but it's doesn't work. Thanks in advance for your help. From vbart at nginx.com Tue Mar 12 14:07:58 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 12 Mar 2013 18:07:58 +0400 Subject: SPDY giving HTTP 500 In-Reply-To: References: Message-ID: <201303121807.59003.vbart@nginx.com> On Tuesday 12 March 2013 01:44:01 ct2k7 wrote: > I've compiled nginx with the SPDY patch, at the current latest, so nginx > 1.3.14. As far as I can tell, the make was fine, no errors. I'm compiling > against openSSL of the system: OpenSSL 1.0.1c 10 May 2012, on my CentOS 6.3 > server (OVH Kernel). Server has ok amount of RAM, at least 20GB is free. > > ./configure was with minimal options. > > However, when a client attempts to connect over SPDY, they are faced with a > HTTP 500 error. The errors logs aren't the most useful,. 
indicating memory > issues, and I can't even find the error when the logging is set to debug. > > Here's the error log snippet pertinent: > > 2013/03/11 22:32:08 [emerg] 10589#0: *1 malloc(18446744073709551615) failed > (12: Cannot allocate memory), client: ****, server: ****, request: "GET / > HTTP/1.1", host: **** > > Any ideas why nginx is trying to allocate so much memory? This bug was fixed in: http://nginx.org/patches/spdy/patch.spdy-68_1.3.14.txt Thank you for the report. wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html From jgehrcke at googlemail.com Tue Mar 12 16:39:27 2013 From: jgehrcke at googlemail.com (Jan-Philip Gehrcke) Date: Tue, 12 Mar 2013 17:39:27 +0100 Subject: nginx crash on reload -- how to detect? Message-ID: <513F5A3F.9070305@googlemail.com> Hello, I'm currently running a self-built nginx 1.3.14 on a debian system and use the attached (and also inlined) init.d script as /etc/init.d/nginx for managing the service. It's taken unmodified from debian wheezy. I somehow managed to get the nginx master process crashing (I have a few third-party modules compiled in and that's not the issue to be discussed right now) upon $ service nginx reload The problem is that in this case it just states "Reloading nginx configuration: nginx." without me realizing that the master process crashed and that the new config did not become activated. This is an issue by itself -- I spent some time changing the config over and over and doing reload attempts and was wondering why the heck the changes did not take effect. Another $ service nginx stop $ service nginx start then revealed that something wrong is going on, because 'stop' did not take action, 'start' spawned a new master process, but the listening ports were still blocked by old workers. I am wondering if there is a neat way to improve the service script in order to make it realize when the nginx master unexpectedly dies in the process of performing one of the service actions. Cheers, Jan-Philip #!/bin/sh ### BEGIN INIT INFO # Provides: nginx # Required-Start: $local_fs $remote_fs $network $syslog # Required-Stop: $local_fs $remote_fs $network $syslog # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: starts the nginx web server # Description: starts nginx using start-stop-daemon ### END INIT INFO PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin DAEMON=/usr/sbin/nginx NAME=nginx DESC=nginx # Include nginx defaults if available if [ -f /etc/default/nginx ]; then . /etc/default/nginx fi test -x $DAEMON || exit 0 set -e . /lib/lsb/init-functions test_nginx_config() { if $DAEMON -t $DAEMON_OPTS >/dev/null 2>&1; then return 0 else $DAEMON -t $DAEMON_OPTS return $? fi } case "$1" in start) echo -n "Starting $DESC: " test_nginx_config # Check if the ULIMIT is set in /etc/default/nginx if [ -n "$ULIMIT" ]; then # Set the ulimits ulimit $ULIMIT fi start-stop-daemon --start --quiet --pidfile /var/run/$NAME.pid \ --exec $DAEMON -- $DAEMON_OPTS || true echo "$NAME." ;; stop) echo -n "Stopping $DESC: " start-stop-daemon --stop --quiet --pidfile /var/run/$NAME.pid \ --exec $DAEMON || true echo "$NAME." ;; restart|force-reload) echo -n "Restarting $DESC: " start-stop-daemon --stop --quiet --pidfile \ /var/run/$NAME.pid --exec $DAEMON || true sleep 1 test_nginx_config # Check if the ULIMIT is set in /etc/default/nginx if [ -n "$ULIMIT" ]; then # Set the ulimits ulimit $ULIMIT fi start-stop-daemon --start --quiet --pidfile \ /var/run/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS || true echo "$NAME." 
;; reload) echo -n "Reloading $DESC configuration: " test_nginx_config start-stop-daemon --stop --signal HUP --quiet --pidfile /var/run/$NAME.pid \ --exec $DAEMON || true echo "$NAME." ;; configtest|testconfig) echo -n "Testing $DESC configuration: " if test_nginx_config; then echo "$NAME." else exit $? fi ;; status) status_of_proc -p /var/run/$NAME.pid "$DAEMON" nginx && exit 0 || exit $? ;; *) echo "Usage: $NAME {start|stop|restart|reload|force-reload|status|configtest}" >&2 exit 1 ;; esac exit 0 -------------- next part -------------- #!/bin/sh ### BEGIN INIT INFO # Provides: nginx # Required-Start: $local_fs $remote_fs $network $syslog # Required-Stop: $local_fs $remote_fs $network $syslog # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: starts the nginx web server # Description: starts nginx using start-stop-daemon ### END INIT INFO PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin DAEMON=/usr/sbin/nginx NAME=nginx DESC=nginx # Include nginx defaults if available if [ -f /etc/default/nginx ]; then . /etc/default/nginx fi test -x $DAEMON || exit 0 set -e . /lib/lsb/init-functions test_nginx_config() { if $DAEMON -t $DAEMON_OPTS >/dev/null 2>&1; then return 0 else $DAEMON -t $DAEMON_OPTS return $? fi } case "$1" in start) echo -n "Starting $DESC: " test_nginx_config # Check if the ULIMIT is set in /etc/default/nginx if [ -n "$ULIMIT" ]; then # Set the ulimits ulimit $ULIMIT fi start-stop-daemon --start --quiet --pidfile /var/run/$NAME.pid \ --exec $DAEMON -- $DAEMON_OPTS || true echo "$NAME." ;; stop) echo -n "Stopping $DESC: " start-stop-daemon --stop --quiet --pidfile /var/run/$NAME.pid \ --exec $DAEMON || true echo "$NAME." ;; restart|force-reload) echo -n "Restarting $DESC: " start-stop-daemon --stop --quiet --pidfile \ /var/run/$NAME.pid --exec $DAEMON || true sleep 1 test_nginx_config # Check if the ULIMIT is set in /etc/default/nginx if [ -n "$ULIMIT" ]; then # Set the ulimits ulimit $ULIMIT fi start-stop-daemon --start --quiet --pidfile \ /var/run/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS || true echo "$NAME." ;; reload) echo -n "Reloading $DESC configuration: " test_nginx_config start-stop-daemon --stop --signal HUP --quiet --pidfile /var/run/$NAME.pid \ --exec $DAEMON || true echo "$NAME." ;; configtest|testconfig) echo -n "Testing $DESC configuration: " if test_nginx_config; then echo "$NAME." else exit $? fi ;; status) status_of_proc -p /var/run/$NAME.pid "$DAEMON" nginx && exit 0 || exit $? ;; *) echo "Usage: $NAME {start|stop|restart|reload|force-reload|status|configtest}" >&2 exit 1 ;; esac exit 0 From nginx+phil at spodhuis.org Tue Mar 12 18:24:45 2013 From: nginx+phil at spodhuis.org (Phil Pennock) Date: Tue, 12 Mar 2013 14:24:45 -0400 Subject: SPDY68 / POST to proxy / nginx worker segfault Message-ID: <20130312182445.GA9182@redoubt.spodhuis.org> nginx 1.3.14, SPDY patch version 68. Sitting in front of a PGP keyserver, with configuration as below, if I have "spdy" on the "listen" lines, then Chrome gets an error for no data returned and I get errors in errorlog: 2013/03/12 18:08:43 [alert] 8546#0: worker process 8815 exited on signal 11 2013/03/12 18:09:35 [alert] 8546#0: worker process 9085 exited on signal 11 2013/03/12 18:09:36 [alert] 8546#0: worker process 9089 exited on signal 11 Below, nginx version output, nginx.conf server block, and curl output from a working query when SPDY is enabled but not used (because it's curl), over https. 
(The server in this case has a cert from my private CA https://www.security.spodhuis.org/ has details, including PGP signature, if anyone wants to verify) ----------------------------8< cut here >8------------------------------ # nginx -V nginx version: nginx/1.3.14 TLS SNI support enabled configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I /usr/local/include' --with-ld-opt='-L /usr/local/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx-error.log --user=www --group=www --with-file-aio --with-ipv6 --with-google_perftools_module --http-client-body-temp-path=/var/tmp/nginx/client_body_temp --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp --http-proxy-temp-path=/var/tmp/nginx/proxy_temp --http-scgi-temp-path=/var/tmp/nginx/scgi_temp --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp --http-log-path=/var/log/nginx-access.log --add-module=/usr/ports/www/nginx-devel/work/giom-nginx_accept_language_module-02262ce --add-module=/usr/ports/www/nginx-devel/work/samizdatco-nginx-http-auth-digest-bd1c86a --with-http_dav_module --with-http_gunzip_module --with-http_stub_status_module --add-module=/usr/ports/www/nginx-devel/work/masterzen-nginx-upload-progress-module-a788dea --add-module=/usr/ports/www/nginx-devel/work/nginx_upstream_fair-20090923 --add-module=/usr/ports/www/nginx-devel/work/nginx_upstream_hash-0.3.1 --add-module=/usr/ports/www/nginx-devel/work/nginx-sticky-module-1.0 --add-module=/usr/ports/www/nginx-devel/work/simpl-ngx_devel_kit-48bc5dd --add-module=/usr/ports/www/nginx-devel/work/agentzh-encrypted-session-nginx-module-c752861 --add-module=/usr/ports/www/nginx-devel/work/arut-nginx-let-module-a5e1dc5 --with-pcre --add-module=/usr/ports/www/nginx-devel/work/agentzh-set-misc-nginx-module-658c235 --add-module=/usr/ports/www/nginx-devel/work/yaoweibin-nginx_tcp_proxy_module-b83e5a6 --with-http_spdy_module --with-http_ssl_module ----------------------------8< cut here >8------------------------------ ----------------------------8< cut here >8------------------------------ server { # need default_server for SNI to work with session resumption, unless # you accept the same SSL cache. Hrm. We do, for now. listen 94.142.241.93:443 ssl; listen [2a02:898:31:0:48:4558:73:6b73]:443 ssl; server_name sks.spodhuis.org; ssl on; ssl_certificate /www/conf/tls/ssl-sks-web.crt; ssl_certificate_key /www/conf/tls/ssl-sks-web.key; ssl_verify_client off; access_log /var/log/nginx/sks-tls.log combine-tls; location / { root /www/sites/sks.spodhuis.org/content; index index.html; } location ~ /\. { deny all; } location /pks { proxy_pass http://127.0.0.1:11371; proxy_pass_header Server; add_header Via "1.1 sks.spodhuis.org:443 (nginx)"; proxy_ignore_client_abort on; } location /sks-peers { proxy_pass http://127.0.0.1:8001; proxy_set_header X-Real-IP $remote_addr; } } ----------------------------8< cut here >8------------------------------ % gpg -a --export $gpg_key_work | curl --data-urlencode keytext at - -vs https://sks.spodhuis.org/pks/add 2>&1 | pbcopy ----------------------------8< cut here >8------------------------------ * About to connect() to sks.spodhuis.org port 443 (#0) * Trying 2a02:898:31::48:4558:73:6b73... * Failed to connect to 2a02:898:31::48:4558:73:6b73: No route to host * Trying 94.142.241.93... 
* Connected to sks.spodhuis.org (94.142.241.93) port 443 (#0) * successfully set certificate verify locations: * CAfile: /opt/local/share/curl/curl-ca-bundle.crt CApath: none * SSLv3, TLS handshake, Client hello (1): } [data not shown] * SSLv3, TLS handshake, Server hello (2): { [data not shown] * SSLv3, TLS handshake, CERT (11): { [data not shown] * SSLv3, TLS handshake, Server key exchange (12): { [data not shown] * SSLv3, TLS handshake, Server finished (14): { [data not shown] * SSLv3, TLS handshake, Client key exchange (16): } [data not shown] * SSLv3, TLS change cipher, Client hello (1): } [data not shown] * SSLv3, TLS handshake, Finished (20): } [data not shown] * SSLv3, TLS change cipher, Client hello (1): { [data not shown] * SSLv3, TLS handshake, Finished (20): { [data not shown] * SSL connection using ECDHE-RSA-AES128-SHA256 * Server certificate: * subject: C=NL; ST=Noord Holland; O=GlobNIX Systems; CN=sks.spodhuis.org; emailAddress=keyserver at spodhuis.org * start date: 2011-08-10 04:59:54 GMT * expire date: 2013-05-01 04:59:54 GMT * subjectAltName: sks.spodhuis.org matched * issuer: C=US; O=GlobNIX Systems; OU=Certification Authority; CN=GlobNIX Certificate Authority 3; emailAddress=certificates at globnix.org * SSL certificate verify ok. > POST /pks/add HTTP/1.1 > User-Agent: curl/7.29.0 > Host: sks.spodhuis.org > Accept: */* > Content-Length: 18437 > Content-Type: application/x-www-form-urlencoded > Expect: 100-continue > < HTTP/1.1 100 Continue } [data not shown] < HTTP/1.1 200 OK < Date: Tue, 12 Mar 2013 18:22:58 GMT < Content-Type: text/html; charset=UTF-8 < Content-Length: 129 < Connection: keep-alive < Server: sks_www/1.1.4 < Cache-Control: no-cache < Pragma: no-cache < Expires: 0 < X-HKP-Results-Count: 1 < Via: 1.1 sks.spodhuis.org:443 (nginx) < { [data not shown] * Connection #0 to host sks.spodhuis.org left intact Key block added to key server database. New public keys added:
1 key(s) added successfully.
----------------------------8< cut here >8------------------------------ From emailgrant at gmail.com Tue Mar 12 18:58:51 2013 From: emailgrant at gmail.com (Grant) Date: Tue, 12 Mar 2013 11:58:51 -0700 Subject: SSL default changes? In-Reply-To: <20130312094828.GH15378@mdounin.ru> References: <20130311105324.GQ15378@mdounin.ru> <20130312094828.GH15378@mdounin.ru> Message-ID: >> OK, why not enable SSL session resumption by default? >> >> ssl_session_cache shared:SSL:10m; > > E.g. because it won't work on some platforms. I'm sorry to bother about this, but do you mean it won't wok on some servers or in some browsers? If you mean browsers, will it prevent SSL from working at all in those browsers or would a browser error appear? - Grant From mdounin at mdounin.ru Tue Mar 12 23:00:50 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Mar 2013 03:00:50 +0400 Subject: SSL default changes? In-Reply-To: References: <20130311105324.GQ15378@mdounin.ru> <20130312094828.GH15378@mdounin.ru> Message-ID: <20130312230050.GU15378@mdounin.ru> Hello! On Tue, Mar 12, 2013 at 11:58:51AM -0700, Grant wrote: > >> OK, why not enable SSL session resumption by default? > >> > >> ssl_session_cache shared:SSL:10m; > > > > E.g. because it won't work on some platforms. > > I'm sorry to bother about this, but do you mean it won't wok on some > servers or in some browsers? If you mean browsers, will it prevent > SSL from working at all in those browsers or would a browser error > appear? It won't work on some server platforms (e.g. on modern win32, see http://nginx.org/en/docs/windows.html). -- Maxim Dounin http://nginx.org/en/donation.html From vbart at nginx.com Tue Mar 12 23:14:09 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 13 Mar 2013 03:14:09 +0400 Subject: SPDY68 / POST to proxy / nginx worker segfault In-Reply-To: <20130312182445.GA9182@redoubt.spodhuis.org> References: <20130312182445.GA9182@redoubt.spodhuis.org> Message-ID: <201303130314.09525.vbart@nginx.com> On Tuesday 12 March 2013 22:24:45 Phil Pennock wrote: > nginx 1.3.14, SPDY patch version 68. > > Sitting in front of a PGP keyserver, with configuration as below, if I > have "spdy" on the "listen" lines, then Chrome gets an error for no data > returned and I get errors in errorlog: > > 2013/03/12 18:08:43 [alert] 8546#0: worker process 8815 exited on signal 11 > 2013/03/12 18:09:35 [alert] 8546#0: worker process 9085 exited on signal 11 > 2013/03/12 18:09:36 [alert] 8546#0: worker process 9089 exited on signal 11 > > Below, nginx version output, nginx.conf server block, and curl output > from a working query when SPDY is enabled but not used (because it's > curl), over https. > > (The server in this case has a cert from my private CA > https://www.security.spodhuis.org/ has details, including PGP > signature, if anyone wants to verify) > [...] Thank you for the report. This issue should be fixed now in: http://nginx.org/patches/spdy/patch.spdy-69_1.3.14.txt wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html From nginx+phil at spodhuis.org Wed Mar 13 00:11:00 2013 From: nginx+phil at spodhuis.org (Phil Pennock) Date: Tue, 12 Mar 2013 20:11:00 -0400 Subject: SPDY68 / POST to proxy / nginx worker segfault In-Reply-To: <201303130314.09525.vbart@nginx.com> References: <20130312182445.GA9182@redoubt.spodhuis.org> <201303130314.09525.vbart@nginx.com> Message-ID: <20130313001100.GA22944@redoubt.spodhuis.org> On 2013-03-13 at 03:14 +0400, Valentin V. Bartenev wrote: > Thank you for the report. 
This issue should be fixed now in: > http://nginx.org/patches/spdy/patch.spdy-69_1.3.14.txt Fix confirmed, works for me. Thanks for the prompt fix! -Phil From nginx-forum at nginx.us Wed Mar 13 13:49:15 2013 From: nginx-forum at nginx.us (zuborg) Date: Wed, 13 Mar 2013 09:49:15 -0400 Subject: Incomplete page by nginx -> fcgi -> php-fpm with keepalive In-Reply-To: <20130131115431.GN40753@mdounin.ru> References: <20130131115431.GN40753@mdounin.ru> Message-ID: Patch helps me too. It should be added into nginx source code base asap ) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235764,237297#msg-237297 From mdounin at mdounin.ru Wed Mar 13 13:59:18 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Mar 2013 17:59:18 +0400 Subject: Incomplete page by nginx -> fcgi -> php-fpm with keepalive In-Reply-To: References: <20130131115431.GN40753@mdounin.ru> Message-ID: <20130313135918.GZ15378@mdounin.ru> Hello! On Wed, Mar 13, 2013 at 09:49:15AM -0400, zuborg wrote: > Patch helps me too. It should be added into nginx source code base asap ) It's already committed in 1.3.12, and even merged to the stable branch in 1.2.7. If the patch works for you - it means you are using outdated nginx. -- Maxim Dounin http://nginx.org/en/donation.html From aweber at comcast.net Wed Mar 13 14:15:48 2013 From: aweber at comcast.net (AJ Weber) Date: Wed, 13 Mar 2013 10:15:48 -0400 Subject: Avoid cache on zero-bytes-returned? In-Reply-To: <20130305145544.GB15378@mdounin.ru> References: <20130305145544.GB15378@mdounin.ru> Message-ID: <51408A14.4090205@comcast.net> I have a case where a user requires authorization to retrieve content. Ngnix correctly returns the tomcat's 401, and then the user attempts authentication. However, if the user fails to authenticate, tomcat returns a 200 but zero bytes returned. This comes through nginx as a cache-miss, status=200, 0 bytes returned. Unfortunately, if the user tries again, even if he/she is successful, the 200 result is cached for my 10min cache setting for 200-results...thus, even a correct login will not return the content for at least 10min due to the cached, zero-byte page. Is there a way to leave my default caching enabled, but tell nginx to NOT cache zero-byte results? If the correct content is available in the cache, I want it returned without going to the back end, but in my case (at least), a zero-byte result for a status=200 is not valid. Thanks for any tips or tricks! -AJ From contact at jpluscplusm.com Wed Mar 13 14:40:00 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 13 Mar 2013 14:40:00 +0000 Subject: Avoid cache on zero-bytes-returned? In-Reply-To: <51408A14.4090205@comcast.net> References: <20130305145544.GB15378@mdounin.ru> <51408A14.4090205@comcast.net> Message-ID: On 13 March 2013 14:15, AJ Weber wrote: > Is there a way to leave my default caching enabled, but tell nginx to NOT > cache zero-byte results? If the correct content is available in the cache, > I want it returned without going to the back end, but in my case (at least), > a zero-byte result for a status=200 is not valid. It looks to me like http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_no_cache is what you need to use. If the backend is returning a Content-Length header, you could refer to that. If not, you may have to get creative in assembling a map{} variable. FWIW, a 200 with an empty body is, (I believe) invalid. It should be a 204. 
If your backend is misbehaving, you might like to fix it before mitigating the problem inside your reverse-proxy. Remember: broken gets fixed, but shitty lives forever ... Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From aweber at comcast.net Wed Mar 13 15:02:32 2013 From: aweber at comcast.net (AJ Weber) Date: Wed, 13 Mar 2013 11:02:32 -0400 Subject: Avoid cache on zero-bytes-returned? In-Reply-To: References: <20130305145544.GB15378@mdounin.ru> <51408A14.4090205@comcast.net> Message-ID: <51409508.6020304@comcast.net> On 3/13/2013 10:40 AM, Jonathan Matthews wrote: > > It looks to me like > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_no_cache > is what you need to use. > If the backend is returning a Content-Length header, you could refer > to that. If not, you may have to get creative in assembling a map{} > variable. I looked at that, but it appears that it only tests for the existence of the variable/header. Content-Length should be there all the time. > > FWIW, a 200 with an empty body is, (I believe) invalid. It should be a > 204. If your backend is misbehaving, you might like to fix it before > mitigating the problem inside your reverse-proxy. Remember: broken > gets fixed, but shitty lives forever ... > I tend to agree with you here. Sometimes you can't open-up the application code very easily...but it's a very fair and valid point. I'll look into it. Thanks for the reply, AJ From contact at jpluscplusm.com Wed Mar 13 15:12:14 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 13 Mar 2013 15:12:14 +0000 Subject: Avoid cache on zero-bytes-returned? In-Reply-To: <51409508.6020304@comcast.net> References: <20130305145544.GB15378@mdounin.ru> <51408A14.4090205@comcast.net> <51409508.6020304@comcast.net> Message-ID: On 13 March 2013 15:02, AJ Weber wrote: > > > On 3/13/2013 10:40 AM, Jonathan Matthews wrote: >> >> >> It looks to me like >> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_no_cache >> is what you need to use. >> If the backend is returning a Content-Length header, you could refer >> to that. If not, you may have to get creative in assembling a map{} >> variable. > > I looked at that, but it appears that it only tests for the existence of the > variable/header. Content-Length should be there all the time. The docs I linked to have this to say: "If at least one value of the string parameters is not empty and is not equal to ?0? then the response will not be saved" So if content-length is always present (and sometimes "0") I would try this: ------------------------------------ map $upstream_http_content_length $map_upstream_cache_buster { default 0; 0 1; } server { [...] proxy_no_cache $map_upstream_cache_buster; } ------------------------------------ Or something like that; you get the idea. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From aweber at comcast.net Wed Mar 13 18:19:50 2013 From: aweber at comcast.net (AJ Weber) Date: Wed, 13 Mar 2013 14:19:50 -0400 Subject: map a null/missing variable? Message-ID: <5140C346.3050203@comcast.net> OK, So I'm still working on my caching "issue", but this is a more general question, so for the sake of indexing and helping others in the future with (hopefully) a response... How do I use a map to map the lack of a variable/header/cookie (NULL?) to a value? I can't use "default", because when there IS a value it will be a random set of characters. 
So that said, if a regex like ".+" would work for "any value" and I use 'default' for the NULL, I would try that. :) Thanks in advance, AJ From contact at jpluscplusm.com Wed Mar 13 18:27:20 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 13 Mar 2013 18:27:20 +0000 Subject: map a null/missing variable? In-Reply-To: <5140C346.3050203@comcast.net> References: <5140C346.3050203@comcast.net> Message-ID: On 13 March 2013 18:19, AJ Weber wrote: > OK, > > So I'm still working on my caching "issue", but this is a more general > question, so for the sake of indexing and helping others in the future with > (hopefully) a response... > > How do I use a map to map the lack of a variable/header/cookie (NULL?) to a > value? I can't use "default", because when there IS a value it will be a > random set of characters. > > So that said, if a regex like ".+" would work for "any value" and I use > 'default' for the NULL, I would try that. :) Have you read the docs? http://nginx.org/r/map Just make sure you have a recent enough version that support regex in maps, and it /should/ be pretty obvious. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From hshah at crestron.com Wed Mar 13 22:59:38 2013 From: hshah at crestron.com (Hazrat Shah) Date: Wed, 13 Mar 2013 22:59:38 +0000 Subject: NGINX proxy websocket Message-ID: <6A9CEBF489241B489ABF102178B94E2635667ED7@6V-MBX1.CRESTRON.CRESTRON.com> I am using the Nginx v1.3.14 server proxy. I am sending the Websocket HTTP connection request "CONNECT HostName:Port HTTP/1.1" packet to the proxy from the client. It responds with an http-alt ack packet. How can I configure the proxy to return the HTTP reply "HTTP/1.1 200"? I also do not see in Wireshark trace the proxy attempting to establish a TCP connection with the Websocket server. -HS This e-mail message and all attachments transmitted with it may contain legally privileged and confidential information intended solely for the use of the addressee. If you are not the intended recipient, you are hereby notified that any reading, dissemination, distribution, copying, or other use of this message or its attachments is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Thu Mar 14 00:35:36 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 14 Mar 2013 04:35:36 +0400 Subject: map a null/missing variable? In-Reply-To: <5140C346.3050203@comcast.net> References: <5140C346.3050203@comcast.net> Message-ID: <201303140435.36801.vbart@nginx.com> On Wednesday 13 March 2013 22:19:50 AJ Weber wrote: > OK, > > So I'm still working on my caching "issue", but this is a more general > question, so for the sake of indexing and helping others in the future > with (hopefully) a response... > > How do I use a map to map the lack of a variable/header/cookie (NULL?) > to a value? You can use '' (empty string) value for that. wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html From d4fseeker at gmail.com Thu Mar 14 00:39:57 2013 From: d4fseeker at gmail.com (Dan R) Date: Thu, 14 Mar 2013 01:39:57 +0100 Subject: Fwd: Lua shared storage inconsistent? In-Reply-To: References: Message-ID: Hello, I'm trying to count with a Lua-script the concurrent requests to a given virtual host on the backend to which Nginx relays als reverse proxy. My current code is somewhat more complex, but I try to reduce it to the problematic one. 
Assuming we have a shared storage with sufficient memory storage (let's say 100MB) I try the following: ** phase_access: ngx.shared.counter:add(ngx.var.host,0) --makes sure the value exists ngx.shared.counter:incr(ngx.var.host,1) ** phase_log ngx.shared.counter:incr(ngx.var.host,-1) I assumed -since the storage is supposedly atomic and shared- that this will work. However when running a benchmark with eg 500 concurrent connections, I will always be somewhat around 70 units in the minus at the end of the benchmark. How come? I tried setting the incrementer in other phases such as rewrite and the decrementer in the body-phase, but that didn't change anything. Also I noticed that taking a copy of the shared storage will not have that copy update during sleep-loops of the given request and I have to fetch a new one (with the penalty of keeping allocating new RAM) Is there anything fundamentally wrong with my understanding of Nginx, is there a bug in the lua implementation or what happened? Thank you for helping me out - I certainly couldn't so far... -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Thu Mar 14 05:04:01 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 14 Mar 2013 09:04:01 +0400 Subject: NGINX proxy websocket In-Reply-To: <6A9CEBF489241B489ABF102178B94E2635667ED7@6V-MBX1.CRESTRON.CRESTRON.com> References: <6A9CEBF489241B489ABF102178B94E2635667ED7@6V-MBX1.CRESTRON.CRESTRON.com> Message-ID: <20130314050401.GP53111@lo0.su> On Wed, Mar 13, 2013 at 10:59:38PM +0000, Hazrat Shah wrote: > I am using the Nginx v1.3.14 server proxy. I am sending the Websocket > HTTP connection request "CONNECT HostName:Port HTTP/1.1" packet to the > proxy from the client. It responds with an http-alt ack packet. How can > I configure the proxy to return the HTTP reply "HTTP/1.1 200"? I also do > not see in Wireshark trace the proxy attempting to establish a TCP > connection with the Websocket server. Here are instuctions on how to configure WebSocket proxying in nginx: http://nginx.org/en/docs/http/websocket.html From ru at nginx.com Thu Mar 14 05:13:17 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 14 Mar 2013 09:13:17 +0400 Subject: map a null/missing variable? In-Reply-To: <5140C346.3050203@comcast.net> References: <5140C346.3050203@comcast.net> Message-ID: <20130314051317.GQ53111@lo0.su> On Wed, Mar 13, 2013 at 02:19:50PM -0400, AJ Weber wrote: > OK, > > So I'm still working on my caching "issue", but this is a more general > question, so for the sake of indexing and helping others in the future > with (hopefully) a response... > > How do I use a map to map the lack of a variable/header/cookie (NULL?) > to a value? I can't use "default", because when there IS a value it > will be a random set of characters. You can't handle the lack of a variable (it'll cause a syntax error in configuration). Mapping of an unset variable is easy: map $http_foo $my_var { "" unset; default set; } > So that said, if a regex like ".+" would work for "any value" and I use > 'default' for the NULL, I would try that. :) That's suboptimal. From yaoweibin at gmail.com Thu Mar 14 05:17:16 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Thu, 14 Mar 2013 13:17:16 +0800 Subject: Is it possible that nginx will not buffer the client body? 
In-Reply-To: <20130308133629.GD8912@reaktio.net> References: <20130118083821.GA8912@reaktio.net> <20130221200805.GT8912@reaktio.net> <20130222092524.GV8912@reaktio.net> <20130222105052.GW8912@reaktio.net> <20130225101304.GZ8912@reaktio.net> <20130305131741.GN8912@reaktio.net> <20130308133629.GD8912@reaktio.net> Message-ID: Try the new patch, It could solve your problem. Thanks for your test effort. 2013/3/8 Pasi K?rkk?inen > On Tue, Mar 05, 2013 at 03:17:41PM +0200, Pasi K?rkk?inen wrote: > > On Tue, Feb 26, 2013 at 10:13:11PM +0800, Weibin Yao wrote: > > > It still worked in my box. Can you show me the debug.log > > > ([1]http://wiki.nginx.org/Debugging)? You need recompile ? with > > > --with-debug configure argument and set debug level in error_log > > > directive. > > > > > > > Ok so I've sent you the debug log. > > Can you see anything obvious in it? > > > > I keep getting the "upstream sent invalid header while reading response > header from upstream" > > error when using the no_buffer patch.. > > > > Is there something you'd want me to try? Adjusting some config options? > Did you find anything weird in the log? > > Thanks! > > -- Pasi > > > > > > > > > 2013/2/25 Pasi K??rkk??inen <[2]pasik at iki.fi> > > > > > > On Mon, Feb 25, 2013 at 10:13:42AM +0800, Weibin Yao wrote: > > > > ? ? Can you show me your configure? It works for me with > nginx-1.2.7. > > > > ? ? Thanks. > > > > > > > > > > Hi, > > > > > > I'm using the nginx 1.2.7 el6 src.rpm rebuilt with "headers more" > module > > > added, > > > and your patch. > > > > > > I'm using the following configuration: > > > > > > server { > > > ? ? ? ? listen ? ? ? ? ? ? ? ? ? public_ip:443 ssl; > > > ? ? ? ? server_name ? ? ? ? ? ? service.domain.tld; > > > > > > ? ? ? ? ssl ? ? ? ? ? ? ? ? ? ? on; > > > ? ? ? ? keepalive_timeout ? ? ? 70; > > > > > > ? ? ? ? access_log ? ? ? ? ? ? > > > ? /var/log/nginx/access-service.log; > > > ? ? ? ? access_log ? ? ? ? ? ? > > > ? /var/log/nginx/access-service-full.log full; > > > ? ? ? ? error_log ? ? ? ? ? ? ? > > > /var/log/nginx/error-service.log; > > > > > > ? ? ? ? client_header_buffer_size 64k; > > > ? ? ? ? client_header_timeout ? 120; > > > > > > ? ? ? ? proxy_next_upstream error timeout invalid_header > http_500 > > > http_502 http_503; > > > ? ? ? ? proxy_set_header Host $host; > > > ? ? ? ? proxy_set_header X-Real-IP $remote_addr; > > > ? ? ? ? proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > > > ? ? ? ? proxy_redirect ? ? off; > > > ? ? ? ? proxy_buffering ? ? off; > > > ? ? ? ? proxy_cache ? ? ? ? off; > > > > > > ? ? ? ? add_header Last-Modified ""; > > > ? ? ? ? if_modified_since ? off; > > > > > > ? ? ? ? client_max_body_size ? ? 262144M; > > > ? ? ? ? client_body_buffer_size 1024k; > > > ? ? ? ? client_body_timeout ? ? 240; > > > > > > ? ? ? ? chunked_transfer_encoding off; > > > > > > # ? ? ? client_body_postpone_sending ? ? 64k; > > > # ? ? ? proxy_request_buffering ? ? ? ? off; > > > > > > ? ? ? ? location / { > > > > > > ? ? ? ? ? ? ? ? proxy_pass ? ? ? [3] > https://service-backend; > > > ? ? ? ? } > > > } > > > > > > Thanks! > > > > > > -- Pasi > > > > > > > ? ? 2013/2/22 Pasi K?*?*?rkk?*?*?inen <[1][4]pasik at iki.fi> > > > > > > > > ? ? ? On Fri, Feb 22, 2013 at 11:25:24AM +0200, Pasi > > > K?*?*?rkk?*?*?inen wrote: > > > > ? ? ? > On Fri, Feb 22, 2013 at 10:06:11AM +0800, Weibin Yao > wrote: > > > > ? ? ? > > ?* ? ?* Use the patch I attached in this mail thread > > > instead, don't use > > > > ? ? ? the pull > > > > ? ? ? > > ?* ? 
> > > > request patch which is for tengine. Thanks.
> > >
> > > Oh sorry I missed that attachment. It seems to apply and build OK.
> > > I'll start testing it.
> >
> > I added the patch on top of nginx 1.2.7 and enabled the following
> > options:
> >
> >     client_body_postpone_sending     64k;
> >     proxy_request_buffering          off;
> >
> > after that connections through the nginx reverse proxy started failing
> > with errors like this:
> >
> >     [error] 29087#0: *49 upstream prematurely closed connection while
> >     reading response header from upstream
> >     [error] 29087#0: *60 upstream sent invalid header while reading
> >     response header from upstream
> >
> > And the services are unusable.
> >
> > Commenting out the two config options above makes nginx happy again.
> > Any idea what causes that? Any tips how to troubleshoot it?
> > Thanks!
> >
> > -- Pasi
> >
> > > > 2013/2/22 Pasi Kärkkäinen <pasik at iki.fi>
> > > >
> > > >     On Fri, Jan 18, 2013 at 10:38:21AM +0200, Pasi Kärkkäinen wrote:
> > > >     > On Thu, Jan 17, 2013 at 11:15:58AM +0800, Weibin Yao wrote:
> > > >     > >     Yes. It should work for any request method.
> > > >     > >
> > > >     > Great, thanks, I'll let you know how it works for me.
> > > >     > Probably in two weeks or so.
> > > >
> > > >     Hi,
> > > >
> > > >     Adding the tengine pull request 91 on top of nginx 1.2.7
> > > >     doesn't work:
> > > >
> > > >     cc1: warnings being treated as errors
> > > >     src/http/ngx_http_request_body.c: In function
> > > >     'ngx_http_read_non_buffered_client_request_body':
> > > >     src/http/ngx_http_request_body.c:506: error: implicit
> > > >     declaration of function 'ngx_http_top_input_body_filter'
> > > >     make[1]: *** [objs/src/http/ngx_http_request_body.o] Error 1
> > > >     make[1]: Leaving directory `/root/src/nginx/nginx-1.2.7'
> > > >     make: *** [build] Error 2
> > > >
> > > >     ngx_http_top_input_body_filter() cannot be found from any
> > > >     .c/.h files..
> > > >     Which other patches should I apply?
> > > >
> > > >     Perhaps this?
> > > >     https://github.com/cfsego/limit_upload_rate/blob/master/for-nginx.patch
> > > >
> > > >     Thanks,
> > > >     -- Pasi
> > > >
> > > >     > > 2013/1/16 Pasi Kärkkäinen <pasik at iki.fi>
> > > >     > >
> > > >     > >     On Sun, Jan 13, 2013 at 08:22:17PM +0800, Weibin Yao wrote:
> > > >     > >     > This patch should work between nginx-1.2.6 and nginx-1.3.8.
> > > >     > >     > The documentation is here:
> > > >     > >     >
> > > >     > >     > ## client_body_postpone_sending ##
> > > >     > >     > Syntax: **client_body_postpone_sending** `size`
> > > >     > >     > Default: 64k
> > > >     > >     > Context: `http, server, location`
> > > >     > >     > If you specify the `proxy_request_buffering` or
> > > >     > >     > `fastcgi_request_buffering` to be off, Nginx will send
> > > >     > >     > the body to backend when it receives more than `size`
> > > >     > >     > data or the whole request body has been received. It
> > > >     > >     > could save the connection and reduce the IO number
> > > >     > >     > with backend.
> > > >     > >     >
> > > >     > >     > ## proxy_request_buffering ##
> > > >     > >     > Syntax: **proxy_request_buffering** `on | off`
> > > >     > >     > Default: `on`
> > > >     > >     > Context: `http, server, location`
> > > >     > >     > Specify the request body will be buffered to the disk
> > > >     > >     > or not. If it's off, the request body will be stored in
> > > >     > >     > memory and sent to backend after Nginx receives more
> > > >     > >     > than `client_body_postpone_sending` data. It could save
> > > >     > >     > the disk IO with large request body.
> > > >     > >     >
> > > >     > >     > Note that, if you specify it
?* ? ?* to be off, the nginx > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** retry mechanism > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > with > > > unsuccessful response > > > > ? ? ? will be broken after > > > > ? ? ? > > ?* ? ?* ? ?* you sent part of > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** the > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > request > > > to backend. It will > > > > ? ? ? just return 500 when > > > > ? ? ? > > ?* ? ?* ? ?* it encounters > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** such > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > > > unsuccessful response. This > > > > ? ? ? directive also breaks > > > > ? ? ? > > ?* ? ?* ? ?* these > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** variables: > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > > > $request_body, > > > > ? ? ? $request_body_file. You should not > > > > ? ? ? > > ?* ? ?* ? ?* use these > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** variables any > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > more > > > while their values are > > > > ? ? ? undefined. > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > > ? ? ? > > ?* ? ?* ? ?* > > > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** Hello, > > > > ? ? ? > > ?* ? ?* ? ?* > > > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** This patch sounds > > > exactly like what I need > > > > ? ? ? aswell! > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** I assume it > works for > > > both POST and PUT > > > > ? ? ? requests? > > > > ? ? ? > > ?* ? ?* ? ?* > > > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** Thanks, > > > > ? ? ? > > ?* ? ?* ? ?* > > > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** -- Pasi > > > > ? ? ? > > ?* ? ?* ? ?* > > > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** Hello! > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** @yaoweibin > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** If you are eager > > > > ? ? ? for this feature, you > > > > ? ? ? > > ?* ? ?* ? ?* could try my > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** patch: > > > > ? ? ? > > ?* ? ?* ? ?* > > > [2][2][4][5][8]https://github.com/taobao/tengine/pull/91. > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** This patch has > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** been running in > > > > ? ? ? our production servers. > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** what's the nginx > > > > ? ? ? version your patch based on? > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** Thanks! > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** On Fri, Jan 11, 2013 at > > > > ? ? ? 5:17 PM, ?****?*** > > > > ? ? ? > > ?* ? ?* ? ?* ?****?***?**?*???***?**?*???***?**?*?? > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > <[3][3][5][6][9]yaoweibin at gmail.com> wrote: > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** I know nginx > > > > ? ? ? team are working on it. You > > > > ? ? ? 
> > ?* ? ?* ? ?* can wait for it. > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** If you are eager > > > > ? ? ? for this feature, you > > > > ? ? ? > > ?* ? ?* ? ?* could try my > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** patch: > > > > ? ? ? > > ?* ? ?* ? ?* > > > [4][4][6][7][10]https://github.com/taobao/tengine/pull/91. > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** This patch has > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** been running in > > > > ? ? ? our production servers. > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** 2013/1/11 li > > > > ? ? ? zJay > > > > ? ? ? > > ?* ? ?* ? ?* <[5][5][7][8][11]zjay1987 at gmail.com> > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** ?** ?*** Hello! > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** ?** ?*** is it > > > > ? ? ? possible that nginx will not > > > > ? ? ? > > ?* ? ?* ? ?* buffer the client > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** body before > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** ?** ?*** handle > > > > ? ? ? the request to upstream? > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** ?** ?*** we want > > > > ? ? ? to use nginx as a reverse > > > > ? ? ? > > ?* ? ?* ? ?* proxy to upload very > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** very big file > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** ?** ?*** to the > > > > ? ? ? upstream, but the default > > > > ? ? ? > > ?* ? ?* ? ?* behavior of nginx is to > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** save the > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** ?** ?*** whole > > > > ? ? ? request to the local disk > > > > ? ? ? > > ?* ? ?* ? ?* first before handle it > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** to the > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** ?** ?*** upstream, > > > > ? ? ? which make the upstream > > > > ? ? ? > > ?* ? ?* ? ?* impossible to process > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** the file on > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** ?** ?*** the fly > > > > ? ? ? when the file is uploading, > > > > ? ? ? > > ?* ? ?* ? ?* results in much high > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** request > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** ?** ?*** latency > > > > ? ? ? and server-side resource > > > > ? ? ? > > ?* ? ?* ? ?* consumption. > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** ?** ?*** Thanks! > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** ?** ?*** > > > > ? ? ? > > ?* ? ?* ? ?* > > > _______________________________________________ > > > > ? ? ? > > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > ?*** ?** ?*** > ?** > > > ?*** ?** ?*** ?** ?*** nginx > > > > ? ? ? mailing list > > > > ? ? ? > > ?* ? ?* ? 
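A minimal sketch of how the two directives documented above are meant to be combined on the proxy side, assuming the no_buffer / tengine patch from this thread is applied (neither directive exists in stock nginx 1.2.x); the listen address and server name are placeholders, and the backend name is taken from the configuration quoted earlier:

    server {
        listen 443 ssl;
        server_name upload.example.com;             # placeholder

        # stream the request body to the backend instead of spooling
        # the whole upload to disk first (patched nginx only)
        proxy_request_buffering       off;
        client_body_postpone_sending  64k;          # start forwarding once ~64k has arrived

        client_max_body_size 0;                     # 0 disables the body size check

        location / {
            # caveat from the patch documentation: with buffering off,
            # retrying another upstream after part of the body has been
            # sent is broken, and $request_body / $request_body_file
            # are undefined
            proxy_pass https://service-backend;
        }
    }

The 64k value is simply the documented default; a larger value holds more of the body in memory before nginx starts forwarding it to the backend.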
-- 
Weibin Yao
Developer @ Server Platform Team of Taobao
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: no_buffer_v5.patch
Type: application/octet-stream
Size: 40138 bytes
Desc: not available
URL: 

From nginx-forum at nginx.us  Thu Mar 14 07:43:17 2013
From: nginx-forum at nginx.us (Joe M)
Date: Thu, 14 Mar 2013 03:43:17 -0400
Subject: [Q] Security issues with Nginx
Message-ID: <8ea3897e01800260b61373dee6d502a5.NginxMailingListEnglish@forum.nginx.org>

Hey all,

I'm new to Nginx and wanted to know if any of you are familiar with any known security issues in Nginx (for example: http://cnedelcu.blogspot.co.il/2010/05/nginx-php-via-fastcgi-important.html).

Thanks,
Joe

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237336,237336#msg-237336

From pasik at iki.fi  Thu Mar 14 08:39:12 2013
From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=)
Date: Thu, 14 Mar 2013 10:39:12 +0200
Subject: Is it possible that nginx will not buffer the client body?
In-Reply-To: 
References: <20130221200805.GT8912@reaktio.net> <20130222092524.GV8912@reaktio.net> <20130222105052.GW8912@reaktio.net> <20130225101304.GZ8912@reaktio.net> <20130305131741.GN8912@reaktio.net> <20130308133629.GD8912@reaktio.net>
Message-ID: <20130314083912.GT8912@reaktio.net>

On Thu, Mar 14, 2013 at 01:17:16PM +0800, Weibin Yao wrote:
> Try the new patch, It could solve your problem.
> Thanks for your test effort.
>

Thanks a lot! I can confirm the "no_buffer_v5.patch" with nginx 1.2.7 fixes the problem for me, and both HTTP POST and HTTP PUT requests work OK now without buffering to disk.

-- Pasi
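A rough way to exercise what Pasi describes, assuming a patched nginx with proxy_request_buffering switched off; the upload URL and file name are made up for the example, and the temp directory depends on how nginx was built or packaged (see the client_body_temp_path directive):

    # PUT a large file through the reverse proxy
    curl -T big.bin https://service.domain.tld/upload/big.bin

    # POST the same body
    curl --data-binary @big.bin https://service.domain.tld/upload

    # on the proxy host: with request buffering off, nothing should be
    # spooled into nginx's client body temp directory while the upload runs
    ls -lh /var/lib/nginx/tmp/client_body/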
server { > > > ? ? ? ?* ? ?* ? ?* ? ?* ? listen ?* ? ?* ? ?* ? ?* ? ?* ? ?* ? ?* > ? ?* ? ?* public_ip:443 ssl; > > > ? ? ? ?* ? ?* ? ?* ? ?* ? server_name ?* ? ?* ? ?* ? ?* ? ?* ? ?* > ? service.domain.tld; > > > > > > ? ? ? ?* ? ?* ? ?* ? ?* ? ssl ?* ? ?* ? ?* ? ?* ? ?* ? ?* ? ?* > ? ?* ? ?* ? ?* ? on; > > > ? ? ? ?* ? ?* ? ?* ? ?* ? keepalive_timeout ?* ? ?* ? ?* ? 70; > > > > > > ? ? ? ?* ? ?* ? ?* ? ?* ? access_log ?* ? ?* ? ?* ? ?* ? ?* ? ?* > > > ? ? ? ?* /var/log/nginx/access-service.log; > > > ? ? ? ?* ? ?* ? ?* ? ?* ? access_log ?* ? ?* ? ?* ? ?* ? ?* ? ?* > > > ? ? ? ?* /var/log/nginx/access-service-full.log full; > > > ? ? ? ?* ? ?* ? ?* ? ?* ? error_log ?* ? ?* ? ?* ? ?* ? ?* ? ?* > ? ?* > > > ? ? ? /var/log/nginx/error-service.log; > > > > > > ? ? ? ?* ? ?* ? ?* ? ?* ? client_header_buffer_size 64k; > > > ? ? ? ?* ? ?* ? ?* ? ?* ? client_header_timeout ?* ? 120; > > > > > > ? ? ? ?* ? ?* ? ?* ? ?* ? proxy_next_upstream error timeout > invalid_header http_500 > > > ? ? ? http_502 http_503; > > > ? ? ? ?* ? ?* ? ?* ? ?* ? proxy_set_header Host $host; > > > ? ? ? ?* ? ?* ? ?* ? ?* ? proxy_set_header X-Real-IP $remote_addr; > > > ? ? ? ?* ? ?* ? ?* ? ?* ? proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > > > ? ? ? ?* ? ?* ? ?* ? ?* ? proxy_redirect ?* ? ?* ? off; > > > ? ? ? ?* ? ?* ? ?* ? ?* ? proxy_buffering ?* ? ?* off; > > > ? ? ? ?* ? ?* ? ?* ? ?* ? proxy_cache ?* ? ?* ? ?* ? ?* off; > > > > > > ? ? ? ?* ? ?* ? ?* ? ?* ? add_header Last-Modified ""; > > > ? ? ? ?* ? ?* ? ?* ? ?* ? if_modified_since ?* off; > > > > > > ? ? ? ?* ? ?* ? ?* ? ?* ? client_max_body_size ?* ? ?* 262144M; > > > ? ? ? ?* ? ?* ? ?* ? ?* ? client_body_buffer_size 1024k; > > > ? ? ? ?* ? ?* ? ?* ? ?* ? client_body_timeout ?* ? ?* ? 240; > > > > > > ? ? ? ?* ? ?* ? ?* ? ?* ? chunked_transfer_encoding off; > > > > > > ? ? ? # ?* ? ?* ? ?* ? client_body_postpone_sending ?* ? ?* 64k; > > > ? ? ? # ?* ? ?* ? ?* ? proxy_request_buffering ?* ? ?* ? ?* ? ?* > ? off; > > > > > > ? ? ? ?* ? ?* ? ?* ? ?* ? location / { > > > > > > ? ? ? ?* ? ?* ? ?* ? ?* ? ?* ? ?* ? ?* ? ?* ? proxy_pass ?* ? ?* > ? ?* [3][4]https://service-backend; > > > ? ? ? ?* ? ?* ? ?* ? ?* ? } > > > ? ? ? } > > > > > > ? ? ? Thanks! > > > > > > ? ? ? -- Pasi > > > > > > ? ? ? > ?* ? ?* 2013/2/22 Pasi K?**??*??rkk?**??*??inen > <[1][4][5]pasik at iki.fi> > > > ? ? ? > > > > ? ? ? > ?* ? ?* ? ?* On Fri, Feb 22, 2013 at 11:25:24AM +0200, > Pasi > > > ? ? ? K?**??*??rkk?**??*??inen wrote: > > > ? ? ? > ?* ? ?* ? ?* > On Fri, Feb 22, 2013 at 10:06:11AM +0800, > Weibin Yao wrote: > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** Use the patch I attached in > this mail thread > > > ? ? ? instead, don't use > > > ? ? ? > ?* ? ?* ? ?* the pull > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** request patch which is for > tengine.?*** > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** Thanks. > > > ? ? ? > ?* ? ?* ? ?* > > > > > ? ? ? > ?* ? ?* ? ?* > > > > ? ? ? > ?* ? ?* ? ?* > Oh sorry I missed that attachment. It seems > to apply and > > > ? ? ? build OK. > > > ? ? ? > ?* ? ?* ? ?* > I'll start testing it. > > > ? ? ? > ?* ? ?* ? ?* > > > > ? ? ? > > > > ? ? ? > ?* ? ?* ? ?* I added the patch on top of nginx 1.2.7 and > enabled the > > > ? ? ? following > > > ? ? ? > ?* ? ?* ? ?* options: > > > ? ? ? > > > > ? ? ? > ?* ? ?* ? ?* client_body_postpone_sending ?** ?* ?** 64k; > > > ? ? ? > ?* ? ?* ? ?* proxy_request_buffering ?** ?* ?** ?* ?** ?* > ?** ?* off; > > > ? ? ? > > > > ? ? ? > ?* ? ?* ? 
>  * after that, connections through the nginx reverse proxy started failing
>    with errors like this:
>
>    [error] 29087#0: *49 upstream prematurely closed connection while
>    reading response header from upstream
>    [error] 29087#0: *60 upstream sent invalid header while reading
>    response header from upstream
>
>    And the services are unusable.
>
>    Commenting out the two config options above makes nginx happy again.
>    Any idea what causes that? Any tips how to troubleshoot it?
>
>    Thanks!
>
>    -- Pasi
>
> 2013/2/22 Pasi Kärkkäinen <pasik at iki.fi>:
>
> > On Fri, Jan 18, 2013 at 10:38:21AM +0200, Pasi Kärkkäinen wrote:
> > > On Thu, Jan 17, 2013 at 11:15:58AM +0800, Weibin Yao wrote:
> > > > Yes. It should work for any request method.
> > >
> > > Great, thanks, I'll let you know how it works for me. Probably in
> > > two weeks or so.
> >
> > Hi,
> >
> > Adding the tengine pull request 91 on top of nginx 1.2.7 doesn't work:
> >
> > cc1: warnings being treated as errors
> > src/http/ngx_http_request_body.c: In function
> > 'ngx_http_read_non_buffered_client_request_body':
> > src/http/ngx_http_request_body.c:506: error: implicit declaration of
> > function 'ngx_http_top_input_body_filter'
> > make[1]: *** [objs/src/http/ngx_http_request_body.o] Error 1
> > make[1]: Leaving directory `/root/src/nginx/nginx-1.2.7'
> > make: *** [build] Error 2
> >
> > ngx_http_top_input_body_filter() cannot be found from any .c/.h files..
> > Which other patches should I apply?
> >
> > Perhaps this?
> > https://github.com/cfsego/limit_upload_rate/blob/master/for-nginx.patch
> >
> > Thanks,
> >
> > -- Pasi
> >
> > 2013/1/16 Pasi Kärkkäinen <pasik at iki.fi>:
> >
> > > On Sun, Jan 13, 2013 at 08:22:17PM +0800, Weibin Yao wrote:
> > > > This patch should work between nginx-1.2.6 and nginx-1.3.8.
> > > > The documentation is here:
> > > >
> > > > ## client_body_postpone_sending ##
> > > > Syntax: **client_body_postpone_sending** `size`
> > > > Default: 64k
> > > > Context: `http, server, location`
> > > > If you specify `proxy_request_buffering` or
> > > > `fastcgi_request_buffering` to be off, Nginx will send the body to
> > > > the backend when it receives more than `size` data or the whole
> > > > request body has been received. It could save the connection and
> > > > reduce the IO number with the backend.
> > > >
> > > > ## proxy_request_buffering ##
> > > > Syntax: **proxy_request_buffering** `on | off`
> > > > Default: `on`
> > > > Context: `http, server, location`
> > > > Specify whether the request body will be buffered to disk or not.
> > > > If it's off, the request body will be stored in memory and sent to
> > > > the backend after Nginx receives more than
> > > > `client_body_postpone_sending` data. It could save the disk IO
> > > > with large request bodies.
> > > >
> > > > Note that, if you specify it to be off, the nginx retry mechanism
> > > > with unsuccessful responses will be broken after you have sent part
> > > > of the request to the backend. It will just return 500 when it
> > > > encounters such an unsuccessful response. This directive also
> > > > breaks these variables: $request_body, $request_body_file. You
> > > > should not use these variables any more while their values are
> > > > undefined.
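(A minimal configuration sketch of how the two directives documented above
fit together, assuming nginx has been built with the tengine pull request 91
patch applied; the location and upstream name are illustrative and not taken
from this thread. Presumably these are also the "two config options" referred
to in the failure report at the top of this message:)

    # assumes nginx built with https://github.com/taobao/tengine/pull/91;
    # "upload_backend" is an illustrative upstream name
    location /upload/ {
        proxy_pass http://upload_backend;

        # stream the request body to the backend instead of spooling it first
        proxy_request_buffering off;

        # start forwarding once 64k of the body has arrived
        # (64k is the documented default)
        client_body_postpone_sending 64k;
    }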
> > > Hello,
> > >
> > > This patch sounds exactly like what I need as well!
> > > I assume it works for both POST and PUT requests?
> > >
> > > Thanks,
> > >
> > > -- Pasi
> > >
> > > > Hello! @yaoweibin
> > > >
> > > > what's the nginx version your patch based on? Thanks!
> > > >
> > > > On Fri, Jan 11, 2013 at 5:17 PM, Weibin Yao <yaoweibin at gmail.com> wrote:
> > > >
> > > > > I know the nginx team are working on it. You can wait for it.
> > > > >
> > > > > If you are eager for this feature, you could try my patch:
> > > > > https://github.com/taobao/tengine/pull/91. This patch has been
> > > > > running in our production servers.
> > > > >
> > > > > 2013/1/11 li zJay <zjay1987 at gmail.com>:
> > > > >
> > > > > > Hello!
> > > > > >
> > > > > > is it possible that nginx will not buffer the client body before
> > > > > > handing the request to the upstream?
> > > > > >
> > > > > > we want to use nginx as a reverse proxy to upload very, very big
> > > > > > files to the upstream, but the default behavior of nginx is to
> > > > > > save the whole request to the local disk first before handing it
> > > > > > to the upstream, which makes it impossible for the upstream to
> > > > > > process the file on the fly while it is uploading, and results in
> > > > > > much higher request latency and server-side resource consumption.
> > > > > >
> > > > > > Thanks!
> > > > >
> > > > > --
> > > > > Weibin Yao
> > > > > Developer @ Server Platform Team of Taobao
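(For context, a sketch of the default behaviour described above with an
unpatched nginx of that era: request bodies larger than
client_body_buffer_size are written to client_body_temp_path, and the
request is only passed to the upstream once the whole body has been read.
Directive values, paths, and the upstream name below are illustrative:)

    server {
        listen 80;

        # allow large uploads; adjust to your needs
        client_max_body_size 10g;

        # bodies exceeding client_body_buffer_size are spooled here in full
        # before the upstream request is started
        client_body_temp_path /var/cache/nginx/client_temp;

        location / {
            proxy_pass http://backend;
        }
    }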
https://github.com/cfsego/limit_upload_rate/blob/master/for-nginx.patch > 269. mailto:pasik at iki.fi > 270. https://github.com/taobao/tengine/pull/91 > 271. mailto:yaoweibin at gmail.com > 272. https://github.com/taobao/tengine/pull/91 > 273. mailto:zjay1987 at gmail.com > 274. mailto:nginx at nginx.org > 275. http://mailman.nginx.org/mailman/listinfo/nginx > 276. mailto:nginx at nginx.org > 277. http://mailman.nginx.org/mailman/listinfo/nginx > 278. mailto:nginx at nginx.org > 279. http://mailman.nginx.org/mailman/listinfo/nginx > 280. mailto:zjay1987 at gmail.com > 281. https://github.com/taobao/tengine/pull/91 > 282. mailto:yaoweibin at gmail.com > 283. https://github.com/taobao/tengine/pull/91 > 284. mailto:zjay1987 at gmail.com > 285. mailto:nginx at nginx.org > 286. http://mailman.nginx.org/mailman/listinfo/nginx > 287. mailto:nginx at nginx.org > 288. http://mailman.nginx.org/mailman/listinfo/nginx > 289. mailto:nginx at nginx.org > 290. http://mailman.nginx.org/mailman/listinfo/nginx > 291. mailto:nginx at nginx.org > 292. http://mailman.nginx.org/mailman/listinfo/nginx > 293. mailto:nginx at nginx.org > 294. http://mailman.nginx.org/mailman/listinfo/nginx > 295. mailto:pasik at iki.fi > 296. https://github.com/taobao/tengine/pull/91 > 297. mailto:yaoweibin at gmail.com > 298. https://github.com/taobao/tengine/pull/91 > 299. mailto:zjay1987 at gmail.com > 300. mailto:nginx at nginx.org > 301. http://mailman.nginx.org/mailman/listinfo/nginx > 302. mailto:nginx at nginx.org > 303. http://mailman.nginx.org/mailman/listinfo/nginx > 304. mailto:nginx at nginx.org > 305. http://mailman.nginx.org/mailman/listinfo/nginx > 306. mailto:zjay1987 at gmail.com > 307. https://github.com/taobao/tengine/pull/91 > 308. mailto:yaoweibin at gmail.com > 309. https://github.com/taobao/tengine/pull/91 > 310. mailto:zjay1987 at gmail.com > 311. mailto:nginx at nginx.org > 312. http://mailman.nginx.org/mailman/listinfo/nginx > 313. mailto:nginx at nginx.org > 314. http://mailman.nginx.org/mailman/listinfo/nginx > 315. mailto:nginx at nginx.org > 316. http://mailman.nginx.org/mailman/listinfo/nginx > 317. mailto:nginx at nginx.org > 318. http://mailman.nginx.org/mailman/listinfo/nginx > 319. mailto:nginx at nginx.org > 320. http://mailman.nginx.org/mailman/listinfo/nginx > 321. mailto:nginx at nginx.org > 322. http://mailman.nginx.org/mailman/listinfo/nginx > 323. mailto:nginx at nginx.org > 324. http://mailman.nginx.org/mailman/listinfo/nginx > 325. mailto:nginx at nginx.org > 326. http://mailman.nginx.org/mailman/listinfo/nginx > 327. mailto:pasik at iki.fi > 328. https://github.com/cfsego/limit_upload_rate/blob/master/for-nginx.patch > 329. mailto:pasik at iki.fi > 330. https://github.com/taobao/tengine/pull/91 > 331. mailto:yaoweibin at gmail.com > 332. https://github.com/taobao/tengine/pull/91 > 333. mailto:zjay1987 at gmail.com > 334. mailto:nginx at nginx.org > 335. http://mailman.nginx.org/mailman/listinfo/nginx > 336. mailto:nginx at nginx.org > 337. http://mailman.nginx.org/mailman/listinfo/nginx > 338. mailto:nginx at nginx.org > 339. http://mailman.nginx.org/mailman/listinfo/nginx > 340. mailto:zjay1987 at gmail.com > 341. https://github.com/taobao/tengine/pull/91 > 342. mailto:yaoweibin at gmail.com > 343. https://github.com/taobao/tengine/pull/91 > 344. mailto:zjay1987 at gmail.com > 345. mailto:nginx at nginx.org > 346. http://mailman.nginx.org/mailman/listinfo/nginx > 347. mailto:nginx at nginx.org > 348. http://mailman.nginx.org/mailman/listinfo/nginx > 349. 
mailto:nginx at nginx.org > 350. http://mailman.nginx.org/mailman/listinfo/nginx > 351. mailto:nginx at nginx.org > 352. http://mailman.nginx.org/mailman/listinfo/nginx > 353. mailto:nginx at nginx.org > 354. http://mailman.nginx.org/mailman/listinfo/nginx > 355. mailto:pasik at iki.fi > 356. https://github.com/taobao/tengine/pull/91 > 357. mailto:yaoweibin at gmail.com > 358. https://github.com/taobao/tengine/pull/91 > 359. mailto:zjay1987 at gmail.com > 360. mailto:nginx at nginx.org > 361. http://mailman.nginx.org/mailman/listinfo/nginx > 362. mailto:nginx at nginx.org > 363. http://mailman.nginx.org/mailman/listinfo/nginx > 364. mailto:nginx at nginx.org > 365. http://mailman.nginx.org/mailman/listinfo/nginx > 366. mailto:zjay1987 at gmail.com > 367. https://github.com/taobao/tengine/pull/91 > 368. mailto:yaoweibin at gmail.com > 369. https://github.com/taobao/tengine/pull/91 > 370. mailto:zjay1987 at gmail.com > 371. mailto:nginx at nginx.org > 372. http://mailman.nginx.org/mailman/listinfo/nginx > 373. mailto:nginx at nginx.org > 374. http://mailman.nginx.org/mailman/listinfo/nginx > 375. mailto:nginx at nginx.org > 376. http://mailman.nginx.org/mailman/listinfo/nginx > 377. mailto:nginx at nginx.org > 378. http://mailman.nginx.org/mailman/listinfo/nginx > 379. mailto:nginx at nginx.org > 380. http://mailman.nginx.org/mailman/listinfo/nginx > 381. mailto:nginx at nginx.org > 382. http://mailman.nginx.org/mailman/listinfo/nginx > 383. mailto:nginx at nginx.org > 384. http://mailman.nginx.org/mailman/listinfo/nginx > 385. mailto:nginx at nginx.org > 386. http://mailman.nginx.org/mailman/listinfo/nginx > 387. mailto:nginx at nginx.org > 388. http://mailman.nginx.org/mailman/listinfo/nginx > 389. mailto:nginx at nginx.org > 390. http://mailman.nginx.org/mailman/listinfo/nginx > 391. mailto:nginx at nginx.org > 392. http://mailman.nginx.org/mailman/listinfo/nginx > 393. mailto:pasik at iki.fi > 394. mailto:pasik at iki.fi > 395. https://github.com/cfsego/limit_upload_rate/blob/master/for-nginx.patch > 396. mailto:pasik at iki.fi > 397. https://github.com/taobao/tengine/pull/91 > 398. mailto:yaoweibin at gmail.com > 399. https://github.com/taobao/tengine/pull/91 > 400. mailto:zjay1987 at gmail.com > 401. mailto:nginx at nginx.org > 402. http://mailman.nginx.org/mailman/listinfo/nginx > 403. mailto:nginx at nginx.org > 404. http://mailman.nginx.org/mailman/listinfo/nginx > 405. mailto:nginx at nginx.org > 406. http://mailman.nginx.org/mailman/listinfo/nginx > 407. mailto:zjay1987 at gmail.com > 408. https://github.com/taobao/tengine/pull/91 > 409. mailto:yaoweibin at gmail.com > 410. https://github.com/taobao/tengine/pull/91 > 411. mailto:zjay1987 at gmail.com > 412. mailto:nginx at nginx.org > 413. http://mailman.nginx.org/mailman/listinfo/nginx > 414. mailto:nginx at nginx.org > 415. http://mailman.nginx.org/mailman/listinfo/nginx > 416. mailto:nginx at nginx.org > 417. http://mailman.nginx.org/mailman/listinfo/nginx > 418. mailto:nginx at nginx.org > 419. http://mailman.nginx.org/mailman/listinfo/nginx > 420. mailto:nginx at nginx.org > 421. http://mailman.nginx.org/mailman/listinfo/nginx > 422. mailto:pasik at iki.fi > 423. https://github.com/taobao/tengine/pull/91 > 424. mailto:yaoweibin at gmail.com > 425. https://github.com/taobao/tengine/pull/91 > 426. mailto:zjay1987 at gmail.com > 427. mailto:nginx at nginx.org > 428. http://mailman.nginx.org/mailman/listinfo/nginx > 429. mailto:nginx at nginx.org > 430. http://mailman.nginx.org/mailman/listinfo/nginx > 431. 
mailto:nginx at nginx.org > 432. http://mailman.nginx.org/mailman/listinfo/nginx > 433. mailto:zjay1987 at gmail.com > 434. https://github.com/taobao/tengine/pull/91 > 435. mailto:yaoweibin at gmail.com > 436. https://github.com/taobao/tengine/pull/91 > 437. mailto:zjay1987 at gmail.com > 438. mailto:nginx at nginx.org > 439. http://mailman.nginx.org/mailman/listinfo/nginx > 440. mailto:nginx at nginx.org > 441. http://mailman.nginx.org/mailman/listinfo/nginx > 442. mailto:nginx at nginx.org > 443. http://mailman.nginx.org/mailman/listinfo/nginx > 444. mailto:nginx at nginx.org > 445. http://mailman.nginx.org/mailman/listinfo/nginx > 446. mailto:nginx at nginx.org > 447. http://mailman.nginx.org/mailman/listinfo/nginx > 448. mailto:nginx at nginx.org > 449. http://mailman.nginx.org/mailman/listinfo/nginx > 450. mailto:nginx at nginx.org > 451. http://mailman.nginx.org/mailman/listinfo/nginx > 452. mailto:nginx at nginx.org > 453. http://mailman.nginx.org/mailman/listinfo/nginx > 454. mailto:pasik at iki.fi > 455. https://github.com/cfsego/limit_upload_rate/blob/master/for-nginx.patch > 456. mailto:pasik at iki.fi > 457. https://github.com/taobao/tengine/pull/91 > 458. mailto:yaoweibin at gmail.com > 459. https://github.com/taobao/tengine/pull/91 > 460. mailto:zjay1987 at gmail.com > 461. mailto:nginx at nginx.org > 462. http://mailman.nginx.org/mailman/listinfo/nginx > 463. mailto:nginx at nginx.org > 464. http://mailman.nginx.org/mailman/listinfo/nginx > 465. mailto:nginx at nginx.org > 466. http://mailman.nginx.org/mailman/listinfo/nginx > 467. mailto:zjay1987 at gmail.com > 468. https://github.com/taobao/tengine/pull/91 > 469. mailto:yaoweibin at gmail.com > 470. https://github.com/taobao/tengine/pull/91 > 471. mailto:zjay1987 at gmail.com > 472. mailto:nginx at nginx.org > 473. http://mailman.nginx.org/mailman/listinfo/nginx > 474. mailto:nginx at nginx.org > 475. http://mailman.nginx.org/mailman/listinfo/nginx > 476. mailto:nginx at nginx.org > 477. http://mailman.nginx.org/mailman/listinfo/nginx > 478. mailto:nginx at nginx.org > 479. http://mailman.nginx.org/mailman/listinfo/nginx > 480. mailto:nginx at nginx.org > 481. http://mailman.nginx.org/mailman/listinfo/nginx > 482. mailto:pasik at iki.fi > 483. https://github.com/taobao/tengine/pull/91 > 484. mailto:yaoweibin at gmail.com > 485. https://github.com/taobao/tengine/pull/91 > 486. mailto:zjay1987 at gmail.com > 487. mailto:nginx at nginx.org > 488. http://mailman.nginx.org/mailman/listinfo/nginx > 489. mailto:nginx at nginx.org > 490. http://mailman.nginx.org/mailman/listinfo/nginx > 491. mailto:nginx at nginx.org > 492. http://mailman.nginx.org/mailman/listinfo/nginx > 493. mailto:zjay1987 at gmail.com > 494. https://github.com/taobao/tengine/pull/91 > 495. mailto:yaoweibin at gmail.com > 496. https://github.com/taobao/tengine/pull/91 > 497. mailto:zjay1987 at gmail.com > 498. mailto:nginx at nginx.org > 499. http://mailman.nginx.org/mailman/listinfo/nginx > 500. mailto:nginx at nginx.org > 501. http://mailman.nginx.org/mailman/listinfo/nginx > 502. mailto:nginx at nginx.org > 503. http://mailman.nginx.org/mailman/listinfo/nginx > 504. mailto:nginx at nginx.org > 505. http://mailman.nginx.org/mailman/listinfo/nginx > 506. mailto:nginx at nginx.org > 507. http://mailman.nginx.org/mailman/listinfo/nginx > 508. mailto:nginx at nginx.org > 509. http://mailman.nginx.org/mailman/listinfo/nginx > 510. mailto:nginx at nginx.org > 511. http://mailman.nginx.org/mailman/listinfo/nginx > 512. mailto:nginx at nginx.org > 513. 
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From mdounin at mdounin.ru  Thu Mar 14 08:40:12 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 14 Mar 2013 12:40:12 +0400
Subject: [Q] Security issues with Nginx
In-Reply-To: <8ea3897e01800260b61373dee6d502a5.NginxMailingListEnglish@forum.nginx.org>
References: <8ea3897e01800260b61373dee6d502a5.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20130314084012.GE15378@mdounin.ru>

Hello!

On Thu, Mar 14, 2013 at 03:43:17AM -0400, Joe M wrote:

> Hey all
>
> I'm new to Nginx and wanted to know if any of you are familiar with any known
> security issues in Nginx (for example:
> http://cnedelcu.blogspot.co.il/2010/05/nginx-php-via-fastcgi-important.html)

This was discussed here once discovered[1], and the conclusion is: it's
not a security issue in nginx, but rather a misconfiguration of php.
Making sure you've configured it correctly (i.e. switched path-info fixing
off by setting cgi.fix_pathinfo=0 in php.ini) is a good idea though.

[1] http://mailman.nginx.org/pipermail/nginx/2010-May/020461.html

--
Maxim Dounin
http://nginx.org/en/donation.html

From pasik at iki.fi  Thu Mar 14 08:43:57 2013
From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=)
Date: Thu, 14 Mar 2013 10:43:57 +0200
Subject: Is it possible that nginx will not buffer the client body?
In-Reply-To: <20130307174837.GB8912@reaktio.net>
References: <20130305131741.GN8912@reaktio.net> <20130307174837.GB8912@reaktio.net>
Message-ID: <20130314084357.GU8912@reaktio.net>

On Thu, Mar 07, 2013 at 07:48:37PM +0200, Pasi Kärkkäinen wrote:
> On Thu, Mar 07, 2013 at 12:25:43PM -0500, double wrote:
> > > I keep getting the "upstream sent invalid header while reading response
> > > header from upstream" error when using the no_buffer patch..
> > > The patch does not work for you?
> > > Thanks
> > > Markus
> >
> > Yep, the patch doesn't work for me.
>

The latest no_buffer_v5.patch seems to work OK for me with nginx 1.2.7.

-- Pasi

From nginx-forum at nginx.us  Thu Mar 14 11:20:59 2013
From: nginx-forum at nginx.us (Joe M)
Date: Thu, 14 Mar 2013 07:20:59 -0400
Subject: [Q] Security issues with Nginx
In-Reply-To: <20130314084012.GE15378@mdounin.ru>
References: <20130314084012.GE15378@mdounin.ru>
Message-ID: <92f81d0bc143eb1d7bfbe5d4d481a1c2.NginxMailingListEnglish@forum.nginx.org>

OK, great. Any other security issues or misconfigurations I should know about?

Thanks
Joe

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237336,237345#msg-237345

From nginx-forum at nginx.us  Thu Mar 14 13:46:20 2013
From: nginx-forum at nginx.us (enlighteneditdevelopment)
Date: Thu, 14 Mar 2013 09:46:20 -0400
Subject: NginX and Magento very strange problem (?!)
In-Reply-To: <20130304114437.GJ15378@mdounin.ru>
References: <20130304114437.GJ15378@mdounin.ru>
Message-ID: <1a63adfa7b3aa1adb0bf693b25a01377.NginxMailingListEnglish@forum.nginx.org>

Our Magento experts resolve complicated issues on ecommerce websites by
building custom extensions and modules to meet your custom requirements; for
a no-obligation free quote, get in touch with us.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236869,237353#msg-237353

From pasik at iki.fi  Thu Mar 14 14:13:49 2013
From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=)
Date: Thu, 14 Mar 2013 16:13:49 +0200
Subject: Is it possible that nginx will not buffer the client body?
In-Reply-To: <20130228181246.GT81985@mdounin.ru>
References: <0b978a9f636d364e49e79bf5e91418bb.NginxMailingListEnglish@forum.nginx.org> <88C197B9-A5A4-4201-999B-4995243FE204@co.sapo.pt> <20130228181246.GT81985@mdounin.ru>
Message-ID: <20130314141348.GV8912@reaktio.net>

On Thu, Feb 28, 2013 at 10:12:47PM +0400, Maxim Dounin wrote:
> Hello!
>

Hello,

> On Thu, Feb 28, 2013 at 05:36:23PM +0000, André Cruz wrote:
>
> > I'm also very interested in being able to configure nginx to NOT
> > proxy the entire request.
> >
> > Regarding this patch, https://github.com/alibaba/tengine/pull/91,
> > is anything fundamentally wrong with it? I don't understand Chinese
> > so I'm at a loss here...
>
> As a non-default mode of operation the approach taken is likely
> good enough (not looked into details), but the patch won't work
> with current nginx versions - at least it needs (likely major)
> adjustments to cope with changes introduced during work on chunked
> request body support as available in nginx 1.3.9+.
>

Weibin: Have you thought of upstreaming the no_buffer patch to nginx 1.3.x
so it could become part of the next nginx stable version 1.4?

It'd be really nice to have the no_buffer functionality in stock nginx!

(The current no_buffer_v5.patch seems to work OK for me on nginx 1.2.7.)

-- Pasi

> --
> Maxim Dounin
> http://nginx.org/en/donation.html
>

From nginx-forum at nginx.us  Thu Mar 14 15:19:09 2013
From: nginx-forum at nginx.us (Reddirt)
Date: Thu, 14 Mar 2013 11:19:09 -0400
Subject: nginx: [emerg] invalid host in upstream
Message-ID: <61e7921616d43b59215793256619b5f3.NginxMailingListEnglish@forum.nginx.org>

I'm trying to run a Rails app using Nginx with 5 Thin servers. I created the
thin.yml file, and when I run the start command, I get:

bitnami at linux:/opt/bitnami/projects/ndeavor/current/config$ thin -C thin.yml start
Starting server on 127.0.0.1:3000 ...
Starting server on 127.0.0.1:3001 ...
Starting server on 127.0.0.1:3002 ...
Starting server on 127.0.0.1:3003 ...
Starting server on 127.0.0.1:3004 ...
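An nginx vhost in front of several local Thin instances is normally wired up
with an upstream block that lists bare host:port pairs; the "/ndeavor" path
suffix that shows up in the vhost below is what nginx rejects as an invalid
upstream host. A minimal sketch, with illustrative names rather than the
actual Bitnami files:

    upstream thin_cluster {
        # one entry per Thin instance; host:port only, no URI path here
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
        server 127.0.0.1:3003;
        server 127.0.0.1:3004;
    }

    server {
        listen 80;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # a path prefix, if needed, goes here on proxy_pass, never in the upstream block
            proxy_pass http://thin_cluster;
        }
    }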
I created a ndeavor.conf file in nginx/conf/vhosts that looks like this: https://dl.dropbox.com/u/35302780/ndeavor.conf And if I run the command to start Nginx, I get this: bitnami at linux:/opt/bitnami$ sudo ./ctlscript.sh start nginx nginx: [emerg] invalid host in upstream "127.0.0.1:3000/ndeavor" in /opt/bitnami/nginx/conf/vhosts/ndeavor.conf:2 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237358,237358#msg-237358 From WBrown at e1b.org Thu Mar 14 15:24:55 2013 From: WBrown at e1b.org (WBrown at e1b.org) Date: Thu, 14 Mar 2013 11:24:55 -0400 Subject: nginx: [emerg] invalid host in upstream In-Reply-To: <61e7921616d43b59215793256619b5f3.NginxMailingListEnglish@forum.nginx.org> References: <61e7921616d43b59215793256619b5f3.NginxMailingListEnglish@forum.nginx.org> Message-ID: From: "Reddirt" > bitnami at linux:/opt/bitnami$ sudo ./ctlscript.sh start nginx > nginx: [emerg] invalid host in upstream "127.0.0.1:3000/ndeavor" in > /opt/bitnami/nginx/conf/vhosts/ndeavor.conf:2 What is your upstream section of the configuration file? Do you have the "/ndeavor" as part of the server statement? Try taking it off. Confidentiality Notice: This electronic message and any attachments may contain confidential or privileged information, and is intended only for the individual or entity identified above as the addressee. If you are not the addressee (or the employee or agent responsible to deliver it to the addressee), or if this message has been addressed to you in error, you are hereby notified that you may not copy, forward, disclose or use any part of this message or any attachments. Please notify the sender immediately by return e-mail or telephone and delete this message from your system. From nginx-forum at nginx.us Thu Mar 14 15:36:58 2013 From: nginx-forum at nginx.us (gadh) Date: Thu, 14 Mar 2013 11:36:58 -0400 Subject: nginx + my module crashes only when ignore client abort = on Message-ID: <26eafeb963c981a50298d8b8a45aa8fc.NginxMailingListEnglish@forum.nginx.org> i use nginx ver 1.2.5 (also tried 1.2.7) with my module that sends subrequest to an upstream, waits untill response get back, then goes to backend upstream and fetch the regular web page from it. when i add to nginx conf "proxy_ignore_client_abort on;", nginx crash with signal 11 (seg fault) when i do "ab" test and stop it in the middle of the (log: "client prematurely closed..." etc.). when i cancel my subrequest - no crash or: when i remove the proxy_ignore_client_abort (default off) - no crash, even with the subrequest. here's the core dump: ================================== Program terminated with signal 11, Segmentation fault. 
#0 0x000000000045926b in ngx_http_terminate_request (r=0x1964360, rc=0) at src/http/ngx_http_request.c:2147 2147 ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, (gdb) bt #0 0x000000000045926b in ngx_http_terminate_request (r=0x1964360, rc=0) at src/http/ngx_http_request.c:2147 #1 0x0000000000458b98 in ngx_http_finalize_request (r=0x1964360, rc=0) at src/http/ngx_http_request.c:1977 #2 0x0000000000459d80 in ngx_http_test_reading (r=0x1964360) at src/http/ngx_http_request.c:2443 #3 0x00000000004587fc in ngx_http_request_handler (ev=0x192b428) at src/http/ngx_http_request.c:1866 #4 0x000000000043f1e0 in ngx_epoll_process_events (cycle=0x18f0470, timer=59782, flags=1) at src/event/modules/ngx_epoll_module.c:683 #5 0x000000000042eea4 in ngx_process_events_and_timers (cycle=0x18f0470) at src/event/ngx_event.c:247 #6 0x000000000043ba36 in ngx_single_process_cycle (cycle=0x18f0470) at src/os/unix/ngx_process_cycle.c:315 #7 0x000000000040a33a in main (argc=3, argv=0x7fff9e8fec18) at src/core/nginx.c:409 ================================================= the cause of the crash: it appears thet "r->connection->log" is null or garbaged - its being freed before by nginx core ! please help TNX, Gad Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237362,237362#msg-237362 From nginx-forum at nginx.us Thu Mar 14 15:58:01 2013 From: nginx-forum at nginx.us (gadh) Date: Thu, 14 Mar 2013 11:58:01 -0400 Subject: nginx + my module crashes only when ignore client abort = on In-Reply-To: <26eafeb963c981a50298d8b8a45aa8fc.NginxMailingListEnglish@forum.nginx.org> References: <26eafeb963c981a50298d8b8a45aa8fc.NginxMailingListEnglish@forum.nginx.org> Message-ID: i attache here my "debug_http" log - note that "http finalize request" is called twice (i think that one of them nulls the connection so nulls its log too), and thats NOT happening when NOT using proxy_ignore_client_abort: btw: i use proxy http version 1.1, if it helps ------------------------------- 2013/03/14 17:49:23 [info] 29550#0: *55 client prematurely closed connection while sending request to upstream, client: 10.0.0.18, server: www.aaa.com, request: "GET / HTTP/1.0", upstream: "http://x.x.x.x:80/", host: "aaa.com" 2013/03/14 17:49:23 [debug] 29550#0: *55 http finalize request: 0, "/?" a:1, c:1 2013/03/14 17:49:23 [debug] 29550#0: *55 http terminate request count:1 2013/03/14 17:49:23 [debug] 29550#0: *55 cleanup http upstream request: "/" 2013/03/14 17:49:23 [debug] 29550#0: *55 finalize http upstream request: -4 2013/03/14 17:49:23 [debug] 29550#0: *55 finalize http proxy request 2013/03/14 17:49:23 [debug] 29550#0: *55 free keepalive peer 2013/03/14 17:49:23 [debug] 29550#0: *55 free rr peer 3 0 2013/03/14 17:49:23 [debug] 29550#0: *55 close http upstream connection: 25 2013/03/14 17:49:23 [debug] 29550#0: *55 http finalize request: -4, "/?" 
a:1, c:1 2013/03/14 17:49:23 [debug] 29550#0: *55 http lingering close handler 2013/03/14 17:49:23 [debug] 29550#0: *55 lingering read: 0 2013/03/14 17:49:23 [debug] 29550#0: *55 http request count:1 blk:0 2013/03/14 17:49:23 [debug] 29550#0: *55 http close request 2013/03/14 17:49:23 [debug] 29550#0: *55 http log handler 2013/03/14 17:49:23 [debug] 29550#0: *55 close http connection: 23 ------------------------------- Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237362,237363#msg-237363 From nginx-forum at nginx.us Thu Mar 14 16:06:13 2013 From: nginx-forum at nginx.us (Wolfsrudel) Date: Thu, 14 Mar 2013 12:06:13 -0400 Subject: nginx: [emerg] invalid host in upstream In-Reply-To: References: Message-ID: <2169a98091b81cec805738ee5062e6f0.NginxMailingListEnglish@forum.nginx.org> I don't think that you can use a path behind the server. The Upstream server is a server name itself (with port), not with something like "/something" at the end. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237358,237364#msg-237364 From mdounin at mdounin.ru Thu Mar 14 16:32:47 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 14 Mar 2013 20:32:47 +0400 Subject: nginx + my module crashes only when ignore client abort = on In-Reply-To: <26eafeb963c981a50298d8b8a45aa8fc.NginxMailingListEnglish@forum.nginx.org> References: <26eafeb963c981a50298d8b8a45aa8fc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130314163247.GM15378@mdounin.ru> Hello! On Thu, Mar 14, 2013 at 11:36:58AM -0400, gadh wrote: > i use nginx ver 1.2.5 (also tried 1.2.7) with my module that sends > subrequest to an upstream, waits untill response get back, then goes to > backend upstream and fetch the regular web page from it. > when i add to nginx conf "proxy_ignore_client_abort on;", nginx crash with > signal 11 (seg fault) when i do "ab" test and stop it in the middle of the > (log: "client prematurely closed..." etc.). > when i cancel my subrequest - no crash > or: when i remove the proxy_ignore_client_abort (default off) - no crash, > even with the subrequest. Description of the problem suggests there is something wrong with request reference counting, likely caused by what your module does. It's very easy to screw it up, especially when trying to do subrequests before the request body is received. Hard to say anything else without the code. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Mar 14 16:46:43 2013 From: nginx-forum at nginx.us (gadh) Date: Thu, 14 Mar 2013 12:46:43 -0400 Subject: nginx + my module crashes only when ignore client abort = on In-Reply-To: <20130314163247.GM15378@mdounin.ru> References: <20130314163247.GM15378@mdounin.ru> Message-ID: thanks after i get the subrequest response in a handler function i registered, what can i do in order to tell the ngin core the subrequest had finished ? in my case i do only these actions: ngx_http_core_run_phases(r->main); return NGX_OK; is this ok ? BTW, its not a case of a client body, i'm talking about GET requests also that get crashed, not POST. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237362,237369#msg-237369 From mdounin at mdounin.ru Thu Mar 14 17:04:32 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 14 Mar 2013 21:04:32 +0400 Subject: nginx + my module crashes only when ignore client abort = on In-Reply-To: References: <20130314163247.GM15378@mdounin.ru> Message-ID: <20130314170431.GO15378@mdounin.ru> Hello! 
On Thu, Mar 14, 2013 at 12:46:43PM -0400, gadh wrote:
> Thanks.
> After I get the subrequest response in a handler function I registered, what
> can I do in order to tell the nginx core the subrequest has finished? In my
> case I do only these actions:
>
> ngx_http_core_run_phases(r->main);
> return NGX_OK;
>
> Is this OK?

No. This looks like completely wrong code, which may easily screw up things.

--
Maxim Dounin
http://nginx.org/en/donation.html

From nginx-forum at nginx.us  Thu Mar 14 18:19:44 2013
From: nginx-forum at nginx.us (Reddirt)
Date: Thu, 14 Mar 2013 14:19:44 -0400
Subject: nginx: [emerg] invalid host in upstream
In-Reply-To: <2169a98091b81cec805738ee5062e6f0.NginxMailingListEnglish@forum.nginx.org>
References: <2169a98091b81cec805738ee5062e6f0.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

It looks like Bitnami expects the Rails app to be in url/appname. So, I just
removed the /ndeavor from the thin.yml and from nginx.conf.

This is what my thin.yml looks like now:
https://dl.dropbox.com/u/35302780/thin.yml

And this is my nginx.conf:
https://dl.dropbox.com/u/35302780/nginx.conf

I'm using the Bitnami Rails stack that runs Ubuntu.

I now start Thin with this:
thin -C thin.yml start

And I start Nginx with this:
bitnami at linux:/opt/bitnami$ sudo ./ctlscript.sh start nginx

And they both start OK. But, when I go to my website URL, I get this message:
"The service is not available. Please try again later."
I don't know where that message is coming from.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237358,237372#msg-237372

From nginx-forum at nginx.us  Thu Mar 14 18:24:42 2013
From: nginx-forum at nginx.us (Reddirt)
Date: Thu, 14 Mar 2013 14:24:42 -0400
Subject: nginx: [emerg] invalid host in upstream
In-Reply-To:
References: <2169a98091b81cec805738ee5062e6f0.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <2b3ff025c3886b887e272006bb5ed3cf.NginxMailingListEnglish@forum.nginx.org>

I restarted. Now I'm getting the website without the CSS. I thought Nginx
would provide the static webpages, including the CSS.

How can I tell if Nginx is running properly? Sorry - this is my first time
trying this.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237358,237373#msg-237373

From nginx-forum at nginx.us  Thu Mar 14 18:27:06 2013
From: nginx-forum at nginx.us (Reddirt)
Date: Thu, 14 Mar 2013 14:27:06 -0400
Subject: nginx: [emerg] invalid host in upstream
In-Reply-To: <2b3ff025c3886b887e272006bb5ed3cf.NginxMailingListEnglish@forum.nginx.org>
References: <2169a98091b81cec805738ee5062e6f0.NginxMailingListEnglish@forum.nginx.org> <2b3ff025c3886b887e272006bb5ed3cf.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <7ad95d94a4b1447168422c6835786353.NginxMailingListEnglish@forum.nginx.org>

My Rails app has this in production.rb:
config.serve_static_assets = false

When I run Thin by itself, I have to make that true. I thought that with
Nginx it would stay false.
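With config.serve_static_assets = false, Rails itself won't serve anything out
of public/, so nginx has to be told to serve the compiled CSS/JS directly and
only proxy the rest to Thin. A minimal sketch of the relevant vhost pieces,
assuming a Bitnami-style path and the upstream name from the earlier sketch
(both illustrative, not taken from the actual nginx.conf):

    server {
        listen 80;

        # assumed location of the Rails public/ directory
        root /opt/bitnami/projects/ndeavor/current/public;

        location / {
            # serve files that exist on disk (stylesheets, JS, images),
            # fall back to the Thin cluster for everything else
            try_files $uri @thin;
        }

        location @thin {
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://thin_cluster;
        }
    }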
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237358,237374#msg-237374 From nginx-forum at nginx.us Thu Mar 14 20:13:46 2013 From: nginx-forum at nginx.us (Reddirt) Date: Thu, 14 Mar 2013 16:13:46 -0400 Subject: nginx: [emerg] invalid host in upstream In-Reply-To: <7ad95d94a4b1447168422c6835786353.NginxMailingListEnglish@forum.nginx.org> References: <2169a98091b81cec805738ee5062e6f0.NginxMailingListEnglish@forum.nginx.org> <2b3ff025c3886b887e272006bb5ed3cf.NginxMailingListEnglish@forum.nginx.org> <7ad95d94a4b1447168422c6835786353.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9e57c6ff68937ce8eee084210ebafc4e.NginxMailingListEnglish@forum.nginx.org> OK - we can close this thread. But, I'm going to start a new one because I still can't get to my Rails app. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237358,237377#msg-237377 From nginx-forum at nginx.us Thu Mar 14 20:19:02 2013 From: nginx-forum at nginx.us (Reddirt) Date: Thu, 14 Mar 2013 16:19:02 -0400 Subject: The service is not available. Please try again later. Message-ID: <640c02ea53588b77fc84ba622d5b49ec.NginxMailingListEnglish@forum.nginx.org> I'm trying to get a Bitnami Rails running with Nginx and Thin. I've got Thin servers running on: bitnami at linux:/opt/bitnami$ thin -C projects/ndeavor/current/config/thin.yml start Starting server on 127.0.0.1:3000 ... Starting server on 127.0.0.1:3001 ... Starting server on 127.0.0.1:3002 ... Starting server on 127.0.0.1:3003 ... Starting server on 127.0.0.1:3004 ... I've got Nginx running: bitnami at linux:/opt/bitnami$ sudo ./ctlscript.sh start nginx /opt/bitnami/nginx/scripts/ctl.sh : Nginx started And after I start Nginx, then the thin.3000.log has: >> Thin web server (v1.5.0 codename Knife) >> Maximum connections set to 1024 >> Listening on 127.0.0.1:3000, CTRL+C to stop So, I believe that Nginx is connected to the 5 Thin app servers. I have the nginx.conf listen on port 80 But, when I try to connect using a browser, I get this: "The service is not available. Please try again later." It's the same message I get if nginx is not running. Is something in Ubuntu stopping http port 80 from reaching nginx? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237379,237379#msg-237379 From nginx-forum at nginx.us Thu Mar 14 20:31:19 2013 From: nginx-forum at nginx.us (Reddirt) Date: Thu, 14 Mar 2013 16:31:19 -0400 Subject: The service is not available. Please try again later. In-Reply-To: <640c02ea53588b77fc84ba622d5b49ec.NginxMailingListEnglish@forum.nginx.org> References: <640c02ea53588b77fc84ba622d5b49ec.NginxMailingListEnglish@forum.nginx.org> Message-ID: <685bb6b27bba8f391731519a81604e9c.NginxMailingListEnglish@forum.nginx.org> This is my nginx.conf file. Does it look OK? https://dl.dropbox.com/u/35302780/nginx.conf Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237379,237381#msg-237381 From nginx-forum at nginx.us Thu Mar 14 22:03:20 2013 From: nginx-forum at nginx.us (Reddirt) Date: Thu, 14 Mar 2013 18:03:20 -0400 Subject: The service is not available. Please try again later. 
In-Reply-To: <685bb6b27bba8f391731519a81604e9c.NginxMailingListEnglish@forum.nginx.org> References: <640c02ea53588b77fc84ba622d5b49ec.NginxMailingListEnglish@forum.nginx.org> <685bb6b27bba8f391731519a81604e9c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <594d826ab7f6873198e72c60e08952b5.NginxMailingListEnglish@forum.nginx.org> I got past that error and now the nginx error log has this: 2013/03/14 14:22:10 [error] 1537#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.20.3, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "ndeavor.ameipro.com" 2013/03/14 14:22:10 [error] 1537#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.20.3, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3001/", host: "ndeavor.ameipro.com" 2013/03/14 14:22:10 [error] 1537#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.20.3, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3002/", host: "ndeavor.ameipro.com" 2013/03/14 14:22:10 [error] 1537#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.20.3, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3003/", host: "ndeavor.ameipro.com" 2013/03/14 14:22:10 [error] 1537#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.20.3, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3004/", host: "ndeavor.ameipro.com" 2013/03/14 14:22:27 [error] 1537#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.20.3, server: , request: "GET /clients HTTP/1.1", upstream: "http://127.0.0.1:3004/clients", host: "ndeavor.ameipro.com" 2013/03/14 14:22:27 [error] 1537#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.20.3, server: , request: "GET /clients HTTP/1.1", upstream: "http://127.0.0.1:3000/clients", host: "ndeavor.ameipro.com" 2013/03/14 14:22:27 [error] 1537#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.20.3, server: , request: "GET /clients HTTP/1.1", upstream: "http://127.0.0.1:3001/clients", host: "ndeavor.ameipro.com" 2013/03/14 14:22:27 [error] 1537#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.20.3, server: , request: "GET /clients HTTP/1.1", upstream: "http://127.0.0.1:3002/clients", host: "ndeavor.ameipro.com" 2013/03/14 14:22:27 [error] 1537#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.20.3, server: , request: "GET /clients HTTP/1.1", upstream: "http://127.0.0.1:3003/clients", host: "ndeavor.ameipro.com" 2013/03/14 14:22:28 [error] 1537#0: *12 no live upstreams while connecting to upstream, client: 192.168.20.3, server: , request: "GET / HTTP/1.1", upstream: "http://backend/", host: "ndeavor.ameipro.com" 2013/03/14 14:23:09 [info] 1537#0: *1 client 192.168.20.3 closed keepalive connection 2013/03/14 14:23:09 [info] 1537#0: *15 client 192.168.20.3 closed keepalive connection Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237379,237382#msg-237382 From jay at kodewerx.org Fri Mar 15 02:07:20 2013 From: jay at kodewerx.org (Jay Oster) Date: Thu, 14 Mar 2013 19:07:20 -0700 Subject: Strange $upstream_response_time latency spikes with reverse proxy Message-ID: Hi list! 
I'm trying to debug an interesting problem where we randomly get a "high latency" response time from our upstream servers. It appears to occur in about 1.5% of all requests. Here's a description of the tests I've been running to isolate the problem within nginx: I'm using an endpoint on the upstream servers that operates extremely quickly; a request which only responds back with the server's current local UNIX timestamp. From the nginx server, I start ApacheBench with 5,000 concurrent connections directly to the upstream server (bypassing nginx). Here is what a typical run of this test looks like: Document Path: /time/0 Document Length: 19 bytes Concurrency Level: 5000 Time taken for tests: 0.756 seconds Complete requests: 5000 Failed requests: 0 Write errors: 0 Total transferred: 1110000 bytes HTML transferred: 95000 bytes Requests per second: 6617.33 [#/sec] (mean) Time per request: 755.592 [ms] (mean) Time per request: 0.151 [ms] (mean, across all concurrent requests) Transfer rate: 1434.62 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 4 63 53.7 35 167 Processing: 22 44 19.1 38 249 Waiting: 17 35 18.8 30 243 Total: 55 107 64.4 73 401 Percentage of the requests served within a certain time (ms) 50% 73 66% 77 75% 84 80% 202 90% 222 95% 231 98% 250 99% 251 100% 401 (longest request) And here's the same test with the longest response times I've seen: Document Path: /time/0 Document Length: 19 bytes Concurrency Level: 5000 Time taken for tests: 0.807 seconds Complete requests: 5000 Failed requests: 0 Write errors: 0 Total transferred: 1110000 bytes HTML transferred: 95000 bytes Requests per second: 6197.08 [#/sec] (mean) Time per request: 806.831 [ms] (mean) Time per request: 0.161 [ms] (mean, across all concurrent requests) Transfer rate: 1343.51 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 3 45 51.8 17 144 Processing: 10 29 24.4 23 623 Waiting: 9 25 24.4 18 623 Total: 26 75 67.4 39 626 Percentage of the requests served within a certain time (ms) 50% 39 66% 42 75% 45 80% 173 90% 190 95% 199 98% 217 99% 218 100% 626 (longest request) Not bad. Now, keep in mind, this is a SINGLE upstream server handling these requests over the network. Once I change my test to point ab at the local nginx, the strange latency issue rears its ugly head. I have 4 upstream servers in my config. Here's what the same test through nginx looks like: Concurrency Level: 5000 Time taken for tests: 1.602 seconds Complete requests: 5000 Failed requests: 0 Write errors: 0 Total transferred: 1170000 bytes HTML transferred: 95000 bytes Requests per second: 3121.08 [#/sec] (mean) Time per request: 1602.012 [ms] (mean) Time per request: 0.320 [ms] (mean, across all concurrent requests) Transfer rate: 713.21 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 109 172 39.4 167 246 Processing: 106 505 143.3 530 1248 Waiting: 103 504 143.5 530 1248 Total: 344 677 108.6 696 1358 Percentage of the requests served within a certain time (ms) 50% 696 66% 723 75% 741 80% 752 90% 768 95% 779 98% 786 99% 788 100% 1358 (longest request) Ack! It's like nginx decides to drop an extra second on some requests for no reason. I've also recorded these test runs with nginx's access log. 
Here's the log format, first: log_format main '$remote_addr - - ' ## User's IP Address '[$time_local] ' ## DateTime '"$request" ' ## User's Request URL '$status ' ## HTTP Code '$body_bytes_sent ' ## Bytes BODY ONLY '"$http_referer" ' ## User's Referer '"$http_user_agent" ' ## User's Agent '$request_time ' ## NginX Response '$upstream_response_time ' ## Upstream Response '$bytes_sent ' ## Bytes Sent (GZIP) '$request_length'; ## String Length The access log has 10,000 lines total (i.e. two of these tests with 5,000 concurrent connections), and when I sort by upstream_response_time, I get a log with the first 140 lines having about 1s on the upstream_response_time, and the remaining 9,860 lines show 700ms and less. Here's a snippet showing the strangeness, starting with line numbers: 1: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" 200 19 "-" "ApacheBench/2.3" 1.027 1.026 234 83 2: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" 200 19 "-" "ApacheBench/2.3" 1.027 1.026 234 83 3: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" 200 19 "-" "ApacheBench/2.3" 1.026 1.025 234 83 ... 138: 127.0.0.1 - - [14/Mar/2013:17:57:18 -0700] "GET /time/0 HTTP/1.0" 200 19 "-" "ApacheBench/2.3" 1.000 0.999 234 81 139: 127.0.0.1 - - [14/Mar/2013:17:57:18 -0700] "GET /time/0 HTTP/1.0" 200 19 "-" "ApacheBench/2.3" 0.999 0.999 234 81 140: 127.0.0.1 - - [14/Mar/2013:17:57:18 -0700] "GET /time/0 HTTP/1.0" 200 19 "-" "ApacheBench/2.3" 0.999 0.999 234 81 141: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" 200 19 "-" "ApacheBench/2.3" 0.708 0.568 234 83 142: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" 200 19 "-" "ApacheBench/2.3" 0.708 0.568 234 83 143: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" 200 19 "-" "ApacheBench/2.3" 0.708 0.568 234 83 ... 9998: 127.0.0.1 - - [14/Mar/2013:17:57:16 -0700] "GET /time/0 HTTP/1.0" 200 19 "-" "ApacheBench/2.3" 0.142 0.005 234 81 9999: 127.0.0.1 - - [14/Mar/2013:17:57:16 -0700] "GET /time/0 HTTP/1.0" 200 19 "-" "ApacheBench/2.3" 0.142 0.005 234 81 10000: 127.0.0.1 - - [14/Mar/2013:17:57:16 -0700] "GET /time/0 HTTP/1.0" 200 19 "-" "ApacheBench/2.3" 0.122 0.002 234 81 The upstream_response_time difference between line 140 and 141 is nearly 500ms! The total request_time also displays an interesting gap of almost 300ms. What's going on here? The kernels have been tuned on all servers for a high number of open files, and tcp buffers: $ ulimit -a core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 119715 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1048576 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 119715 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited $ cat /proc/sys/net/core/*mem_* 184217728 184217728 184217728 184217728 184217728 Also for reference, here is part of my nginx.conf which may be useful for diagnosis: worker_processes 7; worker_rlimit_nofile 500000; events { use epoll; worker_connections 500000; multi_accept on; } http { log_format main ... access_log ... 
## ----------------------------------------------------------------------- ## TCP Tuning ## ----------------------------------------------------------------------- sendfile off; tcp_nopush off; tcp_nodelay on; ## ----------------------------------------------------------------------- ## Max Data Size ## ----------------------------------------------------------------------- client_max_body_size 1k; client_body_buffer_size 1k; client_header_buffer_size 32k; large_client_header_buffers 200 32k; ## ----------------------------------------------------------------------- ## GZIP ## ----------------------------------------------------------------------- gzip on; gzip_min_length 1000; gzip_disable msie6; gzip_proxied any; gzip_buffers 100 64k; gzip_types text/javascript; ## ----------------------------------------------------------------------- ## Proxy Load Distribution ## ----------------------------------------------------------------------- proxy_redirect off; proxy_connect_timeout 5; proxy_send_timeout 5; proxy_read_timeout 8; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_buffering off; ## ----------------------------------------------------------------------- ## Hide 'Server: nginx' Server Header ## ----------------------------------------------------------------------- server_tokens off; proxy_pass_header Server; upstream ... server ... } -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Mar 15 02:47:50 2013 From: nginx-forum at nginx.us (jaychris) Date: Thu, 14 Mar 2013 22:47:50 -0400 Subject: nginx_upload_progress Message-ID: I'm trying to compile in the nginx_upload_progress module into an RPM. I'm using the SPEC file provided with v1.2.7 from the Nginx repos. 
I downloaded the upload_progress module, opened up the source tar archive, copied the module to nginx-1.2.7/nginx-upload-progress-module, and added this line to my SPEC file: --add-module=%{_builddir}/%{name}-%{version}/nginx-upload-progress-module \ When run the rpmbuild process, it looks to me like it is successfully finding and compiling the module (excerpts from the build log): gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -O2 -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules -I src/mail \ -o objs/addon/nginx-upload-progress-module/ngx_http_uploadprogress_module.o \ /root/rpmbuild/BUILD/nginx-1.2.7/nginx-upload-progress-module/ngx_http_uploadprogress_module.c objs/addon/nginx-upload-progress-module/ngx_http_uploadprogress_module.o \ Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/rpmbuild/BUILDROOT/nginx-1.2.7-1.el6.ngx.x86_64 Wrote: /root/rpmbuild/SRPMS/nginx-1.2.7-1.el6.ngx.src.rpm Wrote: /root/rpmbuild/RPMS/x86_64/nginx-1.2.7-1.el6.ngx.x86_64.rpm Wrote: /root/rpmbuild/RPMS/x86_64/nginx-debug-1.2.7-1.el6.ngx.x86_64.rpm Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.tN7cRa + umask 022 + cd /root/rpmbuild/BUILD + cd nginx-1.2.7 + /bin/rm -rf /root/rpmbuild/BUILDROOT/nginx-1.2.7-1.el6.ngx.x86_64 + exit 0 So, the build completes successfully, but when I install the rpm and run "nginx -V", I get: nginx version: nginx/1.2.7 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g' ... no upload-progress module. Am I missing a part of the process here? Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237384,237384#msg-237384 From nginx-forum at nginx.us Fri Mar 15 03:37:59 2013 From: nginx-forum at nginx.us (michael.heuberger) Date: Thu, 14 Mar 2013 23:37:59 -0400 Subject: How to investigate upstream timed out issues? 
Message-ID: <02cd8251ff47402a7f19ac7dde4ea0e2.NginxMailingListEnglish@forum.nginx.org>

Hello guys

In my nginx version 1.3.14 I'm having lots of upstream timeouts like the one
below, and I wonder what the correct, professional approach is to solve these.

Example:
762#0: *113 upstream timed out (110: Connection timed out) while reading
response header from upstream, client: 58.28.152.233, server: videomail.io,
request: "GET /socket.io/socket.io.v0.9.11.js HTTP/1.1", upstream:
"https://127.0.0.1:4443/socket.io/socket.io.v0.9.11.js", host: "videomail.io"

Relevant nginx config:

location / {
    proxy_cache one;
    try_files $uri @proxy;
}

location @proxy {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-NginX-Proxy true;
    proxy_set_header Host $http_host;
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_404;
    proxy_pass https://127.0.0.1:4443;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_http_version 1.1;
}

What would you do in my case to examine this? IMO the default timeout of 60
seconds is good and I do not want to increase it. It should be able to deliver
socket.io.v0.9.11.js in under 60 seconds, so I believe the problem is something
else. I just do not know how to investigate this.

Any hints very welcome!

Cheers
Michael

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237385,237385#msg-237385

From robm at fastmail.fm  Fri Mar 15 04:22:31 2013
From: robm at fastmail.fm (Robert Mueller)
Date: Fri, 15 Mar 2013 15:22:31 +1100
Subject: Dropped https client connection doesn't drop backend proxy_pass connection
Message-ID: <1363321351.3854.140661204587653.70CC51E2@webmail.messagingengine.com>

Hi

I'm trying to set up nginx to proxy a server sent events connection
(http://dev.w3.org/html5/eventsource/) to a backend server.

The approach is that the browser connects to a particular path, which then
checks the cookies to see the connection is authorised, and then returns an
X-Accel-Redirect header to connect up to a separate internal path to do the
proxying. That separate internal path is configured like this:

location ^~ /pushevents/ {
    internal;

    # Immediately send backend responses back to client
    proxy_buffering off;

    # Disable keepalive to browser
    keepalive_timeout 0;

    # It's a long lived backend connection with potentially a long time between
    # push events, make sure proxy doesn't timeout
    proxy_read_timeout 7200;

    proxy_pass http://pushbackend;
}
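The public-facing location that does the cookie check and issues the
X-Accel-Redirect isn't shown above; a minimal sketch of how the two halves of
such a setup usually fit together (the /events/ path matches the requests in
the logs below, but the upstream name and redirect target are illustrative,
not the actual configuration):

    # public entry point; the auth backend decides whether the client may subscribe
    location /events/ {
        proxy_set_header Host $http_host;
        proxy_pass http://authbackend;
    }

The auth backend then answers with an internal redirect header instead of a
normal body, for example:

    X-Accel-Redirect: /pushevents/some-channel

which makes nginx replay the request against the internal /pushevents/
location, so the client never sees the internal path.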
However for https connections, this is not what happens. Instead, it appears nginx fails to detect that the client has closed the connection, and leaves the nginx -> pushbackend connection open (confirmed with netstat). And because we set proxy_read_timeout to 2 hours, it takes 2 hours for that connection to be closed! That's bad. This is nginx 1.2.7 compiled with openssl 1.0.1e. With an http connection with debugging enabled, when I close the client connection I see this in the log: 2013/03/15 00:11:33 [debug] 28285#0: epoll: fd:3 ev:0005 d:00007F6B414AA150 2013/03/15 00:11:33 [debug] 28285#0: *1 http run request: "..." 2013/03/15 00:11:33 [debug] 28285#0: *1 http upstream check client, write event:0, "..." 2013/03/15 00:11:33 [debug] 28285#0: *1 http upstream recv(): 0 (11: Resource temporarily unavailable) 2013/03/15 00:11:33 [info] 28285#0: *1 client prematurely closed connection, so upstream connection is closed too while reading upstream, client: 10.50.1.14, server: insecure.*, request: "GET /events/ HTTP/1.0", upstream: "http://127.0.0.4/.../", host: "..." 2013/03/15 00:11:33 [debug] 28285#0: *1 finalize http upstream request: 499 2013/03/15 00:11:33 [debug] 28285#0: *1 finalize http proxy request 2013/03/15 00:11:33 [debug] 28285#0: *1 free rr peer 1 0 2013/03/15 00:11:33 [debug] 28285#0: *1 close http upstream connection: 32 2013/03/15 00:11:33 [debug] 28285#0: *1 free: 0000000001847BC0, unused: 48 2013/03/15 00:11:33 [debug] 28285#0: *1 event timer del: 32: 1363327885296 2013/03/15 00:11:33 [debug] 28285#0: *1 reusable connection: 0 2013/03/15 00:11:33 [debug] 28285#0: *1 http output filter "..." 2013/03/15 00:11:33 [debug] 28285#0: *1 http copy filter: "..." 2013/03/15 00:11:33 [debug] 28285#0: *1 http postpone filter "..." 00007FFF614BAFE0 2013/03/15 00:11:33 [debug] 28285#0: *1 http copy filter: -1 "..." 2013/03/15 00:11:33 [debug] 28285#0: *1 http finalize request: -1, "..." a:1, c:1 2013/03/15 00:11:33 [debug] 28285#0: *1 http terminate request count:1 2013/03/15 00:11:33 [debug] 28285#0: *1 http terminate cleanup count:1 blk:0 2013/03/15 00:11:33 [debug] 28285#0: *1 http posted request: "..." 
2013/03/15 00:11:33 [debug] 28285#0: *1 http terminate handler count:1 2013/03/15 00:11:33 [debug] 28285#0: *1 http request count:1 blk:0 2013/03/15 00:11:33 [debug] 28285#0: *1 http close request 2013/03/15 00:11:33 [debug] 28285#0: *1 http log handler 2013/03/15 00:11:33 [debug] 28285#0: *1 free: 0000000001952800 2013/03/15 00:11:33 [debug] 28285#0: *1 free: 0000000001895EA0 2013/03/15 00:11:33 [debug] 28285#0: *1 free: 0000000001846BB0, unused: 0 2013/03/15 00:11:33 [debug] 28285#0: *1 free: 0000000001894E90, unused: 3 2013/03/15 00:11:33 [debug] 28285#0: *1 free: 0000000001896EB0, unused: 1771 2013/03/15 00:11:33 [debug] 28285#0: *1 close http connection: 3 2013/03/15 00:11:33 [debug] 28285#0: *1 reusable connection: 0 2013/03/15 00:11:33 [debug] 28285#0: *1 free: 00000000018467A0 2013/03/15 00:11:33 [debug] 28285#0: *1 free: 00000000018461A0 2013/03/15 00:11:33 [debug] 28285#0: *1 free: 000000000183B210, unused: 8 2013/03/15 00:11:33 [debug] 28285#0: *1 free: 0000000001846690, unused: 128 2013/03/15 00:11:33 [debug] 28285#0: timer delta: 3247 2013/03/15 00:11:33 [debug] 28285#0: posted events 0000000000000000 2013/03/15 00:11:33 [debug] 28285#0: worker cycle 2013/03/15 00:11:33 [debug] 28285#0: epoll timer: -1 When doing the same thing with an https connection (openssl s_client) and closing the connection I see this in the log: 2013/03/15 00:10:23 [debug] 28237#0: epoll: fd:3 ev:0005 d:00007F6ECDCAE150 2013/03/15 00:10:23 [debug] 28237#0: *1 http run request: "..." 2013/03/15 00:10:23 [debug] 28237#0: *1 http upstream check client, write event:0, "..." 2013/03/15 00:10:23 [debug] 28237#0: *1 http upstream recv(): 1 (11: Resource temporarily unavailable) 2013/03/15 00:10:23 [debug] 28237#0: *1 http run request: "..." 2013/03/15 00:10:23 [debug] 28237#0: *1 http upstream process non buffered downstream 2013/03/15 00:10:23 [debug] 28237#0: *1 event timer del: 32: 1363327814331 2013/03/15 00:10:23 [debug] 28237#0: *1 event timer add: 32: 7200000:1363327823956 2013/03/15 00:10:23 [debug] 28237#0: timer delta: 4626 2013/03/15 00:10:23 [debug] 28237#0: posted events 0000000000000000 2013/03/15 00:10:23 [debug] 28237#0: worker cycle 2013/03/15 00:10:23 [debug] 28237#0: epoll timer: 7200000 Is this an nginx bug? -- Rob Mueller robm at fastmail.fm From mdounin at mdounin.ru Fri Mar 15 08:10:44 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 15 Mar 2013 12:10:44 +0400 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: <1363321351.3854.140661204587653.70CC51E2@webmail.messagingengine.com> References: <1363321351.3854.140661204587653.70CC51E2@webmail.messagingengine.com> Message-ID: <20130315081044.GQ15378@mdounin.ru> Hello! On Fri, Mar 15, 2013 at 03:22:31PM +1100, Robert Mueller wrote: [...] > When using http with my test client, I see this in the nginx log when > the client disconnects: > > 2013/03/14 23:27:27 [info] 27717#0: *1 client prematurely closed > connection, so upstream connection is closed too while reading upstream, > client: 10.50.1.14, server: www.*, request: "GET /events/ HTTP/1.0", > upstream: "http://127.0.0.4:80/.../", host: "..." > > And when I check netstat, I see that the connection from nginx -> > pushbackend has been dropped as well. > > However for https connections, this is not what happens. Instead, it > appears nginx fails to detect that the client has closed the connection, > and leaves the nginx -> pushbackend connection open (confirmed with > netstat). 
And because we set proxy_read_timeout to 2 hours, it takes 2 > hours for that connection to be closed! That's bad.

You shouldn't rely on connection close being detected by nginx. It's a generally useful optimization, but it's not something which is guaranteed to work.

It is generally not possible to check whether a connection was closed while there is still pending data in it. One has to read all pending data before the connection close can be detected, but that doesn't work as a generic detection mechanism, as one would have to buffer the read data somewhere.

As of now, nginx uses:

1) An EV_EOF flag as reported by kqueue. This only works if you use kqueue, i.e. on FreeBSD and friends.

2) The recv(MSG_PEEK) call to test whether the connection was closed. This works on all platforms, but only if there is no pending data.

In the case of https, in many (most) cases there is pending data, due to the various SSL packets sent during connection close. This means connection close detection with https doesn't work unless you use kqueue.

Further reading:

http://mailman.nginx.org/pipermail/nginx/2011-June/027672.html
http://mailman.nginx.org/pipermail/nginx/2011-November/030630.html

-- Maxim Dounin http://nginx.org/en/donation.html

From nginx-forum at nginx.us Fri Mar 15 08:16:25 2013 From: nginx-forum at nginx.us (mex) Date: Fri, 15 Mar 2013 04:16:25 -0400 Subject: How to investigate upstream timed out issues? In-Reply-To: <02cd8251ff47402a7f19ac7dde4ea0e2.NginxMailingListEnglish@forum.nginx.org> References: <02cd8251ff47402a7f19ac7dde4ea0e2.NginxMailingListEnglish@forum.nginx.org> Message-ID:

Are you able to fetch the given resource from your nginx server? e.g.

wget https://127.0.0.1:4443/socket.io/socket.io.v0.9.11.js

Maybe you have a port issue (4443 vs 443).

mex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237385,237390#msg-237390

From mdounin at mdounin.ru Fri Mar 15 08:20:59 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 15 Mar 2013 12:20:59 +0400 Subject: Strange $upstream_response_time latency spikes with reverse proxy In-Reply-To: References: Message-ID: <20130315082059.GR15378@mdounin.ru>

Hello!

On Thu, Mar 14, 2013 at 07:07:20PM -0700, Jay Oster wrote:

[...]

> The access log has 10,000 lines total (i.e. two of these tests with 5,000 > concurrent connections), and when I sort by upstream_response_time, I get a > log with the first 140 lines having about 1s on the upstream_response_time, > and the remaining 9,860 lines show 700ms and less. Here's a snippet showing > the strangeness, starting with line numbers: > > > 1: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > 200 19 "-" "ApacheBench/2.3" 1.027 1.026 234 83 > 2: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > 200 19 "-" "ApacheBench/2.3" 1.027 1.026 234 83 > 3: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > 200 19 "-" "ApacheBench/2.3" 1.026 1.025 234 83 > ...
> 138: 127.0.0.1 - - [14/Mar/2013:17:57:18 -0700] "GET /time/0 HTTP/1.0" > 200 19 "-" "ApacheBench/2.3" 1.000 0.999 234 81 > 139: 127.0.0.1 - - [14/Mar/2013:17:57:18 -0700] "GET /time/0 HTTP/1.0" > 200 19 "-" "ApacheBench/2.3" 0.999 0.999 234 81 > 140: 127.0.0.1 - - [14/Mar/2013:17:57:18 -0700] "GET /time/0 HTTP/1.0" > 200 19 "-" "ApacheBench/2.3" 0.999 0.999 234 81 > 141: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > 200 19 "-" "ApacheBench/2.3" 0.708 0.568 234 83 > 142: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > 200 19 "-" "ApacheBench/2.3" 0.708 0.568 234 83 > 143: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > 200 19 "-" "ApacheBench/2.3" 0.708 0.568 234 83 > ... > 9998: 127.0.0.1 - - [14/Mar/2013:17:57:16 -0700] "GET /time/0 HTTP/1.0" > 200 19 "-" "ApacheBench/2.3" 0.142 0.005 234 81 > 9999: 127.0.0.1 - - [14/Mar/2013:17:57:16 -0700] "GET /time/0 HTTP/1.0" > 200 19 "-" "ApacheBench/2.3" 0.142 0.005 234 81 > 10000: 127.0.0.1 - - [14/Mar/2013:17:57:16 -0700] "GET /time/0 HTTP/1.0" > 200 19 "-" "ApacheBench/2.3" 0.122 0.002 234 81 > > > > The upstream_response_time difference between line 140 and 141 is nearly > 500ms! The total request_time also displays an interesting gap of almost > 300ms. What's going on here? I would suggests there are packet loss and retransmits for some reason. Try tcpdump'ing traffic between nginx and backends to see what goes on in details. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Fri Mar 15 08:24:13 2013 From: nginx-forum at nginx.us (varan) Date: Fri, 15 Mar 2013 04:24:13 -0400 Subject: Geo Module. How to make nginx to check proxy for any ip? Message-ID: <62b761348593ef2d394a49f82087a203.NginxMailingListEnglish@forum.nginx.org> I mean I haven't got full list of ip addresses that can be proxy, so I want nginx to look X-FORWARDED-FOR header for any ip address, and if the header exists to determine geo using the header. If there's no X-Forwarded-For header, then use ip address for geo. is it acceptable to use "proxy 0.0.0.0/0" ? for example: geo $geo { default ZZ; proxy 0.0.0.0/0; 127.0.0.1/32 RU; 192.168.0.1/32 UK; ... } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237392,237392#msg-237392 From nginx-forum at nginx.us Fri Mar 15 08:39:03 2013 From: nginx-forum at nginx.us (honwel) Date: Fri, 15 Mar 2013 04:39:03 -0400 Subject: nginx: worker process: malloc(): memory corruption: 0x0000000000b6bdb0 *** Message-ID: hi on centos 6, nginx-1.2.2, nginx was compiled with: --prefix=/usr/local/nginx --user=root --group=root --with-http_ssl_module --with-ipv6 --with-pcre=/home/nginx/src/pcre-8.20 --with-openssl=/home/nginx/src/openssl-1.0.1c --with-zlib=/home/nginx/src/zlib-1.2.7 --add-module=/home/nginx/svn/nginx-1.2.2/src/add-on/nginx_subs_filter --add-module=/home/nginx/svn/nginx-1.2.2/src/add-on/nginx_gunzip --add-module=/home/nginx/svn/nginx-1.2.2/src/add-on/nginx_lvs_live --add-module=/home/nginx/svn/nginx-1.2.2/src/add-on/nginx_sniper some minutes later when launch nginx, i get error(no coredump file, and workprcess hungs)? 
*** glibc detected *** nginx: worker process: malloc(): memory corruption: 0x0000000000b6bdb0 *** pstark info: [root at rs1 sbin]# pstack 30803 #0 0x0000003eb160cb1b in pthread_once () from /lib64/libpthread.so.0 #1 0x0000003eb0efe954 in backtrace () from /lib64/libc.so.6 #2 0x0000003eb0e707cb in __libc_message () from /lib64/libc.so.6 #3 0x0000003eb0e760e6 in malloc_printerr () from /lib64/libc.so.6 #4 0x0000003eb0e79b64 in _int_malloc () from /lib64/libc.so.6 #5 0x0000003eb0e7a5a6 in calloc () from /lib64/libc.so.6 #6 0x0000003eb0a0ad0f in _dl_new_object () from /lib64/ld-linux-x86-64.so.2 #7 0x0000003eb0a0719e in _dl_map_object_from_fd () from /lib64/ld-linux-x86-64.so.2 #8 0x0000003eb0a0835a in _dl_map_object () from /lib64/ld-linux-x86-64.so.2 #9 0x0000003eb0a129b4 in dl_open_worker () from /lib64/ld-linux-x86-64.so.2 #10 0x0000003eb0a0e196 in _dl_catch_error () from /lib64/ld-linux-x86-64.so.2 #11 0x0000003eb0a1246a in _dl_open () from /lib64/ld-linux-x86-64.so.2 #12 0x0000003eb0f26300 in do_dlopen () from /lib64/libc.so.6 #13 0x0000003eb0a0e196 in _dl_catch_error () from /lib64/ld-linux-x86-64.so.2 #14 0x0000003eb0f26457 in __libc_dlopen_mode () from /lib64/libc.so.6 #15 0x0000003eb0efe855 in init () from /lib64/libc.so.6 #16 0x0000003eb160cb23 in pthread_once () from /lib64/libpthread.so.0 #17 0x0000003eb0efe954 in backtrace () from /lib64/libc.so.6 #18 0x0000003eb0e707cb in __libc_message () from /lib64/libc.so.6 #19 0x0000003eb0e760e6 in malloc_printerr () from /lib64/libc.so.6 #20 0x0000003eb0e79b64 in _int_malloc () from /lib64/libc.so.6 #21 0x0000003eb0e7a911 in malloc () from /lib64/libc.so.6 #22 0x000000000041bb5e in ngx_alloc () #23 0x00000000004144ab in ngx_resolver_alloc () #24 0x0000000000414916 in ngx_resolver_calloc () #25 0x00000000004149af in ngx_resolve_start () #26 0x000000000043ece8 in ngx_http_upstream_init_request () #27 0x000000000043ee93 in ngx_http_upstream_init () #28 0x00000000004361c1 in ngx_http_read_client_request_body () #29 0x0000000000459973 in ngx_http_proxy_handler () #30 0x000000000042bd24 in ngx_http_core_content_phase () #31 0x00000000004269a3 in ngx_http_core_run_phases () #32 0x0000000000426a9e in ngx_http_handler () #33 0x0000000000430662 in ngx_http_process_request () #34 0x0000000000430d98 in ngx_http_process_request_headers () #35 0x0000000000431369 in ngx_http_process_request_line () #36 0x000000000042e6a6 in ngx_http_init_request () #37 0x000000000042e825 in ngx_http_keepalive_handler () #38 0x00000000004198f2 in ngx_event_process_posted () #39 0x00000000004197c2 in ngx_process_events_and_timers () #40 0x000000000041f410 in ngx_worker_process_cycle () #41 0x000000000041dc58 in ngx_spawn_process () #42 0x000000000041eadd in ngx_start_worker_processes () #43 0x000000000041f931 in ngx_master_process_cycle () #44 0x0000000000404e23 in main () [root at rs1 sbin]# pstack 30804 #0 0x0000003eb0ee8ee3 in __epoll_wait_nocancel () from /lib64/libc.so.6 #1 0x0000000000420562 in ngx_epoll_process_events () #2 0x000000000041975b in ngx_process_events_and_timers () #3 0x000000000041f410 in ngx_worker_process_cycle () #4 0x000000000041dc58 in ngx_spawn_process () #5 0x000000000041eadd in ngx_start_worker_processes () #6 0x000000000041f931 in ngx_master_process_cycle () #7 0x0000000000404e23 in main () [root at rs1 sbin]# pstack 30805 #0 0x0000003eb0ee8ee3 in __epoll_wait_nocancel () from /lib64/libc.so.6 #1 0x0000000000420562 in ngx_epoll_process_events () #2 0x000000000041975b in ngx_process_events_and_timers () #3 0x000000000041f410 in 
ngx_worker_process_cycle () #4 0x000000000041dc58 in ngx_spawn_process () #5 0x000000000041eadd in ngx_start_worker_processes () #6 0x000000000041f931 in ngx_master_process_cycle () #7 0x0000000000404e23 in main () [root at rs1 sbin]# pstack 30806 #0 0x0000003eb0ee8ee3 in __epoll_wait_nocancel () from /lib64/libc.so.6 #1 0x0000000000420562 in ngx_epoll_process_events () #2 0x000000000041975b in ngx_process_events_and_timers () #3 0x000000000041f410 in ngx_worker_process_cycle () #4 0x000000000041dc58 in ngx_spawn_process () #5 0x000000000041eadd in ngx_start_worker_processes () #6 0x000000000041f931 in ngx_master_process_cycle () #7 0x0000000000404e23 in main () [root at rs1 sbin]# ps aux | grep nginx root 30802 0.0 0.0 17904 844 ? Ss 08:15 0:00 nginx: master process ./nginx root 30803 0.0 0.0 20524 4200 ? S 08:15 0:00 nginx: worker process root 30804 0.0 0.0 18876 2544 ? S 08:15 0:00 nginx: worker process root 30805 0.0 0.0 19320 2972 ? S 08:15 0:00 nginx: worker process root 30806 0.0 0.0 18716 1884 ? S 08:15 0:00 nginx: worker process root 30957 0.0 0.0 103244 860 pts/1 S+ 08:28 0:00 grep nginx any ideas for fix it? thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237393,237393#msg-237393 From mdounin at mdounin.ru Fri Mar 15 10:02:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 15 Mar 2013 14:02:26 +0400 Subject: Geo Module. How to make nginx to check proxy for any ip? In-Reply-To: <62b761348593ef2d394a49f82087a203.NginxMailingListEnglish@forum.nginx.org> References: <62b761348593ef2d394a49f82087a203.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130315100226.GT15378@mdounin.ru> Hello! On Fri, Mar 15, 2013 at 04:24:13AM -0400, varan wrote: > I mean I haven't got full list of ip addresses that can be proxy, so I want > nginx to look X-FORWARDED-FOR header for any ip address, and if the header > exists to determine geo using the header. If there's no X-Forwarded-For > header, then use ip address for geo. > is it acceptable to use "proxy 0.0.0.0/0" ? > > for example: > > geo $geo { > default ZZ; > proxy 0.0.0.0/0; > > 127.0.0.1/32 RU; > 192.168.0.1/32 UK; > ... > } This will work, but it means that anybody will be able to easily fool your geo detection. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Fri Mar 15 10:09:10 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 15 Mar 2013 14:09:10 +0400 Subject: nginx: worker process: malloc(): memory corruption: 0x0000000000b6bdb0 *** In-Reply-To: References: Message-ID: <20130315100909.GU15378@mdounin.ru> Hello! On Fri, Mar 15, 2013 at 04:39:03AM -0400, honwel wrote: > hi > on centos 6, nginx-1.2.2, nginx was compiled with: > --prefix=/usr/local/nginx --user=root --group=root --with-http_ssl_module > --with-ipv6 --with-pcre=/home/nginx/src/pcre-8.20 > --with-openssl=/home/nginx/src/openssl-1.0.1c > --with-zlib=/home/nginx/src/zlib-1.2.7 > --add-module=/home/nginx/svn/nginx-1.2.2/src/add-on/nginx_subs_filter > --add-module=/home/nginx/svn/nginx-1.2.2/src/add-on/nginx_gunzip > --add-module=/home/nginx/svn/nginx-1.2.2/src/add-on/nginx_lvs_live > --add-module=/home/nginx/svn/nginx-1.2.2/src/add-on/nginx_sniper > > some minutes later when launch nginx, i get error(no coredump file, and > workprcess hungs)? Have you tried reproducing the problem without 3rd party modules compiled in? You may also want to upgrade nginx to at least nginx 1.2.7, 1.2.2 is rather old. 
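For example, a configure line with the four third-party --add-module options stripped out might look like this (all other options are taken unchanged from the report above):

./configure --prefix=/usr/local/nginx --user=root --group=root \
    --with-http_ssl_module --with-ipv6 \
    --with-pcre=/home/nginx/src/pcre-8.20 \
    --with-openssl=/home/nginx/src/openssl-1.0.1c \
    --with-zlib=/home/nginx/src/zlib-1.2.7
    # same options as in the report, minus the four --add-module lines

If the memory corruption no longer appears with such a build, the problem is most likely in one of the add-on modules rather than in nginx itself.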
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Fri Mar 15 10:35:27 2013 From: nginx-forum at nginx.us (ilmerovingio) Date: Fri, 15 Mar 2013 06:35:27 -0400 Subject: read_timeout directive Message-ID: <469be208462766c524dfd2c181d29243.NginxMailingListEnglish@forum.nginx.org> Hi, I've a few servers with nginx 1.2.3 that run a streaming services platform. Is there in nginx any "read_timeout" directive (like the send_timeout but for the opposite) that shuts down the connection if the client has not send any data for x time? I'm looking for something also in beta stage, modules, plugin, patch... Many thanks for your help Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237396,237396#msg-237396 From nginx-forum at nginx.us Fri Mar 15 11:37:32 2013 From: nginx-forum at nginx.us (honwel) Date: Fri, 15 Mar 2013 07:37:32 -0400 Subject: nginx: worker process: malloc(): memory corruption: 0x0000000000b6bdb0 *** In-Reply-To: <20130315100909.GU15378@mdounin.ru> References: <20130315100909.GU15378@mdounin.ru> Message-ID: <01be781ab0d8d06bb3591cea04c0ac63.NginxMailingListEnglish@forum.nginx.org> thanks, i will try as you mention and report it on forum. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237393,237397#msg-237397 From mdounin at mdounin.ru Fri Mar 15 11:58:48 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 15 Mar 2013 15:58:48 +0400 Subject: read_timeout directive In-Reply-To: <469be208462766c524dfd2c181d29243.NginxMailingListEnglish@forum.nginx.org> References: <469be208462766c524dfd2c181d29243.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130315115848.GW15378@mdounin.ru> Hello! On Fri, Mar 15, 2013 at 06:35:27AM -0400, ilmerovingio wrote: > Hi, I've a few servers with nginx 1.2.3 that run a streaming services > platform. > > Is there in nginx any "read_timeout" directive (like the send_timeout but > for the opposite) that shuts down the connection if the client has not send > any data for x time? > > I'm looking for something also in beta stage, modules, plugin, patch... There are two timeouts related to reading data from a client, client_header_timeout and client_body_timeout. See here for details: http://nginx.org/r/client_header_timeout http://nginx.org/r/client_body_timeout -- Maxim Dounin http://nginx.org/en/donation.html From WBrown at e1b.org Fri Mar 15 12:11:41 2013 From: WBrown at e1b.org (WBrown at e1b.org) Date: Fri, 15 Mar 2013 08:11:41 -0400 Subject: The service is not available. Please try again later. 
In-Reply-To: <594d826ab7f6873198e72c60e08952b5.NginxMailingListEnglish@forum.nginx.org> References: <640c02ea53588b77fc84ba622d5b49ec.NginxMailingListEnglish@forum.nginx.org> <685bb6b27bba8f391731519a81604e9c.NginxMailingListEnglish@forum.nginx.org> <594d826ab7f6873198e72c60e08952b5.NginxMailingListEnglish@forum.nginx.org> Message-ID: > From: "Reddirt" > I got past that error and now the nginx error log has this: > > 2013/03/14 14:22:10 [error] 1537#0: *1 connect() failed (111: Connection > refused) while connecting to upstream, client: 192.168.20.3, server: , > request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: > "ndeavor.ameipro.com" > 2013/03/14 14:22:10 [error] 1537#0: *1 connect() failed (111: Connection > refused) while connecting to upstream, client: 192.168.20.3, server: , > request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3001/", host: > "ndeavor.ameipro.com" > 2013/03/14 14:22:10 [error] 1537#0: *1 connect() failed (111: Connection > refused) while connecting to upstream, client: 192.168.20.3, server: , > request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3002/", host: > "ndeavor.ameipro.com" > 2013/03/14 14:22:10 [error] 1537#0: *1 connect() failed (111: Connection > refused) while connecting to upstream, client: 192.168.20.3, server: , > request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3003/", host: > "ndeavor.ameipro.com" > 2013/03/14 14:22:10 [error] 1537#0: *1 connect() failed (111: Connection > refused) while connecting to upstream, client: 192.168.20.3, server: , > request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3004/", host: > "ndeavor.ameipro.com" > 2013/03/14 14:22:27 [error] 1537#0: *1 connect() failed (111: Connection > refused) while connecting to upstream, client: 192.168.20.3, server: , > request: "GET /clients HTTP/1.1", upstream: " http://127.0.0.1:3004/clients", > host: "ndeavor.ameipro.com" > 2013/03/14 14:22:27 [error] 1537#0: *1 connect() failed (111: Connection > refused) while connecting to upstream, client: 192.168.20.3, server: , > request: "GET /clients HTTP/1.1", upstream: " http://127.0.0.1:3000/clients", > host: "ndeavor.ameipro.com" > 2013/03/14 14:22:27 [error] 1537#0: *1 connect() failed (111: Connection > refused) while connecting to upstream, client: 192.168.20.3, server: , > request: "GET /clients HTTP/1.1", upstream: " http://127.0.0.1:3001/clients", > host: "ndeavor.ameipro.com" > 2013/03/14 14:22:27 [error] 1537#0: *1 connect() failed (111: Connection > refused) while connecting to upstream, client: 192.168.20.3, server: , > request: "GET /clients HTTP/1.1", upstream: " http://127.0.0.1:3002/clients", > host: "ndeavor.ameipro.com" > 2013/03/14 14:22:27 [error] 1537#0: *1 connect() failed (111: Connection > refused) while connecting to upstream, client: 192.168.20.3, server: , > request: "GET /clients HTTP/1.1", upstream: " http://127.0.0.1:3003/clients", > host: "ndeavor.ameipro.com" > 2013/03/14 14:22:28 [error] 1537#0: *12 no live upstreams while connecting > to upstream, client: 192.168.20.3, server: , request: "GET / HTTP/1.1", > upstream: "http://backend/", host: "ndeavor.ameipro.com" > 2013/03/14 14:23:09 [info] 1537#0: *1 client 192.168.20.3 closed keepalive > connection > 2013/03/14 14:23:09 [info] 1537#0: *15 client 192.168.20.3 closed keepalive > connection > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237379, > 237382#msg-237382 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Is the 
application listening on those ports? What does netstat show? Can you telnet to the app server on those ports? Could the app not be listening on 127.0.0.1 and only on the non-loopback address? Confidentiality Notice: This electronic message and any attachments may contain confidential or privileged information, and is intended only for the individual or entity identified above as the addressee. If you are not the addressee (or the employee or agent responsible to deliver it to the addressee), or if this message has been addressed to you in error, you are hereby notified that you may not copy, forward, disclose or use any part of this message or any attachments. Please notify the sender immediately by return e-mail or telephone and delete this message from your system. From nginx-forum at nginx.us Fri Mar 15 17:18:04 2013 From: nginx-forum at nginx.us (jaychris) Date: Fri, 15 Mar 2013 13:18:04 -0400 Subject: nginx_upload_progress In-Reply-To: References: Message-ID: <033b1e911088dd8dc4d87b50732ab05f.NginxMailingListEnglish@forum.nginx.org> Never mind, I guess there are two configure statements in the spec file. I added the module to both and it's working fine. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237384,237409#msg-237409 From nginx-forum at nginx.us Fri Mar 15 17:30:50 2013 From: nginx-forum at nginx.us (jaychris) Date: Fri, 15 Mar 2013 13:30:50 -0400 Subject: Nginx 1.2.7 / h.264 / flowplayer Message-ID: <053052cf6d2b52d74af60285f4584153.NginxMailingListEnglish@forum.nginx.org> I'm running Nginx v1.2.7 from the Nginx repo, with mp4 module enabled. I added this to my conf: location ~ \.mov { mp4; mp4_buffer_size 1m; mp4_max_buffer_size 5m; } location / { try_files $uri $uri/ @rewrite; } location @rewrite { # Some modules enforce no slash (/) at the end of the URL # Else this rewrite block wouldn't be needed (GlobalRedirect) #if ($uri !~* /gcal/(.*)$) { rewrite ^/(.*)$ /index.php?q=$1 last; #} } location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_intercept_errors on; fastcgi_pass unix:/var/run/phpfpm.sock; fastcgi_read_timeout 300; fastcgi_send_timeout 300; } I'm pseudo-sreaming using flowplayer 3.2.5. When I attempt to view a video, I get: 200 stream not found Netstream.play..... In the error logs, I get this: 10.10.100.149 - - [15/Mar/2013:17:21:58 +0000] "GET /flowplayer/flowplayer.commercial-3.2.5.swf?0.8039274555630982 HTTP/1.1" 200 326175 "http://mywebsite.com/courses/video-course" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.172 Safari/537.22" 2013/03/15 17:21:58 [error] 9872#0: *1210 open() "/var/www/p.php/201303-video/video-course.mov" failed (20: Not a directory), client: 10.10.100.149, server: www.mywebsite.com, request: "GET /p.php/201303-video/video-course.mov?Expires=1365960115&Key-Pair-Id=APKAJHTHBGFXZDNBWLIQ&Signature=bMteSYSc1Cd2ILJtNfh6h~DwaOJhoOpxvHdBSaMctKzKlJrIkRMs4XltP1qKsZmGW0ZwYPL4229VIKa-fSB6koLpxRMFwJ~QJuzA2CmFO1ZCQ8NSHcxV2tAG4tLUOwLif112MJtNYrUtGhEh4ESmQPjAEpUq8JbM45F7uRPdmWg_&From=/courses/video-course HTTP/1.1", host: "www.mywebsite.com", referrer: "http://www.mywebsite.com/courses/video-course" It works via Apache, so my guess is that my Nginx config is not quite right. Any pointers would be appreciated. This is a Drupal hosted website. 
I realize there is a "Not a directory" error, which I am guessing may be an indication that the nginx config is not correct, because that URL is invalid. Probably because I'm not processing the locations correctly for this functionality. But, I'm not sure exactly what needs to be changed at this point for it to be correct. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237411,237411#msg-237411 From luky-37 at hotmail.com Fri Mar 15 18:05:59 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 15 Mar 2013 19:05:59 +0100 Subject: Nginx 1.2.7 / h.264 / flowplayer In-Reply-To: <053052cf6d2b52d74af60285f4584153.NginxMailingListEnglish@forum.nginx.org> References: <053052cf6d2b52d74af60285f4584153.NginxMailingListEnglish@forum.nginx.org> Message-ID: .mov is not .mp4, they are completely different containers. ---------------------------------------- > To: nginx at nginx.org > Subject: Nginx 1.2.7 / h.264 / flowplayer > From: nginx-forum at nginx.us > Date: Fri, 15 Mar 2013 13:30:50 -0400 > > I'm running Nginx v1.2.7 from the Nginx repo, with mp4 module enabled. > > I added this to my conf: > > location ~ \.mov { > mp4; > mp4_buffer_size 1m; > mp4_max_buffer_size 5m; > } > > location / { > try_files $uri $uri/ @rewrite; > } > > location @rewrite { > # Some modules enforce no slash (/) at the end of the URL > # Else this rewrite block wouldn't be needed > (GlobalRedirect) > #if ($uri !~* /gcal/(.*)$) { > rewrite ^/(.*)$ /index.php?q=$1 last; > #} > } > > location ~ \.php$ { > fastcgi_split_path_info ^(.+\.php)(/.+)$; > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > fastcgi_intercept_errors on; > fastcgi_pass unix:/var/run/phpfpm.sock; > fastcgi_read_timeout 300; > fastcgi_send_timeout 300; > > } > > I'm pseudo-sreaming using flowplayer 3.2.5. When I attempt to view a video, > I get: > > 200 stream not found Netstream.play..... > > In the error logs, I get this: > > 10.10.100.149 - - [15/Mar/2013:17:21:58 +0000] "GET > /flowplayer/flowplayer.commercial-3.2.5.swf?0.8039274555630982 HTTP/1.1" 200 > 326175 "http://mywebsite.com/courses/video-course" "Mozilla/5.0 (Macintosh; > Intel Mac OS X 10_8_2) AppleWebKit/537.22 (KHTML, like Gecko) > Chrome/25.0.1364.172 Safari/537.22" > > 2013/03/15 17:21:58 [error] 9872#0: *1210 open() > "/var/www/p.php/201303-video/video-course.mov" failed (20: Not a directory), > client: 10.10.100.149, server: www.mywebsite.com, request: "GET > /p.php/201303-video/video-course.mov?Expires=1365960115&Key-Pair-Id=APKAJHTHBGFXZDNBWLIQ&Signature=bMteSYSc1Cd2ILJtNfh6h~DwaOJhoOpxvHdBSaMctKzKlJrIkRMs4XltP1qKsZmGW0ZwYPL4229VIKa-fSB6koLpxRMFwJ~QJuzA2CmFO1ZCQ8NSHcxV2tAG4tLUOwLif112MJtNYrUtGhEh4ESmQPjAEpUq8JbM45F7uRPdmWg_&From=/courses/video-course > HTTP/1.1", host: "www.mywebsite.com", referrer: > "http://www.mywebsite.com/courses/video-course" > > It works via Apache, so my guess is that my Nginx config is not quite right. > Any pointers would be appreciated. This is a Drupal hosted website. > > I realize there is a "Not a directory" error, which I am guessing may be an > indication that the nginx config is not correct, because that URL is > invalid. Probably because I'm not processing the locations correctly for > this functionality. But, I'm not sure exactly what needs to be changed at > this point for it to be correct. 
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237411,237411#msg-237411 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Fri Mar 15 19:30:58 2013 From: nginx-forum at nginx.us (jaychris) Date: Fri, 15 Mar 2013 15:30:58 -0400 Subject: Nginx 1.2.7 / h.264 / flowplayer In-Reply-To: <053052cf6d2b52d74af60285f4584153.NginxMailingListEnglish@forum.nginx.org> References: <053052cf6d2b52d74af60285f4584153.NginxMailingListEnglish@forum.nginx.org> Message-ID: <16e7c79ba696fb6bde8092e3e65eff86.NginxMailingListEnglish@forum.nginx.org> Once again, never mind. I had a bad location filter. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237411,237415#msg-237415 From nginx-forum at nginx.us Fri Mar 15 19:32:52 2013 From: nginx-forum at nginx.us (jaychris) Date: Fri, 15 Mar 2013 15:32:52 -0400 Subject: Nginx 1.2.7 / h.264 / flowplayer In-Reply-To: References: Message-ID: <57679ab5d932f37c67c315007a709eed.NginxMailingListEnglish@forum.nginx.org> Well, true, but .mov is still h.264 encoded, so I believe it would still use the same streaming mechanism. Maybe I'm wrong about that. At any rate, it appears to be working now that I fixed my nginx location filters. Lukas Tribus Wrote: ------------------------------------------------------- > ..mov is not .mp4, they are completely different containers. > > > > ---------------------------------------- > > To: nginx at nginx.org > > Subject: Nginx 1.2.7 / h.264 / flowplayer > > From: nginx-forum at nginx.us > > Date: Fri, 15 Mar 2013 13:30:50 -0400 > > > > I'm running Nginx v1.2.7 from the Nginx repo, with mp4 module > enabled. > > > > I added this to my conf: > > > > location ~ \.mov { > > mp4; > > mp4_buffer_size 1m; > > mp4_max_buffer_size 5m; > > } > > > > location / { > > try_files $uri $uri/ @rewrite; > > } > > > > location @rewrite { > > # Some modules enforce no slash (/) at the end of the URL > > # Else this rewrite block wouldn't be needed > > (GlobalRedirect) > > #if ($uri !~* /gcal/(.*)$) { > > rewrite ^/(.*)$ /index.php?q=$1 last; > > #} > > } > > > > location ~ \.php$ { > > fastcgi_split_path_info ^(.+\.php)(/.+)$; > > include fastcgi_params; > > fastcgi_param SCRIPT_FILENAME > > $document_root$fastcgi_script_name; > > fastcgi_intercept_errors on; > > fastcgi_pass unix:/var/run/phpfpm.sock; > > fastcgi_read_timeout 300; > > fastcgi_send_timeout 300; > > > > } > > > > I'm pseudo-sreaming using flowplayer 3.2.5. When I attempt to view a > video, > > I get: > > > > 200 stream not found Netstream.play..... 
> > > > In the error logs, I get this: > > > > 10.10.100.149 - - [15/Mar/2013:17:21:58 +0000] "GET > > /flowplayer/flowplayer.commercial-3.2.5.swf?0.8039274555630982 > HTTP/1.1" 200 > > 326175 "http://mywebsite.com/courses/video-course" "Mozilla/5.0 > (Macintosh; > > Intel Mac OS X 10_8_2) AppleWebKit/537.22 (KHTML, like Gecko) > > Chrome/25.0.1364.172 Safari/537.22" > > > > 2013/03/15 17:21:58 [error] 9872#0: *1210 open() > > "/var/www/p.php/201303-video/video-course.mov" failed (20: Not a > directory), > > client: 10.10.100.149, server: www.mywebsite.com, request: "GET > > > /p.php/201303-video/video-course.mov?Expires=1365960115&Key-Pair-Id=AP > KAJHTHBGFXZDNBWLIQ&Signature=bMteSYSc1Cd2ILJtNfh6h~DwaOJhoOpxvHdBSaMct > KzKlJrIkRMs4XltP1qKsZmGW0ZwYPL4229VIKa-fSB6koLpxRMFwJ~QJuzA2CmFO1ZCQ8N > SHcxV2tAG4tLUOwLif112MJtNYrUtGhEh4ESmQPjAEpUq8JbM45F7uRPdmWg_&From=/co > urses/video-course > > HTTP/1.1", host: "www.mywebsite.com", referrer: > > "http://www.mywebsite.com/courses/video-course" > > > > It works via Apache, so my guess is that my Nginx config is not > quite right. > > Any pointers would be appreciated. This is a Drupal hosted website. > > > > I realize there is a "Not a directory" error, which I am guessing > may be an > > indication that the nginx config is not correct, because that URL is > > invalid. Probably because I'm not processing the locations correctly > for > > this functionality. But, I'm not sure exactly what needs to be > changed at > > this point for it to be correct. > > > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,237411,237411#msg-237411 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237411,237416#msg-237416 From nginx-forum at nginx.us Fri Mar 15 19:48:56 2013 From: nginx-forum at nginx.us (beaufour) Date: Fri, 15 Mar 2013 15:48:56 -0400 Subject: upstream keepalive with upstream hash Message-ID: I've been trying to get this setup working: client <- c0 -> nginx1 <- c1 -> nginx2 <- c2 -> service (http) where the c1 connection is kept alive between request from from the outside, but c0 and c2 are closed after each request. I've used the 'keepalive' keyword in the upstream nginx1 config, and it works. Unfortunately I also use the upstream hash patch on nginx1, and as soon as I enable that nginx1 closes the connection forcefully. I've confirmed this with tcpdump in both setups, and it's the only difference. Any hints to what I can do? I'm suspecting that the upstream hash module "takes over" the upstream handling, and thus ignores the 'keepalive' keyword, but I'm randomly guessing. 
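For reference, a minimal sketch of the nginx1 side of this kind of setup (the upstream name, backend address and hash key below are illustrative placeholders, not the exact values in use):

upstream backend_keepalive {
    # directive from the 3rd-party upstream hash module; the key is a placeholder
    hash $request_uri;
    # nginx2 address (placeholder)
    server 192.0.2.10:80;
    # pool of idle c1 connections kept open to nginx2
    keepalive 16;
}

server {
    listen 80;
    location / {
        # needed for keepalive connections to the upstream
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://backend_keepalive;
    }
}

Without the hash line this is the configuration in which the c1 connections stay open; adding the hash line back is when they start being closed after every request.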
Thanks, Allan Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237417,237417#msg-237417 From nginx-forum at nginx.us Fri Mar 15 20:10:19 2013 From: nginx-forum at nginx.us (beaufour) Date: Fri, 15 Mar 2013 16:10:19 -0400 Subject: upstream keepalive with upstream hash In-Reply-To: References: Message-ID: <722c172c39886d014621ad825f526afe.NginxMailingListEnglish@forum.nginx.org> I also just tried the ip_hash module and it also disables the keepalive functionality :( Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237417,237420#msg-237420 From robm at fastmail.fm Fri Mar 15 22:32:27 2013 From: robm at fastmail.fm (Robert Mueller) Date: Sat, 16 Mar 2013 09:32:27 +1100 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: <20130315081044.GQ15378@mdounin.ru> References: <1363321351.3854.140661204587653.70CC51E2@webmail.messagingengine.com> <20130315081044.GQ15378@mdounin.ru> Message-ID: <1363386747.13074.140661204958001.6D3BCC81@webmail.messagingengine.com> > In case of https, in many (most) cases there are pending data - > due to various SSL packets send during connection close. This > means connection close detection with https doesn't work unless > you use kqueue. > > Further reading: > > http://mailman.nginx.org/pipermail/nginx/2011-June/027672.html > http://mailman.nginx.org/pipermail/nginx/2011-November/030630.html These reports appear to relate to SSL upstream connections (both refer to ngx_http_upstream_check_broken_connection). I'm talking about an SSL client connection, with a plain http upstream connection. When an https client drops it's connection, the upstream http proxy connection is not dropped. If nginx can't detect an https client disconnect properly, that must mean it's leaking connection information internally doesn't it? Rob From djczaski at gmail.com Fri Mar 15 23:24:45 2013 From: djczaski at gmail.com (djczaski) Date: Fri, 15 Mar 2013 19:24:45 -0400 Subject: websocket backend Message-ID: What are the best options for websocket backends? I'm working with an embedded platform so I'm somewhat restricted to something lightweight in C/C++. The things I found so far are: libwebsockets: http://git.warmcat.com/cgi-bin/cgit/libwebsockets/ poco: http://www.appinf.com/docs/poco/Poco.Net.WebSocket.html Another option would be to handle it right in Openresty/ngx_lua. I see there was some discussion about this a little while back: https://github.com/chaoslawful/lua-nginx-module/issues/165 What are the best options? From nik.molnar at consbio.org Fri Mar 15 23:46:19 2013 From: nik.molnar at consbio.org (Nikolas Stevenson-Molnar) Date: Fri, 15 Mar 2013 16:46:19 -0700 Subject: websocket backend In-Reply-To: References: Message-ID: <5143B2CB.4080204@consbio.org> I haven't tried it yet, but nginx-push-stream-module looks good: https://github.com/wandenberg/nginx-push-stream-module _Nik On 3/15/2013 4:24 PM, djczaski wrote: > What are the best options for websocket backends? I'm working with an > embedded platform so I'm somewhat restricted to something lightweight > in C/C++. The things I found so far are: > > libwebsockets: http://git.warmcat.com/cgi-bin/cgit/libwebsockets/ > poco: http://www.appinf.com/docs/poco/Poco.Net.WebSocket.html > > Another option would be to handle it right in Openresty/ngx_lua. I > see there was some discussion about this a little while back: > > https://github.com/chaoslawful/lua-nginx-module/issues/165 > > What are the best options? 
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sat Mar 16 03:19:58 2013 From: nginx-forum at nginx.us (zestsh) Date: Fri, 15 Mar 2013 23:19:58 -0400 Subject: Would like to implement WebSocket support In-Reply-To: References: Message-ID: <91949683476a466e8d0a71c8c506e75e.NginxMailingListEnglish@forum.nginx.org> Finally, Nginx 1.3.13 supports websocket officially. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221884,237425#msg-237425 From jay at kodewerx.org Sat Mar 16 08:37:22 2013 From: jay at kodewerx.org (Jay Oster) Date: Sat, 16 Mar 2013 01:37:22 -0700 Subject: Strange $upstream_response_time latency spikes with reverse proxy In-Reply-To: <20130315082059.GR15378@mdounin.ru> References: <20130315082059.GR15378@mdounin.ru> Message-ID: Hi Maxim, Thanks for the suggestion! It looks like packet drop is the culprit here. The initial SYN packet doesn't receive a corresponding SYN-ACK from the upstream servers, so after a 1-second timeout (TCP Retransmission TimeOut), the packet is retransmitted. The question is still *why* this only occurs through nginx. To further narrow down the root cause, I moved my upstream server to the same machine with nginx. The issue can still be replicated there. To eliminate my upstream server as the cause (it's written in C with libevent, by the way) I used the nodejs hello world demo; nodejs has trouble with the 5,000 concurrent connections (go figure) but the connections that are successful (without nginx reverse proxying) all complete in less than one second. When I place nginx between ApacheBench and nodejs, that 1-second TCP RTO shows up again. To reiterate, this is all happening on a single machine; the TCP stack is involved, but not a physical network. The only common denominator is nginx. On Fri, Mar 15, 2013 at 1:20 AM, Maxim Dounin wrote: > Hello! > > On Thu, Mar 14, 2013 at 07:07:20PM -0700, Jay Oster wrote: > > [...] > > > The access log has 10,000 lines total (i.e. two of these tests with 5,000 > > concurrent connections), and when I sort by upstream_response_time, I > get a > > log with the first 140 lines having about 1s on the > upstream_response_time, > > and the remaining 9,860 lines show 700ms and less. Here's a snippet > showing > > the strangeness, starting with line numbers: > > > > > > 1: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 1.027 1.026 234 83 > > 2: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 1.027 1.026 234 83 > > 3: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 1.026 1.025 234 83 > > ... 
> > 138: 127.0.0.1 - - [14/Mar/2013:17:57:18 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 1.000 0.999 234 81 > > 139: 127.0.0.1 - - [14/Mar/2013:17:57:18 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 0.999 0.999 234 81 > > 140: 127.0.0.1 - - [14/Mar/2013:17:57:18 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 0.999 0.999 234 81 > > 141: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 0.708 0.568 234 83 > > 142: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 0.708 0.568 234 83 > > 143: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 0.708 0.568 234 83 > > ... > > 9998: 127.0.0.1 - - [14/Mar/2013:17:57:16 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 0.142 0.005 234 81 > > 9999: 127.0.0.1 - - [14/Mar/2013:17:57:16 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 0.142 0.005 234 81 > > 10000: 127.0.0.1 - - [14/Mar/2013:17:57:16 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 0.122 0.002 234 81 > > > > > > > > The upstream_response_time difference between line 140 and 141 is nearly > > 500ms! The total request_time also displays an interesting gap of almost > > 300ms. What's going on here? > > I would suggests there are packet loss and retransmits for some > reason. Try tcpdump'ing traffic between nginx and backends to see > what goes on in details. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From praveen.yarlagadda at gmail.com Sat Mar 16 09:34:32 2013 From: praveen.yarlagadda at gmail.com (Praveen Yarlagadda) Date: Sat, 16 Mar 2013 02:34:32 -0700 Subject: some sort of attack? Message-ID: Hi, I installed nginx on an EC2 instance. After few hours, I started getting repeated requests from a set of servers. I tried using limit_req with the following options: limit_req_zone $binary_remote_addr zone=ratezone:10m rate=3r/s; limit_req zone=ratezone burst=5 nodelay; But I found that it is not effective. If you take a look at the following access_log content, you would notice that the IP addresses are different. I don't see more than 3 requests in a sec. Another weird thing is GET requests are starting with *"http://". *I never saw it before. Is there any way I can filter requests or possibly throw 503? Any help is really appreciated. 
108.62.157.221 - - [16/Mar/2013:06:48:32 +0000] "GET http://ad.tagjunction.com/st?ad_type=iframe&ad_size=728x90§ion=3127172&pub_url=${PUB_URL}HTTP/1.0" 404 570 " http://www.oslims.com/green-coffee/pure-coffee/why-should-you-buy-a-professional-coffee-maker.html" "Mozilla/4.0 (compatible; MSIE 6.01; Windows 95; Alexa Toolbar)" "-" 108.62.192.236 - - [16/Mar/2013:06:48:32 +0000] "GET http://ads1.ministerial5.com/creative/2-002134604-00001i;size=1 HTTP/1.0" 404 570 " http://femalefashionroad.com/index.php?option=com_mailto&tmpl=component&link=aHR0cDovL2ZlbWFsZWZhc2hpb25yb2FkLmNvbS9pbmRleC5waHA/b3B0aW9uPWNvbV9jb250ZW50JnZpZXc9YXJ0aWNsZSZpZD0xOTYyNzoyMDExLTEyLTE1LTIyLTA5LTE3JmNhdGlkPTQxOndvbWVuLWZhc2hpb24mSXRlbWlkPTk3" "Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)" "-" 173.208.16.212 - - [16/Mar/2013:06:48:32 +0000] "GET http://ib.adnxs.com/ttj?id=1184170 HTTP/1.0" 404 570 " http://ffwoman.com/index.php?option=com_content&view=article&id=1358:face-cream-nearly-killed-a-woman&catid=54:health-tips&Itemid=100" "Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.20 Safari/535.1" "-" 173.234.116.220 - - [16/Mar/2013:06:48:32 +0000] "GET http://ad.globe7.com/st?ad_type=pop&ad_size=0x0§ion=2978145&banned_pop_types=29&pop_times=1&pop_frequency=0&pop_nofreqcap=1HTTP/1.0" 404 570 " http://www.economysea.com/index.php?option=com_content&view=article&id=7067:2011-09-28-20-11-07&catid=48:economy-today&Itemid=98" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.11 (KHTML, like Gecko) Ubuntu/11.04 Chromium/17.0.963.65 Chrome/17.0.963.65 Safari/535.11" "-" 72.52.75.73 - - [16/Mar/2013:06:48:32 +0000] "GET http://ib.adnxs.com/tt?id=1121510&cb=${CACHEBUSTER}&pubclick=${CLICK_URL}HTTP/1.0" 404 570 " http://www.tvzhou.com/?tag=lisa&paged=2" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/18.6.872.0 Safari/535.2 UNTRUSTED/1.0 3gpp-gba UNTRUSTED/1.0" "-" 23.19.67.56 - - [16/Mar/2013:06:48:32 +0000] "GET http://ad.tagjunction.com/st?ad_type=iframe&ad_size=120x600§ion=3680802&pub_url=${PUB_URL}HTTP/1.0" 404 168 " http://economicface.com/index.php?option=com_mailto&tmpl=component&link=e3ca08bc42ab0d0829e79ecb01f98523fba42f8b" "Mozilla/5.0 (Windows; U; WinNT3.51; en-US; rv:1.8.1.7) Gecko/20070914 Firefox/2.0.0.7" "-" 173.234.145.205 - - [16/Mar/2013:06:48:32 +0000] "GET http://ad.globe7.com/st?ad_type=iframe&ad_size=728x90§ion=4097260&pub_url=${www.classidressing.com}HTTP/1.0" 404 570 " http://classidressing.com/index.php?view=article&catid=43:womens-clothing&id=7161:2012-01-19-23-59-09&format=pdf" "Mozilla/4.0 (compatible; MSIE 5.01; Windows 95; MSIECrawler)" "-" 142.4.126.137 - - [16/Mar/2013:06:48:32 +0000] "GET http://ads.clovenetwork.com/ttj?id=801591&pubclick=[INSERT_CLICK_TAG]HTTP/1.0" 404 570 " http://www.today-car.com/?cat=601" "Mozilla/4.0 (compatible; MSIE 6.0; Update a; Win32)" "-" 23.19.130.109 - - [16/Mar/2013:06:48:32 +0000] "GET http://ads1.ministerial5.com/creative/2-002134516-00001i;size=2 HTTP/1.0" 500 594 " http://likecatpink.com/index.php?option=com_content&view=article&id=10082:2012-01-07-14-12-06&catid=43:fashion-jewellery&Itemid=99" "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 4.0; Alexa Toolbar)" "-" 108.62.17.245 - - [16/Mar/2013:06:48:32 +0000] "GET http://ib.adnxs.com/ttj?id=1200348&cb=${CACHEBUSTER}&pubclick=${CLICK_URL}HTTP/1.0" 404 168 " http://styleear.com/index.php?option=com_mailto&tmpl=component&link=5d2f4abeb642b19272252d653174f14589e07a8b" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7) 
Gecko/20040626 Firefox/0.8" "-" -Praveen -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Sat Mar 16 10:06:17 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sat, 16 Mar 2013 10:06:17 +0000 Subject: some sort of attack? In-Reply-To: References: Message-ID: On 16 March 2013 09:34, Praveen Yarlagadda wrote: > Hi, > > I installed nginx on an EC2 instance. After few hours, I started getting > repeated requests from a set of servers. I tried using limit_req with the > following options: > > limit_req_zone $binary_remote_addr zone=ratezone:10m rate=3r/s; > limit_req zone=ratezone burst=5 nodelay; > > But I found that it is not effective. If you take a look at the following > access_log content, you would notice that the IP addresses are different. I > don't see more than 3 requests in a sec. Another weird thing is GET requests > are starting with "http://". I never saw it before. Is there any way I can > filter requests or possibly throw 503? How about location http:// { access_log off; return 444; } Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From francis at daoine.org Sat Mar 16 10:38:42 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 16 Mar 2013 10:38:42 +0000 Subject: some sort of attack? In-Reply-To: References: Message-ID: <20130316103842.GA18002@craic.sysops.org> On Sat, Mar 16, 2013 at 02:34:32AM -0700, Praveen Yarlagadda wrote: Hi there, > I installed nginx on an EC2 instance. > Another weird thing is GET > requests are starting with *"http://". *I never saw it before. Is there any > way I can filter requests or possibly throw 503? These might be innocent requests from browsers configured to use your IP address as a proxy server. (Maybe there was a proxy server on a previous instance that used your current address?) I suggest making your current server{} blocks list all of the server_name:s that you want to handle, and then let the default server{} block handle these other requests, with "return 503" or any other configuration you like. See http://nginx.org/r/listen and http://nginx.org/r/server_name for how to configure server names and the default server for a given address:port. f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Mar 16 11:14:58 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 16 Mar 2013 11:14:58 +0000 Subject: nginx crash on reload -- how to detect? In-Reply-To: <513F5A3F.9070305@googlemail.com> References: <513F5A3F.9070305@googlemail.com> Message-ID: <20130316111458.GB18002@craic.sysops.org> On Tue, Mar 12, 2013 at 05:39:27PM +0100, Jan-Philip Gehrcke wrote: Hi there, > I'm currently running a self-built nginx 1.3.14 on a debian system and > use the attached (and also inlined) init.d script as /etc/init.d/nginx > for managing the service. It's taken unmodified from debian wheezy. This looks like a generic init script which has been slightly adapted for nginx. It introduces some extra levels between what you do (run "service") and what you want to do (control nginx). These levels are frequently useful, but in this case they do seem to hide the information you want. > $ service nginx reload > > The problem is that in this case it just states "Reloading nginx > configuration: nginx." without me realizing that the master process > crashed and that the new config did not become activated. 
> I am wondering if there is a neat way to improve the service script in > order to make it realize when the nginx master unexpectedly dies in the > process of performing one of the service actions. Right now, you run "service" which runs this shell script which (eventually) runs start-stop-daemon which runs nginx. nginx can communicate using its return value, its stderr output, and its log file. The sequence you run seems to hide the return value and the stderr output from you. It's not clear to me exactly what happened in your failing case; but if you can recreate it, what do you see when you run "nginx", "nginx -t", "nginx -s reload", and "nginx -s stop"? Check the return value, stderr output, and error.log after each one. Is there enough information in that output to help you understand the nature of the problem? (I believe that a return value of non-0 means that something went wrong, and the stderr output and/or error.log will indicate what that something was. The output may also have information when the return value is 0.) When you're happy that you understand the nginx output, then you can try changing the init script to let that output get back to you somehow. That might involve learning where the current script puts that information; or changing the start-stop-daemon invocation line to make it available; or replacing the entire script with something much simpler and with fewer features. Be aware that removing features will probably make this daemon not fit in with the startup/shutdown of all others on your system. Only you can decide how much that matters to you. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sat Mar 16 11:26:05 2013 From: nginx-forum at nginx.us (fdmitry) Date: Sat, 16 Mar 2013 07:26:05 -0400 Subject: New subfilter module with multi-pattern search support Message-ID: <58f0ec8dfe3e86050ac366f7a35ec2fa.NginxMailingListEnglish@forum.nginx.org> Hi all! There is a new subfilter module. It uses fast Wu-Manber algorithm for multi-pattern search. Probably it would be interesting to someone: https://github.com/dursegov/nginx-subfilter-module It is possible to add regular expressions support if it is required, but as a separate engine (i.e. use libpcre instead of builtin Wu-Manber). The combination of them is a good challenge. The code may still have bugs, since I don't still have a time to write good tests. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237437,237437#msg-237437 From babynewton73 at gmail.com Sat Mar 16 13:40:38 2013 From: babynewton73 at gmail.com (Newton Kim) Date: Sat, 16 Mar 2013 22:40:38 +0900 Subject: websocket backend In-Reply-To: <5143B2CB.4080204@consbio.org> References: <5143B2CB.4080204@consbio.org> Message-ID: I agree with him too. On 16 Mar 2013 08:46, "Nikolas Stevenson-Molnar" wrote: > I haven't tried it yet, but nginx-push-stream-module looks good: > https://github.com/wandenberg/nginx-push-stream-module > > _Nik > > On 3/15/2013 4:24 PM, djczaski wrote: > > What are the best options for websocket backends? I'm working with an > > embedded platform so I'm somewhat restricted to something lightweight > > in C/C++. The things I found so far are: > > > > libwebsockets: http://git.warmcat.com/cgi-bin/cgit/libwebsockets/ > > poco: http://www.appinf.com/docs/poco/Poco.Net.WebSocket.html > > > > Another option would be to handle it right in Openresty/ngx_lua. 
I > > see there was some discussion about this a little while back: > > > > https://github.com/chaoslawful/lua-nginx-module/issues/165 > > > > What are the best options? > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew at nginx.com Sat Mar 16 15:05:00 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Sat, 16 Mar 2013 19:05:00 +0400 Subject: Strange $upstream_response_time latency spikes with reverse proxy In-Reply-To: References: <20130315082059.GR15378@mdounin.ru> Message-ID: Jay, On Mar 16, 2013, at 12:37 PM, Jay Oster wrote: > Hi Maxim, > > Thanks for the suggestion! It looks like packet drop is the culprit here. The initial SYN packet doesn't receive a corresponding SYN-ACK from the upstream servers, so after a 1-second timeout (TCP Retransmission TimeOut), the packet is retransmitted. The question is still *why* this only occurs through nginx. > > To further narrow down the root cause, I moved my upstream server to the same machine with nginx. The issue can still be replicated there. To eliminate my upstream server as the cause (it's written in C with libevent, by the way) I used the nodejs hello world demo; nodejs has trouble with the 5,000 concurrent connections (go figure) but the connections that are successful (without nginx reverse proxying) all complete in less than one second. When I place nginx between ApacheBench and nodejs, that 1-second TCP RTO shows up again. You mean you keep seeing SYN-ACK loss through loopback? That might sound funny, but what's the OS and the overall environment of that strangely behaving machine with nginx? Is it a virtualized one? Is the other machine any different? The more details you can provide, the better :) > To reiterate, this is all happening on a single machine; the TCP stack is involved, but not a physical network. The only common denominator is nginx. Can you try the same tests on the other machine, where you originally didn't have any problems with your application? That is, can you repeat nginx+app on the other machine and see if the above strange behavior persists? > On Fri, Mar 15, 2013 at 1:20 AM, Maxim Dounin wrote: > Hello! > > On Thu, Mar 14, 2013 at 07:07:20PM -0700, Jay Oster wrote: > > [...] > > > The access log has 10,000 lines total (i.e. two of these tests with 5,000 > > concurrent connections), and when I sort by upstream_response_time, I get a > > log with the first 140 lines having about 1s on the upstream_response_time, > > and the remaining 9,860 lines show 700ms and less. Here's a snippet showing > > the strangeness, starting with line numbers: > > > > > > 1: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 1.027 1.026 234 83 > > 2: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 1.027 1.026 234 83 > > 3: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 1.026 1.025 234 83 > > ... 
> > 138: 127.0.0.1 - - [14/Mar/2013:17:57:18 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 1.000 0.999 234 81 > > 139: 127.0.0.1 - - [14/Mar/2013:17:57:18 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 0.999 0.999 234 81 > > 140: 127.0.0.1 - - [14/Mar/2013:17:57:18 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 0.999 0.999 234 81 > > 141: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 0.708 0.568 234 83 > > 142: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 0.708 0.568 234 83 > > 143: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 0.708 0.568 234 83 > > ... > > 9998: 127.0.0.1 - - [14/Mar/2013:17:57:16 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 0.142 0.005 234 81 > > 9999: 127.0.0.1 - - [14/Mar/2013:17:57:16 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 0.142 0.005 234 81 > > 10000: 127.0.0.1 - - [14/Mar/2013:17:57:16 -0700] "GET /time/0 HTTP/1.0" > > 200 19 "-" "ApacheBench/2.3" 0.122 0.002 234 81 > > > > > > > > The upstream_response_time difference between line 140 and 141 is nearly > > 500ms! The total request_time also displays an interesting gap of almost > > 300ms. What's going on here? > > I would suggests there are packet loss and retransmits for some > reason. Try tcpdump'ing traffic between nginx and backends to see > what goes on in details. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander_koch_log at lavabit.com Sat Mar 16 15:15:20 2013 From: alexander_koch_log at lavabit.com (alexander_koch_log) Date: Sat, 16 Mar 2013 16:15:20 +0100 Subject: Module to check if file contains a character Message-ID: <51448C88.3050605@lavabit.com> Hello, Without reading the whole file, what is the best approach to only check if the beginning of the file contains a specific character? Would something like this be the right approach? b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t)); b->file = ngx_pcalloc(r->pool, sizeof(ngx_file_t)); then set file_pos, file_last etc. and later do ngx_cpymem(s, b->pos, size) to copy it in a u_char and then do a if s[0] == ? Thanks, Alex From nginx-forum at nginx.us Sat Mar 16 17:52:20 2013 From: nginx-forum at nginx.us (Reddirt) Date: Sat, 16 Mar 2013 13:52:20 -0400 Subject: The service is not available. Please try again later. In-Reply-To: References: Message-ID: <3ee503bda6bccce611d4c99b769088c6.NginxMailingListEnglish@forum.nginx.org> OK - I now have Nginx running the 5 Thin app servers! Thanks for the help! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237379,237441#msg-237441 From praveen.yarlagadda at gmail.com Sat Mar 16 19:37:52 2013 From: praveen.yarlagadda at gmail.com (Praveen Yarlagadda) Date: Sat, 16 Mar 2013 12:37:52 -0700 Subject: some sort of attack? In-Reply-To: <20130316103842.GA18002@craic.sysops.org> References: <20130316103842.GA18002@craic.sysops.org> Message-ID: Thanks a lot, Jonathan and Francis! It works great. I am able to significantly reduce the load. 
Here is my final configuration: limit_req_zone $binary_remote_addr zone=ratezone:10m rate=3r/s; server { listen 80; server_name www.example.com; location / { limit_req zone=ratezone burst=5 nodelay; proxy_pass http://appservers; } } server { listen 80; server_name ~.*; location / { access_log off; return 503; } } -Praveen On Sat, Mar 16, 2013 at 3:38 AM, Francis Daly wrote: > On Sat, Mar 16, 2013 at 02:34:32AM -0700, Praveen Yarlagadda wrote: > Hi there, > > I installed nginx on an EC2 instance. > > Another weird thing is GET > > requests are starting with "http://". I never saw it before. Is there any > > way I can filter requests or possibly throw 503? > > These might be innocent requests from browsers configured to use your IP > address as a proxy server. (Maybe there was a proxy server on a previous > instance that used your current address?) > > I suggest making your current server{} blocks list all of the > server_name:s that you want to handle, and then let the default > server{} block handle these other requests, with "return 503" or any > other configuration you like. > > See http://nginx.org/r/listen and http://nginx.org/r/server_name for how > to configure server names and the default server for a given address:port. > > f > > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ephemeric at gmail.com Sat Mar 16 20:38:07 2013 From: ephemeric at gmail.com (Robert Gabriel) Date: Sat, 16 Mar 2013 22:38:07 +0200 Subject: Reverse Proxy Data Diode Message-ID: Hello all, The below article https://www.synergistscada.com/building-your-own-data-diode-with-open-source-solutions/ uses a fiber optic data diode along with Nginx as a reverse proxy. The author states: "TCP/IP client-server reverse proxies on either end of the data diode can be setup to respond to the hand shaking requests automatically without the need to actually send any data back to the insecure network. The client-server proxies solution should work in most cases however, through testing should be completed in a lab environment before deploying a data diode solution into an ICS." And "Step 5 - Configure your Reverse Proxy Depending on the data you want to replicate you can either configure an open source reverse proxy like nginx (engine x) and use your database's web services to replicate the data. Step 6 - Disconnect one of the fiber optic ST connectors Once you have your two proxy servers configured and communicating to each other you can simply disconnect one of the two fiber ST connectors. You will likely need to spend time properly configuring your reverse proxy servers to relay the information correctly and you will need to write some scripts in your database to perform the continuous data replication." He however does not provide any working configuration. We would love to implement this and I greatly appreciate any help. If someone can at least just point me in the right direction I would be eternally grateful. Thank you. From mdounin at mdounin.ru Sat Mar 16 23:13:49 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 17 Mar 2013 03:13:49 +0400 Subject: upstream keepalive with upstream hash In-Reply-To: References: Message-ID: <20130316231349.GK15378@mdounin.ru> Hello!
On Fri, Mar 15, 2013 at 03:48:56PM -0400, beaufour wrote: > I've been trying to get this setup working: > client <- c0 -> nginx1 <- c1 -> nginx2 <- c2 -> service (http) > > where the c1 connection is kept alive between request from from the outside, > but c0 and c2 are closed after each request. I've used the 'keepalive' > keyword in the upstream nginx1 config, and it works. Unfortunately I also > use the upstream hash patch on nginx1, and as soon as I enable that nginx1 > closes the connection forcefully. I've confirmed this with tcpdump in both > setups, and it's the only difference. Any hints to what I can do? > > I'm suspecting that the upstream hash module "takes over" the upstream > handling, and thus ignores the 'keepalive' keyword, but I'm randomly > guessing. Quoting http://nginx.org/r/keepalive: : When using load balancer methods other than the default : round-robin, it is necessary to activate them before the keepalive : directive. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Sat Mar 16 23:39:17 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 17 Mar 2013 03:39:17 +0400 Subject: Strange $upstream_response_time latency spikes with reverse proxy In-Reply-To: References: <20130315082059.GR15378@mdounin.ru> Message-ID: <20130316233917.GL15378@mdounin.ru> Hello! On Sat, Mar 16, 2013 at 01:37:22AM -0700, Jay Oster wrote: > Hi Maxim, > > Thanks for the suggestion! It looks like packet drop is the culprit here. > The initial SYN packet doesn't receive a corresponding SYN-ACK from the > upstream servers, so after a 1-second timeout (TCP Retransmission TimeOut), > the packet is retransmitted. The question is still *why* this only occurs > through nginx. Have you tried looking on tcpdump on both backend and nginx host? This might help to further narrow down the problem. I could see two possible causes here: 1) A trivial one. Listen queue of your backend service is exhausted, and the SYN packet is dropped due to this. This can be easily fixed by using bigger listen queue, and also easy enough to track as there are listen queue overflow counters available in most OSes. 2) Some other queue in the network stack is exhausted. This might be nontrivial to track (but usually possible too). > To further narrow down the root cause, I moved my upstream server to the > same machine with nginx. The issue can still be replicated there. To > eliminate my upstream server as the cause (it's written in C with libevent, > by the way) I used the nodejs hello world demo; nodejs has trouble with the > 5,000 concurrent connections (go figure) but the connections that are > successful (without nginx reverse proxying) all complete in less than one > second. When I place nginx between ApacheBench and nodejs, that 1-second > TCP RTO shows up again. > > To reiterate, this is all happening on a single machine; the TCP stack is > involved, but not a physical network. The only common denominator is nginx. Use of nginx may result in another distribution of connection attempts to a backend, resulting in bigger SYN packet bursts (especially if you use settings like multi_accept). 
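For what it's worth, the knob involved in (1) is easiest to show when the backend happens to be nginx itself - purely an untested illustration here, since the backend in this thread is a C/libevent service whose own listen() backlog would have to be raised in its code:

    server {
        # backlog= asks the kernel for a larger accept queue on this listening socket;
        # the effective value is also capped by the net.core.somaxconn sysctl
        listen 8080 backlog=4096;

        location / {
            return 200 "ok\n";
        }
    }

If that queue keeps overflowing, the listen queue overflow counters mentioned above will keep growing.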
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Sat Mar 16 23:49:01 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 17 Mar 2013 03:49:01 +0400 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: <1363386747.13074.140661204958001.6D3BCC81@webmail.messagingengine.com> References: <1363321351.3854.140661204587653.70CC51E2@webmail.messagingengine.com> <20130315081044.GQ15378@mdounin.ru> <1363386747.13074.140661204958001.6D3BCC81@webmail.messagingengine.com> Message-ID: <20130316234901.GM15378@mdounin.ru> Hello! On Sat, Mar 16, 2013 at 09:32:27AM +1100, Robert Mueller wrote: > > > In case of https, in many (most) cases there are pending data - > > due to various SSL packets send during connection close. This > > means connection close detection with https doesn't work unless > > you use kqueue. > > > > Further reading: > > > > http://mailman.nginx.org/pipermail/nginx/2011-June/027672.html > > http://mailman.nginx.org/pipermail/nginx/2011-November/030630.html > > These reports appear to relate to SSL upstream connections (both refer > to ngx_http_upstream_check_broken_connection). I'm talking about an SSL > client connection, with a plain http upstream connection. Both are about client connections. The ngx_http_upstream_check_broken_connection() function is here to check if client is disconnected or not. > When an https client drops it's connection, the upstream http proxy > connection is not dropped. If nginx can't detect an https client > disconnect properly, that must mean it's leaking connection information > internally doesn't it? No. It just can't say if a connection was closed or not as there are pending data in the connection, and it can't read data (there may be a pipelined request). Therefore in this case, being on the safe side, it assumes the connection isn't closed and doesn't try to abort upstream request. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Sun Mar 17 08:17:09 2013 From: nginx-forum at nginx.us (Camayoc) Date: Sun, 17 Mar 2013 04:17:09 -0400 Subject: Reverse Proxy Data Diode In-Reply-To: References: Message-ID: I urge caution using this approach to a data diode. The question you ask is a very important one: where can I find a working configuration? Do not get me wrong, it is possible to make such approaches work, I have seen them in my companies test lab. The question you have to consider is reliability and trust. How reliable does the solution need to be? My experiece has been making something work in a test lab is relatively easy. However, making something work in a deployed environment, thus sustainable 24/7/365 is much harder. Intermittent data losses will happen over time? How does your application manage these? How do you implement re-synchronisation (can't be triggered automatically, as there is no feedback loop). Sorry, I am not answering your question directly, rather rasiing issues you need to consider before building something yourself. These issues are explored further iat the links below. 
Link: http://colinrobbins.me/2013/02/07/diy-data-diode-for-1612/ (reliability question) Link: http://colinrobbins.me/2013/03/12/can-you-trust-your-1612-diode/ (trust question) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237446,237451#msg-237451 From jay at kodewerx.org Sun Mar 17 09:17:52 2013 From: jay at kodewerx.org (Jason Oster) Date: Sun, 17 Mar 2013 02:17:52 -0700 Subject: Strange $upstream_response_time latency spikes with reverse proxy In-Reply-To: References: <20130315082059.GR15378@mdounin.ru> Message-ID: <051BFC6E-6CAE-4844-978E-415E0939B36A@kodewerx.org> Hello Andrew, On Mar 16, 2013, at 8:05 AM, Andrew Alexeev wrote: > Jay, > > On Mar 16, 2013, at 12:37 PM, Jay Oster wrote: > >> Hi Maxim, >> >> Thanks for the suggestion! It looks like packet drop is the culprit here. The initial SYN packet doesn't receive a corresponding SYN-ACK from the upstream servers, so after a 1-second timeout (TCP Retransmission TimeOut), the packet is retransmitted. The question is still *why* this only occurs through nginx. >> >> To further narrow down the root cause, I moved my upstream server to the same machine with nginx. The issue can still be replicated there. To eliminate my upstream server as the cause (it's written in C with libevent, by the way) I used the nodejs hello world demo; nodejs has trouble with the 5,000 concurrent connections (go figure) but the connections that are successful (without nginx reverse proxying) all complete in less than one second. When I place nginx between ApacheBench and nodejs, that 1-second TCP RTO shows up again. > > You mean you keep seeing SYN-ACK loss through loopback? That appears to be the case, yes. I've captured packets with tcpdump, and load them into Wireshark for easier visualization. I can see a very clear gap where no packets are transmitting for over 500ms, then a burst of ~10 SYN packets. When I look at the TCP stream flow on these SYN bursts, it shows an initial SYN packet almost exactly 1 second earlier without a corresponding SYN-ACK. I'm taking the 1-second delay to be the RTO. I can provide some pieces of the tcpdump capture log on Monday, to help illustrate. > That might sound funny, but what's the OS and the overall environment of that strangely behaving machine with nginx? Is it a virtualized one? Is the other machine any different? The more details you can provide, the better :) It's a 64-bit Ubuntu 12.04 VM, running on an AWS m3.xlarge. Both VMs are configured the same. >> To reiterate, this is all happening on a single machine; the TCP stack is involved, but not a physical network. The only common denominator is nginx. > > Can you try the same tests on the other machine, where you originally didn't have any problems with your application? That is, can you repeat nginx+app on the other machine and see if the above strange behavior persists? Same configuration. I'm investigating this issue because it is common across literally dozens of servers we have running in AWS. It occurs in all regions, and on all instance types. This "single server" test is the first time the software has been run with nginx load balancing to upstream processes on the same machine. >> On Fri, Mar 15, 2013 at 1:20 AM, Maxim Dounin wrote: >> Hello! >> >> On Thu, Mar 14, 2013 at 07:07:20PM -0700, Jay Oster wrote: >> >> [...] >> >> > The access log has 10,000 lines total (i.e. 
two of these tests with 5,000 >> > concurrent connections), and when I sort by upstream_response_time, I get a >> > log with the first 140 lines having about 1s on the upstream_response_time, >> > and the remaining 9,860 lines show 700ms and less. Here's a snippet showing >> > the strangeness, starting with line numbers: >> > >> > >> > 1: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" >> > 200 19 "-" "ApacheBench/2.3" 1.027 1.026 234 83 >> > 2: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" >> > 200 19 "-" "ApacheBench/2.3" 1.027 1.026 234 83 >> > 3: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" >> > 200 19 "-" "ApacheBench/2.3" 1.026 1.025 234 83 >> > ... >> > 138: 127.0.0.1 - - [14/Mar/2013:17:57:18 -0700] "GET /time/0 HTTP/1.0" >> > 200 19 "-" "ApacheBench/2.3" 1.000 0.999 234 81 >> > 139: 127.0.0.1 - - [14/Mar/2013:17:57:18 -0700] "GET /time/0 HTTP/1.0" >> > 200 19 "-" "ApacheBench/2.3" 0.999 0.999 234 81 >> > 140: 127.0.0.1 - - [14/Mar/2013:17:57:18 -0700] "GET /time/0 HTTP/1.0" >> > 200 19 "-" "ApacheBench/2.3" 0.999 0.999 234 81 >> > 141: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" >> > 200 19 "-" "ApacheBench/2.3" 0.708 0.568 234 83 >> > 142: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" >> > 200 19 "-" "ApacheBench/2.3" 0.708 0.568 234 83 >> > 143: 127.0.0.1 - - [14/Mar/2013:17:37:21 -0700] "GET /time/0 HTTP/1.0" >> > 200 19 "-" "ApacheBench/2.3" 0.708 0.568 234 83 >> > ... >> > 9998: 127.0.0.1 - - [14/Mar/2013:17:57:16 -0700] "GET /time/0 HTTP/1.0" >> > 200 19 "-" "ApacheBench/2.3" 0.142 0.005 234 81 >> > 9999: 127.0.0.1 - - [14/Mar/2013:17:57:16 -0700] "GET /time/0 HTTP/1.0" >> > 200 19 "-" "ApacheBench/2.3" 0.142 0.005 234 81 >> > 10000: 127.0.0.1 - - [14/Mar/2013:17:57:16 -0700] "GET /time/0 HTTP/1.0" >> > 200 19 "-" "ApacheBench/2.3" 0.122 0.002 234 81 >> > >> > >> > >> > The upstream_response_time difference between line 140 and 141 is nearly >> > 500ms! The total request_time also displays an interesting gap of almost >> > 300ms. What's going on here? >> >> I would suggests there are packet loss and retransmits for some >> reason. Try tcpdump'ing traffic between nginx and backends to see >> what goes on in details. >> >> -- >> Maxim Dounin >> http://nginx.org/en/donation.html >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at kodewerx.org Sun Mar 17 09:23:20 2013 From: jay at kodewerx.org (Jason Oster) Date: Sun, 17 Mar 2013 02:23:20 -0700 Subject: Strange $upstream_response_time latency spikes with reverse proxy In-Reply-To: <20130316233917.GL15378@mdounin.ru> References: <20130315082059.GR15378@mdounin.ru> <20130316233917.GL15378@mdounin.ru> Message-ID: Hi again, Maxim! On Mar 16, 2013, at 4:39 PM, Maxim Dounin wrote: > Hello! > > On Sat, Mar 16, 2013 at 01:37:22AM -0700, Jay Oster wrote: > >> Hi Maxim, >> >> Thanks for the suggestion! It looks like packet drop is the culprit here. 
>> The initial SYN packet doesn't receive a corresponding SYN-ACK from the >> upstream servers, so after a 1-second timeout (TCP Retransmission TimeOut), >> the packet is retransmitted. The question is still *why* this only occurs >> through nginx. > > Have you tried looking on tcpdump on both backend and nginx host? > This might help to further narrow down the problem. I haven't yet, but I will restart my investigation there on Monday. Capturing packets on both sides during the same time frame may reveal something I haven't seen yet. > I could see two possible causes here: > > 1) A trivial one. Listen queue of your backend service is > exhausted, and the SYN packet is dropped due to this. This can be > easily fixed by using bigger listen queue, and also easy enough to > track as there are listen queue overflow counters available in > most OSes. Overflow queue is configured to 1024 on these hosts, though nothing changes when I increase it. I can however make the delay much longer by making the queue smaller. > 2) Some other queue in the network stack is exhausted. This might > be nontrivial to track (but usually possible too). This is interesting, and could very well be it! Do you have any suggestions on where to start looking? > >> To further narrow down the root cause, I moved my upstream server to the >> same machine with nginx. The issue can still be replicated there. To >> eliminate my upstream server as the cause (it's written in C with libevent, >> by the way) I used the nodejs hello world demo; nodejs has trouble with the >> 5,000 concurrent connections (go figure) but the connections that are >> successful (without nginx reverse proxying) all complete in less than one >> second. When I place nginx between ApacheBench and nodejs, that 1-second >> TCP RTO shows up again. >> >> To reiterate, this is all happening on a single machine; the TCP stack is >> involved, but not a physical network. The only common denominator is nginx. > > Use of nginx may result in another distribution of connection > attempts to a backend, resulting in bigger SYN packet bursts > (especially if you use settings like multi_accept). Got it. I don't think multi_accept is being used (it's not in the nginx config). Thank you. 
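For readers unfamiliar with the setting: the directive in question sits in nginx's events block, so it is easy to check whether it is set. A sketch with made-up numbers, shown only to illustrate where it would live if it were enabled:

    events {
        worker_connections  10240;
        multi_accept        on;    # off by default; when on, a worker accepts all pending connections per wakeup
    }

With it left off, each worker accepts one connection per event-loop wakeup, which tends to spread out the resulting bursts of connections to the backend.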
> -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sun Mar 17 09:47:24 2013 From: nginx-forum at nginx.us (gadh) Date: Sun, 17 Mar 2013 05:47:24 -0400 Subject: nginx + my module crashes only when ignore client abort = on In-Reply-To: <20130314170431.GO15378@mdounin.ru> References: <20130314170431.GO15378@mdounin.ru> Message-ID: <5fde0da10e79730679f007f13afd07bb.NginxMailingListEnglish@forum.nginx.org> Ok, i'll attach my calling to subrequest code, its working flawlessly except the case i reported here: //------------------------------------------------------------------ /* Note: the purspose of this code is to call a handler module (at rewrite phase), send special POST subrequest to another server (independant of the main request), wait with the module untill subrequest finishes, process its data , then continue to backend or next handler module */ // the subrequest will call this handler after it finishes ngx_int_t ngx_aaa_post_subrequest_handler (ngx_http_request_t *r, void *data, ngx_int_t rc) { ngx_aaa_ctx_t *ctx = (ngx_aaa_ctx_t*)data; ngx_chain_t *bufs; ngx_uint_t status; if (rc != NGX_OK) { NGX_aaa_LOG_ERROR("bad response (nginx code %d)",rc); ctx->post.error = 1; aaa_SUB_PROF_END ngx_http_core_run_phases(r->main); // continue main request return NGX_OK; //cannot return rc if != NGX_OK - see below } if (r->upstream) // when sending to another server, then subrequest is passed on upstream module { bufs = r->upstream->out_bufs; status = r->upstream->state->status; } else // runs on the same nginx, another port { NGX_aaa_LOG_ERROR("response could not get by 'upstream' method. aborting"); ctx->post.error = 1; aaa_SUB_PROF_END ngx_http_core_run_phases(r->main); return NGX_OK; } if (status != NGX_HTTP_OK) // == 200 OK { NGX_aaa_LOG_ERROR("bad response status (%d)",status); ctx->post.error = 1; aaa_SUB_PROF_END ngx_http_core_run_phases(r->main); return NGX_OK; // when returning error in subrequest, the nginx loops over it untill ok, or after 2 loops its stucks the main req. 
} ctx->post.done = 1; ctx->post.response_data = ngx_aaa_utils_get_data_from_bufs(r, bufs); ctx->post.response_handler(r, data); // passing ctx->post->response_data to ngx_aaa_response_handler() - data parsing if (!ctx->post.response_data) ngx_http_core_run_phases(r->main); if (!ctx->standalone) ngx_http_core_run_phases(r->main); // release main request from its wait and send it to the backend server return NGX_OK; } // main code of calling to subrequest ngx_int_t ngx_aaa_send_post_subrequest(ngx_http_request_t *r, ngx_aaa_ctx_t *ctx, char *_uri, ngx_str_t *data, ngx_uint_t is_ret) { ngx_http_request_t *sr; ngx_uint_t flags = 0; ngx_http_post_subrequest_t *psr; ngx_str_t uri; ngx_int_t res; ngx_buf_t *buf; flags = NGX_HTTP_SUBREQUEST_IN_MEMORY; uri.data = (u_char*)_uri; uri.len = strlen(_uri); psr = ngx_palloc(r->pool, sizeof(ngx_http_post_subrequest_t)); if (!psr) return NGX_HTTP_INTERNAL_SERVER_ERROR; ctx->done = 0; ctx->post.done = 0; ctx->post.start = 1; if (is_ret) // return answer to caller, async { psr->handler = ngx_aaa_post_subrequest_handler; // register callback function for returning ans from the other end psr->data = ctx; } else psr = NULL; // this func only registers the subreq in queue, but not activates it yet // note: sr->request_body is nulled during this func, alloc later res = ngx_http_subrequest(r, &uri, NULL , &sr, psr, flags); if (res) return NGX_HTTP_INTERNAL_SERVER_ERROR; ngx_memzero(&sr->headers_in, sizeof(sr->headers_in)); buf = ngx_pcalloc(r->pool, sizeof(ngx_buf_t)); if (!buf) return NGX_ERROR; // args is an ngx_str_t with the body sr->method = NGX_HTTP_POST; ngx_memcpy(&(sr->method_name), &ngx_aaa_post_method_name, sizeof(ngx_str_t)); buf->temporary = 1; buf->pos = data->data; buf->last = buf->pos + data->len; // do not inherit rb from parent sr->request_body = ngx_palloc(r->pool, sizeof(ngx_http_request_body_t)); NGX_aaa_CHECK_ALLOC_AND_RETURN(sr->request_body) // note: always alloc bufs even if ptr is lid - since its garbage from former request ! (caused seg fault in mod_proxy !) sr->request_body->bufs = ngx_alloc_chain_link(r->pool); NGX_aaa_CHECK_ALLOC_AND_RETURN(sr->request_body->bufs) // post body - re-populate , do not inherit from parent sr->request_body->bufs->buf = buf; sr->request_body->bufs->next = NULL; sr->request_body->buf = buf; sr->header_in = NULL; buf->last_in_chain = 1; buf->last_buf = 1; sr->request_body_in_single_buf = 1; sr->headers_in.content_length_n = ngx_buf_size(buf); ngx_str_t c_len_key = ngx_string("Content-Length"); ngx_str_t c_len_l; char len_str[20]; sprintf(len_str, "%lu", ngx_buf_size(buf)); c_len_l.data = (u_char*)len_str; c_len_l.len = strlen(len_str); ngx_aaa_set_input_header(sr, &sr->headers_in.content_length, &c_len_key, &c_len_l); ngx_str_t key, l; ngx_str_set(&key,"Content-Type"); ngx_str_set(&l, "application/x-www-form-urlencoded"); ngx_aaa_set_input_header(sr, &sr->headers_in.content_type, &key, &l); return NGX_OK; } // handler module main function - calls the subrequest, waits for it to finish ngx_int_t ngx_aaa_handler(ngx_http_request_t *r) { // pseudo code: alloc module ctx - only once if (ctx->post.start) { // check if post subrequest has ended - then call next module handler if (ctx->post.done) { return NGX_DECLINED; // declined - if hdl is reg. 
in rewrite phase } else // wait for post subrequest to finish unless error { if (ctx->post.error) { return NGX_DECLINED; // subrequest finished - call next handler module } else { return NGX_AGAIN; // wait untill finish response to our subrequest } } } // prepare subrequest // ngx_str - post body for the subrequest ctx->post.response_handler = ngx_aaa_response_handler; // for subrequest response data parsing rc = ngx_aaa_send_post_subrequest(r, ctx, url, ngx_str, 1); if (rc != NGX_OK) { NGX_aaa_LOG_ERROR("ngx_aaa_send_post_subrequest failed (error %d)",rc); return NGX_DECLINED; } /* NGX_DECLINED == pass to next handler, do not wait. * NGX_OK == wait for subrequest to finish first (non blocking, of course) */ return NGX_OK; } //------------------------------------------------------------------ i'de appreciate your help, BTW, is there any "nginx subrequest coding guide" documentation available ? its very confusing and lacks much info on the web, i got it working only thru alot of trial-and-error. Tnx Gad Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237362,237454#msg-237454 From ephemeric at gmail.com Sun Mar 17 11:32:43 2013 From: ephemeric at gmail.com (Robert Gabriel) Date: Sun, 17 Mar 2013 13:32:43 +0200 Subject: Reverse Proxy Data Diode In-Reply-To: References: Message-ID: On 17 March 2013 10:17, Camayoc wrote: > I urge caution using this approach to a data diode. > The question you ask is a very important one: where can I find a working > configuration? > Do not get me wrong, it is possible to make such approaches work, I have > seen them in my companies test lab. > The question you have to consider is reliability and trust. > How reliable does the solution need to be? My experiece has been making > something work in a test lab is relatively easy. However, making something > work in a deployed environment, thus sustainable 24/7/365 is much harder. > Intermittent data losses will happen over time? How does your application > manage these? How do you implement re-synchronisation (can't be triggered > automatically, as there is no feedback loop). Thank you for your response. I have read both links before and understand the implications. I just wanted to see this work and simply cannot believe how expensive commercial solutions are. From mdounin at mdounin.ru Sun Mar 17 11:42:24 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 17 Mar 2013 15:42:24 +0400 Subject: Strange $upstream_response_time latency spikes with reverse proxy In-Reply-To: References: <20130315082059.GR15378@mdounin.ru> <20130316233917.GL15378@mdounin.ru> Message-ID: <20130317114224.GR15378@mdounin.ru> Hello! On Sun, Mar 17, 2013 at 02:23:20AM -0700, Jason Oster wrote: [...] > > 1) A trivial one. Listen queue of your backend service is > > exhausted, and the SYN packet is dropped due to this. This > > can be easily fixed by using bigger listen queue, and also > > easy enough to track as there are listen queue overflow > > counters available in most OSes. > > Overflow queue is configured to 1024 on these hosts, though > nothing changes when I increase it. I can however make the delay > much longer by making the queue smaller. On "these hosts"? Note that listen queue aka backlog size is configured in _applications_ which call listen(). At a host level you may only configure somaxconn, which is maximum allowed listen queue size (but an application may still use anything lower, even just 1). Make sure to check actual listen queue sizes used on listen sockets involved. On Linux (you are using Linux, right?) 
this should be possible with "ss -nlt" (or "netstat -nlt"). > > 2) Some other queue in the network stack is exhausted. This > > might be nontrivial to track (but usually possible too). > > This is interesting, and could very well be it! Do you have any > suggestions on where to start looking? I'm not a Linux expert, but quick search suggests it should be possible with dropwatch, see e.g. here: http://prefetch.net/blog/index.php/2011/07/11/using-netstat-and-dropwatch-to-observe-packet-loss-on-linux-servers/ -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Sun Mar 17 12:46:21 2013 From: nginx-forum at nginx.us (Camayoc) Date: Sun, 17 Mar 2013 08:46:21 -0400 Subject: Reverse Proxy Data Diode In-Reply-To: References: Message-ID: <92fefb58773b84a17f73e61bb760a1eb.NginxMailingListEnglish@forum.nginx.org> I'd argue the commercial solutions are value for money, given the complexities. By I accept I am biased :-) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237446,237457#msg-237457 From nginx-forum at nginx.us Sun Mar 17 19:00:23 2013 From: nginx-forum at nginx.us (gadh) Date: Sun, 17 Mar 2013 15:00:23 -0400 Subject: nginx + my module crashes only when ignore client abort = on In-Reply-To: <5fde0da10e79730679f007f13afd07bb.NginxMailingListEnglish@forum.nginx.org> References: <20130314170431.GO15378@mdounin.ru> <5fde0da10e79730679f007f13afd07bb.NginxMailingListEnglish@forum.nginx.org> Message-ID: more info: when i use "ignore client abort = on" , the crash happens when the client aborts the connection, BEFORE my subrequest handler is called, so its unlikely this code causes the crash. also, i send the subrequest to a configured url named "aaa_post/" which uses the proxy module to send it to other server. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237362,237463#msg-237463 From amuniz at klicap.es Sun Mar 17 19:32:51 2013 From: amuniz at klicap.es (=?ISO-8859-1?Q?Antonio_Manuel_Mu=F1iz_Mart=EDn?=) Date: Sun, 17 Mar 2013 20:32:51 +0100 Subject: NGINX built and analyzed Message-ID: Hi guys, We've taken the freedom of configure a built of NGINX in a daily basis. So, every day (if there are changes in the source code) a new build will run. After each build a Sonar analysis will be performed. This is the build link: http://live.clinkerhq.com/jenkins/job/nginx-build And this is the analysis results link: http://live.clinkerhq.com/sonar/dashboard/index/3656 So, NGINX has a eye over his code now :-) Cheers, Antonio. -- Antonio Manuel Mu?iz Mart?n Software Developer at klicap - ingenier?a del puzle work phone + 34 954 894 322 www.klicap.es | blog.klicap.es From grails at jmsd.co.uk Sun Mar 17 20:08:39 2013 From: grails at jmsd.co.uk (John Moore) Date: Sun, 17 Mar 2013 20:08:39 +0000 Subject: Simple question about proxy cache Message-ID: <4todoxi4u16o.1gq09-jp1sj40s@elasticemail.com> I've used nginx as a reverse proxy server for a long while but I've not tried out the proxy cache until today, and I have to say I'm a little bit confused by what I'm seeing in the cache log, and I'm wondering whether I've set things up correctly. My requirements are actually pretty simple. I have a couple of locations which I want to proxy to another server and cache the results. Thus: location /media/house_images/{ proxy_pass http://backend; proxy_cache one; } location /media/boat_images/{ proxy_pass http://backend; proxy_cache one; } Apart from this, I don't want any cacheing of responses to be done. 
I am assuming that the default is NOT to cache unless a cache zone is specified (at the server or location level, presumably), so either omitting a proxy_cache or specifying 'proxy_cache off' should be sufficient to achieve this, should it not? Two things are puzzling me, though. Firstly, in the cache log, I'm seeing the URLs of all kinds of requests which SHOULD NOT be cached, and I'm wondering whether all requests are logged whether they're cached or not - I certainly hope this is the case and it's not actually cacheing these responses. I would definitely prefer to only see entries in the log for requests matching locations for which a cache has been specified. I presume this is possible? Secondly, the very requests which I would expect to be cached are all showing up in the log with the word 'MISS' in the $upstream_cache_status column. So, I am worried that I'm currently cacheing everything EXCEPT the pages I actually want to be cached! Could someone please clarify whether my assumptions are correct, and maybe explain how I should set up cacheing to do just what I want, in the simplest way? Thanks. John From mdounin at mdounin.ru Sun Mar 17 23:08:27 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Mar 2013 03:08:27 +0400 Subject: Simple question about proxy cache In-Reply-To: <4todoxi4u16o.1gq09-jp1sj40s@elasticemail.com> References: <4todoxi4u16o.1gq09-jp1sj40s@elasticemail.com> Message-ID: <20130317230827.GT15378@mdounin.ru> Hello! On Sun, Mar 17, 2013 at 08:08:39PM +0000, John Moore wrote: > I've used nginx as a reverse proxy server for a long while but I've not > tried out the proxy cache until today, and I have to say I'm a little > bit confused by what I'm seeing in the cache log, and I'm wondering > whether I've set things up correctly. My requirements are actually > pretty simple. I have a couple of locations which I want to proxy to > another server and cache the results. Thus: > > location /media/house_images/{ > proxy_pass http://backend; > proxy_cache one; > } > > location /media/boat_images/{ > proxy_pass http://backend; > proxy_cache one; > } > > > Apart from this, I don't want any cacheing of responses to be done. I am > assuming that the default is NOT to cache unless a cache zone is > specified (at the server or location level, presumably), so either > omitting a proxy_cache or specifying 'proxy_cache off' should be > sufficient to achieve this, should it not? Yes, without proxy_cache (or with "proxy_cache off") configured in a location cache won't be used. > Two things are puzzling me, though. Firstly, in the cache log, I'm > seeing the URLs of all kinds of requests which SHOULD NOT be cached, and > I'm wondering whether all requests are logged whether they're cached or > not - I certainly hope this is the case and it's not actually cacheing > these responses. I would definitely prefer to only see entries in the > log for requests matching locations for which a cache has been > specified. I presume this is possible? You can configure logs for a specific location, see http://nginx.org/r/access_log. > Secondly, the very requests which I would expect to be cached are all > showing up in the log with the word 'MISS' in the $upstream_cache_status > column. 
This usually happens if your backend doesn't specify allowed cache time (in this case, proxy_cache_valid should be used to set one, see http://nginx.org/r/proxy_cache_valid) or if backend responses doesn't allow cache to be used (either directly with Cache-Control/Expires headers, or indirectly with Set-Cookie header, see http://nginx.org/r/proxy_ignore_headers). -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Sun Mar 17 23:52:13 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Mar 2013 03:52:13 +0400 Subject: nginx + my module crashes only when ignore client abort = on In-Reply-To: <5fde0da10e79730679f007f13afd07bb.NginxMailingListEnglish@forum.nginx.org> References: <20130314170431.GO15378@mdounin.ru> <5fde0da10e79730679f007f13afd07bb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130317235213.GW15378@mdounin.ru> Hello! On Sun, Mar 17, 2013 at 05:47:24AM -0400, gadh wrote: Below just couple of comments. Outlined problems are enough to cause arbitrary segmentation faults, and I haven't looked for more. [...] > ngx_memzero(&sr->headers_in, sizeof(sr->headers_in)); Note: this ruins original request headers. It's enough to cause anything. [...] > // do not inherit rb from parent > sr->request_body = ngx_palloc(r->pool, sizeof(ngx_http_request_body_t)); > NGX_aaa_CHECK_ALLOC_AND_RETURN(sr->request_body) > > // note: always alloc bufs even if ptr is lid - since its garbage from > former request ! (caused seg fault in mod_proxy !) > sr->request_body->bufs = ngx_alloc_chain_link(r->pool); > NGX_aaa_CHECK_ALLOC_AND_RETURN(sr->request_body->bufs) > > // post body - re-populate , do not inherit from parent > sr->request_body->bufs->buf = buf; > sr->request_body->bufs->next = NULL; > sr->request_body->buf = buf; Note: you allocate request body structure and only initialize some of it's members. E.g. sr->request_body->temp_file remains uninitialized and will likely be dereferenced, resulting in segmentation fault. You have to at least change ngx_palloc() to ngx_pcalloc(). [...] > BTW, is there any "nginx subrequest coding guide" documentation available ? > its very confusing and lacks much info on the web, i got it working only > thru alot of trial-and-error. Subrequests are dead simple in it's supported form: you just call ngx_http_subrequest() in a body filter, and the result is added to the output at the appropriate point. Good sample is available in ngx_http_addition_filter_module.c. What you try to do with subrequests isn't really supported (the fact that it works - is actually a side effect of subrequests processing rewrite in 0.7.25), hence no guides. -- Maxim Dounin http://nginx.org/en/donation.html From yaoweibin at gmail.com Mon Mar 18 04:21:14 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Mon, 18 Mar 2013 12:21:14 +0800 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: <20130314141348.GV8912@reaktio.net> References: <0b978a9f636d364e49e79bf5e91418bb.NginxMailingListEnglish@forum.nginx.org> <88C197B9-A5A4-4201-999B-4995243FE204@co.sapo.pt> <20130228181246.GT81985@mdounin.ru> <20130314141348.GV8912@reaktio.net> Message-ID: 2013/3/14 Pasi K?rkk?inen > On Thu, Feb 28, 2013 at 10:12:47PM +0400, Maxim Dounin wrote: > > Hello! > > > > Hello, > > > On Thu, Feb 28, 2013 at 05:36:23PM +0000, Andr? Cruz wrote: > > > > > I'm also very interested in being able to configure nginx to NOT > > > proxy the entire request. 
> > > > > > Regarding this patch, > > > https://github.com/alibaba/tengine/pull/91, is anything > > > fundamentally wrong with it? I don't understand Chinese so I'm > > > at a loss here... > > > > As a non-default mode of operation the aproach taken is likely > > good enough (not looked into details), but the patch won't work > > with current nginx versions - at least it needs (likely major) > > adjustments to cope with changes introduced during work on chunked > > request body support as available in nginx 1.3.9+. > > > > Weibin: Have you thought of upstreaming the no_buffer patch to nginx 1.3.x > so it could become part of next nginx stable version 1.4 ? > You can see my first email in this thread, The nginx team are working on it. But I don't know when they finish it. You can wait for their version. Thanks. > > It'd be really nice to have the no_buffer functionality in stock nginx! > (the current no_buffer_v5.patch seems to work OK for me on nginx 1.2.7) > > -- Pasi > > > -- > > Maxim Dounin > > http://nginx.org/en/donation.html > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Mar 18 05:40:24 2013 From: nginx-forum at nginx.us (gadh) Date: Mon, 18 Mar 2013 01:40:24 -0400 Subject: nginx + my module crashes only when ignore client abort = on In-Reply-To: <20130317235213.GW15378@mdounin.ru> References: <20130317235213.GW15378@mdounin.ru> Message-ID: thanks Maxim ! i very appreciate your help on this. about the temp file - i protect from a response to be written to a file by knowing the max size that can be sent by the server and enlarging the proxy buffers accordingly. i know i ruin the original request header - its the main purpose for my code ! i want to issue an independant subrequest to another server, no to to the original. but the r->main->... is not ruined and acting ok afterwards. in any case, i ask you to support this subrequest mechasnim, its obviously needed to send a subrequest to any server, not just to the original one, and also to control its response instead of just adding it to the start/end of page, its alot more flexible. can i use another mechanism in order to achive those goals ? to create a new upstream module ? tnx Gad Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237362,237475#msg-237475 From nginx-forum at nginx.us Mon Mar 18 10:40:07 2013 From: nginx-forum at nginx.us (shajalalmia2) Date: Mon, 18 Mar 2013 06:40:07 -0400 Subject: Find Duplicate Files and Free Up Disk Space Message-ID: <7023ddeb96ee72274f24ff5a2d970e9a.NginxMailingListEnglish@forum.nginx.org> I faced same problem my computers many files is Duplicate. so my need help .please tell me your idea Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237481,237481#msg-237481 From mdounin at mdounin.ru Mon Mar 18 11:17:34 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Mar 2013 15:17:34 +0400 Subject: nginx + my module crashes only when ignore client abort = on In-Reply-To: References: <20130317235213.GW15378@mdounin.ru> Message-ID: <20130318111734.GZ15378@mdounin.ru> Hello! On Mon, Mar 18, 2013 at 01:40:24AM -0400, gadh wrote: > thanks Maxim ! i very appreciate your help on this. 
> about the temp file - i protect from a response to be written to a file by > knowing the max size that can be sent by the server and enlarging the proxy > buffers accordingly. You are not initializing subrequest's request_body->temp_file pointer (among other request_body members). It might point anywhere, and will cause problems. > i know i ruin the original request header - its the main purpose for my code > ! i want to issue an independant subrequest to another server, no to to the > original. but the r->main->... is not ruined and acting ok afterwards. Yes, indeed. Note though, that by changing headers_in structure you are responsible for it's consistency. It's usually much better idea to use upstream functionality to create needed request to an upstream instead (proxy_set_body, proxy_pass_headers and so on). > in any case, i ask you to support this subrequest mechasnim, its obviously > needed to send a subrequest to any server, not just to the original one, and > also to control its response instead of just adding it to the start/end of > page, its alot more flexible. > can i use another mechanism in order to achive those goals ? to create a new > upstream module ? What is supported is subrequest in memory functionality, which allows you to get the response in memory instead of appending it to the response. It only works with certain upstream protocols though. And it wasn't supposed to work at arbitrary request processing phases, so it might be non-trivial to do things properly, in particular - ensure subrequest consistency at early phases of request processing and to rerun the main request once subrequest is complete. -- Maxim Dounin http://nginx.org/en/donation.html From grails at jmsd.co.uk Mon Mar 18 11:21:22 2013 From: grails at jmsd.co.uk (John Moore) Date: Mon, 18 Mar 2013 11:21:22 +0000 Subject: Simple question about proxy cache Message-ID: <4todvx2xlhj4.1gq09-jphjby3r@elasticemail.com> On 17/03/13 23:08, Maxim Dounin wrote: > Hello! > > On Sun, Mar 17, 2013 at 08:08:39PM +0000, John Moore wrote: > >> I've used nginx as a reverse proxy server for a long while but I've not >> tried out the proxy cache until today, and I have to say I'm a little >> bit confused by what I'm seeing in the cache log, and I'm wondering >> whether I've set things up correctly. My requirements are actually >> pretty simple. I have a couple of locations which I want to proxy to >> another server and cache the results. Thus: >> >> location /media/house_images/{ >> proxy_pass http://backend; >> proxy_cache one; >> } >> >> location /media/boat_images/{ >> proxy_pass http://backend; >> proxy_cache one; >> } >> >> >> Apart from this, I don't want any cacheing of responses to be done. I am >> assuming that the default is NOT to cache unless a cache zone is >> specified (at the server or location level, presumably), so either >> omitting a proxy_cache or specifying 'proxy_cache off' should be >> sufficient to achieve this, should it not? > Yes, without proxy_cache (or with "proxy_cache off") configured in > a location cache won't be used. > >> Two things are puzzling me, though. Firstly, in the cache log, I'm >> seeing the URLs of all kinds of requests which SHOULD NOT be cached, and >> I'm wondering whether all requests are logged whether they're cached or >> not - I certainly hope this is the case and it's not actually cacheing >> these responses. I would definitely prefer to only see entries in the >> log for requests matching locations for which a cache has been >> specified. I presume this is possible? 
> You can configure logs for a specific location, see > http://nginx.org/r/access_log. > >> Secondly, the very requests which I would expect to be cached are all >> showing up in the log with the word 'MISS' in the $upstream_cache_status >> column. > This usually happens if your backend doesn't specify allowed cache > time (in this case, proxy_cache_valid should be used to set one, > see http://nginx.org/r/proxy_cache_valid) or if backend responses > doesn't allow cache to be used (either directly with > Cache-Control/Expires headers, or indirectly with Set-Cookie > header, see http://nginx.org/r/proxy_ignore_headers). > Excellent - thanks, Maxim! That's got me sorted now, it all seems to be working as planned. From nginx-forum at nginx.us Mon Mar 18 11:24:10 2013 From: nginx-forum at nginx.us (gadh) Date: Mon, 18 Mar 2013 07:24:10 -0400 Subject: nginx + my module crashes only when ignore client abort = on In-Reply-To: <20130318111734.GZ15378@mdounin.ru> References: <20130318111734.GZ15378@mdounin.ru> Message-ID: <30b7543d21d0905fd667bc596898e6db.NginxMailingListEnglish@forum.nginx.org> > Note though, that by changing headers_in structure you are > responsible for it's consistency. It's usually much better idea > to use upstream functionality to create needed request to an > upstream instead (proxy_set_body, proxy_pass_headers and so on). > but can i wait for the upstream to return and delay the request from passing on to backend as i do in my subrequest ? when i use your suggested proxy directives i have no control on that Gad Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237362,237485#msg-237485 From nginx-forum at nginx.us Mon Mar 18 13:45:37 2013 From: nginx-forum at nginx.us (gadh) Date: Mon, 18 Mar 2013 09:45:37 -0400 Subject: nginx + my module crashes only when ignore client abort = on In-Reply-To: <20130318111734.GZ15378@mdounin.ru> References: <20130318111734.GZ15378@mdounin.ru> Message-ID: <59f7f1720669f55f9dda485cdf6d075f.NginxMailingListEnglish@forum.nginx.org> i changed to pcalloc as you told me and the crash seems to be solved !! thanks alot Gad Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237362,237488#msg-237488 From nginx-forum at nginx.us Mon Mar 18 13:52:59 2013 From: nginx-forum at nginx.us (beaufour) Date: Mon, 18 Mar 2013 09:52:59 -0400 Subject: upstream keepalive with upstream hash In-Reply-To: <20130316231349.GK15378@mdounin.ru> References: <20130316231349.GK15378@mdounin.ru> Message-ID: <2994f188d2a0e26a66fb88de750c2d43.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: > > I'm suspecting that the upstream hash module "takes over" the > upstream > > handling, and thus ignores the 'keepalive' keyword, but I'm randomly > > guessing. > > Quoting http://nginx.org/r/keepalive: > > : When using load balancer methods other than the default > : round-robin, it is necessary to activate them before the keepalive > : directive. Doh! That seems to do the trick. Thanks! Allan Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237417,237489#msg-237489 From WBrown at e1b.org Mon Mar 18 14:23:12 2013 From: WBrown at e1b.org (WBrown at e1b.org) Date: Mon, 18 Mar 2013 10:23:12 -0400 Subject: Reverse Proxy Data Diode In-Reply-To: <92fefb58773b84a17f73e61bb760a1eb.NginxMailingListEnglish@forum.nginx.org> References: <92fefb58773b84a17f73e61bb760a1eb.NginxMailingListEnglish@forum.nginx.org> Message-ID: From: "Camayoc" > I'd argue the commercial solutions are value for money, given the > complexities. 
Not to mention most organizations that would need such a device like having someone to hold accountable (usually via lawsuit) when it fails. Confidentiality Notice: This electronic message and any attachments may contain confidential or privileged information, and is intended only for the individual or entity identified above as the addressee. If you are not the addressee (or the employee or agent responsible to deliver it to the addressee), or if this message has been addressed to you in error, you are hereby notified that you may not copy, forward, disclose or use any part of this message or any attachments. Please notify the sender immediately by return e-mail or telephone and delete this message from your system. From lists at ruby-forum.com Mon Mar 18 15:12:58 2013 From: lists at ruby-forum.com (Yunior Miguel A.) Date: Mon, 18 Mar 2013 16:12:58 +0100 Subject: 502 Bad Gateway- Nginx and thin Message-ID: <643d7734038eff5e13653af485ca8c37@ruby-forum.com> I have install nginx 1.1.19 and thin 1.5.0 in Ubuntu 12.04 and I am install redmine, when I try to access the page of redmine gives me the following error: 502 Bad Gateway. In Nginx log reads: 2013/03/18 10:45:09 [crit] 13886#0: *11 connect() to unix:/tmp/thin.0.sock failed (2: No such file or directory) while connecting to upstream, client: 127.0.0.1, server: redmine_nginx.ipp.com, request: "GET / HTTP/1.1", upstream: "http://unix:/tmp/thin.0.sock:/", host: "redmine_nginx.ipp.com" thin configuration: chdir: /var/www/redmine_nginx/ environment: production address: 0.0.0.0 port: 3000 timeout: 30 log: /var/log/thin/redmine.log #pid: /var/run/thin/redmine.pid pid: tmp/pids/thin.pid max_conns: 1024 max_persistent_conns: 512 require: [] wait: 30 servers: 1 daemonize: true socket: /tmp/thin.sock group: www-data user: www-data Nginx Service block: upstream thin_cluster { server unix:/tmp/thin.0.sock; } server { listen 80; ## listen for ipv4 # Set appropriately for virtual hosting and to use server_name_in_redirect server_name redmine_nginx.ipp.com; server_name_in_redirect off; access_log /var/log/nginx/localhost.access.log; error_log /var/log/nginx/localhost.error.log; # Note: Documentation says proxy_set_header should work in location # block, but testing did not support this statement so it has # been placed here in server block include /etc/nginx/proxy_opts; proxy_redirect off; # Note: Must match the prefix used in Thin configuration for Redmine # or / if no prefix configured location / { root /var/www/redmine_nginx/public; error_page 404 404.html; error_page 500 502 503 504 500.html; try_files $uri/index.html $uri.html $uri @redmine_thin_servers; } location @redmine_thin_servers { proxy_pass http://thin_cluster; } } the include /etc/nginx/proxy_opts; # Shared options used by all proxies proxy_set_header Host $http_host; # Following headers are not used by Redmine but may be useful for plugins and # other web applications proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; # Any other options for all proxies here client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; -- Posted via http://www.ruby-forum.com/. 
From mdounin at mdounin.ru Mon Mar 18 15:22:29 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Mar 2013 19:22:29 +0400 Subject: 502 Bad Gateway- Nginx and thin In-Reply-To: <643d7734038eff5e13653af485ca8c37@ruby-forum.com> References: <643d7734038eff5e13653af485ca8c37@ruby-forum.com> Message-ID: <20130318152228.GE15378@mdounin.ru> Hello! On Mon, Mar 18, 2013 at 04:12:58PM +0100, Yunior Miguel A. wrote: > I have install nginx 1.1.19 and thin 1.5.0 in Ubuntu 12.04 and I am > install redmine, when I try to access the page of redmine gives me the > following error: 502 Bad Gateway. In Nginx log reads: > 2013/03/18 10:45:09 [crit] 13886#0: *11 connect() to > unix:/tmp/thin.0.sock failed (2: No such file or directory) while > connecting to upstream, client: 127.0.0.1, server: > redmine_nginx.ipp.com, request: "GET / HTTP/1.1", upstream: > "http://unix:/tmp/thin.0.sock:/", host: "redmine_nginx.ipp.com" > thin configuration: [...] This: > socket: /tmp/thin.sock doesn't match this: > server unix:/tmp/thin.0.sock; and the mismatch explains the problem. -- Maxim Dounin http://nginx.org/en/donation.html From francis at daoine.org Mon Mar 18 15:26:20 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 18 Mar 2013 15:26:20 +0000 Subject: 502 Bad Gateway- Nginx and thin In-Reply-To: <643d7734038eff5e13653af485ca8c37@ruby-forum.com> References: <643d7734038eff5e13653af485ca8c37@ruby-forum.com> Message-ID: <20130318152620.GC18002@craic.sysops.org> On Mon, Mar 18, 2013 at 04:12:58PM +0100, Yunior Miguel A. wrote: Hi there, > I have install nginx 1.1.19 and thin 1.5.0 in Ubuntu 12.04 and I am > install redmine, when I try to access the page of redmine gives me the > following error: 502 Bad Gateway. In Nginx log reads: > 2013/03/18 10:45:09 [crit] 13886#0: *11 connect() to > unix:/tmp/thin.0.sock failed (2: No such file or directory) while Where did you tell thin to listen? Where did you tell nginx that thin would be listening? Are they the same? f -- Francis Daly francis at daoine.org From lists at ruby-forum.com Mon Mar 18 16:13:12 2013 From: lists at ruby-forum.com (Yunior Miguel A.) Date: Mon, 18 Mar 2013 17:13:12 +0100 Subject: 502 Bad Gateway- Nginx and thin In-Reply-To: <643d7734038eff5e13653af485ca8c37@ruby-forum.com> References: <643d7734038eff5e13653af485ca8c37@ruby-forum.com> Message-ID: <1ceefdadada1ddea9cfe8ff2eabe2d16@ruby-forum.com> When I look for /tmp/thin.0.sock, that file does exist. Sometimes when I restart thin it prints: /var/lib/gems/1.9.1/gems/activesupport-3.2.12/lib/active_support/dependencies.rb:251:in `block in require': iconv will be deprecated in the future, use String#encode instead. I kept the same port and changed the upstream to server unix:/tmp/thin.sock; and the log stays the same. -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Mon Mar 18 16:29:05 2013 From: nginx-forum at nginx.us (RedFoxy) Date: Mon, 18 Mar 2013 12:29:05 -0400 Subject: rewrite for missing images Message-ID: <885beabc5a846f7a628004946d1e058b.NginxMailingListEnglish@forum.nginx.org> Hello! I want to do a specific rewrite for missing images: I already have a 404 page for all other missing pages, and now I want a rewrite that serves an image when the missing resource is itself an image. Is that possible?
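Something like the following is what I have in mind - an untested sketch, where the extension list, the document root and the placeholder file name are all just examples:

    location ~* \.(png|jpe?g|gif)$ {
        root /var/www/mysite;
        # if the requested image is missing, serve a generic placeholder instead of the 404 page
        try_files $uri /images/missing.png;
    }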
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237502,237502#msg-237502 From nginx-forum at nginx.us Mon Mar 18 16:33:18 2013 From: nginx-forum at nginx.us (RedFoxy) Date: Mon, 18 Mar 2013 12:33:18 -0400 Subject: custom error page with errors in it Message-ID: <39e96e53ff3c93e95736b4c9317505ec.NginxMailingListEnglish@forum.nginx.org> Hi all! I does some rules for various web errors, that rules shows a custom error page: error_page 500 502 503 504 /50x.html; location = /50x.html { root /etc/nginx/_conf/error-page; } error_page 404 /404.html; location = /404.html { root /etc/nginx/_conf/error-page; } That's ok but there is a way to add some informations about the error in the pages? In apache there was some "apache-code" to include in the HTML page to add the web server output error in the html page, there is something like it in nginx? thank's Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237503,237503#msg-237503 From francis at daoine.org Mon Mar 18 17:14:49 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 18 Mar 2013 17:14:49 +0000 Subject: 502 Bad Gateway- Nginx and thin In-Reply-To: <1ceefdadada1ddea9cfe8ff2eabe2d16@ruby-forum.com> References: <643d7734038eff5e13653af485ca8c37@ruby-forum.com> <1ceefdadada1ddea9cfe8ff2eabe2d16@ruby-forum.com> Message-ID: <20130318171449.GD18002@craic.sysops.org> On Mon, Mar 18, 2013 at 05:13:12PM +0100, Yunior Miguel A. wrote: Hi there, > I am put the same port and change server unix:/tmp/thin.sock; > > and the log is the same. I'm not sure what exact configuration and log you are looking at in this test, but if your nginx.conf says "unix:/tmp/thin.sock" and your nginx error log says "connect() to unix:/tmp/thin.0.sock failed", then your actually-running nginx is not using that nginx.conf. Alternatively, if your nginx.conf says "unix:/tmp/thin.sock" and your nginx error log says "connect() to unix:/tmp/thin.sock failed", then that strongly suggests that your thin is not listening on that socket. So either make sure that thin is doing what you expect, or make sure that nginx is doing what you expect, depending on what your test showed. f -- Francis Daly francis at daoine.org From krjeschke at omniti.com Mon Mar 18 18:28:12 2013 From: krjeschke at omniti.com (Katherine Jeschke) Date: Mon, 18 Mar 2013 14:28:12 -0400 Subject: Surge 2013 CFP Open Message-ID: The Surge 2013 CFP is open. For details or to submit a paper, please visit http://surge.omniti.com/2013 -- Katherine Jeschke Director of Marketing and Creative Services OmniTI Computer Consulting, Inc. 11830 West Market Place, Suite F Fulton, MD 20759 O: 240-646-0770, 222 F: 301-497-2001 C: 443/643-6140 omniti.com Surge 2013 The information contained in this electronic message and any attached documents is privileged, confidential, and protected from disclosure. If you are not the intended recipient, note that any review, disclosure, copying, distribution, or use of the contents of this electronic message or any attached documents is prohibited. If you have received this communication in error, please destroy it and notify us immediately by telephone (1-443-325-1360) or by electronic mail (info at omniti.com). Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Mon Mar 18 19:53:10 2013 From: lists at ruby-forum.com (Yunior Miguel A.) 
Date: Mon, 18 Mar 2013 20:53:10 +0100 Subject: 502 Bad Gateway- Nginx and thin In-Reply-To: <643d7734038eff5e13653af485ca8c37@ruby-forum.com> References: <643d7734038eff5e13653af485ca8c37@ruby-forum.com> Message-ID: Thans for all. The end configuration: thin configuration: chdir: /var/www/redmine/ environment: production address: 127.0.0.1 port: 3000 timeout: 30 log: /var/log/thin/gespro.log pid: tmp/pids/thin.pid max_conns: 1024 max_persistent_conns: 512 require: [] wait: 30 servers: 1 daemonize: true socket: /tmp/thin.sock group: www-data user: www-data ngin configuration: upstream thin_cluster { server unix:/tmp/thin.0.sock; # server unix:/tmp/thin.1.sock; # server unix:/tmp/thin.2.sock; } server { listen 80; ## listen for ipv4 # Set appropriately for virtual hosting and to use server_name_in_redirect server_name redmine.ipp.uci.cu; server_name_in_redirect off; access_log /var/log/nginx/localhost.access.log; error_log /var/log/nginx/localhost.error.log; # Note: Documentation says proxy_set_header should work in location # block, but testing did not support this statement so it has # been placed here in server block include /etc/nginx/proxy_opts; proxy_redirect off; # Note: Must match the prefix used in Thin configuration for Redmine # or / if no prefix configured location / { root /var/www/redmine/public; error_page 404 404.html; error_page 500 502 503 504 500.html; try_files $uri/index.html $uri.html $uri @redmine_thin_servers; } location @redmine_thin_servers { proxy_pass http://thin_cluster; } } thanks for all. -- Posted via http://www.ruby-forum.com/. From black.fledermaus at arcor.de Mon Mar 18 20:10:16 2013 From: black.fledermaus at arcor.de (basti) Date: Mon, 18 Mar 2013 21:10:16 +0100 Subject: 502 Bad Gateway- Nginx and thin In-Reply-To: References: <643d7734038eff5e13653af485ca8c37@ruby-forum.com> Message-ID: <514774A8.6040503@arcor.de> thin => socket: /tmp/thin.sock nginx => server unix:/tmp/thin.0.sock; would be the problem i think. Am 18.03.2013 20:53, schrieb Yunior Miguel A.: > Thans for all. The end configuration: > > thin configuration: > > chdir: /var/www/redmine/ > environment: production > address: 127.0.0.1 > port: 3000 > timeout: 30 > log: /var/log/thin/gespro.log > pid: tmp/pids/thin.pid > max_conns: 1024 > max_persistent_conns: 512 > require: [] > wait: 30 > servers: 1 > daemonize: true > socket: /tmp/thin.sock > group: www-data > user: www-data > > ngin configuration: > > upstream thin_cluster { > server unix:/tmp/thin.0.sock; > # server unix:/tmp/thin.1.sock; > # server unix:/tmp/thin.2.sock; > } > > server { > > listen 80; ## listen for ipv4 > > # Set appropriately for virtual hosting and to use > server_name_in_redirect > server_name redmine.ipp.uci.cu; > server_name_in_redirect off; > > access_log /var/log/nginx/localhost.access.log; > error_log /var/log/nginx/localhost.error.log; > > # Note: Documentation says proxy_set_header should work in location > # block, but testing did not support this statement so it has > # been placed here in server block > include /etc/nginx/proxy_opts; > proxy_redirect off; > > # Note: Must match the prefix used in Thin configuration for Redmine > # or / if no prefix configured > location / { > root /var/www/redmine/public; > > error_page 404 404.html; > error_page 500 502 503 504 500.html; > try_files $uri/index.html $uri.html $uri @redmine_thin_servers; > } > > location @redmine_thin_servers { > proxy_pass http://thin_cluster; > } > } > > thanks for all. 
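For what it is worth, the numbered file name on the nginx side is not by itself a typo: thin appears to derive the actual socket file names from the configured one by appending a server index, which is also what the commented-out entries in the upstream block above hint at. Roughly, as a sketch using the paths from this thread:

    # thin side
    socket: /tmp/thin.sock
    servers: 3
    # socket files actually created: /tmp/thin.0.sock /tmp/thin.1.sock /tmp/thin.2.sock

    # nginx side has to list the numbered files, not the configured base name
    upstream thin_cluster {
        server unix:/tmp/thin.0.sock;
        server unix:/tmp/thin.1.sock;
        server unix:/tmp/thin.2.sock;
    }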
> From francis at daoine.org Mon Mar 18 20:24:43 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 18 Mar 2013 20:24:43 +0000 Subject: 502 Bad Gateway- Nginx and thin In-Reply-To: References: <643d7734038eff5e13653af485ca8c37@ruby-forum.com> Message-ID: <20130318202443.GE18002@craic.sysops.org> On Mon, Mar 18, 2013 at 08:53:10PM +0100, Yunior Miguel A. wrote: Hi there, I see that I was wrong in assuming how thin works. > thin configuration: > servers: 1 > socket: /tmp/thin.sock It looks like the *actual* file name used for the socket can add a ".integer" before the final "." in the configured socket file name, with the integer going from 0 up to one below the number of servers. So this bit: > ngin configuration: > > upstream thin_cluster { > server unix:/tmp/thin.0.sock; > # server unix:/tmp/thin.1.sock; > # server unix:/tmp/thin.2.sock; > } is correct, despite looking odd to my non-thin eyes. It's good that you found a configuration that works for you. >From the initial error message reported, I can only assume that thin wasn't running in that mode at the time you tested. f -- Francis Daly francis at daoine.org From jay at kodewerx.org Mon Mar 18 21:19:26 2013 From: jay at kodewerx.org (Jay Oster) Date: Mon, 18 Mar 2013 14:19:26 -0700 Subject: Strange $upstream_response_time latency spikes with reverse proxy In-Reply-To: <20130317114224.GR15378@mdounin.ru> References: <20130315082059.GR15378@mdounin.ru> <20130316233917.GL15378@mdounin.ru> <20130317114224.GR15378@mdounin.ru> Message-ID: Hi Maxim, On Sun, Mar 17, 2013 at 4:42 AM, Maxim Dounin wrote: > Hello! > > On "these hosts"? Note that listen queue aka backlog size is > configured in _applications_ which call listen(). At a host level > you may only configure somaxconn, which is maximum allowed listen > queue size (but an application may still use anything lower, even > just 1). > "These hosts" means we have a lot of servers in production right now, and they all exhibit the same issue. It hasn't been a showstopper, but it's been occurring for as long as anyone can remember. The total number of upstream servers on a typical day is 6 machines (each running 3 service processes), and hosts running nginx account for another 4 machines. All of these are Ubuntu 12.04 64-bit VMs running on AWS EC2 m3.xlarge instance types. I was under the impression that /proc/sys/net/ipv4/tcp_max_syn_backlog was for configuring the maximum queue size on the host. It's set to 1024, here, and increasing the number doesn't change the frequency of the missed packets. /proc/sys/net/core/somaxconn is set to 500,000 Make sure to check actual listen queue sizes used on listen > sockets involved. On Linux (you are using Linux, right?) this > should be possible with "ss -nlt" (or "netstat -nlt"). According to `ss -nlt`, send-q on these ports is set to 128. And recv-q on all ports is 0. I don't know what this means for recv-q, use default? And would default be 1024? But according to `netstat -nlt` both queues are 0? > > > 2) Some other queue in the network stack is exhausted. This > > > might be nontrivial to track (but usually possible too). > > > > This is interesting, and could very well be it! Do you have any > > suggestions on where to start looking? > > I'm not a Linux expert, but quick search suggests it should be > possible with dropwatch, see e.g. here: > > > http://prefetch.net/blog/index.php/2011/07/11/using-netstat-and-dropwatch-to-observe-packet-loss-on-linux-servers/ Thanks for the tip! I'll take some time to explore this some more. 
And before anyone asks, I'm not using iptables or netfilter. That appears to be a common cause for TCP overhead when investigating similar issues. Jay -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Mar 18 21:51:13 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 18 Mar 2013 21:51:13 +0000 Subject: securing access to a folder - 404 error In-Reply-To: <3f9b42885c146cd20a56a1e69a001f93.NginxMailingListEnglish@forum.nginx.org> References: <3f9b42885c146cd20a56a1e69a001f93.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130318215113.GF18002@craic.sysops.org> On Sun, Mar 10, 2013 at 04:07:23PM -0400, mottwsc wrote: Hi there, > I'm trying to secure a directory on a CentOS 6.3 64 server running NGINX > 1.2.7. I think I've set this up correctly, but it keeps giving me a 404 Not > Found error when I try to access a file in that folder in the browser using > domainName/secure/hello2.html. A 404 error from nginx for a local file should usually show something in the error log. Is there anything there? > I even moved the .htpasswd > file into the /secure/ folder and changed the config file to reflect that > change (just to see what would happen), but I still get the 404 Not Found > error. > > Can anyone tell me what I'm missing? I get 401 if I don't give the right credentials, and 403 if the passwd file is missing or if the requested file is not readable. But the only way I get 404 is if the file requested does not exist. What "root" directive is effective in this location{}? f -- Francis Daly francis at daoine.org From jay at kodewerx.org Mon Mar 18 22:09:13 2013 From: jay at kodewerx.org (Jay Oster) Date: Mon, 18 Mar 2013 15:09:13 -0700 Subject: Strange $upstream_response_time latency spikes with reverse proxy In-Reply-To: <051BFC6E-6CAE-4844-978E-415E0939B36A@kodewerx.org> References: <20130315082059.GR15378@mdounin.ru> <051BFC6E-6CAE-4844-978E-415E0939B36A@kodewerx.org> Message-ID: Hi again! On Sun, Mar 17, 2013 at 2:17 AM, Jason Oster wrote: > Hello Andrew, > > On Mar 16, 2013, at 8:05 AM, Andrew Alexeev wrote: > > Jay, > > You mean you keep seeing SYN-ACK loss through loopback? > > > That appears to be the case, yes. I've captured packets with tcpdump, and > load them into Wireshark for easier visualization. I can see a very clear > gap where no packets are transmitting for over 500ms, then a burst of ~10 > SYN packets. When I look at the TCP stream flow on these SYN bursts, it > shows an initial SYN packet almost exactly 1 second earlier without a > corresponding SYN-ACK. I'm taking the 1-second delay to be the RTO. I can > provide some pieces of the tcpdump capture log on Monday, to help > illustrate. > I double-checked, and the packet loss does *not* occur on loopback interface. It does occur when hitting the network with a machine's own external IP address, however. This is within Amazon's datacenter; the packets bounce through their firewall before returning to the VM. > That might sound funny, but what's the OS and the overall environment of > that strangely behaving machine with nginx? Is it a virtualized one? Is the > other machine any different? The more details you can provide, the better :) > > > It's a 64-bit Ubuntu 12.04 VM, running on an AWS m3.xlarge. Both VMs are > configured the same. > > Can you try the same tests on the other machine, where you originally > didn't have any problems with your application? 
That is, can you repeat > nginx+app on the other machine and see if the above strange behavior > persists? > > > Same configuration. I'm investigating this issue because it is common > across literally dozens of servers we have running in AWS. It occurs in all > regions, and on all instance types. This "single server" test is the first > time the software has been run with nginx load balancing to upstream > processes on the same machine. > Here is some additional information in the form of screenshots from Wireshark! 10.245.2.254 is the VM's eth0 address. 50.112.82.196 is the VM's external IP, as assigned by Amazon. All of these packets are being routed through Amazon's firewall. This first screenshot shows the "gap" that ends with a SYN burst. This was all captured during a single run of AB. [image: Inline image 1] The gap is about 500ms where the server is idle. :( If I use "follow TCP stream" on the highlighted packet, I get this: [image: Inline image 2] The initial SYN packet was sent almost exactly 1 second prior, and a SYN-ACK was not received for it. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2013-03-18 at 11.59.18 AM.png Type: image/png Size: 109869 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2013-03-18 at 11.58.49 AM.png Type: image/png Size: 104445 bytes Desc: not available URL: From coolbsd at hotmail.com Mon Mar 18 23:24:05 2013 From: coolbsd at hotmail.com (Cool) Date: Mon, 18 Mar 2013 16:24:05 -0700 Subject: How to change cookie header in a filter? Message-ID: Hi, What's the right way to change incoming cookie header so that upstream can get it just like it's from user's original request header? For example, user's browser sends: Cookie: PHPSESSID=34406134e25e5e07727f5de6d5ff7aa3; __utmc=78548747 and I want it to be: Cookie: PHPSESSID=34406134e25e5e07727f5de6d5ff7aa3; __utmc=78548747; mycookie=something when upstream processes the request. I'm trying to migrate an Apache HTTPd module to nginx, it's more or less like mod_usertrack (http://httpd.apache.org/docs/2.2/mod/mod_usertrack.html) but I need to implement my own logic to enforce compatibility among Apache, Nginx, IIS, and Jetty. The question is, for the first time visitor, the incoming request does not have mycookie in the header, I can determine this and generate cookie and Set-Cookie in response, however, I also need to change incoming cookie header so that upstream (php-fpm now, but should be same to all other upstreams as I'm guessing) can get this generated "mycookie" as well. I tried to add new entry to r->headers_in.cookies but it does not work, also tried r->headers_in.headers but no luck either. Thanks, -C -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Mar 19 00:10:47 2013 From: nginx-forum at nginx.us (mottwsc) Date: Mon, 18 Mar 2013 20:10:47 -0400 Subject: securing access to a folder - 404 error In-Reply-To: <20130318215113.GF18002@craic.sysops.org> References: <20130318215113.GF18002@craic.sysops.org> Message-ID: I was able to get partway through the problem with some help. The basic problem was that I had been missing a root directive in one of the location blocks. I was advised to (and did) move the root statement up to the server block and comment it out from any sub-blocks. 
I have found that this now works as it should to protect the /secure folder when trying to view html files, but it does not when viewing php files in the /secure folder (it just bypasses authentication and displays the file. I must be missing something in the /php block (I guess), but I'm not sure what that would be. Any suggestions? Here's the entire nginx config file.... CODE -------------------------------------------------------------------------------------------------- server { listen 80; server_name mm201.myserver.com; root /var/www/html; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { # root /var/www/html; # this statement allows static content to be served first try_files $uri $uri/ /index.php index index.php index.html index.htm; } # protect the "secure" folder ( /var/www/html/secure ) location /secure/ { # root /var/www/html; auth_basic "Restricted"; auth_basic_user_file /var/www/protected/.htpasswd; # auth_basic_user_file /var/www/html/secure/.htpasswd; } # protect the "munin" folder ( /var/www/html/munin ) and subfolders location ^~ /munin/ { auth_basic "Restricted"; auth_basic_user_file /var/www/protected/.htpasswd; } error_page 404 /404.html; location = /404.html { # root /var/www/html; } # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { # root /var/www/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { # root /var/www/html; try_files $uri =404; # the above was inserted to block malicious code uploads, but nginx and # the php-fcgi workers must be on the same physical server fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\.ht { deny all; } } -------------------------------------------------------------------------------------------------- Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237196,237518#msg-237518 From pcgeopc at gmail.com Tue Mar 19 03:55:09 2013 From: pcgeopc at gmail.com (Geo P.C.) Date: Tue, 19 Mar 2013 09:25:09 +0530 Subject: Need to proxypass to different servers. Message-ID: We have 3 servers with Nginx as webserver. The setup is as follows: Server1 : Proxy server Server2 : App Server1 Server3 : App Server 2 In both App servers port 80 is accessed only by Proxy server. We need to setup in such a way that while accessing geotest.com it will go to proxy server and then it should proxypass to app server1 and while accessing geotest.com/cms it should go to proxy server and then to app server 2. So in proxy server we need to setup as while accessing geotest.com and all its subdirectories like geotest.com/* it should go to app server 1 except while accessing geotest.com/cms and its subdirectories it should go to app server2. Please let us know how we can configure it. In proxy server we setup as follows but is not working: server { listen 80; server_name geotest.com; location / { proxy_pass http://app1.com; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; } location /ui { proxy_pass http://app2.com; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; } } Can anyone please hlp us on it. 
Thanks Geo -------------- next part -------------- An HTML attachment was scrubbed... URL: From robm at fastmail.fm Tue Mar 19 04:45:10 2013 From: robm at fastmail.fm (Robert Mueller) Date: Tue, 19 Mar 2013 15:45:10 +1100 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: <20130316234901.GM15378@mdounin.ru> References: <1363321351.3854.140661204587653.70CC51E2@webmail.messagingengine.com> <20130315081044.GQ15378@mdounin.ru> <1363386747.13074.140661204958001.6D3BCC81@webmail.messagingengine.com> <20130316234901.GM15378@mdounin.ru> Message-ID: <1363668310.15512.140661206171601.41B74C43@webmail.messagingengine.com> > > When an https client drops it's connection, the upstream http proxy > > connection is not dropped. If nginx can't detect an https client > > disconnect properly, that must mean it's leaking connection information > > internally doesn't it? > > No. It just can't say if a connection was closed or not as there > are pending data in the connection, and it can't read data (there > may be a pipelined request). Therefore in this case, being on the > safe side, it assumes the connection isn't closed and doesn't try > to abort upstream request. Oh right I see now. So the underlying problem is that the nginx stream layer abstraction isn't clean enough to handle low level OS events and then map them through the SSL layer to read/write/eof conceptual events as needed. Instead you need an OS level "eof" event, which you then assume maps through the SSL abstraction layer to a SSL stream eof event. Ok, so I had a look at the kqueue eof handling, and what's needed for epoll eof handling, and created a quick patch that seems to work. Can you check this out, and see if it looks right. If so, any chance you can incorporate it upstream? http://robm.fastmail.fm/downloads/nginx-epoll-eof.patch If there's anything you want changed, let me know and I'll try and fix it up. Rob From nginx-forum at nginx.us Tue Mar 19 08:10:57 2013 From: nginx-forum at nginx.us (lifeisjustabout) Date: Tue, 19 Mar 2013 04:10:57 -0400 Subject: how can i avoid google index my site properly jail directory only allow html extension. Message-ID: <8c20e6d3b567ddcc2531b2d69c5864c8.NginxMailingListEnglish@forum.nginx.org> Hello I have small pdf search site searches pdf over internet when somebody search URL look like www.mypdfsearchwebsite.com/pdf/nginx.html with (html extension) google index this address with html and without html version so there are duplicate record on google index each search lies without html www.mypdfsearchwebsite.com/pdf/nginx and with html www.mypdfsearchwebsite.com/pdf/nginx.html version what i would like to do is if i can make it around without html version (www.mypdfsearchwebsite.com/pdf/nginx) will be returned 404 page rather than 200 it could be also jail /pdf/ directory only allow html if html couldn't find go to nginx 404 page. 
I am not nginx guru but the newbie if you send me exactly location where i put suggested code i would be highly appreciated thank you very much for your help my nginx conf is below server { listen 1.2.3.4:80; server_name mypdfwebsite.com www.mypdfwebsite.com; # log_format awstatcomp '$host $remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$$ # access_log /var/log/nginx/mypdfwebsite.com.access.log main; access_log /var/log/nginx/mypdfwebsite.com.access.log awstatcomp; error_log /var/log/nginx/mypdfwebsite.com.error.log; root /home/mypdfwebsite/www; index index.php index.html; location / { try_files $uri $uri/ /index.php?q=$request_uri; } location ~ \.php$ { #root html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /home/mypdfwebsite/www$fastcgi_script_name; include fastcgi_params; } location ~ /\. { deny all; } location ~ \.pl$ { gzip off; include /etc/nginx/fastcgi_params; #fastcgi_pass unix:/tmp/php.sock; #statistic perl fastcgi_pass 127.0.0.1:8999; fastcgi_index index.pl; fastcgi_param SCRIPT_FILENAME /home/mypdfwebsite/www/$fastcgi_script_name; allow 127.0.0.1; allow 1.2.3.34; deny all; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237527,237527#msg-237527 From francis at daoine.org Tue Mar 19 09:11:26 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 19 Mar 2013 09:11:26 +0000 Subject: securing access to a folder - 404 error In-Reply-To: References: <20130318215113.GF18002@craic.sysops.org> Message-ID: <20130319091126.GG18002@craic.sysops.org> On Mon, Mar 18, 2013 at 08:10:47PM -0400, mottwsc wrote: Hi there, > I have found that this now > works as it should to protect the /secure folder when trying to view html > files, but it does not when viewing php files in the /secure folder (it just > bypasses authentication and displays the file. I must be missing something > in the /php block (I guess), but I'm not sure what that would be. Your "php" block doesn't have any mention of auth_basic, and so basic authentication does not apply there. > Any suggestions? One request is handled in one location. You must have all of the configuration that you want, available in the one location that handles a specific request. The "location" blocks you have are as follows. > location / { > location /secure/ { > location ^~ /munin/ { > location = /404.html { > location = /50x.html { > location ~ \.php$ { > location ~ /\.ht { The documentation (http://nginx.org/r/location, for example) should tell you exactly which location{} is used for each request you make. What you want is a location for "secure php" -- either "location ~ php" inside "location ^~ /secure/"; or else something like "location ~ ^/secure/.*php" in which both auth_basic and fastcgi_pass apply. f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Mar 19 09:20:29 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 19 Mar 2013 09:20:29 +0000 Subject: Need to proxypass to different servers. In-Reply-To: References: Message-ID: <20130319092029.GH18002@craic.sysops.org> On Tue, Mar 19, 2013 at 09:25:09AM +0530, Geo P.C. wrote: Hi there, > We have 3 servers with Nginx as webserver. The setup is as follows: > So in proxy server we need to setup as while accessing geotest.com and all > its subdirectories like geotest.com/* it should go to app server 1 except > while accessing geotest.com/cms and its subdirectories it should go to app > server2. > > Please let us know how we can configure it. 
"location /cms" should have "proxy_pass" to app2, "location /" should have "proxy_pass" to app1. Almost exactly as you show. Except that you spell "cms" "ui", for some reason. > In proxy server we setup as follows but is not working: Be specific. What one request do you make that does not give the response that you expect? What response do you get instead? Other things: you must set the world up so that the browser actually gets to your proxy server when requesting geotest.com. That's outside of anything nginx can do. You must set things up so that nginx actually gets to your app2 server when... > proxy_pass http://app2.com; ...using the name app2.com. That needs a working resolver, or a configured upstream block. Or just use the IP address directly here. And you will *probably* want to make sure that everything on app2 knows that it is effectively being served below /cms, as otherwise any links to other resources on that server may not work as you want. (And note that "location /cms" and "location /cms/" do different things, and may not both be what you want.) f -- Francis Daly francis at daoine.org From defan at nginx.com Tue Mar 19 09:49:47 2013 From: defan at nginx.com (Andrei Belov) Date: Tue, 19 Mar 2013 13:49:47 +0400 Subject: Strange $upstream_response_time latency spikes with reverse proxy In-Reply-To: References: <20130315082059.GR15378@mdounin.ru> <051BFC6E-6CAE-4844-978E-415E0939B36A@kodewerx.org> Message-ID: Hello Jay, On Mar 19, 2013, at 2:09 , Jay Oster wrote: > Hi again! > > On Sun, Mar 17, 2013 at 2:17 AM, Jason Oster wrote: > Hello Andrew, > > On Mar 16, 2013, at 8:05 AM, Andrew Alexeev wrote: >> Jay, >> >> You mean you keep seeing SYN-ACK loss through loopback? > > That appears to be the case, yes. I've captured packets with tcpdump, and load them into Wireshark for easier visualization. I can see a very clear gap where no packets are transmitting for over 500ms, then a burst of ~10 SYN packets. When I look at the TCP stream flow on these SYN bursts, it shows an initial SYN packet almost exactly 1 second earlier without a corresponding SYN-ACK. I'm taking the 1-second delay to be the RTO. I can provide some pieces of the tcpdump capture log on Monday, to help illustrate. > > I double-checked, and the packet loss does *not* occur on loopback interface. It does occur when hitting the network with a machine's own external IP address, however. This is within Amazon's datacenter; the packets bounce through their firewall before returning to the VM. If I understand you right, issue can be repeated in the following cases: 1) client and server are on different EC2 instances, public IPs are used; 2) client and server are on different EC2 instances, private IPs are used; 3) client and server are on a single EC2 instance, public IP is used. And there are no problems when: 1) client and server are on a single EC2 instance, either loopback or private IP is used. Please correct me if I'm wrong. What about EC2 security group - do the client and the server use the same group? How many rules are present in this group? Have you tried to either decrease a number of rules used, or create "pass any to any" simple configuration? And just to clarify the things - under "external IP address" do you mean EC2 instance's public IP, or maybe Elastic IP? > >> That might sound funny, but what's the OS and the overall environment of that strangely behaving machine with nginx? Is it a virtualized one? Is the other machine any different? 
The more details you can provide, the better :) > > It's a 64-bit Ubuntu 12.04 VM, running on an AWS m3.xlarge. Both VMs are configured the same. > >> Can you try the same tests on the other machine, where you originally didn't have any problems with your application? That is, can you repeat nginx+app on the other machine and see if the above strange behavior persists? > > Same configuration. I'm investigating this issue because it is common across literally dozens of servers we have running in AWS. It occurs in all regions, and on all instance types. This "single server" test is the first time the software has been run with nginx load balancing to upstream processes on the same machine. > > Here is some additional information in the form of screenshots from Wireshark! > > 10.245.2.254 is the VM's eth0 address. 50.112.82.196 is the VM's external IP, as assigned by Amazon. All of these packets are being routed through Amazon's firewall. > > This first screenshot shows the "gap" that ends with a SYN burst. This was all captured during a single run of AB. > > > > > The gap is about 500ms where the server is idle. :( > > If I use "follow TCP stream" on the highlighted packet, I get this: > > > > The initial SYN packet was sent almost exactly 1 second prior, and a SYN-ACK was not received for it. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Tue Mar 19 10:32:04 2013 From: nginx-forum at nginx.us (gvag) Date: Tue, 19 Mar 2013 06:32:04 -0400 Subject: Websocket proxying based on Sec-Websocket-Protocol Message-ID: Hi guys, I am trying to find if it is possible to proxy a websocket request based on the Sec-Websocket-Protocol. More specific, is there a way to check the Sec-Websocket-Protocol (http://tools.ietf.org/html/rfc6455#section-11.3.4) header and proxy to the appropriate host/port according to this header? A use case would be to proxy to a SIP Connector -SIP Websocket connector- (specific host and port) when the Sec-Websocket-Protocol = sip. (http://tools.ietf.org/html/draft-ietf-sipcore-sip-websocket-03) I checked documentation and forums, but i cannot find any information relevant to this issue. As far as i can see the Sec-Websocket-Protocol header is not available to be used in the nginx, just the "Upgrade: websocket" header. Regards George Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237539,237539#msg-237539 From nginx-forum at nginx.us Tue Mar 19 12:45:26 2013 From: nginx-forum at nginx.us (senior.weber@gmail.com) Date: Tue, 19 Mar 2013 08:45:26 -0400 Subject: proxy not listening to 443 Message-ID: <5f6b2f9d97d8a64b4dad8692db0d73aa.NginxMailingListEnglish@forum.nginx.org> Hello! We are running some applications servers (grails) and using nginx as reverse proxy before that for caching and load balancing purposes. everything is working as expected, but now that we received our ssl certificate, i am failing to route the ssl requests over nginx (i did understand that i could tell nginx the certificate and then serve the content of the http only servers in backend via ssl "frontend"). 
here is my server block: [code] upstream foobar { ip_hash; server 127.0.0.1:9099; } server { server_name .foobar.lu listen 443 default_server ssl; listen 80; access_log /.zis/logs/access.log; ssl_certificate /.zis/cert/foobar_lu.crt; ssl_certificate_key /.zis/cert/foobar.key; ssl_protocols SSLv3 TLSv1; ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM; location ~* ^/(login|admin|account).*$ { if ($scheme = "http") { rewrite ^ https://www.foobar.lu$request_uri permanent; } proxy_pass http://foobar; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; add_header Front-End-Https on; proxy_redirect off; } [.. non-ssl caching stuff..] } [/code] accessing the page via httpsyields to ERR_CONNECTION_REFUSED and nestat offers me no-one listening on 443: root at foo:/home/jeremy# netstat -nl | grep :4 tcp 0 0 0.0.0.0:4242 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:4243 0.0.0.0:* LISTEN root at foo:/home/jeremy# netstat -nl | grep :80 tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN Is there something special about ssl i did not configure right maybe? I tried splitting 80 and 443 in separate server blocks but no luck so far. Any help would be highly appreciated, thanks in advance, Andreas Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237547,237547#msg-237547 From mdounin at mdounin.ru Tue Mar 19 13:04:23 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Mar 2013 17:04:23 +0400 Subject: How to change cookie header in a filter? In-Reply-To: References: Message-ID: <20130319130423.GL15378@mdounin.ru> Hello! On Mon, Mar 18, 2013 at 04:24:05PM -0700, Cool wrote: > Hi, > > What's the right way to change incoming cookie header so that upstream > can get it just like it's from user's original request header? For > example, user's browser sends: > > Cookie: PHPSESSID=34406134e25e5e07727f5de6d5ff7aa3; __utmc=78548747 > > and I want it to be: > > Cookie: PHPSESSID=34406134e25e5e07727f5de6d5ff7aa3; __utmc=78548747; > mycookie=something > > when upstream processes the request. I would recommend something like this: proxy_set_header Cookie "$http_cookie; mycookie=something"; (Similar to what's usually done with X-Forwarded-For using the $proxy_add_x_forwarded_for variable.) > I'm trying to migrate an Apache HTTPd module to nginx, it's more or less > like mod_usertrack > (http://httpd.apache.org/docs/2.2/mod/mod_usertrack.html) but I need to > implement my own logic to enforce compatibility among Apache, Nginx, > IIS, and Jetty. > > The question is, for the first time visitor, the incoming request does > not have mycookie in the header, I can determine this and generate > cookie and Set-Cookie in response, however, I also need to change > incoming cookie header so that upstream (php-fpm now, but should be same > to all other upstreams as I'm guessing) can get this generated > "mycookie" as well. > > I tried to add new entry to r->headers_in.cookies but it does not work, > also tried r->headers_in.headers but no luck either. It's usually not a good idea to change original request headers. Instead, it is recommended to form appropriate request to an upstream, see above. 
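If the extra cookie should only be appended when the client did not send it already, the outgoing header can be built with a map on top of the proxy_set_header approach above (a sketch; "mycookie" and the literal value "generated" are placeholders from this thread, and producing the real cookie value is still up to whatever generates it):

    # at http{} level
    map $http_cookie $proxied_cookie {
        default           "$http_cookie; mycookie=generated";
        ""                "mycookie=generated";
        "~*mycookie="     $http_cookie;   # already present: pass through unchanged
                                          # (loose match; anchor it if cookie names may overlap)
    }

    # in the location that talks to a proxied backend:
    proxy_set_header Cookie $proxied_cookie;

    # or, for php-fpm via fastcgi_pass, redefine the auto-generated parameter instead:
    fastcgi_param HTTP_COOKIE $proxied_cookie;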
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Mar 19 13:35:05 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Mar 2013 17:35:05 +0400 Subject: Websocket proxying based on Sec-Websocket-Protocol In-Reply-To: References: Message-ID: <20130319133504.GN15378@mdounin.ru> Hello! On Tue, Mar 19, 2013 at 06:32:04AM -0400, gvag wrote: > Hi guys, > > I am trying to find if it is possible to proxy a websocket request based on > the Sec-Websocket-Protocol. More specific, is there a way to check the > Sec-Websocket-Protocol (http://tools.ietf.org/html/rfc6455#section-11.3.4) > header and proxy to the appropriate host/port according to this header? > > A use case would be to proxy to a SIP Connector -SIP Websocket connector- > (specific host and port) when the Sec-Websocket-Protocol = sip. > (http://tools.ietf.org/html/draft-ietf-sipcore-sip-websocket-03) > > I checked documentation and forums, but i cannot find any information > relevant to this issue. As far as i can see the Sec-Websocket-Protocol > header is not available to be used in the nginx, just the "Upgrade: > websocket" header. You may check Sec-Websocket-Protocol using $http_sec_websocket_protocol variable (see [1]) and route requests accordingly. This isn't specific to websockets and hence not something discussed in websockets context. [1] http://nginx.org/en/docs/http/ngx_http_core_module.html#variables -- Maxim Dounin http://nginx.org/en/donation.html From pcgeopc at gmail.com Tue Mar 19 14:12:25 2013 From: pcgeopc at gmail.com (Geo P.C.) Date: Tue, 19 Mar 2013 19:42:25 +0530 Subject: Need to proxypass to different servers. In-Reply-To: <20130319092029.GH18002@craic.sysops.org> References: <20130319092029.GH18002@craic.sysops.org> Message-ID: Thanks for your reply. Please see this: In Proxy server we have the setup as follows: server { listen 80; server_name geotest.com; proxy_buffering on; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; location / { proxy_pass http://192.168.0.1/; #app1 server } location /cms { proxy_pass http://192.168.0.2/; # app2 server } } Now while accessing the url the result are as follows: 1. geotest.com ? Working fine getting the contents of app1 server 2. geotest.com/a1 ? Working fine getting the contents of app1 server 3. geotest.com/cms ? Not working. Site proxypass to app2 server but we are getting a 404 page. 4. geotest.com/cmsssss ?Same as above result. For your information the cms application running app2 server is graphite server and you can find the nginx configuration file from the url: http://www.frlinux.eu/?p=199 in which we use the server name as geotest.com So can you please help us on it. Thanks Geo On Tue, Mar 19, 2013 at 2:50 PM, Francis Daly wrote: > On Tue, Mar 19, 2013 at 09:25:09AM +0530, Geo P.C. wrote: > > Hi there, > > > We have 3 servers with Nginx as webserver. The setup is as follows: > > > So in proxy server we need to setup as while accessing geotest.com and > all > > its subdirectories like geotest.com/* it should go to app server 1 > except > > while accessing geotest.com/cms and its subdirectories it should go to > app > > server2. > > > > Please let us know how we can configure it. > > "location /cms" should have "proxy_pass" to app2, "location /" should have > "proxy_pass" to app1. Almost exactly as you show. Except that you spell > "cms" "ui", for some reason. 
> > > In proxy server we setup as follows but is not working: > > Be specific. > > What one request do you make that does not give the response that you > expect? What response do you get instead? > > Other things: you must set the world up so that the browser actually > gets to your proxy server when requesting geotest.com. That's outside > of anything nginx can do. > > You must set things up so that nginx actually gets to your app2 server > when... > > > proxy_pass http://app2.com; > > ...using the name app2.com. That needs a working resolver, or a configured > upstream block. Or just use the IP address directly here. > > And you will *probably* want to make sure that everything on app2 knows > that it is effectively being served below /cms, as otherwise any links > to other resources on that server may not work as you want. > > (And note that "location /cms" and "location /cms/" do different things, > and may not both be what you want.) > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From WBrown at e1b.org Tue Mar 19 14:14:22 2013 From: WBrown at e1b.org (WBrown at e1b.org) Date: Tue, 19 Mar 2013 10:14:22 -0400 Subject: Translating an F5 rule Message-ID: I am configuring Nginx to sit in front of several IIS web servers to do load balancing and SSL signing. THe IIS application is supplied by an outside vendor. I have the load balancing and SSL signing working, with one exception. The login page doesn't work. :( When the vendor hosts this application, they use F5 hardware for SSP and load balancing. They gave me thisrule that they use in the F5 that I need to translate to nginx-ese: when HTTP_REQUEST { HTTP::header remove SWSSLHDR HTTP::header insert SWSSLHDR [TCP::local_port] } Is anyone here familiar w/ F5 hardwaare that can help translate this? -- William Brown Core Hosted Application Technical Team and Messaging Team Technology Services, WNYRIC, Erie 1 BOCES (716) 821-7285 Confidentiality Notice: This electronic message and any attachments may contain confidential or privileged information, and is intended only for the individual or entity identified above as the addressee. If you are not the addressee (or the employee or agent responsible to deliver it to the addressee), or if this message has been addressed to you in error, you are hereby notified that you may not copy, forward, disclose or use any part of this message or any attachments. Please notify the sender immediately by return e-mail or telephone and delete this message from your system. From mdounin at mdounin.ru Tue Mar 19 14:19:29 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Mar 2013 18:19:29 +0400 Subject: Strange $upstream_response_time latency spikes with reverse proxy In-Reply-To: References: <20130315082059.GR15378@mdounin.ru> <20130316233917.GL15378@mdounin.ru> <20130317114224.GR15378@mdounin.ru> Message-ID: <20130319141928.GO15378@mdounin.ru> Hello! On Mon, Mar 18, 2013 at 02:19:26PM -0700, Jay Oster wrote: > On Sun, Mar 17, 2013 at 4:42 AM, Maxim Dounin wrote: > > > On "these hosts"? Note that listen queue aka backlog size is > > configured in _applications_ which call listen(). At a host level > > you may only configure somaxconn, which is maximum allowed listen > > queue size (but an application may still use anything lower, even > > just 1). 
> > > > "These hosts" means we have a lot of servers in production right now, and > they all exhibit the same issue. It hasn't been a showstopper, but it's > been occurring for as long as anyone can remember. The total number of > upstream servers on a typical day is 6 machines (each running 3 service > processes), and hosts running nginx account for another 4 machines. All of > these are Ubuntu 12.04 64-bit VMs running on AWS EC2 m3.xlarge instance > types. > > I was under the impression that /proc/sys/net/ipv4/tcp_max_syn_backlog was > for configuring the maximum queue size on the host. It's set to 1024, here, > and increasing the number doesn't change the frequency of the missed > packets. > > /proc/sys/net/core/somaxconn is set to 500,000 As far as I understand, tcp_max_syn_backlog configures global cumulative limit for all listening sockets, while somaxconn limits one listening socket backlog. If any of the two is too small - you'll see SYN packets dropped. > > Make sure to check actual listen queue sizes used on listen > > sockets involved. On Linux (you are using Linux, right?) this > > should be possible with "ss -nlt" (or "netstat -nlt"). > > > According to `ss -nlt`, send-q on these ports is set to 128. And recv-q on > all ports is 0. I don't know what this means for recv-q, use default? And > would default be 1024? In "ss -nlt" output send-q column is used to display listen queue size for listen sockets. Number 128 here means you have listen queue for 128 connections only. You should tune your backends to use bigger listen queues, 128 is certanly too small for concurency 5000 you use in your tests. (The recv-q column should indicate current number of connections in listen queue.) > But according to `netstat -nlt` both queues are 0? This means that netstat isn't showing listen queue sizes on your host. It looks like many linux systems still always display 0 for listen sockets. -- Maxim Dounin http://nginx.org/en/donation.html From senior.weber at gmail.com Tue Mar 19 14:41:03 2013 From: senior.weber at gmail.com (Andreas Weber) Date: Tue, 19 Mar 2013 15:41:03 +0100 Subject: Need to proxypass to different servers. In-Reply-To: References: <20130319092029.GH18002@craic.sysops.org> Message-ID: Hi, Im not expert but i think you must specify /cms BEFORE / because "/" will match everything best regards andreas Threepwood: Ha-ha! Taste cold steel, feeble cannon restraint rope! 2013/3/19 Geo P.C. > Thanks for your reply. Please see this: > > > In Proxy server we have the setup as follows: > > > > server { > > listen 80; > > server_name geotest.com; > > proxy_buffering on; > > proxy_redirect off; > > proxy_set_header Host $host; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > > location / { > > proxy_pass http://192.168.0.1/; #app1 > server > > } > > location /cms { > > proxy_pass http://192.168.0.2/; # > app2 server > > } > > } > > Now while accessing the url the result are as follows: > > > > 1. geotest.com ? Working fine getting the contents of app1 server > > 2. geotest.com/a1 ? Working fine getting the contents of app1 server > > 3. geotest.com/cms ? Not working. Site proxypass to app2 server but > we are getting a 404 page. > > 4. geotest.com/cmsssss ?Same as above result. 
> > > > For your information the cms application running app2 server is graphite > server and you can find the nginx configuration file from the url: > http://www.frlinux.eu/?p=199 in which we use the server name as > geotest.com > > > > So can you please help us on it. > > > > Thanks > > Geo > > > > On Tue, Mar 19, 2013 at 2:50 PM, Francis Daly wrote: > >> On Tue, Mar 19, 2013 at 09:25:09AM +0530, Geo P.C. wrote: >> >> Hi there, >> >> > We have 3 servers with Nginx as webserver. The setup is as follows: >> >> > So in proxy server we need to setup as while accessing geotest.com and >> all >> > its subdirectories like geotest.com/* it should go to app server 1 >> except >> > while accessing geotest.com/cms and its subdirectories it should go to >> app >> > server2. >> > >> > Please let us know how we can configure it. >> >> "location /cms" should have "proxy_pass" to app2, "location /" should have >> "proxy_pass" to app1. Almost exactly as you show. Except that you spell >> "cms" "ui", for some reason. >> >> > In proxy server we setup as follows but is not working: >> >> Be specific. >> >> What one request do you make that does not give the response that you >> expect? What response do you get instead? >> >> Other things: you must set the world up so that the browser actually >> gets to your proxy server when requesting geotest.com. That's outside >> of anything nginx can do. >> >> You must set things up so that nginx actually gets to your app2 server >> when... >> >> > proxy_pass http://app2.com; >> >> ...using the name app2.com. That needs a working resolver, or a >> configured >> upstream block. Or just use the IP address directly here. >> >> And you will *probably* want to make sure that everything on app2 knows >> that it is effectively being served below /cms, as otherwise any links >> to other resources on that server may not work as you want. >> >> (And note that "location /cms" and "location /cms/" do different things, >> and may not both be what you want.) >> >> f >> -- >> Francis Daly francis at daoine.org >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Peter_Booth at s5a.com Tue Mar 19 14:43:12 2013 From: Peter_Booth at s5a.com (Peter Booth) Date: Tue, 19 Mar 2013 09:43:12 -0500 Subject: Translating an F5 rule In-Reply-To: References: Message-ID: The code does the following: 1. remove an HTTP header named "SWSSLHDR" 2. replaces it with SWSSLHDR: port, where the port is the local port of the "current context's TCP connection", presumably the port that your F5 virtual server is listening on. This is presumably to separate SSL and non SSL traffic , or to allow for load balancing across websites that are hosted on ports 8080, 8000 or other nonstandard ports. One thought- are you configuring the nginx server to terminate SSL and then proxy to a single upstream endpoint? Is this the same topology as the F5 one? Is the entire site SSL or just the login portions? 
Peter -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of WBrown at e1b.org Sent: Tuesday, March 19, 2013 10:14 AM To: nginx at nginx.org Subject: Translating an F5 rule I am configuring Nginx to sit in front of several IIS web servers to do load balancing and SSL signing. THe IIS application is supplied by an outside vendor. I have the load balancing and SSL signing working, with one exception. The login page doesn't work. :( When the vendor hosts this application, they use F5 hardware for SSP and load balancing. They gave me thisrule that they use in the F5 that I need to translate to nginx-ese: when HTTP_REQUEST { HTTP::header remove SWSSLHDR HTTP::header insert SWSSLHDR [TCP::local_port] } Is anyone here familiar w/ F5 hardwaare that can help translate this? -- William Brown Core Hosted Application Technical Team and Messaging Team Technology Services, WNYRIC, Erie 1 BOCES (716) 821-7285 Confidentiality Notice: This electronic message and any attachments may contain confidential or privileged information, and is intended only for the individual or entity identified above as the addressee. If you are not the addressee (or the employee or agent responsible to deliver it to the addressee), or if this message has been addressed to you in error, you are hereby notified that you may not copy, forward, disclose or use any part of this message or any attachments. Please notify the sender immediately by return e-mail or telephone and delete this message from your system. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Tue Mar 19 14:48:28 2013 From: nginx-forum at nginx.us (gvag) Date: Tue, 19 Mar 2013 10:48:28 -0400 Subject: Websocket proxying based on Sec-Websocket-Protocol In-Reply-To: <20130319133504.GN15378@mdounin.ru> References: <20130319133504.GN15378@mdounin.ru> Message-ID: Thanks a lot for the answer. George Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237539,237559#msg-237559 From mdounin at mdounin.ru Tue Mar 19 14:54:31 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Mar 2013 18:54:31 +0400 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: <1363668310.15512.140661206171601.41B74C43@webmail.messagingengine.com> References: <1363321351.3854.140661204587653.70CC51E2@webmail.messagingengine.com> <20130315081044.GQ15378@mdounin.ru> <1363386747.13074.140661204958001.6D3BCC81@webmail.messagingengine.com> <20130316234901.GM15378@mdounin.ru> <1363668310.15512.140661206171601.41B74C43@webmail.messagingengine.com> Message-ID: <20130319145430.GQ15378@mdounin.ru> Hello! On Tue, Mar 19, 2013 at 03:45:10PM +1100, Robert Mueller wrote: > > > > When an https client drops it's connection, the upstream http proxy > > > connection is not dropped. If nginx can't detect an https client > > > disconnect properly, that must mean it's leaking connection information > > > internally doesn't it? > > > > No. It just can't say if a connection was closed or not as there > > are pending data in the connection, and it can't read data (there > > may be a pipelined request). Therefore in this case, being on the > > safe side, it assumes the connection isn't closed and doesn't try > > to abort upstream request. > > Oh right I see now. 
> > So the underlying problem is that the nginx stream layer abstraction > isn't clean enough to handle low level OS events and then map them > through the SSL layer to read/write/eof conceptual events as needed. > Instead you need an OS level "eof" event, which you then assume maps > through the SSL abstraction layer to a SSL stream eof event. Not exactly. The underlying problem is that BSD sockets API doesn't provide standard means to detect EOF without reading all pending data, and hence OS-specific extensions have to be used to reliably detect pending EOFs. > Ok, so I had a look at the kqueue eof handling, and what's needed for > epoll eof handling, and created a quick patch that seems to work. > > Can you check this out, and see if it looks right. If so, any chance you > can incorporate it upstream? > > http://robm.fastmail.fm/downloads/nginx-epoll-eof.patch > > If there's anything you want changed, let me know and I'll try and fix > it up. I don't really like what you did in ngx_http_upstream.c. And there are more places where ev->pending_eof is used, and probably at least some of these places should be adapted, too. Additionally, poll_ctl(2) manpage claims the EPOLLRDHUP only available since Linux 2.6.17, and this suggests it needs some configure tests/conditional compilation. Valentin is already worked on this, and I believe he'll be able to provide a little bit more generic patch. -- Maxim Dounin http://nginx.org/en/donation.html From jfs.world at gmail.com Tue Mar 19 15:11:17 2013 From: jfs.world at gmail.com (Jeffrey 'jf' Lim) Date: Tue, 19 Mar 2013 23:11:17 +0800 Subject: Translating an F5 rule In-Reply-To: References: Message-ID: On Tue, Mar 19, 2013 at 10:43 PM, Peter Booth wrote: > The code does the following: > > 1. remove an HTTP header named "SWSSLHDR" > 2. replaces it with SWSSLHDR: port, where the port is the local port of > the "current context's TCP connection", presumably the port that your F5 > virtual server is listening on. > "when HTTP_REQUEST" is actually client-side, so the port in question would be the port on the backend server that it proxies to. Seems kind of strange to even pass this info along, unless somehow your backends are all listening on different ports. Whatever the case, this is what it actually means. -jf > This is presumably to separate SSL and non SSL traffic , or to allow for > load balancing across websites that are hosted on ports 8080, 8000 or > other nonstandard ports. > > One thought- are you configuring the nginx server to terminate SSL and > then proxy to a single upstream endpoint? Is this the same topology as > the F5 one? Is the entire site SSL or just the login portions? > > Peter > > -----Original Message----- > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf > Of WBrown at e1b.org > Sent: Tuesday, March 19, 2013 10:14 AM > To: nginx at nginx.org > Subject: Translating an F5 rule > > I am configuring Nginx to sit in front of several IIS web servers to do > load balancing and SSL signing. THe IIS application is supplied by an > outside vendor. I have the load balancing and SSL signing working, > with > one exception. > > The login page doesn't work. :( > > When the vendor hosts this application, they use F5 hardware for SSP and > > load balancing. 
They gave me this rule that they use in the F5 that I > need > to translate to nginx-ese: > > when HTTP_REQUEST { > HTTP::header remove SWSSLHDR > HTTP::header insert SWSSLHDR [TCP::local_port] > } > > Is anyone here familiar w/ F5 hardware that can help translate this? > > > -- > > William Brown > Core Hosted Application Technical Team and Messaging Team > Technology Services, WNYRIC, Erie 1 BOCES > (716) 821-7285 > > > > > Confidentiality Notice: > This electronic message and any attachments may contain confidential or > privileged information, and is intended only for the individual or > entity > identified above as the addressee. If you are not the addressee (or the > employee or agent responsible to deliver it to the addressee), or if > this > message has been addressed to you in error, you are hereby notified that > > you may not copy, forward, disclose or use any part of this message or > any > attachments. Please notify the sender immediately by return e-mail or > telephone and delete this message from your system. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From WBrown at e1b.org Tue Mar 19 15:42:59 2013 From: WBrown at e1b.org (WBrown at e1b.org) Date: Tue, 19 Mar 2013 11:42:59 -0400 Subject: Translating an F5 rule In-Reply-To: References: Message-ID: Peter Booth wrote on 03/19/2013 10:43:12 AM: > The code does the following: > > 1. remove an HTTP header named "SWSSLHDR" > 2. replaces it with SWSSLHDR: port, where the port is the local port of > the "current context's TCP connection", presumably the port that your F5 > virtual server is listening on. I had somewhat figured that out. It isn't clear from the notes I got from the vendor as to what the current context is. I'm guessing the client side, but I can test that. > This is presumably to separate SSL and non SSL traffic, or to allow for > load balancing across websites that are hosted on ports 8080, 8000 or > other nonstandard ports. > > One thought- are you configuring the nginx server to terminate SSL and > then proxy to a single upstream endpoint? Is this the same topology as > the F5 one? Is the entire site SSL or just the login portions? Presently, we are using a CentOS box with Piranha for load balancing, but we wish to implement SSL. There are about 50 sites hosted with three upstream servers. I don't want to tie up 150 IP addresses for SSL on them, so I want to terminate the SSL connection at the nginx server and use HTTP on port 80 to connect from nginx to IIS. The F5 information is just what the IIS application vendor says they use in their configuration. We may be buying an F5 in the future, but I need SSL in the short term. Would I add to the location section something like this: more_set_input_headers -r SWSSLHDR $server_port If $server_port isn't correct, I could try $remote_port. Are there any other port variables that I've missed? From my reading of the F5 docs, the "when HTTP_REQUEST" indicates this is only processed on requests received from clients. Since they are always removing the SWSSLHDR from incoming requests, then adding it again, I think using the -r option is sensible and only adding it if it exists. Now I'm off to rebuild nginx with HttpHeadersMoreModule.
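For reference, a rough sketch of how the translated server block might end up looking. It is purely illustrative: the server name, certificate paths, upstream name and addresses are invented, and it uses plain proxy_set_header (suggested later in this thread) rather than the headers-more module. proxy_set_header replaces whatever SWSSLHDR value the client sent, which mirrors the F5 rule's remove followed by insert; whether $server_port or $proxy_port is the appropriate value is also discussed further down the thread.

    # Illustrative sketch only; all names, paths and addresses are placeholders.
    upstream iis_backend {
        server 10.0.0.11:80;
        server 10.0.0.12:80;
        server 10.0.0.13:80;
    }

    server {
        listen 443 ssl;
        server_name app.example.com;

        ssl_certificate     /etc/nginx/ssl/app.example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/app.example.com.key;

        location / {
            # Replaces any client-supplied SWSSLHDR, like the F5 rule's
            # "header remove" followed by "header insert".
            proxy_set_header SWSSLHDR $server_port;
            proxy_set_header Host $host;
            proxy_pass http://iis_backend;
        }
    }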
Confidentiality Notice: This electronic message and any attachments may contain confidential or privileged information, and is intended only for the individual or entity identified above as the addressee. If you are not the addressee (or the employee or agent responsible to deliver it to the addressee), or if this message has been addressed to you in error, you are hereby notified that you may not copy, forward, disclose or use any part of this message or any attachments. Please notify the sender immediately by return e-mail or telephone and delete this message from your system. From nginx-forum at nginx.us Tue Mar 19 15:53:08 2013 From: nginx-forum at nginx.us (senior.weber@gmail.com) Date: Tue, 19 Mar 2013 11:53:08 -0400 Subject: proxy not listening to 443 In-Reply-To: <5f6b2f9d97d8a64b4dad8692db0d73aa.NginxMailingListEnglish@forum.nginx.org> References: <5f6b2f9d97d8a64b4dad8692db0d73aa.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1b3588d07030e47d274b9fb33f2fedcc.NginxMailingListEnglish@forum.nginx.org> Ok, Its getting better :-) Could get it to listen to 443 by using listen *:443 default_server ssl; listen *:80; (star double dot port) however server still says ERR_CONNECTION_REFUSED and in access log, nothing appears for https .. any help would be highly appreciated .. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237547,237566#msg-237566 From peter_booth at s5a.com Tue Mar 19 17:54:20 2013 From: peter_booth at s5a.com (peter) Date: Tue, 19 Mar 2013 12:54:20 -0500 Subject: Translating an F5 rule In-Reply-To: Message-ID: You might find that you get most traction with open resty ? its an nginx bundle project that includes ngx_lua, HttpHeadersMoreModule and a bunch of other mopdules that are great for transforming requests and implementing F5-like logic. I have been using it for six months and its saved me a bunch of time and helped me get weird stuff done. The openresty mailing list is very responsive. On 3/19/13 10:42 AM, "WBrown at e1b.org" wrote: > Peter Booth wrote on 03/19/2013 10:43:12 AM: > >> > The code does the following: >> > >> > 1. remove an HTTP header named "SWSSLHDR" >> > 2. replaces it with SWSSLHDR: port, where the port is the local port of >> > the "current context's TCP connection", presumably the port that your F5 >> > virtual server is listening on. > > I had somewhat figured that out. It isn't clear from the notes I got from > vender as to what the current context is. I'm guessing the client side, > but I can test that. > >> > This is presumably to separate SSL and non SSL traffic , or to allow for >> > load balancing across websites that are hosted on ports 8080, 8000 or >> > other nonstandard ports. >> > >> > One thought- are you configuring the nginx server to terminate SSL and >> > then proxy to a single upstream endpoint? Is this the same topology as >> > the F5 one? Is the entire site SSL or just the login portions? > > Presently, we are using an Centos box with Piranha for load balancing, but > we wish to implement SSL. There are about 50 sites hosted with three > upstream servers. I don't want to tie up 150 IP addresses for SSL on > them, so I want to terminate the SSL connection at the nginx server and > use HTTP on port 80 to connect from nginx to IIS. > > The F5 information is just what the IIS application vendor says they use > in their configuration. We may be buying an F5 in the future, but I need > SSL in the short term. 
> > Would I add to the location section something like this: > > more_set_input_headers -r SWSSLHDR $server_port > > If $server_port isn't correct, I could try $remote_port. Are there any > other port variables that I've missed? > > From my reading of the F5 docs, the "when HTTP_REQUEST" indicates this is > only processed on requests received from clients. Since they are always > removing the SWSSLHDR from incoming requests, then adding it again, I > think using the -r option is sensible and only adding it if it exists. > > Now I'm off to rebuild nginx with HttpHeadersMoreModule. > > > > > Confidentiality Notice: > This electronic message and any attachments may contain confidential or > privileged information, and is intended only for the individual or entity > identified above as the addressee. If you are not the addressee (or the > employee or agent responsible to deliver it to the addressee), or if this > message has been addressed to you in error, you are hereby notified that > you may not copy, forward, disclose or use any part of this message or any > attachments. Please notify the sender immediately by return e-mail or > telephone and delete this message from your system. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotr.sikora at frickle.com Tue Mar 19 17:13:27 2013 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Tue, 19 Mar 2013 18:13:27 +0100 Subject: [ANNOUNCE] ngx_cache_purge-2.1 Message-ID: <5589422AB7BA4B8A96CB5EE33AE5A425@Desktop> Version 2.1 is now available at: http://labs.frickle.com/nginx_ngx_cache_purge/ GitHub repository is available at: https://github.com/FRiCKLE/ngx_cache_purge/ Changes: 2013-03-19 VERSION 2.1 * When enabled, cache purge will now catch all requests with PURGE (or specified) method, even if cache isn't configured. Previously, it would pass such requests to the upstream. Best regards, Piotr Sikora < piotr.sikora at frickle.com > From WBrown at e1b.org Tue Mar 19 18:05:18 2013 From: WBrown at e1b.org (WBrown at e1b.org) Date: Tue, 19 Mar 2013 14:05:18 -0400 Subject: Translating an F5 rule In-Reply-To: References: Message-ID: peter wrote on 03/19/2013 01:54:20 PM: > You might find that you get most traction with open resty ? its an > nginx bundle project that includes ngx_lua, > HttpHeadersMoreModule and a bunch of other mopdules that are great > for transforming requests > and implementing F5-like logic. I have been using it for six months > and its saved me a bunch of time > and helped me get weird stuff done. The openresty mailing list is > very responsive. Thank you for the suggestion. OpenResty certainly looks like an interesting project adding lots of additional features/modules to the base nginx. My requirements are fairly limited, so I don't need all the development features it offers. If I get stymied with the base nginx plus specific modules (ie. headers-more-nginx-module-master) then I will try OpenResty. Confidentiality Notice: This electronic message and any attachments may contain confidential or privileged information, and is intended only for the individual or entity identified above as the addressee. 
If you are not the addressee (or the employee or agent responsible to deliver it to the addressee), or if this message has been addressed to you in error, you are hereby notified that you may not copy, forward, disclose or use any part of this message or any attachments. Please notify the sender immediately by return e-mail or telephone and delete this message from your system. From coolbsd at hotmail.com Tue Mar 19 18:50:43 2013 From: coolbsd at hotmail.com (Cool) Date: Tue, 19 Mar 2013 11:50:43 -0700 Subject: How to change cookie header in a filter? In-Reply-To: <20130319130423.GL15378@mdounin.ru> References: <20130319130423.GL15378@mdounin.ru> Message-ID: Thanks Maxim, I got what you mean. Since I'm using fastCGI so I put something like this: fastcgi_param HTTP_COOKIE "$http_cookie; mycookie=$cookie_note"; (I populated cookie_note in my filter already, this was done for logging purpose thus it is just a reuse of existing facility) More problems come with this solution: 1. it seems fastcgi_param called before my filter so $cookie_note always got empty, and 2. it seems fastcgi_param could not be used in a if directive so I end up with change the cookie header even the mycookie is presented in user's request, thus 3. all i got is a empty mycookie value However, this is really a good start point as at least it changes cookies sent to upstream, will look into codes to see how it works. -C PS, the expected behavior is: 1. user sends request 2. if mycookie presents and it is valid (some checksum function), it will not be touched and just pass through to upstream 3. if mycookie does not present or it is invalid (i.e. faked) - a new mycookie will be generated based on user's ip, port, time, etc, note that this means mycookies for different users are totally different - the new mycookie will be in Set-Cookie header - the new mycookie should be in the incoming cookie header as well, so that upstream can always have a mycookie 4. either it's a mycookie from request or populated, the cookie_note variable will be updated, mainly for logging purpose, but could be something else (like traffic routing). ? 13-3-19 ??6:04, Maxim Dounin ??: > Hello! > > On Mon, Mar 18, 2013 at 04:24:05PM -0700, Cool wrote: > >> Hi, >> >> What's the right way to change incoming cookie header so that upstream >> can get it just like it's from user's original request header? For >> example, user's browser sends: >> >> Cookie: PHPSESSID=34406134e25e5e07727f5de6d5ff7aa3; __utmc=78548747 >> >> and I want it to be: >> >> Cookie: PHPSESSID=34406134e25e5e07727f5de6d5ff7aa3; __utmc=78548747; >> mycookie=something >> >> when upstream processes the request. > I would recommend something like this: > > proxy_set_header Cookie "$http_cookie; mycookie=something"; > > (Similar to what's usually done with X-Forwarded-For using > the $proxy_add_x_forwarded_for variable.) > >> I'm trying to migrate an Apache HTTPd module to nginx, it's more or less >> like mod_usertrack >> (http://httpd.apache.org/docs/2.2/mod/mod_usertrack.html) but I need to >> implement my own logic to enforce compatibility among Apache, Nginx, >> IIS, and Jetty. >> >> The question is, for the first time visitor, the incoming request does >> not have mycookie in the header, I can determine this and generate >> cookie and Set-Cookie in response, however, I also need to change >> incoming cookie header so that upstream (php-fpm now, but should be same >> to all other upstreams as I'm guessing) can get this generated >> "mycookie" as well. 
>> >> I tried to add new entry to r->headers_in.cookies but it does not work, >> also tried r->headers_in.headers but no luck either. > It's usually not a good idea to change original request headers. > Instead, it is recommended to form appropriate request to an > upstream, see above. > From francis at daoine.org Tue Mar 19 20:17:17 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 19 Mar 2013 20:17:17 +0000 Subject: Need to proxypass to different servers. In-Reply-To: References: <20130319092029.GH18002@craic.sysops.org> Message-ID: <20130319201717.GI18002@craic.sysops.org> On Tue, Mar 19, 2013 at 07:42:25PM +0530, Geo P.C. wrote: Hi there, > location / { > proxy_pass http://192.168.0.1/; #app1 > } > > location /cms { > proxy_pass http://192.168.0.2/; # > } > 1. geotest.com ? Working fine getting the contents of app1 server > 2. geotest.com/a1 ? Working fine getting the contents of app1 server So far, so good. > 3. geotest.com/cms ? Not working. Site proxypass to app2 server but > we are getting a 404 page. What request do you want nginx to make of the app2 server here? -- /, /cms, or something else? What request is nginx making of the app2 server? -- check the app2 server logs, if the nginx logs don't tell you. If you make either of those requests of app2 yourself (using curl), do you get what you expect? > 4. geotest.com/cmsssss ?Same as above result. Same questions. What do you want to happen? What does happen? > For your information the cms application running app2 server is graphite > server and you can find the nginx configuration file from the url: > http://www.frlinux.eu/?p=199 in which we use the server name as geotest.com Unless graphite is one of those special and beautiful apps that allow themselves to be easily reverse-proxied at a non-root url, you may end up happier if you just use two separate server{} blocks with different server_name directives. But that can be worried about after you see how your /cms locations work. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Mar 19 20:29:02 2013 From: nginx-forum at nginx.us (fastcatch) Date: Tue, 19 Mar 2013 16:29:02 -0400 Subject: When does nginx return a Bad gateway (502)? Message-ID: <7e265ff1fb1ff13be1026c031e1397e2.NginxMailingListEnglish@forum.nginx.org> I have an application behind nginx (Rails on Unicorn if it matters) which listens on a UNIX socket. It all works nice, especially when load is low. When there is some load on the server though (say 70%), I randomly(?) get a bunch of 502 responses -- usually in batches. Thus the question: when (under what conditions) does nginx send a 502 response? Thanks in advance, fastcatch PS: The app's worker processes do not seem to be irresponsive, slow, blocked for IO or anything else I have been able to identify as a possible cause. Thus I want to understand more the nginx side so as to find the root cause -- which I believe is on the other side (or some config tweak needed soemwhere). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237573,237573#msg-237573 From nik.molnar at consbio.org Tue Mar 19 20:47:38 2013 From: nik.molnar at consbio.org (Nikolas Stevenson-Molnar) Date: Tue, 19 Mar 2013 13:47:38 -0700 Subject: When does nginx return a Bad gateway (502)? 
In-Reply-To: <7e265ff1fb1ff13be1026c031e1397e2.NginxMailingListEnglish@forum.nginx.org> References: <7e265ff1fb1ff13be1026c031e1397e2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5148CEEA.3040308@consbio.org> In my experience, Nginx returns 502 when the upstream server (Unicorn in your case) doesn't respond or terminates the connection unexpectedly. _Nik On 3/19/2013 1:29 PM, fastcatch wrote: > I have an application behind nginx (Rails on Unicorn if it matters) which > listens on a UNIX socket. > > It all works nice, especially when load is low. When there is some load on > the server though (say 70%), I randomly(?) get a bunch of 502 responses -- > usually in batches. > > Thus the question: when (under what conditions) does nginx send a 502 > response? > > Thanks in advance, > > fastcatch > > PS: The app's worker processes do not seem to be irresponsive, slow, blocked > for IO or anything else I have been able to identify as a possible cause. > Thus I want to understand more the nginx side so as to find the root cause > -- which I believe is on the other side (or some config tweak needed > soemwhere). > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237573,237573#msg-237573 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From jay at kodewerx.org Tue Mar 19 22:16:25 2013 From: jay at kodewerx.org (Jay Oster) Date: Tue, 19 Mar 2013 15:16:25 -0700 Subject: Strange $upstream_response_time latency spikes with reverse proxy In-Reply-To: <20130319141928.GO15378@mdounin.ru> References: <20130315082059.GR15378@mdounin.ru> <20130316233917.GL15378@mdounin.ru> <20130317114224.GR15378@mdounin.ru> <20130319141928.GO15378@mdounin.ru> Message-ID: Hi Maxim, On Tue, Mar 19, 2013 at 7:19 AM, Maxim Dounin wrote: > Hello! > > As far as I understand, tcp_max_syn_backlog configures global > cumulative limit for all listening sockets, while somaxconn limits > one listening socket backlog. If any of the two is too small - > you'll see SYN packets dropped. > > > > Make sure to check actual listen queue sizes used on listen > > > sockets involved. On Linux (you are using Linux, right?) this > > > should be possible with "ss -nlt" (or "netstat -nlt"). > > > > > > According to `ss -nlt`, send-q on these ports is set to 128. And recv-q > on > > all ports is 0. I don't know what this means for recv-q, use default? And > > would default be 1024? > > In "ss -nlt" output send-q column is used to display listen queue > size for listen sockets. Number 128 here means you have listen > queue for 128 connections only. You should tune your backends to > use bigger listen queues, 128 is certanly too small for concurency > 5000 you use in your tests. > > (The recv-q column should indicate current number of connections > in listen queue.) This is an excellent tip, thank you! Regardless of whether it fully resolves this issue, I will see about tuning the individual listen socket queues. The server is using libevent's asynchronous HTTP server module, so I'm not sure how much control I have over the socket options. But I will investigate. > > But according to `netstat -nlt` both queues are 0? > > This means that netstat isn't showing listen queue sizes on your > host. It looks like many linux systems still always display 0 for > listen sockets. Pretty strange. Oh well. `ss` works for me. -------------- next part -------------- An HTML attachment was scrubbed... 
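For anyone who runs into the same listen queue limit: the queue is a property of each listening socket, so the real fix belongs in the backend servers themselves, but the same idea can be sketched on the nginx side. The numbers below are arbitrary; on Linux the backlog passed to listen() is silently capped by net.core.somaxconn, so that sysctl needs to be raised too (for example "sysctl -w net.core.somaxconn=4096"), and after a reload "ss -nlt" should show the new value in the Send-Q column.

    # Sketch only: the backlog value is arbitrary and must not exceed
    # net.core.somaxconn, or the kernel will silently truncate it.
    server {
        listen 80 backlog=4096;
        server_name backlog-example.invalid;

        location / {
            return 204;
        }
    }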
URL: From jay at kodewerx.org Tue Mar 19 22:42:27 2013 From: jay at kodewerx.org (Jay Oster) Date: Tue, 19 Mar 2013 15:42:27 -0700 Subject: Strange $upstream_response_time latency spikes with reverse proxy In-Reply-To: References: <20130315082059.GR15378@mdounin.ru> <051BFC6E-6CAE-4844-978E-415E0939B36A@kodewerx.org> Message-ID: Hi Andrei! On Tue, Mar 19, 2013 at 2:49 AM, Andrei Belov wrote: > Hello Jay, > > If I understand you right, issue can be repeated in the following cases: > > 1) client and server are on different EC2 instances, public IPs are used; > 2) client and server are on different EC2 instances, private IPs are used; > 3) client and server are on a single EC2 instance, public IP is used. > > And there are no problems when: > > 1) client and server are on a single EC2 instance, either loopback or > private IP is used. > > Please correct me if I'm wrong. > If by "client" you mean nginx, and by "server" you mean our upstream HTTP service ... That is exactly correct. You could also throw in another permutation by changing where ApacheBench is run, but it doesn't change the occurrence of dropped packets; only increases average latency when AB and nginx are on separate EC2 instances. > What about EC2 security group - do the client and the server use the same > group? > How many rules are present in this group? Have you tried to either decrease > a number of rules used, or create "pass any to any" simple configuration? > That's a great point! We have been struggling with the number of firewall rules as a separate matter, in fact. There may be some relation here. Thank you for reminding me. > And just to clarify the things - under "external IP address" do you mean > EC2 > instance's public IP, or maybe Elastic IP? I'm talking about the instance public IPs. Elastic IPs are only used for client access to nginx. And specifically only for managing DNS. Between nginx and upstream servers, the public IPs are used. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robm at fastmail.fm Wed Mar 20 01:07:32 2013 From: robm at fastmail.fm (Robert Mueller) Date: Wed, 20 Mar 2013 12:07:32 +1100 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: <20130319145430.GQ15378@mdounin.ru> References: <1363321351.3854.140661204587653.70CC51E2@webmail.messagingengine.com> <20130315081044.GQ15378@mdounin.ru> <1363386747.13074.140661204958001.6D3BCC81@webmail.messagingengine.com> <20130316234901.GM15378@mdounin.ru> <1363668310.15512.140661206171601.41B74C43@webmail.messagingengine.com> <20130319145430.GQ15378@mdounin.ru> Message-ID: <1363741652.3871.140661206613853.5219825A@webmail.messagingengine.com> > Valentin is already worked on this, and I believe he'll be able to > provide a little bit more generic patch. Ok, well I might just use ours for now, but won't develop it any further. Any idea on a time frame for this more official patch? Rob From matthieu.tourne at gmail.com Wed Mar 20 01:47:04 2013 From: matthieu.tourne at gmail.com (Matthieu Tourne) Date: Tue, 19 Mar 2013 18:47:04 -0700 Subject: No HTTP version in request Message-ID: I just found an interesting behavior in Nginx while looking at a reqeust that was causing an error in my code. For a request with no HTTP/xx version, Nginx will return no HTTP response headers. >From what I gathered, this is just Nginx defaulting to HTTP/0.9, where no headers are expected. And actually doing the right thing here. 
But shouldn't $server_protocol default to HTTP/0.9 in that case, instead of being an empty string ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Wed Mar 20 05:04:20 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 20 Mar 2013 09:04:20 +0400 Subject: No HTTP version in request In-Reply-To: References: Message-ID: <3B5DB071-8632-4FAF-8D35-1D6FEA6E0B77@sysoev.ru> On Mar 20, 2013, at 5:47 , Matthieu Tourne wrote: > I just found an interesting behavior in Nginx while looking at a reqeust that was causing an error in my code. > > For a request with no HTTP/xx version, Nginx will return no HTTP response headers. > > From what I gathered, this is just Nginx defaulting to HTTP/0.9, where no headers are expected. And actually doing the right thing here. > > But shouldn't $server_protocol default to HTTP/0.9 in that case, instead of being an empty string ? Yes, absence of protocol version is so called HTTP/0.9 but this is adhoc name, so I think empty string is correct value, since there is no protocol version at all. -- Igor Sysoev http://nginx.com/services.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Wed Mar 20 07:05:50 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 20 Mar 2013 11:05:50 +0400 Subject: Translating an F5 rule In-Reply-To: References: Message-ID: <37FEFC80-6A9B-43F8-8503-3D7149F2507F@sysoev.ru> On Mar 19, 2013, at 19:42 , WBrown at e1b.org wrote: > Peter Booth wrote on 03/19/2013 10:43:12 AM: > >> The code does the following: >> >> 1. remove an HTTP header named "SWSSLHDR" >> 2. replaces it with SWSSLHDR: port, where the port is the local port of >> the "current context's TCP connection", presumably the port that your F5 >> virtual server is listening on. > > I had somewhat figured that out. It isn't clear from the notes I got from > vender as to what the current context is. I'm guessing the client side, > but I can test that. > >> This is presumably to separate SSL and non SSL traffic , or to allow for >> load balancing across websites that are hosted on ports 8080, 8000 or >> other nonstandard ports. >> >> One thought- are you configuring the nginx server to terminate SSL and >> then proxy to a single upstream endpoint? Is this the same topology as >> the F5 one? Is the entire site SSL or just the login portions? > > Presently, we are using an Centos box with Piranha for load balancing, but > we wish to implement SSL. There are about 50 sites hosted with three > upstream servers. I don't want to tie up 150 IP addresses for SSL on > them, so I want to terminate the SSL connection at the nginx server and > use HTTP on port 80 to connect from nginx to IIS. > > The F5 information is just what the IIS application vendor says they use > in their configuration. We may be buying an F5 in the future, but I need > SSL in the short term. 
> > Would I add to the location section something like this: > > more_set_input_headers -r SWSSLHDR $server_port proxy_set_header SWSSLHDR $server_port; -- Igor Sysoev http://nginx.com/services.html From jfs.world at gmail.com Wed Mar 20 08:17:28 2013 From: jfs.world at gmail.com (Jeffrey 'jf' Lim) Date: Wed, 20 Mar 2013 16:17:28 +0800 Subject: Translating an F5 rule In-Reply-To: <37FEFC80-6A9B-43F8-8503-3D7149F2507F@sysoev.ru> References: <37FEFC80-6A9B-43F8-8503-3D7149F2507F@sysoev.ru> Message-ID: On Wed, Mar 20, 2013 at 3:05 PM, Igor Sysoev wrote: > On Mar 19, 2013, at 19:42 , WBrown at e1b.org wrote: > >> Peter Booth wrote on 03/19/2013 10:43:12 AM: >> >>> The code does the following: >>> >>> 1. remove an HTTP header named "SWSSLHDR" >>> 2. replaces it with SWSSLHDR: port, where the port is the local port of >>> the "current context's TCP connection", presumably the port that your F5 >>> virtual server is listening on. >> >> I had somewhat figured that out. It isn't clear from the notes I got from >> vender as to what the current context is. I'm guessing the client side, >> but I can test that. >> >>> This is presumably to separate SSL and non SSL traffic , or to allow for >>> load balancing across websites that are hosted on ports 8080, 8000 or >>> other nonstandard ports. >>> >>> One thought- are you configuring the nginx server to terminate SSL and >>> then proxy to a single upstream endpoint? Is this the same topology as >>> the F5 one? Is the entire site SSL or just the login portions? >> >> Presently, we are using an Centos box with Piranha for load balancing, but >> we wish to implement SSL. There are about 50 sites hosted with three >> upstream servers. I don't want to tie up 150 IP addresses for SSL on >> them, so I want to terminate the SSL connection at the nginx server and >> use HTTP on port 80 to connect from nginx to IIS. >> >> The F5 information is just what the IIS application vendor says they use >> in their configuration. We may be buying an F5 in the future, but I need >> SSL in the short term. >> >> Would I add to the location section something like this: >> >> more_set_input_headers -r SWSSLHDR $server_port > > proxy_set_header SWSSLHDR $server_port; > nice catch! But once again, because HTTP_REQUEST is client-side, so says this F5-certified engineer with reference to the docs, it should be $proxy_port instead of $server_port. -jf From mdounin at mdounin.ru Wed Mar 20 08:38:19 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 20 Mar 2013 12:38:19 +0400 Subject: How to change cookie header in a filter? In-Reply-To: References: <20130319130423.GL15378@mdounin.ru> Message-ID: <20130320083819.GA62550@mdounin.ru> Hello! On Tue, Mar 19, 2013 at 11:50:43AM -0700, Cool wrote: > Thanks Maxim, I got what you mean. > > Since I'm using fastCGI so I put something like this: > > fastcgi_param HTTP_COOKIE "$http_cookie; mycookie=$cookie_note"; > > (I populated cookie_note in my filter already, this was done for > logging purpose thus it is just a reuse of existing facility) > > More problems come with this solution: > > 1. it seems fastcgi_param called before my filter so $cookie_note > always got empty, and You shouldn't rely on your filter already executed, and should instead register a variable handler which does the actual work. This way it will work at any time. > 2. 
it seems fastcgi_param could not be used in a if directive so I > end up with change the cookie header even the mycookie is presented > in user's request, thus If there are conditions when you should not add a cookie I would recommend you implementing a variable with full Cookie header you want to pass, e.g. fastcgi_param HTTP_COOKIE $my_new_cookie; This way you may implement arbitrary conditions you want in your module. (You may also construct the variable using if/set/map/etc, but doing appropriate tests in your module would be less error prone.) -- Maxim Dounin http://nginx.org/en/donation.html From igor at sysoev.ru Wed Mar 20 08:42:43 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 20 Mar 2013 12:42:43 +0400 Subject: Need to proxypass to different servers. In-Reply-To: References: <20130319092029.GH18002@craic.sysops.org> Message-ID: <08094371-7F46-4144-B847-F1FCCEC2C9DC@sysoev.ru> On Mar 19, 2013, at 18:41 , Andreas Weber wrote: > Im not expert but i think you must specify /cms BEFORE / because "/" will match everything No. Since "/" and "/cms" are not regex locations, nginx finds the maximum match despite location order. This is why using only non-regex locations allows to create at once large and easy to maintain configurations with a lot of locations. -- Igor Sysoev http://nginx.com/services.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Mar 20 10:03:35 2013 From: nginx-forum at nginx.us (fastcatch) Date: Wed, 20 Mar 2013 06:03:35 -0400 Subject: When does nginx return a Bad gateway (502)? In-Reply-To: <5148CEEA.3040308@consbio.org> References: <5148CEEA.3040308@consbio.org> Message-ID: <8fd0e2adc40cbfba00b43209e857e1af.NginxMailingListEnglish@forum.nginx.org> Thank you. BTW: having googled it for different expressions I've found that it is a well known issue and that it has nothing to do with nginx or the Rails app itself. Unicorn in and of itself can neither be blamed (albeit IMHO there should be some official documentation on this issue, esp. since nginx is the preferred/"official" web server for unicorn). It looks like a socket backlog setting issue to be handled at OS level but I'm still unsure if that's a resolution or not. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237573,237590#msg-237590 From pasik at iki.fi Wed Mar 20 17:02:51 2013 From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=) Date: Wed, 20 Mar 2013 19:02:51 +0200 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: <20130314083912.GT8912@reaktio.net> References: <20130222092524.GV8912@reaktio.net> <20130222105052.GW8912@reaktio.net> <20130225101304.GZ8912@reaktio.net> <20130305131741.GN8912@reaktio.net> <20130308133629.GD8912@reaktio.net> <20130314083912.GT8912@reaktio.net> Message-ID: <20130320170251.GL11427@reaktio.net> On Thu, Mar 14, 2013 at 10:39:12AM +0200, Pasi K?rkk?inen wrote: > On Thu, Mar 14, 2013 at 01:17:16PM +0800, Weibin Yao wrote: > > Try the new patch, It could solve your problem. > > Thanks for your test effort. > > > > Thanks a lot! > > I can confirm the "no_buffer_v5.patch" with nginx 1.2.7 fixes the problem for me, > and both HTTP POST and HTTP PUT requests work OK now without buffering to disk. 
For the archives: v6 of the patch fixes some remaining problems: http://yaoweibin.cn/patches/nginx-1.2.7-no_buffer-v6.patch -- Pasi
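For readers finding this thread in the archives, a condensed sketch of the kind of configuration discussed above. Note that proxy_request_buffering and client_body_postpone_sending come from Weibin Yao's patch (and from tengine), not from stock nginx 1.2.7, and the names and paths below are placeholders based on the earlier messages. Per the patch documentation quoted in this thread, disabling request buffering also disables retrying another upstream once part of the body has been sent, and leaves $request_body and $request_body_file undefined.

    # Sketch only: requires the no_buffer patch; these directives do not
    # exist in stock nginx 1.2.7. Certificate paths are placeholders.
    server {
        listen 443 ssl;
        server_name service.domain.tld;

        ssl_certificate     /etc/nginx/ssl/service.crt;
        ssl_certificate_key /etc/nginx/ssl/service.key;

        client_max_body_size         262144M;  # allow very large uploads
        client_body_postpone_sending 64k;      # patch: send once 64k has been read
        proxy_request_buffering      off;      # patch: do not spool the body to disk
        proxy_buffering              off;

        location / {
            proxy_pass https://service-backend;  # upstream defined elsewhere
        }
    }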
client_body_postpone_sending ?* ? ?* 64k; > > > > ? ? ? # ?* ? ?* ? ?* ? proxy_request_buffering ?* ? ?* ? ?* ? ?* > > ? off; > > > > > > > > ? ? ? ?* ? ?* ? ?* ? ?* ? location / { > > > > > > > > ? ? ? ?* ? ?* ? ?* ? ?* ? ?* ? ?* ? ?* ? ?* ? proxy_pass ?* ? ?* > > ? ?* [3][4]https://service-backend; > > > > ? ? ? ?* ? ?* ? ?* ? ?* ? } > > > > ? ? ? } > > > > > > > > ? ? ? Thanks! > > > > > > > > ? ? ? -- Pasi > > > > > > > > ? ? ? > ?* ? ?* 2013/2/22 Pasi K?**??*??rkk?**??*??inen > > <[1][4][5]pasik at iki.fi> > > > > ? ? ? > > > > > ? ? ? > ?* ? ?* ? ?* On Fri, Feb 22, 2013 at 11:25:24AM +0200, > > Pasi > > > > ? ? ? K?**??*??rkk?**??*??inen wrote: > > > > ? ? ? > ?* ? ?* ? ?* > On Fri, Feb 22, 2013 at 10:06:11AM +0800, > > Weibin Yao wrote: > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** Use the patch I attached in > > this mail thread > > > > ? ? ? instead, don't use > > > > ? ? ? > ?* ? ?* ? ?* the pull > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** request patch which is for > > tengine.?*** > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** Thanks. > > > > ? ? ? > ?* ? ?* ? ?* > > > > > > ? ? ? > ?* ? ?* ? ?* > > > > > ? ? ? > ?* ? ?* ? ?* > Oh sorry I missed that attachment. It seems > > to apply and > > > > ? ? ? build OK. > > > > ? ? ? > ?* ? ?* ? ?* > I'll start testing it. > > > > ? ? ? > ?* ? ?* ? ?* > > > > > ? ? ? > > > > > ? ? ? > ?* ? ?* ? ?* I added the patch on top of nginx 1.2.7 and > > enabled the > > > > ? ? ? following > > > > ? ? ? > ?* ? ?* ? ?* options: > > > > ? ? ? > > > > > ? ? ? > ?* ? ?* ? ?* client_body_postpone_sending ?** ?* ?** 64k; > > > > ? ? ? > ?* ? ?* ? ?* proxy_request_buffering ?** ?* ?** ?* ?** ?* > > ?** ?* off; > > > > ? ? ? > > > > > ? ? ? > ?* ? ?* ? ?* after that connections through the nginx > > reverse proxy started > > > > ? ? ? failing > > > > ? ? ? > ?* ? ?* ? ?* with errors like this: > > > > ? ? ? > > > > > ? ? ? > ?* ? ?* ? ?* [error] 29087#0: *49 upstream prematurely > > closed connection > > > > ? ? ? while > > > > ? ? ? > ?* ? ?* ? ?* reading response header from upstream > > > > ? ? ? > ?* ? ?* ? ?* [error] 29087#0: *60 upstream sent invalid > > header while > > > > ? ? ? reading response > > > > ? ? ? > ?* ? ?* ? ?* header from upstream > > > > ? ? ? > > > > > ? ? ? > ?* ? ?* ? ?* And the services are unusable. > > > > ? ? ? > > > > > ? ? ? > ?* ? ?* ? ?* Commenting out the two config options above > > makes nginx happy > > > > ? ? ? again. > > > > ? ? ? > ?* ? ?* ? ?* Any idea what causes that? Any tips how to > > troubleshoot it? > > > > ? ? ? > ?* ? ?* ? ?* Thanks! > > > > ? ? ? > > > > > ? ? ? > ?* ? ?* ? ?* -- Pasi > > > > ? ? ? > > > > > ? ? ? > ?* ? ?* ? ?* > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** 2013/2/22 Pasi > > K?***?*??*?*??rkk?***?*??*?*??inen > > > > ? ? ? <[1][2][5][6]pasik at iki.fi> > > > > ? ? ? > ?* ? ?* ? ?* > > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** On Fri, Jan 18, 2013 at > > 10:38:21AM +0200, > > > > ? ? ? Pasi > > > > ? ? ? > ?* ? ?* ? ?* K?***?*??*?*??rkk?***?*??*?*??inen wrote: > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > On Thu, Jan 17, 2013 > > at 11:15:58AM +0800, > > > > ? ? ? ?????? wrote: > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** Yes. > > It should work for any > > > > ? ? ? request method. > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > Great, thanks, I'll > > let you know how it > > > > ? ? ? works for me. > > > > ? ? ? > ?* ? ?* ? 
?* Probably in two > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** weeks or so. > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > > ? ? ? > ?* ? ?* ? ?* > > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** Hi, > > > > ? ? ? > ?* ? ?* ? ?* > > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** Adding the tengine pull > > request 91 on top of > > > > ? ? ? nginx 1.2.7 > > > > ? ? ? > ?* ? ?* ? ?* doesn't work: > > > > ? ? ? > ?* ? ?* ? ?* > > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** cc1: warnings being > > treated as errors > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > src/http/ngx_http_request_body.c: In function > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > ? ? ? 'ngx_http_read_non_buffered_client_request_body': > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > src/http/ngx_http_request_body.c:506: error: > > > > ? ? ? implicit > > > > ? ? ? > ?* ? ?* ? ?* declaration of > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** function > > 'ngx_http_top_input_body_filter' > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** make[1]: *** > > > > ? ? ? [objs/src/http/ngx_http_request_body.o] Error 1 > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** make[1]: Leaving > > directory > > > > ? ? ? `/root/src/nginx/nginx-1.2.7' > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** make: *** [build] Error > > 2 > > > > ? ? ? > ?* ? ?* ? ?* > > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ngx_http_top_input_body_filter() cannot be > > > > ? ? ? found from any > > > > ? ? ? > ?* ? ?* ? ?* .c/.h files.. > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** Which other patches > > should I apply? > > > > ? ? ? > ?* ? ?* ? ?* > > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** Perhaps this? > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** > > > > ? ? ? > ?* ? ?* ? ?* ?** > > > > ? ? > > ? [2][3][6][7]https://github.com/cfsego/limit_upload_rate/blob/master/for-nginx.patch > > > > ? ? ? > ?* ? ?* ? ?* > > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** Thanks, > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** -- Pasi > > > > ? ? ? > ?* ? ?* ? ?* > > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** > > 2013/1/16 Pasi > > > > ? ? ? K?****?**?*??*?**?*??rkk?****?**?*??*?**?*??inen > > > > ? ? ? > ?* ? ?* ? ?* <[1][3][4][7][8]pasik at iki.fi> > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** On Sun, Jan 13, 2013 at > > > > ? ? ? 08:22:17PM +0800, > > > > ? ? ? > ?* ? ?* ? ?* ?????? wrote: > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** This > > > > ? ? ? patch should work between > > > > ? ? ? > ?* ? ?* ? ?* nginx-1.2.6 and > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** nginx-1.3.8. > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** The > > > > ? ? ? documentation is here: > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** ## > > > > ? ? ? > ?* ? ?* ? ?* client_body_postpone_sending ## > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** Syntax: > > > > ? ? ? > ?* ? ?* ? ?* **client_body_postpone_sending** `size` > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** > > > > ? ? ? Default: 64k > > > > ? 
? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** > > > > ? ? ? Context: `http, server, > > > > ? ? ? > ?* ? ?* ? ?* location` > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** If you > > > > ? ? ? specify the > > > > ? ? ? > ?* ? ?* ? ?* `proxy_request_buffering` or > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** > > > > ? ? ? `fastcgi_request_buffering` to > > > > ? ? ? > ?* ? ?* ? ?* be off, Nginx will > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** send the body > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** to backend > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** when it > > > > ? ? ? receives more than > > > > ? ? ? > ?* ? ?* ? ?* `size` data or the > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** whole request body > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** has been > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** > > > > ? ? ? received. It could save the > > > > ? ? ? > ?* ? ?* ? ?* connection and reduce > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** the IO number > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** with > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** > > > > ? ? ? backend. > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** ## > > > > ? ? ? proxy_request_buffering ## > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** Syntax: > > > > ? ? ? > ?* ? ?* ? ?* **proxy_request_buffering** `on | off` > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** > > > > ? ? ? Default: `on` > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** > > > > ? ? ? Context: `http, server, > > > > ? ? ? > ?* ? ?* ? ?* location` > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** Specify > > > > ? ? ? the request body will > > > > ? ? ? > ?* ? ?* ? ?* be buffered to the > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** disk or not. If > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** it's off, > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** the > > > > ? ? ? request body will be > > > > ? ? ? > ?* ? ?* ? ?* stored in memory and sent > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** to backend > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** after Nginx > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** > > > > ? ? ? receives more than > > > > ? ? ? > ?* ? ?* ? ?* `client_body_postpone_sending` > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** data. It could > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** save the > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** disk IO > > > > ? ? ? with large request > > > > ? ? ? > ?* ? ?* ? ?* body. > > > > ? ? ? > ?* ? ?* ? 
?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** ?*** > > > > ? ? ? ?**** ?*** ?**** ?*** ?**** ?*** ?**** ?*** > > > > ? ? ? > ?* ? ?* ? ?* Note that, if you specify it > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** to be off, the nginx > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** retry mechanism > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** with > > > > ? ? ? unsuccessful response > > > > ? ? ? > ?* ? ?* ? ?* will be broken after > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** you sent part of > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** the > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** request > > > > ? ? ? to backend. It will > > > > ? ? ? > ?* ? ?* ? ?* just return 500 when > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** it encounters > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** such > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** > > > > ? ? ? unsuccessful response. This > > > > ? ? ? > ?* ? ?* ? ?* directive also breaks > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** these > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** variables: > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** > > > > ? ? ? $request_body, > > > > ? ? ? > ?* ? ?* ? ?* $request_body_file. You should not > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** use these > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** variables any > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** more > > > > ? ? ? while their values are > > > > ? ? ? > ?* ? ?* ? ?* undefined. > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** Hello, > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** This patch sounds > > > > ? ? ? exactly like what I need > > > > ? ? ? > ?* ? ?* ? ?* aswell! > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** I assume it works for > > > > ? ? ? both POST and PUT > > > > ? ? ? > ?* ? ?* ? ?* requests? > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** Thanks, > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** -- Pasi > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** ?*** > > > > ? ? ? ?**** Hello! > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** ?*** > > > > ? ? ? ?**** @yaoweibin > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > > > > > ? ? ? > ?* ? ?* ? ?* > > ?** ?* ?** ?* ?** > > ?*** ?** ?*** ?** > > ?*** > ?**** ?*** ?**** ?*** > > > > ? ? 
> > If you are eager for this feature, you could try my patch:
> > https://github.com/taobao/tengine/pull/91. This patch has been running
> > in our production servers.
>
> What's the nginx version your patch is based on?
>
> Thanks!
>
> On Fri, Jan 11, 2013 at 5:17 PM, Weibin Yao <yaoweibin at gmail.com> wrote:
> > I know the nginx team are working on it. You can wait for it.
> >
> > If you are eager for this feature, you could try my patch:
> > https://github.com/taobao/tengine/pull/91. This patch has been running
> > in our production servers.
> >
> > 2013/1/11 li zJay <zjay1987 at gmail.com>:
> > > Hello!
> > >
> > > Is it possible that nginx will not buffer the client body before
> > > handing the request to the upstream?
> > >
> > > We want to use nginx as a reverse proxy to upload very big files to
> > > the upstream, but the default behavior of nginx is to save the whole
> > > request to the local disk first before handing it to the upstream,
> > > which makes it impossible for the upstream to process the file on the
> > > fly while it is being uploaded, resulting in much higher request
> > > latency and server-side resource consumption.
> > >
> > > Thanks!
> > >
> > > _______________________________________________
> > > nginx mailing list
> > > nginx at nginx.org
> > > http://mailman.nginx.org/mailman/listinfo/nginx
> >
> > --
> > Weibin Yao
> > Developer @ Server Platform Team of Taobao
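For context on the behavior being asked about: stock nginx at the time of this
thread always spooled the request body (to memory or disk) before contacting
the upstream, which is exactly what the tengine patch linked above addresses.
Later nginx releases (1.7.11 and newer) added the proxy_request_buffering
directive for this. A minimal sketch of an unbuffered-upload location on such
a version, assuming a hypothetical upstream named "backend" and a hypothetical
/upload/ path:

    location /upload/ {
        proxy_pass http://backend;

        # Stream the request body to the upstream as it arrives instead of
        # spooling the whole body to disk first (added in nginx 1.7.11;
        # not available in the versions discussed in this thread).
        proxy_request_buffering off;

        # Needed if clients send the body with chunked transfer encoding.
        proxy_http_version 1.1;

        # Hypothetical: lift the request-body size limit for big uploads.
        client_max_body_size 0;
    }

This is only an illustration of the requested behavior, not the patch itself;
the tengine pull request above should be consulted for its actual directives.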
mailto:yaoweibin at gmail.com > > 459. https://github.com/taobao/tengine/pull/91 > > 460. mailto:zjay1987 at gmail.com > > 461. mailto:nginx at nginx.org > > 462. http://mailman.nginx.org/mailman/listinfo/nginx > > 463. mailto:nginx at nginx.org > > 464. http://mailman.nginx.org/mailman/listinfo/nginx > > 465. mailto:nginx at nginx.org > > 466. http://mailman.nginx.org/mailman/listinfo/nginx > > 467. mailto:zjay1987 at gmail.com > > 468. https://github.com/taobao/tengine/pull/91 > > 469. mailto:yaoweibin at gmail.com > > 470. https://github.com/taobao/tengine/pull/91 > > 471. mailto:zjay1987 at gmail.com > > 472. mailto:nginx at nginx.org > > 473. http://mailman.nginx.org/mailman/listinfo/nginx > > 474. mailto:nginx at nginx.org > > 475. http://mailman.nginx.org/mailman/listinfo/nginx > > 476. mailto:nginx at nginx.org > > 477. http://mailman.nginx.org/mailman/listinfo/nginx > > 478. mailto:nginx at nginx.org > > 479. http://mailman.nginx.org/mailman/listinfo/nginx > > 480. mailto:nginx at nginx.org > > 481. http://mailman.nginx.org/mailman/listinfo/nginx > > 482. mailto:pasik at iki.fi > > 483. https://github.com/taobao/tengine/pull/91 > > 484. mailto:yaoweibin at gmail.com > > 485. https://github.com/taobao/tengine/pull/91 > > 486. mailto:zjay1987 at gmail.com > > 487. mailto:nginx at nginx.org > > 488. http://mailman.nginx.org/mailman/listinfo/nginx > > 489. mailto:nginx at nginx.org > > 490. http://mailman.nginx.org/mailman/listinfo/nginx > > 491. mailto:nginx at nginx.org > > 492. http://mailman.nginx.org/mailman/listinfo/nginx > > 493. mailto:zjay1987 at gmail.com > > 494. https://github.com/taobao/tengine/pull/91 > > 495. mailto:yaoweibin at gmail.com > > 496. https://github.com/taobao/tengine/pull/91 > > 497. mailto:zjay1987 at gmail.com > > 498. mailto:nginx at nginx.org > > 499. http://mailman.nginx.org/mailman/listinfo/nginx > > 500. mailto:nginx at nginx.org > > 501. http://mailman.nginx.org/mailman/listinfo/nginx > > 502. mailto:nginx at nginx.org > > 503. http://mailman.nginx.org/mailman/listinfo/nginx > > 504. mailto:nginx at nginx.org > > 505. http://mailman.nginx.org/mailman/listinfo/nginx > > 506. mailto:nginx at nginx.org > > 507. http://mailman.nginx.org/mailman/listinfo/nginx > > 508. mailto:nginx at nginx.org > > 509. http://mailman.nginx.org/mailman/listinfo/nginx > > 510. mailto:nginx at nginx.org > > 511. http://mailman.nginx.org/mailman/listinfo/nginx > > 512. mailto:nginx at nginx.org > > 513. http://mailman.nginx.org/mailman/listinfo/nginx > > 514. mailto:nginx at nginx.org > > 515. http://mailman.nginx.org/mailman/listinfo/nginx > > 516. mailto:nginx at nginx.org > > 517. http://mailman.nginx.org/mailman/listinfo/nginx > > 518. mailto:nginx at nginx.org > > 519. http://mailman.nginx.org/mailman/listinfo/nginx > > 520. mailto:nginx at nginx.org > > 521. http://mailman.nginx.org/mailman/listinfo/nginx > > 522. mailto:nginx at nginx.org > > 523. http://mailman.nginx.org/mailman/listinfo/nginx > > 524. mailto:nginx at nginx.org > > 525. http://mailman.nginx.org/mailman/listinfo/nginx > > 526. mailto:nginx at nginx.org > > 527. http://mailman.nginx.org/mailman/listinfo/nginx > > 528. mailto:nginx at nginx.org > > 529. 
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us Wed Mar 20 22:09:26 2013
From: nginx-forum at nginx.us (nickpalmer)
Date: Wed, 20 Mar 2013 18:09:26 -0400
Subject: proxy_pass of PUT with no Content_length header returns 411
Message-ID: <9f8d7c95e8dc77773beb28113abdeea4.NginxMailingListEnglish@forum.nginx.org>

I am having trouble with proxy_pass and PUT without a Content-Length header
returning a 411 error.

# curl -XPUT http://localhost:8080/

411 Length Required

nginx/1.1.19
# touch temp
# curl -X PUT http://localhost:8080/ -T temp
{"response": "ok"}
#

Relevant configuration:

# Proxy to Backend Server
server {
    listen localhost:8080;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend_server;
    }
}

I found this post which seems to be the same problem:
http://forum.nginx.org/read.php?2,72279,72279#msg-72279

Is there a way to get nginx to proxy PUT requests WITHOUT a Content-Length header?
Does a newer version of nginx NOT suffer from this limitation?

Thanks,

~ Nick

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237607,237607#msg-237607

From nginx-forum at nginx.us Thu Mar 21 02:47:39 2013
From: nginx-forum at nginx.us (toddlahman)
Date: Wed, 20 Mar 2013 22:47:39 -0400
Subject: Too Many Redirects - CDN Rewrite Rule
Message-ID: <29c390cd78501ec05a92b384f02e1f19.NginxMailingListEnglish@forum.nginx.org>

I have tried both ways to redirect my static files to a CDN (content delivery
network), but both ways result in the message "too many redirects."
Can anyone tell me what I am doing wrong here?

location ~* ^.+.(jpe?g|gif|css|png|js|ico)$ {
    rewrite ^ http://cdn.mydomain.com$request_uri? permanent;
    access_log off;
}

location ~* \.(jpg|jpeg|gif|png|flv|mp3|mpg|mpeg|js|css|ico|woff)$ {
    return 301 http://cdn.mydomain.com$request_uri;
    access_log off;
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237609,237609#msg-237609

From steve at greengecko.co.nz Thu Mar 21 03:00:25 2013
From: steve at greengecko.co.nz (Steve Holdoway)
Date: Thu, 21 Mar 2013 16:00:25 +1300
Subject: Too Many Redirects - CDN Rewrite Rule
In-Reply-To: <29c390cd78501ec05a92b384f02e1f19.NginxMailingListEnglish@forum.nginx.org>
References: <29c390cd78501ec05a92b384f02e1f19.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1363834825.5117.991.camel@steve-new>

On Wed, 2013-03-20 at 22:47 -0400, toddlahman wrote:
> I have tried both ways to redirect my static files to a CDN (content
> delivery network), but both ways result in the message "too many redirects."
> Can anyone tell me what I am doing wrong here?
>
> location ~* ^.+.(jpe?g|gif|css|png|js|ico)$ {
> rewrite ^ http://cdn.mydomain.com$request_uri? permanent;
> access_log off;
> }
>
> location ~* \.(jpg|jpeg|gif|png|flv|mp3|mpg|mpeg|js|css|ico|woff)$ {
> return 301 http://cdn.mydomain.com$request_uri;
> access_log off;
> }
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237609,237609#msg-237609

That should work ok. Are you sure your cdn isn't using the same ruleset?

Steve
--
Steve Holdoway BSc(Hons) MIITP
http://www.greengecko.co.nz
Skype: sholdowa
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 6189 bytes
Desc: not available
URL:

From nginx-forum at nginx.us Thu Mar 21 04:20:53 2013
From: nginx-forum at nginx.us (toddlahman)
Date: Thu, 21 Mar 2013 00:20:53 -0400
Subject: Too Many Redirects - CDN Rewrite Rule
In-Reply-To: <1363834825.5117.991.camel@steve-new>
References: <1363834825.5117.991.camel@steve-new>
Message-ID: <1bdfdec5f49755f0d38a9538e212f6b2.NginxMailingListEnglish@forum.nginx.org>

It is possible my CDN is using the same ruleset. I am using MaxCDN, and they
use Nginx to serve static images. How would I write this ruleset to be
compatible with MaxCDN (aka NetDNA)?
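One way to write a rule like this without looping is to redirect every visitor except the CDN's own fetch addresses, for example with the geo module. This is only an illustrative sketch of the approach NetDNA themselves describe later in this thread; the CIDR ranges are the NetDNA/MaxCDN anycast ranges quoted further down, they would have to be kept up to date, and NetDNA's reply also notes they discourage relying on such a list:

    # at http{} level
    geo $redirect_to_cdn {
        default          1;
        # NetDNA/MaxCDN anycast ranges quoted later in this thread
        108.161.176.0/20 0;
        70.39.132.0/24   0;
        92.60.240.208/29 0;
        92.60.240.217/29 0;
        216.12.211.60/32 0;
        216.12.211.59/32 0;
    }

    # inside the existing server{}, in place of the two rules above
    location ~* \.(jpe?g|gif|png|flv|mp3|mpe?g|js|css|ico|woff)$ {
        access_log off;
        # real visitors are redirected; the CDN's own fetches are served locally
        if ($redirect_to_cdn) {
            return 301 http://cdn.mydomain.com$request_uri;
        }
    }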
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237609,237612#msg-237612 From pcgeopc at gmail.com Thu Mar 21 05:45:20 2013 From: pcgeopc at gmail.com (Geo P.C.) Date: Thu, 21 Mar 2013 11:15:20 +0530 Subject: Need to proxypass to different servers. In-Reply-To: <08094371-7F46-4144-B847-F1FCCEC2C9DC@sysoev.ru> References: <20130319092029.GH18002@craic.sysops.org> <08094371-7F46-4144-B847-F1FCCEC2C9DC@sysoev.ru> Message-ID: Thanks for your updates. We are able to proxypass to another domain. But the issue is domain?s sub directories are not working fine. That is in server { listen 80; server_name geotest.com; proxy_set_header Host geotest.com; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; location / { proxy_pass http://192.168.1.1; } location /cms { proxy_pass http://192.168.1.2; } } While accessing geotest.com/cms we are getting the application that running on 192.168.1.2 but when we access geotest.com/cms/address we are not getting the address page but insated we are getting the same index page. That?s we are unable to access any subdirectories inside /cms/ if we access we are getting the index page only. So can you please guys help us on it. Thanks Geo On Wed, Mar 20, 2013 at 2:12 PM, Igor Sysoev wrote: > On Mar 19, 2013, at 18:41 , Andreas Weber wrote: > > Im not expert but i think you must specify /cms BEFORE / because "/" will > match everything > > > No. Since "/" and "/cms" are not regex locations, nginx finds the maximum > match despite > location order. This is why using only non-regex locations allows to > create at once large and > easy to maintain configurations with a lot of locations. > > > -- > Igor Sysoev > http://nginx.com/services.html > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Mar 21 05:48:20 2013 From: nginx-forum at nginx.us (geopcgeo) Date: Thu, 21 Mar 2013 01:48:20 -0400 Subject: Need to proxypass to different servers. In-Reply-To: <20130319201717.GI18002@craic.sysops.org> References: <20130319201717.GI18002@craic.sysops.org> Message-ID: Thanks for your updates. We are able to proxypass to another domain. But the issue is domain?s sub directories are not working fine. That is in server { listen 80; server_name geotest.com; proxy_set_header Host geotest.com; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; location / { proxy_pass http://192.168.1.1; } location /cms { proxy_pass http://192.168.1.2; } } While accessing geotest.com/cms we are getting the application that running on 192.168.1.2 but when we access geotest.com/cms/address we are not getting the address page but insated we are getting the same index page. That?s we are unable to access any subdirectories inside /cms/ if we access we are getting the index page only. So can you please guys help us on it. Thanks Geo http://forum.nginx.org/read.php?2,237520 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237520,237615#msg-237615 From steve at greengecko.co.nz Thu Mar 21 05:55:34 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Thu, 21 Mar 2013 18:55:34 +1300 Subject: Need to proxypass to different servers. 
In-Reply-To: References: <20130319201717.GI18002@craic.sysops.org> Message-ID: <1363845334.5117.1077.camel@steve-new> On Thu, 2013-03-21 at 01:48 -0400, geopcgeo wrote: > Thanks for your updates. We are able to proxypass to another domain. But > the issue is domain?s sub directories are not working fine. > That is in > > server { > listen 80; > server_name geotest.com; > proxy_set_header Host geotest.com; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > location / { > proxy_pass http://192.168.1.1; > } > location /cms { > proxy_pass http://192.168.1.2; > } > } > > While accessing geotest.com/cms we are getting the application that running > on 192.168.1.2 but when we access geotest.com/cms/address we are not > getting the address page but insated we are getting the same index page. > > That?s we are unable to access any subdirectories inside /cms/ if we access > we are getting the index page only. > > So can you please guys help us on it. > > Thanks > > Geo > http://forum.nginx.org/read.php?2,237520 > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237520,237615#msg-237615 location ^~ /cms { } ?? -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 6189 bytes Desc: not available URL: From Peter_Booth at s5a.com Thu Mar 21 06:03:03 2013 From: Peter_Booth at s5a.com (Peter Booth) Date: Thu, 21 Mar 2013 02:03:03 -0400 Subject: Too Many Redirects - CDN Rewrite Rule In-Reply-To: <29c390cd78501ec05a92b384f02e1f19.NginxMailingListEnglish@forum.nginx.org> References: <29c390cd78501ec05a92b384f02e1f19.NginxMailingListEnglish@forum.nginx.org> Message-ID: <198FDE60-CF46-4A76-A1EB-16067B0E15F5@s5a.com> Why are you trying to rewrite your URLs at all? Why don't you simply endure that your HTML or dynamic content references images at cdn.mydomain.com? Sent from my iPhone On Mar 20, 2013, at 10:47 PM, "toddlahman" wrote: > I have tried both ways to redirect my static files to a CDN (content > delivery network), but both ways result in the message "too many redirects." > Can anyone tell me what I am doing wrong here? > > location ~* ^.+.(jpe?g|gif|css|png|js|ico)$ { > rewrite ^ http://cdn.mydomain.com$request_uri? permanent; > access_log off; > } > > > location ~* \.(jpg|jpeg|gif|png|flv|mp3|mpg|mpeg|js|css|ico|woff)$ { > return 301 http://cdn.mydomain.com$request_uri; > access_log off; > } > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237609,237609#msg-237609 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Mar 21 06:10:39 2013 From: nginx-forum at nginx.us (geopcgeo) Date: Thu, 21 Mar 2013 02:10:39 -0400 Subject: Need to proxypass to different servers. In-Reply-To: <1363845334.5117.1077.camel@steve-new> References: <1363845334.5117.1077.camel@steve-new> Message-ID: We also tried these options: location ^~ /cms { } location ^~ /cms/ { } But still gets same issue. We are only getting the index page. 
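If the application on 192.168.1.2 is mounted at its own root and knows nothing about a /cms prefix -- which the index-page symptom suggests, though that is only an assumption -- the usual pattern is to give proxy_pass a URI part so that nginx strips the matched prefix before passing the request upstream. A minimal sketch:

    location /cms/ {
        # with a URI part on proxy_pass, the /cms/ prefix matched by the
        # location is replaced by "/", so /cms/address is sent upstream as /address
        proxy_pass http://192.168.1.2/;
    }

If the backend really does expect the /cms prefix, the original form (proxy_pass http://192.168.1.2; with no URI part) is correct, and the symptom points at the backend's own routing instead.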
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237520,237618#msg-237618 From john at disqus.com Thu Mar 21 08:45:14 2013 From: john at disqus.com (John Watson) Date: Thu, 21 Mar 2013 01:45:14 -0700 Subject: Upstream least_conn behavior irregularity Message-ID: Was investigating some issues today when we noticed that least_conn wasn't behaving as expected. upstream backend { least_conn; server unix:/tmp/sock-1.sock; server unix:/tmp/sock-2.sock; server unix:/tmp/sock-3.sock; } The expected behavior for 4 simultaneous requests it should distribute them: sock-1: 2 sock-2: 1 sock-3: 1 However, what we're seeing is: sock-1: 3 sock-2: 1 sock-3: 0 Which coincidentally lines up with the number of requests a socket can service simultaneously. This is using 1.2.7 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Mar 21 10:03:07 2013 From: nginx-forum at nginx.us (fastcatch) Date: Thu, 21 Mar 2013 06:03:07 -0400 Subject: When does nginx return a Bad gateway (502)? In-Reply-To: <8fd0e2adc40cbfba00b43209e857e1af.NginxMailingListEnglish@forum.nginx.org> References: <5148CEEA.3040308@consbio.org> <8fd0e2adc40cbfba00b43209e857e1af.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7fd428adca10af0680e00a0f8536396d.NginxMailingListEnglish@forum.nginx.org> I can confirm that allowing a longer backlog queue (in unicorn config + net.core.somaxconn and net.core.netdev_max_backlog) did solve the issue (for me). I posted a bit more details at http://stackoverflow.com/a/15544229/1061997. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237573,237622#msg-237622 From mdounin at mdounin.ru Thu Mar 21 10:32:34 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 21 Mar 2013 14:32:34 +0400 Subject: proxy_pass of PUT with no Content_length header returns 411 In-Reply-To: <9f8d7c95e8dc77773beb28113abdeea4.NginxMailingListEnglish@forum.nginx.org> References: <9f8d7c95e8dc77773beb28113abdeea4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130321103234.GO62550@mdounin.ru> Hello! On Wed, Mar 20, 2013 at 06:09:26PM -0400, nickpalmer wrote: > I am having trouble with proxy_pass and PUT without a Content-Length header > returning a 411 error. > > # curl -XPUT http://localhost:8080/ > > 411 Length Required > >

> 411 Length Required
>
> nginx/1.1.19
> > > # touch temp > # curl -X PUT http://localhost:8080/ -T temp > {"response": "ok"} > # > > Relevant configuration: > > # Proxy to Backend Server > server { > listen localhost:8080; > > location / { > proxy_set_header Host $http_host; > proxy_set_header X-Forwarded-Host $http_host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_pass http://backend_server; > } > } > > I found this post which seems to be the same problem: > http://forum.nginx.org/read.php?2,72279,72279#msg-72279 > > Is there a way to get nginx to proxy PUT requests WITHOUT a Content-Length > header? > Does a newer version of nginx NOT suffer from this limitation? PUT requests without a Content-Length header (either using chunked transfer encoding, or without a request body at all) are allowed in nginx 1.3.9+. -- Maxim Dounin http://nginx.org/en/donation.html From ru at nginx.com Thu Mar 21 11:47:08 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 21 Mar 2013 15:47:08 +0400 Subject: Upstream least_conn behavior irregularity In-Reply-To: References: Message-ID: <20130321114708.GD84136@lo0.su> On Thu, Mar 21, 2013 at 01:45:14AM -0700, John Watson wrote: > Was investigating some issues today when we noticed that least_conn wasn't > behaving as expected. > upstream backend { > ? least_conn; > ? server unix:/tmp/sock-1.sock; > ? server unix:/tmp/sock-2.sock; > ? server unix:/tmp/sock-3.sock; > } > The expected behavior for 4 simultaneous requests it should distribute > them: > sock-1: 2 > sock-2: 1 > sock-3: 1 > However, what we're seeing is: > sock-1: 3 > sock-2: 1 > sock-3: 0 > Which coincidentally lines up with the number of requests a socket can > service simultaneously. > This is using 1.2.7 And the number of configured worker processes is? From WBrown at e1b.org Thu Mar 21 12:18:14 2013 From: WBrown at e1b.org (WBrown at e1b.org) Date: Thu, 21 Mar 2013 08:18:14 -0400 Subject: Translating an F5 rule In-Reply-To: References: <37FEFC80-6A9B-43F8-8503-3D7149F2507F@sysoev.ru> Message-ID: > > proxy_set_header SWSSLHDR $server_port; > > > > nice catch! But once again, because HTTP_REQUEST is client-side, so > says this F5-certified engineer with reference to the docs, it should > be $proxy_port instead of $server_port. Thanks to everyone that responded to my questions. Nginx has a great community around it! It has become clear that I need to learn more about the HTTP protocol. I am starting with the O'Reilly book "HTTP The Definitive Guide". Does anyone have other recommended reading to help my understand how HTTP operates? Confidentiality Notice: This electronic message and any attachments may contain confidential or privileged information, and is intended only for the individual or entity identified above as the addressee. If you are not the addressee (or the employee or agent responsible to deliver it to the addressee), or if this message has been addressed to you in error, you are hereby notified that you may not copy, forward, disclose or use any part of this message or any attachments. Please notify the sender immediately by return e-mail or telephone and delete this message from your system. From yaoweibin at gmail.com Thu Mar 21 12:52:53 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Thu, 21 Mar 2013 20:52:53 +0800 Subject: [Announce] Tengine-1.4.4 is released Message-ID: Hi folks! We are excited to announce that Tengine-1.4.4 (development version) has been released. 
You can either checkout the source code from GitHub: https://github.com/alibaba/tengine or download the tarball directly: http://tengine.taobao.org/download/tengine-1.4.4.tar.gz The highlight of this release is the session_sticky module which supports session persistence between a client and a server. By using this module, subsequent requests will be dispatched to the same upstream server as the first request. Its syntax is very similar to HAProxy's. BTW, Tengine is now based on the Nginx-1.2.7, the latest stable version. The full changelog is as follows: *) Feature: added the session_sticky module by using which one client can be always served by the same upstream server. (dinic) *) Feature: now the sysguard module can protect the server based on the amount of free memory. (lifeibo) *) Feature: added support for geoip regional database in geoip module. (jasonlfunk) *) Feature: log_empty_request can also disable the logs for timeout (408) empty request. (yaoweibin) *) Change: merged changes between Nginx-1.2.5 and Nginx-1.2.7. (cfsego) *) Change: CPU affinity is off by default now. (cfsego) *) Bugfix: fixed a bug that sysguard and upstream_check module didn't compile on Solaris 11. (lifeibo, yaoweibin) *) Bugfix: fixed a bug with TFS module that it might return bad values. (zhcn381) *) Bugfix: fixed a bug with TFS module that it might corrupt large files. (zhcn381) For those who don't know Tengine, it is a free and open source distribution of Nginx with some advanced features. See our website for more details: http://tengine.taobao.org Have fun! Regards, -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From lukas.herbolt at etnetera.cz Thu Mar 21 14:30:32 2013 From: lukas.herbolt at etnetera.cz (=?UTF-8?B?SGXFmWJvbHQsIEx1a8OhxaE=?=) Date: Thu, 21 Mar 2013 15:30:32 +0100 Subject: tcp splicing Message-ID: Hello, I am new in nginx and I'd like to know if nginx implements tcp-splicing system call. Thanx -- Luk?? He?bolt Linux Administrator ET NETERA | smart e-business [a] Milady Hor?kov? 108, 160 00 Praha 6 [t] +420 725 267 158 [i] www.etnetera.cz ~ [www.ifortuna.cz | www.o2.cz | www.datart.cz ] [www.skodaplus.cz | www.nivea.cz | www.allianz.cz] Created by ET NETERA | Powered by jNetPublish -------------- next part -------------- An HTML attachment was scrubbed... URL: From coolbsd at hotmail.com Thu Mar 21 16:17:14 2013 From: coolbsd at hotmail.com (Cool) Date: Thu, 21 Mar 2013 09:17:14 -0700 Subject: How to change cookie header in a filter? In-Reply-To: <20130320083819.GA62550@mdounin.ru> References: <20130319130423.GL15378@mdounin.ru> <20130320083819.GA62550@mdounin.ru> Message-ID: Got it, thanks, appreciate. -C ? 13-3-20 ??1:38, Maxim Dounin ??: > Hello! > > On Tue, Mar 19, 2013 at 11:50:43AM -0700, Cool wrote: > >> Thanks Maxim, I got what you mean. >> >> Since I'm using fastCGI so I put something like this: >> >> fastcgi_param HTTP_COOKIE "$http_cookie; mycookie=$cookie_note"; >> >> (I populated cookie_note in my filter already, this was done for >> logging purpose thus it is just a reuse of existing facility) >> >> More problems come with this solution: >> >> 1. it seems fastcgi_param called before my filter so $cookie_note >> always got empty, and > You shouldn't rely on your filter already executed, and should > instead register a variable handler which does the actual work. > This way it will work at any time. > >> 2. 
it seems fastcgi_param could not be used in a if directive so I >> end up with change the cookie header even the mycookie is presented >> in user's request, thus > If there are conditions when you should not add a cookie I would > recommend you implementing a variable with full Cookie header you > want to pass, e.g. > > fastcgi_param HTTP_COOKIE $my_new_cookie; > > This way you may implement arbitrary conditions you want in your > module. (You may also construct the variable using if/set/map/etc, but > doing appropriate tests in your module would be less error prone.) > From john at disqus.com Thu Mar 21 16:34:19 2013 From: john at disqus.com (John Watson) Date: Thu, 21 Mar 2013 09:34:19 -0700 Subject: Upstream least_conn behavior irregularity In-Reply-To: <20130321114708.GD84136@lo0.su> References: <20130321114708.GD84136@lo0.su> Message-ID: Ohhhh... that makes complete sense now. Had 4 workers. Thanks! On Thu, Mar 21, 2013 at 4:47 AM, Ruslan Ermilov wrote: > On Thu, Mar 21, 2013 at 01:45:14AM -0700, John Watson wrote: > > Was investigating some issues today when we noticed that least_conn > wasn't > > behaving as expected. > > upstream backend { > > least_conn; > > server unix:/tmp/sock-1.sock; > > server unix:/tmp/sock-2.sock; > > server unix:/tmp/sock-3.sock; > > } > > The expected behavior for 4 simultaneous requests it should distribute > > them: > > sock-1: 2 > > sock-2: 1 > > sock-3: 1 > > However, what we're seeing is: > > sock-1: 3 > > sock-2: 1 > > sock-3: 0 > > Which coincidentally lines up with the number of requests a socket can > > service simultaneously. > > This is using 1.2.7 > > And the number of configured worker processes is? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Mar 21 17:55:58 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 21 Mar 2013 17:55:58 +0000 Subject: tcp splicing In-Reply-To: References: Message-ID: <20130321175558.GK18002@craic.sysops.org> On Thu, Mar 21, 2013 at 03:30:32PM +0100, He?bolt, Luk?? wrote: Hi there, > I am new in nginx and I'd like to know if nginx implements tcp-splicing > system call. If it does, I guess that the string "splice" will appear in the source code somewhere. $ grep -r splice ~/nginx/nginx-1.2.7 It shouldn't be too hard for you to check a development version. f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Mar 21 18:01:06 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 21 Mar 2013 18:01:06 +0000 Subject: Need to proxypass to different servers. In-Reply-To: References: <20130319201717.GI18002@craic.sysops.org> Message-ID: <20130321180106.GL18002@craic.sysops.org> On Thu, Mar 21, 2013 at 01:48:20AM -0400, geopcgeo wrote: Hi there, > Thanks for your updates. We are able to proxypass to another domain. But > the issue is domain?s sub directories are not working fine. So, when you do curl -i http://geotest.com/cms/address what response do you get? What response do you want to get? What request was actually made to the server on 192.168.1.2? What do you get if you make that request yourself with curl? 
f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Mar 21 18:10:19 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 21 Mar 2013 18:10:19 +0000 Subject: Too Many Redirects - CDN Rewrite Rule In-Reply-To: <1bdfdec5f49755f0d38a9538e212f6b2.NginxMailingListEnglish@forum.nginx.org> References: <1363834825.5117.991.camel@steve-new> <1bdfdec5f49755f0d38a9538e212f6b2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130321181019.GM18002@craic.sysops.org> On Thu, Mar 21, 2013 at 12:20:53AM -0400, toddlahman wrote: Hi there, > It is possible my CDN is using the same ruleset. I am using MaxCDN, and they > use Nginx to serve static images. How would I write this ruleset to be > compatible with MaxCDN (aka NetDNA)? You shouldn't have to care. If your server is configured to cause a redirect loop, it's your problem. If their one is, it's their problem. So: which server is causing the "Too Many Redirects"? curl -i http://yourserver/XXico Do you get a http redirect? What happens when you "curl -i" the returned Location: value? Repeat until you can see which server (yourserver or the cdn) is causing the redirect chain. Then fix that server. f -- Francis Daly francis at daoine.org From john at disqus.com Thu Mar 21 19:03:59 2013 From: john at disqus.com (John Watson) Date: Thu, 21 Mar 2013 12:03:59 -0700 Subject: Upstream least_conn behavior irregularity In-Reply-To: References: <20130321114708.GD84136@lo0.su> Message-ID: Well doesn't make sense when theres >4 concurrent requests At any given time there's around 12 active_connections, but sock-3 is still never being used On Thu, Mar 21, 2013 at 9:34 AM, John Watson wrote: > Ohhhh... that makes complete sense now. > > Had 4 workers. > > Thanks! > > > On Thu, Mar 21, 2013 at 4:47 AM, Ruslan Ermilov wrote: > >> On Thu, Mar 21, 2013 at 01:45:14AM -0700, John Watson wrote: >> > Was investigating some issues today when we noticed that least_conn >> wasn't >> > behaving as expected. >> > upstream backend { >> > least_conn; >> > server unix:/tmp/sock-1.sock; >> > server unix:/tmp/sock-2.sock; >> > server unix:/tmp/sock-3.sock; >> > } >> > The expected behavior for 4 simultaneous requests it should >> distribute >> > them: >> > sock-1: 2 >> > sock-2: 1 >> > sock-3: 1 >> > However, what we're seeing is: >> > sock-1: 3 >> > sock-2: 1 >> > sock-3: 0 >> > Which coincidentally lines up with the number of requests a socket >> can >> > service simultaneously. >> > This is using 1.2.7 >> >> And the number of configured worker processes is? >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Mar 21 19:13:43 2013 From: nginx-forum at nginx.us (toddlahman) Date: Thu, 21 Mar 2013 15:13:43 -0400 Subject: Too Many Redirects - CDN Rewrite Rule In-Reply-To: <1363834825.5117.991.camel@steve-new> References: <1363834825.5117.991.camel@steve-new> Message-ID: <294d931c16a7063e0fda7a9808c69fcb.NginxMailingListEnglish@forum.nginx.org> I talked to NetDNA and they said my redirect would in fact cause a redirect loop. Unfortunately the PHP project I am using the CDN for would take a long time to convert URLs to the CDN for static files so another route may be a better solution. There has been mention of using proxy_pass to an upstream to pass static files off to the upstream CDN. 
I've tried a few different ways to apply this, but none have worked. Does anyone have a working proxy_pass solution to offload static files to the CDN? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237609,237651#msg-237651 From contact at jpluscplusm.com Thu Mar 21 19:16:47 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 21 Mar 2013 19:16:47 +0000 Subject: Too Many Redirects - CDN Rewrite Rule In-Reply-To: <294d931c16a7063e0fda7a9808c69fcb.NginxMailingListEnglish@forum.nginx.org> References: <1363834825.5117.991.camel@steve-new> <294d931c16a7063e0fda7a9808c69fcb.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 21 March 2013 19:13, toddlahman wrote: > I talked to NetDNA and they said my redirect would in fact cause a redirect > loop. Unfortunately the PHP project I am using the CDN for would take a long > time to convert URLs to the CDN for static files so another route may be a > better solution. > > There has been mention of using proxy_pass to an upstream to pass static > files off to the upstream CDN. I've tried a few different ways to apply > this, but none have worked. Does anyone have a working proxy_pass solution > to offload static files to the CDN? Francis already told you what to do. Please follow his instructions and report back how you get on, where you get stuck, or what your final working config looks like so others can learn from it. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From francis at daoine.org Thu Mar 21 19:21:23 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 21 Mar 2013 19:21:23 +0000 Subject: Too Many Redirects - CDN Rewrite Rule In-Reply-To: <294d931c16a7063e0fda7a9808c69fcb.NginxMailingListEnglish@forum.nginx.org> References: <1363834825.5117.991.camel@steve-new> <294d931c16a7063e0fda7a9808c69fcb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130321192123.GN18002@craic.sysops.org> On Thu, Mar 21, 2013 at 03:13:43PM -0400, toddlahman wrote: Hi there, > I talked to NetDNA and they said my redirect would in fact cause a redirect > loop. Unfortunately the PHP project I am using the CDN for would take a long > time to convert URLs to the CDN for static files so another route may be a > better solution. I'm a bit confused about how your configuration can have any affect on someone else's CDN. If you want to fetch "file.js" from the CDN, what url should you use? Perhaps I misunderstand what NetDNA actually do. f -- Francis Daly francis at daoine.org From ru at nginx.com Thu Mar 21 19:29:32 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 21 Mar 2013 23:29:32 +0400 Subject: Upstream least_conn behavior irregularity In-Reply-To: References: <20130321114708.GD84136@lo0.su> Message-ID: <20130321192932.GN84136@lo0.su> On Thu, Mar 21, 2013 at 12:03:59PM -0700, John Watson wrote: > Well doesn't make sense when theres >4 concurrent requests > At any given time there's around 12 active_connections, but sock-3 is > still never being used Can you see a difference with only one worker process? Currently, different workers have distinct counters of active connections. It should be unnoticed under a high load. > On Thu, Mar 21, 2013 at 9:34 AM, John Watson <[1]john at disqus.com> wrote: > > Ohhhh... that makes complete sense now. > Had 4 workers. > Thanks! > > On Thu, Mar 21, 2013 at 4:47 AM, Ruslan Ermilov <[2]ru at nginx.com> wrote: > > On Thu, Mar 21, 2013 at 01:45:14AM -0700, John Watson wrote: > > ? 
?Was investigating some issues today when we noticed that > least_conn wasn't > > ? ?behaving as expected. > > ? ?upstream backend { > > ? ?? least_conn; > > ? ?? server unix:/tmp/sock-1.sock; > > ? ?? server unix:/tmp/sock-2.sock; > > ? ?? server unix:/tmp/sock-3.sock; > > ? ?} > > ? ?The expected behavior for 4 simultaneous requests it should > distribute > > ? ?them: > > ? ?sock-1: 2 > > ? ?sock-2: 1 > > ? ?sock-3: 1 > > ? ?However, what we're seeing is: > > ? ?sock-1: 3 > > ? ?sock-2: 1 > > ? ?sock-3: 0 > > ? ?Which coincidentally lines up with the number of requests a > socket can > > ? ?service simultaneously. > > ? ?This is using 1.2.7 > > And the number of configured worker processes is? From nginx-forum at nginx.us Thu Mar 21 19:39:37 2013 From: nginx-forum at nginx.us (toddlahman) Date: Thu, 21 Mar 2013 15:39:37 -0400 Subject: Too Many Redirects - CDN Rewrite Rule In-Reply-To: References: Message-ID: As I already said, the code doesn't work with NetDNA, so there is nothing to fix. A different approach must be taken, and I am looking for solutions other than that which is already broken. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237609,237655#msg-237655 From contact at jpluscplusm.com Thu Mar 21 19:52:36 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 21 Mar 2013 19:52:36 +0000 Subject: Too Many Redirects - CDN Rewrite Rule In-Reply-To: References: Message-ID: On 21 March 2013 19:39, toddlahman wrote: > As I already said, the code doesn't work with NetDNA, so there is nothing to > fix. A different approach must be taken, and I am looking for solutions > other than that which is already broken. You haven't explained *why* it's broken, or what you've done to troubleshoot the problem. You need to follow the instructions that Francis wrote earlier. Show us the output of the curl invocations. Tell us what your CDN support have told you about how you /should/ configure this - even if just conceptually, and not in nginx-config language. There isn't a single nginx "CDN solution" because there isn't just one "CDN". They all implement their services differently, and those differences will dictate how you solve this problem. Hence you need to tell us everything they've told /you/. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From john at disqus.com Thu Mar 21 20:01:57 2013 From: john at disqus.com (John Watson) Date: Thu, 21 Mar 2013 13:01:57 -0700 Subject: Upstream least_conn behavior irregularity In-Reply-To: <20130321192932.GN84136@lo0.su> References: <20130321114708.GD84136@lo0.su> <20130321192932.GN84136@lo0.su> Message-ID: Going to pushing out the change to 1 worker later today. It's just become more of an exercise in understanding why it was behaving that way. Even under "high" load (in this case ~50 active_connections), the 3 socks don't seem to be getting equal number of requests. On Thu, Mar 21, 2013 at 12:29 PM, Ruslan Ermilov wrote: > On Thu, Mar 21, 2013 at 12:03:59PM -0700, John Watson wrote: > > Well doesn't make sense when theres >4 concurrent requests > > At any given time there's around 12 active_connections, but sock-3 is > > still never being used > > Can you see a difference with only one worker process? > > Currently, different workers have distinct counters of active connections. > It should be unnoticed under a high load. > > > On Thu, Mar 21, 2013 at 9:34 AM, John Watson <[1]john at disqus.com> > wrote: > > > > Ohhhh... that makes complete sense now. > > Had 4 workers. > > Thanks! 
> > > > On Thu, Mar 21, 2013 at 4:47 AM, Ruslan Ermilov <[2]ru at nginx.com> > wrote: > > > > On Thu, Mar 21, 2013 at 01:45:14AM -0700, John Watson wrote: > > > Was investigating some issues today when we noticed that > > least_conn wasn't > > > behaving as expected. > > > upstream backend { > > > least_conn; > > > server unix:/tmp/sock-1.sock; > > > server unix:/tmp/sock-2.sock; > > > server unix:/tmp/sock-3.sock; > > > } > > > The expected behavior for 4 simultaneous requests it should > > distribute > > > them: > > > sock-1: 2 > > > sock-2: 1 > > > sock-3: 1 > > > However, what we're seeing is: > > > sock-1: 3 > > > sock-2: 1 > > > sock-3: 0 > > > Which coincidentally lines up with the number of requests a > > socket can > > > service simultaneously. > > > This is using 1.2.7 > > > > And the number of configured worker processes is? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Mar 21 22:48:43 2013 From: nginx-forum at nginx.us (toddlahman) Date: Thu, 21 Mar 2013 18:48:43 -0400 Subject: Too Many Redirects - CDN Rewrite Rule In-Reply-To: References: Message-ID: <69ddd87799d066cb7940c16da0bbb076.NginxMailingListEnglish@forum.nginx.org> The reply I received from NetDNA after supplying the same information as I did here is as follows: "Too many redirects" is a legit message in this scenario - example: You are redirecting domain.com/file.jpg TO cdn.domain.com/file.jpg ---> request comes to CDN and CDN neds to cache this file from origin so it tries to fetch from origin from location "domain.com/file.jpg" ---> request comes to origin and redirect rule you made redirects this request back to CDN <--- this is where infinite loop starts. This is not a proper way to implement CDN as you have to monitor who is requesting your origin file so you could know whether it's a request you want to redirect or not. The best way to handle this is to monitor our anycast IPs and redirect everything except for those ips: 108.161.176.0/20 70.39.132.0/24 92.60.240.208/29 92.60.240.217/29 216.12.211.60/32 216.12.211.59/32 If you want to implement CDN this way, we can't support that implementation as we don't really encourage our client to use this type of implementation. The reason is quite simple: We are going to add new IP block that has to be white listed and until you make update for your redirects rule you'll be pushing infinite redirects to our servers causing service not to work until you add new block as well. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237609,237661#msg-237661 From nginx-forum at nginx.us Fri Mar 22 04:10:46 2013 From: nginx-forum at nginx.us (tstianzy) Date: Fri, 22 Mar 2013 00:10:46 -0400 Subject: Nginx - SMTP no authentication In-Reply-To: <20091010102854.GJ79672@mdounin.ru> References: <20091010102854.GJ79672@mdounin.ru> Message-ID: I also have this problem, when I use externet domain mail account send mail to me, it display "530 5.7.1 Authentication required". can you help me ? 
thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,12609,237663#msg-237663 From Peter_Booth at s5a.com Fri Mar 22 04:31:35 2013 From: Peter_Booth at s5a.com (Peter Booth) Date: Fri, 22 Mar 2013 00:31:35 -0400 Subject: Too Many Redirects - CDN Rewrite Rule In-Reply-To: <69ddd87799d066cb7940c16da0bbb076.NginxMailingListEnglish@forum.nginx.org> References: <69ddd87799d066cb7940c16da0bbb076.NginxMailingListEnglish@forum.nginx.org> Message-ID: <01560351-AACA-4C55-81B3-A47B900BC57C@s5a.com> What netdna said is sensible and I imagine any cdn would say the same. Ultimately the ball is in your court. If you want to use a CDN (and it's not compulsory) then change your app so that the image links are absolute links with the cdn domain name. There's no good reason for nginx to have any part of this Sent from my iPhone On Mar 21, 2013, at 6:49 PM, "toddlahman" wrote: > The reply I received from NetDNA after supplying the same information as I > did here is as follows: > > "Too many redirects" is a legit message in this scenario - example: You are > redirecting domain.com/file.jpg TO cdn.domain.com/file.jpg ---> request > comes to CDN and CDN neds to cache this file from origin so it tries to > fetch from origin from location "domain.com/file.jpg" ---> request comes to > origin and redirect rule you made redirects this request back to CDN <--- > this is where infinite loop starts. > > This is not a proper way to implement CDN as you have to monitor who is > requesting your origin file so you could know whether it's a request you > want to redirect or not. The best way to handle this is to monitor our > anycast IPs and redirect everything except for those ips: > > 108.161.176.0/20 > 70.39.132.0/24 > 92.60.240.208/29 > 92.60.240.217/29 > 216.12.211.60/32 > 216.12.211.59/32 > > If you want to implement CDN this way, we can't support that implementation > as we don't really encourage our client to use this type of implementation. > The reason is quite simple: We are going to add new IP block that has to be > white listed and until you make update for your redirects rule you'll be > pushing infinite redirects to our servers causing service not to work until > you add new block as well. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237609,237661#msg-237661 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Mar 22 09:25:15 2013 From: nginx-forum at nginx.us (selphon) Date: Fri, 22 Mar 2013 05:25:15 -0400 Subject: How to make nginx establish persistent connections with squid? 
Message-ID: hi, I use nginx as load balance and forward request to squid use http/1.1, the topology is below: chrome ---> nginx(:80) ---> squid(:8080) ---> origin server(nginx :80) the nginx configuration: upstream backend { server 192.168.13.210:80; keepalive 10; } server { listen 80 default; server_name _; proxy_set_header Host $host; proxy_http_version 1.1; proxy_set_header Connection ""; location / { proxy_pass http://backend; } } the squid configuration of persistent connections is supported on both client side and server side: ##########timeout########## client_persistent_connections on server_persistent_connections on request_timeout 240 seconds #to client/wait client's request client_lifetime 240 seconds #to client/all request time persistent_request_timeout 30 seconds #to client/keepalive pconn_timeout 30 seconds #to origin server or peer/keepalive connect_timeout 240 seconds #to origin server/only connect read_timeout 240 seconds #to origin server or peer/wait recv data Then, I made 5 requests such as 'http://test.cache.com/p3.jpg?tt=2013032206' and could not find any persistent connection between nginx and squid. the squid log show: 127.0.0.1 - - [22/Mar/2013:16:12:06 +0800] "GET http://test.cache.com/p3.jpg?tt=2013032205 HTTP/1.1" 304 364 "http://test.cache.com/p3.jpg?tt=2013032205" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.172 Safari/537.22" TCP_REFRESH_HIT:DIRECT/192.168.13.210 0 "Host: test.cache.com #request header Cache-Control: max-age=0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.172 Safari/537.22 Referer: http://test.cache.com/p3.jpg?tt=2013032205 Accept-Encoding: gzip,deflate,sdch Accept-Language: zh-CN,zh;q=0.8 Accept-Charset: GBK,utf-8;q=0.7,*;q=0.3 If-Modified-Since: Tue, 23 Oct 2012 04:07:19 GMT" "HTTP/1.0 304 Not Modified #response header Server: nginx/1.2.6 Date: Fri, 22 Mar 2013 08:12:06 GMT Last-Modified: Tue, 23 Oct 2012 04:07:19 GMT Expires: Fri, 22 Mar 2013 10:12:06 GMT Cache-Control: max-age=7200 X-Cache: MISS from vm-linux1.test.com X-Cache-Lookup: HIT from vm-linux1.test.com:8080 Via: 1.1 vm-linux1.test.com:8080 (squid/2.7.STABLE9) Connection: close" as we see(last line), squid announced the Connection should be close after the request. I think this is why nginx couldn't make a persistent connection with squid. then I try to set: proxy_set_header Connection "keep-alive"; forcing that the request must be keep-alive. 
the request and response is: 127.0.0.1 - - [22/Mar/2013:15:59:09 +0800] "GET http://test.cache.com/p3.jpg?tt=2013032203 HTTP/1.1" 304 369 "http://test.cache.com/p3.jpg?tt=2013032203" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.172 Safari/537.22" TCP_REFRESH_HIT:DIRECT/192.168.13.210 0 "Host: test.cache.com Connection: keep-alive Cache-Control: max-age=0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.172 Safari/537.22 Referer: http://test.cache.com/p3.jpg?tt=2013032203 Accept-Encoding: gzip,deflate,sdch Accept-Language: zh-CN,zh;q=0.8 Accept-Charset: GBK,utf-8;q=0.7,*;q=0.3 If-Modified-Since: Tue, 23 Oct 2012 04:07:19 GMT" "HTTP/1.0 304 Not Modified Server: nginx/1.2.6 Date: Fri, 22 Mar 2013 07:59:09 GMT Last-Modified: Tue, 23 Oct 2012 04:07:19 GMT Expires: Fri, 22 Mar 2013 09:59:09 GMT Cache-Control: max-age=7200 X-Cache: MISS from vm-linux1.test.com X-Cache-Lookup: HIT from vm-linux1.test.com:8080 Via: 1.1 vm-linux1.test.com:8080 (squid/2.7.STABLE9) Connection: keep-alive" though the Connection of response is keep-alive, nginx still didn't make any persistent connection with squid. Is there any way to configure nginx use a persistent connection where forward requests to squid ? help help nginx_version: 1.2.6 squid_version: 2.7.STABLE9 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237666,237666#msg-237666 From francis at daoine.org Fri Mar 22 09:27:24 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 22 Mar 2013 09:27:24 +0000 Subject: Too Many Redirects - CDN Rewrite Rule In-Reply-To: <69ddd87799d066cb7940c16da0bbb076.NginxMailingListEnglish@forum.nginx.org> References: <69ddd87799d066cb7940c16da0bbb076.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130322092724.GO18002@craic.sysops.org> On Thu, Mar 21, 2013 at 06:48:43PM -0400, toddlahman wrote: Hi there, > The reply I received from NetDNA after supplying the same information as I > did here is as follows: > > "Too many redirects" is a legit message in this scenario - example: You are > redirecting domain.com/file.jpg TO cdn.domain.com/file.jpg ---> request > comes to CDN and CDN neds to cache this file from origin so it tries to > fetch from origin from location "domain.com/file.jpg" ---> request comes to > origin and redirect rule you made redirects this request back to CDN <--- > this is where infinite loop starts. Ah, okay, that makes sense. You don't upload to the CDN; it instead mirrors your content on demand. (That was the part I had missed.) As was mentioned, the correct solution is for all of your links to actually be to things like http://cdn.domain.com/file.jpg; but as was also mentioned, that is not a quick change for you. The NetDNA suggestion -- to serve when it comes from them and to redirect when it comes from others -- is reasonable; but as they also indicate, "when it comes from them" is non-trivial to get right, and will be a problem if you ever get it wrong in one direction. Using proxy_pass in your nginx would defeat the main purpose of using a CDN, as your own bandwidth would be used always. An alternative possibility could be for you to set up another server{} block with a server_name of (say) cdn-src.domain.com which has the same document root as your main one, but which just serves the static files. 
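A rough sketch of such a block, assuming the shared document root is /var/www/domain.com (the path and the extension list are only placeholders):

    server {
        listen 80;
        server_name cdn-src.domain.com;
        root /var/www/domain.com;   # same document root as the main server

        # serve only the static files the CDN is expected to mirror
        location ~* \.(jpe?g|gif|png|flv|mp3|mpe?g|js|css|ico|woff)$ {
            access_log off;
        }

        # everything else is refused, so this name never serves pages
        location / {
            return 404;
        }
    }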
Then, at the point where your CDN is configured to map cdn.domain.com/file.jpg -> domain.com/file.jpg, map it instead to cdn-src.domain.com/file.jpg. With no change to your application, your users go to domain.com/something and follow a link to domain.com/file.jpg; your "main" server redirects them to cdn.domain.com/file.jpg; they get that, which fetches (now successfully) cdn-src.domain.com/file.jpg and returns it to the user. Does that sound like it might do what you want, without taking too much effort to keep synchronised? f -- Francis Daly francis at daoine.org From kirilk at cloudxcel.com Fri Mar 22 09:32:47 2013 From: kirilk at cloudxcel.com (Kiril Kalchev) Date: Fri, 22 Mar 2013 11:32:47 +0200 Subject: Too Many Redirects - CDN Rewrite Rule In-Reply-To: <01560351-AACA-4C55-81B3-A47B900BC57C@s5a.com> References: <69ddd87799d066cb7940c16da0bbb076.NginxMailingListEnglish@forum.nginx.org> <01560351-AACA-4C55-81B3-A47B900BC57C@s5a.com> Message-ID: Hi, What Peter said is correct the best way is to prepare your application for using CDNs. But I think for a quick workaround of the problem you can try to make another server to be used only from CDN. server { location ~* ^.+.(jpe?g|gif|css|png|js|ico)$ { rewrite ^ http://cdn.mydomain.com/my_sercret_cdn_location_(some_random_hash)/$request_uri? permanent; access_log off; } location ~* \.(jpg|jpeg|gif|png|flv|mp3|mpg|mpeg|js|css|ico|woff)$ { return 301 http://cdn.mydomain.com/my_sercret_cdn_location_(some_random_hash)/$request_uri; access_log off; } location /my_sercret_cdn_location_(some_random_hash) { you can put some http_access _module rules here to allow requests only from your CDN. rewrite ^/my_sercret_cdn_location_(some_random_hash)(.*)$ $1 break; pass request to your backend server; } } Is there any reason of using rewrite in the first location and return in the second. As far as I know they do the same thing in this case. You can implement this with separate nginx server and domain instead of location. Please correct me if I am missing something. Regards, Kiril On Mar 22, 2013, at 6:31 AM, Peter Booth wrote: > What netdna said is sensible and I imagine any cdn would say the same. Ultimately the ball is in your court. > > If you want to use a CDN (and it's not compulsory) then change your app so that the image links are absolute links with the cdn domain name. There's no good reason for nginx to have any part of this > > Sent from my iPhone > > On Mar 21, 2013, at 6:49 PM, "toddlahman" wrote: > >> The reply I received from NetDNA after supplying the same information as I >> did here is as follows: >> >> "Too many redirects" is a legit message in this scenario - example: You are >> redirecting domain.com/file.jpg TO cdn.domain.com/file.jpg ---> request >> comes to CDN and CDN neds to cache this file from origin so it tries to >> fetch from origin from location "domain.com/file.jpg" ---> request comes to >> origin and redirect rule you made redirects this request back to CDN <--- >> this is where infinite loop starts. >> >> This is not a proper way to implement CDN as you have to monitor who is >> requesting your origin file so you could know whether it's a request you >> want to redirect or not. 
The best way to handle this is to monitor our >> anycast IPs and redirect everything except for those ips: >> >> 108.161.176.0/20 >> 70.39.132.0/24 >> 92.60.240.208/29 >> 92.60.240.217/29 >> 216.12.211.60/32 >> 216.12.211.59/32 >> >> If you want to implement CDN this way, we can't support that implementation >> as we don't really encourage our client to use this type of implementation. >> The reason is quite simple: We are going to add new IP block that has to be >> white listed and until you make update for your redirects rule you'll be >> pushing infinite redirects to our servers causing service not to work until >> you add new block as well. >> >> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237609,237661#msg-237661 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3565 bytes Desc: not available URL: From mdounin at mdounin.ru Fri Mar 22 09:52:29 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 22 Mar 2013 13:52:29 +0400 Subject: How to make nginx establish persistent connections with squid? In-Reply-To: References: Message-ID: <20130322095229.GA62550@mdounin.ru> Hello! On Fri, Mar 22, 2013 at 05:25:15AM -0400, selphon wrote: > hi, > I use nginx as load balance and forward request to squid use http/1.1, the > topology is below: > > chrome ---> nginx(:80) ---> squid(:8080) ---> origin server(nginx :80) > > the nginx configuration: > upstream backend { > server 192.168.13.210:80; > keepalive 10; > } > > > server { > listen 80 default; > server_name _; > > proxy_set_header Host $host; > > proxy_http_version 1.1; > proxy_set_header Connection ""; > > location / { > proxy_pass http://backend; > } > } > > the squid configuration of persistent connections is supported on both > client side and server side: > ##########timeout########## > client_persistent_connections on > server_persistent_connections on > > request_timeout 240 seconds #to client/wait client's request > client_lifetime 240 seconds #to client/all request time > persistent_request_timeout 30 seconds #to client/keepalive > pconn_timeout 30 seconds #to origin server or peer/keepalive > connect_timeout 240 seconds #to origin server/only connect > read_timeout 240 seconds #to origin server or peer/wait recv data > > Then, I made 5 requests such as 'http://test.cache.com/p3.jpg?tt=2013032206' > and could not find any persistent connection between nginx and squid. 
> > the squid log show: > 127.0.0.1 - - [22/Mar/2013:16:12:06 +0800] "GET > http://test.cache.com/p3.jpg?tt=2013032205 HTTP/1.1" 304 364 > "http://test.cache.com/p3.jpg?tt=2013032205" "Mozilla/5.0 (Windows NT 6.1) > AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.172 Safari/537.22" > TCP_REFRESH_HIT:DIRECT/192.168.13.210 0 > > "Host: test.cache.com #request header > Cache-Control: max-age=0 > Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.22 (KHTML, like > Gecko) Chrome/25.0.1364.172 Safari/537.22 > Referer: http://test.cache.com/p3.jpg?tt=2013032205 > Accept-Encoding: gzip,deflate,sdch > Accept-Language: zh-CN,zh;q=0.8 > Accept-Charset: GBK,utf-8;q=0.7,*;q=0.3 > If-Modified-Since: Tue, 23 Oct 2012 04:07:19 GMT" > > "HTTP/1.0 304 Not Modified #response header > Server: nginx/1.2.6 > Date: Fri, 22 Mar 2013 08:12:06 GMT > Last-Modified: Tue, 23 Oct 2012 04:07:19 GMT > Expires: Fri, 22 Mar 2013 10:12:06 GMT > Cache-Control: max-age=7200 > X-Cache: MISS from vm-linux1.test.com > X-Cache-Lookup: HIT from vm-linux1.test.com:8080 > Via: 1.1 vm-linux1.test.com:8080 (squid/2.7.STABLE9) > Connection: close" > > as we see(last line), squid announced the Connection should be close after > the request. I think this is why nginx couldn't make a persistent connection > with squid. > > then I try to set: proxy_set_header Connection "keep-alive"; forcing that > the request must be keep-alive. > the request and response is: > > 127.0.0.1 - - [22/Mar/2013:15:59:09 +0800] "GET > http://test.cache.com/p3.jpg?tt=2013032203 HTTP/1.1" 304 369 > "http://test.cache.com/p3.jpg?tt=2013032203" "Mozilla/5.0 (Windows NT 6.1) > AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.172 Safari/537.22" > TCP_REFRESH_HIT:DIRECT/192.168.13.210 0 > > "Host: test.cache.com > Connection: keep-alive > Cache-Control: max-age=0 > Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.22 (KHTML, like > Gecko) Chrome/25.0.1364.172 Safari/537.22 > Referer: http://test.cache.com/p3.jpg?tt=2013032203 > Accept-Encoding: gzip,deflate,sdch > Accept-Language: zh-CN,zh;q=0.8 > Accept-Charset: GBK,utf-8;q=0.7,*;q=0.3 > If-Modified-Since: Tue, 23 Oct 2012 04:07:19 GMT" > > "HTTP/1.0 304 Not Modified > Server: nginx/1.2.6 > Date: Fri, 22 Mar 2013 07:59:09 GMT > Last-Modified: Tue, 23 Oct 2012 04:07:19 GMT > Expires: Fri, 22 Mar 2013 09:59:09 GMT > Cache-Control: max-age=7200 > X-Cache: MISS from vm-linux1.test.com > X-Cache-Lookup: HIT from vm-linux1.test.com:8080 > Via: 1.1 vm-linux1.test.com:8080 (squid/2.7.STABLE9) > Connection: keep-alive" > > though the Connection of response is keep-alive, nginx still didn't make any > persistent connection with squid. > > Is there any way to configure nginx use a persistent connection where > forward requests to squid ? help help > > nginx_version: 1.2.6 > squid_version: 2.7.STABLE9 Connections are kept alive only if a response is in HTTP/1.1 protocol. -- Maxim Dounin http://nginx.org/en/donation.html From grails at jmsd.co.uk Fri Mar 22 11:18:50 2013 From: grails at jmsd.co.uk (John Moore) Date: Fri, 22 Mar 2013 11:18:50 +0000 Subject: Simple question about proxy cache Message-ID: <4tof401psoow.1gq09-jvrvwa4g@elasticemail.com> On 18/03/13 11:21, John Moore wrote: > On 17/03/13 23:08, Maxim Dounin wrote: >> Hello! 
>> >> On Sun, Mar 17, 2013 at 08:08:39PM +0000, John Moore wrote: >> >>> I've used nginx as a reverse proxy server for a long while but I've not >>> tried out the proxy cache until today, and I have to say I'm a little >>> bit confused by what I'm seeing in the cache log, and I'm wondering >>> whether I've set things up correctly. My requirements are actually >>> pretty simple. I have a couple of locations which I want to proxy to >>> another server and cache the results. Thus: >>> >>> location /media/house_images/{ >>> proxy_pass http://backend; >>> proxy_cache one; >>> } >>> >>> location /media/boat_images/{ >>> proxy_pass http://backend; >>> proxy_cache one; >>> } >>> >>> >>> Apart from this, I don't want any cacheing of responses to be done. I am >>> assuming that the default is NOT to cache unless a cache zone is >>> specified (at the server or location level, presumably), so either >>> omitting a proxy_cache or specifying 'proxy_cache off' should be >>> sufficient to achieve this, should it not? >> Yes, without proxy_cache (or with "proxy_cache off") configured in >> a location cache won't be used. >> >>> Two things are puzzling me, though. Firstly, in the cache log, I'm >>> seeing the URLs of all kinds of requests which SHOULD NOT be cached, and >>> I'm wondering whether all requests are logged whether they're cached or >>> not - I certainly hope this is the case and it's not actually cacheing >>> these responses. I would definitely prefer to only see entries in the >>> log for requests matching locations for which a cache has been >>> specified. I presume this is possible? >> You can configure logs for a specific location, see >> http://nginx.org/r/access_log. >> >>> Secondly, the very requests which I would expect to be cached are all >>> showing up in the log with the word 'MISS' in the $upstream_cache_status >>> column. >> This usually happens if your backend doesn't specify allowed cache >> time (in this case, proxy_cache_valid should be used to set one, >> see http://nginx.org/r/proxy_cache_valid) or if backend responses >> doesn't allow cache to be used (either directly with >> Cache-Control/Expires headers, or indirectly with Set-Cookie >> header, see http://nginx.org/r/proxy_ignore_headers). >> > Excellent - thanks, Maxim! That's got me sorted now, it all seems to be > working as planned. > > Actually, there is one final tweak I'd like. There are a number of different locations which I'd like to use the proxy cache for. I cannot repeat for each location the block where the cache log is defined (it rightly complains about duplication). So I have to define it at a server level instead. If I do this, though, then EVERY request for that server ends up being logged, even if there is no cache in force for the request location (i.e., the location has either 'proxy_cache off' or no proxy_cache definition. Is there a way I can configure things so that the only requests which are logged are ones from locations where a proxy cache is in force? From grails at jmsd.co.uk Fri Mar 22 11:39:17 2013 From: grails at jmsd.co.uk (John Moore) Date: Fri, 22 Mar 2013 11:39:17 +0000 Subject: Simple question about proxy cache Message-ID: <4tof45on1da8.1gq09-jvsdkp42@elasticemail.com> On 22/03/13 11:18, John Moore wrote: > > Actually, there is one final tweak I'd like. There are a number of > different locations which I'd like to use the proxy cache for. I cannot > repeat for each location the block where the cache log is defined (it > rightly complains about duplication). 
So I have to define it at a server > level instead. If I do this, though, then EVERY request for that server > ends up being logged, even if there is no cache in force for the request > location (i.e., the location has either 'proxy_cache off' or no > proxy_cache definition. Is there a way I can configure things so that > the only requests which are logged are ones from locations where a proxy > cache is in force? > > Sorry, please pretend I never wrote that! My problem was simply that I was defining the log_format in each location, hence the duplication. I can DEFINE it once at the server level and USE it multiple times at the location level without a problem. From mdounin at mdounin.ru Fri Mar 22 11:43:25 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 22 Mar 2013 15:43:25 +0400 Subject: Simple question about proxy cache In-Reply-To: <4tof401psoow.1gq09-jvrvwa4g@elasticemail.com> References: <4tof401psoow.1gq09-jvrvwa4g@elasticemail.com> Message-ID: <20130322114325.GC62550@mdounin.ru> Hello! On Fri, Mar 22, 2013 at 11:18:50AM +0000, John Moore wrote: [...] > Actually, there is one final tweak I'd like. There are a number of > different locations which I'd like to use the proxy cache for. I cannot > repeat for each location the block where the cache log is defined (it > rightly complains about duplication). So I have to define it at a server > level instead. If I do this, though, then EVERY request for that server > ends up being logged, even if there is no cache in force for the request > location (i.e., the location has either 'proxy_cache off' or no > proxy_cache definition. Is there a way I can configure things so that > the only requests which are logged are ones from locations where a proxy > cache is in force? You _can_ repeat acces_log in every location. If nginx complains - you did something wrong (tried to repeat log_format?). Something like this will do logging into normal.log for all requests, and additionally to cache.log in a locations where it's needed: log_format cache "..."; access_log /path/to/normal.log combined; server { ... location / { proxy_pass http://uncached; } location /foo { proxy_pass http://foo; proxy_cache one; access_log /path/to/normal.log combined; access_log /path/to/cache.log cache; } location /bar { proxy_pass http://bar; proxy_cache one; access_log /path/to/normal.log combined; access_log /path/to/cache.log cache; } } -- Maxim Dounin http://nginx.org/en/donation.html From grails at jmsd.co.uk Fri Mar 22 12:24:21 2013 From: grails at jmsd.co.uk (John Moore) Date: Fri, 22 Mar 2013 12:24:21 +0000 Subject: Simple question about proxy cache Message-ID: <4tof4i3tyetc.1gq09-jvtivq4k@elasticemail.com> On 22/03/13 11:43, Maxim Dounin wrote: > Hello! > > On Fri, Mar 22, 2013 at 11:18:50AM +0000, John Moore wrote: > > [...] > >> Actually, there is one final tweak I'd like. There are a number of >> different locations which I'd like to use the proxy cache for. I cannot >> repeat for each location the block where the cache log is defined (it >> rightly complains about duplication). So I have to define it at a server >> level instead. If I do this, though, then EVERY request for that server >> ends up being logged, even if there is no cache in force for the request >> location (i.e., the location has either 'proxy_cache off' or no >> proxy_cache definition. Is there a way I can configure things so that >> the only requests which are logged are ones from locations where a proxy >> cache is in force? > You _can_ repeat acces_log in every location. 
If nginx complains - > you did something wrong (tried to repeat log_format?). > > Thanks Maxim, that's exactly what I did do, as my follow-up post mentioned. All sorted now and working wonderfully. There is one other thing, though...At the moment, I have two location blocks with identical content, in order to match two different patterns. One matches all URLs starting with certain strings, e.g. location ~ ^/(abc|def){ The other is an exact match for the top level page: location = /{ Ideally I would like to use a single pattern which combined both. I thought to use regex end markers instead of the exact match, but I'm not sure if it's syntactically correct (and this is a very busy server i don't want to mess about with too much experimenting). So would something like this work? location ~ (^/abc|^/def/|^/$) John John From mdounin at mdounin.ru Fri Mar 22 14:11:40 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 22 Mar 2013 18:11:40 +0400 Subject: Simple question about proxy cache In-Reply-To: <4tof4i3tyetc.1gq09-jvtivq4k@elasticemail.com> References: <4tof4i3tyetc.1gq09-jvtivq4k@elasticemail.com> Message-ID: <20130322141140.GG62550@mdounin.ru> Hello! On Fri, Mar 22, 2013 at 12:24:21PM +0000, John Moore wrote: > On 22/03/13 11:43, Maxim Dounin wrote: > > Hello! > > > > On Fri, Mar 22, 2013 at 11:18:50AM +0000, John Moore wrote: > > > > [...] > > > >> Actually, there is one final tweak I'd like. There are a number of > >> different locations which I'd like to use the proxy cache for. I cannot > >> repeat for each location the block where the cache log is defined (it > >> rightly complains about duplication). So I have to define it at a server > >> level instead. If I do this, though, then EVERY request for that server > >> ends up being logged, even if there is no cache in force for the request > >> location (i.e., the location has either 'proxy_cache off' or no > >> proxy_cache definition. Is there a way I can configure things so that > >> the only requests which are logged are ones from locations where a proxy > >> cache is in force? > > You _can_ repeat acces_log in every location. If nginx complains - > > you did something wrong (tried to repeat log_format?). > > > > > > Thanks Maxim, that's exactly what I did do, as my follow-up post > mentioned. All sorted now and working wonderfully. There is one other > thing, though...At the moment, I have two location blocks with identical > content, in order to match two different patterns. One matches all URLs > starting with certain strings, e.g. > > location ~ ^/(abc|def){ > > The other is an exact match for the top level page: > > location = /{ > > Ideally I would like to use a single pattern which combined both. I > thought to use regex end markers instead of the exact match, but I'm not > sure if it's syntactically correct (and this is a very busy server i > don't want to mess about with too much experimenting). So would > something like this work? > > location ~ (^/abc|^/def/|^/$) This looks like valid regular expression (try pcretest if unsure), but from maintainability point of view I would recommend to use multiple normal prefix or exact match locations instead of trying to combine them into a single regexp. 
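For instance, the combined regexp above could instead be written as one exact-match and two prefix locations that share their common directives through an include file (a sketch only -- the file name is a placeholder, and its contents would be whatever the cached locations currently share):

    location = / {
        include /etc/nginx/cache-common.conf;
    }

    location /abc {
        include /etc/nginx/cache-common.conf;
    }

    location /def/ {
        include /etc/nginx/cache-common.conf;
    }

    # cache-common.conf would then hold the shared lines, e.g.
    #   proxy_pass   http://backend;
    #   proxy_cache  one;
    #   access_log   /path/to/cache.log cache;
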
-- Maxim Dounin http://nginx.org/en/donation.html

From rkearsley at blueyonder.co.uk Fri Mar 22 14:32:11 2013
From: rkearsley at blueyonder.co.uk (Richard Kearsley)
Date: Fri, 22 Mar 2013 14:32:11 +0000
Subject: nginx/freebsd kern.maxbcache
Message-ID: <514C6B6B.8010204@blueyonder.co.uk>

Hi
I'm trying to tune 'kern.maxbcache' in the hope of increasing 'vfs.maxbufspace', so that more files can be stored in buffer memory on FreeBSD 9.1. It's suggested to tune this value here http://serverfault.com/questions/64356/freebsd-performance-tuning-sysctls-loader-conf-kernel and here http://wiki.nginx.org/FreeBSDOptimizations
However, I can't get the value of 'vfs.maxbufspace' to increase:
kern.maxbcache: 21474836480
vfs.maxbufspace: 3441033216
Did anyone else use this tunable and get results?
Many thanks
Richard

From grails at jmsd.co.uk Fri Mar 22 15:39:53 2013
From: grails at jmsd.co.uk (John Moore)
Date: Fri, 22 Mar 2013 15:39:53 +0000
Subject: Simple question about proxy cache
Message-ID: <4tof60039668.1gq09-jvzoiz4s@elasticemail.com>

On 22/03/13 14:11, Maxim Dounin wrote:
> This looks like valid regular expression (try pcretest if unsure), but
> from maintainability point of view I would recommend to use multiple
> normal prefix or exact match locations instead of trying to combine
> them into a single regexp.

Yes, you're probably right. Maybe I should just put all the repeated stuff in a text file and use include on each location requiring it. I guess an include file can contain another include directive. Easy enough for me to try it out.

From lists at ruby-forum.com Fri Mar 22 20:46:00 2013
From: lists at ruby-forum.com (Yunior Miguel A.)
Date: Fri, 22 Mar 2013 21:46:00 +0100
Subject: 403 Forbidden, Nginx
Message-ID:

Hello again:
I have published a Redmine and another application with nginx. The other application is written in PHP and exposes an API address for some of its services. When I open the application's URL without the API path everything works fine, but when I try to access the application through the API URL the following error appears:
403 Forbidden
nginx
This is my nginx configuration for the PHP application:
server { listen 80; server_name appphp.nginx.com; root /var/www/appphp; index index.php, index.html; location / { include /etc/nginx/proxy_opts; proxy_redirect off; error_page 404 404.html; error_page 500 502 503 504 500.html; access_log /var/log/nginx/appphp.access.log; error_log /var/log/nginx/appphp.error.log; #Gzip gzip on; gzip_min_length 1000; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss #text/javascript; gzip_disable "MSIE [1-6]\."; charset utf-8; } location ~ "^(.+\.php)($|/)" { set $script $uri; set $path_info "/"; if ($uri ~ "^(.+\.php)($|/)") { set $script $1; } if ($uri ~ "^(.+\.php)(/.+)") { set $script $1; set $path_info $2; } include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME appphp.nginx.com$script; fastcgi_param SCRIPT_NAME $script; fastcgi_param PATH_INFO $path_info; } } proxy_opts configurations: # Shared options used by all proxies proxy_set_header Host $http_host; # Following headers are not used by Redmine but may be useful for plugins and # other web applications proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; # Any other options for all proxies here client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Fri Mar 22 22:44:34 2013 From: nginx-forum at nginx.us (openletter) Date: Fri, 22 Mar 2013 18:44:34 -0400 Subject: Require 'www' for https://example.com Message-ID: <513c1dfa1cdbd43c4acfa71228e2d6e8.NginxMailingListEnglish@forum.nginx.org> I am setting up a server that will be for a B2B business, and I want the whole site to be served as https://www.example.com/ I have gotten a certificate and https://www.example.com runs just fine, but I can't figure out how to require https://www.example.com when a user tries to go to https://example.com. In reading through the nginx.org site, it seems like rewrites and if statements are discouraged. I did figure out how to require http://www.example.com/ by using the following in my server block file: server { listen [::]:80; server_name example.com *.example.com; return 301 $scheme://www.example.com$request_uri; } But doing something similar for 443 doesn't seem to work. Can someone please help me out or point to a good page on setting up for this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237695,237695#msg-237695 From kirpit at gmail.com Fri Mar 22 23:49:22 2013 From: kirpit at gmail.com (kirpit) Date: Sat, 23 Mar 2013 01:49:22 +0200 Subject: Require 'www' for https://example.com In-Reply-To: <513c1dfa1cdbd43c4acfa71228e2d6e8.NginxMailingListEnglish@forum.nginx.org> References: <513c1dfa1cdbd43c4acfa71228e2d6e8.NginxMailingListEnglish@forum.nginx.org> Message-ID: # www fix server { # www.example.com -> example.com #server_name www.example.com; #rewrite ^ $scheme://example.com$request_uri? permanent; # www.example.com -> example.com server_name example.com; rewrite ^ $scheme://www.example.com$request_uri? permanent; } https://github.com/kirpit/webstack/blob/master/sites/_nginx-example.com.conf cheers. 
On Sat, Mar 23, 2013 at 12:44 AM, openletter wrote: > I am setting up a server that will be for a B2B business, and I want the > whole site to be served as https://www.example.com/ > > I have gotten a certificate and https://www.example.com runs just fine, > but > I can't figure out how to require https://www.example.com when a user > tries > to go to https://example.com. > > In reading through the nginx.org site, it seems like rewrites and if > statements are discouraged. I did figure out how to require > http://www.example.com/ by using the following in my server block file: > > server { > listen [::]:80; > server_name example.com *.example.com; > return 301 $scheme://www.example.com$request_uri; > } > > But doing something similar for 443 doesn't seem to work. > > Can someone please help me out or point to a good page on setting up for > this? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,237695,237695#msg-237695 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at kodewerx.org Sat Mar 23 02:11:52 2013 From: jay at kodewerx.org (Jay Oster) Date: Fri, 22 Mar 2013 19:11:52 -0700 Subject: Strange $upstream_response_time latency spikes with reverse proxy In-Reply-To: References: <20130315082059.GR15378@mdounin.ru> <051BFC6E-6CAE-4844-978E-415E0939B36A@kodewerx.org> Message-ID: Hi again everyone! Just posting a status update (because I hate coming across old threads with reports of a problem I'm experiencing, and there is no answer!) What I've found so far is starting to look like a Linux kernel bug that was fixed for ipv6, but still remains for ipv4! Here's the relevant discussion: https://groups.google.com/forum/?fromgroups=#!topic/linux_net/ACDB15QbHls And thanks for making nginx awesome! :) On Tue, Mar 19, 2013 at 3:42 PM, Jay Oster wrote: > Hi Andrei! > > On Tue, Mar 19, 2013 at 2:49 AM, Andrei Belov wrote: > >> Hello Jay, >> >> If I understand you right, issue can be repeated in the following cases: >> >> 1) client and server are on different EC2 instances, public IPs are used; >> 2) client and server are on different EC2 instances, private IPs are used; >> 3) client and server are on a single EC2 instance, public IP is used. >> >> And there are no problems when: >> >> 1) client and server are on a single EC2 instance, either loopback or >> private IP is used. >> >> Please correct me if I'm wrong. >> > > If by "client" you mean nginx, and by "server" you mean our upstream HTTP > service ... That is exactly correct. You could also throw in another > permutation by changing where ApacheBench is run, but it doesn't change the > occurrence of dropped packets; only increases average latency when AB and > nginx are on separate EC2 instances. > > >> What about EC2 security group - do the client and the server use the same >> group? >> How many rules are present in this group? Have you tried to either >> decrease >> a number of rules used, or create "pass any to any" simple configuration? >> > > That's a great point! We have been struggling with the number of firewall > rules as a separate matter, in fact. There may be some relation here. Thank > you for reminding me. > > >> And just to clarify the things - under "external IP address" do you mean >> EC2 >> instance's public IP, or maybe Elastic IP? > > > I'm talking about the instance public IPs. 
Elastic IPs are only used for > client access to nginx. And specifically only for managing DNS. Between > nginx and upstream servers, the public IPs are used. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Sat Mar 23 03:01:35 2013 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Sat, 23 Mar 2013 04:01:35 +0100 Subject: Require 'www' for https://example.com In-Reply-To: <513c1dfa1cdbd43c4acfa71228e2d6e8.NginxMailingListEnglish@forum.nginx.org> References: <513c1dfa1cdbd43c4acfa71228e2d6e8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <87a9puu8ds.wl%appa@perusio.net> On 22 Mar 2013 23h44 CET, nginx-forum at nginx.us wrote: > I am setting up a server that will be for a B2B business, and I want > the whole site to be served as https://www.example.com/ > > I have gotten a certificate and https://www.example.com runs just > fine, but I can't figure out how to require https://www.example.com > when a user tries to go to https://example.com. > > In reading through the nginx.org site, it seems like rewrites and if > statements are discouraged. I did figure out how to require > http://www.example.com/ by using the following in my server block > file: > > server { > listen [::]:80; > server_name example.com *.example.com; > return 301 $scheme://www.example.com$request_uri; > } > > But doing something similar for 443 doesn't seem to work. It works, but you have to add the SSL certificate and respective key. Note that the root domain must be also in the certificate otherwise the client will complain about the certificate, not being able to establish endpoint authentication. > Can someone please help me out or point to a good page on setting up > for this? Here's an example. It rewrites from www to the base domain. So just switch the server names and it will work. Add also a listen directive for port 80. https://github.com/perusio/drupal-with-nginx/blob/D7/sites-available/example.com.conf#L101 --- appa From igor at sysoev.ru Sat Mar 23 05:19:39 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Sat, 23 Mar 2013 09:19:39 +0400 Subject: Require 'www' for https://example.com In-Reply-To: <513c1dfa1cdbd43c4acfa71228e2d6e8.NginxMailingListEnglish@forum.nginx.org> References: <513c1dfa1cdbd43c4acfa71228e2d6e8.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Mar 23, 2013, at 2:44 , openletter wrote: > I am setting up a server that will be for a B2B business, and I want the > whole site to be served as https://www.example.com/ > > I have gotten a certificate and https://www.example.com runs just fine, but > I can't figure out how to require https://www.example.com when a user tries > to go to https://example.com. > > In reading through the nginx.org site, it seems like rewrites and if > statements are discouraged. I did figure out how to require > http://www.example.com/ by using the following in my server block file: > > server { > listen [::]:80; > server_name example.com *.example.com; > return 301 $scheme://www.example.com$request_uri; > } > > But doing something similar for 443 doesn't seem to work. > > Can someone please help me out or point to a good page on setting up for > this? You have to got a certificate also for "example.com" or certificate for two names "www.example.com" and "example.com". 
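In other words, once the certificate covers both names, the plain-HTTP redirect shown earlier has a direct HTTPS counterpart, roughly like this (untested; the certificate paths are placeholders):

    server {
        listen 443 ssl;
        server_name example.com;

        # this certificate must list both example.com and www.example.com
        # (or be a wildcard), otherwise clients will warn before the redirect
        ssl_certificate     /etc/ssl/example.com.crt;
        ssl_certificate_key /etc/ssl/example.com.key;

        return 301 https://www.example.com$request_uri;
    }
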
-- Igor Sysoev http://nginx.com/services.html From nginx-forum at nginx.us Sat Mar 23 10:14:34 2013 From: nginx-forum at nginx.us (Larry) Date: Sat, 23 Mar 2013 06:14:34 -0400 Subject: max virtualhosts Message-ID: <588978ea7692acab32e95eac7d7c5dc7.NginxMailingListEnglish@forum.nginx.org> Hi ! I was wondering, for the sake of curiosity, how many server blocks (virtual hosts) nginx can afford ? To the extreme : will 1000 server blocks will decrease nginx performances ? Cheers, Larry Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237702,237702#msg-237702 From igor at sysoev.ru Sat Mar 23 10:50:08 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Sat, 23 Mar 2013 14:50:08 +0400 Subject: max virtualhosts In-Reply-To: <588978ea7692acab32e95eac7d7c5dc7.NginxMailingListEnglish@forum.nginx.org> References: <588978ea7692acab32e95eac7d7c5dc7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8A0EBC8E-4A6E-4B14-B5E4-8BB4A808D5B3@sysoev.ru> On Mar 23, 2013, at 14:14 , Larry wrote: > Hi ! > > I was wondering, for the sake of curiosity, how many server blocks (virtual > hosts) nginx can afford ? > > To the extreme : will 1000 server blocks will decrease nginx performances ? No. Unless server_names are regular expressions. -- Igor Sysoev http://nginx.com/services.html From ru at nginx.com Sat Mar 23 17:39:03 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Sat, 23 Mar 2013 21:39:03 +0400 Subject: max virtualhosts In-Reply-To: <8A0EBC8E-4A6E-4B14-B5E4-8BB4A808D5B3@sysoev.ru> References: <588978ea7692acab32e95eac7d7c5dc7.NginxMailingListEnglish@forum.nginx.org> <8A0EBC8E-4A6E-4B14-B5E4-8BB4A808D5B3@sysoev.ru> Message-ID: <20130323173903.GJ91875@lo0.su> On Sat, Mar 23, 2013 at 02:50:08PM +0400, Igor Sysoev wrote: > On Mar 23, 2013, at 14:14 , Larry wrote: > > > I was wondering, for the sake of curiosity, how many server blocks (virtual > > hosts) nginx can afford ? > > > > To the extreme : will 1000 server blocks will decrease nginx performances ? > > No. Unless server_names are regular expressions. To elaborate more on this, here's why: http://nginx.org/en/docs/hash.html From pluknet at nginx.com Sat Mar 23 19:03:51 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Sat, 23 Mar 2013 23:03:51 +0400 Subject: nginx/freebsd kern.maxbcache In-Reply-To: <514C6B6B.8010204@blueyonder.co.uk> References: <514C6B6B.8010204@blueyonder.co.uk> Message-ID: On Mar 22, 2013, at 6:32 PM, Richard Kearsley wrote: > Hi > > I'm trying to tune 'kern.maxbcache' with hope of increasing 'vfs.maxbufspace' so that more files can be stored in buffer memory on freebsd 9.1 > It's suggested to tune this value here http://serverfault.com/questions/64356/freebsd-performance-tuning-sysctls-loader-conf-kernel and here http://wiki.nginx.org/FreeBSDOptimizations > However, I can't get the value of 'vfs.maxbufspace' to increase: > kern.maxbcache: 21474836480 > vfs.maxbufspace: 3441033216 I assume that you have 9.1 amd64, the following applies only to amd64. On amd64 maxbcache is zero by default and effectively is no-op, thus it doesn't further limit maxbufspace (cf. on i386 it's set to 200MB, due to KVA constraints; and buffer space is reserved in KVM on early boot). The kernel auto-tunes maxbufspace based on the amount of physical memory available using the formula "for the first 64 MB of ram use 1/4 for buffers, plus 1/10 of the ram over 64 MB". So, your current vfs.maxbufspace value corresponds to 32GB RAM and is quite enough. 
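As a quick sanity check of that formula against the numbers quoted above (approximate only -- the value is derived in buffer-sized units, so the byte counts won't match exactly):

    first 64 MB / 4           =   16 MB
    (32768 MB - 64 MB) / 10   = 3270 MB
    total                     = 3286 MB (~3.2 GB)

which is close to the reported vfs.maxbufspace of 3441033216 bytes (~3282 MB).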
Anyway, you can further increase it by setting kern.nbuf in /boot/loader.conf With your current maxbufspace value, it's kern.nbuf=210024 now. -- Sergey Kandaurov pluknet at nginx.com From ben at hoskings.net Sun Mar 24 03:56:23 2013 From: ben at hoskings.net (Ben Hoskings) Date: Sun, 24 Mar 2013 14:56:23 +1100 Subject: Request path not passed to proxy when $scheme is used in proxy_pass URI Message-ID: Hi all, I've found a strange behaviour with proxy_pass. I've reduced a simple vhost that reproduces it: server { listen 127.0.0.1:8000; location / { proxy_pass $scheme://127.0.0.1:9000/; } } A listener on localhost:9000 receives a request, but for '/' instead of the correct path. For example, curling this URI... > curl -I -X GET http://localhost:8000/test-path ... causes a "GET /" on a local listener: > nc -l 127.0.0.1 9000 GET / HTTP/1.0 If I replace $scheme with 'http' (i.e. "proxy_pass http://127.0.0.1:9000/"), then it works correctly: > curl -I -X GET http://localhost:8000/test-path > nc -l 127.0.0.1 9000 GET /test-path HTTP/1.0 I ran these tests on nginx-1.2.7 / OS X 10.8.3; the behaviour is the same on nginx-1.2.4 / Ubuntu 12.04 (which is where I discovered it). Am I using $scheme incorrectly here, or could this be a bug? Cheers, Ben Hoskings -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Mar 24 08:30:48 2013 From: nginx-forum at nginx.us (selphon) Date: Sun, 24 Mar 2013 04:30:48 -0400 Subject: How to make nginx establish persistent connections with squid? In-Reply-To: <20130322095229.GA62550@mdounin.ru> References: <20130322095229.GA62550@mdounin.ru> Message-ID: Thank you, Maxim Dounin. I see. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237666,237713#msg-237713 From nginx-forum at nginx.us Sun Mar 24 08:42:36 2013 From: nginx-forum at nginx.us (selphon) Date: Sun, 24 Mar 2013 04:42:36 -0400 Subject: How to make nginx establish persistent connections with squid? 
In-Reply-To: <20130322095229.GA62550@mdounin.ru> References: <20130322095229.GA62550@mdounin.ru> Message-ID: <1501d86d4b0eeda75804c8a8ddb42054.NginxMailingListEnglish@forum.nginx.org> Hi, Maxim Dounin I revise the topology: chrome --> squid:80 --> origin server(nginx :80) and make 5 requests,the squid log shows: 192.168.70.160 - - [22/Mar/2013:15:41:41 +0800] "GET http://test.cache.com/p3.jpg?tt=2013032201 HTTP/1.1" 304 365 "http://test.cache.com/p3.jpg?tt=2013032201" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.172 Safari/537.22" TCP_REFRESH_HIT:DIRECT/192.168.13.210 0 "Host: test.cache.com #request header Connection: keep-alive Cache-Control: max-age=0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.172 Safari/537.22 Referer: http://test.cache.com/p3.jpg?tt=2013032201 Accept-Encoding: gzip,deflate,sdch Accept-Language: zh-CN,zh;q=0.8 Accept-Charset: GBK,utf-8;q=0.7,*;q=0.3 If-Modified-Since: Tue, 23 Oct 2012 04:07:19 GMT" "HTTP/1.0 304 Not Modified #response header Server: nginx/1.2.6 Date: Fri, 22 Mar 2013 07:41:41 GMT Last-Modified: Tue, 23 Oct 2012 04:07:19 GMT Expires: Fri, 22 Mar 2013 09:41:41 GMT Cache-Control: max-age=7200 X-Cache: MISS from vm-linux1.test.com X-Cache-Lookup: HIT from vm-linux1.test.com:80 Via: 1.1 vm-linux1.test.com:80 (squid/2.7.STABLE9) Connection: keep-alive" Though the response is in HTTP/1.0, the persistent connections is successfully established between chrome and squid. I wonder, is there a restriction that nginx can only make establish persistent connections when the response is in HTTP/1.1 protocol ? Thx for answering. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237666,237714#msg-237714 From ben at hoskings.net Sun Mar 24 10:50:40 2013 From: ben at hoskings.net (Ben Hoskings) Date: Sun, 24 Mar 2013 21:50:40 +1100 Subject: Request path not passed to proxy when $scheme is used in proxy_pass URI In-Reply-To: References: Message-ID: Ahh, I've found the answer in the proxy_pass documentation. http://wiki.nginx.org/HttpProxyModule#proxy_pass -- "A special case is using variables in the proxy_pass statement: The requested URL is not used and you are fully responsible to construct the target URL yourself." Surprising, but now I know for next time :) Cheers Ben On 24 March 2013 14:56, Ben Hoskings wrote: > Hi all, I've found a strange behaviour with proxy_pass. I've reduced a > simple vhost that reproduces it: > > server { > listen 127.0.0.1:8000; > location / { > proxy_pass $scheme://127.0.0.1:9000/; > } > } > > A listener on localhost:9000 receives a request, but for '/' instead of > the correct path. For example, curling this URI... > > curl -I -X GET http://localhost:8000/test-path > > ... causes a "GET /" on a local listener: > > nc -l 127.0.0.1 9000 > GET / HTTP/1.0 > > If I replace $scheme with 'http' (i.e. "proxy_pass http://127.0.0.1:9000/"), > then it works correctly: > > curl -I -X GET http://localhost:8000/test-path > > > nc -l 127.0.0.1 9000 > GET /test-path HTTP/1.0 > > I ran these tests on nginx-1.2.7 / OS X 10.8.3; the behaviour is the same > on nginx-1.2.4 / Ubuntu 12.04 (which is where I discovered it). > > Am I using $scheme incorrectly here, or could this be a bug? > > Cheers, > Ben Hoskings > -- Cheers Ben -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hagaia at qwilt.com Sun Mar 24 11:52:42 2013 From: hagaia at qwilt.com (Hagai Avrahami) Date: Sun, 24 Mar 2013 13:52:42 +0200 Subject: client_max_body_size Message-ID: Hi Is there any way to deny all requests with body? I know I can set set client_max_body_size to 1 (byte) But.. in that case Nginx reads all body request before finalizing the request. In case of requests with body as part of attack I would like to close the connection immediately without wasting any processing on that request. *I thought changing the code (ngx_http_core_module.c:996) from:* if (r->headers_in.content_length_n != -1 && !r->discard_body && clcf->client_max_body_size && clcf->client_max_body_size < r->headers_in.content_length_n) { ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "client intended to send too large body: %O bytes", r->headers_in.content_length_n); (void) ngx_http_discard_request_body(r); ngx_http_finalize_request(r, NGX_HTTP_REQUEST_ENTITY_TOO_LARGE); return NGX_OK; } *To:* if (r->headers_in.content_length_n != -1 && !r->discard_body && clcf->client_max_body_size && clcf->client_max_body_size < r->headers_in.content_length_n) { ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "client intended to send too large body: %O bytes", r->headers_in.content_length_n); * ngx_close_connection(r->connection);* return NGX_OK; } Is that cover all or more changes are needed? Thanks Hagai -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Mon Mar 25 07:06:50 2013 From: agentzh at gmail.com (agentzh) Date: Mon, 25 Mar 2013 00:06:50 -0700 Subject: [ANN] ngx_openresty devel version 1.2.7.3 released In-Reply-To: References: Message-ID: Hello! I am happy to announce that the new development version of ngx_openresty, 1.2.7.3, is now released: http://openresty.org/#Download Special thanks go to all our contributors and users for helping make this release happen! Below is the complete change log for this release, as compared to the last (devel) release, 1.2.7.1: * upgraded LuaNginxModule to 0.7.18. * feature: implemented ngx.req.http_version() that returns the HTTP version number for the current request. thanks Matthieu Tourne for requesting this. * feature: implemented the ngx.req.raw_header() function for returning the original raw HTTP protocol header string received by Nginx. thanks Matthieu Tourne for requesting this. * feature: added new methods safe_set and safe_add to ngx.shared.DICT objects, which never override existing unexpired items but immediately return nil and a "no memory" string message when running out of storage. thanks Matthieu Tourne for requesting this. * feature: datagram Unix domain sockets created by ngx.socket.udp() can now receive data from the other endpoint via "autobind" on Linux. thanks Dirk Feytons for the patch. * change: the ngx.re.match, ngx.re.gmatch, ngx.re.sub, and ngx.re.gsub functions used to throw Lua exceptions aggressively for all the error conditions; now they just return an additional Lua string describing the error for almost all common errors instead of throwing exceptions, including pcre compile-time and exec-time failures. thanks Matthieu Tourne for requesting this change. * bugfix: use of ngx.req.socket() could make socket reading hang infinitely when the request did not take a request body at all (that is, when the Content-Length request header is missing). thanks Matthieu Tourne for reporting this issue. 
* bugfix: when a non-table value was specified for the "args" option in the ngx.location.capture or ngx.location.capture_multi call, memory invalid access might happen, which resulted in garbage data at least. thanks Siddon Tang for reporting this issue. * bugfix: when the Lua code using UDP/TCP cosockets + resolver was run in a subrequest, the subrequest could hang due to missing calls to "ngx_http_run_posted_requests" in the UDP/TCP cosocket resolver handler. thanks Lanshun Zhou for reporting this issue. * bugfix: ngx.socket.udp: memory leaks or invalid memory accesses might happen when the DNS resolver failed to resolve. * bugfix: rewrite_by_lua_no_postpone can only work globally and did not reject contexts like "server" and "location" configuration blocks. thanks Matthieu Tourne for reporting this issue. * bugfix: (large) in-file request bodies could not be inherited correctly by multiple subrequests issued by ngx.location.capture. thanks Matthieu Tourne for reporting this issue. * bugfix: ngx.req.get_headers(limit, true) would still return header names in the pure lower-case form when the "limit" argument was an integer. thanks Matthieu Tourne for reporting this issue. * bugfix: ngx.re.match: when the "D" regular expression option was specified, an empty Lua table would always be created even when the named capture was actually empty. thanks Matthieu Tourne for reporting this issue. * docs: made it explicit that redirecting to external domains is also supported in ngx.redirect(). thanks Ron Gomes for asking. * upgraded EchoNginxModule to 0.44. * bugfix: $echo_client_request_headers was evaluated to only the last part of the request header when "large header buffers" were used. * change: preserve the trailing "CR LF" at the end of the whole HTTP protocol header returned by $echo_client_request_headers. * upgraded Redis2NginxModule to 0.10. * feature: allow use of the request body data in Nginx variables for main requests by always reading the request body automatically; we used to always discard the request body just like the standard ngx_memcached module. thanks Ristona Hua for sharing this usage. * docs: updated the docs for the limitations on Redis pub/sub. thanks LazyZhu for pointing out the potential confusions. * docs: now we recommend LuaRestyRedisLibrary instead when being used with LuaNginxModule. * upgraded LuaRestyUploadLibrary to 0.08. * bugfix: when multiple "Content-Type" request headers were given, a Lua exception would be thrown; now we just pick up the first one. * docs: better error handling in the code sample. thanks wgm.china for the report. * feature: applied the variables_in_redis_pass patch to RedisNginxModule 0.3.6 to allow use of Nginx variables in the redis_pass directive. thanks Diptamay Sanyal for requesting this feature. * bugfix: applied Lanshun Zhou's run_posted_requests_in_resolver patch to the Nginx core: * bugfix: applied the official hotfix #1 patch for the bundled LuaJIT 2.0.1. The HTML version of the change log with some helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1002007 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster and ensure that all the components (including the Nginx core) play well together. 
The latest test report can always be found here: http://qa.openresty.org Enjoy! -agentzh From safe3q at gmail.com Mon Mar 25 07:11:10 2013 From: safe3q at gmail.com (David Shee) Date: Mon, 25 Mar 2013 15:11:10 +0800 Subject: ngx unescape uri bug In-Reply-To: References: Message-ID: I'm Zuwen Shi from China,I find a unescape uri bug in your program. The source code location is src\core\ngx_string.c->ngx_unescape_uri If I put a string "%%s%elect",it convert the string to "%slect",and %% to %,%el to l,actually the right convert is "%%s%elect". So,I patch the ngx_unescape_uri like below,the red part is which I modified. Nginx is a really nice project. void ngx_unescape_uri(u_char **dst, u_char **src, size_t size, ngx_uint_t type) { u_char *d, *s, ch, c, decoded; enum { sw_usual = 0, sw_quoted, sw_quoted_second } state; d = *dst; s = *src; state = 0; decoded = 0; while (size--) { ch = *s++; switch (state) { case sw_usual: if (ch == '?' && (type & (NGX_UNESCAPE_URI|NGX_UNESCAPE_REDIRECT))) { *d++ = ch; goto done; } if (ch == '%'&&size>1) { ch=*s; c = (u_char) (ch | 0x20); if ((ch >= '0' && ch <= '9')||(c >= 'a' && c <= 'f')) { ch=*(s+1); c = (u_char) (ch | 0x20); if ((ch >= '0' && ch <= '9')||(c >= 'a' && c <= 'f')) { state = sw_quoted; break; } } *d++ = '%'; break; } if (ch == '+') { *d++ = ' '; break; } *d++ = ch; break; case sw_quoted: if (ch >= '0' && ch <= '9') { decoded = (u_char) (ch - '0'); state = sw_quoted_second; break; } c = (u_char) (ch | 0x20); if (c >= 'a' && c <= 'f') { decoded = (u_char) (c - 'a' + 10); state = sw_quoted_second; break; } /* the invalid quoted character */ state = sw_usual; *d++ = ch; break; case sw_quoted_second: state = sw_usual; if (ch >= '0' && ch <= '9') { ch = (u_char) ((decoded << 4) + ch - '0'); if (type & NGX_UNESCAPE_REDIRECT) { if (ch > '%' && ch < 0x7f) { *d++ = ch; break; } *d++ = '%'; *d++ = *(s - 2); *d++ = *(s - 1); break; } *d++ = ch; break; } c = (u_char) (ch | 0x20); if (c >= 'a' && c <= 'f') { ch = (u_char) ((decoded << 4) + c - 'a' + 10); if (type & NGX_UNESCAPE_URI) { if (ch == '?') { *d++ = ch; goto done; } *d++ = ch; break; } if (type & NGX_UNESCAPE_REDIRECT) { if (ch == '?') { *d++ = ch; goto done; } if (ch > '%' && ch < 0x7f) { *d++ = ch; break; } *d++ = '%'; *d++ = *(s - 2); *d++ = *(s - 1); break; } *d++ = ch; break; } /* the invalid quoted character */ break; } } done: *dst = d; *src = s; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Mar 25 11:26:57 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 Mar 2013 15:26:57 +0400 Subject: How to make nginx establish persistent connections with squid? In-Reply-To: <1501d86d4b0eeda75804c8a8ddb42054.NginxMailingListEnglish@forum.nginx.org> References: <20130322095229.GA62550@mdounin.ru> <1501d86d4b0eeda75804c8a8ddb42054.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130325112657.GN62550@mdounin.ru> Hello! On Sun, Mar 24, 2013 at 04:42:36AM -0400, selphon wrote: [...] > Though the response is in HTTP/1.0, the persistent connections is > successfully established between chrome and squid. I wonder, is there a > restriction that nginx can only make establish persistent connections when > the response is in HTTP/1.1 protocol ? It looks I wasn't clear enough. While keepalive connections via HTTP/1.0 are possible with "Connection: keep-alive" extension, nginx will only keep upstream connections alive after an HTTP/1.1 response. 
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Mar 25 11:46:45 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 Mar 2013 15:46:45 +0400 Subject: ngx unescape uri bug In-Reply-To: References: Message-ID: <20130325114645.GO62550@mdounin.ru> Hello! On Mon, Mar 25, 2013 at 03:11:10PM +0800, David Shee wrote: > I'm Zuwen Shi from China,I find a unescape uri bug in your program. > The source code location is src\core\ngx_string.c->ngx_unescape_uri > If I put a string "%%s%elect",it convert the string to "%slect",and %% to > %,%el to l,actually the right convert is "%%s%elect". I would rather say that behaviour is undefined in case of incorrect input, and both results are correct. > So,I patch the ngx_unescape_uri like below,the red part is which I modified. You may want to post unified diff into nginx-devel@ mailing list. And please don't use html. Thank you. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Mar 25 12:05:18 2013 From: nginx-forum at nginx.us (Larry) Date: Mon, 25 Mar 2013 08:05:18 -0400 Subject: max virtualhosts In-Reply-To: <20130323173903.GJ91875@lo0.su> References: <20130323173903.GJ91875@lo0.su> Message-ID: <7da42bd0b2a75026845de20844359384.NginxMailingListEnglish@forum.nginx.org> Many thanks to both of you, you made my day :) Larry Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237702,237743#msg-237743 From miguelmclara at gmail.com Mon Mar 25 17:51:20 2013 From: miguelmclara at gmail.com (Miguel Clara) Date: Mon, 25 Mar 2013 17:51:20 +0000 Subject: uWSGI + Moin (subdir) + Nginx Message-ID: Hi, I followed the example in: http://projects.unbit.it/uwsgi/wiki/Example#MoinMoinonlinenow However when I type domain/wiki I get a message that the page does not exist yet... which already tells me something is wrong, and also all my URL's are point to '/' not '/wiki' I can't seem to figure out why this happens... I tough using: uwsgi_param SCRIPT_NAME /wiki; uwsgi_modifier1 30; Would do the trick.... but It doesn't... Please note that I'm using moin-1.9.4, py27-uwsgi-1.2.4 As for nginx: nginx version: nginx/1.3.14 built by gcc 4.5.3 (NetBSD nb2 20110806) TLS SNI support enabled configure arguments: --user=nginx --group=nginx --with-openssl=/home/miguelc/sources/openssl-1.0.1e --with-ld-opt='-L/usr/pkg/lib -Wl,-R/usr/pkg/lib' --prefix=/usr/pkg --sbin-path=/usr/pkg/sbin --conf-path=/usr/pkg/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --lock-path=/var/db/nginx/nginx.lock --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/db/nginx/client_body_temp --http-proxy-temp-path=/var/db/nginx/proxy_temp --http-fastcgi-temp-path=/var/db/nginx/fstcgi_temp --with-ipv6 --with-mail_ssl_module --with-http_ssl_module --with-http_spdy_module Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto at unbit.it Mon Mar 25 18:00:35 2013 From: roberto at unbit.it (Roberto De Ioris) Date: Mon, 25 Mar 2013 19:00:35 +0100 Subject: uWSGI + Moin (subdir) + Nginx In-Reply-To: References: Message-ID: <891377a65bf1492f0530fd7644ca76b3.squirrel@manage.unbit.it> > Hi, > > I followed the example in: > > http://projects.unbit.it/uwsgi/wiki/Example#MoinMoinonlinenow > > > However when I type domain/wiki I get a message that the page does not > exist yet... which already tells me something is wrong, and also all my > URL's are point to '/' not '/wiki' > > I can't seem to figure out why this happens... 
I tough using: > > uwsgi_param SCRIPT_NAME /wiki; > uwsgi_modifier1 30; > > Would do the trick.... but It doesn't... > Do not use that trick, is pretty outdated and incredibly ugly. Just let uWSGI do the SCRIPT_NAME rewrite, add the --manage-script-name to the option (be sure to use the --mount way for loading moinmoin). The nginx configuration will be simply location /wiki { include uwsgi_params; uwsgi_pass ...; } -- Roberto De Ioris http://unbit.it From miguelmclara at gmail.com Mon Mar 25 18:12:03 2013 From: miguelmclara at gmail.com (Miguel Clara) Date: Mon, 25 Mar 2013 18:12:03 +0000 Subject: uWSGI + Moin (subdir) + Nginx In-Reply-To: <891377a65bf1492f0530fd7644ca76b3.squirrel@manage.unbit.it> References: <891377a65bf1492f0530fd7644ca76b3.squirrel@manage.unbit.it> Message-ID: Thanks for the hint... I though passing SCRIPT_NAME was a "ugly" way to do into, but since it was refereed in the docs... I was using mount: uwsgi -s /tmp/moin.socket --uid nginx --gid nginx --mount /wiki=/usr/pkg/share/moin/mywiki/moin.wsgi -M -p 2 -d /var/log/nginx/uwsgi_moin.log --logdate --pidfile=/var/run/uwsgi_moin.pid I added --manage-script-name has suggested, now it points to the right direction... ;) It would be nice if the link was updated with this info... Many thanks, Mike On Mon, Mar 25, 2013 at 6:00 PM, Roberto De Ioris wrote: > > > Hi, > > > > I followed the example in: > > > > http://projects.unbit.it/uwsgi/wiki/Example#MoinMoinonlinenow > > > > > > However when I type domain/wiki I get a message that the page does not > > exist yet... which already tells me something is wrong, and also all my > > URL's are point to '/' not '/wiki' > > > > I can't seem to figure out why this happens... I tough using: > > > > uwsgi_param SCRIPT_NAME /wiki; > > uwsgi_modifier1 30; > > > > Would do the trick.... but It doesn't... > > > > > Do not use that trick, is pretty outdated and incredibly ugly. > > Just let uWSGI do the SCRIPT_NAME rewrite, add the --manage-script-name to > the option (be sure to use the --mount way for loading moinmoin). The > nginx configuration will be simply > > > location /wiki { > include uwsgi_params; > uwsgi_pass ...; > } > > -- > Roberto De Ioris > http://unbit.it > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Tue Mar 26 02:23:20 2013 From: nginx-forum at nginx.us (honwel) Date: Mon, 25 Mar 2013 22:23:20 -0400 Subject: nginx: worker process: malloc(): memory corruption: 0x0000000000b6bdb0 *** In-Reply-To: <20130315100909.GU15378@mdounin.ru> References: <20130315100909.GU15378@mdounin.ru> Message-ID: <3ea144bd17a18669284ab67792f1f4a0.NginxMailingListEnglish@forum.nginx.org> i use valgrind to check memory leak, and have detected some error: ==2243== Invalid write of size 1 ==2243== at 0x4A08088: memcpy (mc_replace_strmem.c:628) ==2243== by 0x4448C9: ngx_http_proxy_subs_headers (ngx_http_proxy_subs_filter.c:149) ==2243== by 0x45B2FB: ngx_http_proxy_create_request (ngx_http_proxy_module.c:1235) ==2243== by 0x43EA7E: ngx_http_upstream_init_request (ngx_http_upstream.c:505) ==2243== by 0x43EE92: ngx_http_upstream_init (ngx_http_upstream.c:446) ==2243== by 0x4361C0: ngx_http_read_client_request_body (ngx_http_request_body.c:59) ==2243== by 0x459972: ngx_http_proxy_handler (ngx_http_proxy_module.c:703) ==2243== by 0x42BD23: ngx_http_core_content_phase (ngx_http_core_module.c:1396) ==2243== by 0x4269A2: ngx_http_core_run_phases (ngx_http_core_module.c:877) ==2243== by 0x426A9D: ngx_http_handler (ngx_http_core_module.c:860) ==2243== by 0x430661: ngx_http_process_request (ngx_http_request.c:1874) ==2243== by 0x430D97: ngx_http_process_request_headers (ngx_http_request.c:1318) ==2243== Address 0x5a1f29a is not stack'd, malloc'd or (recently) free'd ==2243== ==2243== Invalid write of size 8 ==2243== at 0x4A080B3: memcpy (mc_replace_strmem.c:628) ==2243== by 0x4448C9: ngx_http_proxy_subs_headers (ngx_http_proxy_subs_filter.c:149) ==2243== by 0x45B2FB: ngx_http_proxy_create_request (ngx_http_proxy_module.c:1235) ==2243== by 0x43EA7E: ngx_http_upstream_init_request (ngx_http_upstream.c:505) ==2243== by 0x43EE92: ngx_http_upstream_init (ngx_http_upstream.c:446) ==2243== by 0x4361C0: ngx_http_read_client_request_body (ngx_http_request_body.c:59) ==2243== by 0x459972: ngx_http_proxy_handler (ngx_http_proxy_module.c:703) ==2243== by 0x42BD23: ngx_http_core_content_phase (ngx_http_core_module.c:1396) ==2243== by 0x4269A2: ngx_http_core_run_phases (ngx_http_core_module.c:877) ==2243== by 0x426A9D: ngx_http_handler (ngx_http_core_module.c:860) ==2243== by 0x430661: ngx_http_process_request (ngx_http_request.c:1874) due to ngx_copy() out of bound, and caused by my code. so, i modify the corrspongding code, it's running ok until now. thanks for Maxim Dounin ! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237393,237778#msg-237778 From nginx-forum at nginx.us Tue Mar 26 02:28:26 2013 From: nginx-forum at nginx.us (honwel) Date: Mon, 25 Mar 2013 22:28:26 -0400 Subject: [ANN] ngx_openresty devel version 1.2.7.3 released In-Reply-To: References: Message-ID: good news ! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237732,237779#msg-237779 From nginx-forum at nginx.us Tue Mar 26 02:29:22 2013 From: nginx-forum at nginx.us (appchemist) Date: Mon, 25 Mar 2013 22:29:22 -0400 Subject: can i change time format in access log? Message-ID: <4709d0feae0cb7df109249a0887f5097.NginxMailingListEnglish@forum.nginx.org> Hi, EveryBody! I am really happy with nginx from now. but I have one question. I want to customize time format in access log, because of calculating time. There are two kind of time format in log_format. one is $time_local. (21/Mar/2013:18:47:32 +0900) another one is $time_iso8601. (2013-03-26T10:27:40+09:00) but, these are not matched by my demand. 
I want to log time likes "2013-03-25 16:35:35".... can i customize time format ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237780,237780#msg-237780 From ru at nginx.com Tue Mar 26 05:21:48 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 26 Mar 2013 09:21:48 +0400 Subject: can i change time format in access log? In-Reply-To: <4709d0feae0cb7df109249a0887f5097.NginxMailingListEnglish@forum.nginx.org> References: <4709d0feae0cb7df109249a0887f5097.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130326052148.GE91875@lo0.su> On Mon, Mar 25, 2013 at 10:29:22PM -0400, appchemist wrote: > I want to customize time format in access log, because of calculating time. > There are two kind of time format in log_format. > one is $time_local. (21/Mar/2013:18:47:32 +0900) > another one is $time_iso8601. (2013-03-26T10:27:40+09:00) > but, these are not matched by my demand. There's a third one, $msec. It's time in seconds since UNIX Epoch, with a milliseconds resolution, convenient if you need to further process your logs and do time calculations. > I want to log time likes "2013-03-25 16:35:35".... > > can i customize time format ? Such a customization isn't possible. From nginx-forum at nginx.us Tue Mar 26 06:12:06 2013 From: nginx-forum at nginx.us (appchemist) Date: Tue, 26 Mar 2013 02:12:06 -0400 Subject: can i change time format in access log? In-Reply-To: <20130326052148.GE91875@lo0.su> References: <20130326052148.GE91875@lo0.su> Message-ID: <6fd10660f5998224a9fb6ca86a4821a2.NginxMailingListEnglish@forum.nginx.org> Thank you very much! :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237780,237786#msg-237786 From nginx-forum at nginx.us Tue Mar 26 06:22:39 2013 From: nginx-forum at nginx.us (appchemist) Date: Tue, 26 Mar 2013 02:22:39 -0400 Subject: can i change time format in access log? In-Reply-To: <4709d0feae0cb7df109249a0887f5097.NginxMailingListEnglish@forum.nginx.org> References: <4709d0feae0cb7df109249a0887f5097.NginxMailingListEnglish@forum.nginx.org> Message-ID: I have one more question about access log. there is a +0900 in 21/Mar/2013:18:47:32 +0900, 2013-03-26T10:27:40+09:00 what does +0900 mean? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237780,237787#msg-237787 From ru at nginx.com Tue Mar 26 06:28:01 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 26 Mar 2013 10:28:01 +0400 Subject: can i change time format in access log? In-Reply-To: References: <4709d0feae0cb7df109249a0887f5097.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130326062801.GG91875@lo0.su> On Tue, Mar 26, 2013 at 02:22:39AM -0400, appchemist wrote: > I have one more question about access log. > there is a +0900 in 21/Mar/2013:18:47:32 +0900, 2013-03-26T10:27:40+09:00 > > what does +0900 mean? It's the the as "+09:00" to the right. :) http://en.wikipedia.org/wiki/ISO_8601#Time_offsets_from_UTC From nginx-forum at nginx.us Tue Mar 26 12:56:28 2013 From: nginx-forum at nginx.us (huttarichard) Date: Tue, 26 Mar 2013 08:56:28 -0400 Subject: Sub-domain in variable Message-ID: <6eb392f1b0aa14f68ca6d323bc483aff.NginxMailingListEnglish@forum.nginx.org> Hi guys, I have question. My server_name looks like this: server_name ~^(www\.)(?[^\.]*)\.(?[^\.]*)$; but I need, for my website do subdomains. I try: server_name ~^(www\.)?(?\.)(?[^\.]*)\.(?[^\.]*)$; but won't work for me. And what will be super, if subdomain emtpy se to default (mean string "default"). Can me anybody help? 
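A side note on the $msec suggestion from the time-format thread above -- a sketch only, with an invented format name and log path:

    # at http{} level:
    log_format timing '$remote_addr - [$msec] "$request" $status $body_bytes_sent';
    access_log /var/log/nginx/access_timing.log timing;

$msec yields something like 1364283455.123 (seconds since the epoch with millisecond resolution), which a log post-processor can convert into any display format, including "2013-03-25 16:35:35".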
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237799,237799#msg-237799 From mdounin at mdounin.ru Tue Mar 26 13:29:48 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Mar 2013 17:29:48 +0400 Subject: nginx-1.3.15 Message-ID: <20130326132948.GO62550@mdounin.ru> Changes with nginx 1.3.15 26 Mar 2013 *) Change: opening and closing a connection without sending any data in it is no longer logged to access_log with error code 400. *) Feature: the ngx_http_spdy_module. Thanks to Automattic for sponsoring this work. *) Feature: the "limit_req_status" and "limit_conn_status" directives. Thanks to Nick Marden. *) Feature: the "image_filter_interlace" directive. Thanks to Ian Babrou. *) Feature: $connections_waiting variable in the ngx_http_stub_status_module. *) Feature: the mail proxy module now supports IPv6 backends. *) Bugfix: request body might be transmitted incorrectly when retrying a request to the next upstream server; the bug had appeared in 1.3.9. Thanks to Piotr Sikora. *) Bugfix: in the "client_body_in_file_only" directive; the bug had appeared in 1.3.9. *) Bugfix: responses might hang if subrequests were used and a DNS error happened during subrequest processing. Thanks to Lanshun Zhou. *) Bugfix: in backend usage accounting. -- Maxim Dounin http://nginx.org/en/donation.html From nik.molnar at consbio.org Tue Mar 26 16:52:47 2013 From: nik.molnar at consbio.org (Nikolas Stevenson-Molnar) Date: Tue, 26 Mar 2013 09:52:47 -0700 Subject: Sub-domain in variable In-Reply-To: <6eb392f1b0aa14f68ca6d323bc483aff.NginxMailingListEnglish@forum.nginx.org> References: <6eb392f1b0aa14f68ca6d323bc483aff.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5151D25F.3070004@consbio.org> Check your subdomain regex. Right now, if will only work if your subdomain is a dot ;) It should be (?[^\.]*) or (?[^\.]+) _Nik On 3/26/2013 5:56 AM, huttarichard wrote: > Hi guys, > > I have question. My server_name looks like this: > > server_name ~^(www\.)(?[^\.]*)\.(?[^\.]*)$; > > but I need, for my website do subdomains. I try: > > server_name ~^(www\.)?(?\.)(?[^\.]*)\.(?[^\.]*)$; > > but won't work for me. And what will be super, if subdomain emtpy se to > default (mean string "default"). > > Can me anybody help? > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237799,237799#msg-237799 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Tue Mar 26 18:01:42 2013 From: nginx-forum at nginx.us (huttarichard) Date: Tue, 26 Mar 2013 14:01:42 -0400 Subject: Sub-domain in variable In-Reply-To: <5151D25F.3070004@consbio.org> References: <5151D25F.3070004@consbio.org> Message-ID: Thanks lot, but still wont to work. And how i set $subdomain to "default" if subdomain doesnt exists? I try this: ~^(www\.)(?[^\.]*)(?[^\.]*)\.(?[^\.]*)$ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237799,237814#msg-237814 From nik.molnar at consbio.org Tue Mar 26 18:19:05 2013 From: nik.molnar at consbio.org (Nikolas Stevenson-Molnar) Date: Tue, 26 Mar 2013 11:19:05 -0700 Subject: Sub-domain in variable In-Reply-To: References: <5151D25F.3070004@consbio.org> Message-ID: <5151E699.4000407@consbio.org> The "www" part is probably causing a problem too. As the regex is written it will only match "www..." which I'm guessing isn't what you intent. I would instead treat "www" as another possible subdomain. 
~^(?[^\.]*\.)?(?[^\.]*)\.(?[^\.*)$ Note I haven't tested this, but it's similar to a pattern I've used. The $subdomain variable will be either the subdomain, "www", or will be empty. It would need some more tweaking to handle multiple subdomains (one.two.domain.com). _Nik On 3/26/2013 11:01 AM, huttarichard wrote: > Thanks lot, but still wont to work. > And how i set $subdomain to "default" if subdomain doesnt exists? > > I try this: > ~^(www\.)(?[^\.]*)(?[^\.]*)\.(?[^\.]*)$ > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237799,237814#msg-237814 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Tue Mar 26 18:42:54 2013 From: nginx-forum at nginx.us (huttarichard) Date: Tue, 26 Mar 2013 14:42:54 -0400 Subject: Sub-domain in variable In-Reply-To: <5151E699.4000407@consbio.org> References: <5151E699.4000407@consbio.org> Message-ID: <87a570dc36c1e0963187df5dc60a6406.NginxMailingListEnglish@forum.nginx.org> Ok, I love u :D! Work perfectly... I solved all thx :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237799,237816#msg-237816 From pravinmuppala at gmail.com Wed Mar 27 12:34:27 2013 From: pravinmuppala at gmail.com (praveenkumar Muppala) Date: Wed, 27 Mar 2013 18:04:27 +0530 Subject: ngx_slab_alloc() failed: no memory in cache keys zone "zone-xyz" Message-ID: Hi, We have a nginx1.0.5 version installed in our system. We are getting this error continuously in our nginx error log. ngx_slab_alloc() failed: no memory in cache keys zone "zone-xyz". I have increased this value to 20G, even 30G also still getting the same error. Can you help to fix this error please. -- Thanks in Advance, -Praveen Kumar.M -------------- next part -------------- An HTML attachment was scrubbed... URL: From hagaia at qwilt.com Wed Mar 27 13:32:32 2013 From: hagaia at qwilt.com (Hagai Avrahami) Date: Wed, 27 Mar 2013 15:32:32 +0200 Subject: Fwd: client_max_body_size In-Reply-To: References: Message-ID: Hi I am trying to resend (with small modification..) my request for help Many Thanks Hagai ---------------------------------------------------------------------------------------------- Hi Is there any way to deny all requests with body? I know I can set set client_max_body_size to 1 (byte) But.. in that case Nginx reads all body request before finalizing the request. In case of requests with body as part of attack I would like to close the connection immediately without wasting any processing on that request. *I thought changing the code (ngx_http_core_module.c:996) from:* if (r->headers_in.content_length_n != -1 && !r->discard_body && clcf->client_max_body_size && clcf->client_max_body_size < r->headers_in.content_length_n) { ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "client intended to send too large body: %O bytes", r->headers_in.content_length_n); (void) ngx_http_discard_request_body(r); ngx_http_finalize_request(r, NGX_HTTP_REQUEST_ENTITY_TOO_LARGE); return NGX_OK; } *To:* if (r->headers_in.content_length_n != -1 && !r->discard_body && clcf->client_max_body_size && clcf->client_max_body_size < r->headers_in.content_length_n) { ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "client intended to send too large body: %O bytes", r->headers_in.content_length_n); * ngx_connection_t* connection = r->connection; ngx_http_finalize_request(r, NGX_DONE); ngx_close_connection(connection);* return NGX_OK; } Is that cover all or more changes are needed? 
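Returning to the server_name thread above: the archive has stripped the names out of the named captures, which is why the patterns read as "(?[^\.]*)". A reconstruction of what the suggested pattern presumably looked like -- the capture names here are guesses:

    server_name ~^(?<subdomain>[^.]*\.)?(?<domain>[^.]*)\.(?<tld>[^.]*)$;

With this pattern $subdomain, when present, keeps its trailing dot (e.g. "www."), and is empty when the host is just domain.tld.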
Thanks Hagai -- *Hagai Avrahami* Qwilt | Work: +972-72-2221644| Mobile: +972-54-4895656 | hagaia at qwilt.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Mar 27 17:21:06 2013 From: nginx-forum at nginx.us (David.Neumann) Date: Wed, 27 Mar 2013 13:21:06 -0400 Subject: Error when proxying websocket connection over unix socket Message-ID: Hello everybody, I found that if you try to proxy a websocket connection over a unix socket you get an error. This is due to the fact that nginx tries to set TCP_NODELAY for the unix socket in ngx_http_upstream_upgrade in line 2436 This probably should not be done for unix sockets. Best Regards David Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237839,237839#msg-237839 From ru at nginx.com Wed Mar 27 17:49:43 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 27 Mar 2013 21:49:43 +0400 Subject: Error when proxying websocket connection over unix socket In-Reply-To: References: Message-ID: <20130327174943.GB31457@lo0.su> On Wed, Mar 27, 2013 at 01:21:06PM -0400, David.Neumann wrote: > Hello everybody, > > I found that if you try to proxy a websocket connection over a unix socket > you get an error. > This is due to the fact that nginx tries to set TCP_NODELAY for the unix > socket in > ngx_http_upstream_upgrade > in line 2436 > > This probably should not be done for unix sockets. You're too late with your report: https://trac.nginx.org/nginx/changeset/5143/nginx :-) From nginx-forum at nginx.us Thu Mar 28 09:04:56 2013 From: nginx-forum at nginx.us (selphon) Date: Thu, 28 Mar 2013 05:04:56 -0400 Subject: How to make nginx establish persistent connections with squid? In-Reply-To: <20130325112657.GN62550@mdounin.ru> References: <20130325112657.GN62550@mdounin.ru> Message-ID: <8f1fd29696418f08ae2c63ec2eb3cf9d.NginxMailingListEnglish@forum.nginx.org> Thank you~ As the test show, squid can make keepalive connections base on HTTP/1.0, it is a character of squid 2.7.9. Unfortunately, nginx can not make keepalive connections with squid 2.7.9, I think. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237666,237869#msg-237869 From nginx-forum at nginx.us Thu Mar 28 09:45:52 2013 From: nginx-forum at nginx.us (Larry) Date: Thu, 28 Mar 2013 05:45:52 -0400 Subject: Why use haproxy now ? Message-ID: Hi, I made a lot of reading and comparisons. But I still cannot understand why, in 2013 and with the latest version of nginx, we would still need haproxy in front of it. Nginx has it all to handle high traffic loads + very good load balancer. Could someone help/explain what I am missing ? Even Varnish.. nginx can cache too. Absolutely not a fanboy, just someone trying to understand different statements and why people want different layers when one seems enough. Thanks, Larry Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237874,237874#msg-237874 From r at roze.lv Thu Mar 28 10:10:18 2013 From: r at roze.lv (Reinis Rozitis) Date: Thu, 28 Mar 2013 12:10:18 +0200 Subject: Why use haproxy now ? In-Reply-To: References: Message-ID: <0E835E818CA74F99ACF66EC31FBDF7C5@MasterPC> > But I still cannot understand why, in 2013 and with the latest version of > nginx, we would still need haproxy in front of it. You don't need it is just a thing of preference or needs / that is also why we don't have a single webserver or database server software. 
But to name few advantages (at least for me) why I would use (and am doing it) haproxy as a balancer - it has more refined backend status/administrative page ( http://demo.1wt.eu/ (without the admin features)). The nginx upstream module is lacking in this area and for now (as far as I know) you can only get info only via logging. You have detailed information of what's up and what's down / how many failures there have been. Also you can easily bring down any backends without the need to change configuration (in case of nginx would need to rewrite the config and do a reload). > Even Varnish.. nginx can cache too. As to varnish - I preffer the memory mapped file instead of nginx approach of creating a file for each cachable object in filesystem. rr From lukas.herbolt at etnetera.cz Thu Mar 28 10:23:17 2013 From: lukas.herbolt at etnetera.cz (=?UTF-8?B?SGXFmWJvbHQsIEx1a8OhxaE=?=) Date: Thu, 28 Mar 2013 11:23:17 +0100 Subject: Why use haproxy now ? In-Reply-To: <0E835E818CA74F99ACF66EC31FBDF7C5@MasterPC> References: <0E835E818CA74F99ACF66EC31FBDF7C5@MasterPC> Message-ID: Hi, actually in our setup we use NGINX as SSL termination before HAProxy. HAProxy have some features that Nginx still doesn't have. Like backend max connections and frontend queue. So you can do throtlling to prevent your backend server high load and keep request from client in front. So the didn't get HTTP 500. Another feature is splice system call, which makes HAProxy really fast with low system load. Lukas On 28 March 2013 11:10, Reinis Rozitis wrote: > But I still cannot understand why, in 2013 and with the latest version of >> nginx, we would still need haproxy in front of it. >> > > You don't need it is just a thing of preference or needs / that is also > why we don't have a single webserver or database server software. > > > But to name few advantages (at least for me) why I would use (and am doing > it) haproxy as a balancer - it has more refined backend > status/administrative page ( http://demo.1wt.eu/ (without the admin > features)). > The nginx upstream module is lacking in this area and for now (as far as I > know) you can only get info only via logging. > > You have detailed information of what's up and what's down / how many > failures there have been. Also you can easily bring down any backends > without the need to change configuration (in case of nginx would need to > rewrite the config and do a reload). > > > > > Even Varnish.. nginx can cache too. >> > > As to varnish - I preffer the memory mapped file instead of nginx approach > of creating a file for each cachable object in filesystem. > > > rr > ______________________________**_________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/**mailman/listinfo/nginx > -- Luk?? He?bolt Linux Administrator ET NETERA | smart e-business [a] Milady Hor?kov? 108, 160 00 Praha 6 [t] +420 725 267 158 [i] www.etnetera.cz ~ [www.ifortuna.cz | www.o2.cz | www.datart.cz ] [www.skodaplus.cz | www.nivea.cz | www.allianz.cz] Created by ET NETERA | Powered by jNetPublish -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Thu Mar 28 11:21:39 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 28 Mar 2013 12:21:39 +0100 Subject: Why use haproxy now ? In-Reply-To: References: Message-ID: Very simple: features. haproxy has a huge list of features for reverse proxying that nginx hasn't, varnish has the same for caching. If you can do everything with nginx, go for it. 
But for more complex scenarios and if you really need the highest possible performance, you probably wanna stick to what the particular product does best. For example: haproxy does tcp splicing, that means the http payload not even touches the user-space, and the kernel just does a zero copy. Are you able to forward 20Gbps with nginx on a single machine? I doubt that. Lukas From brian at akins.org Thu Mar 28 11:22:15 2013 From: brian at akins.org (Brian Akins) Date: Thu, 28 Mar 2013 07:22:15 -0400 Subject: Why use haproxy now ? In-Reply-To: References: <0E835E818CA74F99ACF66EC31FBDF7C5@MasterPC> Message-ID: I use haproxy for alot of non-HTTP load balancing. From brian at akins.org Thu Mar 28 11:23:08 2013 From: brian at akins.org (Brian Akins) Date: Thu, 28 Mar 2013 07:23:08 -0400 Subject: Why use haproxy now ? In-Reply-To: References: Message-ID: On Thu, Mar 28, 2013 at 7:21 AM, Lukas Tribus wrote: > Are you able to forward 20Gbps with nginx on a single machine? > I doubt that. Why would you doubt that? Of course, my machines may be bigger than the norm... From luky-37 at hotmail.com Thu Mar 28 11:57:57 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 28 Mar 2013 12:57:57 +0100 Subject: Why use haproxy now ? In-Reply-To: References: , , Message-ID: > Why would you doubt that? Of course, my machines may be bigger than the norm... Because nginx doesn't do tcp splicing. Is my assumption wrong; are you able to forward 20Gbps with nginx? Then yes, probably you have huge hardware, which isn't necessary with haproxy. From flygoast at 126.com Thu Mar 28 12:13:23 2013 From: flygoast at 126.com (flygoast) Date: Thu, 28 Mar 2013 20:13:23 +0800 (CST) Subject: ngx_slab_alloc() failed: no memory in cache keys zone "zone-xyz" In-Reply-To: References: Message-ID: <70a44082.20816.13db0eab015.Coremail.flygoast@126.com> Did you tune the shared memory size in proxy_cache_path? At 2013-03-27 20:34:27,"praveenkumar Muppala" wrote: Hi, We have a nginx1.0.5 version installed in our system. We are getting this error continuously in our nginx error log. ngx_slab_alloc() failed: no memory in cache keys zone "zone-xyz". I have increased this value to 20G, even 30G also still getting the same error. Can you help to fix this error please. -- Thanks in Advance, -Praveen Kumar.M -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew at nginx.com Thu Mar 28 12:16:42 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Thu, 28 Mar 2013 16:16:42 +0400 Subject: Why use haproxy now ? In-Reply-To: References: , , Message-ID: <0513F68C-F366-4377-9120-9A31CD6E30AB@nginx.com> On Mar 28, 2013, at 3:57 PM, Lukas Tribus wrote: >> Why would you doubt that? Of course, my machines may be bigger than the norm... > > Because nginx doesn't do tcp splicing. Is my assumption wrong; are you able to > forward 20Gbps with nginx? Then yes, probably you have huge hardware, which isn't > necessary with haproxy. Just curious, are you referring to "splice-auto" or just "splice-response"? I'd assume "splice-response" sort of disables response buffering and it might be useful indeed if you've got fast clients and fast servers. I wonder know what happens with slow clients/fast servers tho :) Also, to the best of my understanding, both Linux kernel version and network card present a lot of specifics in regards to how splice is used. 
From rkearsley at blueyonder.co.uk Thu Mar 28 12:26:50 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Thu, 28 Mar 2013 12:26:50 +0000 Subject: Why use haproxy now ? In-Reply-To: References: , , Message-ID: <5154370A.808@blueyonder.co.uk> Hi I actually did some quite in-depth comparison with splice() sys call (only available on linux btw), between nginx and haproxy, and even wrote a small standalone proxy server that uses it There was some improvement, but not on the scale that would make it a deciding factor The thing that makes most difference to forwarding is your network card, and if it supports LRO (large receive offload) - if you're using a 10G lan card, it probably has it, anything less probably doesn't I've attached my results, the test was proxying a file a certain amount of times, and I would log how much cpu time was used (ab -n 1000 -c 10 192.168.1.101:8001/10MB.zip) RTL = onboard realtek (they are crap) INTEL = intel 1000CT ($30 thing) LIN = Linux (3.6.something) BSD = FreeBSD 9.0 HA = Haproxy (latest 1.5 dev version at the time) NGX = Nginx 1.3.something PS = splice() proxy that I wrote SPL/BUF/OFF = mode either splice, buffer or off/on (nginx proxy_buffering) Afterwards I got some 10G cards to test and it was (by probably 80-90%) faster at all tests On 28/03/13 11:57, Lukas Tribus wrote: > Because nginx doesn't do tcp splicing. Is my assumption wrong; are you able to > forward 20Gbps with nginx? Then yes, probably you have huge hardware, which isn't > necessary with haproxy. > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- A non-text attachment was scrubbed... Name: graph.png Type: image/png Size: 57537 bytes Desc: not available URL: From lukas.herbolt at etnetera.cz Thu Mar 28 12:30:26 2013 From: lukas.herbolt at etnetera.cz (=?UTF-8?B?SGXFmWJvbHQsIEx1a8OhxaE=?=) Date: Thu, 28 Mar 2013 13:30:26 +0100 Subject: Why use haproxy now ? In-Reply-To: <0513F68C-F366-4377-9120-9A31CD6E30AB@nginx.com> References: <0513F68C-F366-4377-9120-9A31CD6E30AB@nginx.com> Message-ID: > Also, to the best of my understanding, both Linux kernel version and network card present a lot of specifics in regards to how > splice is used. Kernel, yes. The fist splice was implemented in 2.6.17 but it was buggy. So it is not recommended to use it. Reimplementation was done in 3.5 and since that version everything works fine. I'm not sure how much it depends on NIC. I assume it would'n be much difference, more importatnt is tcp offloading support. On 28 March 2013 13:16, Andrew Alexeev wrote: > On Mar 28, 2013, at 3:57 PM, Lukas Tribus wrote: > > >> Why would you doubt that? Of course, my machines may be bigger than the > norm... > > > > Because nginx doesn't do tcp splicing. Is my assumption wrong; are you > able to > > forward 20Gbps with nginx? Then yes, probably you have huge hardware, > which isn't > > necessary with haproxy. > > Just curious, are you referring to "splice-auto" or just "splice-response"? > > I'd assume "splice-response" sort of disables response buffering and it > might be useful indeed if you've got fast clients and fast servers. I > wonder know what happens with slow clients/fast servers tho :) > > Also, to the best of my understanding, both Linux kernel version and > network card present a lot of specifics in regards to how splice is used. 
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Luk?? He?bolt Linux Administrator ET NETERA | smart e-business [a] Milady Hor?kov? 108, 160 00 Praha 6 [t] +420 725 267 158 [i] www.etnetera.cz ~ [www.ifortuna.cz | www.o2.cz | www.datart.cz ] [www.skodaplus.cz | www.nivea.cz | www.allianz.cz] Created by ET NETERA | Powered by jNetPublish -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Mar 28 13:53:22 2013 From: nginx-forum at nginx.us (Larry) Date: Thu, 28 Mar 2013 09:53:22 -0400 Subject: Why use haproxy now ? In-Reply-To: References: Message-ID: Did anyone had problems with upstream modules ? There is backup servers, least_conn and other fancy things. Isn't it as efficient as Haproxy (open question)? I read carefully, maybe not enough, what you all said, but, just cannot understand how it comes nginx cannot perform as well as haproxy to serve lot of connections. Tcp splicing is not really useable for everyone running on stable debian 6. Here is my scenario : I just nginx for just everyhting I have to deal with. If I don't want php, is use lua for simple things or tough rewriting. I use nginx as a routing engine on another server. And still use it to serve static files on my private cdn. It doesn't do round robin but least_conn to share the load evenly. My sessions are accessed by a database backend with memcached activated. This setup is soooooo simple and easy to maintain ! So far so good, really easy to setup, scripts know where to search/replace. But i don't want to miss anything. As for varnish : if you are on a static html page, then it is your browser cache that relays you. If it is semi static, chances are that you don't reuse the same part several times among different users due to personalization. And if you can split this sub part to serve something general enough, then the time that it calls varnish to serve it, nginx alone would have already done half the way to serve the file. If in this scenario Haproxy performs significantly better, then I am in thirst of knowledge. Cheers, Larry Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237874,237900#msg-237900 From kworthington at gmail.com Thu Mar 28 14:24:34 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Thu, 28 Mar 2013 10:24:34 -0400 Subject: nginx-1.3.15 In-Reply-To: <20130326132948.GO62550@mdounin.ru> References: <20130326132948.GO62550@mdounin.ru> Message-ID: Hello Nginx Users, Now available: Nginx 1.3.15 For Windows http://goo.gl/RqVQ7 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Mar 26, 2013 at 9:29 AM, Maxim Dounin wrote: > Changes with nginx 1.3.15 26 Mar > 2013 > > *) Change: opening and closing a connection without sending any data in > it is no longer logged to access_log with error code 400. > > *) Feature: the ngx_http_spdy_module. > Thanks to Automattic for sponsoring this work. > > *) Feature: the "limit_req_status" and "limit_conn_status" directives. > Thanks to Nick Marden. 
> > *) Feature: the "image_filter_interlace" directive. > Thanks to Ian Babrou. > > *) Feature: $connections_waiting variable in the > ngx_http_stub_status_module. > > *) Feature: the mail proxy module now supports IPv6 backends. > > *) Bugfix: request body might be transmitted incorrectly when retrying > a > request to the next upstream server; the bug had appeared in 1.3.9. > Thanks to Piotr Sikora. > > *) Bugfix: in the "client_body_in_file_only" directive; the bug had > appeared in 1.3.9. > > *) Bugfix: responses might hang if subrequests were used and a DNS > error > happened during subrequest processing. > Thanks to Lanshun Zhou. > > *) Bugfix: in backend usage accounting. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Thu Mar 28 14:32:22 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 28 Mar 2013 18:32:22 +0400 Subject: ngx_slab_alloc() failed: no memory in cache keys zone "zone-xyz" In-Reply-To: References: Message-ID: <201303281832.22834.vbart@nginx.com> On Wednesday 27 March 2013 16:34:27 praveenkumar Muppala wrote: > Hi, > > We have a nginx1.0.5 version installed in our system. We are getting this > error continuously in our nginx error log. ngx_slab_alloc() failed: no > memory in cache keys zone "zone-xyz". I have increased this value to 20G, > even 30G also still getting the same error. Can you help to fix this error > please. Do you really have 30 gigabytes of RAM? Why do you need such a big zone? You are probably confuse "keys_zone" with max cache size. wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html From r at roze.lv Thu Mar 28 15:20:45 2013 From: r at roze.lv (Reinis Rozitis) Date: Thu, 28 Mar 2013 17:20:45 +0200 Subject: Why use haproxy now ? In-Reply-To: References: Message-ID: > There is backup servers, least_conn and other fancy things. Isn't it as > efficient as Haproxy (open question)? The simple fact that you are not actually (externaly) able to tell if/how many backends are down should answer your question. You also have to use third party modules for active health checks - the default Upstream considers a backend down only after failing (configured amount of times) actual client requests - both varnish and haproxy allow you to avoid this by having such functionality in the core. > As for varnish : if you are on a static html page, then it is your browser > cache that relays you. If it is semi static, chances are that you don't > reuse the same part several times among different users due to > personalization. And if you can split this sub part to serve something > general enough, then the time that it calls varnish to serve it, nginx > alone would have already done half the way to serve the file. You cover only a part of "caching". Besides parts of html (which in my opinion using nginx with SSI is somewhat more complicated (due to single location/if limitations) than varnish ESI implementation though you can probably work arround it using the agentzh openresty module) varnish can just work as an accelerator for static content. While of course nginx can do the same again you have to use a third party module for cache invalidation (not saying it's a bad thing). Also the cache residing 1:1 on the filesystem makes it problematic in setups where you have a lot of cachable objects. 
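To make Valentin's keys_zone point above concrete, a sketch (path, zone name and sizes are examples only):

    proxy_cache_path /var/cache/nginx/zone-xyz levels=1:2
                     keys_zone=zone-xyz:256m max_size=20g inactive=60m;

keys_zone= sizes only the shared memory holding the cache keys and metadata (roughly 8000 keys per megabyte), while max_size= limits how much the cached responses may occupy on disk; "no memory in cache keys zone" means the former is exhausted, not the latter.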
At least in my case the nginx cache manager process took way too much resources/io when traversing the directory tree with few milion files versus storing them all in a single mmaped file. > Here is my scenario : I just nginx for just everyhting I have to deal > with. Good for you, but what is the goal of your mail? Don't get me wrong nginx is a stellar software and one of the best webservers but it doesnt mean it needs to do everything or sticked everywhere even the active community and the increasing ammount of modules (would) allow that :) rr From nginx-forum at nginx.us Thu Mar 28 15:38:40 2013 From: nginx-forum at nginx.us (Larry) Date: Thu, 28 Mar 2013 11:38:40 -0400 Subject: Why use haproxy now ? In-Reply-To: References: Message-ID: Okay, You, as others did, gave really good reason why haproxy + varnish + nginx should be good together. But seems a real hassle to setup and maintain... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237874,237911#msg-237911 From luky-37 at hotmail.com Thu Mar 28 17:38:06 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 28 Mar 2013 18:38:06 +0100 Subject: How to make nginx establish persistent connections with squid? In-Reply-To: <8f1fd29696418f08ae2c63ec2eb3cf9d.NginxMailingListEnglish@forum.nginx.org> References: <20130325112657.GN62550@mdounin.ru>, <8f1fd29696418f08ae2c63ec2eb3cf9d.NginxMailingListEnglish@forum.nginx.org> Message-ID: Upgrade to >= squid 3.2, which seems to support HTTP/1.1 and you will have your persistent connections with squid: http://www.squid-cache.org/mail-archive/squid-users/201108/0061.html http://wiki.squid-cache.org/Squid-3.2 From nginx-forum at nginx.us Fri Mar 29 00:33:56 2013 From: nginx-forum at nginx.us (x64architecture) Date: Thu, 28 Mar 2013 20:33:56 -0400 Subject: Compiling Nginx on Windows 7 Message-ID: Im experiencing issues with compiling Nginx on Windows 7, every thing goes good until nmake -f objs/Makefile. I get the following error: Generating Code... link -lib -out:pcre.lib -verbose:lib pcre_*.obj /usr/bin/link: invalid option -- l Try `/usr/bin/link --help' for more information. NMAKE : fatal error U1077: 'C:\MinGW\msys\1.0\bin\link.EXE' : return code '0x1' Stop. NMAKE : fatal error U1077: '"c:\Program Files (x86)\Microsoft Visual Studio 10.0 \VC\BIN\nmake.exe"' : return code '0x2' Stop. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237923,237923#msg-237923 From nginx-forum at nginx.us Fri Mar 29 01:36:31 2013 From: nginx-forum at nginx.us (michael.heuberger) Date: Thu, 28 Mar 2013 21:36:31 -0400 Subject: How to investigate upstream timed out issues? In-Reply-To: References: <02cd8251ff47402a7f19ac7dde4ea0e2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1800e983dd6919b01ca5860424d14646.NginxMailingListEnglish@forum.nginx.org> No it works with port 4443 (I have opened it somewhere else) but am getting another wget error message: > wget https://videomail.io:4443/socket.io/socket.io.v0.9.11.js --2013-03-29 14:35:00-- https://videomail.io:4443/socket.io/socket.io.v0.9.11.js Resolving videomail.io (videomail.io)... 103.6.212.124 Connecting to videomail.io (videomail.io)|103.6.212.124|:4443... connected. ERROR: cannot verify videomail.io's certificate, issued by `/C=US/O=GeoTrust, Inc./CN=RapidSSL CA': Unable to locally verify the issuer's authority. To connect to videomail.io insecurely, use `--no-check-certificate'. It's related to the certificate. No idea what's wrong? How can I examine this deeper? 
Cheers Michael Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237385,237924#msg-237924 From praveen.yarlagadda at gmail.com Fri Mar 29 04:44:59 2013 From: praveen.yarlagadda at gmail.com (Praveen Yarlagadda) Date: Thu, 28 Mar 2013 21:44:59 -0700 Subject: jpeg image quality is bad Message-ID: Hi there! I'm playing around with nginx and I'm running into a problem related to image uploading. I have nginx as a load balancer and java server (jetty, spring based) as the backend server. When I upload an image (JPEG) using POST method via nginx, the quality gets dropped a lot. Please take a look at the attached image. If I upload it directly to the backend server, it gets saved properly. Am I missing anything in the conf file? I really appreciate your help. Here is my nginx.conf. *http {* * include /etc/nginx/mime.types;* * default_type application/octet-stream;* * * * #access_log /dev/null main;* * access_log /data/logs/nginx/access.log main;* * log_not_found off;* * * * sendfile on;* * #tcp_nopush on;* * * * keepalive_timeout 65;* * * * client_max_body_size 10M;* * * * * * #gzip on;* * * * include /etc/nginx/conf.d/*.conf;* * * * * * upstream backend {* * ip_hash;* * server example.com:7070;* * }* * * * * * server {* * listen 80;* * server_name example.com;* * * * location / {* * proxy_pass http://backend;* * proxy_buffering on;* * }* * * * }* * * * server {* * listen 80;* * server_name ~.*;* * location / {* * access_log off;* * return 503;* * }* * }* *}* Thanks a lot! -Praveen -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: low_quality_image.jpg Type: image/jpeg Size: 26332 bytes Desc: not available URL: From steve at greengecko.co.nz Fri Mar 29 04:56:54 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Fri, 29 Mar 2013 17:56:54 +1300 Subject: jpeg image quality is bad In-Reply-To: References: Message-ID: <1364533014.5117.2767.camel@steve-new> Hi, On Thu, 2013-03-28 at 21:44 -0700, Praveen Yarlagadda wrote: > Hi there! > > > I'm playing around with nginx and I'm running into a problem related > to image uploading. I have nginx as a load balancer and java server > (jetty, spring based) as the backend server. When I upload an image > (JPEG) using POST method via nginx, the quality gets dropped a lot. > Please take a look at the attached image. If I upload it directly to > the backend server, it gets saved properly. Am I missing anything in > the conf file? I really appreciate your help. Here is my nginx.conf. > > > http { > include /etc/nginx/mime.types; > default_type application/octet-stream; > > > #access_log /dev/null main; > access_log /data/logs/nginx/access.log main; > log_not_found off; > > > sendfile on; > #tcp_nopush on; > > > keepalive_timeout 65; > > > client_max_body_size 10M; > > > > > #gzip on; > > > include /etc/nginx/conf.d/*.conf; > > > > > upstream backend { > ip_hash; > server example.com:7070; > } > > > > > server { > listen 80; > server_name example.com; > > > location / { > proxy_pass http://backend; > proxy_buffering on; > } > > > } > > > server { > listen 80; > server_name ~.*; > location / { > access_log off; > return 503; > } > } > } > > > Thanks a lot! > > > -Praveen > _______________________________________________ nginx does nothing whatsoever to images. They're just files as far as it's concerned. It looks like your site must post-process them in some way... resample, reduce quality, etc??? 
Cheers, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 6189 bytes Desc: not available URL: From nginx-forum at nginx.us Fri Mar 29 07:01:17 2013 From: nginx-forum at nginx.us (x64architecture) Date: Fri, 29 Mar 2013 03:01:17 -0400 Subject: Compiling Nginx on Windows 7 In-Reply-To: References: Message-ID: <0db9ea7df56ba06b7ad13c693aeccf3f.NginxMailingListEnglish@forum.nginx.org> To fix the error open a MSYS Bash and run the following command: mv /usr/bin/link.exe /usr/bin/link_.exe Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237923,237929#msg-237929 From igor at sysoev.ru Fri Mar 29 07:19:33 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 29 Mar 2013 11:19:33 +0400 Subject: Why use haproxy now ? In-Reply-To: References: Message-ID: On Mar 28, 2013, at 19:20 , Reinis Rozitis wrote: > Also the cache residing 1:1 on the filesystem makes it problematic in setups where you have a lot of cachable objects. At least in my case the nginx cache manager process took way too much resources/io when traversing the directory tree with few milion files versus storing them all in a single mmaped file. Did you try nginx cache since version 1.1.0? Changes with nginx 1.1.0 01 Aug 2011 *) Feature: cache loader run time decrease. BTW, do you use Varnish persistent cache? -- Igor Sysoev http://nginx.com/services.html From lukas.herbolt at etnetera.cz Fri Mar 29 09:56:13 2013 From: lukas.herbolt at etnetera.cz (=?UTF-8?B?SGXFmWJvbHQsIEx1a8OhxaE=?=) Date: Fri, 29 Mar 2013 10:56:13 +0100 Subject: Why use haproxy now ? In-Reply-To: References: Message-ID: Yes and no, persistent cache is marked as experimental. And actually we are testing Apache Traffic Server as cache server. As I said before nginx is great http server and good proxy but haproxy has more features. I hope that nginx will be as good as haproxy in proxy mode. But this time is slower and has less features so we used it for SSL termination. Lukas Herbolt etnetera On 29 March 2013 08:19, Igor Sysoev wrote: > On Mar 28, 2013, at 19:20 , Reinis Rozitis wrote: > > > Also the cache residing 1:1 on the filesystem makes it problematic in > setups where you have a lot of cachable objects. At least in my case the > nginx cache manager process took way too much resources/io when traversing > the directory tree with few milion files versus storing them all in a > single mmaped file. > > Did you try nginx cache since version 1.1.0? > > Changes with nginx 1.1.0 01 Aug > 2011 > > *) Feature: cache loader run time decrease. > > BTW, do you use Varnish persistent cache? > > > -- > Igor Sysoev > http://nginx.com/services.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Luk?? He?bolt Linux Administrator ET NETERA | smart e-business [a] Milady Hor?kov? 108, 160 00 Praha 6 [t] +420 725 267 158 [i] www.etnetera.cz ~ [www.ifortuna.cz | www.o2.cz | www.datart.cz ] [www.skodaplus.cz | www.nivea.cz | www.allianz.cz] Created by ET NETERA | Powered by jNetPublish -------------- next part -------------- An HTML attachment was scrubbed... URL: From shangtefa at gmail.com Fri Mar 29 10:04:54 2013 From: shangtefa at gmail.com (MCoder) Date: Fri, 29 Mar 2013 18:04:54 +0800 Subject: why socket for communication between master and worker used the so large memory? 
Message-ID: my nginx is just a http tunul proxy http connection, and max connection is just lower than 100. # ps aux root 19849 0.0 0.0 18028 2452 ? Ss 2012 0:00 nginx: master process /usr/local/qqwebsrv/nginx/sbin/nginx nobody 25389 0.1 0.0 19752 4104 ? S Mar25 9:07 nginx: worker process nobody 25390 0.1 0.0 19752 4104 ? S Mar25 9:03 nginx: worker process nobody 25391 0.1 0.0 19752 4108 ? S Mar25 8:46 nginx: worker process nobody 25392 0.1 0.0 19760 4116 ? S Mar25 8:58 nginx: worker process # lsof | grep nginx | grep socket nginx 19849 root 3w unix 0xffff8102b7574380 1677964948 socket nginx 19849 root 4u unix 0xffff81047e144f00 1677964949 socket nginx 19849 root 6u unix 0xffff8102dce9dcc0 1677964950 socket nginx 19849 root 7u unix 0xffff81047e532340 1677964951 socket nginx 19849 root 8u unix 0xffff81027785a980 1677964954 socket nginx 19849 root 9u unix 0xffff81047bd8ac80 1677964955 socket nginx 19849 root 10u unix 0xffff8102b7574900 1677964957 socket nginx 19849 root 12w unix 0xffff81010e285100 1677964958 socket nginx 25389 nobody 3u unix 0xffff8102dce9dcc0 1677964950 socket nginx 25389 nobody 4u unix 0xffff81047e144f00 1677964949 socket nginx 25389 nobody 7u unix 0xffff81027785a980 1677964954 socket nginx 25389 nobody 8u unix 0xffff8102b7574900 1677964957 socket nginx 25390 nobody 3u unix 0xffff8102b7574380 1677964948 socket nginx 25390 nobody 4u unix 0xffff81027785a980 1677964954 socket nginx 25390 nobody 6u unix 0xffff8102b7574900 1677964957 socket nginx 25390 nobody 7u unix 0xffff81047e532340 1677964951 socket nginx 25391 nobody 3u unix 0xffff8102b7574380 1677964948 socket nginx 25391 nobody 4u unix 0xffff8102b7574900 1677964957 socket nginx 25391 nobody 6u unix 0xffff8102dce9dcc0 1677964950 socket nginx 25391 nobody 9u unix 0xffff81047bd8ac80 1677964955 socket nginx 25392 nobody 3u unix 0xffff8102b7574380 1677964948 socket nginx 25392 nobody 6u unix 0xffff8102dce9dcc0 1677964950 socket nginx 25392 nobody 8u unix 0xffff81027785a980 1677964954 socket nginx 25392 nobody 12u unix 0xffff81010e285100 1677964958 socket # lsof |awk '$1=="nginx" && $NF == "socket" {n[$6]=$7} END {for (i in n) {m += n[i]} print m / (1024 * 1024 * 1024)}' 12.5018 # netstat -na | grep ESTABLISHED | wc -l 87 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Mar 29 10:41:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 29 Mar 2013 14:41:26 +0400 Subject: why socket for communication between master and worker used the so large memory? In-Reply-To: References: Message-ID: <20130329104126.GA62550@mdounin.ru> Hello! On Fri, Mar 29, 2013 at 06:04:54PM +0800, MCoder wrote: > my nginx is just a http tunul proxy http connection, and max connection is > just lower than 100. > > # ps aux > root 19849 0.0 0.0 18028 2452 ? Ss 2012 0:00 nginx: > master process /usr/local/qqwebsrv/nginx/sbin/nginx > nobody 25389 0.1 0.0 19752 4104 ? S Mar25 9:07 nginx: > worker process > nobody 25390 0.1 0.0 19752 4104 ? S Mar25 9:03 nginx: > worker process > nobody 25391 0.1 0.0 19752 4108 ? S Mar25 8:46 nginx: > worker process > nobody 25392 0.1 0.0 19760 4116 ? S Mar25 8:58 nginx: > worker process > > # lsof | grep nginx | grep socket > nginx 19849 root 3w unix 0xffff8102b7574380 > 1677964948 socket [...] 
> nginx 25392 nobody 3u unix 0xffff8102b7574380 > 1677964948 socket > nginx 25392 nobody 6u unix 0xffff8102dce9dcc0 > 1677964950 socket > nginx 25392 nobody 8u unix 0xffff81027785a980 > 1677964954 socket > nginx 25392 nobody 12u unix 0xffff81010e285100 > 1677964958 socket > > # lsof |awk '$1=="nginx" && $NF == "socket" {n[$6]=$7} END {for (i in n) {m > += n[i]} print m / (1024 * 1024 * 1024)}' > 12.5018 What makes you think that what you are counting is memory? From here it looks like NODE column, with SIZE/OFF colum omitted for some reason (likely just empty). On a linux system here the output looks like: $ lsof | egrep 'socket|SIZE' COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME nginx 30299 mdounin 3u unix 0xffff880224f836c0 0t0 1953927 socket nginx 30299 mdounin 6u unix 0xffff880413ff0380 0t0 1953928 socket nginx 30300 mdounin 6u unix 0xffff880413ff0380 0t0 1953928 socket -- Maxim Dounin http://nginx.org/en/donation.html From shangtefa at gmail.com Fri Mar 29 11:14:49 2013 From: shangtefa at gmail.com (MCoder) Date: Fri, 29 Mar 2013 19:14:49 +0800 Subject: why socket for communication between master and worker used the so large memory? In-Reply-To: <20130329104126.GA62550@mdounin.ru> References: <20130329104126.GA62550@mdounin.ru> Message-ID: it's description in man lsof SIZE, SIZE/OFF, or OFFSET ... In other cases, files don't have true sizes - e.g., sockets, FIFOs, pipes - so lsof displays for their sizes the content amounts it finds in their kernel buffer descriptors (e.g., socket buffer size counts or TCP/IP window sizes.) ... 2013/3/29 Maxim Dounin > Hello! > > On Fri, Mar 29, 2013 at 06:04:54PM +0800, MCoder wrote: > > > my nginx is just a http tunul proxy http connection, and max connection > is > > just lower than 100. > > > > # ps aux > > root 19849 0.0 0.0 18028 2452 ? Ss 2012 0:00 nginx: > > master process /usr/local/qqwebsrv/nginx/sbin/nginx > > nobody 25389 0.1 0.0 19752 4104 ? S Mar25 9:07 nginx: > > worker process > > nobody 25390 0.1 0.0 19752 4104 ? S Mar25 9:03 nginx: > > worker process > > nobody 25391 0.1 0.0 19752 4108 ? S Mar25 8:46 nginx: > > worker process > > nobody 25392 0.1 0.0 19760 4116 ? S Mar25 8:58 nginx: > > worker process > > > > # lsof | grep nginx | grep socket > > nginx 19849 root 3w unix 0xffff8102b7574380 > > 1677964948 socket > > [...] > > > nginx 25392 nobody 3u unix 0xffff8102b7574380 > > 1677964948 socket > > nginx 25392 nobody 6u unix 0xffff8102dce9dcc0 > > 1677964950 socket > > nginx 25392 nobody 8u unix 0xffff81027785a980 > > 1677964954 socket > > nginx 25392 nobody 12u unix 0xffff81010e285100 > > 1677964958 socket > > > > # lsof |awk '$1=="nginx" && $NF == "socket" {n[$6]=$7} END {for (i in n) > {m > > += n[i]} print m / (1024 * 1024 * 1024)}' > > 12.5018 > > What makes you think that what you are counting is memory? From > here it looks like NODE column, with SIZE/OFF colum omitted for > some reason (likely just empty). On a linux system here the > output looks like: > > $ lsof | egrep 'socket|SIZE' > COMMAND PID USER FD TYPE DEVICE SIZE/OFF > NODE NAME > nginx 30299 mdounin 3u unix 0xffff880224f836c0 0t0 > 1953927 socket > nginx 30299 mdounin 6u unix 0xffff880413ff0380 0t0 > 1953928 socket > nginx 30300 mdounin 6u unix 0xffff880413ff0380 0t0 > 1953928 socket > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaurus at gmail.com Fri Mar 29 11:32:28 2013 From: skaurus at gmail.com (=?UTF-8?B?0JTQvNC40YLRgNC40Lkg0KjQsNC70LDRiNC+0LI=?=) Date: Fri, 29 Mar 2013 15:32:28 +0400 Subject: reset_timedout_connection Message-ID: Hi. Why is this parameter disabled by default? Best regards, Dmitriy Shalashov -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Mar 29 12:15:43 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 29 Mar 2013 16:15:43 +0400 Subject: why socket for communication between master and worker used the so large memory? In-Reply-To: References: <20130329104126.GA62550@mdounin.ru> Message-ID: <20130329121543.GB62550@mdounin.ru> Hello! On Fri, Mar 29, 2013 at 07:14:49PM +0800, MCoder wrote: > it's description in man lsof > > SIZE, SIZE/OFF, or OFFSET > ... In other cases, files don't have true sizes - e.g., sockets, > FIFOs, pipes - so lsof displays for their sizes the content amounts it > finds in their kernel buffer descriptors (e.g., socket buffer size counts > or TCP/IP window sizes.) ... And what makes you think that: a) Numbers given in SIZE/OFF column on your system for a particular type of object is memory used? Note that e.g. window size is not a memory, as well as socket buffer size. b) The particular column you are counting _is_ SIZE/OFF? > > > 2013/3/29 Maxim Dounin > > > Hello! > > > > On Fri, Mar 29, 2013 at 06:04:54PM +0800, MCoder wrote: > > > > > my nginx is just a http tunul proxy http connection, and max connection > > is > > > just lower than 100. > > > > > > # ps aux > > > root 19849 0.0 0.0 18028 2452 ? Ss 2012 0:00 nginx: > > > master process /usr/local/qqwebsrv/nginx/sbin/nginx > > > nobody 25389 0.1 0.0 19752 4104 ? S Mar25 9:07 nginx: > > > worker process > > > nobody 25390 0.1 0.0 19752 4104 ? S Mar25 9:03 nginx: > > > worker process > > > nobody 25391 0.1 0.0 19752 4108 ? S Mar25 8:46 nginx: > > > worker process > > > nobody 25392 0.1 0.0 19760 4116 ? S Mar25 8:58 nginx: > > > worker process > > > > > > # lsof | grep nginx | grep socket > > > nginx 19849 root 3w unix 0xffff8102b7574380 > > > 1677964948 socket > > > > [...] > > > > > nginx 25392 nobody 3u unix 0xffff8102b7574380 > > > 1677964948 socket > > > nginx 25392 nobody 6u unix 0xffff8102dce9dcc0 > > > 1677964950 socket > > > nginx 25392 nobody 8u unix 0xffff81027785a980 > > > 1677964954 socket > > > nginx 25392 nobody 12u unix 0xffff81010e285100 > > > 1677964958 socket > > > > > > # lsof |awk '$1=="nginx" && $NF == "socket" {n[$6]=$7} END {for (i in n) > > {m > > > += n[i]} print m / (1024 * 1024 * 1024)}' > > > 12.5018 > > > > What makes you think that what you are counting is memory? From > > here it looks like NODE column, with SIZE/OFF colum omitted for > > some reason (likely just empty). 
On a linux system here the > > output looks like: > > > > $ lsof | egrep 'socket|SIZE' > > COMMAND PID USER FD TYPE DEVICE SIZE/OFF > > NODE NAME > > nginx 30299 mdounin 3u unix 0xffff880224f836c0 0t0 > > 1953927 socket > > nginx 30299 mdounin 6u unix 0xffff880413ff0380 0t0 > > 1953928 socket > > nginx 30300 mdounin 6u unix 0xffff880413ff0380 0t0 > > 1953928 socket > > > > -- > > Maxim Dounin > > http://nginx.org/en/donation.html > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/en/donation.html From brian at akins.org Fri Mar 29 13:32:07 2013 From: brian at akins.org (Brian Akins) Date: Fri, 29 Mar 2013 09:32:07 -0400 Subject: Why use haproxy now ? In-Reply-To: References: Message-ID: We never really use nginx in straight proxy mode - we always have some munging or something to do to the request or response along with cacheing, etc. So, we'd wind up using nginx (or varnish) along with haproxy anyway and that's just an unneeded layer for us, right now. Apache TrafficServer looks interesting for similar use cases. We get great performance from nginx for our use cases. We continually test other technologies, but haven't found a reason to switch or augment it right now. in 6 months that may change, of course. For just straight up http proxy, I'd agree that haproxy is probably a better fit. Once you start needing to edit header or bodies in a programmatic fashion, I'd look at something else. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Mar 29 21:30:21 2013 From: nginx-forum at nginx.us (lblankers) Date: Fri, 29 Mar 2013 17:30:21 -0400 Subject: Mail proxy with SNI Message-ID: Hi, I would like to use nginx 1.2.1 with TLS SNI support to proxy SMTP submission for several different domains over SSL. I would expect that if I configure multiple servers with different server names that a TLS v1 client will select the correct one through SNI. However I always get the first certificate regardless of the hostname specified in ClientHello. Is there something wrong with my config? mail { auth_http 127.0.0.1/auth.php; smtp_auth login plain; smtp_capabilities "SIZE 10240000" "VRFY" "ETRN" "ENHANCEDSTATUSCODES" "8BITMIME" "DSN"; server { listen 587; server_name domain1.nl; protocol smtp; proxy on; starttls only; ssl_certificate /etc/nginx/ssl/domain1.crt; ssl_certificate_key /etc/nginx/ssl/domain1.key; } server { listen 587; server_name domain2.com; protocol smtp; proxy on; starttls only; ssl_certificate /etc/nginx/ssl/domain2.crt; ssl_certificate_key /etc/nginx/ssl/domain2.key; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237967,237967#msg-237967 From vbart at nginx.com Fri Mar 29 22:24:52 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 30 Mar 2013 02:24:52 +0400 Subject: Mail proxy with SNI In-Reply-To: References: Message-ID: <201303300224.52781.vbart@nginx.com> On Saturday 30 March 2013 01:30:21 lblankers wrote: > Hi, > > I would like to use nginx 1.2.1 with TLS SNI support to proxy SMTP > submission for several different domains over SSL. I would expect that if I > configure multiple servers with different server names that a TLS v1 client > will select the correct one through SNI. 
However I always get the first > certificate regardless of the hostname specified in ClientHello. > > Is there something wrong with my config? > The problem is that TLS SNI currently is not supported in mail proxy. wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html From nginx+phil at spodhuis.org Sat Mar 30 00:11:56 2013 From: nginx+phil at spodhuis.org (Phil Pennock) Date: Fri, 29 Mar 2013 20:11:56 -0400 Subject: Mail proxy with SNI In-Reply-To: <201303300224.52781.vbart@nginx.com> References: <201303300224.52781.vbart@nginx.com> Message-ID: <20130330001156.GA26485@redoubt.spodhuis.org> On 2013-03-30 at 02:24 +0400, Valentin V. Bartenev wrote: > On Saturday 30 March 2013 01:30:21 lblankers wrote: > > I would like to use nginx 1.2.1 with TLS SNI support to proxy SMTP > > submission for several different domains over SSL. I would expect that if I > > configure multiple servers with different server names that a TLS v1 client > > will select the correct one through SNI. However I always get the first > > certificate regardless of the hostname specified in ClientHello. > > > > Is there something wrong with my config? > > > > The problem is that TLS SNI currently is not supported in mail proxy. If someone needs TLS SNI with SMTP right now, Exim supports this. It's not designed to be as scalable as nginx in performance, but it does okay for most folks' purposes. (Support added in 4.80, released 2012-05-31; 4.80.1 is current) From nginx-forum at nginx.us Sat Mar 30 08:33:30 2013 From: nginx-forum at nginx.us (lblankers) Date: Sat, 30 Mar 2013 04:33:30 -0400 Subject: Mail proxy with SNI In-Reply-To: <20130330001156.GA26485@redoubt.spodhuis.org> References: <20130330001156.GA26485@redoubt.spodhuis.org> Message-ID: <48c4d1fa1e08d7ee84ea69fa606be823.NginxMailingListEnglish@forum.nginx.org> On March 29, 2013 08:14PM Phil Pennock wrote: > On 2013-03-30 at 02:24 +0400, Valentin V. Bartenev wrote: > > On Saturday 30 March 2013 01:30:21 lblankers wrote: > > > I would like to use nginx 1.2.1 with TLS SNI support to proxy SMTP > > > submission for several different domains over SSL. I would expect that if I > > > configure multiple servers with different server names that a TLS v1 client > > > will select the correct one through SNI. However I always get the first > > > certificate regardless of the hostname specified in ClientHello. > > > > > > Is there something wrong with my config? > > > > > > > The problem is that TLS SNI currently is not supported in mail proxy. > > If someone needs TLS SNI with SMTP right now, Exim supports this. It's > not designed to be as scalable as nginx in performance, but it does okay > for most folks' purposes. Thanks for clearing that up. I would prefer to use nginx rather than switch to Exim because I would like to use nginx to proxy IMAP using SSL SNI as well. Would it be possible to add SNI to the mail proxy? I am doing this as a hobby project rather than professionally so getting multiple IPs in order to host multiple domains is prohibitively expensive. Both in one time cost (~ ? 100) and recurring cost (? 2.50 / month / IP). So if someone could suggest a cheaper solution (e.g. sponsoring a developer to add this feature) I would very much appreciate that. Laurens Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237967,237972#msg-237972 From r at roze.lv Sat Mar 30 10:17:07 2013 From: r at roze.lv (Reinis Rozitis) Date: Sat, 30 Mar 2013 12:17:07 +0200 Subject: Why use haproxy now ? 
In-Reply-To:
References:
Message-ID: <6B92BA8B64E54DAD8CADC31585382718@MezhRoze>

> Did you try nginx cache since version 1.1.0?

Yes, but only for the fastcgi cache, therefore the file count isn't big
enough to make an impact. I'll try the static cache again with the current
version and see how it works out now.

> BTW, do you use Varnish persistent cache?

No, just a huge mmapped file ..
Since the instances get restarted very rarely (most have now over a year of
uptime) the result is basically the same without the persistent storage's
bad side effects/bugs.

rr

From ianevans at digitalhit.com Sat Mar 30 10:36:13 2013
From: ianevans at digitalhit.com (Ian M. Evans)
Date: Sat, 30 Mar 2013 06:36:13 -0400
Subject: fastcgi-cache and expires
Message-ID:

Just curious... In a lot of examples I've seen on the net for setting up
fastcgi-cache, the directives

fastcgi_ignore_headers Cache-Control Expires;
expires epoch;

are included. Is there any major reason for that? For my own setup I want
the fastcgi caching to reduce load on the server, but I'd also like to
perhaps fine-tune the browser caching with expires headers for some pages.

Before I change it, is there any huge reason "fastcgi_ignore_headers
Cache-Control Expires;" and "expires epoch;" should stay in my
fastcgi-params file? Thanks.

From r at roze.lv Sat Mar 30 10:59:04 2013
From: r at roze.lv (Reinis Rozitis)
Date: Sat, 30 Mar 2013 12:59:04 +0200
Subject: fastcgi-cache and expires
In-Reply-To:
References:
Message-ID:

> Before I change it, is there any huge reason "fastcgi_ignore_headers
> Cache-Control Expires;" and "expires epoch;" should stay in my
> fastcgi-params file?

nginx honours the headers the backend sends. Often dynamic applications
(like php when using sessions etc) send headers which "deny" any kind of
caching (Cache-Control: no-store, no-cache, must-revalidate, post-check=0,
pre-check=0 and an Expires header in the past), so to cache such responses
nginx has to override these headers.

In this case though, without the context, the 'expires epoch;' directive
doesn't make a lot of sense (at least to me), since it sets the Expires
header to 1 January, 1970 00:00:01 GMT, meaning it shouldn't be cached.

I always feel that setting the expire/cache headers in the application
(unless it is impossible to change the code) is more flexible and allows
you to avoid errors (caching something for too long, or caching a wrong
response) rather than overriding them in the proxy.

Also I wouldn't put the expire/header directives in the fastcgi_params file,
since that means you use them for every request, versus putting them in
some specific location {} blocks.

rr
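A minimal sketch of the per-location approach described above. The zone name
"one", the php-fpm socket path, the /news/ location and the lifetimes are
assumed placeholder values rather than settings from this thread; the
backend's anti-cache headers are overridden only inside the cached location,
and browser expiry is tuned there instead of with a blanket "expires epoch":

fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2 keys_zone=one:10m max_size=1g inactive=60m;

server {
    listen 80;
    server_name example.com;

    # Cache only this location; override the backend's anti-cache headers here
    # instead of doing it globally in fastcgi_params.
    location /news/ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;

        fastcgi_cache one;
        fastcgi_cache_key "$scheme$host$request_uri";
        fastcgi_cache_valid 200 10m;
        fastcgi_ignore_headers Cache-Control Expires;
        expires 10m;    # browser caching tuned per location, not with "expires epoch"
    }

    # Everywhere else the backend's own Cache-Control/Expires headers are honoured.
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }
}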
From contact at jpluscplusm.com Sat Mar 30 12:57:51 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Sat, 30 Mar 2013 12:57:51 +0000
Subject: Mail proxy with SNI
In-Reply-To: <48c4d1fa1e08d7ee84ea69fa606be823.NginxMailingListEnglish@forum.nginx.org>
References: <20130330001156.GA26485@redoubt.spodhuis.org>
	<48c4d1fa1e08d7ee84ea69fa606be823.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

A cheaper, non-commercially-viable option (which might be acceptable as you
indicate it's not a professional project) would just be to put different
domains' certs on different ports. A slight one-time setup annoyance to the
users, of course, but they shouldn't care if you're doing it for free.
Maybe.

--
Jonathan Matthews
Oxford, London, UK
http://www.jpluscplusm.com/contact.html

From luky-37 at hotmail.com Sat Mar 30 13:57:38 2013
From: luky-37 at hotmail.com (Lukas Tribus)
Date: Sat, 30 Mar 2013 14:57:38 +0100
Subject: Mail proxy with SNI
In-Reply-To:
References: <20130330001156.GA26485@redoubt.spodhuis.org>,
	<48c4d1fa1e08d7ee84ea69fa606be823.NginxMailingListEnglish@forum.nginx.org>,
Message-ID:

That may be a dumb question: but why do you use different host names in the
first place? Is it a real business requirement to have a host name per
domain?

Simply using a single host name for all domains would solve all your issues
here.

If this really is a business requirement for you (maybe the solution
shouldn't look "shared"), then there should be the money to buy dedicated
IP addresses.

I wouldn't rely on SNI anyway, because you never know if your clients are
all SNI capable; this is slowly improving for HTTPS, but SMTP/IMAP is
another story (see: nginx).

Regards,
Lukas

From nginx-forum at nginx.us Sat Mar 30 14:45:17 2013
From: nginx-forum at nginx.us (cavamondo)
Date: Sat, 30 Mar 2013 10:45:17 -0400
Subject: Nginx maintenance page/redirect with ip exception
Message-ID:

Coming from Apache/htaccess, I'm trying to get my head around the Nginx
paradigm.

I'm trying to achieve a temporary redirect to a www.mysite.com/maintenance.html
page for all visitors to www.mysite.com, except visitors coming from
IP xx.xx.xx.xx, which should have full access to the site.

I know how to achieve this with htaccess, but that doesn't apply to Nginx.
I have tried different suggestions on modding the nginx.conf, default.conf
and mysite.conf files. I'm using virtual hosts and my site conf is placed
in /etc/nginx/sites-available/.

Any simple solution to redirect all visitors except visitors defined by IP?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237978,237978#msg-237978
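One common way to do this is sketched below. The xx.xx.xx.xx placeholder is
the address from the question; the root path and port are assumptions.
Everyone except that address is flagged by a geo variable and answered with
a 503 that is internally rewritten to the maintenance page:

# geo {} lives at the http{} level
geo $maintenance {
    default        1;
    xx.xx.xx.xx    0;    # this address keeps full access
}

server {
    listen 80;
    server_name www.mysite.com;
    root /var/www/mysite;

    error_page 503 @maintenance;

    if ($maintenance) {
        return 503;
    }

    location @maintenance {
        rewrite ^ /maintenance.html break;
    }

    location / {
        # normal site configuration continues here
    }
}

Returning a 503 rather than a redirect also keeps crawlers from indexing the
maintenance page while the site is down.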
From meteor8488 at 163.com Sat Mar 30 15:17:22 2013
From: meteor8488 at 163.com (Meteor)
Date: Sat, 30 Mar 2013 23:17:22 +0800 (CST)
Subject: php-fpm take up all cpu resources
Message-ID: <623220b2.16b16.13dbbdfd8b5.Coremail.meteor8488@163.com>

Hi All,

These days php-fpm has been taking up all CPU resources several times a day.
Each php-fpm process may take up 25% CPU, which should be less than 5% in
normal status. Below is the php-fpm.log.

[30-Mar-2013 22:18:47] NOTICE: about to trace 29600
[30-Mar-2013 22:18:47] NOTICE: finished trace of 29600
[30-Mar-2013 22:18:47] NOTICE: child 29599 stopped for tracing
[30-Mar-2013 22:18:47] NOTICE: about to trace 29599
[30-Mar-2013 22:18:47] ERROR: failed to ptrace(PT_IO) pid 29599: Bad address (14)

At first I thought it was caused by mysql. But when php-fpm took up all the
CPU, I found that there were no sql processes running in mysql. So I'm
thinking this problem is caused by php-fpm.

I'm using FreeBSD, and I disabled 'ktrace' in the kernel. And the error for
php-fpm is that it failed to ptrace(PT_IO). Is it possible that this problem
is caused by ktrace?

Anyone can help?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Sat Mar 30 16:05:43 2013
From: nginx-forum at nginx.us (lblankers)
Date: Sat, 30 Mar 2013 12:05:43 -0400
Subject: Mail proxy with SNI
In-Reply-To:
References:
Message-ID:

On March 30, 2013 09:58AM Lukas Tribus wrote:
> That may be a dumb question: but why do you use different
> host names in the first place? Is it a real business
> requirement to have a host name per domain?

No such thing as dumb questions, only people who can't answer them :-)

I have multiple domains for email because the domains contain the family
name, and I host for both my own family as well as several 'in-laws' with
different family names.

On March 30, 2013 09:00AM Jonathan Matthews wrote:
> A cheaper, non-commercially-viable option (which might be
> acceptable as you indicate it's not a professional project)
> would just be to put different domains' certs on different
> ports. A slight one-time setup annoyance to the users, of
> course, but they shouldn't care if you're doing it for
> free. Maybe.

Yes, either using one domain or hosting on multiple ports would definitely
work. And since I am providing this service for free, the in-laws would not
complain. However, I prefer to keep my support duties to a minimum, and
neither of these solutions will work with the auto-configuration present in
almost all mail clients today. So if I can spend a bit of resources on
getting SNI to work, and hence auto-configuration, that would be beneficial
in the long run.

Laurens

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237967,237980#msg-237980

From r at roze.lv Sat Mar 30 16:16:02 2013
From: r at roze.lv (Reinis Rozitis)
Date: Sat, 30 Mar 2013 18:16:02 +0200
Subject: php-fpm take up all cpu resources
In-Reply-To: <623220b2.16b16.13dbbdfd8b5.Coremail.meteor8488@163.com>
References: <623220b2.16b16.13dbbdfd8b5.Coremail.meteor8488@163.com>
Message-ID:

> So I'm thinking this problem is caused by php-fpm.

This is not exactly a php list and not really related to nginx, but you
should start by at least telling us your php version; there have been a
bunch of bugs fixed regarding fpm and cpu hogging (also in php core itself)
in the past.

It can always be poorly written php code though, but without the fpm
backtrace it's hard to tell.

rr

From nginx-forum at nginx.us Sat Mar 30 17:17:20 2013
From: nginx-forum at nginx.us (esujit)
Date: Sat, 30 Mar 2013 13:17:20 -0400
Subject: default domain name
Message-ID: <48f3dbe3f670a79996511002a50ff970.NginxMailingListEnglish@forum.nginx.org>

I am trying out the proxy (without cache module) functionality in nginx.
Here is the problem I am encountering.

On the client machine the default domain name is company.com. When a user
types http://department in the browser, DNS resolves it to
department.company.com. When this http request gets intercepted by the
nginx proxy, it is not able to process the request and gives the error
"cannot resolve department". A tcpdump trace showed that the http request
has the HOST header without the domain name:

HOST: department

Is there any way we can specify a default domain name in the nginx config?
From further investigating I found that the server_name directive matches
the HOST header. Is there any rewrite regex we could use such that IF the
HOST header has a value without a domain name THEN the default domain name
is added?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237982,237982#msg-237982
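A possible sketch of that idea, assuming nginx is acting as the forward
proxy these requests pass through: a map appends the default domain to bare
host names. Here company.com comes from the question above, while the listen
port and resolver address are made-up placeholders:

# map {} lives at the http{} level
map $http_host $host_suffix {
    default      "";
    "~^[^.:]+$"  ".company.com";    # bare names without a dot get the default domain appended
}

server {
    listen 8080;

    # needed because proxy_pass below uses variables
    resolver 10.0.0.2;

    location / {
        proxy_set_header Host $http_host$host_suffix;
        proxy_pass http://$http_host$host_suffix$request_uri;
    }
}

The resolver directive is required here because proxy_pass with variables
defers host name resolution to run time.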
From nginx-forum at nginx.us Sat Mar 30 19:39:22 2013
From: nginx-forum at nginx.us (JCR)
Date: Sat, 30 Mar 2013 15:39:22 -0400
Subject: where is the pid file
Message-ID:

Hi,
on Linux CentOS 5 with nginx 1.2.7, I am struggling to find the pid file.

In my nginx.conf file I have

pid /usr/local/nginx/logs/nginx.pid;

but I don't see any nginx.pid in there. I tried to set that path at compile
time but to no avail.

Is there a command to know where that file is?
Why doesn't the above entry work?

Thanks!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237983,237983#msg-237983

From nginx-list at puzzled.xs4all.nl Sun Mar 31 01:13:13 2013
From: nginx-list at puzzled.xs4all.nl (Patrick Lists)
Date: Sun, 31 Mar 2013 03:13:13 +0200
Subject: where is the pid file
In-Reply-To:
References:
Message-ID: <51578DA9.3010006@puzzled.xs4all.nl>

On 03/30/2013 08:39 PM, JCR wrote:
> Hi,
> on Linux CentOS 5 with nginx 1.2.7, I am struggling to find the pid file.
>
> In my nginx.conf file I have
> pid /usr/local/nginx/logs/nginx.pid;
>
> but I don't see any nginx.pid in there. I tried to set that path at compile
> time but to no avail.
>
> Is there a command to know where that file is?
> Why doesn't the above entry work?

Have you checked the nginx init file in /etc/rc.d/init.d/ ?

Regards,
Patrick

From igor at sysoev.ru Sun Mar 31 06:05:22 2013
From: igor at sysoev.ru (Igor Sysoev)
Date: Sun, 31 Mar 2013 10:05:22 +0400
Subject: Why use haproxy now ?
In-Reply-To: <6B92BA8B64E54DAD8CADC31585382718@MezhRoze>
References: <6B92BA8B64E54DAD8CADC31585382718@MezhRoze>
Message-ID: <6C9E35BA-B419-4879-B624-C05EBE3BE1AD@sysoev.ru>

On Mar 30, 2013, at 14:17 , Reinis Rozitis wrote:

>> BTW, do you use Varnish persistent cache?
>
> No, just a huge mmapped file ..
> Since the instances get restarted very rarely (most have now over a year of
> uptime) the result is basically the same without the persistent storage's
> bad side effects/bugs.

How much larger is the cache than the host's physical memory?

--
Igor Sysoev
http://nginx.com/services.html

From nginx-forum at nginx.us Sun Mar 31 07:28:01 2013
From: nginx-forum at nginx.us (nicolas1390)
Date: Sun, 31 Mar 2013 03:28:01 -0400
Subject: How to create a cache server to caching downloaded files (with IDM)
Message-ID: <0e11b2fe7f362edb3842968c559ca01e.NginxMailingListEnglish@forum.nginx.org>

Hello to all friends.

How can I set up a cache server for caching downloaded files? I have tried
Squid and Polipo for it, but they are not working properly.

Squid can cache downloaded files when the file is downloaded without a
download manager (like IDM). When I use a download manager, Squid can't
cache the downloaded file. (Max connections number in IDM = 16.)

Please help me: how can I cache a downloaded file when it is downloaded
with IDM in multi-connection mode (max connections number = 16 or 8)?

My download speed is 500 KB/s (when I'm downloading files without Internet
Download Manager). If I use IDM (or any other download manager) and set the
connections number to 16, my download speed increases to 800 KB/s. I want
to download files at 800 KB/s.

If I set the connections number to 1, the Squid or Polipo cache works
properly, but my download speed decreases to 500 KB/s. And if I set the
connections number to 16 or any number greater than 1, the cache does not
work properly.

My cache server is only used with Internet Download Manager; it's not set
in the web browsers.

Thanks.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237986,237986#msg-237986

From r at roze.lv Sun Mar 31 10:11:59 2013
From: r at roze.lv (Reinis Rozitis)
Date: Sun, 31 Mar 2013 13:11:59 +0300
Subject: Why use haproxy now ?
In-Reply-To: <6C9E35BA-B419-4879-B624-C05EBE3BE1AD@sysoev.ru>
References: <6B92BA8B64E54DAD8CADC31585382718@MezhRoze>
	<6C9E35BA-B419-4879-B624-C05EBE3BE1AD@sysoev.ru>
Message-ID:

> How much larger is the cache than the host's physical memory?

32Gb of RAM and a 240Gb mapped file (fits on an SSD), with no swapping
involved.
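For comparison, a minimal nginx-side sketch of an SSD-backed static cache.
The paths, zone size, lifetimes and upstream address are illustrative
assumptions only, loosely inspired by the numbers in this thread rather
than anyone's actual setup:

# proxy_cache_path lives at the http{} level;
# keys_zone sizing assumes roughly 8000 keys per megabyte of shared memory
proxy_cache_path /ssd/nginx-cache levels=1:2 keys_zone=static:512m max_size=200g inactive=30d;

upstream backend {
    server 10.0.0.10:8080;    # assumed origin server
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_cache static;
        proxy_cache_valid 200 301 302 7d;
    }
}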
rr

From justin at specialbusservice.com Sun Mar 31 10:33:51 2013
From: justin at specialbusservice.com (Justin Cormack)
Date: Sun, 31 Mar 2013 11:33:51 +0100
Subject: "bug in glibc"
Message-ID:

There is a note in src/os/unix/ngx_user.c about a bug in glibc for crypt_r:

/* work around the glibc bug */
cd.current_salt[0] = ~salt[0];

value = crypt_r((char *) key, (char *) salt, &cd);

I was wondering if anyone knew what the bug was, as I am running on a
platform (Musl libc) that has got NGX_HAVE_GNU_CRYPT_R but has a different
implementation, in particular has no current_salt field in struct
crypt_data (and indeed the man page says you should treat it as opaque
except for the initialized field).

I was wondering exactly what the bug was as then I could write a test for
it rather than always including this code; I have not been able to find it
in the glibc bug tracker though.

Thanks

Justin
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Sun Mar 31 10:44:10 2013
From: nginx-forum at nginx.us (cavamondo)
Date: Sun, 31 Mar 2013 06:44:10 -0400
Subject: Nginx maintenance page/redirect with ip exception
In-Reply-To:
References:
Message-ID: <82670b202f642599ecd95ca2c35a59e7.NginxMailingListEnglish@forum.nginx.org>

Moved this post to the "How to" section.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237978,237990#msg-237990

From igor at sysoev.ru Sun Mar 31 14:43:43 2013
From: igor at sysoev.ru (Igor Sysoev)
Date: Sun, 31 Mar 2013 18:43:43 +0400
Subject: Why use haproxy now ?
In-Reply-To:
References: <6B92BA8B64E54DAD8CADC31585382718@MezhRoze>
	<6C9E35BA-B419-4879-B624-C05EBE3BE1AD@sysoev.ru>
Message-ID: <70FA65D1-38CC-49C8-9678-202DAACE43D5@sysoev.ru>

On Mar 31, 2013, at 14:11 , Reinis Rozitis wrote:

>> How much larger is the cache than the host's physical memory?
>
> 32Gb of RAM and a 240Gb mapped file (fits on an SSD), with no swapping
> involved.

Did you previously try the nginx cache on an SSD as well, or on a usual
hard disk?

--
Igor Sysoev

From igor at sysoev.ru Sun Mar 31 15:12:55 2013
From: igor at sysoev.ru (Igor Sysoev)
Date: Sun, 31 Mar 2013 19:12:55 +0400
Subject: "bug in glibc"
In-Reply-To:
References:
Message-ID: <7D8E8962-DB47-41D5-9235-DBAE4CD842EC@sysoev.ru>

On Mar 31, 2013, at 14:33 , Justin Cormack wrote:

> There is a note in src/os/unix/ngx_user.c about a bug in glibc for crypt_r:
>
> /* work around the glibc bug */
> cd.current_salt[0] = ~salt[0];
>
> value = crypt_r((char *) key, (char *) salt, &cd);
>
> I was wondering if anyone knew what the bug was, as I am running on a
> platform (Musl libc) that has got NGX_HAVE_GNU_CRYPT_R but has a different
> implementation, in particular has no current_salt field in struct
> crypt_data (and indeed the man page says you should treat it as opaque
> except for the initialized field).
>
> I was wondering exactly what the bug was as then I could write a test for
> it rather than always including this code; I have not been able to find it
> in the glibc bug tracker though.

2002-10-29  Daniel Jacobowitz

        * crypt/crypt_util.c (__init_des_r): Initialize current_salt
        and current_saltbits.
https://groups.google.com/forum/?fromgroups=#!topic/linux.debian.maint.glibc/Q88bwAp222w

--
Igor Sysoev
http://nginx.com/services.html

From justin at specialbusservice.com Sun Mar 31 21:43:28 2013
From: justin at specialbusservice.com (Justin Cormack)
Date: Sun, 31 Mar 2013 22:43:28 +0100
Subject: "bug in glibc"
In-Reply-To: <7D8E8962-DB47-41D5-9235-DBAE4CD842EC@sysoev.ru>
References: <7D8E8962-DB47-41D5-9235-DBAE4CD842EC@sysoev.ru>
Message-ID:

On Sun, Mar 31, 2013 at 4:12 PM, Igor Sysoev wrote:

> On Mar 31, 2013, at 14:33 , Justin Cormack wrote:
>
> > There is a note in src/os/unix/ngx_user.c about a bug in glibc for crypt_r:
> >
> > /* work around the glibc bug */
> > cd.current_salt[0] = ~salt[0];
> >
> > value = crypt_r((char *) key, (char *) salt, &cd);
> >
> > I was wondering if anyone knew what the bug was, as I am running on a
> > platform (Musl libc) that has got NGX_HAVE_GNU_CRYPT_R but has a different
> > implementation, in particular has no current_salt field in struct
> > crypt_data (and indeed the man page says you should treat it as opaque
> > except for the initialized field).
> >
> > I was wondering exactly what the bug was as then I could write a test
> > for it rather than always including this code; I have not been able to find
> > it in the glibc bug tracker though.
>
>
> 2002-10-29  Daniel Jacobowitz
>
>         * crypt/crypt_util.c (__init_des_r): Initialize current_salt
>         and current_saltbits.
>
>
> https://groups.google.com/forum/?fromgroups=#!topic/linux.debian.maint.glibc/Q88bwAp222w
>

Ok I just checked and this bug was fixed in glibc-2.3.2, which was released
in March 2003... Any chance of removing the workaround as it is relying on
fields that may not exist and are implementation internal?

Justin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: