Thanks for the help Maxim, I'll submit this code if I get around to implementing it.<div><div><br></div><div>Also, I think I used the wrong string comparison function in the patch I sent earlier. </div><div>This one should work as described.</div>
<div><br></div><div>Matthieu.<br><br><div class="gmail_quote">On Fri, Aug 12, 2011 at 3:27 PM, Maxim Dounin <span dir="ltr"><<a href="mailto:mdounin@mdounin.ru">mdounin@mdounin.ru</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
Hello!<br>
<div class="im"><br>
On Fri, Aug 12, 2011 at 02:11:51PM -0700, Matthieu Tourne wrote:<br>
<br>
> Also, if I were planning on having a lot of different connections<br>
> using the upstream keepalive module, would it make sense to convert<br>
> the queues into rbtrees for faster lookup?<br>
<br>
</div>Yes, it may make sense if you are planning to keep lots of<br>
connections to lots of different backends (you'll still need<br>
queues though, but those are details).<br>
<div><div></div><div class="h5"><br>
Maxim Dounin<br>
<br>
><br>
> Thank you!<br>
><br>
> Matthieu.<br>
><br>
> On Fri, Aug 12, 2011 at 12:59 PM, Maxim Dounin <<a href="mailto:mdounin@mdounin.ru">mdounin@mdounin.ru</a>> wrote:<br>
><br>
> > Hello!<br>
> ><br>
> > On Fri, Aug 12, 2011 at 12:32:26PM -0700, Matthieu Tourne wrote:<br>
> ><br>
> > > Hi all,<br>
> > ><br>
> > > I think I have found a small issue when we're using proxy_pass to<br>
> > > talk to an origin that doesn't support keep-alives.<br>
> > > The origin will return an HTTP header "Connection: close", and<br>
> > > terminate the connection (TCP FIN).<br>
> > > We don't take this into account, and assume there is a keep-alive<br>
> > > connection available.<br>
> > > The next time the connection is used, it won't be part of a valid<br>
> > > TCP stream, and the origin server will send a TCP RST.<br>
> ><br>
> > Yes, I'm aware of this, thank you. Actually, this is harmless:<br>
> > the upstream keepalive module should detect that the connection was<br>
> > closed while keeping it, and even if it isn't able to do so, nginx<br>
> > will retry sending the request if sending to a cached connection fails.<br>
> ><br>
> > > This can be simulated with two nginx instances: one acting as a<br>
> > > reverse proxy with keep-alive connections, and the other using the<br>
> > > directive "keepalive_timeout 0;" (which will always terminate<br>
> > > connections right away).<br>
> > ><br>
> > > The patches attached take the origin's response into account, and<br>
> > > should fix this issue.<br>
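A minimal sketch of that two-instance setup (ports, addresses, and the upstream name are made up; each snippet would go in its own nginx.conf):

```nginx
# Instance 1 (reverse proxy): keeps connections to the origin alive.
http {
    upstream origin {
        server 127.0.0.1:8081;
        keepalive 16;              # cache up to 16 idle connections
    }
    server {
        listen 8080;
        location / {
            proxy_pass http://origin;
        }
    }
}

# Instance 2 (origin): closes every client connection immediately,
# answering with "Connection: close" followed by a TCP FIN.
http {
    server {
        listen 8081;
        keepalive_timeout 0;
    }
}
```

Requests through port 8080 then exercise exactly the cached-connection-after-FIN case described above.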
> ><br>
> > I'm planning to add a similar patch, thanks.<br>
> ><br>
> > Maxim Dounin<br>
> ><br>
> > ><br>
> > > Matthieu.<br>
> > ><br>
> > > On Mon, Aug 8, 2011 at 2:36 AM, SplitIce <<a href="mailto:mat999@gmail.com">mat999@gmail.com</a>> wrote:<br>
> > ><br>
> > > > Oh, and I haven't been able to reproduce the crash; I tried for a<br>
> > > > while but gave up. If it happens again I'll build with debugging<br>
> > > > and restart. However, so far it's been 36 hours without issues<br>
> > > > (under a significant amount of traffic).<br>
> > > ><br>
> > > ><br>
> > > > On Mon, Aug 8, 2011 at 7:35 PM, SplitIce <<a href="mailto:mat999@gmail.com">mat999@gmail.com</a>> wrote:<br>
> > > ><br>
> > > >> 50ms per HTTP request (taken from the Firebug and Chrome resource<br>
> > > >> panels) is the reduction in the time it takes the HTML to load,<br>
> > > >> from request to arrival.<br>
> > > >> 200ms is the time saved before the HTML starts transferring to me<br>
> > > >> (allowing other resources to begin downloading before the HTML<br>
> > > >> completes); previously the HTML only started transferring after<br>
> > > >> the full response was downloaded to the proxy server (due to<br>
> > > >> buffering).<br>
> > > >><br>
> > > >> HTTP is used to talk to the backends (between countries).<br>
> > > >><br>
> > > >> Each node has a 30-80ms ping time between the backend and frontend<br>
> > > >> (Russia->Germany, Sweden->NL, Ukraine->Germany/NL, etc.)<br>
> > > >><br>
> > > >> On Mon, Aug 8, 2011 at 7:22 PM, Maxim Dounin <<a href="mailto:mdounin@mdounin.ru">mdounin@mdounin.ru</a>><br>
> > wrote:<br>
> > > >><br>
> > > >>> Hello!<br>
> > > >>><br>
> > > >>> On Mon, Aug 08, 2011 at 02:44:12PM +1000, SplitIce wrote:<br>
> > > >>><br>
> > > >>> > I've been testing this on my servers for 2 days now, handling<br>
> > > >>> > approximately 100mbit of constant traffic (3x20mbit, 1x40mbit).<br>
> > > >>> ><br>
> > > >>> > I haven't noticed any large bugs; there was an initial crash on<br>
> > > >>> > one of the servers, however I haven't been able to replicate it.<br>
> > > >>> > The servers are a mixture of OpenVZ, Xen, and one VMware<br>
> > > >>> > virtualised container, running Debian lenny or squeeze.<br>
> > > >>><br>
> > > >>> By "crash" you mean nginx segfault? If yes, it would be great to<br>
> > > >>> track it down (either to fix problem in keepalive patch or to<br>
> > > >>> prove it's unrelated problem).<br>
> > > >>><br>
> > > >>> > Speed increases from this module are decent: approximately 50ms<br>
> > > >>> > off the request time, and the HTTP download starts 200ms<br>
> > > >>> > earlier, resulting in a 150ms quicker load time on average.<br>
> > > >>><br>
> > > >>> Sounds cool, but I don't really understand what "50ms from the<br>
> > > >>> request time" and "download starts 200ms earlier" actually mean.<br>
> > > >>> Could you please elaborate?<br>
> > > >>><br>
> > > >>> And, BTW, do you use proxy or fastcgi to talk to backends?<br>
> > > >>><br>
> > > >>> Maxim Dounin<br>
> > > >>><br>
> > > >>> ><br>
> > > >>> > all in all, seems good.<br>
> > > >>> ><br>
> > > >>> > Thanks for all your hard work Maxim.<br>
> > > >>> ><br>
> > > >>> > On Thu, Aug 4, 2011 at 4:51 PM, Maxim Dounin <<a href="mailto:mdounin@mdounin.ru">mdounin@mdounin.ru</a>><br>
> > > >>> wrote:<br>
> > > >>> ><br>
> > > >>> > > Hello!<br>
> > > >>> > ><br>
> > > >>> > > On Wed, Aug 03, 2011 at 05:06:56PM -0700, Matthieu Tourne wrote:<br>
> > > >>> > ><br>
> > > >>> > > > Hi,<br>
> > > >>> > > ><br>
> > > >>> > > > I'm trying to use keepalive HTTP connections for proxy_pass<br>
> > > >>> > > > directives containing variables.<br>
> > > >>> > > > Currently it only works for named upstream blocks.<br>
> > > >>> > > ><br>
> > > >>> > > > I'm wondering what the easiest way would be: maybe setting<br>
> > > >>> > > > peer->get to ngx_http_upstream_get_keepalive_peer and<br>
> > > >>> > > > kp->original_get_peer to ngx_http_upstream_get_round_robin_peer()<br>
> > > >>> > > > towards the end of ngx_http_create_round_robin_peer().<br>
> > > >>> > > > If I can figure out how to set kp->conf to something sane,<br>
> > > >>> > > > this might work :)<br>
> > > >>> > > ><br>
> > > >>> > > > Thoughts?<br>
> > > >>> > ><br>
> > > >>> > > You may try to pick one from the upstream main conf's<br>
> > > >>> > > upstreams array (e.g. one from the first upstream found with<br>
> > > >>> > > init set to ngx_http_upstream_init_keepalive). Dirty, but it<br>
> > > >>> > > should work.<br>
> > > >>> > ><br>
> > > >>> > > Maxim Dounin<br>
> > > >>> > ><br>
> > > >>> > > ><br>
> > > >>> > > > Thank you,<br>
> > > >>> > > > Matthieu.<br>
> > > >>> > > ><br>
> > > >>> > > > On Tue, Aug 2, 2011 at 10:21 PM, SplitIce <<a href="mailto:mat999@gmail.com">mat999@gmail.com</a>><br>
> > > >>> wrote:<br>
> > > >>> > > ><br>
> > > >>> > > > > I've been testing this on my localhost and on one of my<br>
> > > >>> > > > > live servers (HTTP backend) for a good week now; I haven't<br>
> > > >>> > > > > had any issues that I have noticed as of yet.<br>
> > > >>> > > > ><br>
> > > >>> > > > > Servers are Debian Lenny and Debian Squeeze (oldstable,<br>
> > > >>> > > > > stable).<br>
> > > >>> > > > ><br>
> > > >>> > > > > Hoping it will make it into the development (1.1.x) branch<br>
> > > >>> > > > > soon :)<br>
> > > >>> > > > ><br>
> > > >>> > > > ><br>
> > > >>> > > > > On Wed, Aug 3, 2011 at 1:57 PM, liseen <<a href="mailto:liseen.wan@gmail.com">liseen.wan@gmail.com</a><br>
> > ><br>
> > > >>> wrote:<br>
> > > >>> > > > ><br>
> > > >>> > > > >> Hi<br>
> > > >>> > > > >><br>
> > > >>> > > > >> Could nginx keepalive work with health checks? Does Maxim<br>
> > > >>> > > > >> Dounin have a plan to support this?<br>
> > > >>> > > > >><br>
> > > >>> > > > >><br>
> > > >>> > > > >><br>
> > > >>> > > > >> On Wed, Aug 3, 2011 at 3:09 AM, David Yu <<br>
> > > >>> <a href="mailto:david.yu.ftw@gmail.com">david.yu.ftw@gmail.com</a>><br>
> > > >>> > > wrote:<br>
> > > >>> > > > >><br>
> > > >>> > > > >>><br>
> > > >>> > > > >>><br>
> > > >>> > > > >>> On Wed, Aug 3, 2011 at 2:47 AM, Maxim Dounin <<br>
> > > >>> <a href="mailto:mdounin@mdounin.ru">mdounin@mdounin.ru</a>><br>
> > > >>> > > wrote:<br>
> > > >>> > > > >>><br>
> > > >>> > > > >>>> Hello!<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> On Wed, Aug 03, 2011 at 01:53:30AM +0800, David Yu wrote:<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> > On Wed, Aug 3, 2011 at 1:50 AM, Maxim Dounin <<br>
> > > >>> <a href="mailto:mdounin@mdounin.ru">mdounin@mdounin.ru</a>><br>
> > > >>> > > > >>>> wrote:<br>
> > > >>> > > > >>>> ><br>
> > > >>> > > > >>>> > > Hello!<br>
> > > >>> > > > >>>> > ><br>
> > > >>> > > > >>>> > > On Wed, Aug 03, 2011 at 01:42:13AM +0800, David Yu<br>
> > wrote:<br>
> > > >>> > > > >>>> > ><br>
> > > >>> > > > >>>> > > > On Wed, Aug 3, 2011 at 1:36 AM, Maxim Dounin <<br>
> > > >>> > > <a href="mailto:mdounin@mdounin.ru">mdounin@mdounin.ru</a>><br>
> > > >>> > > > >>>> wrote:<br>
> > > >>> > > > >>>> > > ><br>
> > > >>> > > > >>>> > > > > Hello!<br>
> > > >>> > > > >>>> > > > ><br>
> > > >>> > > > > On Tue, Aug 02, 2011 at 04:24:45PM +0100, António P. P. Almeida wrote:<br>
> > > >>> > > > ><br>
> > > >>> > > > > > On 1 Aug 2011 17h07 WEST, mdounin@mdounin.ru wrote:<br>
> > > >>> > > > >>>> > > > > ><br>
> > > >>> > > > >>>> > > > > > > Hello!<br>
> > > >>> > > > >>>> > > > > > ><br>
> > > >>> > > > >>>> > > > > > > JFYI:<br>
> > > >>> > > > >>>> > > > > > ><br>
> > > >>> > > > > > > Last week I posted a patch to nginx-devel@ which adds<br>
> > > >>> > > > > > > keepalive support to various backends (as with the<br>
> > > >>> > > > > > > upstream keepalive module), including fastcgi and http<br>
> > > >>> > > > > > > backends (this in turn means nginx is now able to talk<br>
> > > >>> > > > > > > HTTP/1.1 to backends; in particular, it now understands<br>
> > > >>> > > > > > > chunked responses). The patch applies to 1.0.5 and 1.1.0.<br>
> > > >>> > > > >>>> > > > > > ><br>
> > > >>> > > > >>>> > > > > > > Testing is appreciated.<br>
> > > >>> > > > >>>> > > > > > ><br>
> > > >>> > > > >>>> > > > > > > You may find patch and description here:<br>
> > > >>> > > > >>>> > > > > > ><br>
> > > >>> > > > >>>> > > > > > ><br>
> > > >>> > > > >>>> > ><br>
> > > >>> > > > >>>><br>
> > > >>> > ><br>
> > <a href="http://mailman.nginx.org/pipermail/nginx-devel/2011-July/001057.html" target="_blank">http://mailman.nginx.org/pipermail/nginx-devel/2011-July/001057.html</a><br>
> > > >>> > > > >>>> > > > > > ><br>
> > > >>> > > > >>>> > > > > > > Patch itself may be downloaded here:<br>
> > > >>> > > > >>>> > > > > > ><br>
> > > >>> > > > >>>> > > > > > ><br>
> > > >>> <a href="http://nginx.org/patches/patch-nginx-keepalive-full.txt" target="_blank">http://nginx.org/patches/patch-nginx-keepalive-full.txt</a><br>
> > > >>> > > > >>>> > > > > > ><br>
> > > >>> > > > >>>> > > > > > > Upstream keepalive module may be downloaded<br>
> > here:<br>
> > > >>> > > > >>>> > > > > > ><br>
> > > >>> > > > >>>> > > > > > ><br>
> > <a href="http://mdounin.ru/hg/ngx_http_upstream_keepalive/" target="_blank">http://mdounin.ru/hg/ngx_http_upstream_keepalive/</a><br>
> > > >>> > > > >>>> > > > > > ><br>
> > > >>> > > > >>>><br>
> > > >>> <a href="http://mdounin.ru/files/ngx_http_upstream_keepalive-0.4.tar.gz" target="_blank">http://mdounin.ru/files/ngx_http_upstream_keepalive-0.4.tar.gz</a><br>
> > > >>> > > > >>>> > > > > > ><br>
> > > >>> > > > >>>> > > > > ><br>
> > > >>> > > > > > So *either* we use the patch or the module. Correct?<br>
> > > >>> > > > ><br>
> > > >>> > > > > No, to keep backend connections alive you need the module<br>
> > > >>> > > > > *and* the patch. The patch provides the foundation in the<br>
> > > >>> > > > > nginx core for the module to work with fastcgi and http.<br>
> > > >>> > > > >>>> > > > ><br>
> > > >>> > > > >>>> > > > With a custom nginx upstream binary protocol, I<br>
> > believe<br>
> > > >>> > > > >>>> multiplexing will<br>
> > > >>> > > > >>>> > > > now be possible?<br>
> > > >>> > > > >>>> > ><br>
> > > >>> > > > >>>> > > ENOPARSE, sorry.<br>
> > > >>> > > > >>>> > ><br>
> > > >>> > > > >>>> > After some googling ...<br>
> > > >>> > > > >>>> > ENOPARSE is a nerdy term. It is one of the standard C<br>
> > > >>> library<br>
> > > >>> > > error<br>
> > > >>> > > > >>>> codes<br>
> > > >>> > > > >>>> > that can be set in the global variable "errno" and<br>
> > stands<br>
> > > >>> for<br>
> > > >>> > > Error No<br>
> > > >>> > > > >>>> > Parse. Since you didn't get it, I can thus conclude that<br>
> > > >>> unlike me<br>
> > > >>> > > you<br>
> > > >>> > > > >>>> are probably<br>
> > > >>> > > > >>>> > a normal, well adjusted human being ;-)<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> Actually, this definition isn't true: there is no such<br>
> > > >>> > > > >>>> error code; it's rather an imitation. The fact that the<br>
> > > >>> > > > >>>> author of the definition claims it's a real error<br>
> > > >>> > > > >>>> indicates that, unlike me, he is a normal, well adjusted<br>
> > > >>> > > > >>>> human being. ;)<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> > Now I get it. Well adjusted I am.<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> Now you may try to finally explain what you meant to ask<br>
> > > >>> > > > >>>> in your original message. Please keep in mind that you<br>
> > > >>> > > > >>>> are talking to somebody far from being normal and well<br>
> > > >>> > > > >>>> adjusted. ;)<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> Maxim Dounin<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> p.s. Actually, I assume you are talking about fastcgi<br>
> > > >>> > > > >>>> multiplexing.<br>
> > > >>> > > > >>><br>
> > > >>> > > > >>> Nope, not fastcgi multiplexing. Multiplexing over a<br>
> > > >>> > > > >>> custom/efficient nginx binary protocol, where requests<br>
> > > >>> > > > >>> sent to the upstream include a unique id which the<br>
> > > >>> > > > >>> upstream will also send in the response.<br>
> > > >>> > > > >>> This allows for asynchronous, out-of-band messaging.<br>
> > > >>> > > > >>> I believe this is what mongrel2 is trying to do now ...<br>
> > > >>> > > > >>> though as an HTTP server, it is nowhere near as<br>
> > > >>> > > > >>> robust/stable as nginx.<br>
> > > >>> > > > >>> If nginx implements this (considering nginx already has a<br>
> > > >>> > > > >>> lot of market share), it certainly would bring in more<br>
> > > >>> > > > >>> developers/users (especially the ones needing async,<br>
> > > >>> > > > >>> out-of-band request handling).<br>
> > > >>> > > > >>><br>
> > > >>> > > > >>><br>
> > > >>> > > > >>> The short answer is: no, it's still not possible.<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> _______________________________________________<br>
> > > >>> > > > >>>> nginx mailing list<br>
> > > >>> > > > >>>> <a href="mailto:nginx@nginx.org">nginx@nginx.org</a><br>
> > > >>> > > > >>>> <a href="http://mailman.nginx.org/mailman/listinfo/nginx" target="_blank">http://mailman.nginx.org/mailman/listinfo/nginx</a><br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>><br>
> > > >>> > > > >>><br>
> > > >>> > > > >>><br>
> > > >>> > > > >>> --<br>
> > > >>> > > > >>> When the cat is away, the mouse is alone.<br>
> > > >>> > > > >>> - David Yu<br>
> > > >>> > > > >>><br>
> > > >>> > > > >>><br>
> > > >>> > > > >>><br>
> > > >>> > > > >><br>
> > > >>> > > > >><br>
> > > >>> > > > >><br>
> > > >>> > > > ><br>
> > > >>> > > > ><br>
> > > >>> > > > > --<br>
> > > >>> > > > > Warez Scene <<a href="http://thewarezscene.org" target="_blank">http://thewarezscene.org</a>> Free Rapidshare<br>
> > > >>> Downloads<<br>
> > > >>> > > <a href="http://www.nexusddl.com" target="_blank">http://www.nexusddl.com</a>><br>
> > > >>> > > > ><br>
> > > >>> > > > ><br>
> > > >>> > > > ><br>
> > > >>> > > > ><br>
> > > >>> > ><br>
> > > >>> > ><br>
> > > >>> > ><br>
> > > >>> ><br>
> > > >>> ><br>
> > > >>> ><br>
> > > >>><br>
> > > >>><br>
> > > >>><br>
> > > >><br>
> > > >><br>
> > > >><br>
> > > >><br>
> > > >><br>
> > > ><br>
> > > ><br>
> > > ><br>
> > > ><br>
> > > ><br>
> > > ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
<br>
<br>
</div></div></blockquote></div><br></div></div>