Proxy pass, upstream and http/1.0
Cliff Wells
cliff at develix.com
Fri Jul 31 02:32:14 MSD 2009
On Thu, 2009-07-30 at 06:10 -0400, komone wrote:
> Hi Cliff,
>
> It seems I put you on the defensive a little, or sounded aggressive, neither of which was my intention.
Not at all. Neither did I intend to convey that sort of attitude in my
response. Sorry if it came across that way.
>
> Here's a little more detail (inline with your responses)...
>
> Cliff Wells Wrote:
> > Many, if not most, people using Nginx are proxying to application
> > servers (myself included).
>
> I am writing a web application server which is *intended* to run
> behind a proxy so that I can rely on delegating SSL/TLS, mail
> proxying, and (for the largest part, but not all), static file
> serving, rather than simply re-inventing the wheel.
Traditional (and good) thinking. I suspected this would be your
approach, but didn't want to assume *too* much (I was making more than
enough assumptions already).
Okay, so here's my take on it: the main benefits of HTTP/1.1 have to do
with reduced/optimized network traffic (ignoring chunked requests for
the moment). This mainly revolves around persistent connections
combined with pipelining (persistent connections without pipelining
actually perform worse than the HTTP/1.0 approach of concurrent
connections). However, in your scenario, pipelining won't help much
because not all requests are sent to the backend. Nginx (or whatever
proxy you choose) will be serving some (probably most) of the resources
itself. There's additional overhead and complexity in maintaining a
persistent connection. This is usually mitigated by requesting many
resources over the same connection. The more resources you get at
once, the better. But let's say that a page only has a single resource
that comes from the backend. Now you've forced Nginx (and your app) to
incur the overhead of the persistent connection but have only transferred
a single request. The performance will actually be significantly worse
than with an HTTP/1.0 request. While it's true you'll still generate a
bit less TCP traffic, the actual request time will be higher.
Even if most of the requests went to the backend, the gain over plain
HTTP/1.0 on a LAN (or the same machine) would likely be marginal.
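To make this concrete, here's a rough sketch of the kind of setup I'm
describing (the upstream name, port and paths below are just placeholders,
not anything from your environment):

    # Nginx answers static requests itself; only dynamic requests are
    # proxied to the application server, and that hop is HTTP/1.0.
    upstream app_backend {
        server 127.0.0.1:8080;
    }

    server {
        listen 80;
        server_name example.com;

        # Served directly by Nginx; these never reach the backend.
        location /static/ {
            root /var/www/example;
        }

        # Everything else goes to the application server.
        location / {
            proxy_pass http://app_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

In a layout like that, a kept-alive connection to the backend would rarely
carry more than one or two requests per page view, which is why I don't
think you'd see much of a win from it.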
> HTTP/1.0 is largely a dead protocol, and the only cases I can find
> that require 1.0 in this server are:
>
> Lynx, Wget, and... NginX (which was to be my favored proxy)
>
> ...I am currently returning 505 to 1.0 clients. The reasoning is that
> the applications this server is intended to host will be highly
> dynamic "web 2.0+" (tho I truly hate that pseudo-term), and I am
> struggling to find a business use case that really justifies legacy 1.0
> support. I'd like to move away from the constraints of the 20th
> century but NginX could be the showstopper.
I think Maxim addressed this adequately.
> > Without knowing a lot about your particular application, I'm still
> > inclined to venture that you are likely seriously over-concerned
> > about the direness of this situation.
>
> This is quite possible, but I cannot test fully as yet.
>
> >
> > With a notable exception, most of the additional benefits of HTTP/1.1
> > you list (copied/pasted directly from the W3C page, I see) have little
> > positive effect on performance between a proxy and a backend server
> > that exist on the same LAN or especially the same machine. They are
> > mostly of benefit between a server and a user-agent (browser) or over
> > slower-than-LAN links. Perhaps you should provide your own, actual
> > reasons (or better yet, test results) that demonstrate this is more
> > than an imagined fear. I'm fully prepared to believe that your app
> > might require HTTP/1.1, but copying reasons from the W3C site isn't
> > entirely convincing.
>
> true ..., however I'm concerned that more issues will raise their head
> further down the road and I don't want to paint myself into a
> corner... inability to do chunked transfers by itself seems to be good
> enough reason for concern to me.
And I'd concede that this may very well be the case. If you're
dynamically generating some type of content whose size can't be known
in advance, then that could indeed be a show-stopper. Other than that,
I'm hard-pressed to find limitations in 1.0 that would affect things
significantly between the application and the proxy (noting once again
that Nginx *does* do 1.1 to the user agent, which is where it seems to
provide the greatest benefit).
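For what it's worth, the usual HTTP/1.0 workaround for a response whose
length isn't known up front is simply to send no Content-Length and let
closing the connection mark the end of the body. Nginx reads the upstream
until EOF and, as far as I know, will re-frame that as a chunked response
for HTTP/1.1 clients. A rough sketch of the proxy side (the location name
is just a placeholder, and it assumes the app_backend upstream from the
earlier sketch):

    # Unbounded/streamed responses over the HTTP/1.0 backend hop: the app
    # omits Content-Length and closes the connection when it's done, and
    # Nginx reads until EOF. Turning buffering off lets data reach the
    # client as it's produced instead of being spooled first.
    location /stream/ {
        proxy_pass http://app_backend;
        proxy_buffering off;
        proxy_read_timeout 300;
    }

It's obviously not as tidy as real chunked encoding on the backend hop,
but it does mean that "unknown length" by itself isn't automatically fatal.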
> >
> > No doubt there are many applications that would benefit from things
> > like chunked responses (I believe range requests should pass through
> > successfully, but haven't tested),
>
> I have my hands 110% full, but will likely be able to do that kind of
> testing later. However, note that this means I'm being asked to test
> other technologies rather than just my own, so frankly it's not going
> to be on my no.1 priority list and I may just have to go with a
> different front end recommendation at release.
Understood. But remember that there are several very large installations
of Nginx (Hulu, Github, rambler.ru, Wordpress.org, etc.) that seem to
work fine within this constraint.
> > but unless you are saying that you have that sort of application, then
> > I'd suggest you simply test and see if it works. I've heard rumor that
> > premature optimization is the root of a significant amount of fail ;-)
>
> ..that is *without doubt* true. However, usually that is related to
> sequential code. There are design/structural limitations that should
> be addressed early, and support for HTTP/1.0 does have structural
> impact on the codebase.
Well, as Maxim pointed out, HTTP/1.0 is a subset of HTTP/1.1. So you
have to implement it all in any case. I'm not sure at what level you're
programming your connections (are you using an HTTP library? A web
framework? Dealing with raw sockets?), but that will undoubtedly
determine how difficult this is and whether the additional
protocol-selection checks in your code are worth it to you.
> > > The 1.0 constraint is pretty dire for me. Do I have alternatives to
> > > proxy-pass/upstream that would allow me to use Nginx or should I be
> > > looking at another HTTP server?
> >
> > If you really believe you need HTTP/1.1 between Nginx and the backend
> > (note that Nginx *does* support 1.1 to the browser), then you'll need
> > to look elsewhere. I believe that 1.1 support between Nginx and the
> > backend is in the works, but sadly it doesn't exist today.
> >
> > Sorry that I can't give you something more positive. But I do
> > encourage you to not give up unless actual testing (or some known
> > condition you haven't mentioned) demonstrates that HTTP/1.0 is a
> > demonstrable failure for your application.
>
> NP at all. A correct answer is worth 1000 answers that pander to some
> ideal or fantasy.
I (obviously) fully agree ;-)
As an aside, I think if I weren't using Nginx, I'd probably consider
Litespeed (although I'm not 100% fond of its licensing nor its
Apache-compatible configuration). Someone also mentioned Cherokee, and
it is a very fast server. However, IMO it's sort of the Marla Singer
of HTTP servers: fast and pretty, but also schizoid and opaque.
I've never been able to figure out how to get it to do anything very
interesting (but the GUI is fun to click on).
In any case, good luck with your selection.
Regards,
Cliff
--
http://www.google.com/search?q=vonage+sucks