NGINX Caching

Ryan Malayter malayter at gmail.com
Tue Apr 20 08:08:14 MSD 2010


On Mon, Apr 19, 2010 at 11:17 AM, Cliff Wells <cliff at develix.com> wrote:
> Can you name one of these Python web servers?   I've only used a few
> (CherryPy, Tornado, FAPSWS), but these easily handle thousands of
> requests per second.   Some frameworks, like Django and Pylons, come
> with a "development" server that's designed for testing, but that is
> special-purpose software, not some general statement about the
> capabilities of Python web servers.

The built-in web server in Python had terrible concurrency behavior in
our tests, especially when database calls were involved. Not sure why
waiting for a DB response would block or severely hamper other
requests, but it certainly seemed to be the case. The other Python
development servers were of course not considered. I didn't look much
for other pure-Python servers, as Apache with mod_wsgi was popular and
seemed to be the path of least resistance. I looked a bit at Twisted,
but it was clearly too much added complexity for too little gain.
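
To illustrate (a toy sketch, not our actual code): the stock wsgiref
server only handles one request at a time, so any handler that blocks
on I/O (say, a slow database query) stalls every other client until
it returns.

    import time
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        # Stand-in for a slow database query; while this sleeps, the
        # server cannot accept or answer any other request.
        time.sleep(2)
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'done\n']

    if __name__ == '__main__':
        # make_server() uses the single-threaded WSGIServer by
        # default, which is exactly the behavior described above.
        make_server('127.0.0.1', 8000, app).serve_forever()

Hit that with two clients at once and the second one waits the full
two seconds plus its own.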

> Python certainly has them.  I can't speak for Perl, PHP, or .NET, but I
> don't see any reason it wouldn't be reasonable to write one.   If they
> don't exist, I assume the reason would be their long-term integration
> with larger HTTP servers (Apache, IIS) which would make development of a
> standalone server appear moot.

Nanoweb exists for PHP, but it actually has to fork to service a PHP
request. Go figure. As for .NET, there is no real choice besides IIS,
other than mod_mono or FastCGI.

>
> The key thing to remember here is that a standalone "hello, world"
> program can easily achieve thousands of requests per second with any of
> these servers, but the minute you start querying databases, handling
> sessions, etc, your performance will fall through the floor regardless
> of whether you use FastCGI or HTTP.   If you are basing your
> scalability planning around FastCGI vs HTTP then you are worrying about
> the wrong part of the stack.

I already agreed with you about the *protocol* not being an issue.
It's the architecture of the back-end service that matters. In my
(admittedly limited) experience, all of the dynamic-language HTTP
servers I've tried have poor concurrency behavior, and/or require
running a bunch of instances to utilize multi-core hardware because of
underlying issues in the language interpreter. I used the imprecise
word "slow" to cover all of that for the sake of brevity.

Managing and monitoring, say, two dozen Mongrel instances on one box
seems, well, stupid. If you need lots of single-threaded back-end
processes, FastCGI is widely supported, has good existing
process-management tools, and is designed for the task. It's also
probably more manageable as you scale up, which is why I made the
suggestion. If scale and performance aren't a concern, why would the
OP ask about load balancing?
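
For what it's worth, here is the rough shape of what I'm suggesting
instead of N separate HTTP daemons. This is only a sketch, and it
assumes the flup package (the usual WSGI-to-FastCGI bridge for
Python); the host, port, and app are made up:

    from flup.server.fcgi_fork import WSGIServer  # preforking workers

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'hello from a FastCGI worker\n']

    if __name__ == '__main__':
        # One FastCGI socket for nginx to fastcgi_pass to; flup forks
        # and reaps its own worker processes, so there is one thing to
        # monitor per box instead of two dozen HTTP daemons.
        WSGIServer(app, bindAddress=('127.0.0.1', 9000)).run()

nginx then only needs a single fastcgi_pass to 127.0.0.1:9000 per
box, and the usual FastCGI process managers can supervise it.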

-- 
RPM


