Nginx + Memcached: Questions / Advice

Neil Sheth nsheth at gmail.com
Mon Mar 16 23:09:12 MSK 2009


Hello,

I'm looking to add more caching to our setup and am interested in
integrating memcached with nginx.  I just have a few questions and am
looking for a little direction!

First, our current setup has an nginx front-end serving static content
(images, js, css, etc.), with two backend servers running Apache / PHP.
We already use memcached on the backend, storing some snippets of HTML
and caching some of our more expensive db queries.
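
For reference, the relevant part of our nginx config looks roughly like
this (simplified; the addresses, names and paths below are placeholders):

    upstream apache_backend {
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
    }

    server {
        listen 80;
        server_name example.com;

        # static content served directly by nginx
        location ~* \.(gif|jpg|jpeg|png|css|js)$ {
            root /var/www/static;
            expires 30d;
        }

        # everything else goes to the Apache / PHP backends
        location / {
            proxy_pass http://apache_backend;
        }
    }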

First question - has anyone compared serving pages out of memcached
through nginx versus serving them out of memcached on the backend?
Either way, we already have to insert the whole page into memcached on
the backend.  So either nginx serves directly out of memcached (and
avoids the overhead of the Apache hit), or Apache / PHP queries
memcached and returns the page.
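
To make the comparison concrete, the nginx-side version I'm picturing is
roughly this, based on the memcached module docs (untested, and the key
scheme is just a guess on my part):

    location / {
        set $memcached_key "$uri";
        memcached_pass 127.0.0.1:11211;
        default_type text/html;

        # cache miss or memcached error: fall through to Apache / PHP
        error_page 404 502 = @backend;
    }

    location @backend {
        proxy_pass http://apache_backend;
    }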

The latter would be much easier to implement - I'm just not sure what
the performance difference would be.

One reason handling the caching through our backend would be easier:
we only need to cache traffic from users who aren't logged in.  We
"could" do this through nginx if we set a cookie for logged-in users,
have nginx read that cookie, and bypass memcached when the cookie is
found.
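
Something like the following is what I have in mind on the nginx side
(untested; the "logged_in" cookie name is made up, and I realize "if"
inside a location has its caveats):

    location / {
        set $memcached_key "$uri";

        # logged-in users get a cookie from the backend; send them straight to Apache
        if ($http_cookie ~* "logged_in=1") {
            proxy_pass http://apache_backend;
        }

        memcached_pass 127.0.0.1:11211;
        default_type text/html;
        error_page 404 502 = @backend;
    }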


If we have nginx serving content from memcached - how is gzipping
handled?  Do we store the content in the cache already gzipped?
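
If the answer is to store it gzipped, I assume the memcached location
would need something like this (just a guess on my part, and it assumes
every client accepts gzip, which probably isn't safe):

    location / {
        set $memcached_key "$uri";
        memcached_pass 127.0.0.1:11211;
        default_type text/html;

        # the body pulled from memcached is already compressed
        add_header Content-Encoding gzip;

        error_page 404 502 = @backend;
    }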

Another thing - we do a bit of A/B testing of our content.  To fully
track that, we'd need some percentage of sessions to bypass the cache.
From nginx, that's a bit trickier, as we don't have the session
information if things are served out of memcached.  So I was thinking
I could route a certain percentage of requests that have an external
referrer back to our backend, and set a cookie so those users bypass
the cache for the rest of their session.
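
Roughly, I'm imagining something like this (untested; the "abtest"
cookie name is an assumption, and the part I haven't figured out is how
to send only a percentage of external-referrer requests to the backend
from within nginx):

    location / {
        set $memcached_key "$uri";

        # sessions already picked for the A/B test carry a cookie and skip the cache
        if ($http_cookie ~* "abtest=") {
            proxy_pass http://apache_backend;
        }

        memcached_pass 127.0.0.1:11211;
        default_type text/html;

        # cache misses go to Apache / PHP, which could also decide to sample
        # the request and set the "abtest" cookie for the rest of the session
        error_page 404 502 = @backend;
    }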

Looking at the memcached module documentation - how do you specify
multiple memcached servers?  It looks like nginx would treat them as
mirrors rather than as a distributed cache - is that right?
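
For example, I assume I'd point memcached_pass at an upstream block like
the one below, but I'm not clear whether the stock round-robin balancing
makes any sense for cache keys, or whether a hash-based balancer (a
third-party upstream hash module, if I understand correctly) is needed
so a given key always maps to the same server:

    upstream memcached_cluster {
        server 10.0.0.1:11211;
        server 10.0.0.2:11211;
    }

    # then, inside the location:
    memcached_pass memcached_cluster;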

I think that's it.  The main question is whether the increased
performance would outweigh the additional complexity, if anyone has
examined that in more detail (serving the cached pages via Apache vs
nginx directly).  Anything else I should be aware of?

Thanks!




