Proposal: new caching backend for nginx

Maxim Dounin mdounin at
Wed Jan 23 18:19:49 UTC 2013


On Wed, Jan 23, 2013 at 07:27:42PM +0200, Aliaksandr Valialkin wrote:

> On Wed, Jan 23, 2013 at 5:47 AM, Maxim Dounin <mdounin at> wrote:
> > I don't think that it will fit as a cache store for nginx.  In
> > particular, with quick look through sources I don't see any
> > interface to store data with size not known in advance, which
> > happens often in HTTP world.
> Yes, ybc doesn't allow storing data with size not known in advance due to
> performance and architectural reasons. There are several workarounds for
> this problem:
> - objects with unknown sizes may be streamed into a temporary location
> before storing them into ybc.

This will effectively double the work needed to store a response, 
and writing to cache is already one of the major performance 
bottlenecks on many setups.

> - objects with unknown sizes may be cached in ybc using fixed-sized chunks,
> except for the last chunk, which may have smaller size. Here is a

Thus effectively reinventing what is known as blocks in the 
filesystem world, doing all the maintenance work by hand.

Actually, this is the basic problem with phk@'s approach (with all 
respect to phk@): reinventing the filesystem.


> > Additionally, it looks like it
> > doesn't provide async disk IO support.
> >
> Ybc works with memory mapped files. It doesn't use disk I/O directly. Disk
> I/O may be triggered if the given memory page is missing in RAM. It's
> possible to determine whether the given virtual memory location is cached
> in RAM or not - OSes provide special syscalls designed for this case - for
> example, mincore(2) in linux. But I think it's better relying on caching
> mechanisms provided by OS for memory mapped files than using such syscalls
> directly. Ybc may block nginx worker when reading swapped out memory pages,
> but this should be rare event if frequently accessed cached objects fit RAM.

The key words are "if ... cached objects fit RAM".

But bad things happen when they don't, and that's why support for 
AIO was introduced in nginx 0.8.11 several years ago.  
Surprisingly enough, it just works for the cache without any 
special code - because cache entries are just files.

(Well, not exactly: there is special code to handle async reading 
of a response header.  But it's rather an addition to the normal 
AIO code path than a separate mechanism.)

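For reference, enabling it is a small config change; the snippet 
below is illustrative only, and "my_cache" is a made-up cache zone 
name:

```nginx
location / {
    proxy_pass   http://backend;
    proxy_cache  my_cache;

    aio       on;    # async file IO for reading cached files
    directio  512;   # on Linux, file AIO additionally requires
                     # direct IO (bypassing the VM cache)
}
```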
And another major problem with mmap is that it doesn't tolerate IO 
errors: if, e.g., the disk is unable to read a particular block, 
you'll end up with SIGBUS delivered to the whole process, leaving 
it no possibility to recover, instead of just an error returned 
from a single read() operation.  While this may be acceptable in 
many use cases, it is really bad for an event-based server with 
thousands of clients served within a single process.

> Also as I understood from the , nginx
> currently may block on disk I/O too:
> > One major problem that the developers of nginx will be solving in
> upcoming versions is
> > how to avoid most of the blocking on disk I/O. At the moment, if there's
> not enough
> > storage performance to serve disk operations generated by a particular
> worker, that
> > worker may still block on reading/writing from disk.

Yep, not all OSes have AIO support (and some, like Linux, 
require you to choose just one option: VM cache or AIO), and even 
on systems with good support (FreeBSD, notably) there are still 
operations like open() and stat(), which don't have async 
counterparts at all but still may block.  That's why we are 
experimenting with various ways to make things better.  But it's 
all about improving async IO support, not about dropping it.

Maxim Dounin
