How does Nginx look up a cached resource?

Gena Makhomed gmm at csdoc.com
Mon Sep 7 23:17:59 UTC 2015


On 08.09.2015 0:22, Sergey Brester wrote:

>> Using MurmurHash is not a good idea, because an attacker
>> can easily create collisions and invalidate popular entries
>> in the cache, and this technique can be used for DDoS attacks
>> (even if only one site exists on the server with the nginx cache).
>>
>> Using a secure hash function for the nginx cache is a strong
>> requirement, even if a full proxy_cache_key value check is added.
>
> It's not correct, because something like that
> would be called "security through obscurity"!

There is no obscurity here. The value of proxy_cache_key is known,
the hash function is known, and the nginx sources are open and available.

> The hash value should be used only for fast lookup of the key,
> not to identify the cached resources!

Do you remember the solution proposed in your message?
http://mailman.nginx.org/pipermail/nginx-devel/2015-September/007286.html

 > - In the very unlikely case of a collision we will just forget
 > the cached entry with the previous key, or save it as an array for
 > serial access (not really expected with caching and a large hash
 > value, because collisions are rare, and this is a cache - not a
 > database that must always hold an entry).

In this case an attacker can easily mount a DDoS attack against nginx:
http://www.securityweek.com/hash-table-collision-attacks-could-trigger-ddos-massive-scale
Hash Table Vulnerability Enables Wide-Scale DDoS Attacks

> If your entry should be secure, the key (not its hash) should contain
> part of a security token, authentication, salt, etc.

This is "security through obscurity",
and you yourself say that is a bad thing.

> So again: the hash bears no security function, and if the whole key is
> always compared, it does not matter at all which hash function
> is used, or how secure it is. And to "crack" resp. return a
> cache entry you would always have to "crack" the complete key, not its hash.

If a site is under high load and, for example, contains many pages
which are very popular (say, 100 req/sec each), and the backend
needs a lot of time to generate such a page, for example 2-3 seconds -
an attacker can use MurmurHash to create requests to non-existent
pages with the same MurmurHash value. The 404 responses
from the backend will then replace very popular pages with different
URIs but the same MurmurHash value. And the backend will be DDoSed
by many requests for those different popular pages. So, an attacker
can easily disable the nginx cache for any known URI. And if the
backend can't process all client requests without the cache - it will
be overloaded and access to the site will be denied for all new users.
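
The eviction scenario above can be sketched as a toy model in Python
(illustrative only, not nginx code; a deliberately weak hash stands in
for MurmurHash here, so that a collision is easy to show on one line):

```python
# Toy cache that identifies entries by the hash of the key alone,
# using a weak hash in the spirit of the "Ci << 3 + Ci+1" scheme
# mentioned later in this thread.

def weak_hash(key: str) -> int:
    h = 0
    for ch in key:
        h = ((h << 3) + ord(ch)) & 0xFFFFFFFF  # h = h*8 + next byte
    return h

# Two different keys with the same hash: 8*97+98 == 8*96+106 == 874.
victim, attacker = "ab", "`j"
assert victim != attacker and weak_hash(victim) == weak_hash(attacker)

cache = {}                                    # hash value -> cached response
cache[weak_hash(victim)] = "200 OK: popular page"
cache[weak_hash(attacker)] = "404 Not Found"  # overwrites the popular entry
print(cache[weak_hash(victim)])               # -> 404 Not Found
```

Because the cache cannot tell the two keys apart by hash alone, the
attacker's 404 is now served for the popular URI.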

You can work around this MurmurHash problem
by prepending and appending secret tokens to the
proxy_cache_key value, but management of these tokens
will be a headache for every nginx user, and many of these
users will not do such "security through obscurity" things,
and will leave proxy_cache_key in the config as is, in the form
$scheme$host$request_uri or even at its default value.
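
For illustration, such a token workaround in nginx.conf could look like
this (the token values here are hypothetical placeholders, not a
recommendation):

```nginx
# secret tokens wrapped around the usual key (illustrative only)
proxy_cache_key "0f9a$scheme$host${request_uri}7c3b";
```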

So, many thousands of nginx configurations will be left in a vulnerable state.

Also, such a workaround for problems caused by using MurmurHash
is very bad from a usability point of view: we would force
users to do busywork and add the workaround manually.

Automatically adding the workaround to the config is also not a good
option, because nginx only reads its config and never writes it.
Moreover, the config may be available to nginx in read-only mode.

Storing such tokens in a file of their own is also a bad idea:
the file with these secret tokens becomes a single point of failure.
Delete or modify that file, and the entire nginx cache
becomes useless and invalid after an nginx restart.

Also, storing the value of these secret tokens inside each cache file
is just a waste of space; and if they are not stored, the cache can
easily be corrupted and return a mess of mismatched proxy_cache_key
values and cache content.

A better approach, from my point of view, is just to use a secure hash
function; in that case an attacker cannot mount collision-based DDoS
attacks against the nginx cache.

I agree with you that comparing the entire keys
when hash values are equal is a safer solution:

: A more secure and robust way is to store the proxy_cache_key
: value in the cache file on disk and check this value
: before sending the cached response to a client. In this way
: we can be sure that cache misuse is not possible.
> I know systems where the hash values are 32 bits and use the simplest
> algorithm, like Ci << 3 + Ci+1. But as already said, afterwards the
> whole keys are compared, and that's very safe.

https://www.kb.cert.org/vuls/id/903934
Vulnerability Note VU#903934
Hash table implementations vulnerable to algorithmic complexity attacks
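
For a simple shift hash like the one quoted above, mounting the
complexity attack from VU#903934 is trivial: equal-length colliding
blocks can be concatenated freely, yielding exponentially many distinct
keys that all share one hash value and land in one bucket. A minimal
Python sketch:

```python
from itertools import product

def weak_hash(key: str) -> int:
    # 32-bit variant of the "Ci << 3 + Ci+1" style hash from the quote
    h = 0
    for ch in key:
        h = ((h << 3) + ord(ch)) & 0xFFFFFFFF
    return h

# Two equal-length blocks with equal hashes: 8*97+98 == 8*96+106 == 874.
a, b = "ab", "`j"
assert weak_hash(a) == weak_hash(b)

# The hash is consumed left to right, so appending either block maps the
# state h to 64*h + 874; any concatenation of the blocks also collides.
keys = ["".join(p) for p in product([a, b], repeat=8)]
print(len(set(keys)), len({weak_hash(k) for k in keys}))  # -> 256 1
```

All lookups for these keys then degrade to a linear scan of one bucket.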

-- 
Best regards,
  Gena


