max_fails for 404s?
mdounin at mdounin.ru
Thu May 19 00:00:46 MSD 2011
On Wed, May 18, 2011 at 02:35:49PM -0500, Steven Deobald wrote:
> Hey folks,
> We've got the following scenario occurring in production. The solution is
> non-obvious... to me, at least.
> [client] ==> [nginx] ==> [service S]
> * nginx fronting 2 application servers which provide a service S. (Primary
> and backup, both running Trinidad. Trinidad is a JRuby/Rails wrapper for
> Tomcat.)
Do you mean "backup" as in "server ... backup;" in an upstream block?
> * hot-deploying a rails (jruby) app into Trinidad causes Trinidad to return
> a 404 to nginx
> * nginx returns the 404 to the application. In this particular case, the
> client is another service which expects service S to remain live during
> deploys.
> So, nginx does provide an "http_404" case for the "proxy_next_upstream"
> directive. However, this would require the "max_fails" setting to pertain to
> 404s, which it doesn't... otherwise legitimate 404s produce an infinite
> loop.
> Is there something like "max_fails" for 404s?
> Is there another solution to this problem?
> Is it Trinidad's fault for returning 404s and not 503s? (I would say it is
> but I can't find a solution to that problem just yet.)
The max_fails parameter exists to mark backends down, and you probably
don't want to mark backends down just because of some 404s.
On the other hand, using "server ... backup" together with
"proxy_next_upstream http_404" will indeed cause an infinite loop. It's
a bug: nginx should only ask each backend once.
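
For reference, a setup like the following is the one that loops (names
and addresses are illustrative, not taken from your config):

```nginx
upstream app {
    server 10.0.0.1:3000;           # primary
    server 10.0.0.2:3000 backup;    # backup, only tried when primary fails
}

server {
    listen 80;

    location / {
        proxy_pass http://app;
        # http_404 here makes nginx retry the "next" server on a 404;
        # with a backup server this currently bounces back and forth.
        proxy_next_upstream error timeout http_404;
    }
}
```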
The workaround is to avoid "server ... backup" and instead use either
"weight=" (a big weight for the real server, a small one for the backup)
or an error_page-based fallback.
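
Both workarounds sketched, with illustrative names and addresses (adapt
to your setup):

```nginx
# Variant 1: weights instead of "backup".
upstream app {
    server 10.0.0.1:3000 weight=100;   # real server gets almost all traffic
    server 10.0.0.2:3000 weight=1;     # "backup" still receives the odd request
}

# Variant 2: error_page-based fallback to a named location.
server {
    listen 80;

    location / {
        proxy_pass http://10.0.0.1:3000;
        # needed so nginx handles the upstream's 404 itself
        # instead of passing it through to the client
        proxy_intercept_errors on;
        error_page 404 = @fallback;
    }

    location @fallback {
        proxy_pass http://10.0.0.2:3000;
    }
}
```

With variant 2 each request hits the backup at most once, so there is no
loop even when both servers return 404.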