Nginx crash - possibly keepalives

Maxim Dounin mdounin at mdounin.ru
Wed Oct 19 09:24:01 UTC 2011


Hello!

On Tue, Oct 18, 2011 at 06:32:53PM -0700, Matthieu Tourne wrote:

> So in order to try to reproduce the problem locally, I made this debug
> module:
> https://github.com/mtourne/ngx_massive_chunker
> 
> This is meant to have two nginx instances chained one to the other:
> nginx#1 has a proxy_pass to nginx#2 while using keepalives, and
> nginx#2 has mass_chunk on, which for this extreme test will chunk a 15M
> file into 10-byte chunks (in different TCP packets or not).
> 
> I wasn't able to reproduce the segfault; when I'm using
> proxy_buffering off it works fine.
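
For reference, the chained setup described above boils down to something 
like this (ports, paths, and the exact values are taken from your 
description and are purely illustrative):

    # nginx#1 - front instance, keepalive connections to the backend
    # (inside the http { } block)
    upstream chunker_backend {
        server 127.0.0.1:8081;
        keepalive 16;
    }

    server {
        listen 8080;

        location / {
            proxy_pass http://chunker_backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            # proxy_buffering is on by default
        }
    }

    # nginx#2 - backend instance built with ngx_massive_chunker
    server {
        listen 8081;

        location / {
            root /var/www;      # the 15M test file lives here
            mass_chunk on;      # splits the response into tiny chunks
        }
    }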

To reproduce the segfault you have to:

1. Fill the client socket buffers to make sure no buffers will be 
freed at (2), i.e. all buffers will become busy.  This has to be 
done carefully to avoid making some proxy buffers busy already at 
this step, as that is likely to make (2) impossible.

2. Trigger "all buffers in the out chain" problem by sending multiple 
small chunks at once.  Amount of data must be less than 
proxy_busy_buffers, but total input size (including chunked encoding 
overhead) must be big enough to consume all proxy_buffers 
(including first one, proxy_buffer_size). 

3. Send some more data to actually trigger the segfault due to all 
buffers being busy.
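
For illustration only, with buffer sizes like these (example values, not 
a recommendation) the window in (2) is fairly easy to hit:

    proxy_buffer_size        4k;
    proxy_buffers            4 4k;
    proxy_busy_buffers_size  8k;

With chunks of only a few bytes the chunked framing makes the wire size 
several times the payload size, so the upstream can consume all of the 
proxy buffers while the amount of payload queued for the client stays 
under proxy_busy_buffers_size.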

I'm able to reproduce it here (using a specially crafted backend 
server and a specially crafted client), and I'm not really interested 
in another artificial reproduction.

> But when I'm using proxy_buffering and proxy_caching (with sendfile on, and
> directio 2M), it seems like readv() returns 0, but finalize_request returns
> NGX_AGAIN.

ENOPARSE

ngx_http_finalize_request() is void; it doesn't return anything.

You may want to provide a debug log to make it clear what you are 
talking about.
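
That is, something along these lines in the config of nginx#1, assuming 
it was built with --with-debug (the address is just an example):

    error_log  logs/debug.log  debug;

    events {
        # optional: enable debug logging for the test client only
        debug_connection 192.0.2.1;
    }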

Maxim Dounin

> I'm not sure that test is relevant, or what I'm really benchmarking here,
> or whether it might not be the TCP layer directly.
> But maybe the failure should be explicit.
> 
> The behavior was the same with or without the patch. I've also tried various
> combinations of proxy buffers, sendfile, and directio.
> 
> Any thoughts?
> 
> Thank you!
> Matthieu.
> 
> On Fri, Oct 14, 2011 at 12:43 PM, Maxim Dounin <mdounin at mdounin.ru> wrote:
> 
> > Hello!
> >
> > On Wed, Oct 12, 2011 at 09:22:41PM +0400, Maxim Dounin wrote:
> >
> > > Hello!
> > >
> > > On Wed, Oct 12, 2011 at 09:32:41AM -0700, Matthieu Tourne wrote:
> > >
> > > > On Wed, Oct 12, 2011 at 5:20 AM, Maxim Dounin <mdounin at mdounin.ru>
> > wrote:
> > > >
> > > > > Hello!
> > > > >
> > > > > On Tue, Oct 11, 2011 at 07:53:36PM -0700, Matthieu Tourne wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > After turning on keepalives, we've been seeing one crash pretty
> > > > > > consistently.
> > > > > > We're running nginx 1.1.5 and here is the backtrace:
> >
> > [...]
> >
> > > Ok, it looks like I'm right and all buffers are in the busy chain.
> > > Likely this happens due to the upstream sending the response in many
> > > small chunks.
> > >
> > > I'll try to reproduce it here and provide proper fix.
> >
> > Please try the attached patch.
> >
> > Maxim Dounin
> >


