General Development Inquiry

Jeff Heisz jmheisz at gmail.com
Mon Jun 8 15:27:32 UTC 2020


Ok, that did it. It was a bit more painful than expected, since I had to
actually write a filter function (if you don't provide one, your init
method is also overwritten by the default), but in the end it is now
cleanly processing without hanging up on my upstream daemon.

A suggested change to consider for the nginx core: at the end of the
ngx_http_upstream_process_headers() function, instead of blindly
setting the length to -1, perhaps only do so when upstream->keepalive
is 0/false. That way a custom process_header could set an explicit
length and the keepalive marker, and would not need to define a filter
when the content is pass-through.  Just a thought.
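For concreteness, here is a self-contained sketch of the behavior being suggested. The upstream_t struct below is a hypothetical stand-in for the two relevant ngx_http_upstream_t fields, not actual nginx code; it only models how the end of ngx_http_upstream_process_headers() would change.

```c
/* Hypothetical stand-in for the relevant ngx_http_upstream_t fields. */
typedef struct {
    long     length;     /* models off_t u->length */
    unsigned keepalive;  /* models the u->keepalive flag */
} upstream_t;

/* Current behavior at the end of ngx_http_upstream_process_headers():
 * the expected body length is unconditionally reset to -1, i.e.
 * "read until the connection closes". */
static void process_headers_current(upstream_t *u)
{
    u->length = -1;
}

/* Suggested behavior: only reset the length when keepalive is unset,
 * so a length explicitly set by a custom process_header survives. */
static void process_headers_suggested(upstream_t *u)
{
    if (!u->keepalive) {
        u->length = -1;
    }
}
```

With this change, a module whose process_header sets both an explicit length and the keepalive flag would keep that length; modules that set neither would see the current read-until-close behavior unchanged.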

Thanks again regardless.

jmh


> >
> > My module was setting the upstream length as you suggested, hence my confusion.  So I flooded the ngx_http_upstream code with debug statements to hunt down why the code around line 3614 (which finalizes the upstream when its length reaches zero) wasn't firing.  Turns out the value was -1, which, if I read the code correctly, indicates that the upstream is complete when the connection is closed (consistent with what I was seeing).
> >
> > More debugging, and it turns out that my custom process_header method sets the correct length to process, but immediately after that nginx internally calls process_headers, which transfers all of the values between the upstream and request headers and...then...sets the upstream length to -1!
> >
> > Gah!  So now I know what's wrong, but have no idea how to address it!
> > There are no indicators or flags that I can see in the process_headers method to keep it from resetting the length to EOF!
> >
> > jmh
> >
> I think you should set it in the input_filter_init function instead of process_header; this is what the proxy & memcached modules do.  That function is called after process_header, so the length won't get reset.
>
> Eran
>
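The input_filter_init approach from the quoted reply can be sketched as below. These structs are hypothetical simplifications standing in for ngx_http_upstream_t and its headers_in member (the real modules operate on nginx's own types); the point is only the timing: because input_filter_init runs after the core has already reset the length to -1, a length set here is not overwritten.

```c
/* Hypothetical simplified stand-ins for the nginx structures involved;
 * field names loosely mirror ngx_http_upstream_t / u->headers_in. */
typedef struct {
    long content_length_n;   /* body length parsed by process_header, -1 if unknown */
} headers_in_t;

typedef struct {
    headers_in_t headers_in;
    long         length;     /* bytes still expected from the upstream */
    unsigned     keepalive;
} upstream_t;

/* Runs after the core's process_headers has reset length to -1, so a
 * value assigned here survives (the approach the proxy and memcached
 * modules take with their real filter_init functions). */
static int my_input_filter_init(upstream_t *u)
{
    if (u->headers_in.content_length_n >= 0) {
        u->length = u->headers_in.content_length_n;
        u->keepalive = 1;   /* connection reusable once length reaches 0 */
    } else {
        u->length = -1;     /* unknown length: read until close */
        u->keepalive = 0;
    }
    return 0;               /* models NGX_OK */
}
```

In a real module this function would be installed via u->input_filter_init during request setup, alongside the pass-through filter discussed above.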


More information about the nginx-devel mailing list