General Development Inquiry

Jeff Heisz jmheisz at gmail.com
Sun Jun 7 03:43:40 UTC 2020


Giving this one more try with a few questions this time.

1) This mailing list used to be more discussion-oriented but seems to
be mainly patch notifications now.  Is there a more appropriate
channel for asking these kinds of module development questions about
NGINX?

2) I'm actually working on a second module/system aside from the one
mentioned in my previous post.  This might be an easier question.  It
uses a custom upstream (like the memcached one) to talk a binary
request/response protocol to another daemon.  For the most part it
works, but there is one oddity that I can't seem to resolve: the
daemon responds with an HTTP status code/header set and then a
response body.  My upstream process_header function sets up all of
the respective HTTP headers (content length, etc.) and the buffer
positioning to stream the response, and NGINX issues it to the
browser.  However, it doesn't finalize the request; instead it waits
for more data from the daemon (which isn't coming, the response is
complete) and then times out after a minute and closes the
connection.  My suspicion is that the built-in upstream handling is
designed around servers that close the connection after the response
is issued.  But my daemon doesn't need to do that, so I'm essentially
trying to use the upstream with keepalive.  Do I also need to provide
an input filter to track the response length myself and properly mark
the last buffer/end of chain so that the request finalizes without
closing the upstream connection?  Or is there some variable that I'm
missing?  Any suggested examples to look at?
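
In case it helps frame the question, here is the rough shape of the
input filter I have in mind, modeled on the memcached module and
ngx_http_upstream_non_buffered_filter().  The my_* names are
placeholders and error handling is trimmed, so treat it as a sketch
rather than working code:

/* registered in the content handler, alongside u->create_request,
 * u->process_header and the other upstream callbacks */
u->input_filter_init = my_upstream_filter_init;
u->input_filter = my_upstream_filter;
u->input_filter_ctx = r;

static ngx_int_t
my_upstream_filter_init(void *data)
{
    ngx_http_request_t   *r = data;
    ngx_http_upstream_t  *u = r->upstream;

    /* process_header has already parsed the daemon's headers and set
     * u->headers_in.content_length_n; tell the upstream machinery how
     * many body bytes to expect */
    u->length = u->headers_in.content_length_n;

    return NGX_OK;
}

static ngx_int_t
my_upstream_filter(void *data, ssize_t bytes)
{
    ngx_http_request_t   *r = data;
    ngx_http_upstream_t  *u = r->upstream;
    ngx_buf_t            *b;
    ngx_chain_t          *cl, **ll;

    b = &u->buffer;

    /* append the newly read bytes to the chain sent downstream */
    for (cl = u->out_bufs, ll = &u->out_bufs; cl; cl = cl->next) {
        ll = &cl->next;
    }

    cl = ngx_chain_get_free_buf(r->pool, &u->free_bufs);
    if (cl == NULL) {
        return NGX_ERROR;
    }

    *ll = cl;

    cl->buf->flush = 1;
    cl->buf->memory = 1;

    cl->buf->pos = b->last;
    b->last += bytes;
    cl->buf->last = b->last;
    cl->buf->tag = u->output.tag;

    /* note: the over-read guard from
     * ngx_http_upstream_non_buffered_filter() is omitted here */
    u->length -= bytes;

    if (u->length == 0) {
        /* full body read: the request can finalize and the connection
         * can be reused by the keepalive module instead of timing out
         * and being closed */
        u->keepalive = 1;
    }

    return NGX_OK;
}

My understanding is that once u->length reaches zero the non-buffered
upstream path can finalize the request, and with u->keepalive set the
keepalive module can cache the connection instead of closing it.  Is
that the right mechanism, or am I off track?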

3) I'm still looking for suggestions/ideas regarding my email from a
few weeks ago (properly streaming large buffer responses from worker
threads).  I was excited when I noticed the njs pieces for executing
JS as part of a request, figuring they would give some direction,
since you wouldn't want JS executing in the primary event thread.
But from what I could tell, that appears to be exactly what it's
doing :(  I'm still thinking of some kind of extended worker pool
model using IPC pipes to communicate from the threads back to the
main event loop (essentially a local upstream), but I was just
looking for thoughts/comments on the best approach.
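
To make the question more concrete, here is a minimal sketch of the
one-shot version using the stock thread pool API (nginx built with
--with-threads and a thread_pool configured).  The my_* names, the
fixed 4096 byte buffer and the request-lifetime bookkeeping are my
own placeholders/assumptions, with client-abort and error handling
glossed over:

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>
#include <ngx_thread_pool.h>

/* hypothetical per-task context for this sketch */
typedef struct {
    ngx_http_request_t  *request;
    ngx_buf_t           *buf;    /* allocated up front, filled by the thread */
} my_task_ctx_t;

/* would be resolved at configuration time, e.g. via ngx_thread_pool_add() */
static ngx_thread_pool_t  *my_thread_pool;

static void
my_task_handler(void *data, ngx_log_t *log)
{
    my_task_ctx_t  *ctx = data;

    /* runs in a pool thread: do the slow work here, touching only the
     * memory handed over in ctx (r->pool itself is not thread safe) */
    ctx->buf->last = ngx_cpymem(ctx->buf->pos, "slow result\n", 12);
}

static void
my_task_done(ngx_event_t *ev)
{
    ngx_int_t            rc;
    ngx_chain_t          out;
    my_task_ctx_t       *ctx = ev->data;
    ngx_http_request_t  *r = ctx->request;
    ngx_connection_t    *c = r->connection;

    /* back on the worker's event loop: safe to use the request APIs */
    r->main->blocked--;
    r->aio = 0;

    r->headers_out.status = NGX_HTTP_OK;
    r->headers_out.content_length_n = ctx->buf->last - ctx->buf->pos;

    ctx->buf->last_buf = 1;
    out.buf = ctx->buf;
    out.next = NULL;

    rc = ngx_http_send_header(r);

    if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) {
        ngx_http_finalize_request(r, rc);
        ngx_http_run_posted_requests(c);
        return;
    }

    /* releases the reference taken in the content handler */
    ngx_http_finalize_request(r, ngx_http_output_filter(r, &out));
    ngx_http_run_posted_requests(c);
}

static ngx_int_t
my_content_handler(ngx_http_request_t *r)
{
    my_task_ctx_t      *ctx;
    ngx_thread_task_t  *task;

    task = ngx_thread_task_alloc(r->pool, sizeof(my_task_ctx_t));
    if (task == NULL) {
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }

    ctx = task->ctx;
    ctx->request = r;
    ctx->buf = ngx_create_temp_buf(r->pool, 4096);
    if (ctx->buf == NULL) {
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }

    task->handler = my_task_handler;      /* runs in the pool thread */
    task->event.handler = my_task_done;   /* runs back in the event loop */
    task->event.data = ctx;

    if (ngx_thread_task_post(my_thread_pool, task) != NGX_OK) {
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }

    r->main->count++;     /* hold the request until the completion handler */
    r->main->blocked++;   /* don't tear it down while the thread owns ctx */
    r->aio = 1;

    return NGX_DONE;
}

That covers the one-shot case, but the part I'm still unsure about is
how to stream a large response back in pieces through this mechanism
without buffering the whole thing in memory first, which is where the
IPC pipe / local upstream idea came from.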

Thanks in advance,

jmh

