"detach" upstream from request (or allow send from keepalive saved connection) / FastCGI multiplexing
Dipl. Ing. Sergey Brester
serg.brester at sebres.de
Wed Feb 10 17:30:45 UTC 2021
I have a question: how can an upstream connection be properly "detached" from
a request when the request is closed by the client?
Some time ago I implemented FastCGI multiplexing for nginx, which works
pretty well, except for the case where a request is closed by the client
side. In that case _ngx_http_upstream_finalize_request_ closes the
upstream connection.
This may not matter much with a single connection per request, but it is
very annoying in the multiplexing case, since closing such an upstream
connection means simultaneously closing N requests that are not affected
by that client but are served through the same upstream pipe.
So I tried to implement an abortive "close" by sending an ABORT_REQUEST (2)
record, as specified by the FastCGI protocol.
Since the _upstream->abort_request_ handler is not yet implemented in nginx,
my first attempt was simply to extend _ngx_http_fastcgi_finalize_request_
to create a new send buffer there and to restore
_r->upstream->keepalive_, so that _u->peer.free_ in
_ngx_http_upstream_finalize_request_ would retain the upstream connection.
I do see "free keepalive peer: saving connection" logged (and the
connection is not closed), but probably because
_ngx_http_upstream_free_keepalive_peer_ has moved it to the cached queue,
the ABORT_REQUEST record never gets sent to the FastCGI side.
Is there some proper way to retain an upstream connection (still able to
send) that is detached from the request by close, or just before close, so
that such an abortive "disconnect" can be handled through the upstream
pipe? In other words, to avoid an upstream close or saving the keepalive
connection in