gRPC reverse proxy - connection level info cache

Ram B ram.programme at
Tue Jun 15 01:21:45 UTC 2021

Hello nginx team,

I am an nginx newbie, evaluating nginx as an alternative to our current
custom implementation of a gRPC reverse proxy.
At a very high level, this is how our current gRPC setup looks:
- There are multiple services (gRPC servers) running inside a box on
different ports (some run TCP servers or use Unix Domain Sockets - UDS).
- All those services are hosted only internally on the box, not exposed to
the outside world; there is one reverse proxy (revproxy) on the box which
exposes a gRPC server port to the external world.
- All the internal services register their RPCs (APIs) with the revproxy;
the revproxy keeps a map of which gRPC RPCs to direct to which internal
service.
- The revproxy is also responsible for the Auth RPC. It is currently
serviced by the revproxy itself and is not forwarded to any other internal
service; if auth fails, the connection is terminated immediately and no
further RPCs are allowed on that channel.
- The Auth RPC also carries additional context (a ClientID) along with the
user credentials, identifying the specific client that established the
HTTP/2 connection; the revproxy keeps track of this info as a map from
ClientID to its connection.
- If another client comes in with the same ClientID while the first
one is still active, the new client's request is rejected.
- When an existing connection goes down for a particular client (either
abruptly or normally), we remove that ClientID mapping so that the same
client can try reconnecting, or a new client with the same ClientID can
take over.

Trying a prototype of our implementation in nginx, I was able to use the
'grpc_pass' directive to forward requests to the corresponding internal
services. Here is where I am stumbling while adapting nginx to my use case:
for us, a typical gRPC client first establishes the channel, then runs the
Auth RPC, then various service RPCs; finally the client exits and the gRPC
channel gets destroyed.

 http {
          server {
                 listen 50051 http2;

                 location /auth_subreq {
                      internal;
                      grpc_pass grpc://localhost:50052;
                 }
                 location /AuthRPC {
                      grpc_pass grpc://localhost:50053;
                 }
                 location /RPC1 {
                      auth_request /auth_subreq;
                      grpc_pass grpc://localhost:50054;
                 }
                 location /RPC2 {
                      auth_request /auth_subreq;
                      grpc_pass grpc://localhost:50055;
                 }
          }
 }

With a config like this, I expect a client to call "AuthRPC" first,
immediately after establishing a gRPC channel; that gets forwarded to the
Auth service running at 50053. This service also does local authentication
using the user credentials that are part of the AuthRPC, and sends either a
success or a failure response back to the client.
If the AuthRPC succeeds, it also captures the client's IP address, port and
ClientID, and keeps that mapping either in memory or in some DB. If some
other client comes in with the same ClientID, the Auth service at 50053 can
see that there is already one client with the same ClientID and reject the
new request.
Clients typically start other RPCs once the AuthRPC succeeds; in this
example, "RPC1" and "RPC2" are served by two different services running at
50054 and 50055 respectively. We will have another service behind
"/auth_subreq" that is just an internal check of whether this particular
RPC can be allowed or not. To verify this, we use the info captured by
"AuthRPC" and see if there is already a successfully authenticated channel
established from the IP+PORT that the current RPC was invoked from; if not,
the RPC1/RPC2 request will be rejected.

This is the trickiest part: when the client connection gets closed, I have
to clean up the cached info that the AuthRPC stored, the ClientID for an
IP+PORT combo along with its authentication status (whether it succeeded or
not). I couldn't figure out the best mechanism to handle this. If anyone
has already dealt with this kind of scenario, can you please let me know?

Looking at the Development Guide, it looks like there are several possible
methods; I am not sure which one is best suited for this.
- I tried using ngx_http_subrequest() from within the
ngx_http_close_request function, but for some reason I ended up with an
infinite loop of subrequests; I need to explore more of how requests and
subrequests interact.
- Maybe make an HTTP client request to "/AuthRPC_Clean" or something which
does the necessary cleanup of connection state; I am not sure if it is wise
to call that in ngx_http_close_connection.
- Can we write a new module that does what the AuthRPC service provides,
stores lightweight per-connection data, and is able to look up that info
when I get a new connection?
- Is there some event mechanism where I can subscribe to callbacks for
connection establishment and teardown? I could do the checking of existing
ClientIDs and their deletion in the respective callbacks.

Any thoughts, greatly appreciated.

Best, -Ram.