[PATCH 00 of 12] HTTP/3 proxying to upstreams

Maxim Dounin mdounin at mdounin.ru
Fri Dec 29 11:44:18 UTC 2023


Hello!

On Thu, Dec 28, 2023 at 05:23:38PM +0300, Vladimir Homutov via nginx-devel wrote:

> On Thu, Dec 28, 2023 at 04:31:41PM +0300, Maxim Dounin wrote:
> > Hello!
> >
> > On Wed, Dec 27, 2023 at 04:17:38PM +0300, Vladimir Homutov via nginx-devel wrote:
> >
> > > On Wed, Dec 27, 2023 at 02:48:04PM +0300, Maxim Dounin wrote:
> > > > Hello!
> > > >
> > > > On Mon, Dec 25, 2023 at 07:52:41PM +0300, Vladimir Homutov via nginx-devel wrote:
> > > >
> > > > > Hello, everyone,
> > > > >
> > > > > and Merry Christmas to all!
> > > > >
> > > > > I'm a developer of Angie, an nginx fork.  Recently we implemented
> > > > > HTTP/3 proxy support in our fork [1].
> > > > >
> > > > > We'd like to contribute this functionality to nginx OSS community.
> > > > > Hence here is a patch series backported from Angie to the current
> > > > > head of the nginx mainline branch (1.25.3).
> > > >
> > > > Thank you for the patches.
> > > >
> > > > Are there any expected benefits from HTTP/3 being used as a
> > > > protocol to upstream servers?
> > >
> > > Personally, I don't see much.
> > >
> > > Probably, faster connection establishment due to 0-RTT support (which
> > > needs to be implemented) and better multiplexing (again, if implemented
> > > wisely). I have run some simple benchmarks, and it looks more or less
> > > similar to usual SSL connections.
> >
> > Thanks for the details.
> >
> > Multiplexing has been available since the introduction of the
> > FastCGI protocol, but it has yet to be seen working in upstream
> > connections.  As for 0-RTT, using keepalive connections is probably
> > more efficient anyway (and 0-RTT is not really needed for upstream
> > connections in most cases either).
> 
> With HTTP/3 and keepalive we can have just one QUIC "connection" per
> upstream server (in the extreme case). We perform the heavy handshake
> once and leave the connection open. After that we just create HTTP/3
> streams to perform requests. They can perfectly well run in parallel
> over the same QUIC connection. Probably this is something worth
> implementing, with limitations of course: we don't want to mix requests
> from different (classes of) clients in the same connection, we don't
> want such connections to live forever, and we need a means to control
> the level of such multiplexing.
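
To make that concrete: the "means to control" part would presumably 
need some knobs along these lines (purely illustrative directive 
names, not something from the submitted patches):

    upstream backend {
        server 127.0.0.1:8443;

        # illustrative only: cap the number of HTTP/3 streams
        # multiplexed over a single upstream QUIC connection
        http3_max_concurrent_streams 32;

        # illustrative only: retire a QUIC connection after it has
        # served this many requests, so it does not live forever
        http3_max_requests 1000;
    }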

Multiplexing has various downsides: the already mentioned security 
implications, issues with balancing requests between upstream 
entities not directly visible to the client (such as different 
worker processes), and added complexity.  And, as already mentioned, 
it is not something new in HTTP/3.
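
For reference, the keepalive approach mentioned above already works 
with plain HTTP/1.1 upstream connections; a minimal sketch (not a 
complete configuration) looks like this:

    upstream backend {
        server 127.0.0.1:8080;

        # keep up to 16 idle connections to the upstream open in each
        # worker process, so most requests reuse an established one
        keepalive 16;
    }

    server {
        listen 8000;

        location / {
            proxy_pass http://backend;

            # HTTP/1.1 and an empty Connection header are required
            # for upstream keepalive connections to be reused
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }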

[...]

-- 
Maxim Dounin
http://mdounin.ru/

