Load Balancing NTLM over HTTP with NGINX

Michael B Allen ioplex at gmail.com
Sun Nov 20 00:36:16 UTC 2022

On Sat, Nov 19, 2022 at 4:04 PM Maxim Dounin <mdounin at mdounin.ru> wrote:

> Hello!
> On Fri, Nov 18, 2022 at 10:30:29PM -0500, Michael B Allen wrote:
> > NTLM over HTTP is a 3 request "handshake" that must occur over the same
> > connection.
> > My HTTP service implements the NTLMSSP acceptor and uses the client's
> > remote address and port to track the authentication state of each TCP
> > connection.
> >
> > My implementation also uses a header called 'Jespa-Connection-Id' that
> > allows the remote address and port to be supplied externally.
> > NGINX can use this to act as a proxy for NTLM over HTTP with a config
> > like the following:
> >
> > server {
> >     location / {
> >         proxy_pass http://localhost:8080;
> >         proxy_set_header Jespa-Connection-Id $remote_addr:$remote_port;
> >     }
> > }
> I'm pretty sure you're aware of this, but just for the record.
> Note that NTLM authentication is not HTTP-compatible, but rather
> requires very specific client behaviour.  Further, NTLM
> authentication can easily introduce security issues as long as any
> proxy servers are used between the client and the origin server,
> since it authenticates a connection rather than particular
> requests, and connections are not guaranteed to contain only
> requests from a particular client.

Hi Maxim,

Hijacking NTLM-authenticated TCP connections is not THAT easy.
But generally, we assume HTTPS (TLS) is being used if people care at all
about security.
AFAIK TLS can't go through proxies without tunnelling, so either way you
shouldn't be able to hijack a TLS connection.

NTLM is used because it's fast, reliable, and provides a truly
password-free SSO experience.
While Kerberos provides superior security, it can be fickle (it needs
client access to a DC and time sync, and it depends heavily on DNS,
SPNs, ...).
Since NTLM is the fallback mechanism, it always works.

NTLM has issues that are more significant than what you described.
But they can be managed.

> > More generally, do you see any problems with this scheme?
> As of now, nginx by default does not use keepalive connections to
> the upstream servers.  These can, however, be configured by
> using the "keepalive" directive (http://nginx.org/r/keepalive),
> and obviously enough this will break the suggested scheme as there
> will be requests from other clients on the same connection.

My implementation works with connection caching (keepalive) to backends.
Here's the config I'm testing right now and so far it's holding up:

upstream backend {
    server localhost:8080;
    server localhost:8081;
    keepalive 16;
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Jespa-Connection-Id $remote_addr:$remote_port;
    }
}

Loopback captures look right.

Note the key difference in my scheme is the Jespa-Connection-Id header,
which gives the backend the id it needs to properly map clients to
security contexts.
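To illustrate the idea (this is only a sketch with hypothetical names, not
the actual Jespa API), the backend can key its per-connection handshake
state on the header value instead of the socket peer, so multiplexed
keepalive connections from the proxy don't mix up clients:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: track NTLM handshake state per client using the
// Jespa-Connection-Id header value ("addr:port" of the real client) as
// the key, rather than the TCP peer of the proxied connection.
public class ConnectionIdTracker {

    // connection id -> opaque per-connection auth state
    private final Map<String, Object> states = new ConcurrentHashMap<>();

    // Look up in-progress handshake state for this client connection.
    public Object get(String connectionId) {
        return states.get(connectionId);
    }

    // Store state between legs of the 3-request NTLM handshake.
    public void put(String connectionId, Object state) {
        states.put(connectionId, state);
    }

    // Discard state once authentication completes or fails.
    public Object remove(String connectionId) {
        return states.remove(connectionId);
    }
}
```

Each of the three NTLM legs carries the same header value, so lookups land
on the same state object regardless of which backend keepalive connection
the proxy reused.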


Michael B Allen
Java AD DS Integration
