Balancing NGINX reverse proxy

Alex Samad alex at samad.com.au
Sun Mar 5 23:00:17 UTC 2017


Hi

Firstly, I am fairly new to nginx.


From what I understand you have a standard sort of setup.


2 nodes (VMs) with HAProxy, allowing nginx to be active/passive.

You have SSL requests; once nginx terminates the SSL, it injects a
security header/token and then, I presume, passes the request on to a back
end. I also presume the nginx-to-application-server connection is non-SSL.
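If I have that right, the shape of it would be something like the sketch below (server names, certificate paths, and the header name are placeholders, not your actual config):

```nginx
server {
    listen 443 ssl;
    server_name revproxy.example.internal;            # placeholder

    ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder path
    ssl_certificate_key /etc/nginx/ssl/example.key;   # placeholder path

    location / {
        # Inject the security header/token before forwarding.
        proxy_set_header X-Auth-Token "some-token";   # placeholder header
        # Plain HTTP to the application server; SSL ends here.
        proxy_pass http://app-backend:8080;           # placeholder backend
    }
}
```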

You are having performance issues with the SSL + header-injection part, which
seems to be limiting you to approximately 60 requests per second before you
hit 100% CPU. This seems very, very low to me. Looking at my prod setup,
which is similar to yours, I am seeing 600 connections and request rates
ranging from 8-400/sec, all while the CPU stays very low.

We try to use long-lived TCP/SSL sessions, but we also use a thick client,
so we have more control.
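For what it's worth, encouraging long-lived sessions on both sides looks roughly like this (a sketch, not our actual config; the addresses, pool size, and timeouts are placeholders):

```nginx
http {
    # Reuse SSL sessions and keep client connections open.
    ssl_session_cache   shared:SSL:10m;    # placeholder size
    ssl_session_timeout 1h;
    keepalive_timeout   75s;

    # Keep a pool of idle connections open to the backend.
    upstream app_backend {
        server 10.0.0.10:8080;             # placeholder address
        keepalive 32;                      # idle connections kept per worker
    }

    server {
        listen 443 ssl;
        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "";   # needed for upstream keepalive
            proxy_pass http://app_backend;
        }
    }
}
```

That cuts down on SSL handshakes, which is where most of the CPU usually goes.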

I am not sure about KEMP LoadMaster.

What I described to you were our potential plans for when the load gets too
much for the active/passive setup.

It would allow you to take your 60 sessions(?) and distribute them between 2
and up to 16 nodes (I believe that is the max for Pacemaker) in an
active/active setup.

The 2-node setup would be the same as yours.


router -> VLAN with the 2 nodes -> Node A would only process node A data and
node B would only process node B data. This, in theory, would have the
potential to double your requests per second.
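On the SSL passthrough question from your earlier mail: nginx can do that with its stream module, which balances raw TCP without touching the SSL, so each back-end revproxy terminates SSL itself. A minimal sketch (the addresses are placeholders for your two nodes):

```nginx
stream {
    # Balance raw TCP; SSL is passed through untouched.
    upstream revproxies {
        server 10.0.0.11:443;   # placeholder: node A
        server 10.0.0.12:443;   # placeholder: node B
    }

    server {
        listen 443;
        proxy_pass revproxies;
    }
}
```

The trade-off is that the frontend can't see the requests at all, so any per-request logic (like your header injection) still has to happen on the nodes behind it.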



Alex


On 3 March 2017 at 19:33, polder_trash <nginx-forum at forum.nginx.org> wrote:

> Alexsamad,
> I might not have been clear, allow me to try again:
>
> * currently 2 NGINX revproxy nodes, 1 active the other on standby in case
> node 1 fails.
> * Since I am injecting an authentication header into the request, the HTTPS
> request has to be offloaded at the node and introduces additional load
> compared to injecting into non-encrypted requests.
> * Current peak load ~60 concurrent requests, ~100% load on CPU. Concurrent
> requests expected to more than double, so revproxy will be bottleneck.
>
> The NGINX revproxies run as a VM and I can ramp up the machine specs a
> little bit, but I do not expect this to completely solve the issue here.
> Therefore I am looking for some method of spreading the requests over
> multiple backend revproxies, without the load balancer frontend having to
> deal with SSL offloading.
>
> From the KEMP LoadMaster documentation I found that this technique is
> called
> SSL Passthrough. I am currently looking if that is also supported by NGINX.
>
> What do you think? Will this solve my issue? Am I on the wrong track?
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,272713,272729#msg-272729
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

