<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto"><div>So I have a few different thoughts:</div><div><br></div><div>1. Yes, nginx does support SSL passthrough. You can configure nginx to stream the request straight to your SSL backend. I do this when I don't have control of the backend and it has to be SSL. I don't think that's your situation.</div><div><br></div><div>2. I suspect that there's something wrong with your SSL configuration and/or your nginx VMs are underpowered. Can you test your throughput when requesting plain-HTTP static resources? Check with <a href="http://webpagetest.org">webpagetest.org</a> that the SSL handshake cost is only being paid on the first request.</div><div><br></div><div>3. It's generally better to terminate SSL as early as possible and have the bulk of your communication be unencrypted.</div><div>What spec are your VMs?<br><br><div>Sent from my iPhone</div></div><div><br>On Mar 5, 2017, at 6:00 PM, Alex Samad <<a href="mailto:alex@samad.com.au">alex@samad.com.au</a>> wrote:<br><br></div><blockquote type="cite"><div><div dir="ltr">Hi<div><br></div><div>Firstly, I am fairly new to nginx.</div><div><br></div><div><br></div><div>From what I understand, you have a standard sort of setup.</div><div><br></div><div><br></div><div>2 nodes (VMs) with haproxy, allowing nginx to run active/passive.</div><div><br></div><div>You have SSL requests; once nginx terminates the SSL, it injects a security header/token and then, I presume, passes the request on to a backend, with the nginx-to-application-server leg being non-SSL.</div><div><br></div><div>You are having a performance issue with the SSL + header-injection part, which seems to be limiting you to approx 60 req/sec before you hit 100% CPU. This seems very low to me: looking at my prod setup, which is similar to yours, I am seeing 600 connections and request rates ranging from 8-400 req/sec, all whilst the CPU stays very low.</div><div><br></div><div>We try to use long-lived TCP/SSL sessions, but we also use a thick client, so we have more control.</div><div><br></div><div>I'm not sure about KEMP LoadMaster.</div><div><br></div><div>What I described was our potential plan for when the load gets to be too much for the active/passive setup.</div><div><br></div><div>It would allow you to take your ~60 sessions and distribute them between 2 and up to 16 nodes (I believe this is the max for Pacemaker) in an active/active setup.</div><div><br></div><div>The 2-node setup would be the same as yours:</div><div><br></div><div><br></div><div>router -> VLAN with the 2 nodes. Node A would only process node A's data and node B would only process node B's data. This, in theory, has the potential to double your req/sec.</div><div><br></div><div><br></div><div><br></div><div>Alex</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 3 March 2017 at 19:33, polder_trash <span dir="ltr"><<a href="mailto:nginx-forum@forum.nginx.org" target="_blank">nginx-forum@forum.nginx.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Alexsamad,<br>
I might not have been clear; allow me to try again:<br>
<br>
* Currently 2 NGINX revproxy nodes: 1 active, the other on standby in case<br>
node 1 fails.<br>
* Since I am injecting an authentication header into the request, the HTTPS<br>
request has to be offloaded at the node, which introduces additional load<br>
compared to injecting into non-encrypted requests.<br>
* Current peak load is ~60 concurrent requests at ~100% CPU. Concurrent<br>
requests are expected to more than double, so the revproxy will be the bottleneck.<br>
<br>
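For reference, the offloading + header-injection step looks roughly like this (a minimal sketch only; the server name, certificate paths, header name, and upstream address are placeholders, not our actual config):<br>

```nginx
server {
    listen 443 ssl;
    server_name example.com;                           # placeholder

    ssl_certificate     /etc/nginx/certs/example.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.key;

    # Reuse SSL sessions so the full handshake cost is only
    # paid on a client's first request
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    location / {
        # Inject the authentication header, then pass plain HTTP on
        proxy_set_header X-Auth-Token "example-token"; # placeholder header
        proxy_pass http://app_backend;                 # non-SSL leg
    }
}
```

(If session reuse is not already enabled, `ssl_session_cache` alone may noticeably cut the handshake CPU cost.)<br>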
The NGINX revproxies run as VMs and I can ramp up the machine specs a<br>
little, but I do not expect this to completely solve the issue. Therefore I<br>
am looking for some method of spreading the requests over multiple backend<br>
revproxies, without the load-balancer frontend having to deal with SSL<br>
offloading.<br>
<br>
From the KEMP LoadMaster documentation I found that this technique is<br>
called SSL Passthrough. I am currently checking whether NGINX also supports it.<br>
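From what I have found so far, NGINX appears to support this via its stream module (layer-4 TCP proxying, so TLS is never terminated at the frontend). A minimal sketch, assuming nginx is built with stream support; the addresses are placeholders:<br>

```nginx
# Layer-4 (TCP) proxying: TLS is passed through untouched, so the
# backend revproxies still do the offloading and header injection.
stream {
    upstream revproxies {
        server 10.0.0.11:443;   # revproxy node A (placeholder)
        server 10.0.0.12:443;   # revproxy node B (placeholder)
    }

    server {
        listen 443;
        proxy_pass revproxies;
    }
}
```

The frontend then only shuffles bytes, which should be far cheaper than SSL offloading.<br>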
<br>
What do you think? Will this solve my issue? Am I on the wrong track?<br>
<br>
Posted at Nginx Forum: <a href="https://forum.nginx.org/read.php?2,272713,272729#msg-272729" rel="noreferrer" target="_blank">https://forum.nginx.org/read.<wbr>php?2,272713,272729#msg-272729</a><br>
<br>
______________________________<wbr>_________________<br>
nginx mailing list<br>
<a href="mailto:nginx@nginx.org">nginx@nginx.org</a><br>
<a href="http://mailman.nginx.org/mailman/listinfo/nginx" rel="noreferrer" target="_blank">http://mailman.nginx.org/<wbr>mailman/listinfo/nginx</a><br>
</blockquote></div><br></div>
</div></blockquote></body></html>