SSL proxy slow....
James
thenetimp at gmail.com
Tue Sep 9 08:08:41 MSD 2008
I was thinking about that, maybe an ssh tunnel between the 2 servers,
but I don't have time to try that theory tonight. I'll try it again
later this week.
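Such a tunnel could be sketched with an ssh config entry like this (host name, user, and ports below are placeholders, not our actual setup -- just the shape of the idea):

```
# ~/.ssh/config on the frontend box -- a sketch, not a tested setup
Host backend-tunnel
    HostName backend.example.com    # placeholder for the backend's address
    User appuser                    # placeholder
    LocalForward 8080 localhost:80  # frontend:8080 -> backend:80, encrypted

# then run:  ssh -f -N backend-tunnel
# and point nginx at http://127.0.0.1:8080 instead of the public address
```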
James
On Sep 8, 2008, at 11:59 PM, Gabriel Ramuglia wrote:
> gotcha. maybe a vpn connection between the front end and back ends
> would be more appropriate than ssl?
>
> On Mon, Sep 8, 2008 at 11:43 PM, James <thenetimp at gmail.com> wrote:
>> we're sending credit card data, as the back end of the proxy is still
>> on a public network interface, and since it's EC2 I can't change that.
>>
>> James
>>
>> On Sep 8, 2008, at 11:24 PM, Gabriel Ramuglia wrote:
>>
>>> If the http version is identical to the https version, what
>>> difference
>>> does it make if the connection between the frontend and backend is
>>> encrypted?
>>>
>>> On Mon, Sep 8, 2008 at 11:06 PM, James <thenetimp at gmail.com> wrote:
>>>>
>>>> we've decided to go round robin DNS for now. It's got its
>>>> disadvantages, but since the site launches in the morning, I don't
>>>> have time to play with it before the launch; too many other things
>>>> to do. Kind of sucks, I was really excited about using nginx.
>>>>
>>>> James
>>>>
>>>>
>>>> On Sep 8, 2008, at 10:41 PM, Gabriel Ramuglia wrote:
>>>>
>>>>> varnish can't act as an ssl server, not sure about being an ssl
>>>>> client.
>>>>>
>>>>> On Mon, Sep 8, 2008 at 9:41 PM, James <thenetimp at gmail.com> wrote:
>>>>>>
>>>>>> Thanks Dave. I'll look into both of those.
>>>>>>
>>>>>> Thanks,
>>>>>> James
>>>>>>
>>>>>>
>>>>>> On Sep 8, 2008, at 9:05 PM, Dave Cheney wrote:
>>>>>>
>>>>>>> The dog slowness you are seeing is probably nginx renegotiating
>>>>>>> SSL on every backend request. At the moment nginx will issue a
>>>>>>> connection close after each request.
>>>>>>>
>>>>>>> If you are using nginx as an SSL load balancer you might need to
>>>>>>> use something else (varnish? squid?) that can maintain persistent
>>>>>>> connections to your backend; this might help, a bit.
>>>>>>>
>>>>>>> Cheers
>>>>>>>
>>>>>>> Dave
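(For what it's worth, nginx builds that have the upstream keepalive directive can hold backend connections open instead of closing them per request -- a sketch only, untested against this setup:)

```
upstream givvymainssl {
    server 75.101.150.160:443 max_fails=1 fail_timeout=30s;
    server 67.202.3.21:443 max_fails=1 fail_timeout=30s;
    keepalive 16;                      # pool of idle upstream connections
}

location / {
    proxy_http_version 1.1;            # upstream keepalive needs HTTP/1.1
    proxy_set_header Connection "";    # don't send "Connection: close" upstream
    proxy_ssl_session_reuse on;        # reuse SSL sessions to the backend
    proxy_pass https://givvymainssl;
}
```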
>>>>>>>
>>>>>>> On Mon, 8 Sep 2008 20:36:04 -0400, James <thenetimp at gmail.com>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> I do need to pass SSL back to my app from the front nginx
>>>>>>>> server, because we are using EC2 for our servers, so I need to
>>>>>>>> encrypt the connections back from the 2 front end servers, as
>>>>>>>> they run on a public network.
>>>>>>>>
>>>>>>>> James
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sep 8, 2008, at 8:05 PM, Dave Cheney wrote:
>>>>>>>>
>>>>>>>>> Hi James,
>>>>>>>>>
>>>>>>>>> If nginx is acting as your SSL handler then you don't need
>>>>>>>>> to pass
>>>>>>>>> SSL back
>>>>>>>>> to your app. This should be sufficient.
>>>>>>>>>
>>>>>>>>> location / {
>>>>>>>>>     proxy_set_header X-FORWARDED_PROTO https;
>>>>>>>>>     proxy_pass http://givvymain;
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> Cheers
>>>>>>>>>
>>>>>>>>> Dave
>>>>>>>>>
>>>>>>>>> On Mon, 8 Sep 2008 19:50:30 -0400, James <thenetimp at gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> Here is my server config. When I go to
>>>>>>>>>> http://prod.givvy.com the result is normal. When I go to
>>>>>>>>>> https://prod.givvy.com it's dog slow.
>>>>>>>>>>
>>>>>>>>>> Any idea as to how to speed up the SSL side of it? (Right now
>>>>>>>>>> I am using a local hosts file change to point to the right IP
>>>>>>>>>> address, as prod.givvy.com points to a maintenance page.) We
>>>>>>>>>> want to launch the site tomorrow, but this is a huge problem
>>>>>>>>>> for us. I'd hate to launch it with one server.
>>>>>>>>>>
>>>>>>>>>> Thanks
>>>>>>>>>> James
>>>>>>>>>>
>>>>>>>>>> http {
>>>>>>>>>>
>>>>>>>>>>     upstream givvymain {
>>>>>>>>>>         server 75.101.150.160:80 max_fails=1 fail_timeout=30s;
>>>>>>>>>>         server 67.202.3.21:80 max_fails=1 fail_timeout=30s;
>>>>>>>>>>     }
>>>>>>>>>>
>>>>>>>>>>     upstream givvymainssl {
>>>>>>>>>>         server 75.101.150.160:443 max_fails=1 fail_timeout=30s;
>>>>>>>>>>         server 67.202.3.21:443 max_fails=1 fail_timeout=30s;
>>>>>>>>>>     }
>>>>>>>>>>
>>>>>>>>>>     server {
>>>>>>>>>>         listen      80;
>>>>>>>>>>         server_name prod.givvy.com;
>>>>>>>>>>         location / {
>>>>>>>>>>             proxy_pass          http://givvymain;
>>>>>>>>>>             proxy_next_upstream error timeout;
>>>>>>>>>>         }
>>>>>>>>>>     }
>>>>>>>>>>
>>>>>>>>>>     server {
>>>>>>>>>>         listen      443;
>>>>>>>>>>         server_name prod.givvy.com;
>>>>>>>>>>
>>>>>>>>>>         ssl                 on;
>>>>>>>>>>         ssl_certificate     /####PATH TO CERT###/;
>>>>>>>>>>         ssl_certificate_key /####PATH TO KEY###/;
>>>>>>>>>>         keepalive_timeout   70;
>>>>>>>>>>
>>>>>>>>>>         location / {
>>>>>>>>>>             proxy_set_header X-FORWARDED_PROTO https;
>>>>>>>>>>             proxy_pass       https://givvymainssl;
>>>>>>>>>>         }
>>>>>>>>>>     }
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
More information about the nginx mailing list