No subject

Peter Booth peter_booth at me.com
Mon Jun 20 00:54:21 UTC 2016



Sent from my iPhone
> 
>> On Saturday 18 June 2016 14:12:31 B.R. wrote:
>> There is no downside on the server application I suppose, especially since,
>> as you recalled, nginx got no trouble for it.
>> 
>> One big problem is, there might be socket exhaustion on the TCP stack of
>> your front-end machine(s). Remember a socket is defined by a triple
>> <protocol, address, port>, and the number of available ports is 65535 (layer
>> 4) for every (layer 3) <protocol, address> pair.
>> The baseline is, for the TCP connections underlying your HTTP communication,
>> you have 65535 ports for each IP version your server handles.
> [..]
> 
> Each TCP connection is identified by 4 parameters: source IP, source PORT,
> destination IP, destination PORT.  Since clients usually have different
> public IPs, there's no limitation from the number of ports.

Whenever I read discussions like this I try to remember how important context is. There's a disagreement here about whether there's potential resource exhaustion with long-lived persistent (HTTP) sockets.

Client-side port exhaustion can be an issue when you have a hardware load balancer in front of your pool of nginx servers, because in that case it's common practice to use NAT: what the nginx server sees is a TCP connection originating from an internal IP address owned by the load balancer. In that scenario it can be pretty easy to run out of client ports.
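
To make the arithmetic concrete, here is a rough back-of-the-envelope sketch. The port range and TIME_WAIT figures are typical Linux defaults, not measurements from any particular system:

```python
# Rough estimate of how quickly one NAT source IP can churn through
# new connections to a single <server IP, server port> destination.
EPHEMERAL_PORTS = 60999 - 32768 + 1   # default Linux ip_local_port_range
TIME_WAIT_SECONDS = 60                # typical time a closed socket's port stays reserved

sustainable_rate = EPHEMERAL_PORTS / TIME_WAIT_SECONDS
print(f"~{sustainable_rate:.0f} new connections/second before port exhaustion")
```

With long-lived keep-alive connections the picture is different again: each idle connection pins a port for its whole lifetime, not just for TIME_WAIT.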

I imagine there are other shops that might experience a similar issue with a firewall, an inline intrusion-detection system, or a dedicated HTTP proxy.

When you build a web app you're dealing with people's expectations, and today I think people are pretty accustomed to the notion that sessions time out and that going for a three-hour lunch can reasonably lead to having to reauthenticate to an app. I'm not conflating web-app sessions with HTTP persistence; it's just that if this is an app with authentication and a typical timeout, then persistent HTTP offers no value once a session is no longer valid.





> 
>> 
>> Now, you have to consider the relation between new clients (thus new
>> connections) and the existing/open ones.
>> If you have very low traffic, you could set an almost infinite timeout on
>> your keepalive capability, that would greatly help people who never sever
>> connection to your website because they are so addicted to it (and never
>> close the tab of their browser to it).
>> On the contrary, if you are very intensively seeing new clients, with the
>> same parameters, you would quickly exhaust your available sockets and be
>> unable to accept client connections.
> 
> No, keep-alive connections shouldn't exhaust available sockets, because
> there's "worker_connections" directive in nginx that limits number of open
> connections and must be set according to other limits in your system.
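
As a concrete illustration of the cap Valentin mentions (values here are hypothetical, not a recommendation):

```nginx
# Sketch only: each worker process holds at most this many connections
# (client plus upstream), so keep-alive sockets cannot grow unbounded.
events {
    worker_connections 4096;   # must also fit within the worker's open-file limit
}
```
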
> 
> [..]
>> And finally, nginx provides the ability to recycle connections based on a
>> number of requests made (default 100).
>> I guess that is a way of mitigating clients with different behaviors: a
>> client having made 100 requests is probably considered to have had its share
>> of time on the server, and it is time to put it back in the pool to give
>> others access in case of congestion.
> 
> No, it's to overcome possible memory leaks of long lived connections in nginx,
> because some modules may allocate memory from connection pool on each request.
> It's usually safe to increase this value to 1000-10000.
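
For reference, the two directives being discussed look like this in an http or server block (values illustrative only):

```nginx
keepalive_timeout  75s;    # how long an idle keep-alive connection is held open
keepalive_requests 1000;   # recycle the connection after this many requests
```
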
> 
>  wbr, Valentin V. Bartenev
> 
> 
> 
> ------------------------------
> 
> Message: 2
> Date: Sun, 19 Jun 2016 10:09:39 +0100
> From: Francis Daly <francis at daoine.org>
> To: nginx at nginx.org
> Subject: Re: SSL handshake failed with mutual TLS
> Message-ID: <20160619090939.GJ2852 at daoine.org>
> Content-Type: text/plain; charset=us-ascii
> 
> On Sat, Jun 18, 2016 at 11:29:49AM +0300, Andrey Novikov wrote:
> 
> Hi there,
> 
>> We've successfully configured interaction with two of these systems
>> (all with mutual TLS), and when pointed another one to this server
>> we've got next message in the error.log (log level for error log is
>> set to debug):
>> 
>> 2016/06/16 18:07:55 [info] 21742#0: *179610 SSL_do_handshake() failed
>> (SSL: error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 alert bad
>> certificate:SSL alert number 42) while SSL handshaking, client:
>> 10.117.252.168, server: 0.0.0.0:8443
>> 
>> What can cause this message? How to debug it?
> 
> I think that this message (can|does) mean that the far side did not like
> something about your certificate.
> 
> If that is the case - are there any logs on the thing connecting to
> nginx about what it thinks happened in the TLS negotiation?
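
If logs on the connecting side aren't available, the handshake can also be reproduced by hand. A hypothetical invocation (the certificate paths are placeholders for the real client credentials) would be:

```shell
# Attempt the mutual-TLS handshake directly; the alert and the
# "Verify return code" line in the output usually show which side
# rejected what.
openssl s_client -connect 10.117.252.168:8443 \
        -cert client.crt -key client.key -CAfile ca.crt
```
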
> 
> Cheers,
> 
>    f
> -- 
> Francis Daly        francis at daoine.org
> 
> 
> 
> ------------------------------
> 
> Message: 3
> Date: Sun, 19 Jun 2016 11:51:28 +0200
> From: Thomas Glanzmann <thomas at glanzmann.de>
> To: nginx <nginx at nginx.org>
> Subject: Send Strict-Transport-Security header in 401 response
> Message-ID: <20160619095128.GB29903 at glanzmann.de>
> Content-Type: text/plain; charset=us-ascii
> 
> Hello,
> I would like to send the header:
> 
> add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
> 
> Despite the 401 Unauthorized request. Is that possible?
> 
> Currently the header is only added after a successful authorization:
> 
> (x1) [~] curl -v https://tuvl.de
> * Rebuilt URL to: https://tuvl.de/
> * Hostname was NOT found in DNS cache
> *   Trying 2a01:4f8:b0:2fff::2...
> * Connected to tuvl.de (2a01:4f8:b0:2fff::2) port 443 (#0)
> * successfully set certificate verify locations:
> *   CAfile: none
>  CApath: /etc/ssl/certs
> * SSLv3, TLS handshake, Client hello (1):
> * SSLv3, TLS handshake, Server hello (2):
> * SSLv3, TLS handshake, CERT (11):
> * SSLv3, TLS handshake, Server key exchange (12):
> * SSLv3, TLS handshake, Server finished (14):
> * SSLv3, TLS handshake, Client key exchange (16):
> * SSLv3, TLS change cipher, Client hello (1):
> * SSLv3, TLS handshake, Finished (20):
> * SSLv3, TLS change cipher, Client hello (1):
> * SSLv3, TLS handshake, Finished (20):
> * SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
> * Server certificate:
> *        subject: CN=tuvl.de
> *        start date: 2016-06-19 08:39:00 GMT
> *        expire date: 2016-09-17 08:39:00 GMT
> *        subjectAltName: tuvl.de matched
> *        issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
> *        SSL certificate verify ok.
>> GET / HTTP/1.1
>> User-Agent: curl/7.38.0
>> Host: tuvl.de
>> Accept: */*
> < HTTP/1.1 401 Unauthorized
> * Server nginx is not blacklisted
> < Server: nginx
> < Date: Sun, 19 Jun 2016 09:47:40 GMT
> < Content-Type: text/html
> < Content-Length: 188
> < Connection: keep-alive
> < WWW-Authenticate: Basic realm="Virtual Lab"
> <
> <html>
> <head><title>401 Authorization Required</title></head>
> <body bgcolor="white">
> <center><h1>401 Authorization Required</h1></center>
> <hr><center>nginx</center>
> </body>
> </html>
> * Connection #0 to host tuvl.de left intact
> 
> Cheers,
>        Thomas
> 
> 
> 
> ------------------------------
> 
> Message: 4
> Date: Sun, 19 Jun 2016 10:57:34 +0100
> From: Francis Daly <francis at daoine.org>
> To: nginx at nginx.org
> Subject: Re: Send Strict-Transport-Security header in 401 response
> Message-ID: <20160619095734.GK2852 at daoine.org>
> Content-Type: text/plain; charset=us-ascii
> 
> On Sun, Jun 19, 2016 at 11:51:28AM +0200, Thomas Glanzmann wrote:
> 
> Hi there,
> 
>> I would like to send the header:
>> 
>> add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
>> 
>> Despite the 401 Unauthorized request. Is that possible?
> 
> http://nginx.org/r/add_header
> 
> That suggests that you can use an "always" parameter.
> 
> Is that appropriate in this case?
> 
> If not, then possibly the third-party "headers more" module may be useful.
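
Putting Francis's suggestion together with the original directive, the fix would look something like this (untested sketch):

```nginx
# "always" makes nginx add the header to responses with any status
# code, including 401, rather than only 2xx/3xx (requires nginx 1.7.5+).
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```
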
> 
>    f
> -- 
> Francis Daly        francis at daoine.org
> 
> 
> 
> ------------------------------
> 
> Message: 5
> Date: Sun, 19 Jun 2016 16:06:56 +0530
> From: Aahan Krish <krish at aahan.me>
> To: nginx at nginx.org
> Subject: Re: Why set keepalive_timeout to a short period when Nginx is
>    great at handling them?
> Message-ID:
>    <CAFaCOgxWnB150s3n6ZCCP5yrr8dgWH-ec78B5hRvnX7Wnun9_Q at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
> 
> Hi Valentin,
> 
> *(I repeat the same question I put to B.R. as you raised the same
> point.)*
> 
> So you are referring to the 4-tuple (source_IP, source_port,
> server_IP, server_port) socket limitation, correct? I just came to
> know about this and it's interesting. Please tell me if this
> understanding of mine is correct:
> 
>    So a server identifies a user's connection based on a combination
>    of: user's internet connection's IP + port the user's client is
>    connecting from (e.g. Chrome on 8118, IE on 8080, etc.) +
>    server IP + server_port (80 for HTTP / 443 for HTTPS).
> 
>    And the limitation is that a maximum of ~ 65536 clients all on
>    same port (say all are using Chrome and therefore connecting from
>    8118) can connect simultaneously to a web server that is connected
>    to the internet via 1 public IP address and port 80 (let's say
>    HTTP only), IFF the resources of the server permit.
> 
>    And that means I can double the no. of connections (2x 65536 per
>    second) my server can handle, if it has enough resources in the
>    first place (i.e. sufficient RAM, CPU, I/O capacity or whatever
>    is relevant) by simply adding another public IP address to my
>    server and making sure that the traffic is load-balanced between
>    the two public IPs of the server.
> 
> Am I correct?
> 
> If my understanding is correct, this comment was helpful:
> http://stackoverflow.com/q/34079965#comment55913149_34079965
> 
> 
> 
> ------------------------------
> 
> Subject: Digest Footer
> 
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
> 
> ------------------------------
> 
> End of nginx Digest, Vol 80, Issue 18
> *************************************


