Reverse Proxy with 500k connections
tolga.ceylan at gmail.com
Wed Mar 8 18:45:54 UTC 2017
Is IP_BIND_ADDRESS_NO_PORT the best solution for the OP's case? Unlike the
blog post, which has two backends, the OP's case has a single backend
server. If any of the hash slots exceeds the 65k port limit, there's no
way to recover: even though the proxy has enough total port capacity, a
client will receive an error if its ip/port hashes to a full slot.
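For reference, IP_BIND_ADDRESS_NO_PORT (Linux 4.2+) tells the kernel to defer source-port selection from bind() to connect(), so a port is chosen per (source IP, destination) four-tuple rather than reserved globally at bind time. A minimal sketch, assuming Linux (the fallback constant 24 is the Linux value, since not every Python build exposes the symbol):

```python
import socket

# IP_BIND_ADDRESS_NO_PORT is Linux-specific; fall back to the raw
# kernel constant (24 on Linux) if this Python build lacks the symbol.
NO_PORT = getattr(socket, "IP_BIND_ADDRESS_NO_PORT", 24)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_IP, NO_PORT, 1)

# bind() fixes the source IP but, with the option set, does NOT
# allocate an ephemeral port yet -- getsockname() still reports port 0.
# The kernel picks the port later, at connect() time.
s.bind(("127.0.0.1", 0))
print(s.getsockname()[1])
s.close()
```

Because the port is allocated at connect() time with the destination known, the same local port can be reused toward different destinations, which is what makes it attractive for port exhaustion.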
IMHO, picking the bind IP based on a client ip/port hash is not very
robust in this case, since you can't really make sure you are directing
10% of the traffic to each slot. This solution also does not account for
long-lived connections (websockets), so the hash slots can drift out of
balance.
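For context, the approach under discussion (as I understand it from the blog post) hashes the client address with split_clients to pick a bind address, roughly like this (the 10.0.0.x addresses and upstream name are placeholders, and note that proxy_bind only accepts variables in newer nginx releases):

```nginx
# Hash the client address/port into ten buckets, one per source IP.
# Each bucket gets ~10% of *new* connections -- but nothing rebalances
# long-lived websocket connections once they land in a bucket.
split_clients "$remote_addr$remote_port" $bind_ip {
    10% 10.0.0.1;
    10% 10.0.0.2;
    # ... eight more addresses ...
    *   10.0.0.10;
}

server {
    listen 80;
    location / {
        proxy_bind $bind_ip;
        proxy_pass http://backend;
    }
}
```

This is exactly the weakness above: a bucket is a static 10% slice of the hash space, not a measure of how many ports are actually in use behind each source IP.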
On Wed, Mar 8, 2017 at 3:20 AM, Maxim Konovalov <maxim at nginx.com> wrote:
> On 3/7/17 10:50 PM, larsg wrote:
>> we are operating native nginx 1.8.1 on RHEL as a reverse proxy.
>> The nginx routes requests to a backend server that can be reached from the
>> proxy via a single internal IP address.
>> We have to support a large number of concurrent websocket connections - say
>> 100k to 500k.
>> As we don't want to increase the number of proxy instances (with different
>> IPs) and we cannot use the "proxy_bind transparent" option (it was
>> introduced in a later nginx release, and an upgrade is not possible), we
>> wanted to configure nginx to use different source IPs when routing to the
>> backend. That is, we want nginx to select an available source IP + source
>> port when a connection is established with the backend.
>> For that we assigned ten internal IPs to the proxy server and used the
>> proxy_bind directive bound to 0.0.0.0.
>> But this approach does not seem to work. The nginx instance seems to
>> always use the first IP as the source IP.
>> Using multiple proxy_bind's is not possible.
>> So my question is: How can I configure nginx to select from a pool of
>> source IPs? Or, more generally: how can we overcome the 64k problem?
> We even wrote a blog post for you!
> As a side note: I'd really encourage all of you to add our blog RSS
> to your feeds. While there is some marketing "noise", we are still
> trying to make it useful for tech people too.
> Maxim Konovalov
> nginx mailing list
> nginx at nginx.org