Multiple Upstream Servers Result in Error 310 (net::ERR_TOO_MANY_REDIRECTS)
Francis Daly
francis at daoine.org
Tue Mar 6 22:06:07 UTC 2012
On Tue, Mar 06, 2012 at 04:08:44PM -0500, mevans336 wrote:
Hi there,
> I have an Nginx reverse-proxy sitting in front of two JBoss servers. If
> I attempt to add the second JBoss server to the upstream directive, I
> receive an Error 310 (net::ERR_TOO_MANY_REDIRECTS) from Chrome, IE just
> displays an "Internet Explorer cannot display this webpage" error.
If you test using something like "curl", you'll see exactly what the
response is, and you can compare it to what you expect it to be. That
tends to make things easier to follow.
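For example, something like

curl -ik  https://my.domain.com/
curl -ikL http://my.domain.com/

(-i shows the response headers, -k skips certificate verification if you
are testing against a self-signed cert, -L follows redirects) lets you
watch the whole redirect chain from the command line.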
> If I
> comment the second upstream server out and kill -HUP the master process,
> everything works properly.
>
> I am attempting to use Nginx to rewrite all HTTP requests to HTTPS and
> upon first contact with our domain, the JBoss application redirects you
> to https://my.domain.com/home.
What does the JBoss application understand by the phrase "first
contact"? I suspect that that's going to be related to the fix.
> upstream jboss_dev_servers {
> server 10.0.3.15:8080;
> server 10.0.3.16:8080;
> }
>
> server {
> listen 10.0.3.28:80;
> server_name my.domain.com;
> location / {
> proxy_set_header Host $host;
> proxy_set_header X-Real-IP $remote_addr;
> proxy_next_upstream error timeout invalid_header;
> rewrite ^ https://$server_name$request_uri? permanent;
> proxy_pass http://jboss_dev_servers;
The "rewrite" there means that the "proxy_pass" will never be used.
curl -i http://my.domain.com/
should show a http 301 redirect to https://my.domain.com/, so that's
what the client will ask for next.
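As an aside: since every plain-http request is redirected anyway, the
port-80 server block only needs the rewrite; the proxy_* directives in it
are never reached. A minimal sketch (untested, keeping your names):

server {
    listen      10.0.3.28:80;
    server_name my.domain.com;

    # every plain-http request becomes a permanent redirect to https
    rewrite ^ https://$server_name$request_uri? permanent;
}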
> server {
> listen 10.0.3.28:443 default ssl;
> location / {
> proxy_set_header Host $host;
> proxy_set_header X-Real-IP $remote_addr;
> proxy_next_upstream error timeout invalid_header;
> proxy_pass http://jboss_dev_servers;
curl -i https://my.domain.com/
should get here, and be proxied to a jboss server. If that server thinks
that this is "first contact", it will issue a http redirect to
https://my.domain.com/home.
If you modify each jboss server to add an "I'm #1" header, you'll see
which jboss server you just contacted.
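(If changing the jboss side is awkward, you could instead ask nginx to
report which upstream answered, with something like

add_header X-Upstream $upstream_addr;

inside the proxied location; $upstream_addr holds the address of the
backend that actually handled the request. Untested here, but that is the
general idea.)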
The next thing the client will do is
curl -i https://my.domain.com/home
which will be proxied to a jboss server. If that server thinks
that this is "first contact", it will issue a http redirect to
https://my.domain.com/home, at which point the client can reasonably think
"redirect loop; this is broken".
If that server thinks that this isn't "first contact", it will presumably
return http 200 with some useful content.
Which do you see when you test it like this?
And in this second response, do you see an "I'm #1" or an "I'm #2" header?
(You could look in logs to know which upstream server you were eventually
talking to, but a header is more immediate.)
> Any ideas what could be causing this? Is it bouncing the requests back
> and forth between the two upstream servers somehow?
My guess is that "first contact" information is not shared between the
two jboss servers, so if serial requests from a single client don't
always go to the same server, bad things happen.
You should be able to see whether or not that is the case.
With only one jboss server configured, all requests must go to it,
and things should work, as you reported.
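If lack of shared "first contact" state does turn out to be the problem,
and the jboss servers can't easily share sessions, one common workaround
on the nginx side is to pin each client to one backend, for example
(untested sketch):

upstream jboss_dev_servers {
    ip_hash;               # same client address always goes to the same server
    server 10.0.3.15:8080;
    server 10.0.3.16:8080;
}

so that serial requests from one client keep hitting the same jboss server.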
All the best,
f
--
Francis Daly francis at daoine.org