input required on proxy_next_upstream

kaustubh nginx-forum at
Tue Feb 21 09:45:09 UTC 2017

Thanks Francis! I was able to verify that the above works.

But the problem is that when we have proxy request buffering off and we try
to send a large file, say 1 GB, it fails with a 502 without trying the next
instance:
  proxy_request_buffering off;
  proxy_http_version 1.1;

The docs say as much:
"When buffering is disabled, the request body is sent to the proxied server
immediately as it is received. In this case, the request cannot be passed to
the next server if nginx already started sending the request body."
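(Per the docs, the limitation goes away if streaming is not required: with
proxy_request_buffering left at its default of "on", nginx keeps the request
body in memory or a temp file and can replay it to the next upstream. A
minimal sketch of that configuration:

  location / {
    proxy_pass http://local;
    # default: proxy_request_buffering on; nginx buffers the body
    # and can resend it when proxy_next_upstream kicks in
    proxy_next_upstream error timeout invalid_header http_502 http_503;
  }

But that loses the streaming behaviour we need, hence the question below.)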

The problem is that we want the best of both worlds: proxy request buffering
off so it works like streaming, plus some way to try the next instance on a
503 from the first.

Any suggestions? Maybe there is a way for nginx to send Expect: 100-continue
to the upstream before starting to send data to it, so that if the expect
fails, it can try the next instance without having already sent (and lost)
any of the body?
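To make the idea concrete, here is a small sketch (in Python, with raw
sockets; the port numbers and helper names are mine, purely illustrative) of
the Expect: 100-continue handshake described above: the client sends only
the headers first, and transmits the body only if the server answers
"100 Continue", so an immediate 503 costs nothing and the body can be
replayed against another backend.

```python
# Sketch of the Expect: 100-continue handshake against a toy HTTP server.
# Ports and function names are illustrative, not part of nginx.
import socket
import threading

def serve_once(port, response):
    """Tiny one-shot server: reads the request headers, then replies
    immediately with `response` without reading any body (like the
    always-503 backend in the config below)."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)

    def run():
        conn, _ = srv.accept()
        buf = b""
        while b"\r\n\r\n" not in buf:   # read up to end of headers only
            buf += conn.recv(4096)
        conn.sendall(response)
        conn.close()
        srv.close()

    threading.Thread(target=run, daemon=True).start()

def put_with_expect(port, body):
    """Send PUT headers with Expect: 100-continue; transmit the body only
    if the server answers 100 Continue. Returns the first status line."""
    s = socket.create_connection(("127.0.0.1", port))
    s.sendall(b"PUT /one HTTP/1.1\r\n"
              b"Host: 127.0.0.1\r\n"
              b"Expect: 100-continue\r\n"
              b"Content-Length: %d\r\n\r\n" % len(body))
    status = s.recv(4096).split(b"\r\n")[0]
    if status.endswith(b"100 Continue"):
        s.sendall(body)                 # server agreed to take the body
        status = s.recv(4096).split(b"\r\n")[0]
    s.close()
    return status                       # on 503, no body bytes were sent

serve_once(18008, b"HTTP/1.1 503 Service Unavailable\r\n"
                  b"Content-Length: 0\r\n\r\n")
print(put_with_expect(18008, b"x" * 1024))
```

The point is that the 503 arrives before any body bytes go out, so the
client (here, hypothetically, nginx) could retry the full body on the next
upstream. As far as I can tell nginx does not do this handshake toward
upstreams today, which is what prompts the question.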

Here is the error, with the nginx.conf and the command where it fails:

http {
  client_max_body_size 5G;

  upstream local {
    # the two test backends defined below
    server 127.0.0.1:8008;
    server 127.0.0.1:8009;
  }

  server {
    listen 80;
    location / {
      proxy_pass http://local;
      proxy_next_upstream error timeout invalid_header http_502 http_503;
      proxy_http_version 1.1;
      proxy_request_buffering off;
    }
  }

  server {
    listen 8008;
    access_log /var/log/nginx/503.log combined;
    return 503;
  }

  server {
    listen 8009;
    access_log /var/log/nginx/200.log combined;
    return 200 "Got $request on port 8009\n";
  }
}
$ cat 1gb.img | curl -H "Expect:" -v -T -  

* About to connect() to port 80 (#0)
*   Trying
* Connected to ( port 80 (#0)
> PUT /one HTTP/1.1
> User-Agent: curl/7.29.0
> Host:
> Accept: */*
> Transfer-Encoding: chunked
< HTTP/1.1 502 Bad Gateway
< Server: nginx/1.9.15
< Date: Tue, 21 Feb 2017 09:23:09 GMT
< Content-Type: text/html
< Content-Length: 173
< Connection: keep-alive
* HTTP error before end of send, stop sending
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
* Closing connection 0

Posted at Nginx Forum:,272440,272543#msg-272543
