Buffering issues with nginx

Dan34 nginx-forum at forum.nginx.org
Fri Jul 21 11:02:07 UTC 2017


> Depending on the compromises you are willing to make, to accuracy or
> convenience, you may be able to come up with something good enough.

I have a more or less working solution. nginx breaks it and I'm trying to
figure out how to fix it.


> Yes. That is (part of) what a proxy does. Even without nginx as a
> reverse-proxy, your client might be talking through one or more proxy
> servers. You will never know whether your response got to the actual
> end client, without some extra verification step that only the end
> client does.

I don't care about the case where there are other proxies, I care about
the bytes that left my server. Specifically, bytes that left my server
and were ACKed by the next hop (either the end client or some proxy in
between). Verification isn't an option.

> > When I updated some of these buffering configs things improved, but
> > still were failing with smaller uploads that are still fully
> > buffered by nginx.

> proxy_buffers and proxy_buffer_size can be tuned (lowered, in this case,
> probably) to slow down nginx's receive-rate from your upstream.
> 
> If you can show one working configuration with a description of how
> it does not do what you want it to do, possibly someone can offer some
> advice on what to change.

I tried proxy_buffering off; and it didn't make a difference. I'm fairly
confident that it's a bug in nginx, or some "feature" that doesn't get
disabled by any config.

Here's the full config that I use:

    location / {
        proxy_pass http://localhost:80;
        #proxy_http_version 1.1;
        #proxy_http_version 1.0;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        #proxy_set_header Upgrade $http_upgrade;
        #proxy_set_header Connection 'upgrade';
        #proxy_set_header Connection 'close';
        proxy_buffering off;
        proxy_request_buffering off;
        #proxy_buffer_size 4k;
        #proxy_buffers 3 4k;
        proxy_no_cache 1;
        proxy_set_header Host $host;
        #proxy_cache_bypass $http_upgrade;
        proxy_hide_header X-Powered-By;
        proxy_max_temp_file_size 0;
    }

I run nginx on 8080 for testing, since it's not suitable for live use
on 80 in my case, and I'm trying to figure out how to fix it.
And here's why I believe that there is a bug.

In my case, I wrote test code on the node side that serves some binary
content. I can control the speed at which node serves this content. On
the receiving end (on the other side of the planet) I use wget with
--limit-rate. In the test that I'm trying to fix, I send 5MB from nodejs
at 20KB/s, and the client that requests that binary data reads it at
10KB/s. Obviously, the overall speed has to be 10KB/s, since it's
limited by the client that requests the data.
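
For reference, the node side looks roughly like this (a simplified
sketch, not my exact test code; the port matches the proxy_pass above
and the chunk/payload sizes match the numbers in this test):

    // throttled-server.js -- serves a 5MB binary body at roughly 20KB/s
    // by writing one 20KB chunk per second.
    const http = require('http');

    const TOTAL = 5 * 1024 * 1024;   // 5MB payload
    const CHUNK = 20 * 1024;         // 20KB per tick => ~20KB/s

    http.createServer((req, res) => {
      res.writeHead(200, {
        'Content-Type': 'application/octet-stream',
        'Content-Length': TOTAL,
      });

      let sent = 0;
      const timer = setInterval(() => {
        const size = Math.min(CHUNK, TOTAL - sent);
        res.write(Buffer.alloc(size));  // zero-filled data is enough for the test
        sent += size;
        if (sent >= TOTAL) {
          clearInterval(timer);
          res.end();
        }
      }, 1000);

      // stop writing if the connection goes away before we're done
      res.on('close', () => clearInterval(timer));
    }).listen(80);  // the proxy_pass above points at localhost:80

And on the receiving end, something like (hostname and path are made
up; 8080 is the test port mentioned above):

    wget --limit-rate=10k -O /dev/null http://my-server.example:8080/bigfile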

What happens is that the entire connection from nginx to node is closed
as soon as node has sent all of the data to nginx. Basically, in my test
5MB should take approximately 500s to deliver, but node sees the TCP
connection closed about 255s from the start (when there are still about
250 seconds to go and 2.5MB is still stuck on the nginx side). So, no
matter what I do, nginx totally breaks my scenario; it does not obey any
of these configs and still buffers 2.5MB.
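
The way I notice the close on the node side is just a listener in the
request handler of the sketch above; roughly (again illustrative, not
the exact code):

    // inside the request handler, before starting the interval timer
    const started = Date.now();
    res.on('close', () => {
      const elapsed = Math.round((Date.now() - started) / 1000);
      // bytesWritten counts everything handed to the kernel on this
      // socket, headers included, so it's approximate
      console.log('connection from nginx closed after ' + elapsed + 's, '
                  + req.socket.bytesWritten + ' bytes written');
    });

In this test that listener fires at around 255s, while wget still has
roughly 2.5MB left to download.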

Just in case the nginx devs ever read this: I'm running nginx 1.12.1 on Ubuntu.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275526,275605#msg-275605


