Nginx Proxy seems to send twice the same request to the backend

Arnaud Le-roy sdnetwork at
Sun Jul 9 21:26:47 UTC 2017


I encountered a strange behaviour with nginx: my backend seems to receive the same request twice from the nginx proxy. To be sure it's not the client sending two requests, I added a UUID parameter to each request.

When the problem occurs, I find one successful request in the nginx access.log:

x.x.x.x - - [09/Jul/2017:09:18:33 +0200] "GET /query?uid=b85cc8a4-b9cd-4093-aea5-95c0ea1391a6_428 HTTP/1.1" 200 2 "-" "-"

and another one that generates this entry in error.log:

2017/07/09 09:18:31 [error] 38111#38111: *4098505 upstream prematurely closed connection while reading response header from upstream, client: x.x.x.x, server:, request: "GET /query?uid=b85cc8a4-b9cd-4093-aea5-95c0ea1391a6_428 HTTP/1.1", upstream: "", host: ""

On my backend I can see two requests with the same UUID (both succeed):

{"pid":11424,"level":"info","message":"[API] AUTH1 /query?uid=b85cc8a4-b9cd-4093-aea5-95c0ea1391a6_428","timestamp":"2017-07-09 09:18:31.861Z"}
{"pid":11424,"level":"info","message":"[API] AUTH1 /query?uid=b85cc8a4-b9cd-4093-aea5-95c0ea1391a6_428","timestamp":"2017-07-09 09:18:33.196Z"}

The client is a Node.js program, so I'm sure it sends only one request per UUID (no thread problem ;).
nginx serves as a simple proxy (no load balancing).


user www-data;
worker_processes  8;
worker_rlimit_nofile 8192;
pid /run/;

events {
    worker_connections 1024;
    # multi_accept on;
}

http {
    upstream api {
        # (backend server entries not shown in the original post)
        keepalive 100;
    }

    include       mime.types;
    default_type  application/octet-stream;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    proxy_buffering    off;
    proxy_buffer_size  128k;
    proxy_buffers 100  128k;
    proxy_http_version 1.1;
    ### timeouts ###
    resolver_timeout        6;
    client_header_timeout   30;
    client_body_timeout     600;
    send_timeout            10;
    keepalive_timeout       65 20;
    proxy_read_timeout      600;

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        include /etc/nginx/nginx_ssl.conf;
        client_max_body_size 200M;

        location / {
            proxy_next_upstream off;
            proxy_pass http://api;
            proxy_redirect http://api/ https://$host/;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Connection "";
        }
    }
}

The backend is a simple Node.js server.

The problem occurs randomly; it definitely happens on nginx/1.10.3 and nginx/1.13.2 on Debian Jessie.

After some days of research, I found that if I remove `keepalive 100;` from the upstream configuration the problem no longer occurs, but I don't understand why. Can somebody explain what could happen? Maybe I'm misunderstanding some keepalive configuration?

To me this looks like a problem in nginx. If you can't explain it from this information, I can send you some debug (nginx-debug) logs.

Thanks in advance.

More information about the nginx mailing list