How to avoid sending incomplete request data to backend if 499 error
feanorknd
nginx-forum at forum.nginx.org
Tue Nov 19 15:29:37 UTC 2019
Hello...
A few days ago I ran into this problem... let me explain with some log lines:
X.X.X.X - - [16/Nov/2019:04:36:17 +0100] "POST /api/budgets/new HTTP/2.0"
200 2239 "----" "Mozilla/5.0 (iPhone; CPU iPhone OS 13_2 like Mac OS X)
AppleWebKit/605.1.15 (KHTML, like Gecko) GSA/86.0.276299193 Mobile/15E148
Safari/605.1" Exec: "2.190" Conn: "10" Upstream Time: "2.185" Upstream
Status: "200"
X.X.X.X - - [16/Nov/2019:04:36:55 +0100] "POST /api/budgets/new HTTP/2.0"
499 0 "----" "Mozilla/5.0 (iPhone; CPU iPhone OS 13_2 like Mac OS X)
AppleWebKit/605.1.15 (KHTML, like Gecko) GSA/86.0.276299193 Mobile/15E148
Safari/605.1" Exec: "0.147" Conn: "1" Upstream Time: "0.142" Upstream
Status: "-"
In the first line there is nothing of interest... the POST request completed
just fine.
In the second request there was a client disconnection and the POST request
was not complete, as shown by the logged 499 status.
The problem was:
- the incomplete POST data was somehow sent from nginx to the backend fastcgi
server.
- the backend code processed that incomplete request data and generated a
corrupt entry in a database... but that is another story.
I need NGINX not to behave like this. If the request data is not complete
and the connection timed out, logging a 499, I want NGINX to discard that
request completely instead of sending incomplete data to the fastcgi
backend.
I guess there would be two ways:
- the nginx core buffers the client request and discards it completely if it
does not finish correctly (499).
- the nginx fastcgi module buffers the client request and discards it
completely if it does not finish correctly (499).
But I do not know how to configure this; a rough idea of what I mean is
sketched below.
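This is roughly the kind of location block I have in mind (the location path,
the upstream name "backend" and the buffer sizes are only placeholders, not my
real configuration):

    location /api/ {
        include       fastcgi_params;
        fastcgi_pass  backend;            # placeholder upstream name

        # Buffer the entire client body before talking to the backend
        # (the documented default for this directive is already "on").
        fastcgi_request_buffering on;

        # Body buffering limits; the values here are just examples.
        client_body_buffer_size 128k;
        client_max_body_size    8m;
    }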
Even "fastcgi_request_buffering on" is supposed to be default, but in this
case, incomplete request was sent to backend generating an execution of code
with corrupt data.
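In case it helps with the diagnosis, I am thinking of extending the log format
with something like the following (just an idea on my part; the format name and
log path are placeholders), so incomplete bodies are easier to spot by
comparing the declared Content-Length with what was actually received:

    # Goes in the http{} context.
    log_format postdebug '$remote_addr - $remote_user [$time_local] '
                         '"$request" $status $body_bytes_sent '
                         'Completion: "$request_completion" '
                         'Content-Length: "$content_length" '
                         'Request-Length: "$request_length"';

    access_log /var/log/nginx/access_postdebug.log postdebug;

$request_completion stays empty when the request did not complete, and a
$request_length smaller than $content_length would point to a truncated body.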
Is there a way to discard incomplete requests when a client disconnect
happens, before passing them to the backends?
Thanks to all!
--
Gino
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286264,286264#msg-286264