Static or dynamic content

Jens Dueholm Christensen JEDC at ramboll.com
Thu Oct 20 09:19:29 UTC 2016


On Tuesday, October 18, 2016 08:28 AM Francis Daly wrote,

> So: a POST for /x will be handled in @xact, which will return 503,
> which will be handled in @error_503, which will be rewritten to a POST
> for /error503.html which will be sent to the file error/error503.html,
> which will return a 405.
>
> Is that what you see?

Yes - per your comments later in your reply about internal redirects and the debug log, I enabled the debug log, which confirms it (several lines have been removed from the following snippet, but it's pretty clear):

---
2016/10/20 10:23:45 [debug] 8408#2492: *1 http upstream request: "/2?"
2016/10/20 10:23:45 [debug] 8408#2492: *1 http proxy status 503 "503 Service Unavailable"
2016/10/20 10:23:45 [debug] 8408#2492: *1 finalize http upstream request: 503
2016/10/20 10:23:45 [debug] 8408#2492: *1 http special response: 503, "/2?"
2016/10/20 10:23:45 [debug] 8408#2492: *1 test location: "@error_503"
2016/10/20 10:23:45 [debug] 8408#2492: *1 using location: @error_503 "/2?"
2016/10/20 10:23:45 [notice] 8408#2492: *1 "^(.*)$" matches "/2" while sending to client, client: 127.0.0.1, server: localhost, request: "POST /2 HTTP/1.1", upstream: "http://127.0.0.1:4431/2", host: "localhost"
2016/10/20 10:23:45 [debug] 8408#2492: *1 http script copy: "/error503.html"
2016/10/20 10:23:45 [debug] 8408#2492: *1 http script regex end
2016/10/20 10:23:45 [notice] 8408#2492: *1 rewritten data: "/error503.html", args: "" while sending to client, client: 127.0.0.1, server: localhost, request: "POST /2 HTTP/1.1", upstream: "http://127.0.0.1:4431/2", host: "localhost"
2016/10/20 10:23:45 [debug] 8408#2492: *1 http finalize request: 405, "/error503.html?" a:1, c:2
2016/10/20 10:23:45 [debug] 8408#2492: *1 http special response: 405, "/error503.html?"
2016/10/20 10:23:45 [debug] 8408#2492: *1 HTTP/1.1 405 Not Allowed
Server: nginx/1.8.0
Date: Thu, 20 Oct 2016 08:23:45 GMT
Content-Type: text/html
Content-Length: 172
Connection: keep-alive
---

> Two sets of questions remain:

> what output do you get when you use the test config in the earlier mail?

Alas, I have not tried that config yet, but I would assume that my tests would show exactly the same as yours - should I try it, or is it purely academic?

> what output do you want?

> That last one is probably "http 503 with the content of *this* file";
> and is probably completely obvious to you; but I don't think it has been
> explicitly stated here, and it is better to know than to try guessing.

100% correct.
If upstream returns a 503 or 404, I would like the contents of the corresponding error_page (for 404 or 503) returned to the client, regardless of the HTTP request method used.
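For context, the relevant part of my config boils down to roughly this (simplified and reconstructed from memory, so names and paths are approximate; the upstream is haproxy on 127.0.0.1:4431, as seen in the log above):

```nginx
location / {
    proxy_pass http://127.0.0.1:4431;    # haproxy frontend
    proxy_intercept_errors on;           # let error_page handle upstream 404/503
}

error_page 503 @error_503;

location @error_503 {
    root error;                          # directory holding error/error503.html
    rewrite ^(.*)$ /error503.html break;
}
```

It is the internal redirect out of @error_503 that ends up as a POST against the static file and produces the 405.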

> If you remove the error_page 503 part or the proxy_intercept_errors part,
> does the expected http status code get to your client?

Yes!

> I think that the nginx handling of subrequests from a POST for error
> handling is a bit awkward here. But until someone who cares comes up with
> an elegant and consistent alternative, I expect that it will remain as-is.

Alas...

> Possibly in your case you could convert the POST to a GET by using
> proxy_method and proxy_pass within your error_page location.

> That also feels inelegant, but may give the output that you want.

Yes - similar "solutions" like this one (http://leandroardissone.com/post/19690882654/nginx-405-not-allowed ) and others are IMO really ugly, and they make the config file harder to understand and maintain over time.

The "best" (but still ugly!) version I could find is where I catch the 405 error inside the @error_503 location (as described in the answer to this question http://stackoverflow.com/questions/16180947/return-503-for-post-request-in-nginx ), but I dislike the use of if and $request_filename in that solution - and it still doesn't make for easy understanding.


How would you suggest I use proxy_method and proxy_pass within the @error_503 location?
I'm coming up short on how to do that without resending the POST as a GET to upstream - a new request that could now potentially succeed, since a haproxy backend server could become available between the moment the POST failed and the moment the request is retried as a GET.
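The only variant I can picture that never touches the haproxy upstream again is proxying the error-page request back to nginx itself on a separate local listener - an untested sketch, with the 8080 listener and paths being my own assumptions:

```nginx
location @error_503 {
    rewrite ^ /error503.html break;
    proxy_method GET;                    # retry as GET, but only against ourselves
    proxy_pass http://127.0.0.1:8080;    # assumed local nginx listener, NOT haproxy
}

# Separate server block that only serves the static error pages:
server {
    listen 127.0.0.1:8080;
    root error;
}
```

Since "error_page 503 @error_503;" has no "=", I would expect nginx to keep the 503 status while taking the body from that internal GET - but I haven't verified that, and it still feels like a workaround rather than a fix.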


Regards,
Jens Dueholm Christensen


