proxy_set_header variable evaluation
Arvind Jayaprakash
arvind at mkhoj.com
Thu Aug 27 08:34:23 MSD 2009
On Aug 26, Maxim Dounin wrote:
>Hello!
>
>On Thu, Aug 27, 2009 at 12:04:33AM +0530, Arvind Jayaprakash wrote:
>
>> I am trying to do some tricks with upstream+proxy and ran into what
>> seems like a limitation of the proxy_set_header feature.
>>
>> When an upstream's response triggers resending the request to the next
>> upstream, I was hoping $upstream_response_time would be available with
>> data about what happened in the previous upstreams. I'm trying to pass
>> it using the following directive:
>>
>> proxy_set_header X-retry1 $upstream_response_time;
>>
>> It looks like the variable is always unavailable (even when nginx
>> tries to access the second upstream for the same request). However, the
>> value seems to be available during the logging phase.
>
>This won't work because the request to the backend server is created only
>once - nginx then simply tries to send the same request to the
>available upstreams.
>
>You may want to use an error_page fallback instead of
>proxy_next_upstream; this will allow you to recreate the request with
>new headers. Note that you have to save $upstream_response_time to
>some temporary variable before the upstream module starts again, as
>it will reset all related data. E.g.
>
> upstream backend {
>     server 192.168.1.1:80;
>     server 192.168.1.2:80;
>     ...
> }
>
> server {
>     ...
>
>     proxy_next_upstream off;
>
>     location / {
>         error_page 502 504 = @fallback;
>         proxy_pass http://backend;
>     }
>
>     location @fallback {
>         set $blah $upstream_response_time;
>         proxy_pass http://backend;
>         proxy_set_header X-Blah $blah;
>     }
> }
>
>Note that you may use an arbitrary fallback chain this way (including
>the use of spare backends and so on).
This approach has the problem that all the upstream functionality, such as
max_fails, weight, etc., is lost.
Is the proxy module the right place to start hacking if I want to create
a new version of proxy_set_header where the variable gets re-evaluated
each time a new upstream is selected for a given request?