post_action, just send http request, not fcgi - more questions + post_action bug?

Rob Mueller robm at fastmail.fm
Fri Mar 7 06:17:37 MSK 2008


> I thought something hacky like this might work.
>
>    location = @done {
>      set    $rateuser     $upstream_http_x_rate_user;
>      proxy_set_header RateUser $rateuser;
>      proxy_set_header RateURI  $request_uri;
>      proxy_set_header RateBytes $body_bytes_sent;
>      proxy_pass http://127.0.0.1:2350;
>    }
>
> And indeed for GET requests it does nicely: I get the headers I want and 
> can quickly and easily decode them. In fact I don't really need to set 
> RateURI, since the first line of the request gives me the URI.
>
> If I do a POST though, nginx isn't happy.

Playing around some more, I found I could do this:

    location @done {
      set    $rateuser     $upstream_http_x_rate_user;
      proxy_set_header RateUser $rateuser;
      proxy_set_header RateURI  $request_uri;
      proxy_set_header RateBytes $body_bytes_sent;
      proxy_pass_request_body off;
      proxy_pass_request_headers off;
      proxy_pass http://unix:/var/state/ratetrack/ratepostaction:/;
    }

Adding "proxy_pass_request_body off" makes everything work nicely, and I 
added "proxy_pass_request_headers off" because I don't need the original 
request headers, so I might as well not send them. So this all seems to work.
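On the receiving end, the ratepostaction backend only needs to read the request line and the Rate* headers that the proxy_set_header lines above attach. A minimal sketch of that decoding step in Python (the helper name is mine, not from any real code):

```python
def parse_rate_request(raw):
    """Parse the accounting request nginx's proxy_pass sends to the
    @done backend; return the three Rate* headers as a dict."""
    lines = raw.decode("ascii").split("\r\n")
    headers = {}
    for line in lines[1:]:          # skip the request line
        if not line:                # blank line ends the header block
            break
        name, _, value = line.partition(": ")
        headers[name] = value
    return {
        "user": headers.get("RateUser"),
        "uri": headers.get("RateURI"),
        "bytes": int(headers.get("RateBytes", 0)),
    }
```

Feeding it the request nginx sends (as seen in the strace later in this message) yields the user, URI and byte count directly.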

So is this the best/right way of doing this? It currently seems to work and 
to do what I want. However, I realise that I'm abusing the "proxy" system 
for something it wasn't designed to do, so I don't know if this will break 
with future versions, or if there is a better way of doing it.

Also, what actually happens to the response body that proxy_pass returns in 
this case? I really want it to just "disappear", and for the moment it does 
seem to, but is that guaranteed?

On top of that, I'm also worried about something else. I did some testing 
where I set up the following situation:

1. Only 1 nginx worker process
2. Many backend post_action "http done handler" processes
3. The "http done handler" would sleep 30 seconds before returning the HTTP 
response

I wanted to verify that if the "http done handler" code I wrote got slow 
for some reason, it wouldn't stall all of nginx.

What I found was that the first request worked, but if I made a second 
request within 30 seconds, it would block (browser just spinning) until the 
30 seconds expired. It seems anything that happens in the post_action 
handler blocks any new connections from being processed. And as mentioned, 
it's not because there weren't any free "http done handler" processes.

I repeated the process, and straced nginx to see what was happening.

### about to do a request

epoll_wait(11, {{EPOLLIN, {u32=3081252872, u64=3081252872}}}, 512, -1) = 1
gettimeofday({1204859394, 326281}, NULL) = 0
accept(5, {sa_family=AF_INET, sin_port=htons(2934), 
sin_addr=inet_addr("192.168.110.1")}, [16]) = 3

### all the proxying to/from the backend to the client ...

connect(4, {sa_family=AF_FILE, path="/var/state/ratetrack/ratepostaction"}, 
110) = 0
getsockopt(4, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
writev(4, [{"GET / HTTP/1.0\r\nRateUser: testuser\r\nRateURI: 
/testdir/\r\nRateBytes: 739\r\nHost: localhost\r\nConnection: 
close\r\n\r\n", xyz}], 1) = xyz
epoll_wait(11, {{EPOLLOUT, {u32=3081253209, u64=13233881762136338777}}}, 
512, 60000) = 1
gettimeofday({1204859394, 498342}, NULL) = 0
epoll_wait(11,

### got here and would have waited 30 seconds for backend to respond, but 
before that happened I did another web request with the browser to the same 
uri...

{{EPOLLIN|EPOLLOUT, {u32=3081253125, u64=13831246206168740101}}}, 512, 
59972) = 1
gettimeofday({1204859397, 533816}, NULL) = 0
epoll_wait(11,

### nginx makes no attempt to handle the new request. It just keeps waiting 
for the backend post_action handler to respond and finish; only then does it 
handle the new request

This seems like a bug to me. A post_action handler shouldn't be able to 
block the handling of new connections to nginx.
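The symptom can be modelled in miniature. This is a deliberately simplified single-threaded dispatcher, not nginx's real event machinery: if the loop handles events strictly one at a time and the post_action step waits synchronously, a newly accepted connection just sits in the queue until that step finishes.

```python
import time

def run_loop(events, handlers):
    """Toy single-threaded event loop: handle one ready event at a
    time. If a handler blocks (like a synchronous wait on the
    post_action backend), every later event must wait for it."""
    log = []
    start = time.monotonic()
    for ev in events:   # epoll_wait would hand these back as they fire
        handlers[ev]()
        log.append((ev, time.monotonic() - start))
    return log

def slow_post_action():
    time.sleep(0.3)     # stand-in for the 30-second backend sleep

def new_connection():
    pass                # would be accept() plus request handling

log = run_loop(["post_action", "accept"],
               {"post_action": slow_post_action,
                "accept": new_connection})
# the "accept" event is only handled after the slow handler returns
```

That delayed second timestamp is exactly the spinning browser from the test above.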

Rob





