nginx prepends extraneous "HTTP/1.1 100 Continue" headers to SSI responses
piespy
piespy at gmail.com
Fri Dec 26 02:00:47 MSK 2008
When using SSI to handle POST requests submitted with an "Expect:
100-continue" header (which Opera likes to send for >10KB of post
data), nginx sends "HTTP/1.1 100 Continue" for the main request, *and*
again for each SSI subrequest that has wait="yes", breaking the chunked
transfer encoding and causing the browser to terminate the connection
prematurely.
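For what it's worth, here is roughly where the extra line seems to come
from. If I read the source correctly, the interim reply is sent by
ngx_http_test_expect() in src/http/ngx_http_request_body.c, and the
only thing suppressing a repeat is the per-request expect_tested flag.
An abridged sketch (the omissions and comments are mine, so treat the
details as my reading of the 0.7.x source rather than an exact quote):
----
/* abridged from src/http/ngx_http_request_body.c: error handling and
 * debug logging dropped; the reply is gated only by the per-request
 * expect_tested flag */

static ngx_int_t
ngx_http_test_expect(ngx_http_request_t *r)
{
    ngx_str_t  *expect;

    if (r->expect_tested
        || r->headers_in.expect == NULL
        || r->http_version < NGX_HTTP_VERSION_11)
    {
        return NGX_OK;
    }

    r->expect_tested = 1;

    expect = &r->headers_in.expect->value;

    if (expect->len != 12
        || ngx_strncasecmp(expect->data, (u_char *) "100-continue", 12) != 0)
    {
        return NGX_OK;
    }

    /* written directly to the connection, bypassing the output filters */
    (void) r->connection->send(r->connection,
                               (u_char *) "HTTP/1.1 100 Continue" CRLF CRLF,
                               sizeof("HTTP/1.1 100 Continue" CRLF CRLF) - 1);

    return NGX_OK;
}
----
Since ngx_http_subrequest() copies headers_in (including the Expect
header) into a freshly zero-initialized request struct, each waited-for
subrequest appears to pass this test again and writes another interim
status line straight into the middle of the already-started chunked
response, which matches what I see on the wire.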
I've been able to fix it, at least for my purposes, by setting
sr->expect_tested in ngx_http_subrequest(). I can't tell whether that's
really a proper fix, or whether a subrequest might actually need to send
"HTTP/1.1 100 Continue" in some cases. (If so, the real fix would
probably require associating the expect_tested variable with the
response rather than with any individual request.)
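Concretely, the change I made amounts to this (against 0.7.30's
ngx_http_subrequest() in src/http/ngx_http_core_module.c; the
surrounding lines are abridged and the exact spot within the function
probably doesn't matter much):
----
ngx_int_t
ngx_http_subrequest(ngx_http_request_t *r, ngx_str_t *uri, ngx_str_t *args,
    ngx_http_request_t **psr, ngx_http_post_subrequest_t *ps,
    ngx_uint_t flags)
{
    ...

    sr->headers_in = r->headers_in;

    /* the client's Expect has already been answered (or deliberately
     * ignored) for the main request, so never emit another
     * "HTTP/1.1 100 Continue" from a subrequest */
    sr->expect_tested = 1;

    ...
}
----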
It's probably an extremely rare use case and I'm happy with my fix,
but I thought I'd report the problem here anyway.
I've been able to make a simplified test case. For these request headers:
----
$ cat test.req
POST /test/ssi.php HTTP/1.1
Host: localhost
Connection: Keep-Alive
Expect: 100-continue
Content-Length: 0
----
nginx generates this broken reply:
----
$ nc localhost 8001 < test.req
HTTP/1.1 100 Continue
HTTP/1.1 100 Continue
HTTP/1.1 200 OK
Server: nginx/0.7.30
Date: Thu, 25 Dec 2008 22:11:23 GMT
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
4
Pre
3
One
HTTP/1.1 100 Continue
9
Mid-one
3
Two
HTTP/1.1 100 Continue
9
Mid-two
5
Three
6
Post
0
----
Obviously that's malformed chunked transfer encoding: nginx emits an
extraneous "HTTP/1.1 100 Continue" line for each SSI subrequest that
uses wait="yes".
Here, /test/ssi.php contains this:
----
<? header("X-Accel-Redirect: /test/ssi_out"); ?>
----
and /test/ssi_out (set to use SSI in the nginx config):
----
Pre
<!--# include virtual="/test/echo.php?One" wait="yes" -->
Mid-one
<!--# include virtual="/test/echo.php?Two" wait="yes" -->
Mid-two
<!--# include virtual="/test/echo.php?Three" -->
Post
----
and /test/echo.php:
----
<? echo $_SERVER["QUERY_STRING"] ?>
----
(I'm using this convoluted setup because, in my real setup, PHP is
expensive memory-wise, while the real subrequests are time-consuming
but use few resources. It's better to have many of them running in
parallel as virtual requests handled by nginx than to have PHP wait for
minutes until they complete. PHP is only used to set up the
subrequests, which can be done very quickly, so I only need 1-2 PHP
processes to handle several hits per second instead of hundreds of PHP
processes that do nothing but shovel data for a few minutes.)