Request time of 60s when denying SSL requests?
Maxim Dounin
mdounin at mdounin.ru
Sat Jan 12 18:33:06 UTC 2013
Hello!
On Fri, Jan 11, 2013 at 11:18:27AM -0800, JB Hobbs wrote:
> > You may disable keepalive by configuring keepalive_timeout to
> > 0,
>
> > see http://nginx.org/r/keepalive_timeout.
>
>
> > error_page 403 /403.html;
> > location = /403.html {
> >     keepalive_timeout 0;
> > }
>
> Would that approach be any different than me just putting
> "keepalive_timeout 0;" directly into my "location = /" block? Or
> doing that would not work because of the 403 page itself acts
> like a new request that then needs to have the keep alive
> suppressed there?
As long as all requests in "location /" are rejected, and there is
no error_page 403 defined, just "keepalive_timeout 0" would be
enough. A separate 403 location is needed to distinguish between
allowed and not allowed requests.
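For illustration, the two variants might look like this (a minimal
sketch, assuming all requests to "/" are to be rejected; paths are
placeholders):

# Variant 1: keepalive disabled directly in the rejecting
# location; no error_page for 403 is needed.
location / {
    deny all;
    keepalive_timeout 0;
}

# Variant 2: a dedicated 403 location, so keepalive is disabled
# only for rejected requests while allowed ones still use it.
error_page 403 /403.html;
location = /403.html {
    keepalive_timeout 0;
}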
> > Note well: the 400 and 408 entries in your previous message aren't after 403.
> > They are logged for connections on which nginx never received a
> > single complete request, and keepalive_timeout does not apply here.
> > To limit the time nginx will wait for a request you may tune
> > client_header_timeout, see http://nginx.org/r/client_header_timeout.
>
> The way our web service works, our users fetch a tiny file from
> us (front-ended by Nginx). They do this by making a simple GET
> request. I can't imagine the headers transmitted to us are more
> than 1Kb or so - just the user agent string and whatever default
> headers browsers send. There would not be any cookies and so
> forth. With this in mind, what do you think would be a
> reasonable timeout to use? For someone on a dial up connection
> in a far away land with high latency I couldn't imagine it
> taking more than 10 seconds? I want to tune it tight, but not
> overly aggressive.
The minimal time one should allow, IMO, is slightly more than 3s,
to tolerate a single retransmit when the RTT isn't yet known. With
10s, up to two retransmits in a row will be tolerated (3s + 6s),
which is OK for most use cases.
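For instance, at the server level (an illustrative sketch; the
server name and the deny rule stand in for your actual setup):

server {
    listen 80 default_server;
    server_name example.com;

    # 10s tolerates the 3s + 6s retransmit sequence described above.
    client_header_timeout 10s;

    location / {
        deny all;
    }
}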
> At any rate, I tried putting a client_header_timeout setting
> inside of my "location = /" block, but Nginx returned an error
> message in the logs stating that the directive is not allowed in
> there.
>
> Basically what I would like to do is use a VERY aggressive
> client_header_timeout, even 0 if that is allowed, but
> specifically just for requests made to the root (location = /).
> I can do this because such requests made to our service are
> invalid and just "stray" requests coming in over the net.
> Therefore I want to dispose of any system resources ASAP for
> such requests, even if the browser doesn't like it. Is there a
> way I can set this client_header_timeout differently based on
> the location block like what I tried? If not, is there an
> alternative approach?
The request URI isn't known in advance, so it's not possible to
set different header timeouts for different locations. Moreover,
please note that client_header_timeout only takes effect for the
_default_ server on the listen socket in question (as the virtual
host isn't known at that point either).
Once the request headers have been received from the client and
you know the request isn't legitimate, you may just close the
connection with
return 444;
See http://nginx.org/r/return.
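For example (an illustrative sketch; the log path is a
placeholder):

# Read the request, log it, then close the connection without
# sending any response ("stray" requests to the root).
location = / {
    access_log /path/to/forbidden.log combined;
    return 444;
}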
> > I think you are right, disabling keepalive completely may be
> > beneficial in such a case. (Well, nginx itself doesn't care
> > much, but it should be less painful for your OS.)
>
> Is there a command I can run, like netstat or something, that
> shows me to what extent keepalive is taking up resources on my
> server? I would like to get a feel for what impact, if any,
> having it on is making to the system so that I can compare that
> to how things look after turning keepalive off.
This depends on the OS you are using. E.g. on FreeBSD "vmstat -z"
will show something like this:
...
socket: 412, 25605, 149, 1804, 43516452, 0
unpcb: 172, 25622, 96, 686, 14762777, 0
ipq: 32, 904, 0, 226, 22503, 0
udp_inpcb: 220, 25614, 10, 134, 6521351, 0
udpcb: 8, 25781, 10, 193, 6521351, 0
tcp_inpcb: 220, 25614, 43, 6311, 22232147, 0
tcpcb: 632, 25602, 34, 1148, 22232147, 0
tcptw: 52, 5184, 9, 5175, 9010766, 114029
syncache: 112, 15365, 0, 175, 14160824, 0
hostcache: 76, 15400, 139, 261, 441570, 0
tcpreass: 20, 1690, 0, 338, 497191, 0
...
Each established TCP connection uses at least the socket +
tcp_inpcb + tcpcb structures, i.e. about 1.5k. Additionally, each
connection sits in the TCB hash and slows down lookups if there
are lots of connections. This isn't a problem on a properly tuned
system with enough memory, but if you are trying to keep lots of
connections alive, you may want to start counting.
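As a rough illustration, assuming the ~1.5k per-connection figure
above: keeping 100,000 idle connections alive ties up on the order
of 150 MB of kernel memory for these structures alone, on top of
nginx's own per-connection state.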
> > I believe you misunderstood what you actually get. Defining
> > access_log inside the location overrides all access_log's defined
> > on previous levels, so you'll only get requests logged to
> > forbidden.log. What you see in your main.log is other connections
> > opened by the browser.
>
>
> Yes. I am seeing in forbidden.log the requests made to / as
> expected. Then about 10 seconds later for http, and 60 seconds
> later for https, I get the 400 or 408 log entry in main.log. I
> guess this is nginx's way of logging that it (408) or the
> browser (400) closed the connection.
Not exactly - it's about "closed the connection without sending
any complete request in it". 400 here means "client opened the
connection presumably to send a request, but closed the connection
before it was able to send one", and 408 means "... but failed to
send a request before the timeout expired".
> So then my question is how would I tell nginx to make this log
> entry somewhere else other than in main.log. As an example this
> is what it looks like in forbidden.log:
If you want nginx to log 400 and 408 errors separately, you have
to configure error_page to handle these errors and configure
access_log there. E.g.:

error_page 408 /408.html;
location = /408.html {
    access_log /path/to/408.log combined;
}
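A handler for 400 can follow the same pattern (again, the log path
is just a placeholder):

error_page 400 /400.html;
location = /400.html {
    access_log /path/to/400.log combined;
}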
[...]
--
Maxim Dounin
http://nginx.com/support.html