1000 requests per second?
owkaye at gmail.com
Tue Nov 18 11:36:46 MSK 2008
Can nginx -- running on one server -- deliver 1000 requests
per second without "bogging down" and pushing more and more
requests into a queue?
Here's my reason for asking:
I'm designing a live auction website that needs to respond
to 500-1000 requests per second for about an hour. Each
request will post only 20 bytes of data, so the volume being
posted is low. Nevertheless the HTTP headers still need to
be parsed and they will have far more volume than the
actual post data -- so it seems I should do everything I
can to reduce the HTTP header overhead. This will
substantially reduce the load and speed up nginx's responses.
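To make the overhead concrete, here is a rough back-of-envelope sketch I put together; the header lines and sizes are illustrative guesses on my part, not captured traffic:

```javascript
// Back-of-envelope comparison of header volume vs. post data volume.
// The header lines below are illustrative guesses, not real traffic.
var body = "bid=501&amount=20"; // roughly 20 bytes of auction data
var headers = [
  "POST /bid HTTP/1.1",
  "Host: auction.example.com",
  "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1)",
  "Content-Type: application/x-www-form-urlencoded",
  "Content-Length: " + body.length,
  "Cookie: session=abc123" // cookies are often the largest header
];
var raw = headers.join("\r\n") + "\r\n\r\n" + body;
console.log("body bytes:  " + body.length);
console.log("total bytes: " + raw.length); // headers dominate the request
```

Even with these modest guesses, the headers carry several times the volume of the actual post data.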
I'm wondering whether nginx can use "Web Sockets"
technology to eliminate all but the first set of HTTP
headers and maintain a persistent connection with the
browser, so data can be passed back and forth faster.
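From what I've read, plain HTTP keep-alive already lets one connection carry many requests, which at least avoids repeated connection setup. A sketch of the nginx directives involved (the values below are my guesses, not tested settings):

```nginx
# Sketch only -- the values here are my guesses, not tuned settings.
http {
    keepalive_timeout  75s;    # hold each browser connection open between bids
    keepalive_requests 1000;   # allow many requests on one connection
}
```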
If this is not possible, can you tell me the best way to
reduce the HTTP header overhead so I can make sure that
each of those 1000 requests per second is responded to as
fast as it comes in? Or am I concerned about something
that's a non-issue, perhaps because nginx is so blazingly
fast that it can handle this kind of load without breaking
a sweat?
The worst problem I can imagine is that during one of these
live auctions the server will begin to respond slowly and
push requests into a queue. If this happens, bidders will
not receive timely updates from the server, and the
whole service will lose credibility.
If that's not possible, is having the visitors' browsers
send requests via XMLHttpRequest the next-best option for
reducing overhead?
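For illustration, this is the kind of XMLHttpRequest call I have in mind; the "/bid" URL and the form field names are placeholders I made up:

```javascript
// Browser-side sketch: POST ~20 bytes of bid data via XMLHttpRequest.
// "/bid" and the field names are made-up placeholders, not a real API.
function sendBid(value) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/bid", true); // asynchronous request
  xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // update the auction display from xhr.responseText
    }
  };
  xhr.send("bid=" + value); // the ~20-byte payload
}
```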
Thanks for any insights you can provide to help me decide
whether or not nginx might be appropriate for my needs.