On Mon, Jul 01, 2019 at 11:24:28AM +0200, Michael Würtinger wrote:
thanks a lot for your reply. Could you please elaborate a little bit on which memory resources need to be freed periodically? How much memory can be held by a connection? What's the worst-case scenario? We are currently running it in production with http2_max_requests set to a value so high that the connection practically lives forever, and so far we cannot spot any problems, but maybe we're missing something?
An example of a "worst case" can be seen here:
Memory can be allocated from the connection memory pool. This memory has to be freed at some point - and the only way to do that is to close the connection. That's why the number of requests on a particular connection is limited by default.
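To illustrate why closing the connection is the only way to reclaim this memory, here is a minimal sketch of a pool-style ("arena") allocator in C. This is not nginx's actual implementation - the names and structure are invented for illustration - but it shows the key property: individual allocations cannot be freed, only the whole pool can.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy arena allocator, loosely analogous to a per-connection
 * memory pool.  Names are illustrative, not nginx's. */
typedef struct {
    char   *buf;
    size_t  used;
    size_t  size;
} pool_t;

pool_t *pool_create(size_t size) {
    pool_t *p = malloc(sizeof(pool_t));
    p->buf  = malloc(size);
    p->used = 0;
    p->size = size;
    return p;
}

/* Allocation only bumps a pointer; there is no per-allocation
 * free, so memory used by earlier requests accumulates. */
void *pool_alloc(pool_t *p, size_t n) {
    if (p->used + n > p->size)
        return NULL;            /* pool exhausted */
    void *mem = p->buf + p->used;
    p->used += n;
    return mem;
}

/* The only way to reclaim memory is to destroy the whole pool --
 * in the connection case, that means closing the connection. */
void pool_destroy(pool_t *p) {
    free(p->buf);
    free(p);
}
```

Each request that allocates even a few bytes from the connection pool moves `used` forward, and nothing moves it back until `pool_destroy()` - hence the default cap on requests per connection.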
Whether or not memory allocations happen in your particular use case doesn't really matter, especially given that things can change with seemingly minor configuration and/or client behaviour changes.
In most cases we try to keep allocations from the connection memory pool to a minimum, yet it is not always possible or convenient to avoid them completely. This allows processing thousands of requests on a single connection without observable memory impact. Most likely millions will also work, except maybe in some specific use cases; still, I wouldn't recommend allowing that many requests, just to be on the safe side.
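For reference, a minimal configuration sketch showing the directive in question (the values here are only examples, not recommendations):

```nginx
http {
    server {
        listen 443 ssl http2;

        # Default is 1000 requests per HTTP/2 connection.  Raising it
        # keeps connections - and their memory pools - alive longer.
        # Note: in nginx 1.19.7+ this directive is obsolete and the
        # limit is governed by keepalive_requests instead.
        http2_max_requests 10000;
    }
}
```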