Cannot disable buffering during SSE connection

Anatoliy Malishevskiy tony.malish at
Thu May 6 23:49:10 UTC 2021

Thank you!

I was able to fix both of these issues.
However, my fix for issue #2 is hacky, and I am sure there has to be a
better way. Details below.

On Wed, May 5, 2021 at 8:40 PM Maxim Dounin <mdounin at> wrote:

> Hello!
> On Wed, May 05, 2021 at 02:24:52PM -0700, Anatoliy Malishevskiy wrote:
> > I am trying to migrate from Apache2 to NGINX and having issues with SSE
> > connections. Everything else is working fine, the regular GET or POST
> > requests are getting through successfully.
> >
> > But there are 2 critical issues with SSE connections:
> > 1) the NGINX holds up responses until the buffer is full
> > 2) The NGINX blocks any other requests to the server if SSE is in
> progress.
> >
> > With Apache2 I used to have issue #1, but it was easily resolved with the
> > following configurations:
> > - *OutputBufferSize 0*
> > The issue #2 is really blocking me from using NGINX. Why does the server
> > block
> > other connections when SSE is in progress?
> > How to fix this issue?
> [...]
> The "fastcgi_buffering off;" should be enough to disable response
> buffering on nginx side.  If it doesn't work for you, this means
> one of the following:
> - You've not configured it properly: for example, configured it in
>   the wrong server block, or failed to reload configuration, or
>   not using fastcgi_pass to process particular requests.  I think
>   this is unlikely, but you may want to double-check: for example,
>   by configuring something like "return 404;" in the relevant
>   location to see if it works.
> - Some buffering happens in your FastCGI backend.  From your
>   configuration it looks like you are using fastcgiwrap, which
>   doesn't seem to provide any ways to flush libfcgi buffers, so
>   probably it is the culprit.

You were right to point to fcgiwrap: it was holding its buffers until
the application terminated or the buffer was full.
I fixed this issue by adding the following lines to the relevant location block:

### When set (e.g., to ""), the NO_BUFFERING parameter disables
### fcgiwrap's output buffering.
### MUST be set if SSE is used
fastcgi_param NO_BUFFERING "";

### When buffering is disabled, the response is passed to the
### client synchronously, as soon as it is received.
### nginx will not try to read the whole response from the
### FastCGI server.
### MUST be set if SSE is used
fastcgi_buffering off;
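For reference, here is a minimal sketch of how the two directives might sit
together in one location block. The /cgi-bin/ path, the document root, and the
fcgiwrap socket path below are illustrative assumptions, not taken from my
actual config:

```nginx
# Hedged sketch: paths and socket location are illustrative only.
location /cgi-bin/ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;  # assumed docroot

    # Disable buffering on both sides so SSE events reach the client immediately:
    fastcgi_param NO_BUFFERING "";   # tells fcgiwrap not to buffer its output
    fastcgi_buffering off;           # tells nginx not to buffer the response

    fastcgi_pass unix:/var/run/fcgiwrap.socket;  # assumed fcgiwrap socket path
}
```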

> Your other problem is likely also due to fastcgiwrap, but this
> time incorrect configuration: by default it only provides one
> process, so any in-progress request, such as server-sent events
> connection, will block all other requests from being handled.  You
> have to configure more fcgiwrap processes using the "-c" command
> line argument to handle multiple parallel requests.

You were right here. I had to pass the "-c4" parameter, for example, to the
fcgiwrap process. But I could not find an easy way to set this parameter:
I had to hack /etc/init.d/fcgiwrap, change the DAEMON_OPTS variable, and
force the script to stop and then start the service again with the new
parameters when I type:
> sudo /etc/init.d/fcgiwrap reload
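For what it's worth, a less invasive variant of the hack I used: on
Debian-style systems, init scripts conventionally source /etc/default/<name>,
so if the shipped fcgiwrap script does that, DAEMON_OPTS can be overridden
there instead of editing /etc/init.d/fcgiwrap itself (whether the script
actually sources this file is an assumption worth checking first):

```shell
# Hedged sketch: assumes /etc/init.d/fcgiwrap sources /etc/default/fcgiwrap.
# Override DAEMON_OPTS so fcgiwrap starts 4 worker processes:
echo 'DAEMON_OPTS="-c 4"' | sudo tee /etc/default/fcgiwrap

# A plain reload may not respawn workers; stop and start to apply:
sudo /etc/init.d/fcgiwrap stop
sudo /etc/init.d/fcgiwrap start
```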

Do you know a better way to pass this -c4 parameter to fcgiwrap?
Or do I have to create my own fcgiwrap init script just to enable
handling of multiple parallel requests?


> --
> Maxim Dounin
> _______________________________________________
> nginx mailing list
> nginx at

Anatoliy Malishevskiy
YumaWorks, Inc.
