AWS + ECS Docker NodeJS 20 + nGinx Docker Sidecar
Sergey A. Osokin
osa at freebsd.org.ru
Wed Mar 13 20:43:58 UTC 2024
Hi Craig,
On Wed, Mar 13, 2024 at 03:05:00PM -0400, Craig Hoover wrote:
> We have a pretty hard-hitting API application in NodeJS that is deployed in
> AWS ECS using nGinx as a sidecar container to proxy to the NodeJS services.
>
> We have some odd issues where the NodeJS application reports
> millisecond processing times up to res.send(), but occasionally the
> browser reports a response time of 2-5 seconds.
>
> Connections don't time out; they just occasionally hang after the NodeJS
> process completes the request. The NodeJS process, in its own output,
> reports 100ms processing time, but something is "catching" random outgoing
> responses for 2-5 seconds before delivering them. We believe nGinx is the
> culprit but can't figure it out. Any help would be appreciated.
>
> Here is the config
> ----
> worker_rlimit_nofile 2048;
>
> events {
> worker_connections 1024;
> worker_aio_requests 64;
> accept_mutex on;
> accept_mutex_delay 500ms;
> multi_accept on;
> use epoll;
> epoll_events 512;
> }
>
> http {
> # Nginx will handle gzip compression of responses from the app server
> gzip on;
> gzip_proxied any;
> gzip_types text/plain application/json text/css text/javascript application/javascript;
> gzip_min_length 1000;
> client_max_body_size 10M;
> tcp_nopush on;
> tcp_nodelay on;
> sendfile on;
>
> # Offset from the AWS ALB timeout to prevent prematurely closed connections
> keepalive_timeout 65s;
>
> # Erase all memory associated with the connection after it times out.
> reset_timedout_connection on;
>
> # Store metadata of files to increase speed
> open_file_cache max=10000 inactive=5s;
> open_file_cache_valid 15s;
> open_file_cache_min_uses 1;
>
> # nGinx is a proxy, keep this off
> open_file_cache_errors off;
>
> upstream node_backend {
> zone upstreams 256K;
> server 127.0.0.1:3000 max_fails=1 fail_timeout=3s;
> keepalive 256;
> }
>
> server {
> listen 80;
> proxy_read_timeout 60s;
> proxy_send_timeout 60s;
> access_log off;
>
> add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
> add_header X-Frame-Options "SAMEORIGIN";
> add_header Referrer-Policy "strict-origin-when-cross-origin";
> add_header X-Content-Type-Options "nosniff";
> add_header Content-Security-Policy "frame-ancestors 'self'";
>
> location / {
> # Reject requests with unsupported HTTP method
> if ($request_method !~ ^(GET|POST|HEAD|OPTIONS|PUT|DELETE)$) {
> return 405;
> }
>
> # Only requests matching the whitelist expectations will
> # get sent to the node server
> proxy_pass http://node_backend;
> proxy_http_version 1.1;
> proxy_set_header Upgrade $http_upgrade;
> proxy_set_header Connection 'upgrade';
> proxy_set_header Host $http_host;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> proxy_cache_bypass $http_upgrade;
> }
>
> error_page 500 502 503 504 /50x.html;
> location = /50x.html {
> root /usr/share/nginx/html;
> internal;
> }
> }
> }
Is there anything in the system logs?
You may want to update the current configuration with:
- the keepalive directive, [1]: note that for keepalive connections to the
  upstream to work, proxy_http_version should be 1.1 and the "Connection"
  header field should be cleared, while the posted configuration sends
  "Connection: upgrade" to the upstream on every request;
- an increased number of worker connections and file descriptor limits, [2].
That may help to improve performance; see the sketch below.
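
Here is a minimal sketch of how the relevant parts could look, assuming
WebSocket upgrades still need to be supported.  The map block follows the
pattern from the nginx WebSocket documentation, except that the default
value is empty instead of "close", so regular requests get an empty
"Connection" header and can reuse upstream keepalive connections.  The
worker_rlimit_nofile and worker_connections values are only illustrative,
see [2] for guidance:

worker_rlimit_nofile 8192;

events {
    worker_connections 4096;
}

http {
    # Upgrade requests keep "Connection: upgrade"; everything else
    # gets an empty "Connection" header, which enables keepalive
    # to the upstream as described in [1].
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      "";
    }

    upstream node_backend {
        zone upstreams 256K;
        server 127.0.0.1:3000 max_fails=1 fail_timeout=3s;
        keepalive 256;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://node_backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
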
References
----------
1. https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
2. https://www.nginx.com/blog/tuning-nginx/
--
Sergey A. Osokin