Error: Couldn't connect to server
oscaretu
oscaretu at gmail.com
Sat Apr 28 12:27:21 UTC 2018
Hello, Mohan.
Have you tried making simultaneous requests against the server from another
computer, using curl from the command line? If the requests work from the
second computer, there is no problem in the server, and the problem will be
in the client. Perhaps you are looking for a problem in nginx that doesn't
exist.
Or perhaps check the sockets in TIME_WAIT state, so they can be reused more
quickly. This can give you a clue:
https://www.thecodingforums.com/threads/how-to-reuse-tcp-listening-socket-immediately-after-it-was-connectedat-least-once.685380/
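The usual fix for the situation that thread describes is SO_REUSEADDR on the
listening socket; a minimal C sketch (the port is just an example, error
handling omitted):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int on = 1;
        /* allow bind() to reuse an address whose old connections
           are still lingering in TIME_WAIT */
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8080);              /* example port */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));
        return 0;
    }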
I suggest using "*sysdig* <https://sysdig.com/opensource/sysdig/>" to
monitor the server or the client while you are making the requests, so you'll
be able to watch what is happening on your computers.
Kind regards,
Oscar
On Sat, Apr 28, 2018 at 1:32 PM, mohan prakash via nginx <nginx at nginx.org>
wrote:
> Hi Peter
>
> Thanks for your reply.
>
> I am not using a script; I am building a streamer project that uses
> libcurl to download content from an nginx server.
> Since the content I am downloading is HLS, I download every ~5 sec.
>
> During the stress test I see a "couldn't connect to server" error for
> HTTP requests. With one or two services I don't see this problem.
>
>
>
> Regards
> Mohanaprakash T
>
>
> On Friday, 27 April, 2018, 9:30:46 PM IST, Peter Booth <peter_booth at me.com>
> wrote:
>
>
> I’m guessing that you have a script that keeps executing curl. What you can
> do is use curl -K ./fileWithListOfUrls.txt
> and the one curl process will visit each URL in turn, reusing the socket
> (aka HTTP keep-alive).
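>
> The same idea applies inside libcurl: reuse one easy handle across
> requests and its connection cache keeps the socket open. A minimal C
> sketch (the URLs are hypothetical):
>
>     #include <curl/curl.h>
>
>     int main(void) {
>         curl_global_init(CURL_GLOBAL_DEFAULT);
>         CURL *h = curl_easy_init();   /* one handle = one connection cache */
>         curl_easy_setopt(h, CURLOPT_URL, "http://127.0.0.1:8080/seg1.ts");
>         curl_easy_perform(h);         /* first request opens the connection */
>         curl_easy_setopt(h, CURLOPT_URL, "http://127.0.0.1:8080/seg2.ts");
>         curl_easy_perform(h);         /* second request reuses it */
>         curl_easy_cleanup(h);         /* socket is closed only here */
>         curl_global_cleanup();
>         return 0;
>     }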
>
> That said, curl isn’t a great workload simulator and, in the long run,
> you will get better results from something like wrk2.
>
>
> On 27 Apr 2018, at 11:32 AM, mohan prakash via nginx <nginx at nginx.org>
> wrote:
>
> Hi Liu
>
> On the client side I have increased the file descriptor limit to 10000,
> but the issue remains.
>
> I also increased the FD limit on the server side, and the same issue
> continues.
>
>
> I followed the link below to increase the FD limit.
>
> Linux Increase The Maximum Number Of Open Files / File Descriptors (FD) -
> nixCraft
> <https://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/>
>
>
>
>
> Regards
> Mohanaprakash T
>
>
> On Friday 27 April 2018, 7:06:51 PM IST, Liu Lantao <liulantao at gmail.com>
> wrote:
>
>
> It seems like your client has reached the limit of max open files.
>
> From the shell where you start your client program, run ‘ulimit -a’ to
> check the settings.
> You can also check the files opened by your client in /proc/<pid>/fd/.
>
> Increasing that value is simple; you can change it temporarily or save it
> to a config file. There are tons of documents online about how to change
> it.
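>
> If you prefer to raise the limit from inside the client program itself,
> setrlimit(2) does the same thing; a minimal C sketch:
>
>     #include <stdio.h>
>     #include <sys/resource.h>
>
>     int main(void) {
>         struct rlimit rl;
>         getrlimit(RLIMIT_NOFILE, &rl);   /* read current soft/hard limits */
>         rl.rlim_cur = rl.rlim_max;       /* raise soft limit to the hard cap */
>         if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
>             perror("setrlimit");
>         return 0;
>     }
>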
> On Fri, Apr 27, 2018 at 9:09 PM mohan prakash via nginx <nginx at nginx.org>
> wrote:
>
> Hi Team
>
> I am trying to execute ~1000 curl requests from my CentOS machine to my
> nginx server in ~5 sec.
> The same exercise repeats every ~5 sec.
>
> I am using libcurl to make the HTTP request.
>
> During this process I see that most of my requests fail with the reason:
>
> *Failure Curl Error Code[ 7 ] Reason[ Couldn't connect to server ]*
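>
> (For context, that message is the CURLcode returned by curl_easy_perform();
> a sketch of the check, not my exact code, where 7 is CURLE_COULDNT_CONNECT:)
>
>     CURLcode rc = curl_easy_perform(curl); /* curl: initialized easy handle */
>     if (rc != CURLE_OK)
>         fprintf(stderr, "Failure Curl Error Code[ %d ] Reason[ %s ]\n",
>                 rc, curl_easy_strerror(rc));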
>
> Can someone suggest whether I am missing any configuration in my nginx
> server? Below is my nginx server configuration:
>
> user nginx;
> worker_processes auto;
> error_log /var/log/nginx/error.log;
> pid /run/nginx.pid;
>
> # Load dynamic modules. See /usr/share/nginx/README.dynamic.
> include /usr/share/nginx/modules/*.conf;
>
>
> worker_rlimit_nofile 262144;
>
> events {
>     worker_connections 16384;
> }
>
> http {
>     log_format main '$remote_addr - $remote_user [$time_local] "$request" '
>                     '$status $body_bytes_sent "$http_referer" '
>                     '"$http_user_agent" "$http_x_forwarded_for"';
>
>     access_log /var/log/nginx/access.log main;
>
>     sendfile on;
>     tcp_nopush on;
>     tcp_nodelay on;
>     keepalive_timeout 65;
>     types_hash_max_size 2048;
>
>     include /etc/nginx/mime.types;
>     default_type application/octet-stream;
>
>     # Load modular configuration files from the /etc/nginx/conf.d directory.
>     # See http://nginx.org/en/docs/ngx_core_module.html#include
>     # for more information.
>     include /etc/nginx/conf.d/*.conf;
>
>     limit_conn_zone $binary_remote_addr zone=perip:10m;
>     limit_conn_zone $server_name zone=perserver:10m;
>
>     server {
>         limit_conn perip 2000;
>         limit_conn perserver 20000;
>         listen *:8080 backlog=16384;
>     }
> }
>
> Regards
> Mohanaprakash T
--
Oscar Fernandez Sierra
oscaretu at gmail.com