From nginx-forum at forum.nginx.org Tue Nov 1 08:42:06 2022
From: nginx-forum at forum.nginx.org (fostercarly)
Date: Tue, 01 Nov 2022 04:42:06 -0400
Subject: Nginx and 400 SSL error handling
In-Reply-To:
References:
Message-ID:

The 400 (Bad Request) status code indicates that the server cannot or will not process the request because the received syntax is invalid, nonsensical, or exceeds some limitation on what the server is willing to process. It means that the request itself is somehow incorrect or corrupted and the server couldn't understand it. The server refuses to service the request because the request entity is in a format not supported by the requested resource for the requested method, so the page cannot be displayed properly.

The main thing to understand is that the 400 Bad Request error is a client-side error. It can be caused by a wrongly written URL or a URL that contains unrecognizable characters. Another cause might be an invalid or expired cookie. It can also appear if you try to upload a file that is too large: if the server is configured with a file size limit, you might encounter a 400 error.

Expired Client Certificate

This issue typically happens with 2-way TLS, when the certificate sent by the client has expired. In 2-way TLS, both client and server exchange their public certificates to complete the handshake: the client validates the server certificate and the server validates the client certificate. If the client certificate turns out to be expired during the TLS handshake, the server sends 400 Bad Request with the message "The SSL certificate error". The solution is to procure a new certificate and upload it.

http://net-informations.com/q/mis/400.html

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284506,295640#msg-295640

From nginx-forum at forum.nginx.org Tue Nov 1 12:19:15 2022
From: nginx-forum at forum.nginx.org (George)
Date: Tue, 01 Nov 2022 08:19:15 -0400
Subject: nginx-quic socket() 0.0.0.0:80 failed (94: Socket type not supported)
Message-ID:

I tested nginx-quic https://quic.nginx.org/README for HTTP/3 over QUIC using the quictls openssl 1.1.1q forked library and ran into an interesting error for non-HTTPS nginx vhost configurations. If a non-HTTPS nginx vhost doesn't explicitly list the listen directive for port 80, I get this error when running an nginx -t config check:

nginx: [emerg] socket() 0.0.0.0:80 failed (94: Socket type not supported)

server {
    server_name domain.com www.domain.com;
}

but if I explicitly list the listen directive there is no error:

server {
    listen 80;
    server_name domain1.com www.domain1.com;
}

Nginx was built on CentOS 7 with GCC 11.2.1 and quictls openssl 1.1.1q

nginx -V
nginx version: nginx/1.23.2 (011122-105436-centos7-d9e494b-br-6e975bc)
built by gcc 11.2.1 20220127 (Red Hat 11.2.1-9) (GCC)
built with OpenSSL 1.1.1q+quic 5 Jul 2022
TLS SNI support enabled

This seems to be an issue only with nginx-quic builds of Nginx. If I build a regular Nginx version without nginx-quic/quictls, a non-HTTPS vhost with no listen directive listed for port 80 works fine, and that has been the expected behaviour since I started using Nginx ~11 years ago.

So with nginx-quic, do server{} contexts without an explicitly listed listen port no longer default to port 80?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295642,295642#msg-295642
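[Editor's note: for context on the question above, the nginx documentation says that a server{} block without a listen directive gets a default listen socket: *:80 when nginx runs with superuser privileges, *:8000 otherwise. A minimal sketch of the equivalence George is relying on, reusing his example hostnames:]

    server {
        # No "listen" here: nginx behaves as if the following were present
        # (assuming it runs with superuser privileges):
        #     listen *:80;
        server_name domain.com www.domain.com;
    }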
From arut at nginx.com Tue Nov 1 13:02:51 2022
From: arut at nginx.com (Roman Arutyunyan)
Date: Tue, 1 Nov 2022 17:02:51 +0400
Subject: nginx-quic socket() 0.0.0.0:80 failed (94: Socket type not supported)
In-Reply-To:
References:
Message-ID: <20221101130251.4o3zrzdhcci4p6lt@N00W24XTQX>

Hi George,

On Tue, Nov 01, 2022 at 08:19:15AM -0400, George wrote:
> I tested nginx-quic https://quic.nginx.org/README for HTTP/3 over QUIC using
> quictls openssl 1.1.1q forked library and ran into an interesting error for
> non-HTTPS nginx vhost configurations. If non-HTTPS nginx vhost doesn't
> specifically list the listen directive for port 80, I get this error when
> running nginx -t config check
>
> nginx: [emerg] socket() 0.0.0.0:80 failed (94: Socket type not supported)
>
> server {
>
> server_name domain.com www.domain.com;
> }
>
> but if I specifically list the listen directive no error
>
> server {
> listen 80;
> server_name domain1.com www.domain1.com;
> }
>
> Nginx was built on CentOS 7 with GCC 11.2.1 and quictls openssl 1.1.1q
>
> nginx -V
> nginx version: nginx/1.23.2 (011122-105436-centos7-d9e494b-br-6e975bc)
> built by gcc 11.2.1 20220127 (Red Hat 11.2.1-9) (GCC)
> built with OpenSSL 1.1.1q+quic 5 Jul 2022
> TLS SNI support enabled
>
> This seems to only be an issue with nginx-quic built Nginx versions. If I
> build a regular Nginx version without nginx-quic/quictls the non-HTTPS vhost
> with no listen directive specifically listed for port 80 works fine and has
> been the expected case since I started using Nginx ~11yrs ago.
>
> So with nginx-quic, does the assumption that server{} contexts without a
> specifically mentioned listen port, no longer default to port 80?

Thanks for reporting this. Indeed, default listen is broken in nginx-quic branch. Please try the attached patch which should fix the problem.

--
Roman Arutyunyan

-------------- next part --------------
# HG changeset patch
# User Roman Arutyunyan
# Date 1667307635 -14400
#      Tue Nov 01 17:00:35 2022 +0400
# Branch quic
# Node ID 40777e329eea363001186c4bf609d2ef0682bcee
# Parent  598cbf105892bf9d7acc0fc3278ba9329b3a151c
Set default listen socket type in http.

The type field was added in 7999d3fbb765 at early stages of QUIC
implementation and was not initialized for default listen.
Missing initialization resulted in default listen socket creation error.
diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c
--- a/src/http/ngx_http_core_module.c
+++ b/src/http/ngx_http_core_module.c
@@ -3008,6 +3008,7 @@ ngx_http_core_server(ngx_conf_t *cf, ngx
     lsopt.socklen = sizeof(struct sockaddr_in);
 
     lsopt.backlog = NGX_LISTEN_BACKLOG;
+    lsopt.type = SOCK_STREAM;
     lsopt.rcvbuf = -1;
     lsopt.sndbuf = -1;
 #if (NGX_HAVE_SETFIB)

From nginx-forum at forum.nginx.org Tue Nov 1 14:18:09 2022
From: nginx-forum at forum.nginx.org (George)
Date: Tue, 01 Nov 2022 10:18:09 -0400
Subject: nginx-quic socket() 0.0.0.0:80 failed (94: Socket type not supported)
In-Reply-To: <20221101130251.4o3zrzdhcci4p6lt@N00W24XTQX>
References: <20221101130251.4o3zrzdhcci4p6lt@N00W24XTQX>
Message-ID:

That was a quick reply; I was about to pop onto the Nginx Slack channel :)

Tried the patch but I am getting

patching file src/http/ngx_http_core_module.c
patch: **** malformed patch at line 18:     lsopt.socklen = sizeof(struct sockaddr_in);

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295642,295645#msg-295645

From nginx-forum at forum.nginx.org Tue Nov 1 14:25:22 2022
From: nginx-forum at forum.nginx.org (George)
Date: Tue, 01 Nov 2022 10:25:22 -0400
Subject: nginx-quic socket() 0.0.0.0:80 failed (94: Socket type not supported)
In-Reply-To:
References: <20221101130251.4o3zrzdhcci4p6lt@N00W24XTQX>
Message-ID: <25bd295dfef31603c280253793119aff.NginxMailingListEnglish@forum.nginx.org>

OK, fixed the patch and yup, it is working now! No more socket() 0.0.0.0:80 failed (94: Socket type not supported) errors when the listen directive is not explicitly set. Thanks Roman!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295642,295646#msg-295646

From cncaiker+nginx at gmail.com Tue Nov 1 16:29:49 2022
From: cncaiker+nginx at gmail.com (Drweb Mike)
Date: Wed, 2 Nov 2022 00:29:49 +0800
Subject: how can I cache upstream by mime type in nginx
Message-ID:

hi there

My front end is nginx working as a reverse proxy in front of my backend. I was trying to cache image files only, but it doesn't seem to work at all; *$no_cache* always outputs the default value "proxy", when it should be "0" when I visit image files.

here is my config

map $upstream_http_content_type $no_cache {
    default proxy;
    "~*image" 0;
}
proxy_store on;
proxy_temp_path /tmp/ngx_cache;
proxy_cache_path /tmp/ngx_cache/pcache levels=1:2 use_temp_path=on
    keys_zone=pcache:10m inactive=14d max_size=3g;
log_format upstreamlog '$server_addr:$host:$server_port $remote_addr:$remote_port $remote_user [$time_local] '
    '"$request" $status $request_length $body_bytes_sent [$request_time] '
    '"$http_referer" "$http_user_agent" "$http_x_forwarded_for" - '
    'upstream: $upstream_addr - $upstream_status - upstream_response_time: $upstream_response_time $upstream_http_content_type $skip_cache : $no_cache';
server{
.....
    if ($skip_cache = false){
        set $skip_cache 0;
    }
    if ($no_cache = false){
        set $no_cache 0;
    }
    if ($request_method = POST) {
        set $skip_cache 1;
    }
    if ($query_string != "") {
        set $skip_cache 1;
    }
    if ($http_cache_control ~* "private") {
        set $skip_cache 1;
    }
    if ($upstream_http_cache_control ~* "private") {
        set $skip_cache 1;
    }
    location / {
        proxy_cache_key "$http_host$uri$is_args$args";
        proxy_cache pcache;
        add_header X-Cache-Status $upstream_cache_status;
        proxy_cache_bypass $http_pragma $http_authorization $skip_cache;
        proxy_no_cache $http_pragma $http_authorization $skip_cache $no_cache;
        proxy_pass http://$backend;
        access_log /dev/stdout upstreamlog;
    }
}

the curl test output

HTTP/1.1 200 OK
Server: nginx
Date: Sun, 30 Oct 2022 17:22:56 GMT
Content-Type: image/jpeg
Content-Length: 56769
Connection: keep-alive
Last-Modified: Tue, 19 Jul 2022 03:27:34 GMT
ETag: "62d624a6-ddc1"
Cache-Control: public, max-age=604800, s-maxage=1209600, stale-if-error=400
X-Cache-Status: MISS
Accept-Ranges: bytes

why doesn't the *$upstream_http_content_type* map work as expected?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From francis at daoine.org Tue Nov 1 22:07:37 2022
From: francis at daoine.org (Francis Daly)
Date: Tue, 1 Nov 2022 22:07:37 +0000
Subject: how can I cache upstream by mime type in nginx
In-Reply-To:
References:
Message-ID: <20221101220737.GG4185@daoine.org>

On Wed, Nov 02, 2022 at 12:29:49AM +0800, Drweb Mike wrote:

Hi there,

> My front end is nginx using reverse proxy work with my backend, I was
> trying to cache images files only, but it seems doesn't work at all, the
> *$no_cache* always output default value "proxy", which should be "0" when I
> visit image files

$upstream_http_content_type is the Content-Type response header from the upstream server, after nginx has sent a request to the upstream server. Before nginx has sent a request to the upstream server, the variable has no value.

If you want to use variables to decide whether nginx should handle a request by looking in the cache before asking upstream, you should only use variables that are available in the request, not ones that come from upstream.

(Maybe you can use part of the request uri -- starts with /images/ or ends with .jpg or .png, for example? It depends on what requests your clients will be making.)

> why *$upstream_http_content_type* map doesn't works as expected

Your expectation is wrong.

Cheers,

f
--
Francis Daly        francis at daoine.org
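[Editor's note: a minimal sketch of the request-based approach Francis suggests above, deciding from the URI rather than from the upstream Content-Type. The /images/ prefix and the extension list are illustrative assumptions rather than the poster's actual layout, and the map block belongs at http{} level:]

    map $uri $no_cache {
        default                        1;   # by default, do not store the response
        ~*^/images/                    0;   # store responses for /images/...
        ~*\.(?:jpe?g|png|gif|webp)$    0;   # ...or for common image extensions
    }

    location / {
        proxy_cache        pcache;
        proxy_cache_key    "$http_host$uri$is_args$args";
        proxy_no_cache     $no_cache;       # a request-time variable, known before proxying
        proxy_pass         http://$backend;
        add_header         X-Cache-Status $upstream_cache_status;
    }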
From francis at daoine.org Tue Nov 1 22:14:38 2022
From: francis at daoine.org (Francis Daly)
Date: Tue, 1 Nov 2022 22:14:38 +0000
Subject: how can I cache upstream by mime type in nginx
In-Reply-To: <20221101220737.GG4185@daoine.org>
References: <20221101220737.GG4185@daoine.org>
Message-ID: <20221101221438.GH4185@daoine.org>

On Tue, Nov 01, 2022 at 10:07:37PM +0000, Francis Daly wrote:
> On Wed, Nov 02, 2022 at 12:29:49AM +0800, Drweb Mike wrote:

Hi there,

> > My front end is nginx using reverse proxy work with my backend, I was
> > trying to cache images files only, but it seems doesn't work at all, the
> > *$no_cache* always output default value "proxy", which should be "0" when I
> > visit image files

> If you want to use variables to decide whether nginx should handle a
> request by looking in the cache before asking upstream, you should only
> use variables that are available in the request, not ones that come
> from upstream.
>
> (Maybe you can use part of the request uri -- starts with /images/ or
> ends with .jpg or .png, for example? It depends on what requests your
> clients will be making.)

proxy_cache_bypass (http://nginx.org/r/proxy_cache_bypass) is "nginx should not look in the cache for this response; go straight to upstream".

proxy_no_cache (http://nginx.org/r/proxy_no_cache) is "nginx should not save this response from upstream to the cache".

Maybe you want to never use proxy_cache_bypass, and use proxy_no_cache to make sure that only the things that should be written to the cache are written there? You could do proxy_no_cache based on $upstream_http_content_type; any other requests will look in the cache, see nothing there, and go to upstream anyway.

Cheers,

f
--
Francis Daly        francis at daoine.org

From francis at daoine.org Tue Nov 1 23:35:35 2022
From: francis at daoine.org (Francis Daly)
Date: Tue, 1 Nov 2022 23:35:35 +0000
Subject: proxy_pass works on main page but not other pages
In-Reply-To: <6821f650-3a2d-b492-765b-953570c77edc@gmail.com>
References: <78a8bd73-d6e3-e890-d9b1-6df427991b9b@gmail.com> <6821f650-3a2d-b492-765b-953570c77edc@gmail.com>
Message-ID: <20221101233535.GI4185@daoine.org>

On Sun, Oct 30, 2022 at 11:56:39AM -0600, Brian Carey wrote:

Hi there,

> Thinking it through though I think my solution is bad since it implies a
> dependency between the urls defined in the program and the location used in
> nginx, ie. they must match and the program cannot be proxied at an arbitrary
> location.

If you have a "back-end" at http://one.internal.example.com/, that is reverse-proxied behind the public-facing http://example.com/one/ using the "normal" nginx config fragment

location /one/ {
    proxy_pass http://one.internal.example.com/;
}

then the client browser making the requests does not know that there is a back-end service.

When the client requests http://example.com/one/two.html, nginx will ask for http://one.internal.example.com/two.html, and will send the response http headers and body content back to the client.

If that response contains links or references of the form "three.jpg" or "./three.jpg", then the client will make the next request for http://example.com/one/three.jpg, which will get to nginx, which will know to proxy_pass to the same back-end service, and all will probably work.

If the response contains links of the form "/three.jpg", then the client will make the next request for http://example.com/three.jpg, which will get to nginx but will probably not get a useful response, because nginx knows that it must not proxy_pass to the same back-end because the local part of the request does not start with /one/. The user will probably see an error or something that looks broken.

If the response contains links of the form http://one.internal.example.com/three.jpg, then the client will presumably fail to resolve the hostname one.internal.example.com, and the user will probably see an error.

> So hopefully there is a better solution than the one I found. I
> hope I'm not asking too many questions.

Whether or not a particular back-end can be reverse-proxied easily, or can be reverse-proxied easily at a different local part of the url hierarchy from where it thinks it is installed, is mostly down to the back-end application to decide.

In general (and there are exceptions), nginx can readily rewrite the http response headers, and cannot readily rewrite the http response body, in order to adjust links or references to other internal resources.
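[Editor's note: on the "rewrite the http response headers" point above, a sketch using the hypothetical names from Francis's example. proxy_redirect is the piece that maps a back-end Location or Refresh header back onto the public prefix; the explicit form written out here is equivalent to the default that this proxy_pass implies:]

    location /one/ {
        proxy_pass     http://one.internal.example.com/;
        # Rewrites "Location: http://one.internal.example.com/..." response
        # headers to "Location: /one/..." before they reach the client.
        proxy_redirect http://one.internal.example.com/ /one/;
    }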
If you control the back-end service, and you know that you want to reverse-proxy it behind http://example.com/one/, you will probably find it easier to work with if you can install the back-end service at http://one.internal.example.com/one/. That would make the first two forms of links "Just Work"; and the third (full-url) form is usually easier to recognise and replace.

> > > I am able to use proxy_pass to forward https:/biscotty.me/striker to
> > > the main page of my app. The problem is that all of the links in the
> > > app result in a page not found error from the apache server handling
> > > requests to /. So it seems like the port number information is
> > > somehow being lost in translation?

More likely, I guess, is that the links are of the second form, to "/three.jpg" instead of the "three.jpg".

But it could also be related to what the initial request from the client was -- "/striker" and "/striker/" are different, and I suspect you should use the with-trailing-slash version in your config "location" line.

But if you already have a working configuration, that's good!

Cheers,

f
--
Francis Daly        francis at daoine.org

From logchen2009 at gmail.com Fri Nov 4 07:59:17 2022
From: logchen2009 at gmail.com (solomon)
Date: Fri, 4 Nov 2022 15:59:17 +0800
Subject: performance guide for nginx L4 stream
In-Reply-To: <001001d8e9f9$6a7d1360$3f773a20$@roze.lv>
References: <001001d8e9f9$6a7d1360$3f773a20$@roze.lv>
Message-ID:

Hi,

This is probably an issue associated with the backlog and/or TIME_WAIT handling. If the backlog queue or the TIME_WAIT bucket is full, the new connection request will be dropped by TCP before Nginx even accepts the connection, so you won't see an error logged in Nginx.

To increase the backlog queue, you can increase the system TCP settings `net.core.somaxconn` and `net.ipv4.tcp_max_syn_backlog`, and the nginx config `listen backlog=xxxx`. The former are the system-level limits, the latter is the process-level limit.

To avoid TIME_WAIT bucket overflow, you can increase the system TCP setting `net.ipv4.tcp_max_tw_buckets` or enable `net.ipv4.tcp_tw_reuse`. In addition, enlarging the ip/port range on the client side may also be helpful.

BTW, you can use `ss` or `netstat` to observe the backlog of the listening socket, which corresponds to the `Send-Q` and `Recv-Q` fields of the output. The kernel log can also give you some information; if I remember correctly, there is a kernel log message when a TIME_WAIT bucket overflow occurs.

Hope it helps.

Reinis Rozitis wrote on Thu, 27 Oct 2022 at 19:46:
>
> We are using the hey (https://github.com/rakyll/hey) tool to pump 50k
> requests per second and are seeing only 40k requests being received on the
> backend application side.
>
> Any other tcp configuration that needs to be tuned ?
>
> I am not familiar with the tool but per documentation it should have some
> sort of error status report for the failed requests. What is it for the 10k
> "missing" requests?
>
> Are they "missing" (already) on nginx or just on the proxied backend(s)?
> (in the provided nginx configuration I don't see any access/error log
> configuration - you could enable both to see if you actually get those 50k
> requests to nginx).
>
> Are you testing from a single client (same server) or multiple?
> Do you use keepalive or a new connection per request (in the case of the latter
> you might come close to the ephemeral port limit (~65k) depending on whether
> tcp_tw_reuse is or isn't configured)?
>
> Have you tried with other tools like 'ab', 'httperf' or 'siege' to see if
> you get the same results/problems?
>
> rr
>
> _______________________________________________
> nginx mailing list -- nginx at nginx.org
> To unsubscribe send an email to nginx-leave at nginx.org

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
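[Editor's note: a sketch of the tuning knobs solomon mentions above. The numbers are placeholders rather than recommendations, and the stream block is an assumption based on the thread being about an L4 proxy:]

    # /etc/sysctl.d/99-listen-tuning.conf (apply with: sysctl --system)
    net.core.somaxconn = 4096             # system-wide accept-queue cap
    net.ipv4.tcp_max_syn_backlog = 4096   # SYN backlog
    net.ipv4.tcp_max_tw_buckets = 262144  # TIME_WAIT bucket limit
    net.ipv4.tcp_tw_reuse = 1             # reuse TIME_WAIT sockets for outgoing connections

    # nginx: request a matching application-level backlog on the listener
    stream {
        server {
            listen 443 backlog=4096;
            proxy_pass 127.0.0.1:8443;    # placeholder backend
        }
    }

    # Observe the listener queue (Send-Q = configured backlog, Recv-Q = current queue):
    # ss -ltn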
From mailinglist at unix-solution.de Fri Nov 4 15:01:22 2022
From: mailinglist at unix-solution.de (basti)
Date: Fri, 4 Nov 2022 16:01:22 +0100
Subject: GDPR Proxy
Message-ID: <556b0d44-2375-d2a4-f1dd-98fd6ecffea0@unix-solution.de>

Hello,

we have a website with some embedded YT content, so the idea is to set up a GDPR proxy.

Setup:

User Client -> example.com (embedded content media.example.com) -> YT

So YT can only see the IP of media.example.com.

What about cookies? Can YT track the 'User Client'?

Something like this should be enough, I think:

location /media/(.*)$ {
    proxy_pass https://media.example.com;
    proxy_redirect off;
    proxy_cache off;
    proxy_hide_header X-Real-IP;
    proxy_hide_header X-Forwarded-For;
}

Did I miss something?

Sometimes I see

proxy_set_header Host $upstream_host;

but I have not found any info on what $upstream_host stands for.

Best regards,

From mdounin at mdounin.ru Fri Nov 4 23:04:45 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 5 Nov 2022 02:04:45 +0300
Subject: GDPR Proxy
In-Reply-To: <556b0d44-2375-d2a4-f1dd-98fd6ecffea0@unix-solution.de>
References: <556b0d44-2375-d2a4-f1dd-98fd6ecffea0@unix-solution.de>
Message-ID:

Hello!

On Fri, Nov 04, 2022 at 04:01:22PM +0100, basti wrote:

> we have a website with some embedded content to YT. So the idea is to
> setup a GDPR Proxy.
>
> Setup:
>
> User Client -> example.com (embedded content media.example.com) -> YT
>
> So YT only can see the IP of media.example.com.
>
> What's about cookies?
> Can YT track the 'User Client'?
>
> Something like that should be enough, I think:
>
> location /media/(.*)$ {
> proxy_pass https://media.example.com;
> proxy_redirect off;
> proxy_cache off;
> proxy_hide_header X-Real-IP;
> proxy_hide_header X-Forwarded-For;

Note that proxy_hide_header hides _response_ headers, while X-Real-IP and X-Forwarded-For are only expected to appear in _requests_. To remove request headers, try proxy_set_header instead, e.g.:

proxy_set_header X-Real-IP "";
proxy_set_header X-Forwarded-For "";

See http://nginx.org/r/proxy_set_header for details.

> }
>
> Did I miss something?
> Sometimes I see
> proxy_set_header Host $upstream_host;
>
> But I have not found any info what $upstream_host stands for.

There is no such builtin variable in nginx.

--
Maxim Dounin
http://mdounin.ru/
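[Editor's note: putting Maxim's correction together with the snippet from the original post, a hedged sketch of the location block. The plain prefix form of the location and the cookie-related lines are assumptions added for illustration; they are not from either post:]

    location /media/ {
        proxy_pass          https://media.example.com;
        proxy_redirect      off;
        proxy_cache         off;

        # Request headers are removed with an empty proxy_set_header value;
        # proxy_hide_header only affects response headers.
        proxy_set_header    X-Real-IP "";
        proxy_set_header    X-Forwarded-For "";

        # Assumption: also keep client cookies out of the upstream request and
        # drop any Set-Cookie the upstream sends back.
        proxy_set_header    Cookie "";
        proxy_hide_header   Set-Cookie;
    }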
From kaushalshriyan at gmail.com Mon Nov 7 15:29:51 2022
From: kaushalshriyan at gmail.com (Kaushal Shriyan)
Date: Mon, 7 Nov 2022 20:59:51 +0530
Subject: nginx returns html instead of json response
Message-ID:

Hi,

I am running nginx version nginx/1.22 as a reverse proxy server on CentOS Linux release 7.9.2009 (Core). Is there a way to return a JSON response when I hit http://mydomain.com/api/v1/* instead of the HTML response?

location /api/v1/* {
    internal;
    add_header 'Content-Type' 'application/json charset=UTF-8';

    error_page 502 '{"error": {"status_code": 502,"status": "Bad Gateway"}}';
}

But whenever I try to send a request to /api/v1/users via curl I get the HTML source code in response instead of a JSON response.

Please guide me. Thanks in advance. I look forward to hearing from you.

Best Regards,

Kaushal

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dan.switzer at givainc.com Mon Nov 7 16:08:16 2022
From: dan.switzer at givainc.com (Dan G. Switzer, II)
Date: Mon, 7 Nov 2022 11:08:16 -0500
Subject: nginx returns html instead of json response
In-Reply-To:
References:
Message-ID:

The "internal" keyword indicates that only internal requests can access the location:

http://nginx.org/en/docs/http/ngx_http_core_module.html#internal

So hitting http://mydomain.com/api/v1/* with CURL would never hit this location.

Remove the "internal" keyword, reload Nginx and try it again.

-Dan

On 11/7/2022 10:29 AM, Kaushal Shriyan wrote:
> Hi,
>
> I am running the nginx version: nginx/1.22 as a reverse proxy server
> on CentOS Linux release 7.9.2009 (Core). Is there a way to return json
> response when i hit http://mydomain.com/api/v1/*
>
> instead of the html response.
>
> location /api/v1/* {
>     internal;
>     add_header 'Content-Type' 'application/json charset=UTF-8';
>
>     error_page 502 '{"error": {"status_code": 502,"status": "Bad
> Gateway"}}';
> }
>
> But whenever I try to send a request to /api/v1/users via curl I get
> the HTML source code in response instead of JSON response.
>
> Please guide me. Thanks in advance. I look forward to hearing from you.
>
> Best Regards,
>
> Kaushal
>
>
>
> _______________________________________________
> nginx mailing list -- nginx at nginx.org
> To unsubscribe send an email to nginx-leave at nginx.org

--
Dan G. Switzer, II
Giva, Inc.
Email: dan.switzer at givainc.com
Web Site: http://www.givainc.com

See Our Customer Successes
http://www.givainc.com/customers-casestudies.htm

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kaushalshriyan at gmail.com Mon Nov 7 17:47:00 2022
From: kaushalshriyan at gmail.com (Kaushal Shriyan)
Date: Mon, 7 Nov 2022 23:17:00 +0530
Subject: nginx returns html instead of json response
In-Reply-To:
References:
Message-ID:

On Mon, Nov 7, 2022 at 9:40 PM Dan G. Switzer, II wrote:

> The "internal" keyword indicates only internal request can access the
> location:
>
> http://nginx.org/en/docs/http/ngx_http_core_module.html#internal
>
> So hitting ttp://mydomain.com/api/v1/* with CURL would never hit this
> location.
>
> Remove the "internal" keyword, reload Nginx and try it again.
>
> -Dan

Thanks Dan for the email response. I am still seeing the below response in HTML format. I am attaching the nginxtest.conf file for your reference.

*/var/log/nginx/access.log*
48.219.29.210 - - [07/Nov/2022:17:33:38 +0000] "POST /apis HTTP/1.1" 500 1678279 "-" "PostmanRuntime/7.29.2" "-"

*Response in html format*
############################################################################################################
500
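[Editor's note: the thread above asks how to return a JSON body instead of an HTML page for API errors behind the proxy. As a general illustration only, one common way to do this is to point error_page at a named location; the backend address, the status code handled, and the payload below are assumptions, not the poster's actual nginxtest.conf:]

    location /api/v1/ {
        proxy_pass              http://127.0.0.1:8080;   # assumed backend
        proxy_intercept_errors  on;                      # let error_page handle upstream error statuses
        error_page              502 = @json_502;
    }

    location @json_502 {
        default_type application/json;
        return 502 '{"error": {"status_code": 502, "status": "Bad Gateway"}}';
    }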