Monitoring HTTP returns

Peter Booth peter_booth at me.com
Wed Apr 11 05:17:14 UTC 2018


Jeff,

There are some very good reasons for doing things in what sounds like a heavy, inefficient manner.

The first point is that there are some big differences between application code/business logic and monitoring code:

Business logic, which is what your nginx instance is doing, is what makes you money, so maximizing its uptime is critical.
Monitoring code typically has a different release cycle; often it is deployed in a tactical, reactive fashion.

By decoupling the monitoring from the application logic you protect against the risk that your monitoring code
breaks your application, which would be a Bad Thing. The converse point is that your monitoring software is
most valuable precisely when your application is failing or overloaded. That's why it's a good thing if your
monitoring code doesn't depend upon the health of the infrastructure it is watching.

One example of a product that is in some ways comparable to nginx, but did things the other way, was the
early versions of IBM's WebSphere application server. Version 2 persisted all configuration settings as EJBs,
which meant that there was no way to view a WebSphere instance's configuration when the app server
wasn't running. The product's designers were so eager to drink their EJB Kool-Aid that they didn't stop to ask,
"Is this smart?" That's why, back in 1998, you could watch an IBM professional services consultant spend weeks
installing a WebSphere instance, or you could download and install WebLogic Server in 15 minutes yourself.

Tailing a log file doesn't sound sexy, but it's also pretty hard to mess up. I monitored a high-traffic email site with a
very short Ruby script that would tail an nginx log, pushing messages ten at a time as UDP datagrams to an InfluxDB
instance. The script would do its thing for 15 minutes and then die; cron ensured a new instance started every
15 minutes. It was more efficient than a shell script because it didn't start new processes in a pipeline.
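In spirit it looked something like the sketch below. To be clear, this is a reconstruction, not the original
script: the log path, the InfluxDB host and port, the measurement name, and the assumption that the status code
is the ninth whitespace-separated field of the default combined log format are all mine.

#!/usr/bin/env ruby
require 'socket'

LOG_PATH    = '/var/log/nginx/access.log'  # assumed log location
INFLUX_HOST = 'influxdb.example.com'       # hypothetical collector host
INFLUX_PORT = 8089                         # InfluxDB's conventional UDP port
BATCH_SIZE  = 10
LIFETIME    = 15 * 60  # die after 15 minutes; cron starts a fresh instance

sock     = UDPSocket.new
deadline = Time.now + LIFETIME
batch    = []

File.open(LOG_PATH) do |log|
  log.seek(0, IO::SEEK_END)          # start at the end of the file, like tail -f
  while Time.now < deadline
    line = log.gets
    unless line
      sleep 0.5                      # at EOF: wait for nginx to append more
      next
    end
    status = line.split[8]           # 9th field of the default combined format
    next unless status               # skip lines that don't parse
    # InfluxDB line protocol: measurement,tag field timestamp-in-nanoseconds
    batch << "nginx_status,code=#{status} value=1 #{(Time.now.to_f * 1e9).to_i}"
    if batch.size >= BATCH_SIZE
      sock.send(batch.join("\n"), 0, INFLUX_HOST, INFLUX_PORT)
      batch.clear
    end
  end
end
sock.send(batch.join("\n"), 0, INFLUX_HOST, INFLUX_PORT) unless batch.empty?

Note the sketch doesn't handle log rotation; letting each instance die after 15 minutes and having cron start
a fresh one is what covers that.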

I like the Scalyr guide, but I disagree with their advice on active monitoring. I think it's smarter to use real user
requests to test whether servers are up. I have seen many high-profile sites that end up serving more synthetic
requests than real, customer-initiated requests.


> On 11 Apr 2018, at 12:19 AM, Jeff Abrahamson <jeff at p27.eu> wrote:
> 
> I want to monitor nginx better: http returns (e.g., how many 500's, how many 404's, how many 200's, etc.), as well as request rates, response times, etc.  All the solutions I've found start with "set up something to watch and parse your logs, then ..."
> 
> Here's one of the better examples of that:
> 
> https://www.scalyr.com/community/guides/how-to-monitor-nginx-the-essential-guide
> 
> Perhaps I'm wrong to find this curious.  It seems somewhat heavy and inefficient to put this functionality into log watching, which means another service and being sensitive to an eventual change in log format.
> 
> Is this, indeed, the recommended solution?
> 
> And, for my better understanding, can anyone explain why this makes more sense than native nginx support of sending UDP packets to a monitor collector (in our case, telegraf)?
> -- 
> 
> Jeff Abrahamson
> +33 6 24 40 01 57
> +44 7920 594 255
> 
> http://p27.eu/jeff/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
