sflow module and increasing useragent

Neil Mckee neil.mckee.ca at gmail.com
Fri Aug 12 19:59:16 UTC 2011

On Aug 12, 2011, at 10:09 AM, Mark Moseley wrote:

>>> Cool, thanks, I'll take a look at that -- though working with XDR
>>> looks like it could be somewhat painful :)
>>> On a related note, I get a trickle of errors like this in the nginx error log:
>>> sFlow agent error: sfl_agent_error: receiver: flow sample too big for datagram
>>> which must be hitting the in-code limit of 1400 bytes. Would it be
>>> quite horrible if I were to bump up SFL_DEFAULT_DATAGRAM_SIZE over
>>> 1460? I'm assuming under normal conditions, it'll just fragment, which
>>> I'm ok with at the rate they're occurring now. But again, I'm worried
>>> about some data structure in the code (that my casual reading of the
>>> code isn't turning up) will overflow.
>> It looks like you'd have to bump up both SFL_MAX_DATAGRAM_SIZE and SFL_DEFAULT_DATAGRAM_SIZE.  For example:
>> #define SFL_MAX_DATAGRAM_SIZE 3000
>> #define SFL_DEFAULT_DATAGRAM_SIZE 3000
>> If you are not using jumbo frames and packet-loss-in-transit ever hits 50%, you might have a problem getting data through to the collector (just when you really need it), so in the end the right solution is to apply the length limits as proposed on the sFlow mailing list. This can be done in sfwb_sample_http() at the point where the string lengths are filled in for the sample. I intend to put that fix in very soon, and add the missing "X-Forwarded-For" and "req_bytes" fields too. If you think there are any other fields or counters missing, now is a good time to chime in.
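The length-limit idea above can be sketched as follows. This is only an illustration of the approach, not the actual module code: SFWB_MAX_URI_LEN, sfl_string and sfwb_set_string are hypothetical names, but the point is the same one made in the quoted mail, i.e. clamp each string's recorded length when it is filled into the sample so one long URI or referrer cannot push the flow sample past the datagram limit:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: cap each string field's length as it is copied
 * into the flow sample.  SFWB_MAX_URI_LEN and sfl_string are
 * illustrative names, not taken from the real nginx sflow module. */

#define SFWB_MAX_URI_LEN 255

typedef struct {
    const char *str;   /* pointer to the original string */
    uint32_t    len;   /* length actually recorded in the sample */
} sfl_string;

static void sfwb_set_string(sfl_string *dst, const char *src, uint32_t max)
{
    dst->str = src;
    if (src == NULL) {
        dst->len = 0;
    } else {
        size_t len = strlen(src);
        /* truncate rather than drop the whole sample */
        dst->len = (len > max) ? max : (uint32_t)len;
    }
}
```

With a cap like this in place, the "flow sample too big for datagram" error stops being driven by pathological request strings, and bumping the datagram size becomes a tuning choice rather than a necessity.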
> Cool, I'll play with that.
> As far as other counters/fields, I was sort of curious why there's no
> timestamp field. Obviously you could just use the time the packet got
> sent as the timestamp, but I imagine precision-minded people would get
> bent out of shape about not having the exact time as recorded by the
> web server.

This might be another one to bring up on http://groups.google.com/group/sflow, but the short answer is that if you timestamp on receipt, that will be accurate to within a second or so, and ordering is preserved too. That should be fine for most applications, I think. The kind of analysis where you are trying to sequence events that happened on different servers requires accurate clock-sync and 1-in-1 sampling, and it's likely to impact performance too. That's not what sFlow was designed for.

However, if anyone needs an extra timestamp they can always include another XDR structure that goes along with the "http_request" and "extended_socket" structures that are sent here. The sFlow standard allows you to define and send your own structures; you just tag them using an IANA-registered enterprise number for uniqueness. The published standard ones use enterprise=0, so http_request is enterprise=0,format=2201 and extended_socket_ipv4 is enterprise=0,format=2100. So if someone from, say, CERN, wanted to include an extra structure with picoseconds since the big bang, like this:

struct extended_timestamp {
  unsigned hyper timestamp;   /* e.g. picoseconds since the big bang */
  unsigned int resolution;
};
They could tag it with enterprise=96,format=7 and send it out.  An sFlow decoder that doesn't know what this is should just skip over it.
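To make the tagging concrete, here is a small sketch of how such a custom structure might be XDR-encoded. The sFlow v5 data_format packs the enterprise number into the upper 20 bits and the format number into the lower 12; the function and buffer-handling names below are illustrative, not from the actual sFlow agent library:

```c
#include <stddef.h>
#include <stdint.h>

/* sFlow v5 data_format: enterprise in the upper 20 bits,
 * format in the lower 12 bits. */
static uint32_t sfl_data_format(uint32_t enterprise, uint32_t format)
{
    return (enterprise << 12) | (format & 0xFFF);
}

/* XDR is big-endian with 4-byte alignment. */
static uint8_t *xdr_put_u32(uint8_t *p, uint32_t v)
{
    p[0] = (uint8_t)(v >> 24);
    p[1] = (uint8_t)(v >> 16);
    p[2] = (uint8_t)(v >> 8);
    p[3] = (uint8_t)v;
    return p + 4;
}

static uint8_t *xdr_put_u64(uint8_t *p, uint64_t v)
{
    p = xdr_put_u32(p, (uint32_t)(v >> 32));
    return xdr_put_u32(p, (uint32_t)v);
}

/* Hypothetical encoder for the extended_timestamp sketch above:
 * tag word, opaque body length, then the body
 * (unsigned hyper + unsigned int = 12 bytes). */
static size_t encode_extended_timestamp(uint8_t *buf, uint64_t ts, uint32_t res)
{
    uint8_t *p = buf;
    p = xdr_put_u32(p, sfl_data_format(96, 7)); /* enterprise=96, format=7 */
    p = xdr_put_u32(p, 12);                     /* opaque body length */
    p = xdr_put_u64(p, ts);                     /* unsigned hyper timestamp */
    p = xdr_put_u32(p, res);                    /* unsigned int resolution */
    return (size_t)(p - buf);
}
```

A decoder that doesn't recognize the tag can use the length word that follows it to skip the whole structure, which is what makes this kind of vendor extension safe to send to any collector.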


> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
