logging through syslog
merlin corey
merlincorey at dc949.org
Sat Dec 19 04:14:23 MSK 2009
On Thu, Dec 17, 2009 at 9:33 PM, Ryan Malayter <malayter at gmail.com> wrote:
> On Thursday, December 17, 2009, merlin corey <merlincorey at dc949.org> wrote:
>>
>> If you want to wear that security blanket, go ahead.
>>
>> If you are worried about the integrity of your logfiles, you should
>> implement some kind of integrity checking on every important point.
>> This means that even if you do push things over your favorite secure
>> protocol to another system you'll want to do some kind of integrity
>> checking there because someone could break in and tamper with the data
>> on the "secure" system.
>
> Exploiting nginx or a web app gives you access to the system where the
> logs are if they are on disk. It is not easy to get from there to a
> completely separate syslog server that is hardened. Yes, you can send
> fake data to the syslog server, but you cannot erase evidence of your
attack without breaking into it as well. WORM media can be used on the
log server. Defense in depth.
Nice sideways response. The main statement (for me) was that if you
care about integrity you should check it in multiple places. That was
followed by an intimation that no system is secure, even if you have
hardened it, as long as it is plugged into a network that has any
chance of being reachable from the internet (and often even when it
isn't, as long as it is powered on). Just because you think it would
be hard for most people to hop from the exposed front-end web server
to the hardened syslog server certainly doesn't mean it is hard for
everyone. We both know it only takes that one person, that one time,
with that one attack that you (and the rest of the world) aren't
aware of, and the box is owned. That holds true for nginx, ssh,
syslog/rsyslog, and any other software that listens.
You have an nginx exploit? I shouldn't need to explain that I'm not
actually asking you about vulnerabilities here.
At any rate, if data integrity is your concern, then implementing
integrity checking on multiple fronts - including within your hardened
server(s) - is certainly a good idea, and I stand by that. Do you care
to respond directly to that statement?
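To make that concrete, here is roughly what I mean - a minimal Python
sketch (the paths are placeholders, nothing nginx ships) that writes a
SHA-256 manifest of a log directory, so the same manifest can be
regenerated independently on the web server and on the log host and
the two compared:

import hashlib
import os
import sys

def sha256_of(path, chunk_size=65536):
    """Stream a file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(log_dir, manifest_path):
    """Record a digest for every file in log_dir; rerun this against
    another host's copy of the logs and diff the two manifests."""
    with open(manifest_path, "w") as out:
        for name in sorted(os.listdir(log_dir)):
            path = os.path.join(log_dir, name)
            if os.path.isfile(path):
                out.write("%s  %s\n" % (sha256_of(path), name))

if __name__ == "__main__":
    # e.g. python log_manifest.py /var/log/nginx /tmp/nginx.manifest
    write_manifest(sys.argv[1], sys.argv[2])

Any line that differs between the two manifests tells you a copy was
altered (or at least diverged), which is exactly the kind of
cross-check I'm arguing for.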
>> Security folks know that everything breaks, so they plan for and
>> monitor breakages.
>
> Yes, and one of those checks is "how can I trust my log files to
> provide evidence of attack so I can fix things, comply with
> regulations, and help law enforcement catch the bastards". Having your
> only logs on the system with the largest attack surface, the web
> server, is not a good idea.
No, it certainly is not a good idea to have your only logs on the web
server, but I never suggested any such silly thing anyway (nice one).
At least we clearly agree here ;).
>> What's the plan for when the syslog server goes down? No logs at all then?
>
> Standard practice is to send logs to multiple log servers, via unicast
> or multicast. Or at least send them to local disk and syslog so you
> can compare. PCI, HIPAA, SOX, and many other regulations have
> requirements for log retention and authentication.
All the more reason for integrity checking then ;).
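On the multiple-collectors point: when the thing doing the logging is
an application you control, fanning each record out is trivial. A
rough Python sketch (the hostnames and file path are invented for
illustration; this is not something nginx does natively) that sends
every record to two syslog collectors and keeps a local copy to
compare against later:

import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger("access")
logger.setLevel(logging.INFO)

# Made-up collector hostnames; 514/UDP is the standard syslog port.
for host in ("loghost-a.example.com", "loghost-b.example.com"):
    logger.addHandler(SysLogHandler(address=(host, 514)))

# Keep a local copy on disk as well, so all three can be compared.
logger.addHandler(logging.FileHandler("/var/log/myapp/access.log"))

logger.info("GET /index.html 200")

nginx itself just writes to files, so in practice something outside of
nginx would have to do the forwarding, but the comparison idea is the
same.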
> Are you being serious here, or just contrarian?
I'm extremely serious.
This conversation started because someone else wanted to use syslog
for log analysis, which I explained is unnecessary.
You are concerned about conforming to PCI, HIPAA, and SOX - that's
great; your reasons for wanting to use syslog are grounded in industry
standard practices for meeting those needs.
That's not what the other guy needed, and it's apparently not what
most people need, because we don't have a large group of users with
money clamoring to have Igor add syslog support officially.
As a final point, I don't mean to paint you as the one selling the
security blanket; in fact, I'd like to point out to you (and everyone
else) that I did note and appreciate your use of the term "tamper
resistant" logs rather than "tamper proof" ;)... I just made an
offhand comment and look at us now XD
-- Merlin