On my Splunk server I am seeing the following every 5 minutes:
Apr 21 05:14:20 ts-sl-server sudo: root : TTY=pts/0 ; PWD=/opt/splunk/etc/apps/ossec/bin ; USER=root ; COMMAND=/var/ossec/bin/agent_control -l
Apr 21 05:19:20 ts-sl-server sudo: root : TTY=pts/0 ; PWD=/opt/splunk/etc/apps/ossec/bin ; USER=root ; COMMAND=/var/ossec/bin/agent_control -l
The consequence is that the OSSEC app's stats and graphs are meaningless because the local server's sudo events vastly outnumber the ones from the forwarders. Is this a common problem, and what are folks doing about it?
Obviously I could remove "authpriv.* /var/log/secure" from syslog.conf, but that hardly seems like the smart play, and our security benchmark requires it to be there.
Ideas?
The cleanest answer would be to tell OSSEC not to alert on it in the first place. On the OSSEC server, edit /var/ossec/rules/local_rules.xml to add a new entry.
You can also null-route it as rroberts suggests, or you can create an eventtype and tag it as noise; most of the OSSEC app's searches and dashboards are configured to ignore tag::eventtype=noise, as sketched below.
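If you go the eventtype route, it would look roughly like this (a minimal sketch; the eventtype name, sourcetype, and search terms are illustrative placeholders, so adjust them to match your actual events):

# eventtypes.conf -- define an eventtype that matches the polling events
[ossec_agent_control_polling]
search = sourcetype=syslog sudo "COMMAND=/var/ossec/bin/agent_control"

# tags.conf -- tag that eventtype as noise so the OSSEC app searches skip it
[eventtype=ossec_agent_control_polling]
noise = enabled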
Assuming you want to configure OSSEC, it will look something like this:
<rule id="100010" level="0">
  <if_sid>5402</if_sid>
  <user>root</user>
  <match>COMMAND=/var/ossec/bin/agent_control</match>
  <description>Suppress alerts on Splunk polling</description>
</rule>
This tells OSSEC that when rule 5402 fires (which I'm guessing is the one you're seeing), it should further check whether the user is root and the event contains the raw text in the <match> section. If both are true, the alert ID changes from 5402 to 100010, and since the new rule's level is zero, OSSEC will not generate an alert. Remember to restart OSSEC after editing local_rules.xml so the new rule takes effect.
Thanks, that does the trick. I have used OSSEC rules before, so this was familiar. I was mostly interested in the party line (best practice) for this situation.
You could route these events to the null queue so they aren't indexed.
Check out: http://docs.splunk.com/Documentation/Splunk/5.0.2/Deploy/Routeandfilterdatad
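For what it's worth, a null-queue filter for just these events would look roughly like this (a minimal sketch; the transform name is made up, and [syslog] is assumed to be the sourcetype these events come in under, so adjust both to fit your setup):

# props.conf (on the indexer or heavy forwarder)
[syslog]
TRANSFORMS-null_ossec_polling = null_ossec_polling

# transforms.conf -- send matching events to the null queue
[null_ossec_polling]
REGEX = COMMAND=/var/ossec/bin/agent_control
DEST_KEY = queue
FORMAT = nullQueue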
Since it is potentially Splunk itself causing the extra messages, I thought they might offer more guidance than "go set up a null route". But maybe the OSSEC integration is third-party? I'm not sure about that.
I have already been trying and failing to get a null route to work for another issue I have with unneeded data.
Thanks for the suggestion anyway.