Hi All!
We have a setup where the servers send their audit logs to a central log server (named syslog.a.b, where Splunk also sits) through the audisp-remote plugin.
So on the central log server there is an auditd daemon which listens on a port and writes every incoming audit log to a SINGLE file.
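For reference, a minimal sketch of that kind of setup, assuming default config paths; the server name and port are illustrative (60 is audisp-remote's default):

```ini
# /etc/audisp/audisp-remote.conf on each client (illustrative values)
remote_server = syslog.a.b
port = 60

# /etc/audit/auditd.conf on the central log server
tcp_listen_port = 60
log_file = /var/log/audit/audit.log
```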
Every entry begins with node=X type=... or node=Y type=...
This file is chewed (correctly) by Splunk, meaning the search "sourcetype=linux:audit node=X*" shows all audit logs coming from server X.
But the "Linux Auditd" app "sees"/shows only the local server (syslog.a.b) [no other hosts are available]
I tried the "configure" tab several times; it detects only the syslog.a.b host. I tried to clean up and reinstall, still not working.
Where should I look?
Any hints are welcome 😉
Thanks!
PS: this is a single Splunk Enterprise installation, latest version (v6.5.2)
It sounds like the sourcetype is being correctly set in your inputs.conf monitor stanza, so just add the following to set the host field correctly.
TA_linux-auditd/local/props.conf:
[linux:audit]
TRANSFORMS-node = auditd_node
TA_linux-auditd/local/transforms.conf:
[auditd_node]
REGEX = \snode=(\S+)
FORMAT = host::$1
DEST_KEY = MetaData:Host
Hi!
Thanks for the tip, at first glance it seems that this is working, the hosts started to appear in the Linux Auditd app...
Yuppiiiiii!
Testing it, and I will report here.
BR
-BBB-
Cool, this has been added to the next release of the app.
I've just updated this answer's REGEX because another user reported that it erroneously matched on "inode=".
Ok, thanks, I've updated mine and am checking as well.
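To illustrate why the leading \s in the REGEX matters, here is a quick sanity check (the sample event string is made up for demonstration, not taken from the thread):

```python
import re

# The \s prefix requires whitespace before "node=", so fields like
# "inode=131079" are not matched by mistake.
pattern = re.compile(r"\snode=(\S+)")

event = "type=SYSCALL msg=audit(1): inode=131079 node=serverX success=yes"
print(pattern.search(event).group(1))  # serverX

# Without the \s, the "node=" inside "inode=" matches first:
bad = re.search(r"node=(\S+)", event)
print(bad.group(1))  # 131079
```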
So, if I understand correctly, you have a centralized log server collecting audit logs from other servers, aggregating them into a single file, and you'd like Splunk to identify the host name of each device?
One option is host_regex in inputs.conf; note, though, that host_regex extracts the host from the path of each monitored file, not from the raw event, so with everything aggregated into a single file you would need a props/transforms host override (as in the other answer) instead.
Another option, since you mentioned these audit logs come from other servers: you could consider bringing some structure to your syslogs and establishing syslog rules to separate each server's logs into its own file or directory for monitoring. I touched a little on organizing log sources in syslog and monitoring them in Splunk in another post, in case you're interested: https://answers.splunk.com/answers/504420/forward-syslogs-with-correct-sourcetypes.html#answer-50445...
We also centralize the normal system logs here, and those are handled with RSyslog, as they should be ;-), in separate files, rotated, etc.
But the recommendation for audit logs is that they should not be passed through "third party" programs running in user space... That's why the audit daemons talk directly to the "central" auditd daemon, which writes everything to a single file [there is no configuration option in auditd to separate the files].
But it seems that Doksu's solution is working 😉
Thanks