Getting Data In

Splunk Universal Forwarders audit logs merged - audit.conf configuration

edoardo_vicendo
Contributor

Hi All,

As indicated here (https://community.splunk.com/t5/Getting-Data-In/Why-am-I-unable-to-monitor-SPLUNK-HOME-var-log-splun...), I have been able to ingest audit.log from our Universal Forwarders with the audittrail sourcetype.

Unfortunately, sometimes the events read from $SPLUNK_HOME/var/log/splunk/audit.log are merged into a single event (even though each event is on its own line and starts with a timestamp).
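To see how often this happens, a search along these lines can help (my own sketch; linecount is the default field Splunk populates per event, and <your_uf_host> is just a placeholder for one of the affected forwarders):

index=_audit sourcetype=audittrail host=<your_uf_host> linecount>1

Correctly broken events should have linecount=1, so anything returned here has been merged.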

In our deployment we have Universal Forwarders sending data to Heavy Forwarders, which then forward it to the Indexers:

UF --> HF --> IDX

What I tried was deploying a props.conf on the HF with the following:

 

[audittrail]
SHOULD_LINEMERGE = false
SEDCMD-strip_auditlogger = s/\d{2}-\d{2}-\d{4} \d{2}:\d{2}:\d{2}\.\d{3}.* INFO  AuditLogger - //g

 

But even the SEDCMD is not applied.

And with the following command I can see that the configuration is properly read on the HF:

 

splunk btool props list --debug
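To narrow the output down to just the relevant stanza, btool also accepts the stanza name as a filter, for example:

splunk btool props list audittrail --debug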

 

Because of that, I tried adding this props.conf directly on the UF and it works (but it is not a good solution for us, because we don't want to force local processing on the UF).

 

[audittrail]
SHOULD_LINEMERGE = false
SEDCMD-strip_auditlogger = s/\d{2}-\d{2}-\d{4} \d{2}:\d{2}:\d{2}\.\d{3}.* INFO  AuditLogger - //g
force_local_processing = true

 

I believe the issue is related to the fact that the audit logs from the UF are sent to the HF indexQueue instead of the parsingQueue.

I also tried adding the following audit.conf on both the UF and the HF, without any luck:

 

[default]
queueing=false

 

Reading further in the Splunk documentation (https://docs.splunk.com/Documentation/Splunk/8.0.4/Admin/Auditconf):

 

queueing = <boolean>
* Whether or not audit events are sent to the indexQueue.
* If set to "true", audit events are sent to the indexQueue.
* If set to "false", you must add an inputs.conf stanza to tail the
  audit log for the events reach your index.
* Default: true

 

My questions are:

  • Do you know what is meant by "If set to "false", you must add an inputs.conf stanza to tail the audit log for the events reach your index."?
  • Do you have any idea how to apply the props.conf on the HF to the audit events coming from the UF, without having to deploy it directly on the UF with force_local_processing=true?

Thanks a lot,
Edoardo

 


edoardo_vicendo
Contributor

 

I have been able to solve this with Splunk support.

So basically there are 2 options:

  • OPTION 1 - enable UF audit.log monitoring + force local processing on UF

This option slightly increases CPU consumption on the UF, because the events have to be parsed directly on the UF, but it is enabled only for this specific sourcetype and usually only a few audit events per day are generated on each UF.

On UF:

myapp/local/inputs.conf

# Specific configuration to enable monitoring Splunk Universal Forwarder audit logs
# by default they are sent to null queue

#*nix
[monitor://$SPLUNK_HOME/var/log/splunk/audit.log]
index = _audit
sourcetype = audittrail
source = audittrail

#Windows
[monitor://$SPLUNK_HOME\var\log\splunk\audit.log]
index = _audit
sourcetype = audittrail
source = audittrail

 

myapp/local/props.conf

[audittrail]
SHOULD_LINEMERGE = false
force_local_processing = true

 

  • OPTION 2 - enable UF audit.log monitoring + enable HF event parsing

This has been tested only on *nix.

On UF (local/inputs.conf):

[monitor://$SPLUNK_HOME/var/log/splunk/audit.log]
index = _audit
sourcetype = ufw_audittrail
source = ufw_audittrail

 

On HF (local/props.conf):

[ufw_audittrail]
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 30
SHOULD_LINEMERGE = false
TIME_FORMAT = %m-%d-%Y %H:%M:%S.%l %z
TIME_PREFIX = ^
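With either option in place (and the involved instances restarted so they pick up the new .conf files), a quick sanity check from the search head could look like this (my own sketch; it simply groups the ingested audit events by linecount):

index=_audit (sourcetype=audittrail OR sourcetype=ufw_audittrail) | stats count by host, sourcetype, linecount

If the line breaking works as intended, every row should show linecount=1.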

 

Best Regards,
Edoardo
