
Blocked auditqueue causing random skipped searches and scheduler slowness on SH/SHC. Slow UI.

hrawat
Splunk Employee

A blocked auditqueue can cause randomly skipped searches, scheduler slowness on a search head or search head cluster (SH/SHC), and a slow UI.

1 Solution

hrawat
Splunk Employee

With every major release of Splunk, more components add audit logging, and the volume of audit logs has grown significantly. The UI, search scheduler, search dispatcher, and other components all generate audit events.
If the auditqueue is full, each of these components blocks while waiting to insert its audit event into the queue, which results in skipped searches, slow UI logins, and similar symptoms.
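Before applying the workaround, you can check whether the auditqueue is actually the bottleneck. The search below is only a rough sketch: it assumes the audit queue is reported in metrics.log under group=queue with name=auditqueue, and that a blocked=true field appears on samples where the queue is blocked (queue metric field names can vary slightly between releases).

index=_internal source=*metrics.log* sourcetype=splunkd group=queue name=auditqueue
| timechart span=5m max(current_size_kb) AS current_size_kb count(eval(blocked=="true")) AS blocked_samples

Sustained blocked_samples, with current_size_kb pinned near the queue's maximum, is a strong sign that the components above are waiting on the auditqueue.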

To mitigate the problem, apply the following workaround, which uses file monitoring instead of in-memory (direct) indexing of audit events.

Workaround
Disable direct indexing of audit events and fall back on file monitoring of audit.log instead. This decouples the scheduler and UI threads from the ingestion pipeline queues.

Steps:
1. In etc/system/local/audit.conf (or another audit.conf of your choice), turn off direct indexing of the audit trail:

[auditTrail]
queueing=false

Then add a stanza to etc/system/local/inputs.conf (or another inputs.conf of your choice) to monitor audit.log:

[monitor://$SPLUNK_HOME/var/log/splunk/audit.log*]
index = _audit
source = audittrail
sourcetype = audittrail

2. Stop Splunk.

3. Delete all existing audit.log* files to avoid re-ingestion. This step is optional if you don't mind duplicate audit events.

4. Start Splunk.
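After Splunk comes back up, it is worth verifying that audit events are still reaching the _audit index through the file monitor. A minimal sanity-check search, assuming the inputs.conf stanza above (which sets both source and sourcetype to audittrail); the -15m window is just an arbitrary recent range:

index=_audit source=audittrail sourcetype=audittrail earliest=-15m
| stats count latest(_time) AS latest_audit_event

If the count keeps growing after the restart, the monitor input has taken over from direct indexing.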


 


