Blocked auditqueue causing randomly skipped searches, scheduler slowness, and slow UI on SH/SHC

hrawat_splunk
Splunk Employee

A blocked auditqueue can cause randomly skipped searches, scheduler slowness on a SH/SHC, and a slow UI.

1 Solution

hrawat_splunk
Splunk Employee

With every new major release of Splunk, more and more components emit audit logs, and the volume of audit events has grown significantly; the UI, search scheduler, search dispatcher, and other components all generate them.
When the auditqueue is full, all of these components serialize on inserting their audit events into it, resulting in skipped searches, slow UI logins, and similar symptoms.
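Before applying the workaround, you can check whether the auditqueue is actually the bottleneck. A sketch of a diagnostic, assuming a default install and the standard group=queue messages in metrics.log:

# Rough check for auditqueue pressure on the affected search head.
# Queue lines in metrics.log carry current_size_kb/max_size_kb, and
# blocked=true when the queue is saturated.
$SPLUNK_HOME/bin/splunk search \
  'index=_internal source=*metrics.log* group=queue name=auditqueue
   | timechart span=5m max(current_size_kb) AS max_kb, count(eval(blocked="true")) AS blocked' \
  -earliest_time '-4h'

A sustained max_kb near the queue's max_size_kb, or nonzero blocked counts, points at the serialization described above.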

To mitigate the problem, apply the following workaround, which uses file monitoring instead of direct in-memory indexing.

Workaround
Disable direct indexing of audit events and fall back on file monitoring instead. This decouples the scheduler and UI threads from the ingestion pipeline queues.

Steps
1. In etc/system/local/audit.conf (or any audit.conf of your choice), turn off direct indexing of the audit trail:

[auditTrail]
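# With queueing=false, audit events are written only to audit.log and are
# no longer pushed directly into the indexing queue.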
queueing=false

Then add a stanza to etc/system/local/inputs.conf (or any inputs.conf of your choice) to monitor audit.log:

[monitor://$SPLUNK_HOME/var/log/splunk/audit.log*]
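# Pick the audit events back up from the file so they still land in _audit.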
index = _audit
source = audittrail
sourcetype = audittrail

2. Stop Splunk (steps 2-4 are sketched as shell commands after this list).

3. Delete all audit.log* files (to avoid re-ingestion). This step is optional if you don't mind duplicate audit events.

4. Start Splunk.
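On a typical Linux install, steps 2-4 map to something like the following. This is a sketch: it assumes a default $SPLUNK_HOME and that discarding the on-disk audit.log* history is acceptable.

$SPLUNK_HOME/bin/splunk stop
# Optional: remove old audit logs so the new monitor input does not
# re-ingest events that were already indexed directly (avoids duplicates).
rm -f $SPLUNK_HOME/var/log/splunk/audit.log*
$SPLUNK_HOME/bin/splunk start

After the restart, a quick sanity check (again a sketch, not an official verification procedure):

# Confirm the effective audit.conf and inputs.conf settings.
$SPLUNK_HOME/bin/splunk btool audit list auditTrail --debug
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -A 3 'audit.log'
# Confirm fresh audit events are still arriving via the monitored file.
$SPLUNK_HOME/bin/splunk search 'index=_audit source=audittrail | head 5' -earliest_time '-15m'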


 
