A blocked auditqueue can cause randomly skipped searches, scheduler slowness on SH/SHC, and a slow UI.
With every new major release of Splunk, more components emit audit logs, and the volume of audit events has grown significantly. The UI, search scheduler, search dispatcher, and other components all generate audit events.
If the auditqueue is full, all of these components block while serializing their audit events into it, which results in skipped searches, slow UI logins, and similar symptoms.
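To confirm that the auditqueue is actually the bottleneck before changing anything, a search over the internal queue metrics along these lines can help. This is only a sketch: it assumes the standard group=queue measurements that splunkd writes to metrics.log, and the time range is an arbitrary example.
index=_internal source=*metrics.log* group=queue name=auditqueue earliest=-4h
| timechart span=1m max(current_size_kb) AS queue_kb, count(eval(blocked="true")) AS blocked_samples
A queue size that stays pinned near its maximum, or any non-zero blocked_samples, suggests that producers such as the scheduler and UI are stalling on the queue.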
To mitigate the problem, apply the following workaround, which uses file monitoring instead of in-memory indexing.
Workaround
Disable direct indexing of audit events and fall back to file monitoring instead. This decouples the scheduler and UI threads from the ingestion pipeline queues.
Steps:
1. In $SPLUNK_HOME/etc/system/local/audit.conf (or any audit.conf of your choosing), turn off direct indexing of the audit trail:
[auditTrail]
queueing=false
Then add a stanza to $SPLUNK_HOME/etc/system/local/inputs.conf (or any inputs.conf of your choosing) to monitor audit.log:
[monitor://$SPLUNK_HOME/var/log/splunk/audit.log*]
index = _audit
source = audittrail
sourcetype = audittrail
2. Stop Splunk.
3. Delete all audit.log* files to avoid re-ingestion. This step is optional if duplicate audit events are acceptable.
4. Start Splunk and verify the change as shown below.
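After the restart, the change can be sanity-checked from the CLI. This is a sketch using the standard btool and search commands; adjust $SPLUNK_HOME to your installation and expect to authenticate for the search:
$SPLUNK_HOME/bin/splunk btool audit list auditTrail --debug
# should show queueing = false, resolved from your audit.conf
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -A3 'audit.log'
# should show the monitor stanza added above
$SPLUNK_HOME/bin/splunk search 'index=_audit earliest=-15m | head 5'
# should return recent audit events, now arriving via the file monitor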