Getting Data In

Is it possible to dump / drop data that is currently in queue?

onlineops
Explorer

Production had a bug. One result was massive "over-logging" on the production nodes, and those logs were forwarded (via universal forwarder) to our Splunk server.

Development reverted production, but Splunk remained "log-jammed" for several hours, as indicated by the queues:

[screenshot: queue fill levels]

We know that we can clear the backlog on the clients (Splunk universal forwarders) by stopping the forwarder, cleaning out the application logs as well as the following forwarder files, and then restarting:

<application logs>

\var\log\splunk\metric*
\var\lib\splunk\fishbucket*

<restart forwarder>
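The client-side steps above can be sketched as a small shell function. This is a hedged sketch only: `clean_uf_backlog` is not a Splunk command, and the *nix-style `$SPLUNK_HOME` layout and `splunk stop`/`splunk start` CLI are assumptions (adjust paths for Windows forwarders).

```shell
# Hypothetical helper sketching the manual forwarder cleanup, assuming a
# *nix-style $SPLUNK_HOME layout (e.g. /opt/splunkforwarder).
clean_uf_backlog() {
  home="$1"                                  # forwarder install directory
  "$home/bin/splunk" stop                    # stop the forwarder first
  # (remove the over-logged application files here; paths are site-specific)
  rm -rf "$home"/var/log/splunk/metric*      # drop forwarder metrics logs
  rm -rf "$home"/var/lib/splunk/fishbucket*  # drop read-position checkpoints
  "$home/bin/splunk" start                   # restart the forwarder
}
```

Note that deleting the fishbucket makes the forwarder treat every monitored file as new, so any application logs still on disk will be re-read from the beginning.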


It seems that several of the forwarders successfully forwarded data before we intervened, which jammed up our server-side queues. I realize that Splunk is designed NOT to lose data, but assuming we are willing to accept some loss of "pending" data, is there any way to clear the server-side queues or "dump data" from the indexing pipeline to clear the backlog?


We considered "blacklisting" specific files from indexing, as was done in this post, but as that post notes, un-doing the blacklist sends those files back for index processing.
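For reference, the blacklist approach we considered uses a monitor stanza in `inputs.conf` on the forwarder. The path and regex below are hypothetical placeholders, not our actual configuration:

```ini
# inputs.conf on the universal forwarder (illustrative values only)
[monitor:///var/log/myapp]
# skip the over-logged files; matched files are ignored, but they will be
# picked up again for indexing if this blacklist is later removed
blacklist = over_logged\.log.*
```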


isoutamo
SplunkTrust
At least I haven't heard or read of such a feature. You should ask Splunk Support whether it's possible.
r. Ismo