Getting Data In

Persistent Queue not working for Heavy Forwarder on Splunk Enterprise 6.3.2

xrtan
Explorer

Hi all,

I am currently trying to test whether persistent queues work on a heavy forwarder. However, it doesn't seem to be working.

Here is my scenario:
I have syslogs coming in from different devices to my heavy forwarder over both TCP and UDP. So what I did was add persistentQueueSize=100MB to my inputs.conf stanzas, so right now they look something like this:

[udp://514]
index=main
sourcetype = syslog
connection_host = ip
disabled = 0
persistentQueueSize=100MB

[tcp://514]
index=main
sourcetype = syslog
connection_host = ip
disabled = 0
persistentQueueSize=100MB

When I restart the server, I can see the flat files being created in these two places respectively:

$SPLUNK_HOME/var/run/splunk/tcpin/

$SPLUNK_HOME/var/run/splunk/udpin/

So I went on to shut down my indexers for 5 minutes, then brought them back up. However, during those 5 minutes I did not see any changes to the flat files, and when I search for data on my search head, logs from that 5-minute window have been dropped, so no caching was done.

Am I missing something?


rschutt
Explorer

Without "queueSize" being set to a value, "persistentQueueSize" will have no effect.

Also be aware that the instance first fills the in-memory queue, and only when that is exhausted does it write to the persistent queue. So to save as much as possible to disk, set "queueSize" to a small value.
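For example, adding queueSize to the stanzas from the question might look like this (a sketch only; the 1MB value is just an illustrative small in-memory queue, not a tuned recommendation, so size it for your own environment):

```ini
[udp://514]
index = main
sourcetype = syslog
connection_host = ip
disabled = 0
# In-memory queue must be explicitly set for the persistent queue to engage;
# keeping it small means data spills to disk sooner.
queueSize = 1MB
# On-disk persistent queue, written under $SPLUNK_HOME/var/run/splunk/udpin/
persistentQueueSize = 100MB

[tcp://514]
index = main
sourcetype = syslog
connection_host = ip
disabled = 0
queueSize = 1MB
persistentQueueSize = 100MB
```

After editing inputs.conf, restart the forwarder for the changes to take effect.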


xrtan
Explorer

Apparently it takes some time for the data to roll into the flat file, as it's still writing to memory first for some reason.
