Splunk Universal Forwarder only sends data once after a monitor input is configured and the UF is restarted

ssadh_splunk
Splunk Employee

I have a UF (v7.3.1) installed on CentOS, with ulimits configured for max open files, etc.
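For reference, one way to confirm the limits the running splunkd process actually picked up (the pgrep invocation is just an illustration, not specific to my setup):

cat /proc/$(pgrep -o splunkd)/limits | grep -i "open files"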

The file monitor input stanza in inputs.conf looks like this:

[monitor:///<path_to_log_file>/*.log]
disabled = false
host_segment = 4
index = <index-name>
sourcetype = srctype
ignoreOlderThan = 1h

Logs come in at a very high rate, so rsyslog creates a new file every 15 minutes; hence the ignoreOlderThan = 1h setting.

Each time I configure a monitor stanza and restart the UF, it reads the files and sends them to the indexer, but after that it doesn't forward any new data.

The UF's splunkd.log stated that it was taking some huge files into batch mode and that the maxKBps limit had been reached.
So I changed limits.conf to set maxKBps to 0.
There are no other errors in splunkd.log on the UF, and it still shows the same behavior.
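For reference, the change amounts to something like this in limits.conf on the UF (the exact file location, e.g. $SPLUNK_HOME/etc/system/local/limits.conf, depends on the deployment; 0 means no thruput cap):

[thruput]
maxKBps = 0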

Any pointers on how to resolve this or what else to look for?

1 Solution

ssadh_splunk
Splunk Employee

Closing this, as setting maxKBps to zero in limits.conf on the UF fixed the issue.

lmethwani_splun
Splunk Employee

@ssadh_splunk, since you mentioned that rsyslog creates a new file every 15 minutes, can you try increasing the ignoreOlderThan parameter by one more hour?
If you are using wildcards, just make sure they are defined correctly.
Ref Doc: https://docs.splunk.com/Documentation/Splunk/7.3.0/Data/Specifyinputpathswithwildcards
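For example (the paths here are purely illustrative), * matches within a single directory level while ... recurses into subdirectories:

[monitor:///var/log/myapp/*.log]
[monitor:///var/log/myapp/.../error.log]

The first stanza picks up .log files directly under /var/log/myapp; the second picks up error.log at any depth below it.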

Apart from that, the configuration looks okay; the log files should be monitored continuously.

p_gurav
Champion

If you have the Monitoring Console set up, please check indexing performance on the indexers. Are any indexing queues getting full?
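A rough way to spot blocked queues from the indexers' internal logs (just a sketch, adjust as needed):

index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name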

ssadh_splunk
Splunk Employee

So it seems that changing the maxKBps limit to unlimited (0) fixed the problem.

It looks like the UF was choking on the default 256 KBps thruput limit once it picked up a huge file (~400 MB).
I set the limit to 0 just before posting the question and have monitored it for about 1.5 hours; the forwarder is reading and sending data across.
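If anyone wants to verify which thruput value the UF actually resolved, btool shows it along with the file it came from (assuming a default install path):

$SPLUNK_HOME/bin/splunk btool limits list thruput --debug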
