Getting Data In

set queue size limit for universal forwarder

Path Finder

We have a universal forwarder installed on a VM server (40 GB hard drive). When the service went down yesterday, the logs started to queue up on the server as expected, but it took so long to get the service back up and running that the hard drive filled up. How can I set a hard cap on the size of logs the universal forwarder can queue before it starts purging the older logs?

Use case:
If the service fails, queue logs up to 10 GB (or 25% of free space, or some other static limit); once that limit is reached, purge old logs to make room for new logs until the service is restored.

Any help would be greatly appreciated! Thanks!


Path Finder

I was able to alleviate this issue by removing the forwarder from this server and installing Snare for the time being. This is not an ideal solution, but it does solve the current issue.


Splunk Employee

I doubt that your problem is the Splunk instance itself, because the UF queue is only 500 KB and lives in memory.

Please check:

  • whether the logs that exploded are simply your application's logs; if so, set up a rotation/retention rule in your system.
  • whether you are using batch monitoring and expecting the files to be deleted after indexing.
  • whether you configured persistent queues on the forwarder (see inputs.conf); if you did, change their size.
  • if you want to work through a large backlog faster, increase the default thruput limit in limits.conf (the default in the universal forwarder is maxKBps = 256).
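To make the last two points concrete, here is a sketch of the relevant settings. Note that persistent queues apply to network and scripted inputs, not to monitor:// file inputs, so the TCP stanza below is illustrative only; the sizes are placeholders you would tune to your disk budget.

```ini
# inputs.conf -- persistent queue sized well below free disk space
# (illustrative TCP input; persistent queues do not apply to monitor:// stanzas)
[tcp://:9997]
queueSize = 1MB              # in-memory queue ahead of the persistent queue
persistentQueueSize = 10GB   # hard cap on the on-disk queue

# limits.conf -- raise the forwarder's thruput ceiling to drain a backlog faster
[thruput]
maxKBps = 1024               # universal forwarder default is 256; 0 = unlimited
```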

Path Finder

The logs that built up were system logs that the forwarder was monitoring; they queue up when the forwarder isn't working. I need to either make the forwarder stop queuing logs at a certain point or roll the logs at a certain point. Does that make more sense? The issue is not with logs generated by the forwarder, but with logs generated by the system that the forwarder holds until it can send them.
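Rolling the system logs at a size threshold is usually handled outside Splunk, e.g. with logrotate on Linux. A minimal sketch, assuming the logs live under an illustrative /var/log/app/ path (adjust the path, size, and retention count to your environment):

```ini
# /etc/logrotate.d/app -- illustrative; caps disk use at roughly size * (rotate + 1)
/var/log/app/*.log {
    size 100M      # rotate once a file exceeds 100 MB
    rotate 5       # keep at most 5 rotated copies, then purge the oldest
    compress       # gzip rotated copies to save space
    missingok      # don't error if a log file is absent
    notifempty     # skip rotation of empty files
}
```

The forwarder will pick up the active file as usual; just be aware that aggressive rotation can purge data before the forwarder has caught up on a long outage.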
