I've installed Splunk Universal Forwarder 9.1.0 on a Linux server and configured batch mode for log file monitoring. We monitor several different types of logs with different filenames. We observed very high CPU/memory consumption by the splunkd process when the number of input log files to be monitored is large (approximately > 1,000K, i.e. over a million). All the input log files are new, and the total number of events ranges from 10 to 300.
A few metrics log entries:
{"level":"INFO","name":"splunk","msg":"group=tailingprocessor, ingest_pipe=1, name=batchreader1, current_queue_size=0, max_queue_size=0, files_queued=0, new_files_queued=0","service_id":"infra/service/ok6qk4zudodbld4wcj2ha4x3fckpyfz2","time":"04-08-2024 20:33:20.890 +0000"}
{"level":"INFO","name":"splunk","msg":"group=tailingprocessor, ingest_pipe=1, name=tailreader1, current_queue_size=1388185, max_queue_size=1409382, files_queued=18388, new_files_queued=0, fd_cache_size=63","service_id":"infra/service/ok6qk4zudodbld4wcj2ha4x3fckpyfz2","time":"04-08-2024 20:33:20.890 +0000"}
Please let me know if there is any configuration tuning that can limit the number of files to be monitored.
Wait, wait, wait. Do you mean that your UF has to keep track of over a million files? That can have a huge memory footprint. Also, polling directories containing all those files can be intensive, and not much tuning can help here.
Side note: are you sure you need to use a batch input? You're showing events from the tailing processor, which is used with monitor inputs.
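For reference, the two input types behave quite differently (the path below is only a placeholder): a batch input requires move_policy = sinkhole and deletes each file after indexing it, while a monitor input keeps tailing every matching file indefinitely, which is what the tailreader metrics above point to.

# one-shot ingestion: files are indexed once and then removed from disk
[batch:///var/log/myapp/*.log]
move_policy = sinkhole

# continuous tailing: the UF keeps tracking every matching file
[monitor:///var/log/myapp/*.log]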
Check your inputs.conf and ensure the stanzas are configured to monitor only the files that you want. Specifically, you can adjust the allow and block lists (whitelist/blacklist):
[monitor://<path>]
whitelist = <regex>
blacklist = <regex>
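For example, something like the following (the path and patterns are placeholders to adapt to your environment; ignoreOlderThan is an additional setting worth looking at to stop tracking stale files, but check the inputs.conf spec for its caveats before relying on it):

[monitor:///var/log/myapp]
# only pick up .log files
whitelist = \.log$
# skip rotated/compressed copies
blacklist = \.(gz|zip|bak)$
# optionally skip files not modified within the last 7 days
ignoreOlderThan = 7d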
That aside, I strongly encourage you to follow Giuseppe's advice and contact your Splunk admin to open a case on your behalf.
@gcusello, I don't have access to open a case with Splunk Support.
It would be much appreciated if someone could help with how to limit the number of monitored files and control the memory consumption.
Hi @NReddy12,
I never experienced this behavior on a Linux server.
The only hint I can give is to open a case with Splunk Support, sending them a diag of your Universal Forwarder.
Ciao.
Giuseppe