I've installed Splunk Universal Forwarder 9.1.0 on a Linux server and configured batch mode for log file monitoring. We monitor several different types of logs, each with a different filename pattern. We are seeing very high CPU and memory consumption by the splunkd process when the number of input log files to be monitored is large (approximately > 1000K). All of the input log files are new, and each file contains roughly 10 to 300 events. Our batch input stanza looks roughly like the sketch below.
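For reference, this is approximately what one of our batch stanzas in inputs.conf looks like; the path, index, and sourcetype here are placeholders rather than our real values:

[batch:///var/log/myapp/*.log]
# batch inputs require sinkhole, so files are removed after they are indexed
move_policy = sinkhole
index = main
sourcetype = myapp_logs
disabled = false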
{"level":"INFO","name":"splunk","msg":"group=tailingprocessor, ingest_pipe=1, name=tailreader1, current_queue_size=1388185, max_queue_size=1409382, files_queued=18388, new_files_queued=0, fd_cache_size=63","service_id":"infra/service/ok6qk4zudodbld4wcj2ha4x3fckpyfz2","time":"04-08-2024 20:33:20.890 +0000"} Please help me if there is any configuration tuning to limit the number of files to be monitored.