Hello All,
I have set up a syslog server to collect logs from all of our network devices. From the syslog server, a Universal Forwarder (UF) forwards these logs to the Splunk platform. The network component logs arriving in Splunk are delayed by 14+ hours relative to the actual events, yet the system audit logs from the same host arrive in near real time.
I have 50+ network components whose syslog I collect for security monitoring.
My current architecture:
All network syslog ----> syslog server (UF installed) ----> UF forwards logs to Splunk Cloud
Kindly suggest an alternative approach to get near-real-time delivery of the network logs.
Hi @ranjith4,
What is the aggregate throughput of all sources? If you're unsure, what is the peak daily ingest of all sources?
Splunk Universal Forwarder uses very conservative default queue sizes and a throughput limit of 256 KBps. As a starting point, you can disable the throughput limit in $SPLUNK_HOME/etc/system/local/limits.conf:
[thruput]
maxKBps = 0
If the forwarder is still not delivering data as quickly as it arrives, we can adjust output queue sizes based on your throughput (see Little's Law).
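For illustration, here's a hedged sketch of what that could look like in $SPLUNK_HOME/etc/system/local/outputs.conf on the forwarder. The group name splunkcloud and the 64MB value are assumptions; use the tcpout group name already defined by your Splunk Cloud credentials package and size the queue against your measured peak throughput.
[tcpout:splunkcloud]
# Assumed output group name; match the tcpout group your Splunk Cloud
# forwarder credentials app already defines.
# Default is auto. A larger in-memory output queue absorbs bursts while
# the forwarder catches up; 64MB is only an example value.
maxQueueSize = 64MB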
As @PickleRick noted, the forwarder may switch to an effectively single-threaded batch mode when reading files larger than 20 MB. Increase the min_batch_size_bytes setting in limits.conf to a value larger than your largest daily file, or to some other arbitrarily large value:
[default]
# 1 GB
min_batch_size_bytes = 1073741824
If throughput is still an issue, you can enable additional parallel processing with the server.conf parallelIngestionPipelines setting, but I wouldn't do that until after tuning other settings.
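If you do reach that point, a minimal sketch of what enabling it looks like in $SPLUNK_HOME/etc/system/local/server.conf (the value 2 is only an example; each extra pipeline consumes additional CPU and memory on the forwarder):
[general]
# Each additional pipeline uses more CPU and memory on the host;
# 2 is an example value, not a recommendation.
parallelIngestionPipelines = 2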
The overall architecture is OK. There might be some issues with the configuration.
If the delay is consistent and constant, it might be a problem with timestamps. If the data is being read in batches, you're probably ingesting from already rotated files.
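A constant offset of this size can point to a timezone mismatch rather than a real forwarding backlog. As a hedged sketch, you could force the timezone per sourcetype in props.conf on whichever component parses the data (in this setup that is Splunk Cloud itself, typically via a private app, since a UF does not parse timestamps). The sourcetype name network_syslog and the TZ value are assumptions; use your actual sourcetype and the timezone your devices emit.
[network_syslog]
# Hypothetical sourcetype name; use the sourcetype assigned to your
# network device logs.
# Example value; set it to the timezone the devices actually log in.
TZ = UTC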