Getting Data In

Syslog ingestion delay

ranjith4
Observer

Hello All, 

I have set up a syslog server to collect logs from all our network devices. A Universal Forwarder (UF) installed on the syslog server forwards these logs to the Splunk platform. The network device logs are arriving in Splunk 14+ hours behind the actual event time, yet the system audit logs from the same host are indexed in near real time.

I have 50+ network components whose syslog I collect for security monitoring.

My current architecture:

All network syslog ----> syslog server (UF installed) --> UF forwards logs to Splunk Cloud
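For illustration, the UF picks up the files written by the syslog daemon with a monitor input roughly like the one below (the path, index, and sourcetype are only placeholders, not my exact settings):

# inputs.conf on the syslog server's UF (placeholder values)
[monitor:///var/log/network]
sourcetype = syslog
index = network
disabled = false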

Kindly suggest an alternative approach to get the network logs in near real time.


isoutamo
SplunkTrust
Others have already given you some hints to check for this issue.
If you have a lot of logs (and you probably do), one option is to use SC4S (Splunk Connect for Syslog); a minimal configuration sketch follows the links below. There is more about it, e.g.:

- https://splunkbase.splunk.com/app/4740
- https://lantern.splunk.com/Data_Descriptors/Syslog/Installing_Splunk_Connect_For_Syslog_(SC4S)_on_a_...
- https://www.splunk.com/en_us/blog/tips-and-tricks/splunk-connect-for-syslog-turnkey-and-scalable-sys... (several parts)

If I recall correctly, there are also some .conf presentations (2019-21 or so) and some user group presentations too.
- https://conf.splunk.com/files/2020/slides/PLA1454C.pdf
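If you go that route, a minimal SC4S deployment is mostly driven by a small environment file pointing it at your HEC endpoint. The values below are placeholders only; see the docs linked above for the full set of options:

# /opt/sc4s/env_file (placeholder values)
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your-splunk-cloud-hec-endpoint:443
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=<your HEC token>
SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=yes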

tscroggins
Influencer

Hi @ranjith4,

What is the aggregate throughput of all sources? If you're unsure, what is the peak daily ingest of all sources?

Splunk Universal Forwarder uses very conservative default queue sizes and a throughput limit of 256 KBps. As a starting point, you can disable the throughput limit in $SPLUNK_HOME/etc/system/local/limits.conf:

[thruput]
maxKBps = 0
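To check whether the forwarder has actually been hitting that cap, you can look at its internal metrics with a search along these lines (the host value is a placeholder for your syslog server):

index=_internal host=<your_syslog_server> source=*metrics.log* group=thruput name=thruput
| timechart span=5m avg(instantaneous_kbps) max(instantaneous_kbps)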

If the forwarder is still not delivering data as quickly as it arrives, we can adjust output queue sizes based on your throughput (see Little's Law).
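As a rough sketch, the setting in question is maxQueueSize in outputs.conf on the forwarder; the value below is only an example, and the right number depends on your measured throughput and how much outage buffering you want:

# outputs.conf on the UF (example value only)
[tcpout]
maxQueueSize = 32MB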

As @PickleRick noted, the forwarder may be switching to an effectively single-threaded batch mode when reading files larger than 20 MB. Increase the min_batch_size_bytes setting in limits.conf to a value larger than your largest daily file, or some other arbitrarily large value:

[default]
# 1 GB
min_batch_size_bytes = 1073741824

If throughput is still an issue, you can enable additional parallel processing with the server.conf parallelIngestionPipelines setting, but I wouldn't do that until after tuning other settings.
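For reference, that setting goes in the [general] stanza of server.conf on the forwarder; two pipelines is just an example value:

# server.conf on the UF (example value)
[general]
parallelIngestionPipelines = 2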


PickleRick
SplunkTrust

The overall architecture is ok. There might be some issues with the configuration.

If the delay is consistent and constant, it might be a problem with timestamps. If the data is being read in batches, you're probably ingesting from already-rotated files.
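A quick way to tell the two apart is to compare event time with index time: if _indextime minus _time is roughly constant (for example, a whole number of hours suggesting a time zone offset), it points at timestamp/TZ parsing rather than forwarding lag. Something along these lines, with index and sourcetype as placeholders:

index=network sourcetype=syslog
| eval lag_seconds=_indextime-_time
| timechart span=1h avg(lag_seconds) max(lag_seconds)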
