Splunk Enterprise

Forwarder sending data hours later than expected

robertlynch2020
Motivator

Hi

We are seeing a long lag before our forwarders send data in to Splunk - up to 4 hours!

 

When we run the search below, we can see output with a high max_lag (in seconds).

We are monitoring a file directory containing a very large number of files (100,000+). Could this be the issue? Is there some way to tell from the forwarder that it can't keep up, or is there another solution?

We are testing this setting (ignoreOlderThan, in inputs.conf) now, but we are unsure whether it will help, as we are not sure this is actually the cause:

ignoreOlderThan = 1d
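For reference, ignoreOlderThan lives on the monitor stanza in inputs.conf. A minimal sketch of where it would go (the monitored path below is hypothetical):

```ini
# inputs.conf on the forwarder -- the path below is only an example
[monitor:///data/app/logs]
ignoreOlderThan = 1d
# Caveat: once a file is skipped by ignoreOlderThan, the forwarder
# never checks it again, even if the file is modified later.
```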


index=* host=TEST_CLUSTER1 sourcetype!=G1
| eval lag_sec=_indextime-_time
| stats max(lag_sec) as max_lag max(_indextime) as max_index_time max(_time) as max_event_time by sourcetype host source
| addinfo
| eval index_lag_for_search = info_search_time - max_index_time
| eval event_lag_for_search = info_search_time - max_event_time
| sort - max_lag
| table sourcetype host source max_lag info_search_time info_min_time info_max_time max_event_time max_index_time index_lag_for_search event_lag_for_search
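To tell whether the forwarder itself is the bottleneck, one common check (a sketch: YOUR_FORWARDER is a placeholder, and it assumes the forwarder's internal logs reach the _internal index) is the queue fill data in metrics.log - queues that sit near 100% full mean ingestion can't keep up:

```
index=_internal host=YOUR_FORWARDER source=*metrics.log* group=queue
| eval fill_pct=round(current_size_kb/max_size_kb*100, 1)
| timechart max(fill_pct) by name
```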


The image below shows the slowness of some of the files.

[Image: robertlynch2020_0-1606391119679.png]

 

1 Solution

robertlynch2020
Motivator

Hi

In the end we got the answer by changing a setting on the forwarder.

We increased parallelIngestionPipelines in the forwarder's server.conf from 3 to 6 (this requires a forwarder restart):

[general]
parallelIngestionPipelines = 6
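A way to sanity-check that the extra pipelines are actually running after the restart (a sketch; YOUR_FORWARDER is a placeholder, and it assumes the forwarder's internal logs reach the _internal index): with parallelIngestionPipelines > 1, metrics.log events carry an ingest_pipe field, so counting distinct values shows the active pipeline count:

```
index=_internal host=YOUR_FORWARDER source=*metrics.log* group=queue
| stats dc(ingest_pipe) as active_pipelines
```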

Rob


