
How can we find out where the delay in indexing is?

Ultra Champion

We have the following search -

base search
| eval diff= _indextime - _time 
| eval capturetime=strftime(_time,"%Y-%m-%d %H:%M:%S") 
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S") 
| table capturetime indextime  diff

We see the following -

[Screenshot: results table with capturetime, indextime, and diff columns, showing differences of over five hours]

So, we see a delay of over five hours in indexing. Is there a way to find out where these events "got stuck"? In this case, the events are coming from Hadoop servers and the forwarder processes around half a million files. We would like to know whether the delay is at the forwarder level or on the indexer side.
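
One way to narrow it down (a sketch building on the search above; host and splunk_server are standard default fields) is to split the delay by forwarding host and by receiving indexer, to see which side the large differences correlate with:

base search
| eval diff = _indextime - _time
| stats avg(diff) max(diff) count by host splunk_server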


Re: How can we find out where the delay in indexing is?

SplunkTrust

Just to clarify, did you check that there is no maxKBps = <some number other than 0> set in limits.conf on the UF?
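
For reference, a minimal sketch of what unlimited forwarder thruput looks like in a local limits.conf on the UF (the app path is just an example; 0 means no limit, any other value caps the forwarding rate):

# etc/apps/<your_app>/local/limits.conf on the universal forwarder
[thruput]
maxKBps = 0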


Re: How can we find out where the delay in indexing is?

Ultra Champion

ok, I see -

$ find . -name "limits.conf"       | xargs grep -i maxKBps
./etc/apps/universal_config_forwarder/local/limits.conf:maxKBps = 0
./etc/apps/SplunkUniversalForwarder/default/limits.conf:maxKBps = 256
./etc/system/default/limits.conf:maxKBps = 0

Re: How can we find out where the delay in indexing is?

Ultra Champion

and then -

$ ./splunk btool --debug limits list | grep maxKBp
/opt/splunk/splunkforwarder/etc/apps/universal_config_forwarder/local/limits.conf maxKBps = 0

Re: How can we find out where the delay in indexing is?

SplunkTrust

I would run a btool command to check which setting is actually applied (system/default has the lowest priority).

bin/splunk btool limits list --debug | grep maxKBps

Re: How can we find out where the delay in indexing is?

Ultra Champion

right - that's what I did...


Re: How can we find out where the delay in indexing is?

SplunkTrust
SplunkTrust

I was late/early on that. Check the various queue sizes to see whether there are any high spikes.

index=_internal sourcetype=splunkd source=*metrics.log group=queue 
| timechart avg(current_size) by name

You can add host=yourUFName to see queue sizes on the UF, and host=yourIndexerName (add more OR conditions to cover all indexers) to see queue sizes on the indexers. You may need to adjust the queue sizes based on the results. https://answers.splunk.com/answers/38218/universal-forwarder-parsingqueue-kb-size.html
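
For example, a sketch of the forwarder-side check (yourUFName is a placeholder; swap the host filter to your indexer names to see the indexer-side queues):

index=_internal sourcetype=splunkd source=*metrics.log group=queue host=yourUFName
| timechart avg(current_size) by name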


Re: How can we find out where the delay in indexing is?

Ultra Champion

Great. I see the following -

[Screenshot: timechart of queue sizes by name, with the aggQueue showing the largest spikes]


Re: How can we find out where the delay in indexing is?

SplunkTrust

The aggQueue is where date parsing and line merging happen. This suggests there may be an inefficient event-parsing configuration. What is the sourcetype definition (props.conf on the indexers) you have for the sourcetypes involved?
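
As a point of comparison, a sketch of an explicit sourcetype definition that keeps parsing cheap; the stanza name, timestamp prefix, and format below are placeholders and need to match the actual Hadoop log layout:

[your:hadoop:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19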


Re: How can we find out where the delay in indexing is?

Ultra Champion

Interesting - this sourcetype doesn't show up in props.conf...
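
If it helps, one way to see what Splunk actually applies for that sourcetype (your_sourcetype is a placeholder for the real name) is btool against props:

$ ./splunk btool props list your_sourcetype --debug

If nothing beyond the system defaults shows up, the indexer falls back to automatic line merging and timestamp detection for that data, which is where parsing tends to get expensive.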
