Indexer Latency

Motivator

I have one heavy forwarder that is collecting from over 600 Universal Forwarders. I have syslog-ng installed on the heavy forwarder, writing syslog to a folder that Splunk then reads.

I am seeing (in SOS) an index latency of more than 1000 seconds on some of the indexes.
Where should I look next for the cause?
How can I see whether it is the indexers or my heavy forwarder that is the bottleneck?

I think it is the indexers, but where should I look to find the issue on the three indexers?


Motivator

I found this article that talks about how to see the index time:

http://splunk-base.splunk.com/answers/64388/is-_indextime-deprecated

I used the suggestion from Martin to write this search:

index=firewall | eval diff = _indextime - _time | where diff > 0 | rename _indextime AS indxtime | convert timeformat="%m/%d/%y %H:%M:%S" ctime(indxtime) | table _time indxtime diff

This allowed me to see the difference between _time and _indextime.

Now I can set an alert if this number gets too high, and look for patterns.
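
As a rough sketch of what that alert search could look like (assuming the same firewall index; the 900-second threshold is just an example value):

index=firewall | eval latency = _indextime - _time | stats max(latency) AS max_latency avg(latency) AS avg_latency | where max_latency > 900

Run on a schedule over a short window (for example the last 15 minutes), this only returns a row when the worst-case latency crosses the threshold, so the alert can simply fire on "number of results greater than zero".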


Path Finder

This worked for me. Not sure why the table wouldn't display the raw _indextime directly.

source=/var/log/* | eval time=_time | eval itime=_indextime | eval latency=(itime - time) | table time, itime, latency, _raw

SplunkTrust

You could do something like this:

some search | eval diff = _indextime - _time | where diff > some value

and go from there to see patterns of offending sources, hosts, splunk_servers, whatever.
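
For example, something along these lines should surface where the delay is coming from (using your firewall index and a 900-second threshold purely as placeholders):

index=firewall | eval diff = _indextime - _time | where diff > 900 | stats count avg(diff) AS avg_diff max(diff) AS max_diff by splunk_server host sourcetype | sort - max_diff

If one splunk_server dominates the results, the delay is probably on that indexer; if the delay tracks particular hosts or sourcetypes regardless of indexer, the forwarder or the inputs themselves are the more likely bottleneck.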

Motivator

Martin thanks for the help. I tried this search:
index=firewall | eval diff = _indextime - _time | where diff > 890 | table _indextime _time

I could not find _indextime in the results, and the search returned either all results or no results as I increased the greater-than value. I wanted to see the difference between the index time and the event time, so I added the table command, but all that showed was the time, not the index time. Do you know how I can see the "index time"?
