I have one heavy forwarder that is collecting from over 600 Universal Forwarders. I also have syslog-ng installed on the heavy forwarder, writing syslog to a folder that Splunk reads.
I am seeing (in SoS) an indexing latency of more than 1000 seconds on some of the indexes.
Where should I look next for the cause of this problem?
How can I see whether it is the indexers or my heavy forwarder that is the bottleneck?
I think it is the indexers, but where do I look on the 3 indexers to find the issue?
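One rough way to narrow it down (a sketch based on Splunk's standard _internal metrics.log rather than anything from SoS itself, so verify the field names in your environment) is to see which hosts are reporting blocked queues:
index=_internal source=*metrics.log* group=queue | stats count(eval(blocked="true")) AS blocked_count count AS total BY host, name | eval blocked_pct=round(blocked_count/total*100, 2) | sort - blocked_pct
If queues such as indexqueue or parsingqueue are blocking on the indexers, they are the likely bottleneck; if they block on the heavy forwarder while the indexers look healthy, the forwarder (or its syslog-ng input) is the place to dig.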

I found this article that talked about how to see the index time:
http://splunk-base.splunk.com/answers/64388/is-_indextime-deprecated
I used the suggestion from Martin to write this search:
index=firewall | eval diff = _indextime - _time | where diff > 0 | rename _indextime AS indxtime | convert timeformat="%m/%d/%y %H:%M:%S" ctime(indxtime) | table _time indxtime diff
This allowed me to see the difference between _time and _indextime.
Now I can set an alert if this number gets too high, and look for patterns.
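A sketch of what that alert could look like (the 900-second threshold is just an example of "too high", not a value from this thread): schedule a search such as
index=firewall | eval diff = _indextime - _time | stats max(diff) AS max_latency avg(diff) AS avg_latency | where max_latency > 900
and trigger whenever it returns any results.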

This worked for me. Not sure why the table command wouldn't display the raw _indextime.
source=/var/log/* | eval time=_time | eval itime=_indextime | eval latency=(itime - time) | table time,itime,latency,_raw
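For what it's worth, my understanding (an assumption worth verifying, not something stated in this thread) is that fields whose names start with an underscore, other than _time and _raw, are hidden in results by default, which would explain why copying _indextime into a regular field with eval makes it show up. A variant that also renders the index time as a readable timestamp, using fieldformat so only the display changes and not the underlying value:
source=/var/log/* | eval time=_time | eval itime=_indextime | eval latency=(itime - time) | fieldformat itime=strftime(itime, "%m/%d/%y %H:%M:%S") | table time,itime,latency,_raw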

You could do something like this:
some search | eval diff = _indextime - _time | where diff > some value
and go from there to see patterns of offending sources, hosts, splunk_servers, whatever.
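For example, something like this (a sketch using the firewall index from the question and an arbitrary 300-second threshold) would surface which hosts, sources, and indexers contribute most of the latency:
index=firewall | eval diff = _indextime - _time | where diff > 300 | stats count avg(diff) AS avg_diff max(diff) AS max_diff BY host, source, splunk_server | sort - count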

Martin, thanks for the help. I tried this search:
index=firewall | eval diff = _indextime - _time | where diff > 890 | table _indextime _time
I could not see the _indextime field, and the search would only return all results or no results as I increased the greater-than value. I wanted to see the difference between the index time and the event time, so I added the table command, but all that showed was _time, not the index time. Do you know how I can see the "index time"?
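A sketch of the fix the other replies converge on, assuming the problem is that the underscore-prefixed field won't render in the table: copy _indextime into a regular field before tabling it, e.g.
index=firewall | eval diff = _indextime - _time | eval itime = _indextime | where diff > 890 | table _time itime diff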
