Rick, this could be due to a few things.
First - check the index time to confirm whether Splunk is seeing and indexing the data later than expected. The search below will show the indexing delay in seconds:
source=<your delayed source> | eval delay=_indextime-_time | fields delay
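To narrow down where the delay is coming from, you can extend that search to summarize per source and host (a sketch using standard fields; adjust the split-by fields to taste):
source=<your delayed source> | eval delay=_indextime-_time | stats avg(delay) max(delay) by source host
A large average delay on only one host usually points at a slow forwarder rather than the indexer.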
Typically, delays of hours mean that either the indexer is backed up OR the data is being read in at a slower pace than expected. To see if the indexer is backed up (we call it blocked), search as follows:
index=_internal source=*metrics.log blocked
If this returns events, the system is being blocked. If there are a lot of them, that is not a good sign and you should contact support. Support can determine whether it's disk speed or something else by identifying which part of the queue system is backed up.
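You can get a first look at which queue is blocking yourself before contacting support (a sketch; queue events in metrics.log carry group=queue and a name field such as parsingqueue or indexqueue):
index=_internal source=*metrics.log* group=queue blocked=true | stats count by name
The queue with the highest count is typically the bottleneck; for example, a blocked indexqueue tends to suggest slow disk writes.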
Another thing to check is the maximum throughput for the indexers and forwarders. There is a maximum throughput setting in limits.conf that defaults to 256 KB per second on lightweight forwarders:
[thruput]
maxKBps = 256
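If the forwarder is simply throttled by that cap, you can raise or remove it in limits.conf on the forwarder (a setting value of 0 means unlimited; restart the forwarder afterwards for the change to take effect):
[thruput]
maxKBps = 0
Only do this if the forwarder host can spare the I/O and network bandwidth, since the cap exists to keep lightweight forwarders from hogging resources.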