
Difference between _time and _indextime

Influencer

Hi,

We have the Splunk UF installed on our streamers. The UF sends logs to the Splunk forwarder of our analytics setup.

We have scheduled saved searches to summarize data. The searches run as scheduled, but we are not seeing data in the summary index.

During our troubleshooting, we noticed that the time difference between _time and _indextime of log events in the raw index is close to 4 hours.
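
For reference, a lag like this can be measured with a search along these lines (the index name is a placeholder):

    index=your_raw_index earliest=-4h
    | eval lag_sec = _indextime - _time
    | stats avg(lag_sec) AS avg_lag max(lag_sec) AS max_lag BY host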

Could you please let us know why the indexing of log events is taking so much time?

How can we solve this issue?

Thanks

Strive


Path Finder

This could happen because you have configured a time interval on the UF. Your UF is sending the logs to the Splunk forwarder and then on to the indexer. There are really only two likely scenarios: a time zone mismatch or, failing that, the time interval used when forwarding the logs.


Path Finder

@strive
Please upvote or accept the answer if it was helpful 🙂


Motivator

I had a problem that sounds like this. It was caused by the forwarding system running on GMT while the indexer was on local time. The way I noticed the problem was that real-time search was not showing anything. I fixed it by putting TZ = GMT in the props.conf file in the deployment app's default directory for that particular forwarder (assuming you are using a deployment server; if not, you'll have to update props.conf on the forwarder yourself). I think Ayn is guessing correctly.
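
A minimal sketch of that props.conf change (the sourcetype name here is a placeholder; TZ can also be set per host:: or source:: stanza):

    # props.conf in the deployment app's default directory
    [your_sourcetype]
    TZ = GMT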

Legend

Close to 4 hours = almost exactly 4 hours? Might this be some kind of timezone issue?


Influencer

Thanks for the response.
Our UFs are not sending large amounts of data. Later, we tested with just 20 log events. Even then, the time difference between _time and _indextime of log events in the raw index was close to 4 hours.


Legend

It is impossible to answer this question without a deeper understanding of your environment.

In the manual, look at Troubleshooting Indexing Delay

Here are links to some answers that may help:

http://answers.splunk.com/answers/85382/splunk-universal-forwarder-slow

http://answers.splunk.com/answers/116402/forwarding-too-slow

In addition, here are a few questions:

- How many files are you monitoring on the forwarder? Splunk forwarder performance can degrade if it is monitoring thousands of files.

- If you search the _internal index, are you seeing any errors or warnings for the forwarder? If you have the SOS app or Deployment Monitor app installed, there may be some detailed searches that explain more about your forwarder's performance.
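
As a starting point, a search along these lines will surface forwarder-side errors and warnings (the host value is a placeholder):

    index=_internal host=your_forwarder source=*splunkd.log* (log_level=ERROR OR log_level=WARN)
    | stats count BY component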


Influencer

Thank you for the response.
Our latest tests showed that the problem exists even when we monitor fewer than 10 files, each containing 20 log events.
I will go through the links and get back after some more troubleshooting.


Legend

Are your UFs sending huge amounts of data? There's a default limit on how much bandwidth UFs will use: 256 KBps. This can be altered, but by default that is the limit, and in environments where UFs need to send large volumes of data it can be an issue.
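
If bandwidth does turn out to be the bottleneck, the limit can be raised or removed in limits.conf on the forwarder; a minimal sketch:

    # limits.conf on the universal forwarder
    [thruput]
    maxKBps = 0    # 0 removes the cap; a specific value such as 1024 raises it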