Splunk Search

Difference between _time and _indextime

strive
Influencer

Hi,

We have the Splunk UF installed on our streamers. The UF sends logs to the Splunk forwarder of our analytics setup.

We have scheduled saved searches to summarize data. The searches run on schedule, but we are not seeing data in the summary index.

During our troubleshooting, we noticed that the time difference between _time and _indextime of log events in the raw index is close to 4 hours.
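
A search along these lines can be used to quantify that lag per host and sourcetype (the index name raw_index is just a placeholder for the raw data index):

    index=raw_index earliest=-24h
    | eval lag_seconds = _indextime - _time
    | stats avg(lag_seconds) AS avg_lag, max(lag_seconds) AS max_lag by host, sourcetype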

Could you please let us know why indexing of log events is taking so much time?

How to solve this issue?

Thanks

Strive


sarvesh_11
Communicator

This could happen because you have configured a delay interval on the UF. Your UF forwards the logs to the Splunk forwarder and then on to the indexer. There are really only two likely scenarios: either a time zone mismatch, or a configured interval that delays forwarding of the logs.

0 Karma

sarvesh_11
Communicator

@strive
Please upvote or accept the answer if it was helpful 🙂

0 Karma

wrangler2x
Motivator

I had a problem that sounds like this. It was caused by the forwarding system running on GMT time while the indexer was on local time. The way I noticed the problem was that real-time search was not showing anything. I fixed the problem by putting TZ = GMT in the props.conf file in the deployment app default directory for that particular forwarder (assuming you are using deployment server; if not, you'll have to update the props.conf on the forwarder yourself). I think Ayn is guessing correctly.
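
A minimal sketch of that props.conf change, assuming a placeholder sourcetype name streamer_logs and a deployment app named my_forwarder_app:

    # deployment-apps/my_forwarder_app/default/props.conf
    [streamer_logs]
    # interpret the raw timestamps as GMT/UTC
    TZ = GMT

Note that TZ generally takes effect on the parsing tier (the indexer or a heavy forwarder), so where the stanza belongs depends on your topology.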

Ayn
Legend

Close to 4 hours = almost exactly 4 hours? Might this be some kind of timezone issue?

0 Karma

strive
Influencer

Thanks for the response.
Our UFs are not sending large amounts of data. We later tested with just 20 log events. Even in that case, the time difference between _time and _indextime of log events in the raw index is close to 4 hours.

0 Karma

lguinn2
Legend

It is impossible to answer this question without a deeper understanding of your environment.

In the manual, look at Troubleshooting Indexing Delay

Here are links to some answers that may help

http://answers.splunk.com/answers/85382/splunk-universal-forwarder-slow

http://answers.splunk.com/answers/116402/forwarding-too-slow

In addition, here are a few questions:

- How many files are you monitoring on the forwarder? Splunk forwarder performance can degrade if it is monitoring thousands of files.

- If you search the _internal index, are you seeing any errors or warnings for the forwarder? (A sample search is sketched below.) If you have the SOS app or the Deployment Monitor app installed, there may be some detailed searches that explain more about your forwarder's performance.
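
As one possible starting point, a search along these lines surfaces errors and warnings logged by a specific forwarder (substitute your forwarder's hostname for uf-hostname):

    index=_internal host=uf-hostname sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
    | stats count by component, log_level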

0 Karma

strive
Influencer

Thank you for the response.
Our latest tests showed that the problem exists even when we monitor fewer than 10 files, each containing 20 log events.
I will go through the links and get back after some more troubleshooting.

0 Karma

Ayn
Legend

Are your UFs sending huge amounts of data? There is a default limit on how much bandwidth a UF will use: 256 KBps. This can be altered, but by default that is the limit, and in environments where UFs need to send large volumes of data it can be an issue.
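
For reference, that limit is maxKBps in the [thruput] stanza of limits.conf on the forwarder. A sketch of raising it (the value below is just an example; 0 removes the limit entirely):

    # $SPLUNK_HOME/etc/system/local/limits.conf on the UF (or in a deployed app)
    [thruput]
    # default is 256; set to 0 for unlimited
    maxKBps = 0

The forwarder needs a restart for the change to take effect.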
