Splunk Search

Why is table listing not in reverse _time order?

yuanliu
SplunkTrust

If I do an index search, raw events are listed in reverse _time order, which is often also reverse _indextime order, so I don't know exactly which one it is.  But if I table the results, the table is no longer in this order.  Why is it so?

I used the following search to inspect the table:

 

sourcetype=sometype
| eval indextime=strftime(_indextime, "%F %T")
| table _time indextime

 

The table kind of lists later entries first, but not consistently; entries are often swapped by hours.
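For reference, the same inspection with the raw gap between the two timestamps added as a column (a minimal variant of the search above; lag is just an illustrative field name):

sourcetype=sometype
| eval indextime=strftime(_indextime, "%F %T")
| eval lag=_indextime - _time
| table _time indextime lag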


gcusello
SplunkTrust

Hi @yuanliu,

this is how it works by default, and I don't know why it is so!

Anyway, in the search dashboard the default sort is -_time; in a search, the default is ascending on the grouping fields (if there's a command like stats) or ascending on the first column.

If you have large differences between _time and _indextime, you should check the log ingestion chain, because there is some delay that could be caused by many factors: delays in writing the logs on the server, queues, etc.

Usually a Forwarder sends its logs every 30 seconds; if you see larger delays, I suggest investigating, because you could have blocked queues.
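For example, a quick way to measure that delay per host (a minimal sketch, reusing the sourcetype=sometype from your search; the column names are only illustrative):

sourcetype=sometype
| eval lag=_indextime - _time
| stats avg(lag) AS avg_lag_s max(lag) AS max_lag_s count BY host
| sort - max_lag_s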

Ciao.

Giuseppe


yuanliu
SplunkTrust

To clarify, I do not see a large difference between _time and _indextime.  On the contrary, my first investigation was into whether the _time jumps were caused by such differences; that was why I tabled both.  To my surprise, _indextime jumps just as much as _time. (Compared with the jumps, the difference between the two is negligible.)
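For illustration, out-of-order rows can be surfaced like this (a minimal sketch; streamstats compares each row with the one displayed just above it, so a positive jump means the current row is later than the one above it and the reverse-chronological order is broken):

sourcetype=sometype
| streamstats current=f window=1 last(_time) AS prev_time last(_indextime) AS prev_indextime
| eval time_jump=_time - prev_time
| eval index_jump=_indextime - prev_indextime
| where time_jump > 0 OR index_jump > 0
| table _time _indextime time_jump index_jump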


gcusello
SplunkTrust

Hi @yuanliu,

sorry, but it's very difficult to help you without seeing data.

You should check the original logs to see whether the same jumps are there or whether they are introduced by Splunk, though I don't think they are because, as you said, there isn't a large difference between _time and _indextime.

In addition, check whether the timestamp parsing is correct, in other words whether the Splunk timestamp is the same as the one in the raw event.
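For example, a quick spot check (a minimal sketch) puts the parsed timestamp next to the raw event so you can compare them by eye:

sourcetype=sometype
| head 20
| eval parsed_time=strftime(_time, "%F %T")
| table parsed_time _raw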

Ciao.

Giuseppe


yuanliu
SplunkTrust

Thanks for all the help, @gcusello!  You are correct that it is impossible to explore this without seeing data; in fact, I was at a total loss as to how to even exemplify the data, because the messed-up records themselves offered little clue - for a very good reason: the problem is not related to the data itself. (The data are essentially yum output ingested through the /receivers/simple API interface.)

I finally realized that this is related to another unsolved problem that I reported at the same time: https://community.splunk.com/t5/Getting-Data-In/Why-are-REST-API-receivers-simple-breaks-input-unexp....  The clue came when I isolated the fragments of one particular broken event.  I expected _time to be extracted from the timestamp I added to the beginning of the event; instead, Splunk "extracted" a time from some mysterious and magical place in those fragments, or used some other algorithm combining _indextime with some sort of extraction to calculate _time.  The point is, in a period when multiple such fragmentations happen, the net result becomes very difficult to diagnose.

With the sample fragments from a single broken event, I finally see that the raw event listing follows reverse _time order, whereas the table listing follows reverse _indextime order.
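In the meantime, the table display order can be pinned explicitly with a sort (a minimal sketch; the 0 lifts the default 10,000-result limit of the sort command):

sourcetype=sometype
| eval indextime=strftime(_indextime, "%F %T")
| table _time indextime
| sort 0 - _time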

The real fix I need, though, is for that API ingestion problem.


PickleRick
SplunkTrust

If you have problems with receivers/simple, why not use services/collector/event/1.0 and push the whole event with a ready-made timestamp? It works 😉
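For example, something along these lines (a sketch only; the hostname, port 8088, token, and payload values are placeholders, and the token has to come from an HTTP Event Collector input you create first):

curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"time": 1700000000, "sourcetype": "sometype", "event": "one complete yum output record"}'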


gcusello
SplunkTrust

Hi @yuanliu,

good for you, see you next time!

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated 😉
