Splunk Search

Why am I getting unexpected results when searching a summary index over a "large" timespan?

Engager

I am new to summary indexing, but I've tried to follow the documentation and create a scheduled search that saves its results to a summary index.

The search:

index=my_index source="SomeApp" | sitimechart count by host

The search is scheduled to run every 5 minutes, with a start time of -5m and a finish time of now.
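
If it helps, this is roughly what the saved search should look like in savedsearches.conf (reconstructed from the settings above, so details may differ):

[Summary - test search]
search = index=my_index source="SomeApp" | sitimechart count by host
enableSched = 1
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m
dispatch.latest_time = now
action.summary_index = 1
action.summary_index._name = summary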
On the dashboard I do:

index=summary search_name="Summary - test search" | timechart count by host

This apparently works when searching over a few hours, but when I search over more than 5-10 hours I suddenly get back strange data: instead of values in the range of 100-1000, I get values in the range of 0-5.
When the search runs, values that appear to be valid are shown for a few milliseconds and are then replaced by these 0-5 values that make no sense to me.

I guess I am doing something wrong, but not sure what.
Appreciate any help!

[UPDATE]
I did some more testing, and it looks like the correct values are shown while the search preview is being generated, but once the final result is displayed I get the strange data. To me it looks like some kind of optimization is being applied to the result.

SplunkTrust

My suggestion would be to try these changes:

1) Change the start time and finish time of the search to include a small delay to account for indexing lag. Maybe use -7m@m to -2m@m to allow an extra 2 minutes for the data to get ingested (see the conf sketch after the dashboard search below).
2) In your dashboard, search like this:

index=summary search_name="Summary - test search" | timechart sum(count) as count by host
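
For 1), the time range change would look roughly like this in savedsearches.conf (just a sketch, assuming the search is defined there):

dispatch.earliest_time = -7m@m
dispatch.latest_time = -2m@m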

Engager

Changing the time range from -5m / now to -7m@m / -2m@m was a good suggestion!

Regarding 2), after making that change in my dashboard I don't get any results. Should I change the scheduled search as well?
