On our search head cluster we are running into the following issue: when searching with the time picker everything works as expected, but when using earliest/latest in the search string we run into problems.
If the time range picker is set to last 60 minutes and the search contains earliest=-15m, it works.
If I set earliest=-1d@d, it only shows data from the last 60 minutes.
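For example, a search like this (index and sourcetype are placeholders, not our actual data) still returns only the last 60 minutes, even though the inline earliest should override the picker:

    index=main sourcetype=cisco:sourcefire earliest=-1d@d latest=now
    | stats count by host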
If I try the same thing on a fresh install or on a search head that is not a cluster member, I get the notification "Your timerange was substituted based on your search string" and I get the correct time range/data.
Does anyone have an idea what kind of setting/conf file I should be looking at? We are running Splunk 7.0.3.
After looking into the affected files rules.csv and rule_classifications.csv from the Splunk_TA_sourcefire add-on, I saw that one of the columns was named _time.
After changing this to time and updating the searches that fill the CSV files, the time range does get substituted correctly.
So: never use _time as a column in a lookup file.
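A minimal sketch of the kind of change we made; the populating search below is a placeholder, not the actual search from the add-on:

    <search that builds the rule list>
    | rename _time AS time
    | outputlookup rules.csv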
We have narrowed the issue down to the Splunk Add-on for Cisco FireSIGHT. Maybe this helps others who have the same issue.
If we disable two of the lookups/CSV files used in that app (see the sketch below), we are able to use earliest/latest in any query.
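For anyone who wants to try the same: an automatic lookup can be disabled by overriding it with an empty value in a local props.conf. The stanza and lookup names here are illustrative, not the actual names from the add-on:

    # $SPLUNK_HOME/etc/apps/Splunk_TA_sourcefire/local/props.conf
    [cisco:sourcefire]
    # An empty value in local overrides and disables the automatic lookup
    LOOKUP-rule =
    LOOKUP-rule_classification =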
We are not exactly sure why this is happening or whether we can fix the CSV files; if we manage to, I will update this post.
@MattibergB Thanks for sharing the information. Could you advise which version of Splunk Enterprise you experienced the earliest/latest issue with? We are experiencing a similar issue on Splunk Enterprise 8.0.5 in a hybrid environment (Splunk Cloud indexers + on-prem Splunk SH cluster).
Hi, we were using Splunk 7.0.3 at the time in an on-prem environment. We had two search head clusters and only one of them had this issue. I am not sure if this could still be an issue on newer versions. You could check your automatic lookups and see whether they output special fields such as _time.
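Two quick checks, assuming the lookup file names from the add-on (rules.csv used as an example): list the lookup definitions, then inspect the CSV columns directly for a _time field. Note the fields_list column is only populated if it is set in transforms.conf.

    | rest /services/data/transforms/lookups splunk_server=local
    | table title filename fields_list

    | inputlookup rules.csv | head 1

If a _time column shows up in the inputlookup results, rename it in the search that populates the file.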