Hi,
Every Saturday we do a full stop of Splunk, a full backup, and a restart.
The issue is that come Monday morning it takes up to 10 minutes for some of the heavy tstats commands to run. It is as if all the data were in cold buckets rather than warm; the data spans only 1-2 weeks, so it should be warm and fast, but it is very, very slow.
I am thinking of running a saved search over the last month's data after each restart to "wake it up", so to speak. Any other ideas would be welcome.
Below is the type of search that takes a long time after the restart. It could be pulling from 100 million events; normally it takes about 10 seconds when cached. The value host=CLIENT_X can change.
| tstats summariesonly=true max(All_TPS_Logs.duration) AS All_TPS_Logs.duration FROM datamodel=MLC_TPS_DEBUG4 WHERE (nodename=All_TPS_Logs host=CLIENT_X (All_TPS_Logs.user=* OR NOT All_TPS_Logs.user=*)) All_TPS_Logs.name=*** GROUPBY _time, All_TPS_Logs.fullyQualifiedMethod span=1s
| rename All_TPS_Logs.fullyQualifiedMethod as series
| rename All_TPS_Logs.duration as value
| table _time series value
| append
    [ search eventtype=mlc sourcetype=lts_timings host=TALANX-Logs-18-12-17-DIJON527_2017-12-18-100009_archive
    | where isnum(duration_seconds)
    | eval task_name = upper(task_name)
    | lookup lts_lookup task_name OUTPUT value
    | eval value = if(isnotnull(value),value,95)
    | rex field=start ".* (?<start_time>[^ ]+)$"
    | rex field=end ".* (?<end_time>[^ ]+)$"
    | eval series = task_name." (".duration_seconds."s)"
    | eval end_timestamp=_time+duration_seconds
    | eval end_event=mvappend("",end_timestamp.",".series.",".value,"")
    | mvexpand end_event
    | rex field=end_event "(?<_time>[^,]+),(?<series>[^,]+),(?<value>[^,]+)"
    | eval series = replace(series,":",".")
    | table _time series value
    | dedup _time, series ]
| search (series=**murex** OR series=**TEST_**)
| timechart bins=1000 max(value) by series limit=20
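If you do try the "wake it up" approach, one way to sketch it is a scheduled search that runs shortly after the Monday restart and touches the last month of data. The stanza name, schedule, and search below are hypothetical (only the datamodel name is taken from the search above), and note this only helps if the slowness is a file-cache effect; if buckets have actually rolled to cold, a warm-up search will not move them back:

```ini
# savedsearches.conf -- hypothetical warm-up search; adjust names and schedule to your setup
[warmup_mlc_tps_debug4]
# Run at 06:00 every Monday, after the weekend backup/restart
cron_schedule = 0 6 * * 1
enable_sched = 1
dispatch.earliest_time = -30d@d
dispatch.latest_time = now
search = | tstats summariesonly=true count FROM datamodel=MLC_TPS_DEBUG4 WHERE nodename=All_TPS_Logs GROUPBY _time span=1d
```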

It could be exactly as you are describing. When Splunk stops, all hot buckets are closed and rolled to warm. If you have a configuration that causes hot buckets to stay open for a long time and also causes warm buckets to roll quickly to cold, what you describe is exactly what would result.
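To confirm whether that is happening, you can inspect the bucket states of the index behind the datamodel directly with dbinspect (the index name here is a placeholder):

```
| dbinspect index=your_index
| stats count, min(startEpoch) as earliest, max(endEpoch) as latest by state
| convert ctime(earliest) ctime(latest)
```

If events from the last 1-2 weeks show up under state=cold after the weekend restart, the rolling behaviour, not the restart itself, is the problem.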

Hi,
I agree, I think this is what is happening.
However, I thought warm and hot buckets would be accessed at the same speed; this is the bit I am not really getting.
Rob

Hi,
You mentioned that the data "should be in warm". Are you using different storage for warm and cold buckets? If so, check your indexes.conf configuration: when you stop/start Splunk, buckets roll from hot to warm, and if you then exceed the maximum warm bucket count, Splunk rolls the oldest warm buckets to cold, so your searches end up fetching data from colddb.
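The rolling behaviour is controlled per index in indexes.conf. A minimal sketch of the relevant settings (the index name and paths are illustrative, and the values shown are the documented defaults, not recommendations):

```ini
# indexes.conf -- illustrative settings only
[my_index]
homePath = $SPLUNK_DB/my_index/db        # hot and warm buckets
coldPath = $SPLUNK_DB/my_index/colddb    # cold buckets
# Warm buckets beyond this count are rolled to cold (default 300)
maxWarmDBCount = 300
# Idle hot buckets roll to warm after this many seconds (0 = disabled)
maxHotIdleSecs = 0
```

If homePath and coldPath sit on the same storage, a roll to cold should not by itself change I/O speed, which is worth keeping in mind for this thread.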

Hi,
I use the same SSD storage for both warm and cold.
Rob

I'd suggest comparing the Job Inspector output for the same job before and after the backup. I suspect the append subsearch is costing the extra time, but the Job Inspector is the best place to start.
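One way to compare runs without clicking through the Job Inspector UI each time is the search jobs REST endpoint; treat this as a sketch, since the exact set of returned fields can vary between Splunk versions:

```
| rest /services/search/jobs splunk_server=local
| table sid label runDuration scanCount eventCount diskUsage
```

Comparing runDuration and scanCount for the Monday-morning run against a cached mid-week run should show whether the extra time is spent scanning buckets or in the append subsearch.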

OK, thanks.
