Greetings fellow Splunkers,
Our client wants dashboards, reports, and alerts that provide comprehensive statistics in real time and the ability to quickly view trends over time. Even a five-second load time for any of the dashboards irks them. Simple enough.
To meet that demand, we've created a summary index that captures virtually all data metrics available to us. We've built an accelerated data model (datamodels.conf) on top of that normalized summary index. That's all well and good - the client is happy with all of the pivot-powered dashboards/alerts/reports.
However, I have yet to find any useful, detailed documentation about data model acceleration. Our problem now is being 100% sure that no data is missed. The saved searches that populate the summary index run every five minutes at, let's say, */5 (:00, :05, :10, etc.). The Summarization Period of the data model is set to, let's say, 1-59/5 (:01, :06, :11, etc.). Assuming none of those take more than one minute to run (they don't), I believe the math works out so that alerts should look back 7 minutes: at 1:00 the summary index is populated for 0:55-1:00, at 1:01 the data model is accelerated for 0:56-1:01, and at 1:02 the alert runs over 0:55-1:02. Rinse and repeat every five minutes. Again, assuming nothing takes longer than one minute to run and everything is configured as I claim, is my math correct? Is there ever a situation where an event is missed between alerts?
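To make the timing concrete, here is a minimal sketch of how I would express that in savedsearches.conf; every name below (index, data model, search) is a placeholder, and the dispatch windows encode the 7-minute lookback from the math above:

    # savedsearches.conf -- all names are placeholders; alert actions omitted

    # Populates the summary index at :00, :05, :10, ... for the previous five minutes
    [populate_summary_idx]
    cron_schedule = */5 * * * *
    dispatch.earliest_time = -5m@m
    dispatch.latest_time = @m
    action.summary_index = 1
    action.summary_index._name = my_summary_idx
    search = index=source_idx | sistats count by host, status

    # Runs the alert at :02, :07, :12, ... looking back 7 minutes (0:55-1:02 in the example)
    [my_alert]
    cron_schedule = 2-59/5 * * * *
    dispatch.earliest_time = -7m@m
    dispatch.latest_time = now
    search = | tstats summariesonly=true count from datamodel=My_DM by _time span=1m

With summariesonly=true the alert only sees events the acceleration search has already summarized, which is why the extra two minutes of lookback matter.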
I am aware of the Splunk Operational Intelligence Cookbook, which simply mentions the "Summarization Period". Besides that and the links in my post, I can't find any useful documentation. The linked documentation repeatedly claims that data model acceleration searches run every 5 minutes; however, the "Summarization Schedule" option (defined by a cron schedule) implies that this is up to the admin. After changing the summarization schedule in dev, I've confirmed that the _ACCELERATE_DM scheduled search obeys the Summarization Schedule cron (acceleration.cron_schedule in datamodels.conf).
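For anyone who lands here later, this is the shape of the stanza I changed in dev (the data model name is a placeholder):

    # datamodels.conf -- data model name is a placeholder
    [My_DM]
    acceleration = 1
    acceleration.earliest_time = -7d
    # Drives the _ACCELERATE_DM scheduled search: :01, :06, :11, ...
    acceleration.cron_schedule = 1-59/5 * * * *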
I am aware of Metrics Indexes. We will be moving in that direction in the future.
Cheers,
Jacob
If you have an issue that stops the summary indexing search from working (e.g. server load, maintenance, etc.), you'll most likely end up with gaps. It's possible to backfill those using a script available in the Splunk installation, but it's a pain. Also, like @adonio commented, I don't see much value in building a data model on top of summary indexes.
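The backfill would look something along these lines, assuming the standard fill_summary_index.py that ships under $SPLUNK_HOME/bin; the app name, search name, and credentials are placeholders:

    # Re-run a summary-index-populating search over the window with gaps.
    # -dedup true skips time buckets that already have summary data.
    cd $SPLUNK_HOME/bin
    ./splunk cmd python fill_summary_index.py -app my_app \
        -name "populate_summary_idx" -et -24h@h -lt @h \
        -j 8 -dedup true -auth admin:changeme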
If you are sensitive to how long things take to return results and your data is metrics-like, then metrics indexes are the way to go. If not, I would test using indexed extractions. You'll be trading storage (a bigger tsidx) for search performance. Much like with data models, you can then use the |tstats command against your indexed fields. The benefit here is that the "acceleration" is effectively done as the data comes in, removing the 5-minute wait for it to be accelerated as with a DM, but it's done on a per-sourcetype basis. The advantage of using a DM is that you can accelerate data from whatever you like (multiple indexes, sourcetypes, etc.).
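A rough sketch of what I mean, assuming structured (e.g. JSON) data and placeholder names:

    # props.conf -- extract and index all JSON fields at ingest time
    [my_json_sourcetype]
    INDEXED_EXTRACTIONS = json

    # tstats can then group by those ingest-time indexed fields directly,
    # with no data model and no acceleration lag:
    | tstats count where index=my_idx sourcetype=my_json_sourcetype by host, status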
Thanks for the input @diogofgm
I can give the whole "do you really need real time, and what is really real time?" spiel; however, it seems like your client has very high REAL TIME expectations, which is fine.
Read here under "About Summary Range" and "Summary Range Example":
https://docs.splunk.com/Documentation/Splunk/7.3.2/Knowledge/Acceleratedatamodels#Data_model_acceler...
And here (same page, further down):
https://docs.splunk.com/Documentation/Splunk/7.3.2/Knowledge/Acceleratedatamodels#Advanced_configura...
I do not know the full use case, but I rarely see the value that an accelerated DM on top of summary indexes will give you.
Lastly, if the thing that troubles them the most is the loading time of the dashboards/views, why not create your relevant reports, schedule them, and have your dashboards use those reports, or | loadjob, or | savedsearch, or another similar technique?
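For instance, a panel can reuse the cached results of an already-scheduled report instead of dispatching a new search; the owner, app, and report names here are placeholders:

    | loadjob savedsearch="jacob:my_app:my_scheduled_report"

In Simple XML you can get the same effect with <search ref="my_scheduled_report"/>, which makes the panel reuse the report's scheduled results.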
Hope it helps a little.
Hi @adonio,
Thanks for the input, it definitely helps! I'll keep all of this in mind moving forward. I could definitely see how this may not be the best solution. As far as loadjob/savedsearch goes, we do that too, between the dashboard and pivot levels, although I didn't mention it. It's actually the basis of my previous question here.