Dashboards & Visualizations

How to create accelerated data models for dashboard

macadminrohit
Contributor

Hi,
I have the following search to calculate the average response time on a field whose data comes from 10 hosts.
The intention behind using a data model is to accelerate the search so that the panels load faster on a dashboard.
Currently I am using summary indexing for this, but I was wondering whether we could avoid summary indexing, provided data model acceleration delivers the results faster and closer to real time.

index=root sourcetype=iisservice_logs Message!=REQ AND Message!=RQSAVE
    Detail.MSec=* 
| stats 
    perc95(Detail.MSec) as percnf avg(Detail.MSec) as average 
| eval ecosystem="Application" 
| eval percnf=round(percnf,2) 
| eval average=round(average,2) 
| eval health=case(average>=500,"RED",average>=200,"YELLOW",true(),"GREEN") 
| eval kpi_type="Application Response time" 
| eval kpi_key1="Application Response Time(95th)" 
| eval kpi_value1=percnf 
| eval kpi_key2="Application Response Time(Average)" 
| eval kpi_value2=average 
| eval name="N/A" 
| eval _time = now() 
| table _time name kpi_type kpi_key1 kpi_value1 kpi_key2 kpi_value2 health

I am a total newbie with data models and acceleration.
I'm looking for some basic help on creating one and using it in a dashboard.
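
For context, here's roughly what I imagine the accelerated version of the stats part would look like with tstats. The data model name IIS_Service, the root dataset Requests, and the field path Requests.Detail.MSec are just placeholders, since I haven't built the model yet:

```
| tstats summariesonly=true
    perc95(Requests.Detail.MSec) as percnf
    avg(Requests.Detail.MSec) as average
    from datamodel=IIS_Service.Requests
| eval percnf=round(percnf,2), average=round(average,2)
| eval health=case(average>=500,"RED",average>=200,"YELLOW",true(),"GREEN")
```

The remaining eval/table tail of the original search would stay the same; only the raw event scan gets replaced by the acceleration summary.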


evania
Splunk Employee

Hi @macadminrohit ,

Did you have a chance to check out some answers? If it worked, please resolve this post by approving it! If your problem is still not solved, keep us updated so that someone else can help you.

Thanks for posting!


macadminrohit
Contributor

Thanks @skalliger , I have a couple of basic questions, and I will provide the searches that I am trying to include in the data model. They are pretty basic searches and can easily be absorbed into the DMA:

1) You mentioned that I should put as many eval commands as possible in the root event dataset. But as per my understanding, I cannot use any pipes in the root event dataset.

2) So I created one root event dataset with index=root sourcetype=iis_service and two child event datasets with constraints like Message!=REQ AND Message!=RQSAVE Detail.MSec=* and Message=REQ Detail.RType!=AR

Now below are the searches which I want to leverage in the DMA:


index=root sourcetype=iis_service Message!=REQ AND Message!=RQSAVE
Detail.MSec=*
| lookup lookup.csv host OUTPUT host,pod,pool
| where isnotnull(pod)
| stats perc95(Detail.MSec) as percnf avg(Detail.MSec) as average by host pod


index=root sourcetype=iis_service
| rename Detail.MSec as MSec
| lookup lookup.csv host OUTPUT host,pod,pool
| where isnotnull(pod)
| stats count(eval(Message=="RQS" AND MSec>6000)) as Slow_RQS count(eval(Message=="RQE")) as RQE count(eval(Message=="RQA")) as RQA count(eval(Message=="RQI")) as RQI count(eval(Message=="RQS")) as RQS by Detail.RType pod host

3) Once I create the root and child datasets, how do I calculate the stats on the resulting dataset? I don't see that option in Pivot.

4) Where is the accelerate button on the DM editor page?

Thanks


skalliger
SplunkTrust

Hey,

sorry for the delayed answer. You can use a lookup inside the data model to populate the fields from your CSV file, so you can get rid of that | lookup ....
Just edit your DM, go to "Add Fields", and select "Lookup".

The easiest way to enable acceleration: open Settings and go to "Data Models". From there, you can either expand your desired DM and edit the acceleration (via the "Add" link), or click "Edit" > "Edit Acceleration" on the right side.

In the Pivot view, you have to select a field and then you can choose which action you want to perform on that field. However, you can do the same with | tstats on your own. But I'm not really an SPL ninja. 🙂
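
For example, if host, pod and pool come from the lookup defined inside the DM, your first search could be sketched like this with tstats (the model name IIS_Service and dataset Requests are placeholders for whatever you name yours):

```
| tstats summariesonly=false
    perc95(Requests.Detail.MSec) as percnf
    avg(Requests.Detail.MSec) as average
    from datamodel=IIS_Service.Requests
    where Requests.pod=*
    by Requests.host Requests.pod
```

The where Requests.pod=* clause takes over the job of your | where isnotnull(pod), since tstats can filter on data model fields directly.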

Skalli


skalliger
SplunkTrust

Hi,

creating a data model is best done via the UI.

You start by defining a root dataset for your data model. Something like index=root sourcetype=iisservice_logs.

After that, you can define children with further constraints added. Like Message!=REQ AND Message!=RQSAVE.

Define all your required fields (for example, all the evals from your search) in the root dataset if possible, as many as you need. More fields mean more data in the model, which means longer acceleration runs.

When done, simply hit that accelerate button. Wait until it's done and you can start searching it with either | from datamodel or | tstats.
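
For example (replace IIS_Service and Requests with your own names):

```
| from datamodel:"IIS_Service.Requests"

| tstats count from datamodel=IIS_Service.Requests by _time span=5m
```

from datamodel returns the events of the dataset as defined by its constraints, while tstats runs against the acceleration summaries.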

Keep in mind that running tstats with summariesonly=true makes it especially fast, because only data included in the acceleration summaries is searched.
So, if you ever wonder why such a search does not include all the events, it's because you're looking at accelerated data only.
You can also manage the summary range of the acceleration to specify what's included in your DMA (all done in the UI).
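
A quick way to see how complete the acceleration is, is to compare both counts over the same time range (again, the names are placeholders):

```
| tstats summariesonly=true count from datamodel=IIS_Service.Requests

| tstats summariesonly=false count from datamodel=IIS_Service.Requests
```

If the first count is lower, parts of your time range aren't covered by the summaries yet, for example because they fall outside the summary range or the acceleration hasn't caught up.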

A DM cannot be changed once it's accelerated. You need to disable the acceleration and rebuild it if you plan to change the model. Simply work with summariesonly=false until you're done designing it and its searches.

Tuning the DM's behaviour comes next, but I wouldn't bother with that right now. Accelerations kick off every five minutes and should complete within that window.

Skalli
