Topic of data model acceleration

New Member

Hello to everyone 🙂

Now I am wondering if there are any rules or "do's and don'ts" for building data models in the context of acceleration.

What I need to do is build data models for different aggregations, e.g. amounts by customers (that's just an example).
Due to the enormous scale of events, I intend to accelerate my models.

Which rules should I follow to get as much benefit as possible from the technology?
My other question is how this acceleration actually works. I am aware of the high performance analytics store feature, but that doesn't tell me how to accelerate my pivots (based on data models) that provide statistics with aggregations.

In summary, I have two questions:
how do I build a data model for pivots with figure aggregations (amount, volume, count of pieces, and so on)?
how do I get the full benefit of accelerating such a data model?

Thanks in advance 🙂

Best Regards


Splunk Employee

This is in response to your comment to rsennett....

First off, if you haven't tried the Data Model and Pivot Tutorial, you probably should. I think it would answer many of your questions. If you have tried it and you're still confused, here's another brief tutorial that hopefully will clarify things.

You should be able to build a fairly simple data model that delivers the results you need, given the example you've provided.

You'd start by creating a data model that consists of one event-based object.

  • Name the object "Sales."
  • Give it a constraint that isolates the events where a revenue-generating sale occurred, such as Sale_Point = * (I'm assuming this provides the location or store name where something was sold).
  • Assuming the four fields are automatically extracted, add them to the object as auto-extracted attributes: Customer, Amount, Sale_Point. (Date should be added automatically as _time unless it's different from the event timestamp.) Make sure that the attribute types are correctly identified: Customer and Sale_Point are strings and Amount is a number.
  • That should be all you need to do. Save your changes.

To accelerate the data model go to the Data Model Manager page (it says "Data Models" at the top and has an Actions column; you get to it from the Data Model Editor page by clicking "Back to Data Models").

  • Click Edit and select Edit Permissions. Share the object with the App or All Apps. (Only shared objects can be accelerated.)
  • Click Edit again and click Edit Acceleration.
  • In the Edit Acceleration dialog select Accelerate and then select a Summary Range. The summary range is the span of time you need accelerated. The bigger the range, the more disk space the acceleration summary will take up and the longer it will take to create, so don't choose a range that is longer than you need. For example, if you don't plan to search over more than the last week or two, select a range of 1 Month.
  • Save your acceleration changes. Your model is now accelerated.
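If you prefer configuration files over the UI, the same acceleration settings can be made in datamodels.conf. This is a hedged sketch: the stanza name must match your data model's ID (I'm assuming "Sales_Model" here), and you should check the exact attribute names against the documentation for your Splunk version.

```
# datamodels.conf, in your app's local directory
# Stanza name is the data model's ID -- "Sales_Model" is an assumption.
[Sales_Model]
acceleration = true
# How far back to build the acceleration summary (the Summary Range).
# -1mon keeps roughly the last month accelerated.
acceleration.earliest_time = -1mon
```

After editing .conf files you'd need to restart or reload for the change to take effect, so the UI route above is usually simpler.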

Now open the object in Pivot. When you go in, straight away it will give you a count of your events with a Sale_Point value over all time (the column will be titled "Count of Sales"). You can adjust the time filter to search over a shorter time range if you don't need all that data (ideally you should cut it down to within your acceleration Summary Range).

In Pivot you can fiddle around to get the charts you're interested in. For example, you could add a Split Row element for the Sale_Point attribute and then replace the "Count of Sales" Column Value element with one that sums up Amount. This would give you the total sum of sales by sale point. Or you could set up a Split Row element for the Customer attribute instead and get the total sum of sales by customer. And then you could use the Filter element to filter out all results but those for a specific sale point or customer (if you wanted to).

Hopefully this helps!

Splunk Employee

There are actually a couple of sections of the documentation that address your questions. Since you don't give any specific examples of what you're going to do, the best thing we can do is refer you to the docs, where you'll find the rules about how acceleration works and the best practices for building a data model with acceleration in mind.

Reading about Data Models in general is a good place to start. There is a lot of detail there, all of it key information and very helpful. And here is where you can read about the rules of acceleration.

As for your second question, about building a data model for pivots with figure aggregations, first keep in mind the main rule you'll read about in the links I've posted above: acceleration applies only to the topmost base event object and the children of that topmost base event object. Beyond that, you can start at the top of that same doc and read in more depth here.

With Splunk... the answer is always "YES!". It just might require more regex than you're prepared for!

New Member

Thank you for the answer.
I have already read the documents at the links provided above, but I still have questions.

Let me set up an example for analysis.
There are 4 fields:

So, I need to build a data model that can provide statistics on how much income each Sale_Point brought in.
Something like | stats sum(Amount) by Sale_Point.

How do I solve this using a data model, and how do I accelerate that model?

All I have managed to build so far is a count of sales.

Thanks in advance.
