Knowledge Management

What are best practices and uses for data models?

Builder

Sometimes in my Splunk education I need to revisit things for myself. Today it's data models.
I have used data models and roughly understand how they work, but I realized today that to me a data model is only an "acceleration function" that returns search results quickly.

I do not understand how to use them to get the maximum benefit.

1 Solution

SplunkTrust

When you use the data model / pivot UI you can point and click. For most people that’s the power of data models.

Another advantage is that a data model can be accelerated. Acceleration creates tsidx files, which are extremely fast to search. A further benefit of acceleration is that whatever fields you extract in the data model end up in those tsidx files too. This is similar to indexing a field versus applying schema on the fly (aka schema at search time): indexed extractions can be analyzed much faster using the tstats command, but come at the cost of extra storage. So an accelerated data model gives you the flexibility of applying a schema at search time and then indexing all the fields into tsidx files, all of it after indexing, without needing indexed extractions to begin with.
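As a rough illustration (assuming the CIM `Network_Traffic` data model is installed and accelerated; the model, field, and index names below are examples, not from the original post), a `tstats` search reads only the tsidx summaries instead of the raw events:

```
| tstats count from datamodel=Network_Traffic where All_Traffic.action=blocked by All_Traffic.src_ip
```

Compare that with an equivalent raw-event search such as `index=firewall action=blocked | stats count by src_ip`, which must apply field extractions at search time over every event; on large datasets the `tstats` version typically finishes in a fraction of the time.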

I hope all that makes sense!

Btw, in a distributed environment, data model acceleration summaries are not replicated across an indexer cluster by default. You have to enable that. See this document for more details:

https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Clustersandsummaryreplication
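For reference, enabling this is a one-line setting in `server.conf` on the cluster manager (a sketch; confirm the exact stanza against the linked docs for your Splunk version):

```
# server.conf on the cluster manager node
[clustering]
mode = manager
# Replicate data model (and report) acceleration summaries to peer nodes
summary_replication = true
```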

Cheers!


Esteemed Legend

Data models serve two main/important purposes: a taxonomical (nomenclature) normalization schema (so that a user is always `user` and not sometimes `username`, `user-name`, etc.) and a way to create a secondary index (by accelerating) so that you can run `tstats` searches, which are lightning fast. IMHO, if you do not need either of these (particularly the latter), then there is no good reason to use one. They enable other things too, like the `pivot` area, but that is better done with native `SPL` (although there is a learning curve there).
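To make the "user is always `user`" normalization concrete, here is a minimal sketch using a search-time field alias in `props.conf` (the sourcetype name is hypothetical, not from this thread):

```
# props.conf -- alias vendor-specific field names to the normalized name "user"
[my:custom:sourcetype]
FIELDALIAS-normalize_user = username AS user "user-name" AS user
```

With that in place, searches and data model constraints can rely on `user` regardless of what the vendor called the field.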

SplunkTrust

The normalization can occur in the data model by adding schema at search time, but typically normalization happens as a combined effort of tags and event types plus the data model's schema-on-the-fly capabilities. I don't think the data model by itself is considered a method for normalizing data so much as the CIM plus CIM-compliant apps are. I would even argue that the less schema you apply with your data model the better, since that equates to "hiding schema in searches". What do you think?
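As a sketch of that combined effort (the sourcetype and eventtype names are hypothetical; the `network` and `communicate` tags are what the CIM Network Traffic data model's constraint searches look for):

```
# eventtypes.conf -- define an eventtype matching the vendor's events
[vendor_fw_traffic]
search = sourcetype=vendor:fw:traffic

# tags.conf -- tag that eventtype so the CIM data model picks the events up
[eventtype=vendor_fw_traffic]
network = enabled
communicate = enabled
```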


Esteemed Legend

Yes, the DM is the schema but there is much ETL work to be done to align data into that schema.


