Sometimes in my Splunk education I need to review things for myself.
Today it's data models.
I have used data models and roughly understand how they work, but I realized today that to me a data model is only an "acceleration feature" that returns search results quickly.
I do not understand how to use it to get the maximum benefit.
When you use the data model / pivot UI you can point and click. For most people that’s the power of data models.
Another advantage is that a data model can be accelerated. Once accelerated, it creates tsidx files, which are very fast to search. A further benefit of acceleration is that whatever fields you extract in the data model end up in those tsidx files too. This is similar to indexing a field versus applying schema on the fly (aka schema at search). Indexed extractions can be analyzed much faster using the tstats command, but at the cost of extra storage. So an accelerated data model gives you the flexibility of applying schema at search and then indexing all of those fields into tsidx files, after initial indexing and without using indexed extractions to begin with.
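To make that concrete, here is a minimal sketch of a tstats search against an accelerated data model. It assumes the CIM Network_Traffic data model is installed and accelerated in your environment; the field values are illustrative only:

```
| tstats summariesonly=true count
    from datamodel=Network_Traffic
    where All_Traffic.action="blocked"
    by All_Traffic.src, All_Traffic.dest
```

Because this reads from the tsidx files built by acceleration rather than raw events, it typically returns in a fraction of the time of the equivalent raw search. (`summariesonly=true` restricts it to accelerated summaries only.)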
I hope all that makes sense!
Btw, in a distributed environment, data model summaries are not replicated by default. You have to enable that. See this document for more details:
https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Clustersandsummaryreplication
Cheers!
Data models do two main/important things: they provide a taxonomical (nomenclature) normalization schema (so that a user is always `user` and not sometimes `username`, `user-name`, etc.), and they give you a way to create a secondary index (by accelerating) so that you can run `tstats` searches, which are lightning fast. IMHO, if you do not need either of these (particularly the latter), then there is no good reason to use one. They enable other stuff, too, like the pivot area, but that work is better done in native `SPL` (although there is a learning curve there).
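As an aside, the "user is always `user`" kind of normalization is usually done with a field alias at search time. A minimal sketch in props.conf, with a hypothetical sourcetype name chosen purely for illustration:

```
# props.conf -- hypothetical sourcetype, illustrative only
[my:firewall:log]
# Map the vendor's "username" field onto the CIM-standard "user" field
FIELDALIAS-cim_user = username AS user
```

With that in place, searches and data model constraints can rely on `user` regardless of what the raw events call the field.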
The normalization can occur in the data model by adding schema at search, but typically normalization happens as a combined effort of tags and event types plus the data model's schema-on-the-fly capabilities. I don't think the data model by itself is considered a method for normalizing data so much as the CIM plus CIM-compliant apps are. I would even argue that the less schema you apply in your data model the better, since that equates to "hiding schema in searches". What do you think?
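The tags-plus-event-types part of that effort might look like the following sketch. The event type name and sourcetype are hypothetical; the tags shown (`network`, `communicate`) are the ones the CIM Network Traffic model keys on:

```
# eventtypes.conf -- hypothetical event type, illustrative only
[my_firewall_traffic]
search = sourcetype=my:firewall:log

# tags.conf -- tag the event type so it lands in the Network Traffic data model
[eventtype=my_firewall_traffic]
network = enabled
communicate = enabled
```

The data model's constraint searches match on those tags, so correctly tagged (and CIM-aliased) events flow into the model without the model itself carrying source-specific schema.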
Yes, the DM is the schema but there is much ETL work to be done to align data into that schema.