Are there any experiences out there regarding performance comparisons of macros, eventtypes, and data models?
We're currently planning a rebuild of a large Splunk app, which should become flexible enough to run in other landscapes. To achieve this, we'd like to build a kind of abstraction sublayer between the individual Splunk landscapes and the upper app layer.
In the Enterprise Security app, this has been done using eventtypes.
I'm currently unsure if eventtypes are the best solution.
From my understanding, eventtypes are a kind of "subsearch": based on Splunk's behaviour, this would mean a second search thread is initialized for each eventtype definition used in a search. Is this assumption correct?
How do macros behave in this situation? Do they only replace parts of the search string, without spawning an additional search thread?
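To illustrate what I mean, here is a minimal sketch (stanza and field names are hypothetical) of the same source filter expressed once as an eventtype and once as a macro:

```ini
# eventtypes.conf -- hypothetical abstraction-layer stanza
[vendor_firewall_traffic]
search = index=netfw sourcetype=cisco:asa

# macros.conf -- the same filter expressed as a search macro
[vendor_firewall_traffic_filter]
definition = index=netfw sourcetype=cisco:asa

# Usage in a search would then be either:
#   eventtype=vendor_firewall_traffic | stats count by action
# or:
#   `vendor_firewall_traffic_filter` | stats count by action
```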
And how do data models fit into these considerations?
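For comparison, my understanding is that an (accelerated) data model would be queried through tstats rather than through a plain search, e.g. something like the following, assuming the CIM Network_Traffic data model is populated in the target landscape:

```ini
# Search-bar SPL, not a .conf stanza -- querying an accelerated data model
# | tstats count from datamodel=Network_Traffic
#     where All_Traffic.action=blocked
#     by All_Traffic.src
```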
What has your experience been with the performance and maintainability of a large set of macros, eventtypes, or data models?
I would love to hear some real-life experience.
Thank you all