Hello,
is there an option/capability available to prioritize the creation/rebuild of the data model acceleration summaries, to make the rebuild faster in case changes have been applied to an accelerated data model?
thanks
If I remember correctly, it should update/rebuild the model's acceleration data "each time" you run a search, so there is not much you need to do to update it after a change. The same goes after you delete the summaries (from disk); they will get recreated at search launch.
You should find the answer here:
http://docs.splunk.com/Documentation/Splunk/6.2.1/Knowledge/Aboutdatamodels
In limits.conf we can specify some settings regarding the "priority" we want to give our acceleration / data model summary generation:
[scheduler]
max_searches_perc = <integer>
* The maximum number of searches the scheduler can run, as a percentage of the maximum number of concurrent
searches, see [search] max_searches_per_cpu for how to set the system wide maximum number of searches.
* Defaults to 50.
auto_summary_perc = <integer>
* The maximum number of concurrent searches to be allocated for auto summarization, as a percentage of the
concurrent searches that the scheduler can run.
* Auto summary searches include:
* Searches which generate the data for the Report Acceleration feature.
* Searches which generate the data for Data Model acceleration.
* Note: user scheduled searches take precedence over auto summary searches.
* Defaults to 50.
There is also the
[auto_summarizer]
stanza, which you might want to check out.
Here is a link to its spec file:
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf
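As a sketch of how those two knobs fit together, a limits.conf override on the search head could look like the following (the percentages are illustrative values for giving summarization a larger share, not recommendations):

```ini
[scheduler]
# Scheduled searches may use up to 75% of the system-wide
# concurrent-search limit (see [search] max_searches_per_cpu).
max_searches_perc = 75

# Of the scheduler's share, up to 75% may go to auto-summarization
# searches, which include Report Acceleration and Data Model
# acceleration summary builds.
auto_summary_perc = 75
```

Note that user-scheduled searches still take precedence over auto summary searches, so raising auto_summary_perc only raises the ceiling for summarization; it does not let summary builds preempt scheduled searches.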
Hello lmyrefelt,
thanks a lot for your response. The question is whether the "update/rebuild" process can be prioritized, since in an environment with terabytes of data and scripted lookups that are part of the data model, the rebuild can take some time.
br
There "is" ... Se my updated answer