Prior to upgrading from 8.1 to 8.2 I'm reading https://docs.splunk.com/Documentation/Splunk/8.2.0/Indexer/Reducetsidxdiskusage#The_tsidx_writing_le... and one thing is not entirely clear to me.
A change to the tsidxWritingLevel is applied to new index bucket tsidx files. There is no change to the existing tsidx files.
A change to the tsidxWritingLevel is applied to newly accelerated data models, or after a rebuild of the existing data models is initiated. All existing data model accelerations will not be affected.
The first statement is pretty straightforward - if I raise the tsidxWritingLevel, only newly created buckets will be indexed with the new level. That's pretty obvious.
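For reference, that setting lives in indexes.conf. A minimal sketch of what raising it might look like (the placement in [default] and the value 4 are illustrative, not a recommendation):

```
# indexes.conf (sketch -- applied on the indexers, followed by a restart)
# Setting it under [default] raises the level for all indexes;
# it can also be set per index stanza instead.
[default]
# Valid levels go up to 4 as of Splunk 8.2. Only buckets created
# after the change are written at the new level.
tsidxWritingLevel = 4
```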
But I'm not entirely sure what the description of accelerated data models means. If it works the same way, I'd expect already-created summaries to be left as-is at their own level, but newly created summary "buckets" (are they still called that in the case of data model acceleration summaries?) to be created with the new level. Is that so? Or does the new level apply to the whole acceleration summary only after a complete rebuild? That would be kinda unfortunate, especially since I have some huge accelerated data models.
That's how I understood it as well. I was interested in what it would do with the old data written at the old level. Does it turn the whole thing into the higher level (in my case 2 to 4)? Or does it keep the old stuff at level 2, but after being rebuilt have the new stuff at level 4?
Did you end up finding the answer somewhere else? I am a little worried to click that rebuild button on those data models, but it also sounds necessary.
We postponed the upgrade for now due to some external circumstances, but having re-read the quoted excerpts I'd think that already built summaries for accelerated data models would stay on the same level as they were. New data would get added with the same tsidxWritingLevel as the data model acceleration was created with. Only a complete rebuild would trigger creation of new tsidx files with the new level.
At least that seems reasonable. I haven't tested it though! So don't base your decisions on my suppositions!
What you said was correct. New data does come in with the new level. Splunk support gave very little assistance on how to speed up the rebuild.
Allowing more concurrent summarization searches with acceleration.max_concurrent may speed it up. And shrinking the backfill window with acceleration.backfill_time may make it less painful by doing partial backfills.
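A hedged sketch of those two datamodels.conf settings. The stanza name and all values here are hypothetical examples, not tested recommendations:

```
# datamodels.conf (sketch)
[My_Accelerated_Model]          # hypothetical data model name
acceleration = true
acceleration.earliest_time = -1y
# Allow more concurrent summarization searches for this model:
acceleration.max_concurrent = 4
# Backfill only the last 30 days after a rebuild instead of the
# full -1y summary range, to make the rebuild less painful:
acceleration.backfill_time = -30d
```

Note that acceleration.backfill_time has to be a shorter window than acceleration.earliest_time; the rest of the summary range then fills in gradually as new data arrives.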
I ended up not rebuilding the larger data models, just due to the length of time it would take. I believe that once the new data has aged through the full summary range of the data model, the whole summary will be at the new level.