I'm not from Splunk, but I don't think this is something you'd want to do. This seems like a plan destined for failure, or at least destined to cause huge amounts of heartache implementing a possible 0.032% increase in speed, while completely removing ALL ability to a) get support b) have a life c) keep your sanity. While it's possible others MAY disagree, I think you'll find that folks with a lot of experience in this product would just say "No" to your suggestion.
Now, what seems to be your "insert/update" performance problems specifically?
Are you running the latest Splunk? Also, have you opened a ticket for your "performance problems"? What's your hardware? Have you tuned it appropriately (with respect to THP and whatnot)? Which OS are you even running?
Those answers are very unlikely to change the opinions of the Splunk professionals, but maybe it'll help the rest of us help you out of your "performance" problems.
Another thought - if you are having performance problems out of kvstore, and you've tried reading docs and answers for optimizing it, the next step might be to file a support ticket and see if support can find something you can change to make this better.
Sorry for the lack of real context, and yes, I have read the docs and answers. In fact, I asked the same question at Splunk .conf 2017 and was told to avoid the KV store due to replication issues. WiredTiger is the future, and if Splunk continues to use MongoDB, no doubt it will be supported eventually.
Anyway, I'm using Splunk 6.5.3. My issue is that inserts/updates sometimes take forever, even on a single Splunk server. The server's pretty beefy.
The KV store collection has two sets of accelerations on multiple fields. One field is an array that can have up to 500 unique values; all other fields are single strings or numbers.
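For reference, an accelerated collection like the one described might be defined in collections.conf roughly like this (the collection, field, and index names here are hypothetical):

```
[my_collection]
field.tags  = array
field.host  = string
field.count = number
# Two sets of accelerations; each JSON value maps a field name to a sort order.
accelerated_fields.by_tags = {"tags": 1}
accelerated_fields.by_host = {"host": 1, "count": 1}
```

Note that MongoDB builds a multikey index for an accelerated array field, so each document contributes one index entry per array element, which is relevant to the timing question below.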
I can insert up to 1000 rows (the Splunk limit) per call using the Splunk Python SDK's bulk save function, which sometimes takes over 300 seconds, even for far fewer than 1000 rows.
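As a sanity check, it can help to time each batch yourself and try smaller batch sizes, to see whether the latency scales with row count or with something else. A minimal sketch using the Splunk Python SDK's `batch_save` (the collection name and how you obtained the connection are assumptions):

```python
import time

def chunked(docs, size):
    """Split a list of documents into batches of at most `size`."""
    for i in range(0, len(docs), size):
        yield docs[i:i + size]

def timed_batch_save(collection_data, docs, batch_size=250):
    """Save docs in batches and return (rows_in_batch, seconds) per batch.

    `collection_data` is a splunklib KVStoreCollectionData object,
    e.g. service.kvstore["my_collection"].data (name is hypothetical).
    """
    timings = []
    for batch in chunked(docs, batch_size):
        start = time.time()
        collection_data.batch_save(*batch)  # splunklib's bulk save call
        timings.append((len(batch), time.time() - start))
    return timings
```

Comparing timings for batch sizes like 100/250/500 over the same data can show whether the slowdown grows with the number of rows or with the number of array values being indexed.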
I suspect it's the data I'm inserting/updating: because of the acceleration on the one array field, each row may generate index entries for up to 500 unique values, and that could be what's taking the significant time.
The question, then, is how to check how long Splunk's MongoDB spends generating the acceleration indexes. Or is it possible to get a breakdown of the steps and timings involved during a bulk save?
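One low-effort place to look is the KV store's own MongoDB log, `$SPLUNK_HOME/var/log/splunk/mongod.log`, where mongod records slow operations with the elapsed time in milliseconds at the end of the line. A rough sketch of pulling those durations out (the path and the trailing-`Nms` line format assumed here follow stock MongoDB 3.x logging; verify against your own file):

```python
import re

# Slow-operation lines in mongod.log end with the elapsed time, e.g. "... 1043ms".
DURATION_RE = re.compile(r"(\d+)ms\s*$")

def slow_ops(lines, threshold_ms=100):
    """Yield (duration_ms, line) for log lines at or above threshold_ms."""
    for line in lines:
        m = DURATION_RE.search(line)
        if m:
            ms = int(m.group(1))
            if ms >= threshold_ms:
                yield ms, line.rstrip()

# Typical use (path shown is the default Splunk KV store log location):
# with open("/opt/splunk/var/log/splunk/mongod.log") as f:
#     for ms, line in sorted(slow_ops(f), reverse=True)[:20]:
#         print(ms, line)
```

If the slow lines turn out to be inserts/updates on the accelerated collection, that would support the theory that multikey index maintenance on the array field is where the time goes.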