Knowledge Management

KVstore / mongodb compact command

Lucas_K
Motivator

Is there any way to get the MongoDB compact command run by Splunk?

We have quite a few KV stores that have used around 100GB of local disk space per search head.

From reading into how the KV store works, it appears that at some stage each of these KV stores preallocated this large amount of space and the data was later deleted.

Checking the DMC -> KV Store -> Instance view and looking at collection sizes, many of them are significantly smaller than the disk space utilised on each host.
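For reference, the same collection sizes can be pulled outside the DMC; a rough sketch against the introspection endpoint the DMC dashboards read from (assuming the default management port 8089, admin credentials, and that this endpoint is available on your version):

    # Query KV store collection statistics on a search head.
    # Assumes default splunkd management port 8089 and valid admin credentials.
    curl -k -u admin:changeme \
        "https://localhost:8089/services/server/introspection/kvstore/collectionstats?output_mode=json"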

As a test I even updated a collection so that it had only a single key, and the total collection size on disk was still 20GB.
From reading on how to remedy this outside of Splunk, I need to run a compact and then a resync of the KV stores.

I'd already tried the resync, and it doesn't recover the disk space, so I need to know if there is a way to run the compact first and then another resync.

gjanders
SplunkTrust

The answer is effectively "you cannot", but I've put some details below. Happy to be corrected if I have misinterpreted your question, but this appears to be a valid answer, at least as of 25 Jan 2018.

First of all, you appear to be referring to the compact command in MongoDB terms, which the MongoDB documentation describes as: "Rewrites and defragments all data and indexes in a collection. On WiredTiger databases, this command will release unneeded disk space to the operating system."
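For context, on a stock (non-Splunk) MongoDB instance, compact is issued per collection from the mongo shell; a minimal sketch, assuming a plain mongod on the default port and placeholder database/collection names (Splunk's bundled mongod uses splunkd-managed credentials, so connecting to it directly is not a supported path):

    # Plain MongoDB usage, NOT a supported Splunk procedure.
    # "mydb" and "mycollection" are placeholder names.
    mongo --port 27017 mydb --eval 'printjson(db.runCommand({ compact: "mycollection" }))'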

As per the answer to Change Splunk mongodb to use wiredtiger storage engine, to summarise: Splunk does not use WiredTiger, and it cannot easily be changed.

Now quoting from one of my recent support cases (3rd Jan 2018):
"However the allocated disk space will still remain even though data is being deleted. This is how mongodb is working and shrinking process won't start unless "repair" executed.
So that's why we decide to add option "--repair" as it's still not implemented in Splunk. With this feature, any customer has limited disk space can re-claim disk space by "--repair" option.
Here is the related discussion with mongodb disk usage after records deleted:
http://stackoverflow.com/questions/4555938/auto-compact-the-deleted-space-in-mongodb
https://docs.mongodb.com/manual/reference/program/mongod/#core-options

We’ve already had an ER(SPL-89183) for this feature and Dev team will be working further to implement on next GA and I’ll be able to link this case to the ER. Please advise.
"

So I would suggest you get added to the enhancement request.

Lucas_K
Motivator

I'd put in an enhancement request (SPL-148763) and asked our account manager to escalate it 🙂

nickhills
Ultra Champion

Is there any update on either of your ERs since we are now at 7.1.1?

bthaker_splunk
Splunk Employee

A WiredTiger update for the KV store is planned for a release some time in 2020.

However, as mentioned by @koshyk above, it would help to root-cause which usage pattern causes the increase in storage space, and if it is a valid use case, to provision/allocate storage space for the KV store accordingly -- this is a sizing issue first and a KV store compaction issue second, IMHO. Allocating the required storage via appropriate sizing should help prevent storage issues in the long term.

BlueSocket
Communicator

Has anyone heard whether the compaction process has been added to Splunk yet? We had a KV store/MongoDB collection grow too large because the search that was supposed to maintain its size was not working, so the other searches kept expanding the collection without being kept in check.

koshyk
Super Champion

Do you have Enterprise Security? What could be the reason for 100GB? It might be worth looking into the root cause, such as too many notables, too many investigation uploads, etc.

nickhills
Ultra Champion

Out of interest, when you removed the test collection, did it release the 20GB?
Are your search heads clustered?

Lucas_K
Motivator

6 standard clustered search heads, no Enterprise Security. The data is spread across about 5-10 different KV stores of various sizes. Contents per KV store are around 2-3 million rows, so not huge really. All data is valid, as they do upserts and total rewrites (append=false).

The only way to release that test KV store's 20GB was to delete EVERYTHING inside the collection and then take a cluster member, remove all the KV store data from it, and resync it.

The only possible way I can see this working is migrating the data from one KV store to another: copy the data out to a temporary KV store, fully empty the original, copy the data back, and then delete the temporary one. Resync all members again, re-delegate the KV store master role to one of the resynced members, and then resync the former master.
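For what it's worth, the "remove all the KV store data and resync" step maps to the splunk clean kvstore command; a rough sketch for a single cluster member (run one member at a time, not on the KV store captain, and behaviour may vary by version):

    # On one search head cluster member at a time (not the KV store captain):
    splunk stop
    splunk clean kvstore --local    # wipe this member's local KV store data
    splunk start                    # the member resyncs its KV store from the captain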

gjanders
SplunkTrust

That's an interesting workaround. I've attempted to answer the question, but I'm happy to be proven wrong here, as I've also hit this issue!
