Knowledge Management

Why do writes to KV Store fail?

wpreston
Motivator

I've been trying to write about 900k records to a KV Store using SPL, and it only partially succeeds. Looking at search.log for the attempted input, I get the following errors:

11-19-2015 10:40:42.697 INFO  DispatchThread - Disk quota = 10485760000
11-19-2015 10:47:04.202 ERROR KVStorageProvider - An error occurred during the last operation ('saveBatchData', domain: '2', code: '4'): Failed to read 4 bytes from socket within 300000 milliseconds.
11-19-2015 10:47:04.226 ERROR KVStoreLookup - KV Store output failed with code -1 and message '[ "{ \"ErrorMessage\" : \"Failed to read 4 bytes from socket within 300000 milliseconds.\" }" ]'
11-19-2015 10:47:04.226 ERROR SearchResults - An error occurred while saving to the KV Store. Look at search.log for more information.
11-19-2015 10:47:04.226 ERROR outputcsv - An error occurred during outputlookup, managed to write 598001 rows
11-19-2015 10:47:04.226 ERROR outputcsv - Error in 'outputlookup' command: Could not append to collection 'Incident_Collection': An error occurred while saving to the KV Store. Look at search.log for more information..

I've tried writing from search results and from a .csv file using outputlookup, but both give these errors. I've also restarted both Splunk and the server that Splunk runs on. This is a standalone indexer/search head, and there is no other search activity going on at the time. The mongod.log file shows no errors.
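For reference, the write itself is a plain outputlookup append. A minimal sketch of what I'm running (the source file name `incident_source.csv` and the lookup definition name `incident_lookup`, which points at `Incident_Collection`, are placeholders for my actual config):

```
| inputlookup incident_source.csv
| outputlookup append=true incident_lookup
```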

Any ideas?

1 Solution

wpreston
Motivator

For anyone interested, I got this working. The disk that Splunk was writing to was extremely fragmented. I cleaned the collection with a splunk clean kvstore ... command, defragmented the disk and tried again. This time it worked like a charm.
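For anyone following along, the clean step looked roughly like this (a sketch run from $SPLUNK_HOME/bin; the app name `search` is a placeholder for wherever your collection lives, and note that clean kvstore deletes the collection's data, so export anything you want to keep first):

```
./splunk stop
./splunk clean kvstore -app search -collection Incident_Collection
./splunk start
```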


wpreston
Motivator

I should mention that it does write some of the records to the KV Store. The last time I attempted this, it wrote about 650k of the 900k records. I tried reducing the number of records being input to about 150k, and it failed after about 110k records.
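In case it helps anyone hitting the same timeout, one way to push the records in smaller slices is to number the rows with streamstats and write a range at a time (again, the file and lookup names are placeholders):

```
| inputlookup incident_source.csv
| streamstats count AS row
| where row<=100000
| fields - row
| outputlookup append=true incident_lookup
```

Adjust the `where` clause per pass (e.g. `row>100000 AND row<=200000`) until the full set is written.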
