Knowledge Management

Why do writes to KV Store fail?

wpreston
Motivator

I've been trying to write about 900k records to a KV Store using Splunk SPL, and it only partially succeeds. Looking at search.log for the attempted input, I get the following errors:

11-19-2015 10:40:42.697 INFO  DispatchThread - Disk quota = 10485760000
11-19-2015 10:47:04.202 ERROR KVStorageProvider - An error occurred during the last operation ('saveBatchData', domain: '2', code: '4'): Failed to read 4 bytes from socket within 300000 milliseconds.
11-19-2015 10:47:04.226 ERROR KVStoreLookup - KV Store output failed with code -1 and message '[ "{ \"ErrorMessage\" : \"Failed to read 4 bytes from socket within 300000 milliseconds.\" }" ]'
11-19-2015 10:47:04.226 ERROR SearchResults - An error occurred while saving to the KV Store. Look at search.log for more information.
11-19-2015 10:47:04.226 ERROR outputcsv - An error occurred during outputlookup, managed to write 598001 rows
11-19-2015 10:47:04.226 ERROR outputcsv - Error in 'outputlookup' command: Could not append to collection 'Incident_Collection': An error occurred while saving to the KV Store. Look at search.log for more information..

I've tried writing from search results and from a .csv file using outputlookup, but both give these errors. I've also restarted both Splunk and the server that Splunk runs on. This is a standalone indexer/search head and there is no other search activity going on at the time. The mongod.log file shows no errors.
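
For reference, the write is a search along these lines (the base search, field list, and lookup name are illustrative; the lookup definition points at the Incident_Collection collection):

index=incidents earliest=-90d
| table incident_id, status, owner, _time
| outputlookup append=true incident_collection_lookup

The .csv attempt was essentially the same, just with | inputlookup incidents.csv (file name illustrative) as the generating search.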

Any ideas?

1 Solution

wpreston
Motivator

For anyone interested, I got this working. The disk that Splunk was writing to was extremely fragmented. I cleaned the collection with a splunk clean kvstore ... command, defragmented the disk and tried again. This time it worked like a charm.
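
For reference, the general form of that clean command is along these lines (exact flags may vary by version; substitute your own app and collection names):

./splunk clean kvstore -app <app_name> -collection <collection_name>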

wpreston
Motivator

I should mention that it does write some of the records to the KV Store. The last time I attempted this, it wrote about 650k of the 900k records to the store. I tried reducing the number of records being input to about 150k, and it failed after about 110k records.
