Knowledge Management

Why do writes to KV Store fail?

wpreston
Motivator

I've been trying to write about 900k records to a KV Store using Splunk SPL, and it only partially succeeds. Looking at search.log for the attempted write, I get the following errors:

11-19-2015 10:40:42.697 INFO  DispatchThread - Disk quota = 10485760000
11-19-2015 10:47:04.202 ERROR KVStorageProvider - An error occurred during the last operation ('saveBatchData', domain: '2', code: '4'): Failed to read 4 bytes from socket within 300000 milliseconds.
11-19-2015 10:47:04.226 ERROR KVStoreLookup - KV Store output failed with code -1 and message '[ "{ \"ErrorMessage\" : \"Failed to read 4 bytes from socket within 300000 milliseconds.\" }" ]'
11-19-2015 10:47:04.226 ERROR SearchResults - An error occurred while saving to the KV Store. Look at search.log for more information.
11-19-2015 10:47:04.226 ERROR outputcsv - An error occurred during outputlookup, managed to write 598001 rows
11-19-2015 10:47:04.226 ERROR outputcsv - Error in 'outputlookup' command: Could not append to collection 'Incident_Collection': An error occurred while saving to the KV Store. Look at search.log for more information..

I've tried writing the records directly from search results and loading them from a .csv file before the outputlookup, but both approaches give the same errors. I've also restarted both Splunk and the server that Splunk runs on. This is a standalone indexer/search head, and there was no other search activity going on at the time. The mongod.log file shows no errors.
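For reference, the .csv variant of the write looked roughly like this (the lookup file name is illustrative; Incident_Collection is the collection named in the error above, and it's defined as a KV Store lookup in transforms.conf):

```
| inputlookup incidents.csv
| outputlookup append=true Incident_Collection
```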

Any ideas?

1 Solution

wpreston
Motivator

For anyone interested, I got this working. The disk that Splunk was writing to was extremely fragmented. I cleaned the collection with a splunk clean kvstore ... command, defragmented the disk and tried again. This time it worked like a charm.
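For anyone hitting the same problem, the clean step was along these lines (the app name here is hypothetical, and flags may vary by Splunk version — check `splunk help clean`; note that cleaning deletes the collection's data, so re-run the full outputlookup afterward):

```
# Stop Splunk first (clean commands require splunkd to be stopped)
$SPLUNK_HOME/bin/splunk stop

# Wipe the data in the KV Store collection ("search" app is an assumption)
$SPLUNK_HOME/bin/splunk clean kvstore -app search -collection Incident_Collection

# Defragment the disk, then restart and retry the write
$SPLUNK_HOME/bin/splunk start
```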


wpreston
Motivator

I should mention that it does write some of the records to the KV Store. The last time I attempted this, it wrote about 650k of the 900k records. I also tried reducing the number of records being input to about 150k, and it failed after about 110k records.
