I've been trying to write about 900k records to a KV Store using Splunk SPL, and it only partially succeeds. Looking at search.log for the attempted write, I get the following errors:
11-19-2015 10:40:42.697 INFO DispatchThread - Disk quota = 10485760000
11-19-2015 10:47:04.202 ERROR KVStorageProvider - An error occurred during the last operation ('saveBatchData', domain: '2', code: '4'): Failed to read 4 bytes from socket within 300000 milliseconds.
11-19-2015 10:47:04.226 ERROR KVStoreLookup - KV Store output failed with code -1 and message '[ "{ \"ErrorMessage\" : \"Failed to read 4 bytes from socket within 300000 milliseconds.\" }" ]'
11-19-2015 10:47:04.226 ERROR SearchResults - An error occurred while saving to the KV Store. Look at search.log for more information.
11-19-2015 10:47:04.226 ERROR outputcsv - An error occurred during outputlookup, managed to write 598001 rows
11-19-2015 10:47:04.226 ERROR outputcsv - Error in 'outputlookup' command: Could not append to collection 'Incident_Collection': An error occurred while saving to the KV Store. Look at search.log for more information..
I've tried loading the records from search results and from a .csv file, using outputlookup in both cases, but both approaches give these errors. I've also restarted both Splunk and the server that Splunk runs on. This is a standalone indexer/search head, and there is no other search activity going on at the time. The mongod.log file shows no errors.
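For reference, the two searches look roughly like this (the lookup definition name, field names, and CSV file name are placeholders; the lookup definition points at the Incident_Collection collection):

From search results:

index=incidents
| table incident_id status opened_at
| outputlookup append=true incident_collection_lookup

From the .csv file (uploaded as a lookup table file):

| inputlookup incident_data.csv
| outputlookup append=true incident_collection_lookup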
Any ideas?
For anyone interested, I got this working. The disk that Splunk was writing to was extremely fragmented. I cleaned the collection with a splunk clean kvstore ... command, defragmented the disk, and tried again. This time it worked like a charm.
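Roughly, the sequence was something like the following (the app and collection names are placeholders, and I believe Splunk needs to be stopped before running clean commands; check splunk help clean on your version for the exact syntax):

splunk stop
splunk clean kvstore -app <app_name> -collection <collection_name>
splunk start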
I should mention that it does write some of the records to the KV Store. The last time I attempted this, it wrote about 650k of the 900k records. I also tried reducing the number of records being loaded to about 150k, and it failed after about 110k records.