Monitoring Splunk

ERROR WordPositionData - couldn't parse hash code (SPL-31080)


Crash results in corrupt metadata, preventing Splunk from starting up again. Look for the following line before the crash in splunkd.log: "ERROR WordPositionData - couldn't parse hash code". Contact Support for assistance. (SPL-31080)

I am having the captioned problem - what do I do? It says to contact Support, but how do I contact Support?

Windows Server 2008 R2 Datacenter (on a VM; running fine on the other 3 VMs, but this is the copy the others report to)

Splunk 4.1.3

The relevant entry from the log is captioned above. The ticket system does not allow creation of new tags - attempted tags were 'SPL-31080' and "couldn't parse hash code" - why are you asking for tags if you do not allow their creation?


Re: ERROR WordPositionData - couldn't parse hash code (SPL-31080)


This error may occur if you have corrupted metadata files. The recovery process requires analysis by a Splunk engineer, as recovery can be unique to each installation. You should log a case with Splunk Support to get specific guidance on how to recover.

http://www.splunk.com/view/SP-CAAAAFV

If you desperately need to recover the system so that it runs on new data only, you could move all of the index files to a backup location and restart the system. This is not recommended, for many reasons, including clashes with previously indexed data.
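The move-aside approach above can be sketched as the commands below. This is an illustration only: the index name, backup path, and demo directory under /tmp are assumptions, and the `splunk stop`/`splunk start` calls are left commented out so nothing runs against a live install by accident.

```shell
# Sketch only: SPLUNK_DB, the index name, and the backup path are
# assumptions; substitute your real values. A throwaway layout is
# created here so the commands can be copied and adapted safely.
SPLUNK_DB="${SPLUNK_DB:-/tmp/splunk_demo/var/lib/splunk}"
INDEX_NAME="main"                       # the index splunkd.log points at
BACKUP_DIR="/tmp/splunk_index_backup"

mkdir -p "$SPLUNK_DB/$INDEX_NAME/db"    # stands in for the real index
mkdir -p "$BACKUP_DIR"

# 1. Stop Splunk first so no index files are held open:
#    $SPLUNK_HOME/bin/splunk stop
# 2. Move the whole index database aside; Splunk recreates an empty
#    one on the next start, so only new data will be searchable:
mv "$SPLUNK_DB/$INDEX_NAME/db" "$BACKUP_DIR/db.$(date +%Y%m%d)"
# 3. Restart:
#    $SPLUNK_HOME/bin/splunk start
```

Keeping the dated backup means Support can still attempt recovery of the old buckets later.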


Re: ERROR WordPositionData - couldn't parse hash code (SPL-31080)


The presence of "ERROR WordPositionData - couldn't parse hash code" messages in splunkd.log often indicates an inconsistency in one of the metadata files (Hosts.data, Sources.data, SourceTypes.data) located at the root of the hot/warm index repository (for the main index, for example: $SPLUNK_DB/defaultdb/db/) or in one of the buckets (usually a hot one) contained in that index.

To fix this, the first step is to identify which metadata file(s) have inconsistencies.

To that end, run the following command for the affected index (check splunkd.log; it's the index that was being opened just before splunkd crashed) and for all of its hot/warm buckets:

$SPLUNK_HOME/bin/recover-metadata {path_to_index|path_to_bucket} --validate

For a given index, I like to run the two commands below: first check the metadata files at the root of the hot/warm db, then check each bucket using the list from .bucketManifest:

$SPLUNK_HOME/bin/recover-metadata $SPLUNK_DB/{index_name}/db/ --validate

for i in $(cut -f3 -d " " $SPLUNK_DB/{index_name}/db/.bucketManifest); do $SPLUNK_HOME/bin/recover-metadata $SPLUNK_DB/{index_name}/db/$i --validate ; done

Each time an error is reported, delete the corresponding .data file. Once all corrupted metadata files have been removed, run the check again: it will report errors for the deleted files because they can no longer be found, but Splunk should now be ready to start.
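To make the deletion step concrete, the throwaway bucket below shows exactly which files are candidates for removal. The bucket path and file names besides the three metadata files are invented for illustration; the point is that only the flagged .data files go, never the event data (.tsidx files and rawdata).

```shell
# Illustration only: a throwaway directory standing in for a bucket that
# --validate flagged as inconsistent. Paths and the .tsidx name are made up.
BUCKET="/tmp/demo_bucket"
mkdir -p "$BUCKET"
touch "$BUCKET/Hosts.data" "$BUCKET/Sources.data" "$BUCKET/SourceTypes.data"
touch "$BUCKET/1234-5678.tsidx"        # event data: must be kept

# Delete only the metadata files reported as corrupt:
rm -f "$BUCKET/Hosts.data" "$BUCKET/Sources.data" "$BUCKET/SourceTypes.data"

ls "$BUCKET"    # only the .tsidx file remains
```

If only one of the three .data files was flagged for a given bucket, delete just that one; the others can stay.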

Repeat the operation for each index for which splunkd.log reports this type of error.