Splunk Enterprise

Problem parsing indexes.conf: Cannot load IndexConfig


I am creating a new index and getting the error below; my configuration follows.


[splunk@ap2-cclabs658055-idx1 ~]$ /opt/splunk/bin/splunk start


Splunk> Another one.


Checking prerequisites...

Checking http port [8000]: open

Checking mgmt port [8089]: open

Checking appserver port []: open

Checking kvstore port [8191]: open

Checking configuration... Done.

Checking critical directories... Done

Checking indexes...

Problem parsing indexes.conf: Cannot load IndexConfig: idx=_audit Configured path 'volume:primary/_audit/db' refers to non-existent volume 'primary'; 1 volumes in config

Validating databases (splunkd validatedb) failed with code '1'.  If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue

[splunk@ap2-cclabs658055-idx1 ~]$ 
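For context on the message itself: every homePath/coldPath that begins with `volume:<name>` must resolve to a `[volume:<name>]` stanza somewhere in the merged config. A minimal sketch of what splunkd expects (values taken from the pasted config and error message, not verified against this deployment):

```ini
# Volume definition -- must exist for any 'volume:primary/...' reference
[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 500000

# Index stanza that references it (paths from the error message)
[_audit]
homePath = volume:primary/_audit/db
coldPath = volume:primary/_audit/colddb
thawedPath = $SPLUNK_DB/_audit/thaweddb
```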


indexes.conf : 

# Parameters commonly leveraged here:
# maxTotalDataSizeMB - sets the maximum size of the index data, in MBytes,
# over all stages (hot, warm, cold). This is the *indexed* volume (actual
# disk space used) not the license volume. This is separate from volume-
# based retention and the lower of this and volumes will take effect.
# maxDataSize - this constrains how large a *hot* bucket can grow; it is an
# upper bound. Buckets may be smaller than this (and indeed, larger, if
# the data source grows very rapidly--Splunk checks for the need to rotate
# every 60 seconds).
# "auto" means 750MB
# "auto_high_volume" means 10GB on 64-bit systems, and 1GB on 32-bit.
# Otherwise, the number is given in MB
# (Default: auto)
# maxHotBuckets - this defines the maximum number of simultaneously open hot
# buckets (actively being written to). For indexes that receive a lot of
# data, this should be 10, other indexes can safely keep the default
# value. (Default: 3)
# homePath - sets the directory containing hot and warm buckets. If it
# begins with a string like "volume:<name>", then volume-based retention is
# used. [required for new index]
# coldPath - sets the directory containing cold buckets. Like homePath, if
# it begins with a string like "volume:<name>", then volume-based retention
# will be used. The homePath and coldPath can use the same volume, but
# but should have separate subpaths beneath it. [required for new index]
# thawedPath - sets the directory for data recovered from archived buckets
# (if saved, see coldToFrozenDir and coldToFrozenScript in the docs). It
# *cannot* reference a volume: specification. This parameter is required,
# even if thawed data is never used. [required for new index]
# frozenTimePeriodInSecs - sets the maximum age, in seconds, of data. Once
# *all* of the events in an index bucket are older than this age, the
# bucket will be frozen (default action: delete). The important thing
# here is that the age of a bucket is defined by the *newest* event in
# the bucket, and the *event time*, not the time at which the event
# was indexed.
# TSIDX MINIFICATION (version 6.4 or higher)
# Reduce the size of the tsidx files (the "index") within each bucket to
# a tiny one for space savings. This has a *notable* impact on search,
# particularly those which are looking for rare or sparse terms, so it
# should not be undertaken lightly. First enable the feature with the
# first option shown below, then set the age at which buckets become
# eligible.
# enableTsidxReduction = true / (false) - Enable the function to reduce the
# size of tsidx files within an index. Buckets older than the time period
# set below will have their tsidx files reduced.
# timePeriodInSecBeforeTsidxReduction - sets the minimum age for buckets
# before they are eligible for their tsidx files to be minified. The
# default value is 7 days (604800 seconds).
# Seconds Conversion Cheat Sheet
# 86400 = 1 day
# 604800 = 1 week
# 2592000 = 1 month
# 31536000 = 1 year
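Put together, the parameters documented above look like this in a single index stanza (a hedged illustration; the index name and sizes are invented for the example, not part of this config):

```ini
[example_index]
homePath   = volume:primary/example_index/db
coldPath   = volume:primary/example_index/colddb
thawedPath = $SPLUNK_DB/example_index/thaweddb
maxTotalDataSizeMB = 100000          # ~100GB across hot/warm/cold
maxDataSize = auto_high_volume       # 10GB hot buckets on 64-bit systems
maxHotBuckets = 10                   # for an index receiving a lot of data
frozenTimePeriodInSecs = 2592000     # ~1 month, per the cheat sheet above
# enableTsidxReduction = true
# timePeriodInSecBeforeTsidxReduction = 604800   # 1 week
```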

# Default for each index. Can be overridden per index based upon the volume of data received by that index.
#homePath.maxDataSizeMB = 300000
# 200GB
#coldPath.maxDataSizeMB = 200000

# In this example, the volume spec is not defined here, it lives within
# the org_(indexer|search)_volume_indexes app, see those apps for more
# detail.

# One Volume for Hot and Cold
[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 500000

# Two volumes for a "tiered storage" solution--fast and slow disk.
#[volume:home]
#path = /path/to/fast/disk
#maxVolumeDataSizeMB = 256000

# Longer term storage on slower disk.
#[volume:cold]
#path = /path/to/slower/disk
# 5TB with some headroom leftover (data summaries, etc)
#maxVolumeDataSizeMB = 4600000
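With two volumes, an index can split its tiers across disks, assuming the fast and slow volumes are named `volume:home` and `volume:cold` (names are assumptions for this sketch, as is the index name):

```ini
[example_tiered]
homePath   = volume:home/example_tiered/db        # hot/warm on fast disk
coldPath   = volume:cold/example_tiered/colddb    # cold on slower disk
thawedPath = $SPLUNK_DB/example_tiered/thaweddb   # thawedPath cannot use volume:
```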

# Note, many of these use historical directory names which don't match the
# name of the index. A common mistake is to automatically generate a new
# indexes.conf from the existing names, thereby "losing" (hiding from Splunk)
# the existing data.
[main]
homePath = volume:primary/defaultdb/db
coldPath = volume:primary/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb

[history]
homePath = volume:primary/historydb/db
coldPath = volume:primary/historydb/colddb
thawedPath = $SPLUNK_DB/historydb/thaweddb

[summary]
homePath = volume:primary/summarydb/db
coldPath = volume:primary/summarydb/colddb
thawedPath = $SPLUNK_DB/summarydb/thaweddb

[_internal]
homePath = volume:primary/_internaldb/db
coldPath = volume:primary/_internaldb/colddb
thawedPath = $SPLUNK_DB/_internaldb/thaweddb

# For version 6.1 and higher
[_introspection]
homePath = volume:primary/_introspection/db
coldPath = volume:primary/_introspection/colddb
thawedPath = $SPLUNK_DB/_introspection/thaweddb

# For version 6.5 and higher
[_telemetry]
homePath = volume:primary/_telemetry/db
coldPath = volume:primary/_telemetry/colddb
thawedPath = $SPLUNK_DB/_telemetry/thaweddb

[_audit]
homePath = volume:primary/_audit/db
coldPath = volume:primary/_audit/colddb
thawedPath = $SPLUNK_DB/_audit/thaweddb

[_thefishbucket]
homePath = volume:primary/fishbucket/db
coldPath = volume:primary/fishbucket/colddb
thawedPath = $SPLUNK_DB/fishbucket/thaweddb

# For version 8.0 and higher
[_metrics]
homePath = volume:primary/_metrics/db
coldPath = volume:primary/_metrics/colddb
thawedPath = $SPLUNK_DB/_metrics/thaweddb
datatype = metric

# For version 8.0.4 and higher
[_metrics_rollup]
homePath = volume:primary/_metrics_rollup/db
coldPath = volume:primary/_metrics_rollup/colddb
thawedPath = $SPLUNK_DB/_metrics_rollup/thaweddb
datatype = metric

# No longer supported in Splunk 6.3
# [_blocksignature]
# homePath = volume:primary/blockSignature/db
# coldPath = volume:primary/blockSignature/colddb
# thawedPath = $SPLUNK_DB/blockSignature/thaweddb


[os]
homePath = volume:primary/os/db
coldPath = volume:primary/os/colddb
thawedPath = $SPLUNK_DB/os/thaweddb




Hi. Does your primary volume stanza really have

path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 500000

That 500GB should be commented out I think.


@burwell Hello, thank you for the response. Yes, we have that stanza. One more thing: why are we getting that error only for the _audit db?



My guess is that Splunk checks index by index, and _audit happens to be the first index where it hits the error, so it just flushes that error to the console and skips checking the remaining indexes.


Yes this is it. 



I don't know why you get the error for that index.

Did you fix the config and restart? If so, did you get the same error?

Can you run btool to display your indexes settings?

/opt/splunk/bin/splunk btool indexes list --debug
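If the btool output is hard to scan, a quick standalone sanity check can flag the exact condition behind "refers to non-existent volume". This is a hypothetical sketch, not a Splunk tool; the function name and regexes are my own:

```python
import re

def find_missing_volumes(conf_text):
    """Return volume names referenced via 'volume:<name>/...' paths
    that have no matching [volume:<name>] stanza."""
    defined = set(re.findall(r"^\[volume:([^\]]+)\]", conf_text, re.MULTILINE))
    referenced = set(re.findall(r"=\s*volume:([^/\s]+)", conf_text))
    return sorted(referenced - defined)

sample = """
[main]
homePath = volume:primary/defaultdb/db
coldPath = volume:primary/defaultdb/colddb
"""
print(find_missing_volumes(sample))  # -> ['primary'] (no [volume:primary] stanza)
```

Running it over the merged output of btool would show whether `[volume:primary]` is actually making it into the effective configuration.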

