Splunk Enterprise

Problem parsing indexes.conf: Cannot load IndexConfig

kiranpanchavat1
Path Finder

I am creating a new index and getting the error below. My configuration is as follows.

 

[splunk@ap2-cclabs658055-idx1 ~]$ /opt/splunk/bin/splunk start

 

Splunk> Another one.

 

Checking prerequisites...

Checking http port [8000]: open

Checking mgmt port [8089]: open

Checking appserver port [127.0.0.1:8065]: open

Checking kvstore port [8191]: open

Checking configuration... Done.

Checking critical directories... Done

Checking indexes...

Problem parsing indexes.conf: Cannot load IndexConfig: idx=_audit Configured path 'volume:primary/_audit/db' refers to non-existent volume 'primary'; 1 volumes in config

Validating databases (splunkd validatedb) failed with code '1'.  If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue

[splunk@ap2-cclabs658055-idx1 ~]$ 

 

indexes.conf:

# Parameters commonly leveraged here:
# maxTotalDataSizeMB - sets the maximum size of the index data, in MBytes,
# over all stages (hot, warm, cold). This is the *indexed* volume (actual
# disk space used) not the license volume. This is separate from volume-
# based retention and the lower of this and volumes will take effect.
# NOTE: THIS DEFAULTS TO 500GB - BE SURE TO RAISE FOR LARGE ENVIRONMENTS!
#
# maxDataSize - this constrains how large a *hot* bucket can grow; it is an
# upper bound. Buckets may be smaller than this (and indeed, larger, if
# the data source grows very rapidly--Splunk checks for the need to rotate
# every 60 seconds).
# "auto" means 750MB
# "auto_high_volume" means 10GB on 64-bit systems, and 1GB on 32-bit.
# Otherwise, the number is given in MB
# (Default: auto)
#
# maxHotBuckets - this defines the maximum number of simultaneously open hot
# buckets (actively being written to). For indexes that receive a lot of
# data, this should be 10, other indexes can safely keep the default
# value. (Default: 3)
#
# homePath - sets the directory containing hot and warm buckets. If it
# begins with a string like "volume:<name>", then volume-based retention is
# used. [required for new index]
#
# coldPath - sets the directory containing cold buckets. Like homePath, if
# it begins with a string like "volume:<name>", then volume-based retention
# will be used. The homePath and coldPath can use the same volume, but
# should have separate subpaths beneath it. [required for new index]
#
# thawedPath - sets the directory for data recovered from archived buckets
# (if saved, see coldToFrozenDir and coldToFrozenScript in the docs). It
# *cannot* reference a volume: specification. This parameter is required,
# even if thawed data is never used. [required for new index]
#
# frozenTimePeriodInSecs - sets the maximum age, in seconds, of data. Once
# *all* of the events in an index bucket are older than this age, the
# bucket will be frozen (default action: delete). The important thing
# here is that the age of a bucket is defined by the *newest* event in
# the bucket, and the *event time*, not the time at which the event
# was indexed.
# TSIDX MINIFICATION (version 6.4 or higher)
# Reduce the size of the tsidx files (the "index") within each bucket to
# a tiny one for space savings. This has a *notable* impact on search,
# particularly those which are looking for rare or sparse terms, so it
# should not be undertaken lightly. First enable the feature with the
# first option shown below, then set the age at which buckets become
# eligible.
# enableTsidxReduction = true / (false) - Enable the function to reduce the
# size of tsidx files within an index. Buckets older than the time period
# shown below become eligible for reduction.
# timePeriodInSecBeforeTsidxReduction - sets the minimum age for buckets
# before they are eligible for their tsidx files to be minified. The
# default value is 7 days (604800 seconds).
# Seconds Conversion Cheat Sheet
# 86400 = 1 day
# 604800 = 1 week
# 2592000 = 1 month
# 31536000 = 1 year

[default]
# Default for each index. Can be overridden per index based upon the volume of data received by that index.
#300GB
#homePath.maxDataSizeMB = 300000
# 200GB
#coldPath.maxDataSizeMB = 200000

# VOLUME SETTINGS
# In this example, the volume spec is not defined here, it lives within
# the org_(indexer|search)_volume_indexes app, see those apps for more
# detail.

One Volume for Hot and Cold
[volume:primary]
path = /opt/splunk/var/lib/splunk
500GB
maxVolumeDataSizeMB = 500000

# Two volumes for a "tiered storage" solution--fast and slow disk.
#[volume:home]
#path = /path/to/fast/disk
#maxVolumeDataSizeMB = 256000
#
# Longer term storage on slower disk.
#[volume:cold]
#path = /path/to/slower/disk
#5TB with some headroom leftover (data summaries, etc)
#maxVolumeDataSizeMB = 4600000

# SPLUNK INDEXES
# Note, many of these use historical directory names which don't match the
# name of the index. A common mistake is to automatically generate a new
# indexes.conf from the existing names, thereby "losing" (hiding from Splunk)
# the existing data.
[main]
homePath = volume:primary/defaultdb/db
coldPath = volume:primary/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb

[history]
homePath = volume:primary/historydb/db
coldPath = volume:primary/historydb/colddb
thawedPath = $SPLUNK_DB/historydb/thaweddb

[summary]
homePath = volume:primary/summarydb/db
coldPath = volume:primary/summarydb/colddb
thawedPath = $SPLUNK_DB/summarydb/thaweddb

[_internal]
homePath = volume:primary/_internaldb/db
coldPath = volume:primary/_internaldb/colddb
thawedPath = $SPLUNK_DB/_internaldb/thaweddb

# For version 6.1 and higher
[_introspection]
homePath = volume:primary/_introspection/db
coldPath = volume:primary/_introspection/colddb
thawedPath = $SPLUNK_DB/_introspection/thaweddb

# For version 6.5 and higher
[_telemetry]
homePath = volume:primary/_telemetry/db
coldPath = volume:primary/_telemetry/colddb
thawedPath = $SPLUNK_DB/_telemetry/thaweddb

[_audit]
homePath = volume:primary/_audit/db
coldPath = volume:primary/_audit/colddb
thawedPath = $SPLUNK_DB/_audit/thaweddb

[_thefishbucket]
homePath = volume:primary/fishbucket/db
coldPath = volume:primary/fishbucket/colddb
thawedPath = $SPLUNK_DB/fishbucket/thaweddb

# For version 8.0 and higher
[_metrics]
homePath = volume:primary/_metrics/db
coldPath = volume:primary/_metrics/colddb
thawedPath = $SPLUNK_DB/_metrics/thaweddb
datatype = metric

# For version 8.0.4 and higher
[_metrics_rollup]
homePath = volume:primary/_metrics_rollup/db
coldPath = volume:primary/_metrics_rollup/colddb
thawedPath = $SPLUNK_DB/_metrics_rollup/thaweddb
datatype = metric

# No longer supported in Splunk 6.3
# [_blocksignature]
# homePath = volume:primary/blockSignature/db
# coldPath = volume:primary/blockSignature/colddb
# thawedPath = $SPLUNK_DB/blockSignature/thaweddb

# SPLUNKBASE APP INDEXES

[os]
homePath = volume:primary/os/db
coldPath = volume:primary/os/colddb
thawedPath = $SPLUNK_DB/os/thaweddb

 


burwell
SplunkTrust

Hi. Does your primary volume stanza really have

[volume:primary]
path = /opt/splunk/var/lib/splunk
500GB
maxVolumeDataSizeMB = 500000


I think that "500GB" line should be commented out: it's bare text inside the stanza, so Splunk fails to parse the volume definition. The "One Volume for Hot and Cold" line above the stanza header needs a leading # as well.
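For reference, a version of the stanza with the stray text commented out, so only the stanza header and key = value pairs remain, would look like:

```ini
# One Volume for Hot and Cold
[volume:primary]
path = /opt/splunk/var/lib/splunk
# 500GB
maxVolumeDataSizeMB = 500000
```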

kiranpanchavat1
Path Finder

@burwell Hello, thank you for the response. Yes, we do. One more thing: why are we getting that error only for the _audit db?


VatsalJagani
Super Champion

My guess is that Splunk checks the indexes one by one, and _audit happens to be the first index where it hits the error, so it prints that error to the console and skips checking the remaining indexes.

bertjo
Explorer

Yes this is it. 


burwell
SplunkTrust

I don't know why you get the error for that index.

Did you fix the config and restart? If so, did you get the same error?

Can you run btool to display your indexes settings?

/opt/splunk/bin/splunk btool indexes list --debug
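Independently of btool, you can reproduce the kind of line that trips the conf parser with a quick sketch (a hypothetical helper, not a Splunk tool) that flags any line which is neither a comment, a [stanza] header, nor a key = value pair:

```python
# Minimal conf-file lint: flag bare lines that are not comments,
# stanza headers, or key = value settings (hypothetical helper,
# not part of Splunk).
def find_bare_lines(conf_text):
    bad = []
    for lineno, raw in enumerate(conf_text.splitlines(), start=1):
        line = raw.strip()
        if not line or line.startswith("#"):
            continue                      # blank or comment: fine
        if line.startswith("[") and line.endswith("]"):
            continue                      # stanza header: fine
        if "=" in line:
            continue                      # key = value: fine
        bad.append((lineno, line))        # bare text: breaks parsing
    return bad

snippet = """\
One Volume for Hot and Cold
[volume:primary]
path = /opt/splunk/var/lib/splunk
500GB
maxVolumeDataSizeMB = 500000
"""

print(find_bare_lines(snippet))
# → [(1, 'One Volume for Hot and Cold'), (4, '500GB')]
```

Run against the volume stanza from the question, it flags exactly the two bare lines that keep 'primary' from being defined.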

 
