I am creating a new index and getting the error below. My configuration is as follows.
[splunk@ap2-cclabs658055-idx1 ~]$ /opt/splunk/bin/splunk start
Splunk> Another one.
Checking http port : open
Checking mgmt port : open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port : open
Checking configuration... Done.
Checking critical directories... Done
Problem parsing indexes.conf: Cannot load IndexConfig: idx=_audit Configured path 'volume:primary/_audit/db' refers to non-existent volume 'primary'; 1 volumes in config
Validating databases (splunkd validatedb) failed with code '1'. If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue
# Parameters commonly leveraged here:
# maxTotalDataSizeMB - sets the maximum size of the index data, in MBytes,
# over all stages (hot, warm, cold). This is the *indexed* volume (actual
# disk space used) not the license volume. This is separate from volume-
# based retention and the lower of this and volumes will take effect.
# NOTE: THIS DEFAULTS TO 500GB - BE SURE TO RAISE FOR LARGE ENVIRONMENTS!
# maxDataSize - this constrains how large a *hot* bucket can grow; it is an
# upper bound. Buckets may be smaller than this (and indeed, larger, if
# the data source grows very rapidly--Splunk checks for the need to rotate
# every 60 seconds).
# "auto" means 750MB
# "auto_high_volume" means 10GB on 64-bit systems, and 1GB on 32-bit.
# Otherwise, the number is given in MB
# (Default: auto)
# maxHotBuckets - this defines the maximum number of simultaneously open hot
# buckets (actively being written to). For indexes that receive a lot of
# data, this should be 10; other indexes can safely keep the default
# value. (Default: 3)
# homePath - sets the directory containing hot and warm buckets. If it
# begins with a string like "volume:<name>", then volume-based retention is
# used. [required for new index]
# coldPath - sets the directory containing cold buckets. Like homePath, if
# it begins with a string like "volume:<name>", then volume-based retention
# will be used. The homePath and coldPath can use the same volume, but
# should have separate subpaths beneath it. [required for new index]
# thawedPath - sets the directory for data recovered from archived buckets
# (if saved, see coldToFrozenDir and coldToFrozenScript in the docs). It
# *cannot* reference a volume: specification. This parameter is required,
# even if thawed data is never used. [required for new index]
# frozenTimePeriodInSecs - sets the maximum age, in seconds, of data. Once
# *all* of the events in an index bucket are older than this age, the
# bucket will be frozen (default action: delete). The important thing
# here is that the age of a bucket is defined by the *newest* event in
# the bucket, and the *event time*, not the time at which the event
# was indexed.
# TSIDX MINIFICATION (version 6.4 or higher)
# Reduce the size of the tsidx files (the "index") within each bucket to
# a tiny one for space savings. This has a *notable* impact on search,
# particularly those which are looking for rare or sparse terms, so it
# should not be undertaken lightly. First enable the feature with the
# first option shown below, then set the age at which buckets become
# eligible.
# enableTsidxReduction = true / (false) - Enable the function to reduce the
# size of tsidx files within an index. Buckets older than the time period
# set below become eligible for reduction.
# timePeriodInSecBeforeTsidxReduction - sets the minimum age for buckets
# before they are eligible for their tsidx files to be minified. The
# default value is 7 days (604800 seconds).
# Seconds Conversion Cheat Sheet
# 86400 = 1 day
# 604800 = 1 week
# 2592000 = 1 month
# 31536000 = 1 year
# Default for each index. Can be overridden per index based upon the volume of data received by that index.
#homePath.maxDataSizeMB = 300000
#coldPath.maxDataSizeMB = 200000
# VOLUME SETTINGS
# In this example, the volume spec is not defined here, it lives within
# the org_(indexer|search)_volume_indexes app, see those apps for more
# One Volume for Hot and Cold
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 500000
# Two volumes for a "tiered storage" solution--fast and slow disk.
#path = /path/to/fast/disk
#maxVolumeDataSizeMB = 256000
# Longer term storage on slower disk.
#path = /path/to/slower/disk
#5TB with some headroom leftover (data summaries, etc)
#maxVolumeDataSizeMB = 4600000
# SPLUNK INDEXES
# Note, many of these use historical directory names which don't match the
# name of the index. A common mistake is to automatically generate a new
# indexes.conf from the existing names, thereby "losing" (hiding from Splunk)
# the existing data.
homePath = volume:primary/defaultdb/db
coldPath = volume:primary/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
homePath = volume:primary/historydb/db
coldPath = volume:primary/historydb/colddb
thawedPath = $SPLUNK_DB/historydb/thaweddb
homePath = volume:primary/summarydb/db
coldPath = volume:primary/summarydb/colddb
thawedPath = $SPLUNK_DB/summarydb/thaweddb
homePath = volume:primary/_internaldb/db
coldPath = volume:primary/_internaldb/colddb
thawedPath = $SPLUNK_DB/_internaldb/thaweddb
# For version 6.1 and higher
homePath = volume:primary/_introspection/db
coldPath = volume:primary/_introspection/colddb
thawedPath = $SPLUNK_DB/_introspection/thaweddb
# For version 6.5 and higher
homePath = volume:primary/_telemetry/db
coldPath = volume:primary/_telemetry/colddb
thawedPath = $SPLUNK_DB/_telemetry/thaweddb
homePath = volume:primary/_audit/db
coldPath = volume:primary/_audit/colddb
thawedPath = $SPLUNK_DB/_audit/thaweddb
homePath = volume:primary/fishbucket/db
coldPath = volume:primary/fishbucket/colddb
thawedPath = $SPLUNK_DB/fishbucket/thaweddb
# For version 8.0 and higher
homePath = volume:primary/_metrics/db
coldPath = volume:primary/_metrics/colddb
thawedPath = $SPLUNK_DB/_metrics/thaweddb
datatype = metric
# For version 8.0.4 and higher
homePath = volume:primary/_metrics_rollup/db
coldPath = volume:primary/_metrics_rollup/colddb
thawedPath = $SPLUNK_DB/_metrics_rollup/thaweddb
datatype = metric
# No longer supported in Splunk 6.3
# homePath = volume:primary/blockSignature/db
# coldPath = volume:primary/blockSignature/colddb
# thawedPath = $SPLUNK_DB/blockSignature/thaweddb
# SPLUNKBASE APP INDEXES
homePath = volume:primary/os/db
coldPath = volume:primary/os/colddb
thawedPath = $SPLUNK_DB/os/thaweddb
My guess is that Splunk validates the indexes one by one; _audit is the first index where it hits this error, so it prints the error to the console and skips checking the remaining indexes.
I'm not sure why you're getting the error for that index.
Did you fix the config and restart? If so, did you get the same error?
Can you run btool to display your indexes settings?
/opt/splunk/bin/splunk btool indexes list --debug
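Based on the error message, the homePath/coldPath settings reference volume:primary, but no matching volume stanza is visible in the config shown. A minimal sketch of what the missing definition might look like — assuming the path and size values from your posted config, and that the stanza belongs in the same indexes.conf (it could equally live in a separate app, as your VOLUME SETTINGS comment suggests):

```
# Hypothetical fix: declare the volume that every
# "volume:primary/..." homePath and coldPath refers to.
[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 500000
```

If the stanza does exist somewhere, check the btool output above for a typo in the stanza name (e.g. a stray space in [volume:primary]) or for an app precedence issue that hides it.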