Using volume management on _cluster indexes

brent_weaver
Builder

I have an indexer cluster and am setting up volume management. I have modified the SPLUNK_DB environment variable to point to /data, and I also have a volume defined in my index config pointing to the same location.

From /opt/splunk/etc/splunk-launch.conf:

#   Version 6.4.3

# Modify the following line to suit the location of your Splunk install.
# If unset, Splunk will use the parent of the directory containing the splunk
# CLI executable.
#
# SPLUNK_HOME=/home/build/build-home/galaxy

# By default, Splunk stores its indexes under SPLUNK_HOME in the
# var/lib/splunk subdirectory.  This can be overridden
# here:
#
# SPLUNK_DB=/home/build/build-home/galaxy/var/lib/splunk

# Splunkd daemon name
SPLUNK_SERVER_NAME=Splunkd

# Splunkweb daemon name
SPLUNK_WEB_NAME=splunkweb

# If SPLUNK_OS_USER is set, then Splunk service will only start
# if the 'splunk [re]start [splunkd]' command is invoked by a user who
# is, or can effectively become via setuid(2), $SPLUNK_OS_USER.
# (This setting can be specified as username or as UID.)
#
# SPLUNK_OS_USER
SPLUNK_OS_USER=splunksvc

SPLUNK_DB=/data

On the master index server, from the file /opt/splunk/etc/master-apps/VolMgtConfig/local/indexes.conf:

## The following stanza defines the location of the indexes and manages the space.
## It tells Splunk that the index location can grow to at most 2.75 TB (2883584 MB).

[volume:primary]
path = /data
maxVolumeDataSizeMB = 2883584

I am getting the following messages when I restart the indexers (which makes sense):

09-06-2016 23:32:28.208 +0000 ERROR IndexConfig - idx=history Path coldPath='/data/historydb/colddb' (realpath '/data/historydb/colddb') is inside volume=primary (path='/data', realpath='/data'), but does not reference that volume.  Space used by coldPath will *not* be volume-managed.  Config error?

09-06-2016 23:32:28.208 +0000 ERROR IndexConfig - idx=main Path homePath='/data/defaultdb/db' (realpath '/data/defaultdb/db') is inside volume=primary (path='/data', realpath='/data'), but does not reference that volume.  Space used by homePath will *not* be volume-managed.  Config error?

I understand why this is happening, but do I care? Other than the fact that my _cluster indexes will not be managed by volume management, are there any implications? If so, what would be the best config to remedy this issue?

Thanks!


jwelch_splunk
Splunk Employee

Here is why this error should matter to you.

  1. It generates a lot of log spam, and you are going to be indexing that data, which is a waste of resources.
  2. You may reach a point where /data starts filling up. The indexes managed by the volume may be well below the threshold, but your storage will still fill up because of the indexes that are not managed, which is counterintuitive (see the sketch after this list).
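
To make point 2 concrete, here is a minimal sketch (the index names managed_example and unmanaged_example are hypothetical). Both indexes write to the /data filesystem, but only the first counts against maxVolumeDataSizeMB, because Splunk only tracks space for paths that explicitly reference volume:primary:

[volume:primary]
path = /data
maxVolumeDataSizeMB = 2883584

## Counts against the 2.75 TB cap: homePath and coldPath reference the volume.
## (thawedPath cannot reference a volume, so it keeps $SPLUNK_DB.)
[managed_example]
homePath = volume:primary/managed_example/db
coldPath = volume:primary/managed_example/colddb
thawedPath = $SPLUNK_DB/managed_example/thaweddb

## Does NOT count against the cap, even though $SPLUNK_DB also resolves to /data.
## This is exactly the situation that triggers the ERROR messages above.
[unmanaged_example]
homePath = $SPLUNK_DB/unmanaged_example/db
coldPath = $SPLUNK_DB/unmanaged_example/colddb
thawedPath = $SPLUNK_DB/unmanaged_example/thaweddb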

If you want to keep non-volume-managed indexes on the same mount point, just change the paths like this:

[volume:primary]
path = /data

$SPLUNK_DB = /data/someotherdir

This will get rid of the errors and isolate the non-volume-managed items, so any new index that is created without the forethought of adding it to a volume will sit outside of your actual volume and will not trigger log spam.
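
For reference, here is a minimal sketch of where each of those settings would live, assuming the suggestion means setting SPLUNK_DB in splunk-launch.conf on each indexer (/data/someotherdir is just the placeholder from the answer):

## /opt/splunk/etc/master-apps/VolMgtConfig/local/indexes.conf (pushed from the cluster master)
[volume:primary]
path = /data
maxVolumeDataSizeMB = 2883584

## /opt/splunk/etc/splunk-launch.conf (on each indexer; note there is no leading $ when setting the variable)
SPLUNK_DB=/data/someotherdir

Changing splunk-launch.conf requires a restart of splunkd to take effect, and changing SPLUNK_DB does not move any existing buckets; data already written under the old location would have to be relocated manually.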
