Deployment Architecture

Splunk indexer cluster nodes' internal indexes do not properly inherit retention policies from globally defined settings in indexes.conf

RJ_Grayson
Path Finder

Some additional information about the environment:
All indexers are running Splunk 6.3.5. The indexers are all cluster peers receiving slave-apps from the cluster master.

I set up some global index retention policies via a distributed configuration bundle. The bundle is being pushed out to the cluster peer nodes and includes an indexes.conf file that should globally set the following:

/opt/splunk/etc/master-apps/defaultindexretentionbundle/default/indexes.conf

indexes.conf:

[default]
maxHotSpanSecs = 7776000
frozenTimePeriodInSecs = 31536000
maxTotalDataSizeMB = 100000

This bundle works for ALL of the indexes on my peer nodes EXCEPT for the internal indexes.
Does not work on:
_internaldb
_introspection

I'm overriding this for the fishbucket and historydb indexes, as I don't want them to follow this global setting.

Strangely enough, running btool --debug on the indexers shows that they're still using system/default/indexes.conf for frozenTimePeriodInSecs and maxHotSpanSecs, but are using the custom config bundle for maxTotalDataSizeMB.

/opt/splunk/etc/slave-apps/custominternalindexesbundle/default/indexes.conf [_internal]
/opt/splunk/etc/system/default/indexes.conf frozenTimePeriodInSecs = 2592000
/opt/splunk/etc/system/default/indexes.conf maxHotSpanSecs = 432000
/opt/splunk/etc/slave-apps/defaultindexretentionbundle/default/indexes.conf maxTotalDataSizeMB = 100000
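For reference, that per-setting breakdown comes from running btool on a peer node, with something along these lines (the exact binary path depends on the install):

/opt/splunk/bin/splunk btool indexes list _internal --debug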

According to the configuration file order of precedence for cluster peer nodes, settings should take effect in this order:
1. Slave-app local directories (cluster peers only) -- highest priority
2. System local directory
3. App local directories
4. Slave-app default directories (cluster peers only)
5. App default directories
6. System default directory -- lowest priority
With cluster peers, custom settings common to all the peers (those in the slave-app local directories) have the highest precedence.

Can we not override the default settings for these internal indexes? Anyone see something I'm missing? Has anyone else tried doing this?

My next step is to move the config into /defaultindexretentionbundle/local/indexes.conf to see if that makes a difference, but again, according to the order of precedence that shouldn't matter.

1 Solution

RJ_Grayson
Path Finder

Finally figured this out. I was pushing a configuration bundle that provided the following as a global setting for indexes.conf:

[default]
maxHotSpanSecs = 7776000
frozenTimePeriodInSecs = 31536000
maxTotalDataSizeMB = 100000

However, since these settings were already defined PER INDEX in /system/default/indexes.conf, those index-specific stanzas won precedence over the globally defined [default] stanza.

Per Splunk indexes.conf documentation on global settings: "If an attribute is defined at both the global level and in a specific stanza, the value in the specific stanza takes precedence."

I added these values as index-specific stanzas in the indexes.conf of another deployed configuration bundle that handles internal index configurations. Once this bundle was deployed, its index-specific stanzas took precedence over the /system/default/indexes.conf stanzas and everything is configured how I'd like it to be.
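For illustration, the index-specific stanzas in that internal-index bundle ended up looking roughly like this (a sketch based on the global values above; the exact bundle path and values may differ):

/opt/splunk/etc/master-apps/custominternalindexesbundle/default/indexes.conf

[_internal]
maxHotSpanSecs = 7776000
frozenTimePeriodInSecs = 31536000
maxTotalDataSizeMB = 100000

[_introspection]
maxHotSpanSecs = 7776000
frozenTimePeriodInSecs = 31536000
maxTotalDataSizeMB = 100000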




jmheaton
Path Finder

Configs pertaining to an index in a cluster should always be deployed in the same file, or alongside it in a companion file.
Example:

App 1:
IDXCluster_VolumesRestrictions_ALL
Contains volumes, default path location, default retention period (indexes.conf)

App 2:
IDXCluster_Indexes_ALL
Contains index names, paths, custom retention (indexes.conf)

Remove any index configurations from your index cluster's etc/apps/* or etc/system/local/* directories.
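A rough sketch of that two-app layout might look like the following (the volume name, sizes, and sample index are placeholders, not settings from this thread):

IDXCluster_VolumesRestrictions_ALL/default/indexes.conf

[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 500000

[default]
frozenTimePeriodInSecs = 31536000
maxTotalDataSizeMB = 100000

IDXCluster_Indexes_ALL/default/indexes.conf

[web]
homePath = volume:primary/web/db
coldPath = volume:primary/web/colddb
thawedPath = $SPLUNK_DB/web/thaweddb
frozenTimePeriodInSecs = 7776000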


RJ_Grayson
Path Finder

Unless there is some hard-coded limit I'm not aware of, it shouldn't matter how many configuration bundles, and therefore indexes.conf files, are delivered to the cluster nodes' slave-apps directory. Each indexer should read all of the indexes.conf files, apply the order of precedence, and merge them into the running config.

For example, I'm delivering 15+ apps, each with a different indexes.conf, splitting each index into its own configuration bundle. This method works without issue, but for whatever reason the Splunk internal indexes aren't inheriting the global settings I've defined under [default] in one of those bundles.
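As a purely hypothetical illustration of that layout, a single per-index bundle would contain little more than:

/opt/splunk/etc/master-apps/firewallindexbundle/default/indexes.conf

[firewall]
homePath = $SPLUNK_DB/firewall/db
coldPath = $SPLUNK_DB/firewall/colddb
thawedPath = $SPLUNK_DB/firewall/thaweddb

with the retention settings left to the [default] stanza in defaultindexretentionbundle.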

I also do not have any local /etc/apps or /etc/system/local .conf files on the indexers. Everything is being delivered via the cluster master.
