Deployment Architecture

How do I properly declare indexes and volumes on the search head layer, as a best practice?

emallinger
Communicator

Hi,

I'd like to properly declare my indexes on the search head layer as suggested in the docs.
All my indexes are declared through the indexer cluster manager node and are available.

I could not find the right page on docs.splunk.com or in the KB that explains how I'm supposed to declare my indexes on the search head layer.

Each index is declared in two files on the indexer cluster (sketched below):
- the index stanza, referencing a volume name (distributed in the bundle from the manager node)
- the volume definition (identical on each indexer, kept in system/local so the keys get encrypted)
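
For illustration, the layout looks roughly like this (volume, bucket, and index names are placeholders):

# indexes.conf distributed in the bundle from the manager node
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
remotePath = volume:remote_store/my_index

# system/local/indexes.conf, identical on each indexer, so the S3 keys get encrypted
[volume:remote_store]
storageType = remote
path = s3://my-bucket/splunk
remote.s3.access_key = <access key>
remote.s3.secret_key = <secret key>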

I tried copying the file with only the index stanzas to my search head and ran into a wall, as the volume does not exist on that instance (which is true).
Does the file need to be stripped of some properties? Or updated in some way?

Could you point me to the right documentation page?
Of course I googled my question, and unfortunately I couldn't find any satisfactory answer.

Thanks !

Ema


emallinger
Communicator

Hi,

Thank you for your insights.

"I don't suggest anything to put in the system/local or indexer. Deploy everything from cluster-master"

=> s3 access_keys needs to be encrypted. This does not happen with deployment from cluster master (I tested it). docs.splunk.com tell that only system/local encrypts... so I'm a little stuck there. Suggestions ?
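
For reference, the behavior I'm relying on looks roughly like this (placeholder values; the exact encrypted form may differ):

# system/local/indexes.conf on an indexer, before a restart
[volume:remote_store]
remote.s3.access_key = <plaintext access key>
remote.s3.secret_key = <plaintext secret key>

# after a restart, Splunk rewrites the secret in place as an encrypted value;
# the same stanza delivered through the manager node's bundle stays in plaintext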

The other additional points are already in place :-).

 

Could you describe what would happen if I used the "dummy location" scenario?

That would mean that instead of my S3 bucket volume, I would use a local path.

Is there any risk that the search head gets confused about where to search, given that the location does not contain any data at all?

Thanks again,

Ema

 


VatsalJagani
SplunkTrust

Okay, got it. You are using S3.

I think in that case the best option would be to have a separate variant of the indexes.conf file.
In the search head variant, you can keep only the minimum attributes:

[<index>]
homePath   = $SPLUNK_DB/<index>/db
coldPath   = $SPLUNK_DB/<index>/colddb
thawedPath = $SPLUNK_DB/<index>/thaweddb

 

Remember, at the search head level this configuration is only useful for providing suggestions while you are typing in the Splunk search bar*.

* Provided you are forwarding the logs to the indexing layer, as I mentioned in my other answer.

VatsalJagani
SplunkTrust

I would suggest copying all the indexes.conf files. If your indexes.conf points to a location that does not exist on the search head instances, then:

* If possible, create a dummy location/folder so you don't have to maintain two variants of indexes.conf (for example, if your coldPath is /data/splunk, create a dummy folder at that path on the search head as well; otherwise Splunk will not start). See the sketch below.

* If that is not possible, remove those attributes from the search head version of indexes.conf.
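
A minimal sketch of the dummy-location approach (paths and index names are placeholders; the folders must exist on the search head before Splunk starts):

# same indexes.conf on the indexers and the search head
[my_index]
homePath   = /data/splunk/my_index/db
coldPath   = /data/splunk/my_index/colddb
thawedPath = /data/splunk/my_index/thaweddb
# on the search head these folders simply stay empty; searches are
# dispatched to the indexers, so nothing is ever read from here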

 

Additional points:

- I don't suggest putting anything in system/local on the indexers. Deploy everything from the cluster manager.

- Keep all indexers identical.

- Forward the search head logs to the indexers if you are not already doing so (see the sketch below) - https://docs.splunk.com/Documentation/Splunk/8.2.5/DistSearch/Forwardsearchheaddata
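
For that last point, a minimal outputs.conf sketch on the search head, along the lines of the linked docs (the group name and indexer addresses are placeholders):

# outputs.conf on the search head
[indexAndForward]
index = false

[tcpout]
defaultGroup = my_search_peers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:my_search_peers]
server = 10.10.10.1:9997,10.10.10.2:9997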

 

Kindly accept/upvote the answer if it resolves your issue!!!
