Splunk Search

How to replicate a Search Head Cluster's KV Store lookup data to an Indexer?

earakam
Path Finder

Hi,

I have a clustered environment (a Search Head Cluster with 3 SHs working with an Indexer Cluster with 2 IDXs), and I have created a KV Store lookup on the Search Head Cluster and set replicate = true.
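For reference, here is a rough sketch of what I mean by that setup (the collection and lookup names below are placeholders, not my real ones):

    # collections.conf (in an app on the search head cluster)
    [my_collection]
    replicate = true

    # transforms.conf -- the lookup definition that points at the collection
    [my_kvstore_lookup]
    external_type = kvstore
    collection = my_collection
    fields_list = _key, host, description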

Now I am trying to use that same KV Store (which I hope has been replicated to the indexers) as a lookup on the Indexer; however, it doesn't seem to be working. I have tried | inputlookup, but I get no results.

Could someone tell me the proper way of doing this, and how to check that the KV Store is replicated properly from the Search Head to the Indexer? I would also like to check whether the KV Store is replicated within the Search Head Cluster.

Thank you.

--- Edited ---

Concerning this issue, I tried capturing packets between the search head and the indexer, but could not see any communication other than on port 8089. According to http://docs.splunk.com/Documentation/Splunk/latest/Admin/Collectionsconf, if I set replicate = true the collection will be replicated to the indexers, but that doesn't seem to be working. Also, I cannot find any other documentation about this, nor any helpful answer posts.
Please, if anyone knows or has done this before, teach me how!

1 Solution

dwaddle
SplunkTrust

When you use the replicate=true option within collections.conf, what winds up happening is that the KV Store collection is sharded and persisted to one or more CSV files. These CSV files become part of the search head cluster's knowledge bundle and are replicated to the indexers via the standard knowledge bundle replication functionality. Splunk makes no attempt at all to have the MongoDB processes on search heads replicate data to related MongoDB processes on indexers.
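If you want to confirm this on an indexer, one rough check (assuming default paths; the bundle file name below is a placeholder) is to look inside the most recent knowledge bundle, where a replicated collection shows up as a CSV:

    # On an indexer: replicated knowledge bundles land under var/run/searchpeers
    ls $SPLUNK_HOME/var/run/searchpeers/
    # A .bundle file is a tar archive; list its contents and look for
    # the CSV that was generated from the KV Store collection
    tar -tf $SPLUNK_HOME/var/run/searchpeers/<name-of-bundle>.bundle | grep -i csv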

The knowledge bundle aspect of Splunk can lead to some unexpected circumstances. When you have an environment with multiple search head clusters (or multiple independent search heads), each one has its own unique and individual knowledge bundle. When an indexer runs a search for search head A, it uses A's bundle; a search for B uses B's bundle. And if you connect to the indexer directly via its own Splunkweb interface and run a search there, then the indexer will use its own local knowledge bundle. Most of the time, best practice on indexer clusters is to disable their Splunkweb entirely and use only search heads to search the cluster.
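As for checking replication within the search head cluster itself, the KV Store status CLI is the usual starting point (exact output fields vary by version):

    # Run on any search head cluster member
    $SPLUNK_HOME/bin/splunk show kvstore-status
    # Each SHC member should be listed, with a replicationStatus of
    # "KV store captain" or "Non-captain KV store member"

If a member is missing or shows a failed status there, KV Store replication within the cluster is the first thing to fix.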

lguinn2
Legend

Awesome answer.

0 Karma

kranzrm
Path Finder

How would you replicate the KV Store to the indexers? When I run the command .... | lookup kvstore_lookup ..., the search simply hangs. I only get results when I add local=true to the lookup command. (I am performing the search from a search head.)
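For example (the field names here are placeholders):

    ... | lookup kvstore_lookup host OUTPUT description
    (hangs)

    ... | lookup local=true kvstore_lookup host OUTPUT description
    (returns results)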

0 Karma

Jarohnimo
Builder

Anyone from Splunk agree about the master source config? Kind of like a desired-state option. Given all the manual edits in Splunk, it would be nice not to have to set up certain items manually, especially in a distributed environment. It should be like SharePoint, where you join the server to the main Splunk management server and it automatically downloads all reports, dashboards, lookups, apps, etc.; then all you would have to do is select the relevant role for that server. This is an enhancement very much desired by me. Developers at Splunk, take note!

0 Karma

Jarohnimo
Builder

Sorry you haven't had anyone from Splunk Support chime in; I would like to know too. I'm curious how to replicate my configuration changes to the other nodes within the Splunk farm. It's stupid to have to set all that up manually. It's safe, and only right, to have it copy the settings from a master source and force all other configs to match the master source.

0 Karma