Getting Data In

Share index by NFS to multiple searchers

jrodriguezap
Contributor

Hello.
Let me explain the scenario:
I have two servers with different roles: ServerA (receives and indexes the data, runs few searches) and ServerB (runs full searches).
ServerB holds the storage, a /dbsplunk partition of 3 TB.
ServerA receives the data and indexes it onto /dbsplunk, which it mounts from ServerB over NFS.

So far, indexing works very well and the data is stored without problems. But when searching, both servers can search the WARM and COLD buckets, while the HOT bucket cannot be searched. I understand this is because Splunk locks the HOT bucket while it is being written, but I also understand that Splunk can replicate hot buckets so that other searchers can access them (one copy for each searcher), according to the following link: http://docs.splunk.com/Documentation/Splunk/6.6.0/Indexer/HowSplunkstoresindexes#Bucket_names

How could I get ServerA and ServerB to search the HOT bucket at the same time? I know that indexes.conf has the repFactor = auto parameter, but I do not know how to make it work.
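For reference, this is the parameter I mean, in a minimal indexes.conf sketch (the index name "myindex" and the paths are hypothetical, and as far as I understand repFactor = auto only takes effect when the indexer is a peer node in an indexer cluster):

    # indexes.conf on the indexer -- hypothetical index name and default paths
    [myindex]
    homePath   = $SPLUNK_DB/myindex/db
    coldPath   = $SPLUNK_DB/myindex/colddb
    thawedPath = $SPLUNK_DB/myindex/thaweddb
    # replicate this index across cluster peers; only effective on peer nodes
    repFactor  = auto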


yannK
Splunk Employee

Splunk does not support that setup: each indexer will try to manage the same buckets and generate conflicts.

Why not use the indexer clustering (index replication) feature to replicate the indexes you want onto both indexers (with a replication factor and search factor of 2)?
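Roughly, a minimal sketch of the clustering configuration (hostnames, the replication port, and pass4SymmKey are placeholders, and it assumes a separate third instance acting as the cluster master, since a master node cannot also be a peer):

    # server.conf on the cluster master (a separate Splunk instance)
    [clustering]
    mode = master
    replication_factor = 2
    search_factor = 2
    pass4SymmKey = <your_cluster_secret>

    # server.conf on each peer (ServerA and ServerB)
    [replication_port://9887]

    [clustering]
    mode = slave
    master_uri = https://<master-host>:8089
    pass4SymmKey = <your_cluster_secret>

    # indexes.conf on the peers: set repFactor = auto on the indexes to replicate

The idea is that each peer then keeps its own searchable copy of the data (including hot buckets) on local storage, so the buckets no longer need to be shared over NFS.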


jrodriguezap
Contributor

Has anyone else run into this situation?
