Is it possible to have, for example, multiple search peers reading a single shared index to speed up concurrent search performance? We have 6+ concurrent users with heavy summary indexing and dashboards with hidden saved searches, etc., all on one receiver/indexer.
All I have seen so far is distributed indexes for distributed searches.
Someone asked about cloning indexes, but the answer was that this will only produce duplicate events in results.
Since writing this question I have looked at several posts regarding cloned indexers and saw suggestions to use 'dedup' and the like.
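For reference, the dedup workaround those posts describe amounts to something like this (the index name and search terms are just placeholders; deduplicating on the raw event text and timestamp is one common choice, not necessarily the one those posts used):

```
index=main error | dedup _raw, _time | stats count by host
```

This collapses identical events returned from multiple copies of the index, at the cost of extra work in every search.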
We were thinking of adding search peers and mounting the shared index via NFSv4 read-only,
as our current indexer receives parsed, cooked data from our heavy forwarder and that goes
straight into the index queue anyway. So, maybe a bit less load, but still WAY too slow.
So the above host will be the only one writing to the index and the rest will merely be reading, so there should be no file-locking problems?
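Concretely, the read-only mount on each search peer would look something like this (the hostname and index path are examples for our setup, not a recommendation):

```
# /etc/fstab on each search peer: read-only NFSv4 mount of the shared index
indexer01:/opt/splunk/var/lib/splunk  /opt/splunk/var/lib/splunk  nfs4  ro,noatime  0  0
```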
I was thinking, and am probably horribly off the mark here, but could we try the above, along with NTP to ensure accurate results:
split the search into three parts based on, say, two-hour intervals and send each part to a different search node, asking A to do the first two hours, B the next two hours, and C the last two, and so forth?
Can this also be achieved through knowledge objects?
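As a sketch of the time-split idea, the three sub-searches might look something like this (index and search terms are placeholders; the @h snapping just keeps the windows aligned on hour boundaries):

```
(node A)  index=main error earliest=-6h@h latest=-4h@h
(node B)  index=main error earliest=-4h@h latest=-2h@h
(node C)  index=main error earliest=-2h@h latest=now
```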
Do cloned indexers require an additional license if they read NFS exports?
Am I right in saying that if you send the search to the head, and it passes the search along to the three nodes as-is, with all three reading the "same" shared index, you will get results in triplicate?
But avoiding that will require careful crafting of searches, not to mention changing existing ones?
Does each of the search nodes do an equal amount of heavy lifting and present the results back to the head, or does the head process the received results and then present them to the user?