We're expanding our Splunk environment from a single indexer machine that does everything, to an environment that has dual indexers + a dedicated search head.
From my research, there seems to be a large amount of information on how to set things up BEFORE a system goes live. In our case, we've been live for approximately 4 months, and have a very active system.
I'd like to move a subset of our existing indexes to the new indexer in order to segregate the search load between the two instances (i.e., moving all my WebLogic indexing to the second server so it doesn't impact people searching Windows logs).
There are a couple of questions I can't seem to find the answer to. I want to get some advice before I move forward with this, especially since I'm planning on doing it next week.
Thanks! Brian
You don't have to move the indexes at all. If you start forwarding the new data, it will be split between the two. However, querying older data will not benefit at all from the new hardware, so it will be as slow as it is now.
But if you have two indexers, I would recommend distributing the data between them as evenly and randomly as possible: take every forwarder and configure autoLB between the two, and take every existing index and split it between the two. It's not that hard, and you get the benefit of increased performance over the old data as well.
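On the forwarder side, autoLB is a couple of lines in each forwarder's outputs.conf. A minimal sketch (the group name, hostnames, and port are placeholders, adjust for your environment):

```ini
[tcpout]
defaultGroup = two_indexers

[tcpout:two_indexers]
server = indexer1.example.com:9997,indexer2.example.com:9997
autoLB = true
```

With autoLB the forwarder switches between the listed indexers periodically, so new data lands on both roughly evenly.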
To split an index from a single node to two nodes:
Of course, if you are adding two more nodes (or three more), you take every third (or fourth) bucket.
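The bucket-splitting idea above can be sketched in shell. This is an illustrative demo only: it fakes a few warm buckets in a scratch directory rather than touching a live $SPLUNK_DB, and the index name and bucket timestamps are made up. The same loop would apply to a real index directory with Splunk stopped.

```shell
# Stand-ins for the old indexer's index dir and a staging area
# destined for the new indexer (scratch dirs, not real Splunk paths)
SRC=$(mktemp -d)/weblogic/db
DST=$(mktemp -d)/weblogic/db
mkdir -p "$SRC" "$DST"

# Fake a few Splunk warm buckets: db_<newestTime>_<oldestTime>_<id>
for id in 0 1 2 3 4 5; do
    mkdir "$SRC/db_1300000000_1290000000_$id"
done

# Move even-numbered bucket IDs to the new indexer, keep odd ones in place
for b in "$SRC"/db_*; do
    id=${b##*_}                      # trailing bucket ID
    if [ $((id % 2)) -eq 0 ]; then
        mv "$b" "$DST/"
    fi
done

ls "$SRC"    # odd-numbered buckets remain
ls "$DST"    # even-numbered buckets moved
```

For three or four indexers, replace the `% 2` test with `% 3` or `% 4` and pick a different remainder per target.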
Re: 1, there is a validated process documented here: http://www.splunk.com/base/Documentation/latest/Admin/Moveanindex
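Roughly, that procedure amounts to stopping Splunk, copying the index directory to its new location, and repointing the paths in indexes.conf before restarting. A sketch of the repointed stanza (index name and paths are placeholders; follow the linked doc for the authoritative steps):

```ini
[weblogic]
homePath   = /new/disk/splunk/weblogic/db
coldPath   = /new/disk/splunk/weblogic/colddb
thawedPath = /new/disk/splunk/weblogic/thaweddb
```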
:)
Awesome, thanks!
Well, it's just copying folders, so you could just do:

    cp -R db_*[02468] /target/directory/

to get all the even-numbered ones, for instance.
That sounds like a plan. I'll have to do some research on the load balancing and on splitting off the buckets. You don't happen to have an automated way to do this? I have almost a terabyte of data..