I am in the process of migrating (in production!) from a two-machine implementation (one search head, and one search head/indexer) to a properly distributed setup. I placed a heavy forwarder in-line yesterday, and it seems to be humming along smoothly. The load on my indexer has gone down quite a bit, since it is no longer having to parse the incoming data.
Here's what I am trying to do:
log-001 (SH/IN) -> log-001 (IN)
log-002 (SH) -> log-002 (SH/IN)
log-003 (HF) -> log-003 (HF)
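For the forwarding piece, here's roughly what I have in mind for outputs.conf on log-003, auto load balancing new data across both indexers (hostnames are mine, and 9997 is just the conventional receiving port — correct me if I'm off):

```ini
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
# round-robin new data between the two indexers
server = log-001:9997, log-002:9997
# switch target indexer every 30 seconds
autoLBFrequency = 30
```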
log-001:
8 cores @ 3.0 GHz
800 GB storage (broken up into 250 hot and 550 warm/cold)

log-002:
24 cores @ 2.8 GHz
150 GB storage (RAID 10)
Indexing approximately 100 GB a day, soon to be 300 😕
My question is: what's the best way to re-allocate the resources I have to get the best performance out of Splunk?
I am guessing I will need more storage on log-002, as well as a procedure to "split" the indexes that currently live on 001.
If I don't add more storage to 002, what will be the impact, given that its drive isn't able to hold an equal share of the indexed data?
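If I leave 002 at 150 GB, I assume I'd at least have to cap its indexes so the disk doesn't fill — something like this in indexes.conf (the sizes here are just my guess, not tuned numbers):

```ini
[main]
homePath   = $SPLUNK_DB/defaultdb/db
coldPath   = $SPLUNK_DB/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
# cap total bucket storage well under the 150 GB disk
maxTotalDataSizeMB = 120000
# freeze (delete or archive) events older than ~90 days
frozenTimePeriodInSecs = 7776000
```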
Am I correct in thinking that moving the search functionality entirely off of log-001 will leave me with a more-suited-to-the-task indexer?
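If 002 does end up as the search head, I'm assuming I'd just point it at both boxes as search peers in distsearch.conf (8089 being the management port) — is it really that simple?

```ini
[distributedSearch]
# search across both indexers from the search head
servers = log-001:8089, log-002:8089
```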
aaaaaand, I thank ya's
Splunk grew faster than the HW; attempting to migrate from an overloaded box to a distributed environment. Tips? Gotchas?