Hi Team,
We are currently seeing high CPU utilization on our indexer due to heavy Splunk usage. To resolve this we are considering two ways to upgrade our setup:
1) Upgrade the same server with higher-performance hardware.
2) Split the indexing load across multiple indexer servers.
As bmacias84 mentioned, you don't have a very complex setup... and yet you mention a 20% increase in volume. Was that 150GB per day that has now gone beyond 180GB? Adding an indexer might help at anything less than that, but it would just be diluting the problem, not solving it.
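If you're not sure of the exact number, one rough way to check daily indexing volume (assuming you can search the _internal index on that indexer and it's also acting as its own license master) is to sum the license usage log:
index=_internal source=*license_usage.log type=Usage
| timechart span=1d sum(b) as bytes
| eval GB=round(bytes/1024/1024/1024,2)
| fields _time GB
That gives you indexed GB per day so you can see exactly when and how much the volume jumped.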
You mention high-performance hardware (memory? CPU? etc.)
Unnaturally high CPU usage can point to something like a bad disk. Indexer I/O is quite intensive, so if splunkd is hammering away trying to manage buckets, you're going to see CPU go through the roof. And if the disk has bad spots and good spots, the symptoms won't be consistent. This is a great reason to open a support case, as Support has many tools they can use to help you narrow down what the problem might be.
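If you suspect the disk, a quick sanity check on Linux (assuming the sysstat package is installed) is to watch I/O latency and utilization on the volume holding your indexes while the CPU spike is happening:
iostat -x 5 3
Consistently high await and %util values alongside the CPU spikes would point toward storage rather than raw CPU.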
Check out this page:
http://docs.splunk.com/Documentation/Splunk/6.2.4/Troubleshooting/Collectpstacks
Create a diag of each component:
$SPLUNK_HOME/bin/splunk diag
Then open a support ticket and upload the diag along with the pstack results (or, if you're on Windows, follow the doc's instructions for collecting stack traces).
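On Linux, collecting the pstacks that page describes can be as simple as running pstack against the main splunkd process a few times in a row (assuming gdb/pstack is available; the exact steps in the doc take precedence):
# grab the PID of the oldest (main) splunkd process and take 5 stacks, 10 seconds apart
SPLUNKD_PID=$(pgrep -o splunkd)
for i in 1 2 3 4 5; do
    pstack "$SPLUNKD_PID" > "pstack_${i}.out"
    sleep 10
done
Support can compare the stacks to see where splunkd is spending its time during the spikes.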
What does your current topology look like?
How many transforms (index-time extractions) are being performed?
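If you're not sure, a rough way to see what index-time TRANSFORMS are configured across your apps (assuming shell access to the indexer) is btool:
$SPLUNK_HOME/bin/splunk btool props list --debug | grep -i 'TRANSFORMS-'
The --debug flag shows which app each stanza comes from, which helps spot where the extra index-time work was added.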
I have 1 SH and 1 indexer with 58 forwarders. Since Monday, the volume of data coming into the indexer has increased by up to 20 percent due to some new changes.
Seems like a relatively small deployment. What's your indexing volume? What Splunk apps are you using? Was your indexer ever a search head (all-in-one deployment)? Have you configured the Distributed Management Console? It has a lot of good diagnostic information; I would look at CPU Usage per Splunk Processor. More indexers won't hurt, but adding hardware before you know where your bottleneck is won't necessarily fix your issue.
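If the DMC isn't set up yet, a rough equivalent of that per-processor CPU view (assuming the indexer's _internal index is searchable) is the pipeline data in metrics.log:
index=_internal source=*metrics.log group=pipeline
| timechart span=10m sum(cpu_seconds) by processor
Whichever processor dominates (e.g. regex replacement vs. indexing) tells you whether the load is coming from index-time transforms, parsing, or writing buckets, which is what you want to know before buying hardware.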