Hi,
I had a question about hardware scaling.
Our current setup:
1 search head: Intel Xeon 2.54 GHz, 4 processors, 8 GB RAM, 64-bit Windows Server 2008
2 indexers, splunkweb disabled:
a. Intel Xeon 2.54 GHz, 4 processors, 8 GB RAM, 64-bit Windows Server 2003
b. Intel Xeon 2.54 GHz, 4 processors, 8 GB RAM, 64-bit Windows Server 2008
We currently index around 2 GB of data every day, and this is not a problem. Most of the data comes in from Windows universal forwarders on about 150 hosts, plus a few Unix/appliance syslogs. We will be increasing this to around 4 GB per day in the near future.
However, more users are now using Splunk for their queries, and recently a user complained that real-time searches were not performing well. Our users are familiar with Splunk and avoid doing searches over all indexes over all time, etc.
We currently have around 15 concurrent users, and this might go up to around 30 in the future. I was wondering whether our current search head/indexer hardware is enough to handle our users' searches, or whether we need to add more.
For this scenario, would there be a need to add more hardware, i.e.
1. More memory to the indexers and/or search heads
2. More indexers and/or search heads
I went over http://www.splunk.com/base/Documentation/latest/Installation/CapacityplanningforalargerSplunkdeploym...
but was not able to come up with a definite answer myself.
Thanks
the issue is less about concurrent users and more about concurrent searches:
there is a table here with recommendations, and a detailed discussion with an example scenario.
the tl;dr of the section is that if you want search performance to improve, ensure that as many cores as possible are available to the search process.
"The lesson here is to add indexers. Doing so reduces the load on any system from indexing, to free cores for search. Also, since the performance of almost all types of search scale with the number of indexers, searches will be faster, which mitigates the effect of slowness from resource sharing."
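As a rough illustration of that core math, here is a back-of-the-envelope sketch. The specific figures (one core per active search on each indexer, two cores reserved for indexing) are assumptions for illustration, not numbers from this thread:

```python
# Back-of-the-envelope sketch: how many searches can run before an
# indexer's cores are oversubscribed. Assumption: each active search
# occupies roughly one core on every indexer, and indexing itself
# needs a couple of cores held in reserve.

def concurrent_search_headroom(cores_per_indexer: int,
                               indexing_reserve: int = 2) -> int:
    """Cores left on each indexer for concurrently running searches."""
    return max(0, cores_per_indexer - indexing_reserve)

# A 4-core indexer like the poster's leaves roughly 2 cores for
# searches, so even a handful of simultaneous (especially real-time)
# searches will contend for CPU.
print(concurrent_search_headroom(4))  # -> 2
```

Under these assumptions, adding a third 4-core indexer does not just add search headroom; it also spreads the indexing load, which is why the docs push "add indexers" as the first scaling step.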
I agree with piebob; we've found in our deployment that adding indexers, with load balancing and distributed search, significantly improves indexing and search performance.
Basically, search heads are CPU-intensive, while indexers are memory-intensive. So plan accordingly, but scaling out indexers will give you the most bang for your buck, imho.
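If you do add an indexer, the forwarders can spread their traffic across all of them with auto load balancing in outputs.conf. A minimal sketch (the hostnames and port below are placeholders, not values from this thread):

```ini
# outputs.conf on each universal forwarder: auto load balancing
# across the indexers (hostnames/port are placeholder values)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer-a.example.com:9997, indexer-b.example.com:9997
# switch target indexer every 30 seconds (the default)
autoLBFrequency = 30
```

With this in place, distributed search from the search head then benefits from the data being spread roughly evenly across the indexers.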