Getting Data In

Best Practices for Multi-User Search Performance

balbano
Contributor

Hey guys,

I currently have a 3-server architecture (2 central indexers with 1 search head). We are looking to have Splunk used by multiple sysadmins and developers in the company, and I foresee approx. 10+ user sessions going on at the same time. I am currently testing a sample set of Splunk LF deployment clients sending data that is load-balanced across the 2 indexers. I am also aware of time-restricted searches for some user accounts, as well as summary indexing.
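
For reference, a minimal sketch of the forwarder-side configuration behind this (hostnames and port are placeholders, not my actual servers):

# outputs.conf on each deployed client: auto-load-balance across both indexers
[tcpout]
defaultGroup = central_indexers

[tcpout:central_indexers]
server = indexer1.example.com:9997,indexer2.example.com:9997
# how often (in seconds) the forwarder switches targets; 30 is the default
autoLBFrequency = 30

And the time-restricted searches would be set per role, along these lines (role name is hypothetical):

# authorize.conf: cap searches for this role to the last 24 hours
[role_limited_user]
importRoles = user
srchTimeWin = 86400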

From a hardware perspective, what would be the preferred specs to handle up to 10+ users simultaneously?

Also, is there any specific RAID configuration needed?

The reason I am asking all of this is that Splunk seems to start choking when more than 3 people run simultaneous searches.

Any help you can provide on this would be great.

Thanks.

Brian

1 Solution

hulahoop
Splunk Employee

Hi Brian,

Have you already discovered the Capacity Planning topic on the Community Wiki?

The general rule for sizing search head capacity is 1 core per active user, but the capacity of your indexing servers also plays an important role. In any case, the link above reviews the interplay between search head and indexers and additional considerations for sizing each. It also provides specs for Splunk recommended hardware.
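
To make that rule of thumb concrete, here is a rough sketch of the concurrency math, using the limits.conf [search] defaults (values quoted from memory, so verify them against your version):

# limits.conf defaults that cap concurrent historical searches
[search]
max_searches_per_cpu = 1
base_max_searches = 6

# allowed concurrent searches = max_searches_per_cpu x cores + base_max_searches
# e.g. a 4-core search head: 1 x 4 + 6 = 10 searches admitted,
# but only ~4 of them get a full core at once, which is why a handful
# of simultaneous users can feel slow; 10+ active users suggests 10+ cores.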

balbano
Contributor

Here are the specs; they pretty much show the hardware inconsistency:

indexer 1:

4 CPU Cores: Intel(R) Xeon(R) CPU X5270 @ 3.50GHz
Memory: 4GB
RAID: RAID 5

indexer2:

16 CPU Cores: Intel(R) Xeon(R) CPU X5570 @ 2.93GHz
Memory: 16GB
RAID: RAID 5

search head:

4 CPU Cores: Dual-Core AMD Opteron(tm) Processor 2220 @ 2.8GHz
Memory: 32GB

gkanapathy
Splunk Employee

But given what you've said, either your indexer disks are too slow, or your machines don't have enough CPU cores.

gkanapathy
Splunk Employee

It would be helpful if you also provided specs on the current machines, particularly CPU and core count, and the type of disk/RAID the indexes are on. Also, if you could quantify your approximate data volume and the amount of summarization, that would let us provide more specific answers.
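
If it helps, one common way to approximate daily indexing volume is a search over the internal metrics (a sketch based on the standard metrics.log format; adjust the span to taste):

index=_internal source=*metrics.log group=per_index_thruput series!=_*
| eval MB = kb/1024
| timechart span=1d sum(MB) AS daily_MB

That should give a per-day total of indexed megabytes, excluding Splunk's own internal indexes.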

hulahoop
Splunk Employee

One of our Solution Architects once told me there's no good reason to use RAID 5 or 6: Splunk suffers pretty poor performance under those RAID configurations. Glad you got to the bottom of this and posted your specs for reference.

balbano
Contributor

I think it's a combination of the hardware mismatch between the 2 indexers (one being fairly new and the other older) and the RAID configuration: our indexers use RAID 5 instead of the recommended RAID 10...

Guess that answers my question... thanks...
