I've been using Splunk for the past four years and am loving it. I would like to know from the Splunk Community what you all think of the following configuration setups.
I am currently running a single-instance Splunk server with 16 cores, 64 GB of RAM, and 7.2k HDDs. I'm monitoring about 50-100 servers (90%+ are different flavors of Linux) which have very low indexing volume; all servers together amount to about 150-200 MB/day. But I also have about 75 users who need access to the dashboards. I inherited the server about four years ago; it was initially designed as a PoC and eventually got shipped into production (with no changes). The server is old and needs to be evergreened. My question is this: if you could design a new system, what would you go with? I have the possibility of using VMs to create a more distributed environment instead of a single instance, but is it worth it? My idea was to use AMD's new 64-core EPYC chips with a similar 64 GB of RAM, a 500 GB SATA SSD, and 4 TB of 7.2k drives for historical searches. I'm curious to see what the Splunk Trust & Community has to say, because I was talking with several people at .conf19 and most of them had not seen this kind of environment before (low amounts of data and high numbers of users). Any ideas or suggestions would be appreciated.
Before redesigning your architecture, in my opinion, you should run some checks on your existing infrastructure and then design your new architecture by defining the requirements (see below).
By analyzing your current infrastructure, you can find out (using the Monitoring Console) how many searches you run in a day and at peak.
Using this information you can define how many CPUs you need.
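As a rough illustration of turning that peak-search figure into a core count, here is a back-of-the-envelope sketch. The one-core-per-concurrent-search ratio and the base reserve for indexing/OS overhead are planning assumptions, not official Splunk sizing figures:

```python
# Rough CPU sizing sketch. Assumptions (not official Splunk guidance):
# ~1 CPU core per concurrent search, plus a base reserve of cores for
# indexing pipelines and OS overhead.

def estimate_search_cores(peak_concurrent_searches: int,
                          base_reserve: int = 8) -> int:
    """Estimate the cores needed to serve a given search peak."""
    return peak_concurrent_searches + base_reserve

# Example: Monitoring Console shows a peak of 20 concurrent searches.
print(estimate_search_cores(20))  # 28 cores
```

Plug in the actual peak you observe in the Monitoring Console rather than the example value.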
In addition, you should define whether you have HA requirements; if so, you should migrate your infrastructure from a standalone instance to a distributed one with Indexer and/or Search Head Clusters.
Another parameter to analyze is the IOPS of your storage: Splunk recommends at least 800 IOPS, which means at least eight 15k SAS disks; to reach this you could also distribute storage across more Indexers.
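The disk count above falls out of simple arithmetic; here is a small sketch of it. The ~100 IOPS-per-spindle figure is a conservative planning assumption (real throughput depends on RAID level and workload), not a measured value:

```python
# Back-of-the-envelope disk count for a target IOPS figure.
# Assumption: ~100 IOPS per 15k SAS spindle as a conservative planning
# number; actual IOPS depend on RAID level and I/O pattern.

import math

def disks_needed(target_iops: int, iops_per_disk: int = 100) -> int:
    """Round up the number of spindles required for the target IOPS."""
    return math.ceil(target_iops / iops_per_disk)

print(disks_needed(800))  # 8 disks for the recommended 800 IOPS
```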
In addition, you could also analyze whether some of your searches can be optimized.
As I said, the main parameters (but there could also be others) for designing your architecture are search load, HA requirements, and storage IOPS.
In other words, this isn't a simple question for the Community; I think it needs at least a Splunk Architect or (better) Splunk PS.
If you want, look at the Splunk Validated Architectures document (https://www.splunk.com/pdfs/technical-briefs/splunk-validated-architectures.pdf): it describes your options, as @gcusello said, but it doesn't give you any exact HW combinations to deploy.
Remember that if you move to a distributed architecture, you also need more cores at the indexer layer. It doesn't make sense to have lots of cores and memory only on the SH(C) layer if there aren't enough resources at the indexer layer to serve them!
Since you mentioned 75 users, and you need to allocate roughly 1 CPU = 1 search per user at any given time, having 75 CPUs in one box may cost you more. I would suggest a search head cluster to balance resources more effectively:
A search head cluster with 3 search members and one load balancer.
Each search member should have at least 24 CPUs. The reason for provisioning extra CPUs is that we don't know in advance how many concurrent searches each member will receive.
There should not be any delay in serving users.
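To see what such a cluster buys you, here is a sketch of the per-search-head concurrency formula Splunk derives from `limits.conf`. The defaults shown (`max_searches_per_cpu = 1`, `base_max_searches = 6`) are assumptions; verify them against the docs for your Splunk version:

```python
# Sketch of Splunk's default historical-search concurrency formula per
# search head (limits.conf). The defaults below are assumptions taken
# from common Splunk versions -- check your version's documentation.

def max_concurrent_searches(cpus: int,
                            max_searches_per_cpu: int = 1,
                            base_max_searches: int = 6) -> int:
    """Concurrency limit for one search head with the given CPU count."""
    return max_searches_per_cpu * cpus + base_max_searches

members = 3          # proposed SHC members
cpus_per_member = 24 # proposed CPUs per member
cluster_capacity = members * max_concurrent_searches(cpus_per_member)
print(cluster_capacity)  # 90 concurrent searches across the cluster
```

On these assumptions, a 3 x 24-CPU cluster can run about 90 concurrent searches, comfortably above the 75-user worst case, which is the point of splitting the load instead of buying one 75-CPU box.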