Log volume: 1TB/day, with peaks of up to 2TB/day
Number of concurrent users: >50
Number of concurrent searches: >100
Product will be deployed on: VMs/physical servers
Indexers: 6, each with a 3TB /splunk/logs file system for hot/warm data
If my replication factor is 3, I'm assuming I need 3 nodes for search head clustering.
What is the recommended file system size for /opt/splunk on each search head for 1TB/day of data?
I'm not using indexer clustering here, so I assume a 3-4TB SAN is fine to keep my hot/warm/frozen data.
How do we set up load balancing on the cluster behind a global URL?
I have already read all the Splunk documentation URLs, so I'm expecting straight answers from folks.
Whoever has done a cluster setup, please post your recommendations.
You need at least 3 nodes for search head clustering regardless of the replication factor; that is the required minimum. You aren't storing any of the indexed data on the search heads. However, what you are storing is:
- search artifacts: the logs and results from the many searches; size depends on how many searches run and how long you keep the results
- configuration files: settings for reports, apps, etc.; this will probably be the smaller of the two
So I don't think anyone can say exactly how much storage you need on the search heads. The "dedicated search head recommendation" of
2 x 300GB, 10,000 RPM SAS hard disks, configured in RAID 1
is a good starting point.
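As for standing up the cluster itself, here is a rough sketch of initializing a 3-member search head cluster with the Splunk CLI. The hostnames (sh1-sh3 plus a separate deployer), credentials, and shared secret are placeholders for your own values:

    # Run on each of the three members, changing -mgmt_uri to that member's own URI:
    splunk init shcluster-config -auth admin:changeme \
        -mgmt_uri https://sh1.example.com:8089 \
        -replication_port 34567 \
        -replication_factor 3 \
        -conf_deploy_fetch_url https://deployer.example.com:8089 \
        -secret yourpass4symmkey \
        -shcluster_label shcluster1
    splunk restart

    # Then, on any one member, bootstrap the first captain:
    splunk bootstrap shcluster-captain \
        -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089" \
        -auth admin:changeme

Note that the deployer (the instance that pushes apps and configuration to the members) must be a separate Splunk instance that is not itself a cluster member.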
You will need a load balancer that can do layer-7 load balancing with sticky sessions, so that each user's session keeps going to the same search head. The actual load-balancing setup is not done in Splunk at all; it must be configured on the load balancer, and the specifics depend on which product you use.
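For illustration only, sticky sessions on something like HAProxy might look roughly like this (the hostnames are placeholders; 8000 is the default Splunk Web port):

    frontend splunk_web
        mode http
        bind *:8000
        default_backend search_heads

    backend search_heads
        mode http
        balance roundrobin
        # insert a cookie so each browser keeps hitting the same search head
        cookie SERVERID insert indirect nocache
        server sh1 sh1.example.com:8000 check cookie sh1
        server sh2 sh2.example.com:8000 check cookie sh2
        server sh3 sh3.example.com:8000 check cookie sh3

The equivalent on F5, nginx, or whatever you run will look different, but the requirement is the same: layer-7 awareness plus session persistence.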
Personally, I would not use a virtual machine for either a search head or an indexer in an environment of this size. You should probably review the Splunk Capacity Planning manual, and also check out the search head clustering documentation. My apologies if you have already read all of this, but I wasn't sure what "all splunk URLs" meant.
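On the SAN sizing question, a rough rule of thumb (verify against the Capacity Planning manual, since the ratio varies by data type) is that raw data lands on disk at about half its original size: roughly 15% as compressed rawdata plus 35% as index files. Rough math for your numbers:

    1 TB/day raw x ~0.5        = ~500 GB/day on disk across the indexers
    6 indexers x 3 TB hot/warm = 18 TB -> ~36 days of searchable retention
    at the 2 TB/day peak       -> ~18 days

Since you aren't using indexer clustering, there is no replication multiplier. Frozen data, if you archive it rather than delete it, keeps only the compressed rawdata (about 15% of the original size, so roughly 150GB per day at 1TB/day), so size the frozen portion of that SAN against your retention requirement rather than assuming 3-4TB will cover it.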