
Hardware recommendations for a high-log-volume Splunk deployment, to optimize performance and support indexer replication?

strive
Influencer

Hi,

I read the Splunk documents available at:

http://docs.splunk.com/Documentation/Splunk/6.1.2/Deploy/Deploymenttoplogies 
http://docs.splunk.com/Documentation/Splunk/6.1.2/Deploy/Referencehardware
http://docs.splunk.com/Documentation/Splunk/6.1.2/Deploy/Accommodatemanysimultaneoussearches
http://docs.splunk.com/Documentation/Splunk/6.1.2/Deploy/Summaryofperformancerecommendations

We have a requirement to support:

  1. Log volume of 1TB/day, which can grow to 2TB/day.
  2. Number of concurrent users: ~20
  3. Number of concurrent searches: ~40
  4. The product will be deployed on VMs.

We are concentrating only on performance for phase 1; in phase 2 we will add indexer replication.
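
As a sanity check, here is the back-of-the-envelope math I am working from (a minimal sketch in Python; the ~100GB/day-per-indexer figure under concurrent search load is a planning rule of thumb taken from the docs above, not a measurement against our data):

```python
import math

# Rough indexer sizing. Assumption: a reference-spec indexer handles
# ~100GB/day once a realistic concurrent-search load is running (the
# commonly quoted planning figure; ingest-only numbers are much higher).
daily_gb, peak_gb = 1000, 2000   # requirement 1: 1TB/day, growing to 2TB/day
gb_per_indexer = 100             # assumed loaded throughput per indexer

print("indexers at 1TB/day:", math.ceil(daily_gb / gb_per_indexer))   # 10
print("indexers at 2TB/day:", math.ceil(peak_gb / gb_per_indexer))    # 20
```

If that planning figure holds, my proposed 6 indexers may be light even at 1TB/day, so corrections on that assumption are welcome.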

My hardware recommendation for phase 1 is:

Deployment Server:

  • Count - 1
  • Hardware - 1 VM (4 vCPUs and 8GB RAM total)

Search Head:

  • Count - 3 (1 for summary indexing and 2 for searches)
  • Hardware - 3 VMs (4 CPUs per VM, 4 cores per CPU, 12GB RAM per VM)

Indexer:

  • Count - 6
  • Hardware - 6 VMs (2 CPUs per VM, 6 cores per CPU, 12GB RAM per VM)

Our hardware is:

CPU - 2.90GHz Intel Xeon E5-2690, 135W, 8 cores, 20MB cache, DDR3 1600MHz (per CPU; we will add as many CPUs as we need)

HDD - 300GB 6Gb/s SAS, 15K RPM (per disk; we will add as many disks as we need)
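
For the indexer disks, I am using this rough IOPS math (the ~175 IOPS per 15K RPM spindle figure is a common planning assumption, not a benchmark of these drives):

```python
import math

# How many 15K SAS spindles per indexer to reach the ~800 IOPS
# reference-hardware disk target. 175 IOPS per 15K RPM disk is a
# rule-of-thumb planning figure, not a measured number.
iops_per_disk = 175
target_iops = 800
disk_gb = 300

disks = math.ceil(target_iops / iops_per_disk)   # -> 5 disks per indexer
print(f"{disks} disks per indexer, {disks * disk_gb}GB raw capacity "
      "before RAID overhead")
```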

Note: Universal forwarders are already in place on the log-generating sources. We are planning on 3 heavy forwarders, but have not fully decided on this yet.

Do you have any recommendations for us? Please suggest.

Thanks

Strive


Kieffer87
Communicator

We just started our discussions with Splunk to ingest 500GB a day. I will tell you off the bat that Splunk does not recommend VMs unless you can reserve resources on the host, which typically defeats the purpose of going virtual (in our case, anyway, running vSphere/vMotion).

I can share what we landed on to ingest 500GB a day, which may help you plan before talking with Splunk. Keep in mind we are still in the planning stages and have not implemented this yet, so I can't tell you one way or another if it works 🙂

(1) Deployer (manages the search head cluster), Cluster Master (manages the indexer cluster) and Deployment Server on a VM (6 cores @ 2.3GHz, 12GB RAM).
(3) Physical search heads, clustered (2x14-core CPUs @ 2.62GHz, 64GB RAM each)
(4) Physical indexers, clustered (2x14-core CPUs @ 2.62GHz, 128GB RAM each)
Indexer hot storage: 5TB SSD per indexer
Indexer warm storage: 15TB RAID6 (4-6k IOPS) per indexer
Indexer cold storage: 135TB (looking at Hadoop)
(7) Universal forwarders on VMs (8 cores @ 2.3GHz, 8GB RAM each), 1 per datacenter around the globe.
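
To put those storage tiers in perspective, here is the rough retention math we sketched (assuming the usual ~50% on-disk planning figure for compressed rawdata plus index files, and ignoring replication copies, which multiply the footprint by roughly the replication factor):

```python
# Approximate retention per storage tier at 500GB/day of raw ingest.
daily_raw_gb = 500
on_disk_gb_per_day = daily_raw_gb * 0.5   # ~50% planning figure, an assumption

tiers_tb = {
    "hot (4 x 5TB SSD)": 20,
    "warm (4 x 15TB RAID6)": 60,
    "cold (135TB)": 135,
}
for tier, tb in tiers_tb.items():
    days = tb * 1000 / on_disk_gb_per_day
    print(f"{tier}: ~{days:.0f} days of retention")
```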

According to our Splunk reps, a reference indexer (12 cores, 12GB RAM, 800 IOPS) can ingest 250GB/day if it is doing nothing else. Once you add replication to other indexers, searches, etc., that number drops. We are told we need 3 indexers that meet the reference hardware to ingest 500GB/day, plus a fourth indexer to pick up the load should one indexer go down.
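
The implied arithmetic, as a quick sketch (the per-indexer loaded figure is just what their sizing implies, not a benchmark):

```python
import math

# Our rep's sizing: 3 reference indexers for 500GB/day implies each
# loaded indexer handles ~167GB/day (down from 250GB/day ingest-only).
daily_gb = 500
loaded_gb_per_indexer = 167   # implied by "3 indexers for 500GB/day"

working = math.ceil(daily_gb / loaded_gb_per_indexer)   # -> 3
total = working + 1   # one spare so the cluster absorbs a single failure
print(f"{working} working indexers + 1 spare = {total}")
```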

All that being said, we have not implemented this yet, so I can't confirm whether this hardware is really needed for 500GB a day. I hope for my sake it leaves us plenty of room to grow. My gut tells me it does, but I get a different feeling from our Splunk rep.

kaufmanm
Communicator

I'd recommend gathering requirements around response times and performance as well. Your deployment might be able to handle the volume, but if a common, inexpensive search starts taking 60 seconds to complete, you might get complaints, depending on who your users are. The same goes for dashboards that take minutes to load.

Ayn
Legend

If you're planning to run Splunk in this kind of high-log-volume environment, you should definitely contact Splunk directly and get in touch with their PS (Professional Services) folks, who can look at your case more thoroughly. The devil is in the details when you're ingesting this much data, so it's important to look at not just the hardware specs but also how you split up the log streams, how you configure your OS, how you deal with... well, everything. Trying to simplify this in a Splunk Answers post does not do the challenge justice. I'm sure Splunk would be more than happy to help you out with this (in fact, if you've already approached them with your plans, I'd be surprised if they didn't offer it).

strive
Influencer

Hi Ayn,

Thank you for your response. We have yet to get in touch with the Splunk PS folks. I will update this post once we have an answer.

Regards
Strive
