Monitoring Splunk

Splunk performance problems

jambajuice
Communicator

We've got Splunk running on a Windows 2003 R2 x64 server with 8 GB of memory, and two dual-core 3.0 GHz processors.

Our Splunk installation is split between two drives. The application files and the hot index for the defaultdb are on an SSD installed in the server. All of the other indexes, along with the warm/cold defaultdb buckets, live on an iSCSI SAN backed by 7200 RPM SATA disks. Those indexes total about 600 GB.

Several of these indexes are part of the default search indexes as defined in the authorize.conf file.
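For context, default search indexes are set per role in authorize.conf. A hedged example of what such a stanza typically looks like (role and index names here are hypothetical):

```ini
# authorize.conf -- indexes searched by default for this role
[role_user]
srchIndexesAllowed = *
srchIndexesDefault = main;wineventlog;perfmon
```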

The performance of the Splunk GUI is terrible. It can take up to two minutes to log in or switch between applications. Since all of the GUI and executable files are served off an SSD, I'd expect the performance to be screaming. But it isn't.

Using perfmon, the CPU and memory on the machine are hardly taxed at all. I do notice that when I switch between applications in Splunk there is a large, sustained spike in % Disk Time for the iSCSI network.

If I want Splunk to take advantage of the performance of the SSD, what is the best way to carve up my indexes so the performance of the whole app isn't bogged down by the iSCSI/SATA disks?

Thx.


Dan
Splunk Employee

What do the data inputs look like? Do you happen to have a lot of WMI queries set up for remote collections?

gkanapathy
Splunk Employee

I assume that you really mean that you have:

  • SSD: Splunk home directory, including etc and var; hot and warm for defaultdb
  • Spinning disk: cold for defaultdb; all other indexes
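If that is the layout, it would normally be expressed in indexes.conf per index. A minimal sketch, assuming hypothetical drive letters (D: for the SSD, E: for the iSCSI SAN):

```ini
# indexes.conf -- hot/warm on SSD, cold on the SAN (paths are illustrative)
[defaultdb]
homePath   = D:\splunk\defaultdb\db
coldPath   = E:\splunk\defaultdb\colddb
thawedPath = E:\splunk\defaultdb\thaweddb
```

You can dump the effective settings for an index with `splunk btool indexes list defaultdb --debug`, which also shows which .conf file each value comes from.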

It would be helpful to know where your OS and OS page file are located. I assume the OS is probably on the SSD, but do not know about the page file.

It would also help to know the complete index configuration, specifically for defaultdb: for example, the number of buckets and the bucket sizes.

SSD realistically won't provide significant performance gains at runtime for the OS and Splunk executables, since those will in all likelihood be served from RAM.

For write-intensive applications, many SSDs are slower than conventional hard disks. Splunk's var directory and the hot/warm volume are very write-intensive, with a mix of sequential and random writes. I also suspect that under Windows 2003, it would be a bad idea to have the page file on SSD.

balbano
Contributor

Check out this link if you haven't yet. I haven't tested a Splunk instance on Windows (all Linux), but hopefully it will point you toward some answers.

http://www.splunk.com/base/Documentation/latest/Installation/CapacityplanningforalargerSplunkdeploym...

I guess it all comes down to how much your daily indexing volume might be. I'm also not sure whether you are running this one instance to serve as both a search head and an indexer.

That might be an issue with your current specs. While you meet the minimums, bigger is always better depending on the amount of data you index and the amount of searching you will need to be doing.

Let me know if this answers your question.

Brian
