Hi
How can I see what Splunk is doing when the CPU on my Splunk box goes to ~90% and stays there for 30 minutes?
I have one search head and one indexer, all on the same box.
How can I tell what is going on? Is it too much data coming in, too many data models, etc.?
Or do I just need to restart?
The Splunk screens are going very slowly; we have about 36 jobs running in parallel.
Thanks in advance for any help.
Rob 🙂
Go to the Monitoring Console and run the health check; it will show you what the issue is.
Hi Rob,
In the top output you showed, you can type c to show detailed command parameters. From the command details you should be able to tell exactly what's running, e.g., ad-hoc searches or scheduled searches, along with their associated search IDs. With the search IDs, you can go into the $SPLUNK_HOME/var/run/splunk/dispatch directory to see what those searches actually are.
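If you would rather script that than browse the directory by hand, here is a minimal sketch in Python. It assumes a default /opt/splunk install path (override with the SPLUNK_HOME environment variable) and simply lists each search artifact by its search ID and how recently it was touched, so you can match it to what you see in top:

```python
#!/usr/bin/env python3
# Rough sketch, not official Splunk tooling: list search artifacts in the
# dispatch directory so the search IDs seen in `top` can be matched to jobs.
# Assumes a default install path; set SPLUNK_HOME if yours differs.
import os
import time

SPLUNK_HOME = os.environ.get("SPLUNK_HOME", "/opt/splunk")
DISPATCH = os.path.join(SPLUNK_HOME, "var", "run", "splunk", "dispatch")

# Each subdirectory is one search job; its name is the search ID (sid).
jobs = []
for entry in os.scandir(DISPATCH):
    if entry.is_dir():
        minutes_since_touch = (time.time() - entry.stat().st_mtime) / 60
        jobs.append((minutes_since_touch, entry.name))

# Most recently active first: these are the likeliest culprits for the CPU spike.
for minutes_since_touch, sid in sorted(jobs):
    print(f"{minutes_since_touch:6.1f} min since last write  {sid}")
```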
In addition, you mentioned 36 concurrent jobs running; it sounds like your concurrency is higher than your available CPU cores. When you have more parallel jobs than available CPU cores, jobs wait, and everything slows down (including Splunk Web UI).
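As a rough sanity check, the default concurrent historical-search limit in limits.conf works out to max_searches_per_cpu × number of cores + base_max_searches (1 × cores + 6 by default). Here is a quick sketch assuming those defaults; the core count is read from whatever machine runs it, and the fallback of 16 is only an example:

```python
# Back-of-the-envelope check of the default concurrent-search limit:
# max_searches_per_cpu * cores + base_max_searches (defaults: 1 and 6).
# The core count is read from the local machine; the 16 is only a fallback.
import os

cores = os.cpu_count() or 16          # substitute your Splunk box's core count
max_searches_per_cpu = 1              # limits.conf default
base_max_searches = 6                 # limits.conf default

limit = max_searches_per_cpu * cores + base_max_searches
running = 36                          # the parallel jobs you mentioned

print(f"Cores: {cores}, default concurrent-search limit: {limit}")
print(f"Jobs running: {running} -> over the limit by {max(0, running - limit)}")
```

If the running count sits above that limit, jobs queue and the UI crawls, which matches what you are seeing.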
On a positive note, you managed to max out your box, so all you need to do is reduce your load to, say, 80% CPU and you are good to go. Time to think about adding another box no matter what 🙂
~Jesse
Hi
Thanks for your comments, I will look into a new box.
Rob
Thanks, we solved it with a reboot, but next time I will run the Monitoring Console health check first.