Hoping someone can point me in the right direction. Our Splunk monitoring keeps reporting 90-100% CPU utilization; however, when I check the servers, one core is close to maxed out during a few functions for up to 20 min, but the rest of the cores are quite low and the server has no performance issues. So I'm looking for a better way to report: is there core-level monitoring, or a field I can add to the CPU monitoring, to address this? Thank you in advance.
If your question is about monitoring CPU cores, the answer depends on the operating system you're using: Linux or Windows.
Either way, you have to use the TA (technology add-on) for your OS to extract the per-CPU values, then display them in a dashboard.
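For example, on Linux with the Splunk Add-on for Unix and Linux, the `cpu` sourcetype reports one row per core in the `CPU` field (plus an "all" aggregate row), so a per-core utilization search might look like this sketch. The index name is an assumption; adjust it and the field extractions to your environment:

```
index=os sourcetype=cpu CPU!="all"
| eval pctBusy = 100 - pctIdle
| timechart span=5m avg(pctBusy) by CPU
```

Here `pctIdle` is the add-on's extracted idle percentage, so `100 - pctIdle` gives utilization, and the `timechart ... by CPU` produces one series per core, which makes a single hot core stand out. On Windows with the Splunk Add-on for Microsoft Windows, a roughly equivalent sketch (again, index name assumed) would use the per-instance Perfmon processor counter instead of the `_Total` instance:

```
index=perfmon sourcetype="Perfmon:CPU" counter="% Processor Time" instance!="_Total"
| timechart span=5m avg(Value) by instance
```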
To build the searches, you can look at the Nix monitoring app (https://splunkbase.splunk.com/app/3777) or the Windows Infrastructure app (https://splunkbase.splunk.com/app/1680/ — even though it's archived, you can still find the searches you need there).
If instead your question is about your Splunk servers themselves (one CPU almost full while another is barely used), you should enable search parallelization, as described at https://docs.splunk.com/Documentation/Splunk/9.0.0/Capacity/Parallelization and at https://docs.splunk.com/Documentation/Splunk/9.0.0/DistSearch/Parallelreducesearches
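As a minimal sketch of what that looks like in configuration, batch search parallelization is a limits.conf change. Whether a value above 1 is safe depends on your Splunk version, core count, and workload, so treat this as an illustration and verify the setting against the Parallelization docs linked above before applying it:

```
# limits.conf (sketch -- validate the value for your version and hardware)
[search]
# Number of search pipelines per batch-mode search; default is 1.
batch_search_max_pipeline = 2
```

Restart Splunk after the change, and monitor whether overall CPU load spreads across cores as expected rather than simply increasing.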