I am using the Universal Forwarder to collect information on a Java process. When monitoring "% Processor Time" for a specific process, I noticed a discrepancy between the results from Performance Monitor and Resource Monitor for that process when pulling the data with default values (below is my inputs.conf stanza):
[perfmon://Process]
object = Process
counters = % Processor Time;Working Set;Working Set Peak;
instances = _Total;java;javaw
interval = 10
It seems to be corrected when I scale % Processor Time down to 10% of the reported value when building my Splunk search (shown below):
host="" (counter="% Processor Time" AND instance=java OR instance=javaw) OR (collection="CPU Load" AND instance=_Total)
| timechart span=20s avg(Value) AS "CPU Utilization" by instance
| eval java=java/10
| eval javaw=javaw/10
| rename java as "Appserver CPU", javaw as "Client CPU", VALUE_Total as "Total Machine CPU"
I thought this fixed the issue until we actually got a spike in CPU and I noticed that the java and javaw values now max out at 10% and won't go any higher.
I know the counter is supposed to max out at 100%, and since I scaled it down to 10% of the reported value, the cap at 10% makes sense to me. But if the raw value really is maxing out at 100%, why can't I get the real values sent to the indexer from perfmon in the first place?
Any help would be much appreciated. Thanks!
I presume you are seeing the known issue with perfmon process CPU collection. There is a specific page in the docs that covers this - https://docs.splunk.com/Documentation/Splunk/7.3.0/ReleaseNotes/WorkaroundforPerformanceDataHelperAP...
It looks like a workaround was added in v7.3 and above via the useWinApiProcStats option. However, this does not look like a permanent fix, just a workaround that works for some people, so you would need to test whether it helps in your environment.
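If you want to try it, here is a minimal sketch of your stanza with that setting added (assuming your existing counters and instances stay the same, and that your forwarder version supports the option):

[perfmon://Process]
object = Process
counters = % Processor Time;Working Set;Working Set Peak
instances = _Total;java;javaw
interval = 10
# workaround for the Performance Data Helper API limitation (v7.3+)
useWinApiProcStats = 1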
The other option is to use a different source for the perfmon data rather than the Splunk modular input - for example, use a PowerShell script to collect the data and read its output, or run perfmon in CSV collection mode and have the forwarder read the output files. Neither is ideal, and both will need some work to get up and running.
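As a rough sketch of the PowerShell approach (the counter paths, output file, and the per-core normalization are illustrative assumptions, not a tested setup), something like this could be run on a schedule and its output file monitored by the forwarder:

# Sample process CPU via Get-Counter and normalize by logical core count,
# so a fully busy process reads as 100% rather than 100% * number of cores.
$cores    = (Get-CimInstance Win32_ComputerSystem).NumberOfLogicalProcessors
$counters = '\Process(java)\% Processor Time', '\Process(javaw)\% Processor Time'
$samples  = Get-Counter -Counter $counters -SampleInterval 10 -MaxSamples 1

foreach ($s in $samples.CounterSamples) {
    # CookedValue can exceed 100 on multi-core hosts; divide by core count
    $pct = [math]::Round($s.CookedValue / $cores, 2)
    "$(Get-Date -Format o) instance=$($s.InstanceName) pct_processor_time=$pct" |
        Out-File -Append -FilePath 'C:\splunk_scripts\process_cpu.log'
}

That keeps the normalization on the collection side, so you would no longer need the eval java=java/10 scaling in the search.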