I recently added an app to pull in PowerShell transcription logs, which are written out as C:\Logs\YYYYMMDD\YYYYMMDDHHMMSS.randomstring.txt.
So I created the following monitor stanza:
[monitor://C:\Logs\*\*.txt]
followTail = false
disabled = false
sourcetype = ps_transcript
index = powershell
On some systems, PowerShell is run constantly by certain program/script updates (10k transcript files in 24 hours on one server in particular). This creates a lot of small files for the Splunk universal forwarder (UF) to pick up, and the UF's CPU and memory usage has gone through the roof. It isn't the size of the events; I believe it's the number of files the forwarder has to track. Is this accurate? Is there a way to return the CPU usage to normal while still consuming the PS logs?
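One direction I'm considering (a sketch only, untested, and values like the 1d window are placeholders): either stop tracking old transcripts with ignoreOlderThan, or switch to a batch input with move_policy = sinkhole so each file is deleted once indexed and the tracked-file list stays small. Note the caveats in the comments:

[monitor://C:\Logs\*\*.txt]
disabled = false
sourcetype = ps_transcript
index = powershell
# Stop tracking files with a modtime older than a day; this shrinks the
# set of files the tailing processor watches. Caveat: once a file is
# ignored it stays ignored, even if it is modified later.
ignoreOlderThan = 1d

# Alternative: a batch input deletes each file after it is fully indexed,
# so the tracked-file list never grows. Only safe if nothing else needs
# the transcripts on disk (and I have not confirmed batch wildcard
# behavior matches monitor exactly).
[batch://C:\Logs\*\*.txt]
move_policy = sinkhole
sourcetype = ps_transcript
index = powershell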
1) They are running 7.2 or higher, so that's N/A.
2) Wouldn't this solution just give it access to more CPU? It's not a problem of event-processing volume: the files themselves are not that large, and WinEventLog accounts for something like 99% of the total log volume coming off the box.
3) PowerShell transcription is a Windows-based logging feature, so it's essentially just on or off. There is no way to control how Windows lays out the transcript files, especially since multiple scripts can run at once; if they all wrote to the same file, the transcripts would be interleaved and confusing.
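For reference, the only machine-wide knobs transcription exposes are an enable flag, an invocation-header flag, and the output directory. A sketch of the registry values behind the "Turn on PowerShell Transcription" policy (the C:\Logs path is just our example):

# Machine-wide transcription policy lives under this key (the same values
# the "Turn on PowerShell Transcription" GPO writes).
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\Transcription'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name EnableTranscripting -Value 1 -Type DWord
Set-ItemProperty -Path $key -Name EnableInvocationHeader -Value 1 -Type DWord
# OutputDirectory is the only say you get in where files land; the
# per-session subfolders and file names are generated by PowerShell itself.
Set-ItemProperty -Path $key -Name OutputDirectory -Value 'C:\Logs' -Type String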
Is there a way to see what is causing the CPU utilization issues? I feel like it's a stab in the dark, but it's seriously the only thing different about the system compared to two weeks ago, when it was running fine.
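Partially answering my own question, two standard diagnostics at least show what the forwarder is doing (default UF install path shown, adjust to yours):

REM Run on the forwarder: list every file the tailing processor is tracking.
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" list inputstatus

REM The same data is available over the management port:
curl -k -u admin https://localhost:8089/services/admin/inputstatus/TailingProcessor:FileStatus

If that list runs to tens of thousands of files, the file tracker itself is the likely culprit; metrics.log under var\log\splunk on the forwarder also breaks out per-processor activity.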
tkoster8 - did you find a solution? I'm seeing the same issue on a server in my environment.
jessec_splunk - The documentation you referenced says "For universal forwarders, a single pipeline set uses, on average, around 0.5 of a core, but utilization can reach a maximum of 1.5 cores. Therefore, two pipeline sets will use between 1.0 and 3.0 cores. If you want to configure more than two pipeline sets on a universal forwarder, consult with Professional Services first." Would adding an additional pipeline for the UF reduce CPU utilization?
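For what it's worth, pipeline sets are configured in server.conf on the forwarder (sketch below; the default is 1). Going by the quoted numbers, a second pipeline raises the potential CPU use rather than lowering it, so it adds throughput headroom but likely won't help a too-many-small-files problem:

# server.conf on the forwarder. Each extra ingestion pipeline roughly
# doubles the potential CPU footprint; it does not reduce it.
[general]
parallelIngestionPipelines = 2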