Universal forwarder constantly at 100%

Path Finder

There are about 1600 files actively monitored by the forwarder (64-bit, v5.0.2) on our central syslog server.
The forwarder process constantly hogs one whole core.

Where to start looking for possible problems and solutions?

Regards,
Mitja


Path Finder

SoS unfortunately does not show much else for our universal forwarder. The most I could conclude from SoS was that file descriptor and memory usage are within acceptable limits and that CPU usage is at 100%. It also told me that the TailingProcessor was reporting errors, which was a good starting point.

Using system profiling tools, however, I found that of the 24 threads splunkd was running, only one was constantly at 100%. Further digging led me to a stanza that was causing errors for the TailingProcessor. It was not the stanza itself but rather the data being monitored: temporary files were being created and then removed in a certain spool folder, and this seems to have caused problems for the universal forwarder. After blacklisting that spool folder, the load dropped to almost zero and has stayed there for now.
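For anyone hitting the same symptom, a blacklist like the one described goes on the monitor stanza in inputs.conf. The paths below are illustrative only, not our actual configuration:

```ini
# inputs.conf on the universal forwarder -- paths are hypothetical examples
[monitor:///var/log/syslog-hosts]
# blacklist takes a regex matched against the full file path;
# exclude the spool folder where short-lived temp files are created and removed
blacklist = /var/log/syslog-hosts/spool/.*
```

After editing inputs.conf, restart the forwarder (or reload the input config) for the blacklist to take effect.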


Splunk Employee

When you say "There are about 1600 that are actively monitored", 1600 of what are you referring to?

Here is a link to a previous Answer that hopefully touches on your issue:
http://splunk-base.splunk.com/answers/5400/high-cpu-usage-on-splunk-forwarder

If you haven't already, download SoS (the Splunk on Splunk app). This already-great app was just updated.
http://splunk-base.splunk.com/apps/29008/sos-splunk-on-splunk

So before contacting support, try this app first and see what you can find.

Wish you success!

Path Finder

Well spotted. I was referring to 1600 files. I edited the original question and added the missing word.
I will check out the new version of SoS.
