Getting Data In

Is there a way to set a limit on the CPU and memory consumption of the splunkd process?

sandyIscream
Communicator

We have more than 3000 forwarders in our environment. A few weeks back, our Unix team published a report listing the top processes by CPU and memory usage.

Splunkd was among the top 3. We need to somehow restrict splunkd from consuming so many resources; on a few hosts the report showed splunkd using more than 90% of memory.

Please suggest a way to mitigate this issue.

1 Solution

lguinn2
Legend

The CPU and memory usage for splunkd on a forwarder is directly related to the amount of work it has to do.

One of the most common reasons for a busy splunkd is that the forwarder is monitoring a lot of files. Even if the files are inactive, it still requires resources to monitor them. If you run splunk list monitor on a forwarder, you may be surprised at the list of files that are being watched.
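For example, on a *nix forwarder you can run the CLI from the Splunk install directory. This is a sketch: the /opt/splunkforwarder path below is the default universal forwarder location and is an assumption, so adjust it for your environment, and note that the command may prompt for admin credentials.

    cd /opt/splunkforwarder/bin
    ./splunk list monitor
    # prints the monitored directories and the individual files currently being tracked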

If that is the case, you should set up regular log file rotation to remove older files from the production servers where the forwarders run. Another alternative is to use the ignoreOlderThan setting in inputs.conf. See Monitor files and directories for more details, but be careful not to exclude files that might be updated.
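As a rough sketch of the ignoreOlderThan approach, a monitor stanza in inputs.conf could look like the following. The path and the 7-day cutoff are illustrative only; choose a cutoff comfortably longer than the longest gap between writes to any file you still need.

    [monitor:///var/log/myapp]
    ignoreOlderThan = 7d
    # files whose modification time is older than 7 days are skipped, so they no
    # longer consume file-tracking resources on the forwarder; once a file is
    # ignored it is not picked up again even if it is later updated, hence the
    # caution above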

You should find that reducing the number of monitored files reduces both the memory footprint and the CPU usage.

PS. Greater than 90% CPU usage for splunkd definitely indicates a problem. If not this, then something else is misconfigured. The only time that splunkd might need this much CPU is if it is monitoring a very high volume input... and that may require special attention.


sandyIscream
Communicator

Yes, this is exactly what we were looking for. On that particular system Splunk is indeed monitoring a lot of files: around 100k different source paths. The filesystem does have a log rotation policy, but the sheer number of files may be the problem here.
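In case it helps anyone else sizing this up, a rough count of monitored objects can be had with something like the command below. This is a sketch: the install path is the default universal forwarder location, and the line count is only an approximation of the number of monitored paths.

    /opt/splunkforwarder/bin/splunk list monitor | wc -l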

