Splunk AppDynamics

Monitoring JVM heap space more effectively

Hari_Shree_Josh
Explorer

We monitor JVM heap space with health rules on the overall used % of memory. This works well in general, but a few of our applications behave differently while routine jobs are executing: the old gen stays almost full for days, with little space reclaimed after each major GC, while the new gen is repeatedly filled up and freed between frequent minor GC cycles.

On these nodes, heap usage climbs to 97-98% at times before it is freed up, which creates a lot of unnecessary events in AppDynamics.

How do we configure a health rule for the JVM heap space of such nodes so that false alerts are minimized but OOM errors are still prevented from occurring?

Allan_Schiebold
Communicator

Hi Hari. 

You have three options here.

1.) You can modify the health rules to use whatever threshold you want:

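For instance, rather than firing on every short pre-GC spike, you can raise the critical threshold above the routine job peak and require the breach to persist for a sustained window. Below is a minimal sketch using the Controller's Alerting REST API; the host, IDs, credentials, and the exact JSON field names (useDataFromLastNMinutes, evalCriterias, compareValue, and so on) are assumptions you should verify against your Controller version's API documentation:

    # Sketch: widen a heap health rule's evaluation window and raise its
    # critical threshold via the AppDynamics Alerting API. Endpoint path
    # and JSON field names are assumptions -- verify against your
    # Controller version's documentation.
    import requests

    CONTROLLER = "https://mycontroller.example.com:8090"  # hypothetical host
    APP_ID = 42                                           # hypothetical application id
    HR_ID = 1001                                          # hypothetical health rule id
    AUTH = ("apiuser@customer1", "secret")                # user@account basic auth

    url = (f"{CONTROLLER}/controller/alerting/rest/v1"
           f"/applications/{APP_ID}/health-rules/{HR_ID}")

    rule = requests.get(url, auth=AUTH).json()

    # Only alert when heap stays high for a sustained period instead of
    # on every short pre-GC spike: widen the evaluation window ...
    rule["useDataFromLastNMinutes"] = 30
    rule["waitTimeAfterViolation"] = 30

    # ... and raise the critical threshold above the routine job peak.
    # (Field names below are assumed; inspect the GET response for the
    # real structure your Controller returns.)
    for cond in rule["evalCriterias"]["criticalCriteria"]["conditions"]:
        cond["evalDetail"]["metricEvalDetail"]["compareValue"] = 99  # % heap used

    requests.put(url, json=rule, auth=AUTH).raise_for_status()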

2.) You can utilize Action Suppression during those periods (health rules will still trigger, but you won't get alerts):

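If the jobs run on a predictable schedule, you can also create the suppression window programmatically. A minimal sketch, assuming the Alerting API exposes an action-suppressions endpoint; the payload field names and values here are illustrative placeholders, not a confirmed schema:

    # Sketch: create an action-suppression window around a nightly batch
    # job. Endpoint path and payload fields are assumptions -- confirm
    # against your Controller's Alerting API documentation.
    import requests

    CONTROLLER = "https://mycontroller.example.com:8090"  # hypothetical host
    APP_ID = 42                                           # hypothetical application id
    AUTH = ("apiuser@customer1", "secret")                # user@account basic auth

    suppression = {
        "name": "nightly-batch-heap-suppression",
        "timeRange": {                       # suppress during the job window
            "startTime": "2024-06-01T01:00:00+00:00",
            "endTime": "2024-06-01T05:00:00+00:00",
        },
        "healthRuleScope": {                 # limit to the heap health rule(s)
            "healthRuleScopeType": "SPECIFIC_HEALTH_RULES",
            "healthRuleIds": [1001],
        },
        "affects": {"affectedInfoType": "APPLICATION"},
    }

    resp = requests.post(
        f"{CONTROLLER}/controller/alerting/rest/v1"
        f"/applications/{APP_ID}/action-suppressions",
        json=suppression,
        auth=AUTH,
    )
    resp.raise_for_status()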

3.) A more advanced move: if you have any automation systems in place, you could make an API call that modifies the health rule for just the time ranges of the jobs, as sketched below.
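As a rough sketch of that approach (same assumed health-rules endpoint as above; the job hook, IDs, and the enabled field are placeholders): disable the heap rule just before the job starts and restore it once the job finishes, so normal monitoring applies everywhere else:

    # Sketch: wrap a batch job with API calls that disable the heap
    # health rule beforehand and re-enable it afterwards. Endpoint and
    # the "enabled" field are assumptions -- verify against your
    # Controller's Alerting API.
    import requests

    CONTROLLER = "https://mycontroller.example.com:8090"  # hypothetical host
    APP_ID = 42                                           # hypothetical application id
    HR_ID = 1001                                          # hypothetical health rule id
    AUTH = ("apiuser@customer1", "secret")                # user@account basic auth

    URL = (f"{CONTROLLER}/controller/alerting/rest/v1"
           f"/applications/{APP_ID}/health-rules/{HR_ID}")

    def set_rule_enabled(enabled: bool) -> None:
        rule = requests.get(URL, auth=AUTH).json()
        rule["enabled"] = enabled
        requests.put(URL, json=rule, auth=AUTH).raise_for_status()

    def run_batch_job() -> None:
        ...  # placeholder for kicking off / waiting on the routine job

    set_rule_enabled(False)     # silence the rule for the job window
    try:
        run_batch_job()
    finally:
        set_rule_enabled(True)  # always restore normal monitoring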
