Splunk AppDynamics

Monitoring JVM heap space more effectively

Hari_Shree_Josh
Explorer

For monitoring JVM heap space we have health rules on the overall used% of memory. This works well in general, but we have a few applications where, while routine jobs are executing, the old gen stays almost full for days with little space reclaimed after each major GC, while the new gen is repeatedly filled up and freed between frequent minor GC cycles.

The heap usage of such nodes climbs to 97-98% at times before it is freed, which generates a lot of unnecessary events in AppDynamics.

How do we configure health rules for the JVM heap space of such nodes so that false alerts are minimized while OOM errors are still caught before they occur?


Allan_Schiebold
Communicator

Hi Hari. 

You have three options here.

1.) You can modify the health rules to use whatever threshold fits these nodes:

[Screenshot: health rule threshold configuration]
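Beyond raising the threshold, the noisiest part of a raw used% rule is that it fires on single samples; requiring the condition to hold for a sustained window (which health rules support via their evaluation time frame) ignores the sawtooth spikes that GC frees immediately. A minimal sketch of that idea in Python — the function name and sample format are illustrative, not an AppDynamics API:

```python
def sustained_breach(samples, threshold=90.0, window=6):
    """Return True only if the last `window` samples all exceed `threshold`.

    A sawtooth heap pattern (filled, then freed by GC) produces isolated
    spikes, so a single-sample check fires constantly; a sustained-window
    check stays quiet unless usage is pinned high across the whole window.
    """
    if len(samples) < window:
        return False
    return all(s > threshold for s in samples[-window:])

# Sawtooth: brief 97-98% spikes that GC frees right away -> no alert
sawtooth = [60, 75, 97, 40, 70, 98, 45, 72, 97, 50]
# Genuine pressure: usage stays above threshold for the full window -> alert
pinned = [91, 93, 94, 95, 96, 97]
```

The same sustained-condition logic applies when you set "value is above X for the last N minutes" in the health rule's evaluation window.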

2.) You can utilize Action Suppression during those periods (health rules will still trigger, but you won't get alerts):

[Screenshot: Action Suppression configuration]
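If the job windows are predictable, action suppression can also be scripted rather than clicked through. The sketch below builds a suppression-window payload; the field names and the `/controller/alerting/rest/v1/applications/<id>/action-suppressions` endpoint follow the AppDynamics Alerting REST API as an assumption — verify them against your controller version's documentation:

```python
from datetime import datetime, timedelta

def suppression_payload(name, start, duration_minutes, tier_names):
    """Build a hypothetical action-suppression window payload.

    Field names are assumed from the AppDynamics Alerting API and may
    differ by controller version -- check your controller's docs.
    """
    end = start + timedelta(minutes=duration_minutes)
    return {
        "name": name,
        "timeRange": {
            "startTime": start.strftime("%Y-%m-%dT%H:%M:%S+0000"),
            "endTime": end.strftime("%Y-%m-%dT%H:%M:%S+0000"),
        },
        "affects": {"affectedInfoType": "TIERS", "tierNames": tier_names},
    }

# e.g. suppress alerts for a 2-hour nightly batch starting at 02:00 UTC
payload = suppression_payload(
    "nightly-batch", datetime(2024, 1, 1, 2, 0), 120, ["batch-tier"]
)
```

POSTing such a payload from your job scheduler just before the batch starts keeps the health rule intact while silencing the alerts it would fire.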

3.) A more advanced option: if you have any automation in place, you could make an API call that modifies the health rule for just the time ranges when those jobs run.
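For option 3, a typical flow is to GET the rule definition, toggle or retune it, and PUT it back. The endpoint pattern below (`/controller/alerting/rest/v1/applications/{id}/health-rules/{rule-id}`) and the Bearer-token auth are assumptions based on the AppDynamics Health Rule API; the controller URL, application id, and rule shape are placeholders:

```python
import json
import urllib.request

CONTROLLER = "https://example.saas.appdynamics.com"  # hypothetical controller URL
APP_ID = 42                                          # hypothetical application id

def set_rule_enabled(rule: dict, enabled: bool) -> dict:
    """Return a copy of a health-rule definition with `enabled` toggled."""
    updated = dict(rule)
    updated["enabled"] = enabled
    return updated

def put_health_rule(rule_id: int, rule: dict, token: str) -> None:
    """PUT a modified rule back to the controller.

    Endpoint path follows the AppDynamics Health Rule API as an
    assumption -- confirm against your controller version's docs.
    """
    url = (f"{CONTROLLER}/controller/alerting/rest/v1/"
           f"applications/{APP_ID}/health-rules/{rule_id}")
    req = urllib.request.Request(
        url,
        data=json.dumps(rule).encode(),
        method="PUT",
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )
    urllib.request.urlopen(req)  # raises on non-2xx responses
```

Your scheduler would disable (or loosen) the rule just before the batch window and restore it afterwards; loosening the threshold is usually safer than disabling outright, since a real OOM during the job can still alert.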
