Splunk AppDynamics

Monitoring JVM heap space more effectively

Hari_Shree_Josh
Explorer

For monitoring JVM heap space we have health rules on the overall used % of memory. This works well in general, but we have a few applications where, while routine jobs are being executed, the old gen stays almost full for days with little space reclaimed after each major GC, and the new gen keeps getting completely filled and freed between frequent GC cycles.

The heap usage of such nodes sometimes reaches 97-98% before it is freed, and this creates a lot of unnecessary events in AppDynamics.

How do we configure a health rule for the JVM heap space of such nodes so that false alerts are minimized and OutOfMemory errors are still prevented?


Allan_Schiebold
Communicator

Hi Hari. 

You have three options here.

1.) You can modify the health rules to use whatever thresholds suit these nodes, e.g. raise the warning/critical percentages or require the condition to persist for a longer evaluation period before it violates.


2.) You can utilize Action Suppression during those job periods (the health rules will still trigger, but you won't get alert actions); see the first sketch below for doing this via the API.


3.) A more advanced option, if you have any automation in place, is to make an API call to modify the health rule for just the time ranges when those jobs run and restore it afterwards; see the second sketch below.
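
For option 2, here is a minimal sketch of creating an action-suppression window programmatically, assuming the Controller exposes the alerting REST API under /controller/alerting/rest/v1/ as recent versions do. The controller host, credentials, application ID and payload field names are placeholders/assumptions -- verify them against the Action Suppression API documentation for your Controller version.

```python
import datetime as dt

import requests

# --- Placeholders: adjust for your Controller and application ---
CONTROLLER = "https://mycontroller.saas.appdynamics.com"   # hypothetical host
APP_ID = 42                                                # hypothetical application ID
AUTH = ("apiuser@customer1", "secret")                     # or a Controller API client token


def suppress_actions_for_job(duration_minutes: int) -> None:
    """Create an action-suppression window covering a batch job run.

    Health rules still evaluate and violations are still recorded,
    but no actions (emails, tickets, etc.) fire during the window.
    """
    start = dt.datetime.now(dt.timezone.utc)
    end = start + dt.timedelta(minutes=duration_minutes)

    # Payload fields are illustrative -- verify the names against the
    # Action Suppression API docs for your Controller version.
    payload = {
        "name": f"batch-window-{start:%Y%m%d-%H%M}",
        "timeRange": {
            "startTimeMillis": int(start.timestamp() * 1000),
            "endTimeMillis": int(end.timestamp() * 1000),
        },
        # Scope this to the tier/nodes that run the jobs if possible.
        "affects": {"affectedInfoType": "APPLICATION"},
    }

    resp = requests.post(
        f"{CONTROLLER}/controller/alerting/rest/v1/applications/{APP_ID}/action-suppressions",
        json=payload,
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    # Suppress alert actions for a 4-hour job window.
    suppress_actions_for_job(duration_minutes=240)
```

Calling something like this from the job scheduler right before the batch starts keeps the suppression window aligned with the job automatically.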

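For option 3, a sketch along these lines could be driven by the job scheduler: fetch the heap health rule, raise its critical threshold before the job window, and restore the original definition afterwards. The health-rules endpoints shown are the ones recent Controllers provide, but the key nesting inside the health-rule JSON varies by version, so treat the threshold path in with_relaxed_critical() as an assumption and adjust it to the JSON your Controller actually returns; the host, IDs and credentials are placeholders.

```python
import copy

import requests

# --- Placeholders: adjust for your Controller, application and health rule ---
CONTROLLER = "https://mycontroller.saas.appdynamics.com"   # hypothetical host
APP_ID = 42                                                # hypothetical application ID
HEALTH_RULE_ID = 1234                                      # hypothetical heap health rule ID
AUTH = ("apiuser@customer1", "secret")

BASE = f"{CONTROLLER}/controller/alerting/rest/v1/applications/{APP_ID}/health-rules"


def get_health_rule() -> dict:
    """Fetch the full JSON definition of the heap-space health rule."""
    resp = requests.get(f"{BASE}/{HEALTH_RULE_ID}", auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()


def put_health_rule(rule: dict) -> None:
    """Overwrite the health rule with a modified definition."""
    resp = requests.put(f"{BASE}/{HEALTH_RULE_ID}", json=rule, auth=AUTH, timeout=30)
    resp.raise_for_status()


def with_relaxed_critical(rule: dict, critical_pct: int) -> dict:
    """Return a copy of the rule with a higher critical heap-used % threshold.

    ASSUMPTION: the key nesting below matches the health-rule JSON of recent
    Controller versions; inspect the JSON returned by get_health_rule() and
    point this at the field that actually holds your critical percentage.
    """
    relaxed = copy.deepcopy(rule)
    condition = relaxed["evalCriterias"]["criticalCriteria"]["conditions"][0]
    condition["evalDetail"]["metricEvalDetail"]["compareValue"] = critical_pct
    return relaxed


if __name__ == "__main__":
    original = get_health_rule()
    put_health_rule(with_relaxed_critical(original, critical_pct=99))
    # ... batch job runs here (or restore from a separate scheduled task) ...
    put_health_rule(original)
```

Compared with option 2, this keeps alerting live during the jobs, just with a threshold that matches the expected heap behaviour, so a genuine climb towards OOM would still violate the rule.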