Splunk AppDynamics

How to unregister batch tier

Carlos_Ortega
Explorer

Hi,

We have a tier for a Java process that runs as a batch job. However, it is treated like a normal long-running process: the node is not unregistered after a while, even though the job finished long before.

So, is there any way to control how long a node can go without traffic before it is unregistered automatically?

Thanks,

Carlos



Carlos_Ortega
Explorer

Thanks!


Hirokazu_Mori1
Path Finder

Hi @Carlos.Ortega,

 

Thank you for your post to the community!

 

AppDynamics provides Node Activity Settings.

Within those settings, the node retention period matches your request: it defines how long nodes are kept on the Controller UI after they stop reporting.

You can set the retention period to a small value.

Alternatively, you can use the ‘-Dappdynamics.jvm.shutdown.mark.node.as.historical=true’ property, which marks the node as historical at JVM shutdown.
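
As an illustration only (the agent path and jar name below are assumptions, not something from this thread), a batch JVM could be started like this so its node is marked as historical as soon as the job finishes and the JVM exits:

    # paths and jar names are illustrative placeholders
    java -javaagent:/opt/appdynamics/javaagent/javaagent.jar \
         -Dappdynamics.jvm.shutdown.mark.node.as.historical=true \
         -jar batch-job.jar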

 

I hope the document below helps:

https://docs.appdynamics.com/22.6/en/application-monitoring/administer-app-server-agents/historical-...

 

Best regards,

 

Hirokazu

Carlos_Ortega
Explorer

Hi @Hirokazu.Mori,

Thanks for your answer. Can you recommend some values for the retention period, please?

Also, using "-Dappdynamics.jvm.shutdown.mark.node.as.historical=true", will the agents be disconnected once they are done? Our issue is that these nodes finish their work but then stay connected for some time instead of being unregistered.

Thanks


Hirokazu_Mori1
Path Finder

Thank you for your reply!

A suitable retention period value depends on your requirements.
What I can say is: if your process runs periodically and you want the node removed while it waits for the next run, the retention period must be smaller than that repeat interval; otherwise the node will stay on the Controller UI. For example, if the batch runs once an hour, the retention period would need to be well under 60 minutes for the node to disappear between runs.
The retention period is the elapsed time after the node loses its connection to the process, stops transferring data, or the JVM shuts down.
Therefore, I expect the retention period setting to work well as long as no data transfer happens in the meantime.

And "-Dappdynamics.jvm.shutdown.mark.node.as.historical=true" only works after a JVM shutdown.
This makes a node historical and the node will disappear soon after a JVM shutdown. 

I hope this answer helps.


Best regards,

Hirokazu