Hi,
I have a requirement to install the Dynatrace Application Performance Management app from Splunkbase. I have installed this app on one of my search heads and am looking for the next steps to get Dynatrace data into Splunk.
Can anyone provide the steps to get/configure Dynatrace data into the Splunk indexers?
Thanks,
-PR
Can this app/add-on be installed in a Search Head Cluster deployment? It's unclear from the documentation. Thanks in advance.
Joe
We have some additional documentation available on our community portal here: https://community.dynatrace.com/community/display/DL/Splunk+Application
The documentation above includes a link to an additional download: a Dynatrace system profile with a pair of catch-all business transactions that can be loaded into your active system profile to export all user-action and visit data from the active environment.
Hope this helps!
@Dynatrace
I have seen this document, but a few things are still unclear:
1. Where should the APM app be installed: on a search head, an indexer, or a universal forwarder?
2. What additional plug-ins (Java, Flume, etc.) do I need to install along with the app, and where should those plug-ins go?
3. Which host's IP address (indexer, search head, or forwarder) should I enter, with port 4321, when enabling the business transaction feed?
Thanks,
PR
The Splunk app includes Flume as well as the start scripts and configuration files necessary to decode the protobuf data sent by the Dynatrace Big Data Business Transaction Bridge. The app also includes sample searches, visualizations, and dashboards that are meant as examples of how to use the data Dynatrace transmits into Splunk.
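For illustration only, a minimal Flume agent along the lines of what the app's bundled configuration sets up might look like the sketch below. This is not the app's actual config: the agent, source, channel, and sink names are placeholders, and the real bundled configuration additionally wires in a Dynatrace-specific handler to decode the protobuf feed.

    # Sketch of a Flume agent with an HTTP source on port 4321.
    # All names and the bind address are placeholders; the app's
    # shipped config also configures a protobuf-decoding handler.
    agent.sources = dt-feed
    agent.channels = mem-channel
    agent.sinks = logger-sink
    agent.sources.dt-feed.type = http
    agent.sources.dt-feed.bind = 0.0.0.0
    agent.sources.dt-feed.port = 4321
    agent.sources.dt-feed.channels = mem-channel
    agent.channels.mem-channel.type = memory
    agent.sinks.logger-sink.type = logger
    agent.sinks.logger-sink.channel = mem-channel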
The only prerequisites for the app are the installation of Java on the Splunk host running the app, plus the GoogleMaps and MaxMind Splunk apps if you would like the included dashboards to work. If you are not interested in the included sample dashboards, you can skip installing those two apps.
The app will not cleanly support clustered environments at this time, as installing it would result in multiple instances of Flume running across multiple hosts. If you are running a clustered or otherwise more complex Splunk environment, our recommendation is to use the Dynatrace Big Data Business Transaction Bridge (https://community.dynatrace.com/community/display/DL/Big+Data+Business+Transaction+Bridge) on something like a forwarder: the bridge consumes the protobuf stream from Dynatrace and writes the data to the forwarder's disk, and Splunk then indexes that data from there.
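As a rough sketch of that forwarder setup, assuming the bridge writes its files to a directory like /opt/dynatrace/bt-export (the path, sourcetype, and index below are placeholders for your environment), an inputs.conf stanza on the forwarder could monitor that output:

    # Hypothetical monitor stanza; adjust the path to wherever the
    # Business Transaction Bridge writes its files, and the sourcetype
    # and index to your own naming conventions.
    [monitor:///opt/dynatrace/bt-export]
    sourcetype = dynatrace_bt
    index = dynatrace
    disabled = false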
Also, regarding installations in an environment with multiple search heads: I believe Flume will be running on all of them, so you can simply pick one of those hosts to configure as the feed target in the Dynatrace client.
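To make that concrete: whichever search head you pick, the business transaction feed in the Dynatrace client would simply point at that host and the Flume listener port, for example (hostname is a placeholder):

    # Hypothetical feed target; the port must match the Flume source.
    splunk-sh01.example.com:4321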