From my very limited understanding of the JMX plugin for Splunk, the plugin needs to be set up on the server where all the indexing happens. Is it possible to move this plugin from the indexer out to the individual forwarders instead, and if so, how could I do that?
You can certainly deploy the JMX Mod Input on Universal Forwarders. This is a recommended approach for distributed environments, and it also scales out to poll large numbers of JVMs.
On a UF you'll need to install a system Python 2.7 runtime, since the UF does not ship with its own Python.
You then need to split out the components of the app. You might use Splunk's Deployment Server, Chef, Puppet, etc., but below I'll detail the manual steps.
1) the data collection logic goes on the Splunk UF.
jmx_ta/bin/*
jmx_ta/default/inputs.conf
jmx_ta/default/app.conf
jmx_ta/README/*
jmx_ta/metadata/*
2) the index definition goes on the Splunk Indexer
jmx_ta/default/indexes.conf
jmx_ta/default/props.conf
jmx_ta/default/transforms.conf
jmx_ta/metadata/*
3) the UI logic and Knowledge objects go on your Search Heads
jmx_ta/default/props.conf
jmx_ta/default/transforms.conf
jmx_ta/default/app.conf
jmx_ta/default/data/*
jmx_ta/static/*
jmx_ta/metadata/*
There is no setup UI on a Universal Forwarder, but the manual setup steps are simple:
1) set up your config.xml file
2) in default/inputs.conf, enable the input that references your config.xml file
3) data will then be collected and forwarded to your indexer(s)
4) any errors will be searchable at "index=_internal ExecProcessor error jmx.py"
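With those pieces in place, the UF-side configuration comes down to a small inputs.conf stanza. The exact stanza format and parameter names depend on your version of the JMX app, so treat this as an illustrative sketch only; the interval, index, and sourcetype values below are assumptions, not shipped defaults — check the app's README for the real ones:

```ini
# Hypothetical scripted-input stanza enabling the JMX poller on the UF.
# Stanza path and parameter values are illustrative; verify against your
# app version's README before using.
[script://./bin/jmx.py]
disabled = false
interval = 60
index = jmx
sourcetype = jmx
```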
Hi, I notice that props.conf and transforms.conf go on both the Indexer and the Search Head?
That is correct.
Is this it? Do we just copy the bits over to the different components? And do we then delete the original app on the indexer, except for the files that are supposed to stay there?
I would start clean to be safe.
Copy the various artifacts (or push them with whatever deployment tool you use) to the respective Splunk nodes as I have described above.
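The copy step can be sketched as a small shell script that stages one package per node type, following the file lists above. The staging layout and paths here are placeholders; the first block just fabricates a minimal stand-in for the unpacked app so the sketch runs end to end — in a real deployment you would start from the actual jmx_ta directory:

```shell
# Stand-in for the unpacked app so this sketch is self-contained;
# skip this block and point SRC at the real jmx_ta in practice.
mkdir -p jmx_ta/bin jmx_ta/README jmx_ta/metadata jmx_ta/default/data jmx_ta/static
touch jmx_ta/default/inputs.conf jmx_ta/default/app.conf jmx_ta/default/indexes.conf \
      jmx_ta/default/props.conf jmx_ta/default/transforms.conf

SRC=${SRC:-jmx_ta}     # where the full app is unpacked
OUT=${OUT:-staging}    # local staging area, one subdir per node type

# 1) Universal Forwarder package: data collection logic
mkdir -p "$OUT/uf/jmx_ta/default"
cp -r "$SRC/bin" "$SRC/README" "$SRC/metadata" "$OUT/uf/jmx_ta/"
cp "$SRC/default/inputs.conf" "$SRC/default/app.conf" "$OUT/uf/jmx_ta/default/"

# 2) Indexer package: index definition and parse-time settings
mkdir -p "$OUT/idx/jmx_ta/default"
cp -r "$SRC/metadata" "$OUT/idx/jmx_ta/"
cp "$SRC/default/indexes.conf" "$SRC/default/props.conf" \
   "$SRC/default/transforms.conf" "$OUT/idx/jmx_ta/default/"

# 3) Search Head package: UI logic and knowledge objects
mkdir -p "$OUT/sh/jmx_ta/default"
cp -r "$SRC/metadata" "$SRC/static" "$OUT/sh/jmx_ta/"
cp -r "$SRC/default/data" "$OUT/sh/jmx_ta/default/"
cp "$SRC/default/props.conf" "$SRC/default/transforms.conf" \
   "$SRC/default/app.conf" "$OUT/sh/jmx_ta/default/"
```

Each staged package can then be pushed to its node with scp or your deployment tool of choice.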
The JMX application needs Python to run, which is not included with the UF. In my organization, we use "Heavy Forwarders" for this task: full Splunk installs used only for capturing data from databases, JVMs, etc., as well as parsing events and routing useless data to the nullQueue to reduce license consumption. I.e., 47TB becomes ~400GB.
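That nullQueue routing is done with a props.conf/transforms.conf pair on the Heavy Forwarder. A minimal sketch, assuming a sourcetype named jmx and that you want to drop DEBUG-level events (both the sourcetype and the regex are placeholders for your own filtering rules):

```ini
# props.conf -- attach the filtering transform to a sourcetype
[jmx]
TRANSFORMS-null = setnull

# transforms.conf -- route matching events to the nullQueue (i.e. discard
# them before they are indexed, so they don't count against the license)
[setnull]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue
```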