I am still trying to get the nmon App running. All nmon files from the AIX servers are collected on one server, on which the Universal Forwarder is also installed.
The problem is that only the nmon files from the localhost, on which the forwarder is running, are converted to CSV files. I tried *, //, and /.../ for the wildcard function, but always with the same problem. If I change the path and set a server name instead of the wildcard, the nmon files from that server are converted.
Tested and approved, please follow this configuration and these steps:
Note: I assume your forwarder is properly connected to your indexer; as far as I understood this is the case, since you already have data for it in your indexer (the local nmon performance data of the forwarder).
1. Delete your $SPLUNK_HOME/etc/apps/TA-nmon/local/inputs.conf and props.conf
2. Create inputs.conf and props.conf with the following content:
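(The exact inputs.conf and props.conf content attached to the original answer is not reproduced here. As a rough, hedged sketch only, assuming the nmon files land under /opt/nmon/nmon_collect/<servername>/ as in the logs below, the idea is a wildcarded monitor stanza in inputs.conf plus a source stanza in props.conf that hands each nmon file to the App's converter; every path and name here is an assumption to adapt to your environment.)
# inputs.conf - sketch only, adapt the path and index to your deployment
[monitor:///opt/nmon/nmon_collect/*/*.nmon]
disabled = false
index = nmon
# props.conf - sketch only; the nmon2csv converter ships with TA-nmon
[source::/opt/nmon/nmon_collect/*/*.nmon]
invalid_cause = archive
unarchive_cmd = $SPLUNK_HOME/etc/apps/TA-nmon/bin/nmon2csv.pl
sourcetype = nmon_processing
NO_BINARY_CHECK = true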
Once this is in place, any directory within "nmon_collect" containing nmon files will be considered by the forwarder, converted, and the data streamed to your indexer.
In the local forwarder log, you will see events like:
07-25-2014 12:04:09.652 +0200 INFO ArchiveProcessor - handling file=/opt/nmon/nmon_collect/spou765/spou765_140316_0000.nmon
07-25-2014 12:04:09.652 +0200 INFO ArchiveProcessor - reading path=/opt/nmon/nmon_collect/spou765/spou765_140316_0000.nmon (seek=0 len=1293134)
07-25-2014 12:04:11.301 +0200 INFO ArchiveProcessor - Finished processing file '/opt/nmon/nmon_collect/spou765/spou765_140316_0000.nmon', removing from stats
07-25-2014 12:04:11.353 +0200 WARN TcpOutputProc - The event is missing source information. Event : pÉ:^A
Within the indexer, you can search for the nmon_processing activity like this:
index="nmon" sourcetype="nmon_processing"
2014-07-25 12:04:11
host: spou765, Nmon data in date of 16-MAR-2014, starting time 00:00:03, Process done.
2014-07-25 12:04:11
host: spou765, Nmon data in date of 16-MAR-2014, starting time 00:00:03, NMON file cksum: 2276986525
You will also find events corresponding to each processing step of this nmon file.
Finally, the nmon data for these hosts will be available within the interfaces as usual.
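If helpful, a quick way to check which Nmon hosts have fresh data in the index is a search like the following (the index and sourcetype names are assumptions based on this thread, adjust them to your deployment):
index="nmon" sourcetype="nmon_data" | stats max(_time) as last_event by hostname | convert ctime(last_event)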
Note: This configuration won't purge any nmon file once it has been managed (which is of course required); this may imply CPU load on the forwarder if there is a very large number of nmon files in your repository.
But once managed by the forwarder, an nmon file is known as already processed and won't be managed again.
So you should have a retention policy or archiving procedure to avoid an ever-growing number of nmon files in the repository.
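As a purely illustrative example of such a retention policy (the repository path and the 7-day retention are assumptions, not something the App enforces), a daily cron entry on the collecting server could delete nmon files older than a week:
# remove nmon files older than 7 days from the central repository (sketch, adjust path and retention)
0 2 * * * find /opt/nmon/nmon_collect -type f -name "*.nmon" -mtime +7 -delete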
I tried the steps above but still haven't managed to make it work. I have the nmon file under /opt/splunkforwarder/var/run/nmon/var/nmon_repository, however it doesn't get refreshed. In any case, the Splunk server lists 0 deployed apps (it does show the client host name). The forwarder settings etc. are correct, and splunkd.log on the forwarder shows:
06-01-2015 01:25:04.227 -0700 INFO ArchiveProcessor - new tailer already processed path=/opt/splunkforwarder/var/run/nmon/var/nmon_repository/g-VirtualBox_150531_1716.nmon
Can we exchange by mail? (You will find my email address in the Help page within the App, via the marker icon on the home page.)
To be honest, I do not know if you are the original person who opened this question or someone new. (In the latter case, please consider opening a new question, or your request may not get visibility.)
A new release, Version 1.4.1, has been published today.
This corrects the host default field assignment by evaluating it from the Nmon data.
Newly indexed data will have the host field set to the same value as the custom hostname field.
A new main release of the App has been published today. (Version 1.4.0)
This introduces the new Python rewritten converter.
To upgrade, please make sure to update your local configuration (such as the configuration above) and change the nmon2csv line from "nmon2csv.pl" to "nmon2csv.py".
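As a hedged example (the path below mirrors the sketch earlier in this thread and is an assumption about your local configuration), the change is limited to the unarchive_cmd line in props.conf:
# before (Perl converter)
unarchive_cmd = $SPLUNK_HOME/etc/apps/TA-nmon/bin/nmon2csv.pl
# after (Python converter, version 1.4.0 and later)
unarchive_cmd = $SPLUNK_HOME/etc/apps/TA-nmon/bin/nmon2csv.py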
To answer: this is not a problem, and it is expected.
To address this kind of case, the application does not use the default "host" field to identify the Nmon host source.
It uses a custom field, "hostname", which is extracted from the Nmon data.
So:
- When the host generates its own Nmon data, the default host field is indeed the same as the hostname field will be.
- When the host manages Nmon files it did not generate (such as files coming from an external share), the host field will always be its own.
That's why the App uses the hostname field, as you will see in the interfaces.
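To see the difference concretely, a search like the following (the index and sourcetype names are assumptions, adapt them to your deployment) will show the forwarder in the host column but the original AIX servers in the hostname column for centrally collected files:
index="nmon" sourcetype="nmon_data" | stats count by host, hostname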
Hi Guilhem,
It seems to work, thanks. But I see another problem. In inputs.conf I have the parameter host_segment = 4, but all inputs are indexed with the forwarder as host. Should it work with the host_segment parameter?
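(For context, my understanding is that host_segment uses the Nth directory of the monitored path as the host value, which with my layout should give the per-server directory; this is standard Splunk monitor behaviour, shown here on my path:)
# path: /opt/nmon/nmon_collect/spou765/spou765_140316_0000.nmon
# segments: 1=opt, 2=nmon, 3=nmon_collect, 4=spou765 -> host_segment = 4 should set host = spou765
[monitor:///opt/nmon/nmon_collect/*/*.nmon]
host_segment = 4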
Thanks a lot Guilhem, I will try this and give you feedback. Yes, that's right, my post was not readable. Shame on me, I have to learn how to write a post 🙂