Hi,
In addition to Giuseppe's great answer:
If there is any particular reason you would rather not use the Nmon app to ingest nmon data into Splunk, feel free to share it, I will be happy to read it.
When I started developing the Nmon app, I searched intensively for the best, simplest and most optimised way to ingest nmon data into Splunk. After numerous tests and attempts, I designed things the way they are (using third-party parsers) for various reasons related to the structure of nmon files.
The structure of nmon performance data (not the inventory data) is indeed a structured CSV format, but the timestamps come in a specific format that Splunk cannot interpret on its own: the ZZZZ lines define the timestamp at the start of every measurement snapshot, and the performance lines only reference that snapshot.
This is one of the key points that make direct ingestion by Splunk practically impossible. There are other reasons too, such as the inventory data (multi-line events in the AAA and BBB sections) and the per-column structure of the device statistics.
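To illustrate (this is only a minimal Python sketch assuming the standard nmon file layout, not the app's actual parsers): each snapshot starts with a ZZZZ line such as ZZZZ,T0001,00:00:10,01-JAN-2024, while the performance lines (CPU_ALL, MEM, DISKBUSY, ...) only carry the snapshot identifier (T0001), so a parser has to re-attach the ZZZZ timestamp to every metric line before Splunk can index it with a proper _time:

# Minimal sketch only, not the app's real parser: shows the timestamp
# re-association that nmon files require before indexing.
import csv
from datetime import datetime

def parse_nmon(path):
    timestamps = {}   # snapshot id (e.g. "T0001") -> datetime of that snapshot
    events = []       # (timestamp, section, values) ready to be indexed
    with open(path) as f:
        for row in csv.reader(f):
            if not row:
                continue
            section = row[0]
            if section == "ZZZZ":
                # ZZZZ,T0001,00:00:10,01-JAN-2024 : the only place the time lives
                timestamps[row[1]] = datetime.strptime(
                    row[3] + " " + row[2], "%d-%b-%Y %H:%M:%S")
            elif len(row) > 1 and row[1] in timestamps:
                # Performance line, e.g. CPU_ALL,T0001,2.5,1.0,... : it only
                # carries the snapshot id, so the timestamp must be re-attached
                events.append((timestamps[row[1]], section, row[2:]))
    return events

That timestamp re-association, together with the handling of the multi-line AAA/BBB inventory sections and the per-device columns, is what the third-party parsers take care of before the data reaches Splunk.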
There are also numerous ways of using nmon. One of them is recording mode (typically something like nmon -f -s 60 -c 1440), which generates nmon files that you ingest later; this is not the same as indexing the output of a simple scripted input.
You can start by having a read of the very interesting nmon FAQ:
http://nmon.sourceforge.net/pmwiki.php?n=Site.NmonFAQ
In any case, the Nmon app has existed since the beginning of 2014, and since then there have been numerous large and varied deployments in very different conditions, which makes it a very robust, efficient and simple-to-implement solution.
Finally, if the purpose is to transport the information through syslog (e.g. no deployment of Universal Forwarders on the servers), I also provide a solution with the nmon-logger package:
http://nmon-for-splunk.readthedocs.io/en/latest/rsyslog_deployment.html
http://nmon-for-splunk.readthedocs.io/en/latest/syslogng_deployment.html
Hope this helps.
Guilhem