
How do I implement the nmon app for cold data?

maciep
Champion

I'm hoping to get some help with the NMON app, because I'm having trouble getting anything to show up in the nmon_data sourcetype. In our environment, we're in "cold" mode. Our linux team has a central repository for nmon files. They get copied from the servers to the central repository once a day.

For now, I'm just trying to get this working in our test Splunk environment. For that environment, we have a Search Head and an Indexer. The Search Head has access to the central repository, so it is the forwarder in this case as well.

I have the nmon app deployed to the search head and created a local inputs.conf to monitor our share. I also disabled the default nmon monitor that pointed to $SPLUNK_HOME/var/run/nmon....
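
Roughly, that local/inputs.conf looks like this (a sketch from memory; the default stanza name and our share path are placeholders, not exact values):

# local/inputs.conf on the search head (sketch)
# disable the app's default monitor of $SPLUNK_HOME/var/run/nmon
[monitor://$SPLUNK_HOME/var/run/nmon]
disabled = 1

# monitor the central nmon repository share instead
[monitor:///data/nmon_repository]
disabled = 0
index = nmon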

I created the index manually on our indexer and also installed the PA-nmon app (although I'm not sure if that's necessary, since parsing should take place on the search head). I disabled all of the inputs in that app.

Once I restart Splunk, I do see that the nmon files are processed (nmon_processing sourcetype), but no perfdata ever gets indexed. I verified that Splunk is watching the csv_repository directory. But I don't think anything ever shows up in that directory.
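
To check that the directory is being watched, I just ran a plain internal-log search, along the lines of:

index=_internal sourcetype=splunkd component=TailingProcessor csv_repository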

I've tested the script manually by cat'ing a file directly into the script (as outlined in the nmon wiki). When running the Python script, there was an error about "Encountered an Unexpected error while trying to analyse the ending period of this Nmon". So I updated nmon2csv.sh to launch the Perl script instead. When testing the Perl script directly, the output looks good (I think). But no nmon.csv files are created there either; only the config data is created.
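
For reference, the manual test looked roughly like this (the paths and the Python/Perl script names are from memory, so treat them as examples rather than the exact app layout):

# run from the app's bin directory, piping a sample nmon file into the scripts
cat /data/nmon_repository/host1/host1_151001_0000.nmon | ./nmon2csv.sh
cat /data/nmon_repository/host1/host1_151001_0000.nmon | ./nmon2csv.py
cat /data/nmon_repository/host1/host1_151001_0000.nmon | ./nmon2csv.pl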

I'm not sure what I'm missing. I follow the script as best I can, but not well enough to determine in what scenario the nmon.csv file doesn't get created.

Any help would be appreciated. I didn't want to clutter this initial post with all of the config files / script outputs / splunk logs / etc. But I can provide whatever might help resolve the issue.


guilmxm
SplunkTrust

Right, so I have verified this step by step, and following our exchange:

  • "Encountered an Unexpected error while trying to analyse the ending period of this Nmon" is encountered because of the nmon file is not valid, it must contain at least one ZZZZ timestamp.
    When it does not, then this is not a normal situation, the file only contains config data (not normal), or has been corrupted or truncated

  • Then, a bug was introduced in 1.6.07 with the change of the temporary directory for nmon2csv.sh (it no longer uses /tmp but $SPLUNK_HOME/var/run/nmon); this ONLY affects systems that do not generate nmon perf data AND that manage external nmon collections.

Unfortunately, this is your case, so please download the latest version I have just released, V1.6.09

  • About Perl and Python, you should not have trouble, as you have full Splunk instances that come with a 2.7.x Python interpreter, so nmon2csv.sh will choose Python and not Perl. Note that systems that do not have Python 2.7.x available must have a Perl core dependency installed: Time::HiRes (yum install perl-Time-HiRes for RHEL)

FINALLY HERE IS YOUR PROCEDURE (quite simple !!!):

1- On the indexer, manually declare the "nmon" index; in your example the PA-nmon is not required because the parsing will be done by the search head (it would be required with a UF sending data directly to the indexer layer)
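
A minimal indexes.conf sketch for that, assuming the default $SPLUNK_DB volume layout (adapt paths and retention to your environment):

# indexes.conf on the indexer (sketch)
[nmon]
homePath   = $SPLUNK_DB/nmon/db
coldPath   = $SPLUNK_DB/nmon/colddb
thawedPath = $SPLUNK_DB/nmon/thaweddb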

2- On the search head, install the Nmon App

3- Ensure that your search head forwards its data to the indexer layer (This is very very important !)

http://docs.splunk.com/Documentation/Splunk/6.3.0/DistSearch/Forwardsearchheaddata

A Splunk search on the indexer over internal events MUST return data for both the indexer and the search head (ex: index=_internal sourcetype=splunkd)
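
A sketch of the search head's outputs.conf for this, following the documentation above (the indexer address is only an example):

# outputs.conf on the search head (sketch)
[indexAndForward]
index = false

[tcpout]
defaultGroup = my_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:my_indexers]
server = indexer1.example.com:9997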

4- On the search head, create a local/inputs.conf to manage your nmon collection:

Example:

[monitor:///data/nmon_repository/.../*.nmon]
disabled = 0
followTail = 0
index = nmon
sourcetype = nmon_processing
crcSalt = <SOURCE>

This assumes that you have various sub-directories, with the nmon files underneath them.

5 - Stop the search head

6 - Remove the fishbucket index to force Splunk to re-process already seen data:

./splunk clean eventdata -index _thefishbucket

Or even delete every index on the search head.

7 - Start the search head

8 - Verify the nmon_processing sourcetype, and nmon_data
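
For example, with simple searches (adapt the index name if you changed it):

index=nmon sourcetype=nmon_processing | head 20
index=nmon sourcetype=nmon_data | stats count by host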

And you will be good 🙂

Don't hesitate to revert.

I have done this step by step from scratch, there is no customization required.

IMPORTANT: Keep in mind that this is not a recommended solution for a deployment scenario; a search head should not index data (apart from Splunk internal data and possibly a few things like system logs...)
This role should be handled by a dedicated heavy forwarder, or even a universal forwarder. (But it is best to manage large external collections using heavy forwarders, to have the parsing done closer to the data.)

maciep
Champion

Thanks for responding so quickly, Guilhem! I really appreciate it!

I know very little about nmon myself, so I wouldn't have even thought the files were missing the performance data or timestamp. I'll review the nmon repository to see if I can find better ones to test with. If not, I'll engage our linux team that actually manages nmon to see if they can assist. Maybe all they need is the config data?

And I will upgrade to the latest release. Not a problem at all. Thanks for confirming that I only need the index on the indexer, no apps.

If all goes well in our test environment, we'll migrate to production. In that environment we have a SHC, Indexer Cluster, Deployment Server and a Heavy Forwarder we can use for this process. With your documentation and a better understanding of how the app works in general, I don't think we'll have a problem with the deployment there.

I'll follow up here once I have some more results.

Thanks again!


guilmxm
SplunkTrust

Hi !

It should not be an issue to use the search head as the Splunk instance that manages cold nmon data collections, although in normal circumstances this role is expected to be handled by an independent Heavy Forwarder instance.

Can we exchange by mail ? (guilhem.marchand@gmail.com)

I will test this scenario and will update you if any customization is required.

Guilhem
