
How do I configure NMON pieces for Splunk Cloud?

Explorer

I am trying to follow the instructions in the Nmon Performance Monitor Splunk App for Unix and Linux documentation (I don't have enough karma points to post the link, apparently, so the link below may not show):

https://nmon-for-splunk.readthedocs.io/en/latest/installation_splunkcloud.html

Anyway, I am on the "Deploy to Splunk Cloud" page in the documentation, looking at the deployment matrix. I have:

Name: NMON Performance by Octamis
Folder: nmon
Version: 1.9.19

Self-service installed on Splunk Cloud, with an install location of Search Heads and Indexers (I did not specify the location; it just installed there).

Name: PA-nmon_light
Folder: PA-nmon_light
Version: 1.3.22

Self-service installed on Splunk Cloud with install location of Search Heads and Indexers.

Name: TA-nmon, deployed to a Linux box via the deployment server.

I have a custom inputs.conf defined on that same host in a local directory that has this:

[monitor:///var/nmon_repo/]
disabled = false
whitelist = \.nmon$
index = nmon
sourcetype = nmon_processing
crcSalt = 

(stolen mostly from the "Indexing Nmon data generated out of Splunk" section on the page after "Deploy to Splunk Cloud")
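As I understand it, Splunk applies the whitelist as a regular expression against the full file path under the monitored directory. Just as a rough sanity check of which files `\.nmon$` would pick up, here is a toy illustration using grep in place of Splunk's matching (the paths are made up):

```shell
# Toy check of which paths the whitelist regex would match.
# grep -E stands in for Splunk's regex matching; sample paths are invented.
printf '%s\n' \
  /var/nmon_repo/host1_240101.nmon \
  /var/nmon_repo/host1_240101.nmon.gz \
  /var/nmon_repo/README.txt \
| grep -E '\.nmon$'
```

Only the first path survives the filter; the `.nmon.gz` file does not match `\.nmon$`, which matters if the repository also contains compressed archives.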

Finally, I created the nmon index in Splunk Cloud. I do have data going into Splunk Cloud, but that data just looks like raw text and is not being parsed in the ways that it apparently needs to be in order for the NMON app dashboards to work as expected.

I did not configure nmon on the server (another developer did that) - is there some special way that has to be done in order for all of this to work? Am I missing some step to generate data models or something like that?

I know that Guilhem Marchand visits these forums and answers a lot of NMON questions, so hopefully this question will attract his attention! Of course, I am happy to receive help from anyone else who has gotten this working for Splunk Cloud!

Thanks, Splunkers!

1 Solution

SplunkTrust

Hello @bill_kirby !

I'm there yes 😉 lol

So I understand that you want to index and manage cold nmon data files: nmon files that are not generated on each server by the provided TA, but are instead gathered onto one server that has access to the nmon file collection (most likely via NFS or SCP transfers), which then processes them.

The recommended approach is to have the TA handle the generation and processing of the data locally on each server, and forward the data to Splunk.
The cold data scenario is relevant for customers that have an existing workflow (mainly for old historical data) and do not want to deploy the TA or a Splunk Universal Forwarder.

Question 1: What are the reasons why you use the cold data scenario ?

Question 2: Did you deploy the TA-nmon on the box that is monitoring the directory ?

When using this mode of monitoring nmon files, the nmon_processing sourcetype is handled by a very specific parameter called "unarchive_cmd", which ultimately calls the shell wrapper:

[source::.../*.nmon]
invalid_cause = archive
unarchive_cmd = $SPLUNK_HOME/etc/apps/TA-nmon/bin/nmon2csv.sh --mode realtime
sourcetype = nmon_processing
NO_BINARY_CHECK = true

# To manage repositories of archived cold nmon files (add your own stanzas for other compressed formats)
[source::.../*.nmon.gz]
invalid_cause = archive
unarchive_cmd = gunzip | $SPLUNK_HOME/etc/apps/TA-nmon/bin/nmon2csv.sh
sourcetype = nmon_processing
NO_BINARY_CHECK = true 
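This processing step is also why the raw data looked unparsed: nmon files interleave header rows (which define field names per metric) with timestamp and data rows, so the converter has to join them before the events are usable. Purely as a toy sketch of that idea (this is NOT the real nmon2csv.sh logic; the sample lines and parsing rules are simplified assumptions):

```python
# Toy illustration of the kind of restructuring an nmon converter performs.
# NOT the actual TA-nmon nmon2csv code; sample lines and rules are simplified.

def parse_nmon(lines):
    """Join nmon header rows with their data rows into flat records."""
    headers = {}     # metric name -> list of field names
    timestamps = {}  # snapshot id (T0001, ...) -> wall-clock time
    events = []      # flat (timestamp, metric, field, value) records
    for line in lines:
        parts = line.strip().split(",")
        key, second = parts[0], parts[1]
        if key == "ZZZZ":                 # snapshot timestamp record
            timestamps[second] = parts[2]
        elif not second.startswith("T"):  # header row defining field names
            headers[key] = parts[2:]
        else:                             # data row for one snapshot
            for field, value in zip(headers[key], parts[2:]):
                events.append((timestamps[second], key, field, float(value)))
    return events

sample = [
    "CPU_ALL,CPU Total,User%,Sys%",
    "ZZZZ,T0001,00:00:10",
    "CPU_ALL,T0001,12.5,3.1",
]
for event in parse_nmon(sample):
    print(event)
```

Without this join, Splunk only sees the raw comma-separated lines, which is exactly the "raw text" symptom described above.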

So to summarise, whether that Linux box is a UF or an HF, you need to have the TA-nmon deployed on it.

Let me know how this goes !

Last but not least, there is a better version of the apps, called Metricator, on Splunkbase. These are better in the sense that they use the metric store rather than regular events to store the metrics, providing better performance at a lower cost on the Splunk side (for the data models), but a bit more license usage.
Note, however, that the cold data mode is not officially supported there; at least it was not the main goal.

Guilhem


Explorer

Thank you again, Guilhem. It looks like the piece I was missing was the TA on the host. I don't know what I deployed there, but it was apparently not the TA!

-Bill


SplunkTrust

Fair enough @bill_kirby
That was my guess 😉
Awesome



Explorer

Yes, I looked at the Metricator version of the apps, but they are not certified for Splunk Cloud. In my experience, Splunk Support generally will not install apps in Splunk Cloud that are not certified as such (and I'm almost certain they won't allow me to self-install apps that are not certified).

To answer your questions:

Question 1: What are the reasons why you use the cold data scenario ?
Pretty much because the other developer who installed nmon was already familiar with it, and we initially reviewed the results manually. Later we decided to try to get the logs into Splunk for easier analysis. That is not to say we couldn't change the way we're collecting nmon data; I was just trying to find a way to avoid doing that.

Question 2: Did you deploy the TA-nmon on the box that is monitoring the directory ?
Yes, the TA-nmon is deployed on that box.

I will explore your answer a bit, including the unarchive configuration, and will also work with the other developer to see about changing the way we run it.

Thank you for your quick response, Guilhem!
