All Apps and Add-ons

NMON Data from central share - no Data Inputs

eichfuss
Path Finder

Hello,

I'm still trying to get the nmon App running. All nmon files from our AIX servers are collected on one server, on which the Universal Forwarder is also installed.

/opt/splunkforwarder/etc/apps/TA-nmon/local/inputs.conf


[monitor:///opt/nmon/nmon_collect/*/*nmon]
disabled = false
index = nmon
sourcetype = nmon_processing
crcSalt = <SOURCE>
host_segment = 4

[batch:///opt/splunkforwarder/etc/apps/TA-nmon/var/csv_repository/*nmon*.csv]
disabled = false
move_policy = sinkhole
recursive = false
crcSalt = <SOURCE>
index = nmon
sourcetype = nmon_data
source = nmon_data


[batch:///opt/splunkforwarder/etc/apps/TA-nmon/var/config_repository/*nmon*.csv]
disabled = false
move_policy = sinkhole
recursive = false
crcSalt = <SOURCE>
index = nmon
sourcetype = nmon_config
source = nmon_config


[script://./bin/nmon_helper.sh]
disabled = false
index = nmon
interval = 60
source = nmon_collect
sourcetype = nmon_collect



/opt/splunkforwarder/etc/apps/TA-nmon/local/props.conf

[source::/opt/nmon/nmon_collect/*/*nmon]
invalid_cause = archive
unarchive_cmd = /opt/splunkforwarder/etc/apps/TA-nmon/bin/nmon2csv.pl
sourcetype = nmon_processing
NO_BINARY_CHECK = true


[nmon_data]
FIELD_DELIMITER=,
FIELD_QUOTE="
HEADER_FIELD_LINE_NUMBER=1
INDEXED_EXTRACTIONS=csv
NO_BINARY_CHECK=1
SHOULD_LINEMERGE=false
TIMESTAMP_FIELDS=ZZZZ
TIME_FORMAT=%d-%m-%Y %H:%M:%S
KV_MODE=none
pulldown_type=true


[nmon_processing]
EXTRACT-cksum = (?i) .*?: (?P<cksum>\d+)
TIME_FORMAT=%Y-%m-%d %H:%M:%S


[nmon_config]
BREAK_ONLY_BEFORE=CONFIG,
MAX_EVENTS=10000
NO_BINARY_CHECK=1
SHOULD_LINEMERGE=true
TIME_FORMAT=%d-%b-%Y:%H:%M
TIME_PREFIX=CONFIG,
TRUNCATE=0

The problem is that only the nmon files from the localhost on which the forwarder is running are converted to CSV files. I tried *, //, /.../ for the wildcard, but always with the same result. If I change the path and put a server name in place of the wildcard, the nmon files from that server are converted.

The path looks like this:

/opt/nmon/nmon_collect/SERVERNAME/
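
For illustration (server names and file names invented), the layout looks like this, with each host's nmon files one directory below nmon_collect:

/opt/nmon/nmon_collect/
    server1/server1_140316_0000.nmon
    server2/server2_140316_0000.nmon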

Can someone help me find the mistake?

Thanks
Cheers Sven


guilmxm
SplunkTrust

Sven,

Tested and approved, please follow this configuration and steps:

Note: I assume your forwarder is properly connected to your indexer; as far as I understood, this is the case, since you already have data from it in your indexer (the local nmon perf data of the forwarder). A quick way to double-check is shown below.
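
As a hedged check (the install path /opt/splunkforwarder is taken from your post), run this on the forwarder:

# lists the configured indexers and whether the connection is active
/opt/splunkforwarder/bin/splunk list forward-server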

1. Delete your $SPLUNK_HOME/etc/apps/TA-nmon/local/inputs.conf and props.conf

2. Create inputs.conf and props.conf with the following content:

$SPLUNK_HOME/etc/apps/TA-nmon/local/inputs.conf

##################################
#           nmon2csv stanza         #
##################################

[monitor:///opt/nmon/nmon_collect/*/*nmon]
disabled = false
index = nmon
sourcetype = nmon_processing
crcSalt = <SOURCE>

$SPLUNK_HOME/etc/apps/TA-nmon/local/props.conf

##################################
#           nmon2csv stanza         #
##################################

[source::/opt/nmon/nmon_collect/*/*nmon]
invalid_cause = archive
unarchive_cmd = $SPLUNK_HOME/etc/apps/TA-nmon/bin/nmon2csv.pl
sourcetype = nmon_processing
NO_BINARY_CHECK = true

3. Restart the forwarder
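
For example (a hedged sketch; the install path /opt/splunkforwarder is assumed from your post):

# restart the Universal Forwarder so the new stanzas take effect
/opt/splunkforwarder/bin/splunk restart

# then verify the effective monitor configuration
/opt/splunkforwarder/bin/splunk btool inputs list monitor --debug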

Once this is done, any directory within "nmon_collect" that contains nmon files will be considered by the forwarder, converted, and the data streamed to your indexer.

In the local forwarder log, you will see events like:

07-25-2014 12:04:09.652 +0200 INFO  ArchiveProcessor - handling file=/opt/nmon/nmon_collect/spou765/spou765_140316_0000.nmon
07-25-2014 12:04:09.652 +0200 INFO  ArchiveProcessor - reading path=/opt/nmon/nmon_collect/spou765/spou765_140316_0000.nmon (seek=0 len=1293134)
07-25-2014 12:04:11.301 +0200 INFO  ArchiveProcessor - Finished processing file '/opt/nmon/nmon_collect/spou765/spou765_140316_0000.nmon', removing from stats
07-25-2014 12:04:11.353 +0200 WARN  TcpOutputProc - The event is missing source information. Event : pÉ:^A

Within the indexer, you can search for the nmon_processing activity like this:

index="nmon" sourcetype="nmon_processing"

2014-07-25 12:04:11
 host: spou765, Nmon data in date of 16-MAR-2014, starting time 00:00:03, Process done.
2014-07-25 12:04:11
 host: spou765, Nmon data in date of 16-MAR-2014, starting time 00:00:03, NMON file cksum: 2276986525

You will also find events corresponding to the processing steps of this nmon file.

Finally, the nmon data for these hosts will be available within the interfaces as usual.

Note: this configuration won't purge nmon files once they have been managed (purging is of course required at some point); this may imply CPU load on the forwarder if there is a very large number of nmon files in your repository.
But once an nmon file has been managed by the forwarder, it is known as already processed and won't be managed again.
So you should have a retention policy or archiving procedure to avoid an ever-growing number of nmon files in the repository; a hypothetical example follows.
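
As a purely illustrative sketch (the 30-day retention and the cron schedule are my assumptions, adjust them to your own policy):

# hypothetical daily cron entry: remove nmon files older than 30 days
0 2 * * * find /opt/nmon/nmon_collect -name "*.nmon" -mtime +30 -exec rm {} \;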

Please tell me if this is OK for you.

Cheers,

Guilhem

guhanraman
New Member

I tried the steps above but still haven't gotten it to work. I have the nmon files under /opt/splunkforwarder/var/run/nmon/var/nmon_repository, but they don't get refreshed. In any case, the Splunk server lists 0 deployed apps (it does show the client host name). The forwarder etc. are set up correctly, and splunkd.log on the forwarder shows:

06-01-2015 01:25:04.227 -0700 INFO ArchiveProcessor - new tailer already processed path=/opt/splunkforwarder/var/run/nmon/var/nmon_repository/g-VirtualBox_150531_1716.nmon


guilmxm
SplunkTrust

Hi,

Can we exchange by mail? (You will find my mail address in the Help page within the App, the marker icon on the home page.)
To be honest, I do not know if you are the original person who opened this question or someone new. (In the latter case, please consider opening a new question, otherwise your request will not get visibility.)

Thanks,

Guilhem


guilmxm
SplunkTrust

Sven,

A new release, version 1.4.1, has been published today.
It corrects the default host field assignment by evaluating it from the nmon data.
Newly indexed data will have the host field equal to the custom hostname field.

Guilhem


guilmxm
SplunkTrust

Hi,

A new major release of the App has been published today (version 1.4.0).

It introduces the new converter, rewritten in Python.

To upgrade, please make sure to update your local configuration (such as the configuration above), changing the nmon2csv line from "nmon2csv.pl" to "nmon2csv.py", as shown below.
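
For example, the props.conf stanza from the accepted configuration above would become:

[source::/opt/nmon/nmon_collect/*/*nmon]
invalid_cause = archive
# changed from nmon2csv.pl to the new Python converter
unarchive_cmd = $SPLUNK_HOME/etc/apps/TA-nmon/bin/nmon2csv.py
sourcetype = nmon_processing
NO_BINARY_CHECK = true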

Guilhem

guilmxm
SplunkTrust

Hi Sven,

Great 🙂

To answer: this is not a problem, it is expected behaviour.

The application does not use the default "host" field to identify the nmon host source, precisely to handle this kind of case.
It uses a custom field, "hostname", which is extracted from the nmon data.

So:
- When a host generates its own nmon data, the default host field is indeed the same as the hostname field.
- When a host manages nmon files it did not generate (such as files coming from an external share), the host field will always be its own.

That's why the App uses the hostname field, as you will see in the interfaces; a quick comparison search is sketched below.
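
As a hedged sketch (index and sourcetype names taken from this thread), you can compare the two fields side by side:

index="nmon" sourcetype="nmon_data" | stats count by host, hostname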


eichfuss
Path Finder

Hi Guilhem,
it seems to work, thanks! But I see another problem: in inputs.conf I have the parameter host_segment = 4, but all inputs are indexed with the forwarder as host. Should it work with the host_segment parameter?

Cheers, Sven


eichfuss
Path Finder

Thanks a lot, Guilhem. I will try this and give you feedback. Yes, you're right, my post was not readable. Shame on me, I have to learn how to write a post 🙂

Thanks
Sven


guilmxm
SplunkTrust

Hi Eichfuss,

Have you read the deployment scenarios in the Help page?

If I understand your case correctly, you want to use a forwarder to work with nmon files you collect yourself, and to send the data to your indexer, right?

If so, that's quite easy. Based on the path you mention, you just have to edit your local forwarder configuration as follows:

$SPLUNK_HOME/etc/apps/TA-nmon/local/inputs.conf

[monitor:///opt/nmon/nmon_collect/SERVERNAME/*nmon]

disabled = false
index = nmon
sourcetype = nmon_processing
crcSalt = <SOURCE>

$SPLUNK_HOME/etc/apps/TA-nmon/local/props.conf

[source::/opt/nmon/nmon_collect/SERVERNAME/*nmon]

invalid_cause = archive
unarchive_cmd = $SPLUNK_HOME/etc/apps/TA-nmon/bin/nmon2csv.pl
sourcetype = nmon_processing
NO_BINARY_CHECK = true

Then restart your forwarder.

Any nmon file stored within this directory will be handled by the forwarder, converted, and the data streamed to the indexer.

Take care with the directory structure within your repository: if you have sub-directories, you have to adapt the above configuration; see the sketch below.
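
For instance, a purely hypothetical sketch: if your files sat one directory deeper, such as /opt/nmon/nmon_collect/SERVERNAME/daily/, both stanzas would need an extra wildcard segment:

# inputs.conf: one extra wildcard segment for the sub-directory
[monitor:///opt/nmon/nmon_collect/SERVERNAME/*/*nmon]

# props.conf: the source:: pattern must match the same depth
[source::/opt/nmon/nmon_collect/SERVERNAME/*/*nmon]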

Let me know how this goes for you.

guilmxm
SplunkTrust

Hi Sven,

Sorry, my response yesterday was a bit off topic; your output was not code-formatted and was difficult to read.

I'm reproducing your configuration and will get back to you.
