Has anyone integrated Splunk v5 with Netcool/OMNIbus? I found previous posts on sending SNMP traps via a perl script, and we have this set up and sending events, but the traps do not seem to contain all the data we would ideally like.
For example, we are monitoring all our *nix syslog data and have an alert that fires when a particular AIX ERRPT message comes across, sending an email and an SNMP trap. Ideally, we would like the trap to contain the host the message occurred on and the particular ERRPT message that appeared, but the data that actually comes across is almost meaningless. If we receive host data and messages in the trap, we can build logic on the Netcool side to create tickets for the appropriate groups and set their severity.
If this idea is not possible, are there other ways to integrate alerting with Netcool? We could probably send an email and have Netcool extract from that, or perhaps write a custom script.
Any help is appreciated!
Yes, that helps, but with those parameters will we be able to include detail like this?
This is a result that is emailed to us when a particular host does not send an event to Splunk for a certain duration. Ideally we would like to write this detail to a file so we can pick it up with the Netcool probe. So, for example, we could write to the syslog alert file with a script, and the line would say that host HOSTNAME was affected and has an Age of 13.31 seconds. Is it possible to include details like those?
Instead of sending an email, call a script, referring to the "Scripted Alerts" section mentioned in my earlier post. The variable SPLUNKARG8 will hold the path of the file where the results are stored. Your script can open that file, access the contents, and write them to the syslog file, which in turn is read by the syslog probe in Netcool.
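The approach above can be sketched as a minimal shell script. Everything environment-specific here is an assumption: the results file referenced by SPLUNKARG8 is typically gzipped in Splunk v5 (the script falls back to plain text), and /var/log/splunk_alerts.log stands in for whatever file your Netcool syslog probe tails.

```shell
#!/bin/sh
# Sketch of a Splunk scripted alert action: read the raw results file
# (SPLUNKARG8) and append one syslog-style line per event to a file
# that the Netcool syslog probe is configured to read.

# Format one syslog-style line: "<timestamp> splunk-alert [<search>] <event>"
emit_syslog_line() {
    search_name=$1
    event=$2
    printf '%s splunk-alert [%s] %s\n' "$(date '+%b %d %H:%M:%S')" \
        "$search_name" "$event"
}

main() {
    results_file="$SPLUNKARG8"                          # raw results file
    search_name="$SPLUNKARG4"                           # saved search name
    syslog_file="${SYSLOG_FILE:-/var/log/splunk_alerts.log}"  # probe's file

    # Splunk v5 usually gzips the results file; fall back to plain text.
    if gzip -t "$results_file" 2>/dev/null; then
        reader="gzip -dc"
    else
        reader="cat"
    fi

    # One line per result, so the probe rules can parse out host/message.
    $reader "$results_file" | while IFS= read -r line; do
        emit_syslog_line "$search_name" "$line" >> "$syslog_file"
    done
}
```

Invoke `main` at the end of the real script; with the host and ERRPT message present in each result row, your probe rules file can then split them into Node and Summary.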
Let me know if there are still any queries.
Refer to the section "Configure scripted alerts" in the Splunk Alert guide.
Splunk currently enables you to pass arguments to scripts both as command line
arguments and as environment variables. This is because command line
arguments don't always work with certain interfaces, such as Windows.
The values available in the environment are as follows:
· SPLUNKARG0 Script name
· SPLUNKARG1 Number of events returned
· SPLUNKARG2 Search terms
· SPLUNKARG3 Fully qualified query string
· SPLUNKARG4 Name of saved search
· SPLUNKARG5 Trigger reason (for example, "The number of events was greater than 1")
· SPLUNKARG6 Browser URL to view the saved search
· SPLUNKARG7 is not used for historical reasons
· SPLUNKARG8 File in which the results for this search are stored (contains raw results)
These can be referenced in UNIX shell as $SPLUNKARG0 and so on, or in
Microsoft batch files via %SPLUNKARG0% and so on. In other languages (perl,
python, and so on), use the language native methods to access the environment.
These values are also available as positional arguments passed on the command line of the script. You can use these as well if they are more convenient. Relatively old versions of Splunk do not provide the environment variables. However, due to platform reasons, the command line arguments are not entirely reliable in Windows.
The command line arguments that Splunk passes to the script are:
· 0 = Script name
· 1 = Number of events returned
· 2 = Search terms
· 3 = Fully qualified query string
· 4 = Name of saved search
· 5 = Trigger reason (i.e. "The number of events was greater than 1")
· 6 = Browser URL to view the saved search
· 7 = This option has been deprecated and is no longer used
· 8 = File where the results for this search are stored (contains raw results)
Note: Splunk encourages Windows users to use the $SPLUNKARG
environment variables when passing arguments to scripts.
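To illustrate the two access methods side by side, here is a tiny helper in UNIX shell that prefers the environment variable and falls back to the matching positional argument (the function name is just for illustration, not part of Splunk):

```shell
#!/bin/sh
# Read the trigger reason: prefer $SPLUNKARG5 from the environment,
# fall back to the 5th command line argument if the env var is unset.
show_reason() {
    printf '%s\n' "${SPLUNKARG5:-$5}"
}
```

The same `${SPLUNKARGn:-$n}` pattern works for any of the values in the list above.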
You can then use the values supplied as arguments in the script to populate the data that will be used by the probe.
Let me know if that helps.
Instead of sending a trap to Netcool, you can configure the perl script (or any script) as an alert action in Splunk to write customized text to a syslog file, and then use the syslog probe in Netcool to read that file.
Alternatively, you can create a script that calls "nco_sql" with a username and password (either on the command line or in a here-document, "<<"), with SQL that sets the alert columns, and use that script as an alert action in Splunk.
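A rough sketch of the nco_sql route, with the SQL supplied through a here-style pipe. The server name (NCOMS), user, and column set are assumptions to match to your ObjectServer, the Identifier shown is deliberately simplistic (a real one should deduplicate properly), and embedded single quotes in the values would need escaping:

```shell
#!/bin/sh
# Build the ObjectServer SQL for one alert, then feed it to nco_sql.

# Compose an insert into alerts.status for node/summary/severity.
# Severity defaults to 3 (minor) if not given.
build_insert_sql() {
    node=$1; summary=$2; severity=${3:-3}
    printf "insert into alerts.status (Identifier, Node, Summary, Severity) values ('splunk@%s', '%s', '%s', %s);\ngo\n" \
        "$node" "$node" "$summary" "$severity"
}

# Pipe the SQL into nco_sql; credentials/server are placeholders.
send_alert() {
    build_insert_sql "$@" | nco_sql -server NCOMS -user root -password ''
}
```

Called as a Splunk alert action, the script would pull the node and summary out of the results file (SPLUNKARG8) before invoking `send_alert`.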
Apart from that, you can also call a customized script to either send an email or send the alert contents to a specific TCP port, and use the Netcool email or port probes to read the data.
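The TCP option can be as small as this sketch: format the alert as one delimited line and push it to the port a Netcool probe is listening on. The hostname, port, and "|" delimiter are all placeholders to align with your probe's rules, and nc(1) flags vary by platform:

```shell
#!/bin/sh
# Forward a one-line alert ("host|message") to a TCP listener.

# Format one alert as "host|message" so the receiving probe can split it.
format_alert() {
    printf '%s|%s\n' "$1" "$2"
}

# Ship the formatted line to the probe's TCP port (placeholders).
send_alert() {
    format_alert "$1" "$2" | nc netcool-host 6789
}
```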
Let me know if it helps.
Yes, this definitely helps us. Are we really limited to the following parameters to show details about the alert, though? Or are there other ways, when writing to a custom output, to include more details?
# Note: in Perl, @ARGV does not include the script name (that is $0),
# so Splunk's argument 1 (number of events) is $ARGV[0].
$searchCount  = $ARGV[0];   # number of events returned
$searchTerms  = $ARGV[1];   # search terms
$searchQuery  = $ARGV[2];   # fully qualified query string
$searchName   = $ARGV[3];   # name of saved search
$searchReason = $ARGV[4];   # trigger reason
$searchURL    = $ARGV[5];   # browser URL to view the saved search
$searchTags   = $ARGV[6];   # deprecated, no longer used
$searchPath   = $ARGV[7];   # file where the raw results are stored