Does anyone know if the Netcool App for Splunk correctly handles updates to existing events in the alerts.status table, so that updates to an event are not indexed as new events in Splunk? This has always been the challenge of using an event-correlation engine database with Splunk.
I am currently using DB Connect to pull Netcool reporting events into a Splunk index. To get past the duplicates issue, I have to periodically clear the index and re-pull the entire Netcool event DB. Just hoping to find a better way.
It does not deduplicate events the way the ObjectServer does:
A new event will be indexed as an INSERT.
Subsequent deltas will be indexed as UPDATEs.
Removal of an event will be indexed as a DELETE.
Yes, the entire event is sent each time. The App is not designed to manage events; however, I could see it being developed into an outstanding reporting tool.
Now that I think on it ... you could write a Splunk query to grab the "first" record for a given Server Serial to get the most up-to-date entry, while at the same time retaining every state change the event has experienced (i.e., audit-level reporting).
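A minimal sketch of that search, assuming the events landed in an index called `netcool_index` with a `ServerSerial` field (field and index names will depend on your DB Connect input):

```
index=netcool_index sourcetype=netcool:events
| dedup ServerSerial
```

Splunk returns events newest-first, so `dedup ServerSerial` keeps only the most recent record per event; drop the `dedup` line to see the full audit trail of INSERT/UPDATE/DELETE state changes for each serial.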
I hope this helps!
I've been thinking of using DB Connect to query the Netcool reporting DB every 5 minutes for events where DeletedAt is NULL, writing the results into an empty index. This index would be cleared of events before every DB Connect query, so that it only ever holds currently open Netcool events.
Reports from this index (call it netcoolnearlive_index) would lag the live alerts.status table on the ObjectServer by at most 5 minutes. In theory, you could build an active event list in Splunk from this index and report on "live" ObjectServer events.
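A sketch of the SQL that 5-minute DB Connect input might run, assuming the reporting gateway writes to the usual REPORTER_STATUS table with a DeletedAt column (adjust table, column, and field names to match your gateway configuration):

```
SELECT Serial, Node, Summary, Severity, Tally,
       FirstOccurrence, LastOccurrence
FROM REPORTER_STATUS
WHERE DeletedAt IS NULL;
```

Each cycle would clean the index and then run this query, so netcoolnearlive_index always holds a fresh snapshot of open events rather than an ever-growing history of duplicates.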