Thanks in advance.
We just installed Cisco Prime (well, Cisco installed it), and it is feeding a syslog-ng server. I have tried both sourcetype=syslog and sourcetype=cisco:ios; while cisco:ios extracts more fields, we are still not getting good field names and/or data extractions.
Has anybody successfully extracted Prime logs with matching field names? I have scoured the forums, and while Mikael Bjerkeland has done quite a bit of work on the Cisco Networks Add-on for Splunk Enterprise (https://splunkbase.splunk.com/app/1467), I am still not seeing anything further that will help pull in the data. As a side note, SNMP is not an option, so it has to be the syslog feed.
If anyone has common field extractions and/or props/transforms they would like to share, that would be great.
Again, thanks for any assistance provided.
If I can contact you out of band, I can send you a large file, as it is all over the place: some of it WLC, some of it network. I think a larger sample would be best for you.
I only need to see about 15 log lines, not the whole file. Please post them inline using the code formatter to keep style.
I've included a sample of the various entries. One thing to note: Cisco truncates at 1024 characters, so those entries end with ... and the continuation entry begins with ... on the following line. Hope that helps. I would be willing to get you a larger sample if it will make more sense.
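Since the 1024-character truncation splits single logical entries across two syslog lines, one option is to stitch them back together before (or instead of) fighting it in props/transforms. Here is a minimal sketch; the function name is illustrative and not part of any Splunk or Cisco tooling, and it assumes the truncation marker is exactly the literal `...` described above.

```python
def stitch_truncated(lines):
    """Merge log entries split by Cisco's 1024-character truncation.

    A truncated entry ends with "..." and its continuation entry
    begins with "..."; join the two halves and drop the markers.
    """
    merged = []
    for line in lines:
        if merged and merged[-1].endswith("...") and line.startswith("..."):
            # Strip the trailing/leading ellipses and splice the halves.
            merged[-1] = merged[-1][:-3] + line[3:]
        else:
            merged.append(line)
    return merged
```

A pre-processing script like this could sit between syslog-ng and the Splunk monitored directory, so Splunk only ever sees whole events.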
For some reason it won't let me submit when I use the "Code Sample" button. I also tried pasting directly into the window, and it won't let me submit that way either; both say I have characters left, but the submit button just goes grey. Other suggestions?
I completed a fairly large project aimed at onboarding Cisco Prime data into Splunk. We discovered a few options (some a lot better than others) and learned a few lessons along the way, mostly related to the nature of the data Cisco Prime sends out of the system. For example, one of the syslog-style feeds (maybe the only one) carries a normalized data type that Prime maintains called "Events," which, as near as I could tell, was a combination of Prime alarms/alerts, regular syslog messages, and certain SNMP traps.
The focus of our project was mostly Cisco Wireless telemetry, to instrument performance, availability, fault tolerance, and end-user experience/activity. In the end, we leveraged the Cisco Prime API, which I strongly encourage you to do as well (for at least part of your solution). The API allows quite a bit more flexibility and control over what you retrieve, in addition to the opportunity to transform the output with a Splunk scripted input prior to indexing. This can be good for search optimization and/or controlling license consumption. The data comes back as XML by default, but you can optionally request results in JSON.
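To make the scripted-input idea concrete, here is a rough sketch of building an authenticated request against a Prime data resource (the `AccessPointDetails` resource is mentioned later in this thread). The `.json` suffix and the `.full`/`.firstResult`/`.maxResults` paging parameters are assumptions to verify against the API reference for your Prime version; host and credentials are placeholders.

```python
import base64
import json
import urllib.request


def build_request(host, user, password, first=0, max_results=100):
    """Build an authenticated request for one page of AP details as JSON.

    Assumes Prime's ".json" response suffix and paging query parameters;
    check these against your version's API reference doc.
    """
    url = (f"https://{host}/webacs/api/v1/data/AccessPointDetails.json"
           f"?.full=true&.firstResult={first}&.maxResults={max_results}")
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Basic " + creds)
    return req


def fetch_page(req):
    """Execute the request and parse the JSON body."""
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A scripted input would loop `first` in steps of `max_results` until the API stops returning results, emitting one event per entity to stdout for Splunk to index.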
This link is for the Cisco Prime API reference doc - just use the version that matches your Prime installation.
Thanks! The API is not my call, though; I don't have access to the device and only limited access to the folks who support it. I'll keep it in mind.
@sdemoss - With the approach you took for Cisco Wireless telemetry, was it necessary to make one call to get a list of devices and then N calls per device returned, where N is the number of metrics?
We're in the process of doing something that sounds similar and I'm sizing up the effort, but reading through the API documentation makes me think that I'm going to need thousands of calls to gather the metrics.
The answer is, it kind of depends. In our experience, it was mostly a single API call to the reporting endpoint of interest, as most of these have an 'entity detail' view that gives you one result per entity with all of its respective details. For example, "AP Details" (/webacs/api/v1/data/AccessPointDetails) returns one result per endpoint with a TON of detail about the specific Access Point.
I would encourage you to simply log in to Prime and enter some of the REST URLs in a browser. You can request results in either XML or JSON format, and it lets you quickly and easily check what data gets returned. We ended up doing some "pre-processing" of the results before indexing them into Splunk to optimize license consumption.
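The pre-processing step mentioned above can be as simple as keeping only the attributes you actually search on and emitting compact key=value events, so you don't index (and pay license for) every field the API returns. This is an illustrative sketch; the field names are hypothetical, so substitute the attributes your Prime version actually returns.

```python
# Hypothetical attribute names; replace with the fields your Prime
# version returns for the resource you are polling.
KEEP_FIELDS = ("name", "ipAddress", "status", "clientCount")


def to_event(entity, keep=KEEP_FIELDS):
    """Reduce one API entity dict to a single key=value event line,
    dropping every attribute not in the keep list."""
    return " ".join(f"{k}={entity[k]}" for k in keep if k in entity)
```

Emitting the result of `to_event` per entity from a scripted input keeps events small and search-time field extraction trivial (Splunk auto-extracts key=value pairs).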