I am attempting to run the Splunk App for Cisco UCS, specifically using the syslog plug-in. I have a deployment server, a search head, and three indexers. I have installed the SplunkAppForCiscoUCS app via the deployment server to the search head, and I have enabled the Splunk_TA_CiscoUCS_Syslog add-on. My UCS is currently sending its syslog output to my search head.
I have a few questions on how this works from there.
Do I need to set up an index for "cisco_ucs" in my indexes.conf?
Do I need to also install the Splunk_TA_CiscoUCS_Syslog to each indexer?
Do I need to set up a UDP listener on my search head?
Answering #3 about syslog:
To be honest, you can ignore the *Syslog app. I will likely rip it out of a future version and replace it.
The app today does not do a whole lot with the syslog data. All of the dashboards, in fact, use data gathered by the Python scripts (collect.py). There are a few saved searches which do look at syslog data. Syslog carries audit-log data and state changes, such as whenever someone changes a setting or a fault occurs. However, the fault dashboards contain a richer set of data that could only come from the API.
Here are the relevant Splunk configurations that apply to Syslog:
props.conf:
[source::udp:514]
TRANSFORMS-ucs1 = cisco_ucs_rename_1,cisco_ucs_rename_2
transforms.conf:
[cisco_ucs_rename_1]
REGEX = %UCS
FORMAT = sourcetype::ciscoucs:syslog
DEST_KEY = MetaData:Sourcetype
[cisco_ucs_rename_2]
REGEX = %AUTHPRIV
FORMAT = sourcetype::ciscoucs:syslog
DEST_KEY = MetaData:Sourcetype
Here's what is going on with the above:
The syslog events which do not match the above come from NX-OS, not UCS Manager. They may or may not be of value, but Cisco advised me that they aren't useful. You could send them to the nullqueue if you wanted.
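For reference, dropping the non-matching events would follow Splunk's usual "null-route everything, then reroute the keepers" pattern. The stanza names below are my own invention, not part of the app; order matters, because the last matching transform wins for the queue key:

```ini
# transforms.conf -- sketch only; stanza names are hypothetical
# First, send every event on this source to the null queue...
[ucs_setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# ...then put the UCS Manager events back on the index queue.
[ucs_keep]
REGEX = %UCS|%AUTHPRIV
DEST_KEY = queue
FORMAT = indexQueue
```

In props.conf these would be appended after the rename transforms, e.g. `TRANSFORMS-ucs1 = cisco_ucs_rename_1,cisco_ucs_rename_2,ucs_setnull,ucs_keep`.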
At search time, there is some field extraction performed on both of the above sourcetypes:
[source::udp:514]
EXTRACT-syslog-fields = (?:[^ \n]* ){10}(?P<facility>[^\-]+)\-(?P<severity>[^\-]+)\-(?P<mnemonic>[^:]+)
[ciscoucs:syslog]
EXTRACT-syslog-user = user (?P<user>[^ ]+) (?:logged in from|from) (?P<client_ip>[^ ]+)
EXTRACT-syslog-bracketed = [^\[\n]*\[(?P<object>[^\]]+)\]\[(?P<scope>[^\]]+)\]\[(?P<action>[^\]]+)
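If you want to see what those extraction regexes pull out, here is a quick sanity check in Python (EXTRACT patterns are PCRE-style, so standard regex semantics apply). The sample events below are invented for illustration, not real UCS output:

```python
import re

# The three extraction patterns from the props.conf stanzas above.
FIELDS = re.compile(r"(?:[^ \n]* ){10}(?P<facility>[^\-]+)\-(?P<severity>[^\-]+)\-(?P<mnemonic>[^:]+)")
USER = re.compile(r"user (?P<user>[^ ]+) (?:logged in from|from) (?P<client_ip>[^ ]+)")
BRACKETED = re.compile(r"[^\[\n]*\[(?P<object>[^\]]+)\]\[(?P<scope>[^\]]+)\]\[(?P<action>[^\]]+)")

# Ten space-separated tokens, then FACILITY-SEVERITY-MNEMONIC:
m = FIELDS.search("a b c d e f g h i j %UCS-3-LINK_DOWN: port down")
print(m.group("facility"), m.group("severity"), m.group("mnemonic"))

# A login-style audit event:
m = USER.search("user admin logged in from 10.1.2.3")
print(m.group("user"), m.group("client_ip"))

# A bracketed audit event: [object][scope][action]
m = BRACKETED.search("audit [sys/chassis-1][org-root][create] ok")
print(m.group("object"), m.group("scope"), m.group("action"))
```

This is just a test harness; at search time Splunk applies the same patterns automatically via the EXTRACT settings.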
In Splunk terms, the above is really a minor amount of work, and in many environments, people are collecting syslog already. What I want to do is separate out the collection of syslog from the app, as there are many ways to do this and best practices that are discussed elsewhere that have nothing to do with the UCS app.
As long as you can get the data into Splunk with the udp:514 sourcetype, everything else will work. And if you don't like that sourcetype, you can change it in two files, and then the UCS app would comply with whatever you are already doing.
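On the listener side (question #3): if nothing is collecting syslog in your environment yet, a minimal UDP input on whichever instance receives the traffic (a forwarder or indexer is more typical than the search head) could look like this sketch:

```ini
# inputs.conf -- minimal listener sketch; with no explicit sourcetype,
# events arrive with source=udp:514, which is exactly what the
# props.conf stanza above keys on
[udp://514]
connection_host = ip
```

In production most people prefer a dedicated syslog server (syslog-ng/rsyslog) writing to files that a forwarder monitors, but the above is enough for a lab.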
HTH
And last but not least, #2 asks about where to install apps.
Please do go through the readme; this is addressed there. It has this to say:
Long story short: put it on an arbitrary job server, or what is now more commonly called, in Splunk terms, a "data collection node" (DCN). This is a low-resource task that can easily be placed on a VM. You could put it on your search head if you wanted; if you were doing a proof of concept or a lab setup, that's what I'd do. For production, I suggest a separate system, for basic reasons of role separation: CPU cycles spent collecting data for Cisco UCS are resources that your users cannot use to search data.
Answering #1 about indexes
The app named Splunk_KB_CiscoUCS contains an indexes.conf file. Placing this app in $SPLUNK_HOME/etc/apps on an index server and restarting Splunk will create the required indexes.
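If you'd rather see what such an index definition looks like, a minimal stanza would be something like the sketch below. The exact index names the app expects come from its bundled indexes.conf, so treat "cisco_ucs" here as illustrative:

```ini
# indexes.conf -- hypothetical minimal stanza; check the
# Splunk_KB_CiscoUCS app's own indexes.conf for the real names
[cisco_ucs]
homePath   = $SPLUNK_DB/cisco_ucs/db
coldPath   = $SPLUNK_DB/cisco_ucs/colddb
thawedPath = $SPLUNK_DB/cisco_ucs/thawedb
```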
There should be a readme within the app directory that explains how to use it.