I have the following setup:
The above seems to be working OK. I can see syslogs being received by the syslog server and written to the log file successfully. I can also log into my Splunk Search Head under the basic "Searching & Reporting" app, search the custom index I am sending these syslogs to, and see the syslogs appearing on the Indexer.
My issue however is three-fold:
Thanks in advance 🙂
Best option would be to see if you can let Kiwi write to separate folders or files per originating source host. That way you can set the host field using the host_segment or host_regex setting in inputs.conf (see the sketch after the alternatives below). Then you just need to make sure there are no props/transforms being applied that overwrite it again.
Alternatively:
- change the Kiwi logging format so that it matches what the respective TA expects, i.e. so it doesn't write the facility and severity in the place where the TA expects to find the hostname
- write your own props/transforms to extract the host name correctly. If you want help with that, it would be useful to see some sample data.
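To illustrate the first option: if Kiwi can be set up to write one folder per originating host (say c:\syslogs\switches\<hostname>\switch.log; that path is just a placeholder), the monitor stanza on the forwarder can derive the host field from the path. A minimal sketch, using either host_segment or host_regex (one or the other, not both):

[monitor://c:\syslogs\switches\...]
index = networkstuff
sourcetype = syslog
# take the host name from the 3rd segment of the file path
# (adjust the number to wherever the host name sits in your path)
host_segment = 3
# or, instead of host_segment, extract it from the path with a regex
# (the first capture group becomes the host):
# host_regex = switches\\([^\\]+)\\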
Also, version 2.5.4 of the Cisco Networks app has a bug that breaks the overview dashboard. Upgrade to the latest version of Splunk and to version 2.5.5 of the app.
Thanks I downloaded the updated app and that improved things heaps. 🙂
@splkmika1 Can you please share how you have configured the Kiwi server? For example: is the data encoding set to UTF-8? What is the log file format set to?
Also, in what app have you put the inputs.conf that has the monitor stanza reading your Cisco logs? Is it a customised app or the Cisco add-on?
Thanks!
Damode,
Page 117 of the Kiwi Syslog Server Admin Guide (v9.6) goes into configuring a custom log format for your syslog server to use. I simply configured the custom log format following those instructions and the following settings:
The big thing for me was adjusting the Kiwi Syslog configuration so that it didn't prepend its own facility and severity codes to each incoming syslog message. In my case, Splunk was inspecting the incoming syslogs and using the facility.severity field (e.g. "Local6.Notice") as the host name. Once Kiwi had been told NOT to add this information, Splunk was able to correctly pull the hostname out of each syslog message.
I have all of the syslogs from my switches going into the one syslog file, with Kiwi prepending the hostname of the originating device to the start of each syslog message as it writes it to disk. The Universal Forwarder then monitors this file and forwards it on to Splunk ... and the Indexers are now able to correctly pull the originating hostname out of the received data.
In terms of the inputs.conf .... I simply used the CLI to add the file monitor to the Universal Forwarder running on my syslog server.
splunk add monitor c:\syslogs\switches\switch_logs.txt -index networkstuff -sourcetype syslog
This appears to add it to the following location:
$SPLUNK_HOME/etc/apps/search/local/inputs.conf
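For reference, the stanza that command creates in that inputs.conf should look roughly like this:

[monitor://c:\syslogs\switches\switch_logs.txt]
index = networkstuff
sourcetype = syslog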
Thanks for that, will download the latest version and give it a go.
FrankVI, if I am attempting to put in place the third option that you listed above, would I be correct in assuming that the props/transform would need to be located on the indexer (rather than the Universal Forwarder)?
The syslogs are coming in in the ISO, tab-delimited format, as follows:
yyyy-mm-dd[tab]hh:mm:ss[tab]facility.severity[tab]hostname[tab]... rest of syslog message
e.g.
2018-07-19 08:00:00 Local6.Notice switch01
Is it possible to do something similar to the following?
props.conf
[source::mysource]
TRANSFORMS=hostoverwrite
transforms.conf
[hostoverwrite]
DEST_KEY = Metadata:Host
REGEX = ^\S+\s+\S+\t\w+.\w+\t(?P\w+)
FORMAT = $1
Should indeed go on the indexer. Your attempt is close, but the FORMAT setting should be FORMAT = host::$1 and the DEST_KEY should be MetaData:Host (with a capital D).
See also: http://docs.splunk.com/Documentation/Splunk/latest/Admin/Transformsconf#KEYS:
Thanks for that. I've had a go at that. I've modified those files as follows:
props.conf
[source::c:\data\syslogd\cisco_sw\syslog_sw.txt]
TRANSFORMS-syslogsw=hostoverwrite
transforms.conf
[hostoverwrite]
DEST_KEY = MetaData:Host
REGEX = ^S+\s+\S+\s+\w+.\w+\s+(?switch\d\d)\s
FORMAT = host::$1
When I push this out from my cluster master to my cluster peers, it appears to be deployed successfully; however, when I go back to my search head, I can still see events coming in with the wrong host name. It's still pulling out the facility.severity codes like "Local6.Notice" and using that as the hostname.
As mentioned before the syslog messages coming in have the format:
2018-07-24 08:04:57 Local6.Notice switch21 4221: JUL 24 08:04:58: syslog message body
Am I correct in assuming that the "[source::c:\data\syslogd\cisco_sw\syslog_sw.txt]" line that I have in my props.conf, will only apply the transform if the event is coming from the above file monitor source?
Also, I'm still a little uncertain about my REGEX, particularly the section in the capture brackets (). I don't know whether I need to include the "?" or whether my capture group should just be "(switch\d\d)"
I really appreciate the help that you've provided already on this.
Capture group should be without the ?. You're also missing a \ before the first S, and another \ before the .
Try: ^\S+\s+\S+\s+\w+\.\w+\s+(switch\d\d)\s
Make sure to test your regex using tools like regex101.com: https://regex101.com/r/j1DRbL/1
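Putting those corrections together, the pair of stanzas on the indexers would end up looking something like this (an untested sketch; the source path and regex are taken from the posts above, and as the next comment notes, the TA's own transforms for the rewritten sourcetype may still override the result):

props.conf
[source::c:\data\syslogd\cisco_sw\syslog_sw.txt]
TRANSFORMS-syslogsw = hostoverwrite

transforms.conf
[hostoverwrite]
REGEX = ^\S+\s+\S+\s+\w+\.\w+\s+(switch\d\d)\s
DEST_KEY = MetaData:Host
FORMAT = host::$1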
What sourcetype do you use for this? There will be some transforms in place for that sourcetype that pull out the hostname as Local6.Notice; you need to ensure you overwrite that transform. Sorry I didn't mention that before.
Thanks for spotting that. I could have sworn that I had typed those two missing "\"s in the above post. :-) I've just double-checked my transforms.conf on my indexer and those missing "\"s are present there... though it looks as though I've got the characters within my () incorrect.
I used the CLI on the Universal Forwarder to add the file monitor with sourcetype "syslog" (which then added the monitor stanza under $SPLUNK_HOME/etc/apps/search/local/inputs.conf).
This sourcetype is then being rewritten to cisco:ios by the Cisco Networks TA when it hits the Indexer.
I've gone back to your earlier suggestion of fixing this problem back at the syslog server and custom-tailored the syslog output format so that the syslog server doesn't prepend the facility.severity fields to incoming syslogs. With these fields not present in the syslog message, Splunk is now correctly picking up the host name of the device originating the syslogs.
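For comparison, assuming the same custom Kiwi format with only the facility/severity column removed, the earlier sample line would now be written roughly as:

2018-07-24 08:04:57 switch21 4221: JUL 24 08:04:58: syslog message body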
I still think I have some more learning to do when it comes to REGEX and transforms.conf... but for the time being I've got things working 🙂
Thanks for your help in getting this issue resolved.
Thanks for that response. I'll give it a go and let you know how I go.