We're running pfSense (a FreeBSD-based firewall) on our network and forwarding its logs to a dedicated syslog-ng server. When Splunk reads the dumped syslog files, it doesn't break them apart into fields, which is what I expected it to do. pfSense uses pf (packet filter), a tool originally from OpenBSD, to manage the firewall rules.
Here's a sample line from the log:
Jan 11 07:28:30 141.102.4.254 pf: 000145 rule 141/0(match): block in on bge0: (tos 0x0, ttl 128, id 58078, offset 0, flags [none], proto UDP (17), length 1052) 141.102.12.99.1137 > 188.40.123.111.24460: UDP, length 1024
I just created a blog entry on how I was able to parse the pfSense files. It works for me and hopefully will work for you too. http://blog.basementpctech.com/2012/02/splunk-and-pfsense-what-pair.html
Great addition to the community! Thanks!
Try looking here for more info: Parsing pfSense Logs Part 2
Short answer: when setting up the input file, assign a manual sourcetype of pfSense.
Then include the following in props.conf:
[pfSense]
SHOULD_LINEMERGE=true
BREAK_ONLY_BEFORE=match
You will probably have to define the fields yourself. There are a couple of ways to do that:
Overview of Search-Time Field Extractions has an overview
Create search-time field extractions by editing configuration files - this is my preferred way to do this
Note: you do not want to use index-time extractions.
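For example, a search-time extraction for the sample line in the question could be built from a regex like the one below. This is just a sketch based on that one sample line, and the field names (rule_number, src_ip, etc.) are my own choices, not anything Splunk or pfSense defines. It's handy to test the regex in Python before dropping it into props.conf as an EXTRACT- setting:

```python
import re

# Hypothetical regex for the pf log format shown in the question.
# In props.conf it could go under the [pfSense] stanza as e.g.
#   EXTRACT-pf = rule (?<rule_number>\d+)/\d+\(match\): (?<action>\w+) ...
PF_RE = re.compile(
    r"rule (?P<rule_number>\d+)/\d+\(match\): "
    r"(?P<action>\w+) (?P<direction>\w+) on (?P<interface>\w+): "
    r".*proto (?P<proto>\w+) \(\d+\), length (?P<length>\d+)\) "
    r"(?P<src_ip>\d+\.\d+\.\d+\.\d+)\.(?P<src_port>\d+) > "
    r"(?P<dst_ip>\d+\.\d+\.\d+\.\d+)\.(?P<dst_port>\d+):"
)

sample = (
    "Jan 11 07:28:30 141.102.4.254 pf: 000145 rule 141/0(match): "
    "block in on bge0: (tos 0x0, ttl 128, id 58078, offset 0, "
    "flags [none], proto UDP (17), length 1052) "
    "141.102.12.99.1137 > 188.40.123.111.24460: UDP, length 1024"
)

m = PF_RE.search(sample)
if m:
    # Prints the extracted fields as a dict, e.g. action=block, src_ip=...
    print(m.groupdict())
```

Note that Splunk's regex engine accepts the `(?<name>...)` named-group syntax, while Python's `re` wants `(?P<name>...)`, so adjust the syntax when you copy the pattern over. Your pf log lines may also vary (TCP lines carry flags, for instance), so treat this as a starting point, not a complete extraction.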
What is the easiest way to package up a chunk of logs for you to look at?
And where should I send it?
Thanks,
-d
It would be helpful to see a sample of what these log files or lines look like.