If you want to bring Check Point logs into Splunk as close to real-time as possible, you'll want to run fw1-loggrabber outside of Splunk and have it write the Check Point log stream(s) to disk. I had great success using the method below to bring up to eight independent Check Point log streams into Splunk at one time.
My lag was < 1 sec.
I intend to write up a much more thorough document describing all of the challenges one faces when dealing with Check Point logs, but I'll have to save that for another time. As a result, this procedure only handles a single Check Point log stream. If you'd like to hear how I scaled out to eight streams, put your request in as a comment below this post.
Pre-requisites:
A functional Check Point LEA setup
Knowledge of Linux libraries, package management, and how they work together
Knowledge of Linux SysVinit and/or systemd
Strong knowledge of Splunk "sources", "sourcetypes", "hosts", config files, etc.
You're working with a Linux server, you have root access to it, and either a Splunk indexer or a Splunk Universal Forwarder is installed on it.
Here's a high-level view of the steps you'll need to take:
Acquire fw1-loggrabber software.
Verify that the fw1-loggrabber binaries work on your server.
Build a directory structure to support a locally-installed fw1-loggrabber.
Install fw1-loggrabber.
Migrate existing fw1-loggrabber configurations.
Build a fw1-loggrabber startup initscript.
Configure cron to restart fw1-loggrabber on a schedule.
Point Splunk at the fw1-loggrabber data.
Plan for maintenance of fw1-loggrabber data.
Low-level details:
Find the most recent version of fw1-loggrabber available. It's probably this one.
Extract the tarball, then manually run the fw1-loggrabber binary to verify that the appropriate 32-bit libraries are installed on your server; the binary will complain about any libraries it cannot find. I'll leave installing them to you. (If you cannot find these libraries for your OS, manually copy the two or three 32-bit library files that fw1-loggrabber needs - from an older 32-bit server you have access to - into /lib and create version symlinks if necessary.)
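One quick way to see exactly which libraries are missing is to filter `ldd` output. This helper is my own convenience function, not part of fw1-loggrabber:

```shell
# missing_libs: read `ldd` output on stdin and print only the shared
# libraries the loader could not resolve ("not found" entries).
# Usage (path is wherever you extracted the tarball):
#   ldd ./fw1-loggrabber/bin/fw1-loggrabber | missing_libs
missing_libs() {
    awk '/not found/ {print $1}'
}
```

Anything it prints is a 32-bit library you still need to install or copy over.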
Decide where to install fw1-loggrabber. Personally, I prefer /usr/local but you could also use /opt. (Please respect FHS and LSB standards in making your decision. You never know who will be administering that server five years from now.) By default, fw1-loggrabber will install into /usr/local/fw1-loggrabber if you run the INSTALL.sh script that is included in the tarball.
Install fw1-loggrabber using your preferred method.
Based on my pre-requisites, I assume that you have a correctly functioning lea.conf file. Copy that file to /usr/local/fw1-loggrabber/etc, and copy your OPSEC p12 certificate file there, too. You should also have a functioning fw1-loggrabber.conf file; that one needs to change slightly. Make these edits:
OUTPUT_FILE_PREFIX="/var/log/fw1-loggrabber"
OUTPUT_FILE_ROTATESIZE=536870912
ONLINE_MODE="yes"
RESOLVE_MODE="no"
Technically, you should be able to set the rotate size just a hair under 2GB, but that never worked out well for me: fw1-loggrabber would barf at about 750MB and fail to rotate the fw.log file. I used a directory called /var/log/fw1-loggrabber for my logs; put yours wherever you like. Finally, keep DNS resolution turned off. DNS can easily add 5-10 seconds of delay to log processing, and you'll lose real-time access if you leave it on. (If you really need DNS names, consider using a time-based field lookup.)
A side note: there is a limit to the number of fields that can be brought in with fw1-loggrabber using the FIELDS stanza. I don't remember what that limit is, but the consequence is that any field names that exceed the limit simply won't appear in your log output. (I think the problem lies in the length of the variable that holds the FIELDS data in the source code of fw1-loggrabber.)
You should now be able to run:
$ /usr/local/fw1-loggrabber/bin/fw1-loggrabber -c /usr/local/fw1-loggrabber/etc/fw1-loggrabber.conf -l /usr/local/fw1-loggrabber/etc/lea.conf
If everything goes well, a file full of Check Point data should appear in /var/log/fw1-loggrabber.
Build a startup script in /etc/init.d or drop the command above into /etc/rc.local. It's up to you, but keep in mind the admin who eventually takes over your work; you'll make them happy if you respect FHS and LSB. I'll provide the SysVinit and systemd scripts that I wrote when I have a chance.
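Until then, here is a rough sketch of what such an initscript might look like. The paths, pidfile location, and runlevels are assumptions; adjust them to your install. Generating the file through a quoted heredoc keeps the $ variables literal:

```shell
# Sketch only: write a minimal SysVinit-style script for fw1-loggrabber.
# BASE, PIDFILE, and the runlevels below are assumptions -- adjust to taste.
cat > fw1-loggrabber.init <<'EOF'
#!/bin/sh
### BEGIN INIT INFO
# Provides:          fw1-loggrabber
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Check Point LEA log grabber
### END INIT INFO

BASE=/usr/local/fw1-loggrabber
PIDFILE=/var/run/fw1-loggrabber.pid

start() {
    "$BASE/bin/fw1-loggrabber" \
        -c "$BASE/etc/fw1-loggrabber.conf" \
        -l "$BASE/etc/lea.conf" &
    echo $! > "$PIDFILE"
}

stop() {
    if [ -f "$PIDFILE" ]; then
        kill "$(cat "$PIDFILE")" 2>/dev/null
        rm -f "$PIDFILE"
    fi
}

case "$1" in
    start)   start ;;
    stop)    stop ;;
    restart) stop; sleep 1; start ;;
    *)       echo "Usage: $0 {start|stop|restart}" >&2 ;;
esac
EOF
chmod +x fw1-loggrabber.init
sh -n fw1-loggrabber.init   # syntax-check before installing
```

Copy the result to /etc/init.d/fw1-loggrabber and register it with chkconfig or update-rc.d, whichever your distribution uses.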
Configure cron to restart fw1-loggrabber once in a while. My experience was that the fw1-loggrabber binary will only run for a few days before it randomly quits. Assuming you created a functioning initscript in the previous step, all you need to do is have cron call that script once a day with $ service fw1-loggrabber restart or $ systemctl restart fw1-loggrabber.service. Otherwise, you could run "pkill fw1-loggrabber" in cron, then rerun the command from step 5.
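A fragment in /etc/cron.d keeps this out of anyone's personal crontab. The 04:00 time is arbitrary, and the service name assumes the initscript was installed as /etc/init.d/fw1-loggrabber:

```
# /etc/cron.d/fw1-loggrabber -- restart once a day; pick a quiet window.
0 4 * * *  root  /sbin/service fw1-loggrabber restart
```

Note that /etc/cron.d entries take a user field (here, root) between the schedule and the command.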
Configure Splunk to monitor the newly created file (or the new directory). You'll also need to ensure that the appropriate field extractions are in place and that you've configured the correct source and sourcetypes for the new log file. You can borrow field extractions from the OPSEC LEA for Check Point app, if you'd like.
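As a sketch, the monitor stanza might look like the following. The sourcetype and index names here are assumptions; use whatever matches the field extractions you set up:

```
# $SPLUNK_HOME/etc/system/local/inputs.conf
[monitor:///var/log/fw1-loggrabber]
sourcetype = opsec
index = main
disabled = false
```

Restart Splunk (or the Universal Forwarder) after adding the stanza so it picks up the new input.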
Create a procedure to manage/compress/delete all of the firewall data that is generated by the steps above. Since your Check Point Security Management server(s) have an official copy of this data (and your Splunk index has another copy of this data), you can safely delete the rotated fw1-loggrabber files on a schedule that is appropriate to you. Don't skip this step or you'll eventually have a full disk partition!
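A small cron-driven script is one way to do it. The path and the seven-day retention below are assumptions; match LOGDIR to your OUTPUT_FILE_PREFIX directory and tune the retention to your disk budget:

```shell
#!/bin/sh
# Prune rotated fw1-loggrabber output files older than RETENTION_DAYS.
# LOGDIR and RETENTION_DAYS are assumptions -- adjust to your setup.
LOGDIR=/var/log/fw1-loggrabber
RETENTION_DAYS=7

if [ -d "$LOGDIR" ]; then
    find "$LOGDIR" -type f -name 'fw1-loggrabber*' \
        -mtime "+$RETENTION_DAYS" -delete
fi
```

Since deletion is permanent, confirm the files are already indexed in Splunk before letting cron run this unattended.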