Topics I've Started
09-12-2016
12:34 PM
Yes, when I run lea_loggrabber at debug_level 3, I see a DN in my CRL distribution point as well as the URI:
[ 20476 4151933760]@my.internal.server[12 Sep 12:24:05] with CRL:
[ 20476 4151933760]@my.internal.server[12 Sep 12:24:05] Issuer: O=checkpoint.server.6bh0i6
This update: Mon Sep 12 01:48:34 2016 Local Time
Next update: Tue Sep 13 01:48:34 2016 Local Time
Extensions:
Issuing distribution points (Critical):
URI: http://checkpoint.server:18264/ICA_CRL0.crl
DN: CN=ICA_CRL0,O=checkpoint.server.6bh0i6
[ 20476 4151933760]@my.internal.server[12 Sep 12:24:05] fwCert_OurValCerts: validation OK
At the same time, your output pasted below also has similar lines, but then you get "client_send_crlreq: fetching crl failed" at line 415... My output is:
[ 26077 4151622464]@my.internal.server[12 Sep 12:41:10] fwCRL_CRLisValid:
[ 26077 4151622464]@my.internal.server[12 Sep 12:41:10] thisUpdate: Mon Sep 12 01:48:34 2016 Local Time
[ 26077 4151622464]@my.internal.server[12 Sep 12:41:10] nextUpdate: Tue Sep 13 01:48:34 2016 Local Time
[ 26077 4151622464]@my.internal.server[12 Sep 12:41:10] now: Mon Sep 12 12:41:10 2016 Local Time
or:
[ 26077 4151622464]@my.internal.server[12 Sep 12:41:10] fwCert_OurValCerts: validation OK
09-11-2016
07:18 AM
1 Karma
The regex in the auto_kv_for_opsec stanza in transforms.conf breaks when parsing the escaped pipe (\|) in Google Fonts URLs.
Create a local folder in the TA and make copies of the default props.conf and transforms.conf.
I added the following to my local transforms.conf, just beneath the [auto_kv_for_opsec] stanza:
#fixes resource field, specifically for some google fonts URLs which contain \|
[application_control_resource]
REGEX = resource=(.*[^\\|])\|proxy_src_ip
FORMAT = resource::$1
and in the local props.conf, just below the REPORT-auto_kv_for_opsec = auto_kv_for_opsec line I added:
REPORT-application_control_resource = application_control_resource
This will give you a clean resource field at search time.
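If you want to sanity-check the regex outside Splunk first, here's a minimal Python sketch; the sample event fragment below is invented for illustration, only the field layout matters:
import re

# Hypothetical raw event fragment with an escaped pipe (\|) inside a
# Google Fonts URL, followed by the next field in the event.
sample = r"resource=http://fonts.googleapis.com/css?family=Roboto\|Open+Sans|proxy_src_ip=10.1.2.3"

# Same pattern as the [application_control_resource] stanza above.
match = re.search(r"resource=(.*[^\\|])\|proxy_src_ip", sample)
if match:
    print(match.group(1))  # http://fonts.googleapis.com/css?family=Roboto\|Open+Sans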
09-07-2016
11:10 AM
From the output you pasted below, I'm seeing the same errors kmeullercm did above: the process is failing to read the CRL, which causes everything else to fail.
Have you tried running tcpdump and analyzing the traffic in wireshark? It would be interesting to see if the CRL retrieval via cURL is the same as via lea_loggrabber.
08-30-2016
10:15 AM
3 Karma
Take a look at this presentation from .conf 2015:
https://conf.splunk.com/session/2015/conf2015_DWaddle_DefensePointSecurity_deploying_SplunkSSLBestPractices.pdf
Page 6 has a nice exchange/function matrix; if you scan the "Forwarding" line, you'll see that adding an SSL configuration stanza to a forwarder adds encryption to the data being sent to the indexers, as well as certificate authentication and CN checking.
See pages 7 and 8 for an attack scenario on an unsecured forwarder (provided the REST API is enabled on the forwarder). Pages 18-20 go over the forwarder setup in depth. I believe that indexer acknowledgements are independent of SSL configuration and not related in this case.
The whole presentation is definitely worth reading, and the recording is worth watching too: https://conf.splunk.com/session/2015/recordings/2015-splunk-115.mp4
08-29-2016
04:04 PM
Have you tried running lea_loggrabber manually? There are different debug levels available that might give you additional information:
cd to /opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/bin/
You'll have to use your own values for appname, lea_server_ip, etc:
./lea_loggrabber --data non_audit --debug_level 2 --appname Splunk_TA_checkpoint-opseclealea_loggrabber --lea_server_ip 10.1.2.3 --lea_server_auth_port 18184 --lea_server_auth_type sslca --opsec_sslca_file /opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/certs/checkpoint.p12 --opsec_sic_name CN=opsec_splunk_hf,O=your.institution.name.7ag9h5 --opsec_entity_sic_name CN=cp_mgmt_yourmanagementserver,O=your.institution.name.7ag9h5 --no_online --no_resolve 2>&1 | less
Change the debug_level flag to modify the verbosity of the output. You might also want to run tcpdump at the same time so you can review the connection attempts in Wireshark.
08-29-2016
03:58 PM
2 Karma
It sounds like this is a one-time import of historical data, since once you're up and running you'll always be monitoring the most current log file.
I'm not sure the app supports what you're trying to do, but as a possible workaround:
cd to /opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/bin/
Manually run lea_loggrabber to determine the logfiles you need
Retrieve the data you need to temporary text files
Index those files with the proper sourcetype
You'll probably want to test this before applying to a production environment.
To establish the names of the logfiles you want, I'd suggest sending the output to a pager (I use less) so you can scroll through it.
For example, if you want to get your non-audit data, try this. You'll have to use your own values for appname, lea_server_ip, etc.:
./lea_loggrabber --data non_audit --debug_level 2 --appname Splunk_TA_checkpoint-opseclealea_loggrabber --lea_server_ip 10.1.2.3 --lea_server_auth_port 18184 --lea_server_auth_type sslca --opsec_sslca_file /opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/certs/checkpoint.p12 --opsec_sic_name CN=opsec_splunk_hf,O=your.institution.name.7ag9h5 --opsec_entity_sic_name CN=cp_mgmt_yourmanagementserver,O=your.institution.name.7ag9h5 --no_online --no_resolve 2>&1 | less
You'll find a line that reads log_level=2 file:lea_loggrabber.cpp func_name:get_fw1_logfiles_dict code_line_no:2414 :Available FW-1 Logfiles, followed by a long list of the available file names.
Once you have a specific file in mind, you can retrieve the specific data you need by adding the --logfile flag to the command above. For example, my historical logs were daily, so to pull the July 17, 2015 data I would use something like:
./lea_loggrabber --data non_audit --debug_level 2 --appname Splunk_TA_checkpoint-opseclealea_loggrabber --lea_server_ip 10.1.2.3 --lea_server_auth_port 18184 --lea_server_auth_type sslca --opsec_sslca_file /opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/certs/checkpoint.p12 --opsec_sic_name CN=opsec_splunk_hf,O=your.institution.name.7ag9h5 --opsec_entity_sic_name CN=cp_mgmt_yourmanagementserver,O=your.institution.name.7ag9h5 --no_online --no_resolve --logfile 2015-07-17_235900.log > /var/log/checkpoint-2015-07-17.log
At this point you should be able to index the logfile with the proper sourcetype and have the props and transforms tag up your data as intended.
If that works for you, it's not a big step to write a bash loop to pull all the remaining files into a temporary location prior to indexing. If you can get all the desired logfiles into a text file, then something like this should pull them in sequence for you:
while IFS= read -r line; do /opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/bin/lea_loggrabber --data non_audit --debug_level 2 --appname Splunk_TA_checkpoint-opseclealea_loggrabber --lea_server_ip 10.1.2.3 --lea_server_auth_port 18184 --lea_server_auth_type sslca --opsec_sslca_file /opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/certs/checkpoint.p12 --opsec_sic_name CN=opsec_splunk_hf,O=your.institution.name.7ag9h5 --opsec_entity_sic_name CN=cp_mgmt_yourmanagementserver,O=your.institution.name.7ag9h5 --no_online --no_resolve --logfile "$line" > "/var/log/checkpoint-$line"; done < your-text-file.txt
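If it helps to script the list-building step too, here's a rough Python sketch for scraping the logfile names out of a saved copy of the debug output. The dump filename and the daily YYYY-MM-DD_HHMMSS.log naming pattern are assumptions based on my environment; adjust them to match your logs:
import re

# Assumes the lea_loggrabber debug output was saved to loggrabber-debug.txt
with open("loggrabber-debug.txt") as f:
    names = re.findall(r"\d{4}-\d{2}-\d{2}_\d{6}\.log", f.read())

# One name per line; this is the your-text-file.txt the loop above reads
with open("your-text-file.txt", "w") as out:
    out.write("\n".join(sorted(set(names))) + "\n")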
08-23-2016
09:37 AM
It looks like you're encountering the same issue I did. I made a workaround that requires modifying a single line in the TA:
https://answers.splunk.com/answers/421857/splunk-add-on-for-check-point-opsec-lea-non-audit.html
07-27-2016
01:59 PM
We found that a restart of the CP log server was required in this case. You may be stuck until you can restart.
07-27-2016
01:48 PM
You still may have to bounce your reporting server and restart Splunk on the heavy forwarder. This TA is a bit touchy about the setup, but once working seems to be stable.
07-27-2016
11:16 AM
If you're trying to pull all the relevant Check Point data (e.g., Audit, Firewall, SmartDefense, VPN, Anti-Bot, etc.), then I would recommend you only enable two inputs in the Splunk Add-on:
Audit
Non-Audit
The reason is that Non-Audit will gather your SmartDefense, Firewall and VPN data (as well as Anti-Bot, Anti-Malware, etc). It's not really intuitive, but it is in the docs under item 6: http://docs.splunk.com/Documentation/AddOns/released/OPSEC-LEA/Setup2#Create_a_new_input
Non-Audit: Collects all event types except audit events
When I was testing the TA I did the same thing, and found that the collection process seemed to hang if I enabled too many inputs at the same time. It seemed like multiple fw-loggrabber processes (>2) overwhelm the log server and the connections stop responding. I had to run tcpdump to notice that the connection was open but no data was flowing.
Your alternative is to enable the individual inputs and disable only the Non-Audit setting, but then you won't get Anti-Malware, Anti-Bot, etc.
07-27-2016
09:48 AM
I can think of 3 options for you, although only one will be stable for the long term:
Build a new linux heavy forwarder that meets the requirements. I know that might not be the preferred option, but this is the only one that will keep you on mainline support.
Clone the Splunk version of lea_loggrabber (https://github.com/splunk/opsec_lea) and try to compile it on your existing platform. Once complete, move the newly compiled version into the TA's folder and restart the Splunk process. You'll need at least gcc and other developer tools installed to make this approach work.
Pull the fw1-loggrabber .tar.gz from SourceForge (https://sourceforge.net/projects/fw1-loggrabber/files/fw1-loggrabber/1.11.1/) and compile it yourself. Again, move the newly compiled binary into the TA folder.
07-27-2016
09:39 AM
I think you're running into the same problem I did before. See this answer:
https://answers.splunk.com/answers/421857/splunk-add-on-for-check-point-opsec-lea-non-audit.html
Although I haven't had any issues with treating the input as latin-1, an alternative edit to line 71 of /opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/bin/splunk_ta_checkpoint_opseclea/splunktalib/common/util.py is changing:
data = data.encode("utf-8", errors="xmlcharrefreplace")
to:
data = data.decode("utf-8").encode("utf-8", errors="xmlcharrefreplace")
Credit for the alternative fix goes to @xchen_splunk.
You might also want to check this answer:
https://answers.splunk.com/answers/417709/opsec-lea-app-4-state-of-connection.html
When I ran into the same problem those were the steps I needed to take to get the data collection working again.
07-08-2016
10:06 AM
1 Karma
You need to have an ssl_framework index, or override the destination of the logged events by creating your own indexes.conf and savedsearches.conf in the /local directory of the app.
You can manually trigger a report with the command:
/opt/splunk/bin/splunk cmd python /opt/splunk/etc/apps/SOCPrimeSSLFramework/bin/ssl-framework-report.py -c /opt/splunk/etc/apps/SOCPrimeSSLFramework/default/sslframework.conf
You can monitor the reporting progress by following the app's own logfile:
tail -f /opt/splunk/etc/apps/SOCPrimeSSLFramework/bin/ssl-framework-report.log
It seems to take 2-3 minutes per server for the report to be generated. After the scan is finished, an output file is written to /opt/splunk/etc/apps/SOCPrimeSSLFramework/reports. I found that after the initial install, the app didn't schedule a scan until around 12 hours later, so the dashboard was blank unless I manually ran a report using the command above.
Application error events are in splunkd.log; you can monitor for them by watching:
tail -f /opt/splunk/var/log/splunk/splunkd.log | grep SOCPrimeSSLFramework
You can also do this through the Splunk search interface, by using the search index=_internal SOCPrimeSSLFramework and then filtering your results as needed.
06-29-2016
12:26 PM
Sure thing. I'm using my config file from a fresh setup of the 4.0 TA as a reference, but that's why I asked edwardrose to validate it via GuiDBEdit.
I believe your comment below about the lea_server_type is probably the issue here, I had missed that previously.
06-29-2016
12:13 PM
Did you validate that the opsec_entity_sic_name is correct via GuiDBEdit? From what you've posted, I would expect it to look more like cn=cp_mgmt_wvdpclogsvr,O=wvdpcscmgr.wv.mentorg.com.r65zch
06-29-2016
11:47 AM
3 Karma
Is the DN for your opsec_entity_sic_name actually cn=cp_mgmt,o=wvdpcscmgr.wv.mentorg.com.r65zch ? It may in fact be something like cn=cp_mgmt_YOURSERVERHOSTNAME,o=wvdpcscmgr.wv.mentorg.com.r65zch
Take a look at chubbybunny's response in this thread:
https://answers.splunk.com/answers/153982/why-am-i-getting-connection-errors-after-configuring-add-on-for-check-point-opsec-lea-linux.html
It explains how to use GuiDBEdit to get the exact sic name you need.
In terms of where the script is finding the sic name: once a connection is set up, you should find the configuration file in /opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/local/opseclea_connection.conf. Before a connection has been created, the value is only in the input setup modal dialog inside the TA's web interface and not written to the TA's /local directory.
06-29-2016
09:59 AM
6 Karma
I found a temporary fix, hopefully this will be rolled into the next version of the TA.
Edit line 71 of /opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/bin/splunk_ta_checkpoint_opseclea/splunktalib/common/util.py
from:
data = data.encode("utf-8", errors="xmlcharrefreplace")
to:
data = data.decode("latin-1").encode("utf-8", errors="xmlcharrefreplace")
You'll have to restart the heavy forwarder or indexer to make the changes take effect.
I believe util.py is expecting valid utf-8 input; however, the output of lea_loggrabber can include non-UTF-8 encoded data, especially when retrieving SmartDefense, Anti-Malware or Anti-Virus log entries.
This fix lets me ignore the encoding of the lea_loggrabber output, treat it as binary data, map the incoming bytes into a valid Unicode range, and then encode as utf-8. That way util.py can return valid output back up to ta_data_collector.py and then event_writer.py, both of which also expect utf-8.
If at all possible, future versions of this TA should include try/except handlers so that it fails gracefully.
Credit where credit is due, I found a similar issue and fix here:
http://www.gossamer-threads.com/lists/python/python/623758#623758
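If you want to see why the one-line change works, here's a minimal Python 2 reproduction (Python 2 since that's what the TA's scripts run under; the byte string is made up):
data = "caf\xa0"  # 0xa0 is valid latin-1 but not a valid UTF-8 byte

# The original line: calling encode() on a byte string makes Python 2 first
# implicitly decode it, and that implicit decode is what raises the
# UnicodeDecodeError, not the encode itself.
try:
    data.encode("utf-8", errors="xmlcharrefreplace")
except UnicodeDecodeError as e:
    print "original line fails:", e

# The fix: decode the raw bytes as latin-1 first (every byte value 0x00-0xff
# is valid latin-1), then re-encode the resulting unicode object as UTF-8.
print "fixed line returns:", repr(data.decode("latin-1").encode("utf-8", errors="xmlcharrefreplace"))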
06-27-2016
10:37 AM
3 Karma
When using "Non-Audit" as the data input, I am able to retrieve a few lines, and then the input fails. Other input settings work as intended.
While I was monitoring the output of /opt/splunk/var/log/splunk/splunk_ta_checkpoint-opseclea_modinput.log I noticed the following:
2016-06-27 16:46:37,129 +0000 log_level=INFO, pid=28652, tid=Thread-9, file=ta_opseclea_data_collector.py, func_name=get_logs, code_line_no=62 | [input_name="fwmgmtp02-nonAudit" connection="fwmgmtp02" data="non_audit"]log_level=2 file:lea_loggrabber.cpp func_name:read_fw1_logfile_collogs code_line_no:2052 :LEA collected logfile handler was invoked
2016-06-27 16:47:02,901 +0000 log_level=ERROR, pid=28652, tid=Thread-1, file=event_writer.py, func_name=_do_write_events, code_line_no=79 | EventWriter encounter exception which maycause data loss, queue leftsize=838
Traceback (most recent call last):
File "/opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/bin/splunk_ta_checkpoint_opseclea/splunktalib/event_writer.py", line 62, in _do_write_events
for evt in event:
File "/opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/bin/splunk_ta_checkpoint_opseclea/splunktaucclib/data_collection/ta_data_collector.py", line 59, in <genexpr>
index, scu.escape_cdata(event.event)) for event
File "/opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/bin/splunk_ta_checkpoint_opseclea/splunktalib/common/util.py", line 71, in escape_cdata
data = data.encode("utf-8", errors="xmlcharrefreplace")
UnicodeDecodeError: 'utf8' codec can't decode byte 0xa0 in position 572: invalid start byte
2016-06-27 16:47:02,914 +0000 log_level=INFO, pid=28652, tid=Thread-1, file=event_writer.py, func_name=_do_write_events, code_line_no=84 | Event writer stopped, queue leftsize=849
2016-06-27 16:47:02,915 +0000 log_level=INFO, pid=28652, tid=Thread-4, file=ta_data_collector.py, func_name=_write_events, code_line_no=122 | [input_name="fwmgmtp02-nonAudit" data="non_audit"] the event queue is closed and the received data will be discarded
2016-06-27 16:47:02,915 +0000 log_level=INFO, pid=28652, tid=Thread-4, file=ta_data_collector.py, func_name=index_data, code_line_no=114 | [input_name="fwmgmtp02-nonAudit" data="non_audit"] End of indexing data for fwmgmtp02-nonAudit_non_audit
It seems that only the Non-Audit setting is retrieving unexpected byte values
UnicodeDecodeError: 'utf8' codec can't decode byte 0xa0 in position 572: invalid start byte
at which point the process hangs and indexing stops. The opsec_lea connection stays up; however, event processing is halted.
The byte value (0xa0) in this case isn't a specific indicator; I've had the same result for 0xf2 and 0xfc. All of these are valid latin-1 (extended ASCII) characters, but invalid as UTF-8 start bytes (for example, 0xfc is ü, a u with an umlaut, in latin-1).
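A quick interpreter check makes that distinction concrete (Python 2 syntax, matching the TA's runtime):
for b in ("\xa0", "\xf2", "\xfc"):
    try:
        b.decode("utf-8")
    except UnicodeDecodeError:
        # each byte is rejected by UTF-8 but maps cleanly under latin-1
        print repr(b), "-> not UTF-8; latin-1 gives", repr(b.decode("latin-1"))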
I believe this only occurs with Non-Audit as this is the only input setting that also retrieves Anti-Malware and Anti-Virus events (which I need to index). Although it is most likely a Unicode-handling error, I also found a related issue with fw-loggrabber here:
http://manpages.ubuntu.com/manpages/trusty/man1/fw1_lea2dlf.1.html
In the "Other notes" it points out an "unexpected non-continuation byte" -- the Splunk output is almost exactly the same: "invalid continuation byte".
I tested setting the environment variables in /opt/splunk/etc/splunk-launch.conf, but had the same issue.
Does anybody have an idea for how to enable Non-Audit and keep it stable?
06-27-2016
09:59 AM
I've noticed the exact same issue as you. I'm just about to open another question thread, with some additional background info that I've found. I believe it's a bug in the data-handling that only comes up with the Non-Audit setting.
06-23-2016
11:09 AM
Are you using Splunk_TA_opseclea_linux22 (aka version 3.1) or Splunk_TA_checkpoint-opseclea (aka version 4.0)?
If you're using 3.1, are you passing unique values for configentity? e.g.: audit, vpn, ips, etc?
You might also be pulling historical data, as well as current. If that's the case then the data volume should settle down after your initial import.
06-23-2016
10:45 AM
1 Karma
The opsec:antimalware and opsec:antivirus events should be pulled if you use the Non-Audit input.
I'm having trouble with that setting, but have managed to retrieve some of those events that way. I find that my data collection is hanging after an initial connection.
06-23-2016
10:42 AM
You can monitor from the heavy forwarder side, as well as from the management server. In my case, I have a heavy forwarder on Red Hat, and a secondary management server that I'm connecting to for log retrieval.
I open a screen session, and split the view into 2 panes.
On the HF:
watch -n 1 "ps aux | grep -i opsec"
On the management server:
watch -n 1 "ps aux | grep -i lea"
From there I can see the number of lea_loggrabber sessions running from the HF, and the number of lea_session instances on the Check Point box.
On a related note, I'm also having trouble retrieving data. The problem seems to center on pulling SmartDefense data, either directly or via the Non-Audit setting (which also includes SmartDefense).
I'm still testing, but have found that I need to disable all inputs on the HF, restart the splunk process and reboot the management server to get to a clean state to work from.
Hope that helps.
12-14-2015
09:19 AM
Thanks. I thought forking the code would probably be the easiest thing to do, but I'd like to add improvements back into this TA so other people can benefit.
12-13-2015
09:05 AM
Is there a github repo for this project?
Specifically, I'd like to submit a pull request for infobloxws.py:
Change line 6: remove the json library from import, replace with ast
Change line 154: results=ast.literal_eval(pagehandle.read())
These changes return properly-encoded JSON to the search results for nested extensible attributes, and let me use spath commands to further process the extattrs into extracted fields for some of my queries.
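To illustrate the idea with a made-up payload (the real Infoblox response will differ; this just shows why ast.literal_eval helps):
import ast, json

# Hypothetical web-service response: a Python-literal string with single
# quotes, which spath can't parse as-is.
raw = "{'extattrs': {'Site': {'value': 'HQ'}, 'Owner': {'value': 'netops'}}}"

results = ast.literal_eval(raw)  # safely parse the Python literal into a dict
print json.dumps(results)       # proper double-quoted JSON, ready for spath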
Thanks!
12-08-2015
08:49 PM
Have you tried to create a search query that gives you what you need? Your best tools are probably spath or xpath in this case. Something like:
sourcetype=mssql_audit | spath output=action_info_address path=action_info.address | table action_info_address
Xpath syntax is similar, but not exactly the same.
Once you have a working extraction at search time, you should be able to create a calculated field in your props.conf so the extraction happens automatically.
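As a sketch of that last step, a calculated field in your local props.conf could look like this. The stanza name assumes your sourcetype really is mssql_audit, and spath() here is the eval function, not the search command:
[mssql_audit]
EVAL-action_info_address = spath(_raw, "action_info.address")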