Our CrowdStrike add-on stopped pulling logs via the API, giving this error:
2021-05-01 19:03:31,879 ERROR pid=31672 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/bin/ta_crowdstrike_falcon_event_streams/aob_py2/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/bin/crowdstrike_event_streams.py", line 71, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/bin/input_module_crowdstrike_event_streams.py", line 358, in collect_events
    crowdstrike_client()
  File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/bin/input_module_crowdstrike_event_streams.py", line 234, in crowdstrike_client
    num_feeds = len(response['resources'])
TypeError: object of type 'NoneType' has no len()
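For what it's worth, the crash itself is just a missing None check: judging from the traceback, the API can return a body whose "resources" field is null (for example, when no stream is available to this client), and the add-on calls len() on it directly. A minimal defensive sketch, where the response shape is assumed from the traceback and the function name is mine, not the add-on's actual code:

```python
def count_feeds(response):
    """Return the number of discovered event-stream feeds.

    The discovery response can carry {"resources": null} when no feed
    is available to this client, so guard before calling len().
    """
    resources = (response or {}).get("resources") or []
    return len(resources)

# A null "resources" field no longer raises TypeError:
print(count_feeds({"resources": None}))  # 0
print(count_feeds({"resources": [{"datafeedURL": "example"}]}))  # 1
```

This would stop the traceback, but not the underlying data loss described below.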
I can't understand what happened or how to prevent it from happening again.
Anyone out there with the same issue?
Here's a possible explanation for the interruption some folks are seeing.
We observed the same behavior today: our on-prem Splunk heavy forwarder had not received events from the CrowdStrike Falcon Event Streams API for the past 7 days.
We eventually found that the past 7 days of "missing" events were being pulled into our Splunk Cloud stack, where we had also deployed the CrowdStrike Falcon Event Streams add-on for Splunk. In other words, we had two separate Splunk deployments requesting the same data. It seems that only one API "client" instance ever receives the stream, and the other is left out to dry.
When we disabled the input configured on Splunk Cloud, the Splunk on-prem HF started to get the event stream again, collecting all 7 days of "missing" events as well as new events.
To enable dual inputs, we plan to configure a separate CrowdStrike API key for the Splunk Cloud stack.
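As I understand it, the Event Streams discovery call also identifies each consumer by an appId query parameter, so two deployments polling with the same credentials and the same appId compete for the same stream. A rough sketch of how per-deployment discovery URLs could differ (the base URL, endpoint path, and parameter name reflect my reading of the public CrowdStrike API docs; verify them against your own cloud region and documentation):

```python
from urllib.parse import urlencode

API_BASE = "https://api.crowdstrike.com"  # US-1 cloud; other regions use different hosts

def discover_stream_url(app_id):
    """Build an Event Streams discovery URL for one consumer.

    Each deployment (on-prem HF, Splunk Cloud, ...) should use its own
    app_id, and ideally its own API client, so the streams don't collide.
    """
    return f"{API_BASE}/sensors/entities/datafeed/v2?{urlencode({'appId': app_id})}"

# Distinct appIds keep the two deployments from fighting over one stream:
print(discover_stream_url("splunk-onprem-hf"))
print(discover_stream_url("splunk-cloud"))
```

Giving the Splunk Cloud input its own API key and its own appId should let both inputs run side by side.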
I hope this helps others who've seen this issue.