All Apps and Add-ons

Intermittent CrowdStrike Falcon Event Streams

luispulido
Explorer

Hi everyone,

I'm currently experiencing an intermittent issue with the CrowdStrike Falcon Event Streams Technical Add-On in Splunk Enterprise, and I’d like to know if anyone else has faced something similar or has insights into a possible solution.

Environment:

  • Splunk Enterprise (on-prem)
  • CrowdStrike Falcon Event Streams Technical Add-On (latest version, issue also occurred in previous versions)
  • Indexers and Search Head in cluster

Issue description:
Approximately every 10–15 days, the CrowdStrike input stops ingesting events. The only workaround so far has been to restart the input, after which ingestion resumes normally.

Relevant logs (_internal):

File "/opt/splunk/lib/python3.9/site-packages/urllib3/connectionpool.py", line 715, in urlopen
    httplib_response = self._make_request(
File "/opt/splunk/lib/python3.9/site-packages/urllib3/connectionpool.py", line 407, in _make_request
    self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
File "/opt/splunk/lib/python3.9/site-packages/urllib3/connectionpool.py", line 358, in _raise_timeout
    raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='api.crowdstrike.com', port=443): Read timed out. (read timeout=10)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/bin/OAuth2.py", line 35, in get_token
    response = helper.send_http_request(url=tokenURL, method="POST", timeout=10, payload=payload, headers=headers, use_proxy=proxy)
File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/bin/../lib/splunktaucclib/modinput_wrapper/base_modinput.py", line 496, in send_http_request
    return self.rest_helper.send_http_request(
File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/bin/../lib/splunktaucclib/splunk_aoblib/rest_helper.py", line 68, in send_http_request
    return self.http_session.request(method, url, **requests_args)
File "/opt/splunk/lib/python3.9/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
File "/opt/splunk/lib/python3.9/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
File "/opt/splunk/lib/python3.9/site-packages/requests/adapters.py", line 713, in send
    raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='api.crowdstrike.com', port=443): Read timed out. (read timeout=10)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/bin/../lib/splunktaucclib/modinput_wrapper/base_modinput.py", line 141, in stream_events
    self.collect_events(ew)
File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/bin/crowdstrike_event_streams.py", line 485, in collect_events
    crowdstrike_client()
File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/bin/crowdstrike_event_streams.py", line 354, in crowdstrike_client
    token_result, token_message, token_url = Stream().get_token(clientid, secret, api_endpoint, proxy, stanza_name, helper, user_agent, event_streams_title)
File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/bin/OAuth2.py", line 67, in get_token
    result_code = str(response.status_code)
UnboundLocalError: local variable 'response' referenced before assignment

Analysis performed:

  • The issue occurs during the OAuth2 token retrieval process from the CrowdStrike API.
  • A timeout (10 seconds) happens in the HTTPS request.
  • Due to improper exception handling, the response variable is never assigned, leading to an UnboundLocalError.
  • After this failure, the input appears to become “stuck” and stops ingesting new events.
  • Additionally, the offset handling becomes inconsistent: the input attempts to retrieve older events and does not properly resume real-time ingestion.
  • Restarting the input restores normal behavior.
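The failure mode in the final traceback frame can be reproduced with a small sketch. This is a hypothetical simplification, not the add-on's actual code: the function names are illustrative, and the built-in TimeoutError stands in for requests.exceptions.ReadTimeout. The point is that when `response` is only assigned inside the try block, a timeout leaves it unbound, and the later `response.status_code` raises UnboundLocalError and masks the real error:

```python
# Sketch of the bug pattern (hypothetical, not the TA's real code).
# TimeoutError stands in for requests.exceptions.ReadTimeout.

def get_token_fragile(do_request):
    try:
        response = do_request()        # raises on timeout
    except TimeoutError as err:
        error = str(err)               # noted, but 'response' never assigned
    result_code = str(response.status_code)   # UnboundLocalError on timeout
    return result_code

def get_token_defensive(do_request, retries=3):
    response = None                    # always bound, even on failure
    last_error = None
    for _ in range(retries):
        try:
            response = do_request()
            break
        except TimeoutError as err:
            last_error = str(err)      # retry instead of crashing the input
    if response is None:
        return None, "token request failed: " + str(last_error)
    return str(response.status_code), "ok"
```

The defensive variant returns an error tuple instead of raising, so the caller can log the failure and try again on the next interval rather than leaving the modular input wedged.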

What has been ruled out:

  • No general connectivity issues detected
  • No KV Store-related problems
  • Add-on is up to date

Questions:

  1. Has anyone encountered similar behavior with this add-on or other modular inputs using OAuth2?
  2. Any recommendations or solution to prevent the input from getting stuck after a failed token request?

Any guidance or shared experiences would be greatly appreciated.

Thanks in advance!

1 Solution

livehybrid
SplunkTrust

Hi @luispulido 

The failure ultimately stems from the Splunk server/Python not being able to reach https://api.crowdstrike.com/ in a timely manner. This is a publicly accessible endpoint with no IP allowlisting required on the CrowdStrike side, so it suggests the problem is either with the outbound connection from your Splunk instance or a genuine timeout on the CrowdStrike side.

I cannot find a status page for this API, but it might be worth checking with CrowdStrike to see if your failures line up with known issues with the CrowdStrike API. You can reach them at support@crowdstrike.com.


Do you have a corporate transparent or explicit proxy server between your Splunk instance and the internet? It could be that the proxy is performing SSL inspection or periodically blocking the endpoint, which would cause this error.
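One quick way to check for SSL inspection from the Splunk server itself (a stdlib-only sketch): if a transparent proxy is intercepting TLS, the certificate issuer you see for api.crowdstrike.com will be your organisation's internal proxy CA rather than a public CA.

```python
import socket
import ssl

def flatten_rdn(rdn):
    """Flatten the nested issuer/subject tuples returned by getpeercert()."""
    return dict(item for group in rdn for item in group)

def cert_issuer(host, port=443, timeout=10.0):
    """Connect to host:port over TLS and return the certificate issuer."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return flatten_rdn(tls.getpeercert()["issuer"])

# Example (needs outbound 443 from the Splunk server):
# print(cert_issuer("api.crowdstrike.com"))
```

If the printed issuer is a well-known public CA, SSL inspection is probably not in play; if it is your own proxy's CA, that intermediary is the first place to look.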

The other thing that springs to mind is that there could be a large volume of events to retrieve - can you see how many events are likely to be pulled down? Is there a spike in events on CrowdStrike around these times? If the endpoint takes too long to respond in full, the script could fail - again, this is something that the CrowdStrike developers/support should be able to check and remediate.
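On stopping the input getting permanently stuck after one failed request: a common mitigation (a sketch, not the TA's code - the `fetch` callable and parameters here are illustrative) is to wrap the flaky call in a retry loop with exponential backoff and jitter, so a single 10-second timeout does not kill the input:

```python
import random
import time

def with_backoff(fetch, attempts=4, base=2.0):
    """Call fetch(); on TimeoutError, retry with exponential backoff + jitter."""
    delay = base
    for attempt in range(attempts):
        try:
            return fetch()
        except TimeoutError:
            if attempt == attempts - 1:
                raise                              # give up after the last try
            time.sleep(delay + random.uniform(0, delay))
            delay *= 2                             # 2s, 4s, 8s, ...
```

The jitter spreads retries out so that a fleet of inputs does not hammer the API in lockstep after a shared outage.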

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing



luispulido
Explorer

Hello @livehybrid 

I understand that the issue could be related to the CrowdStrike API side, so I will continue investigating from that angle and consider reaching out to CrowdStrike support to validate if these timeouts align with any known issue.

Regarding your questions:

We currently do not have a proxy between the Splunk instance and the internet.
We also reviewed the event volume on the CrowdStrike side during the timeframes when the issue occurred and did not observe any unusual spikes that could explain delayed responses.

Additionally, it’s worth mentioning that the issue has not reoccurred in the past two months, which could suggest a transient condition either on the API side or network path.

I’ll continue to monitor the behavior and dig deeper based on your recommendations.

Thanks again for your help!
