Hey Splunk Community 🙂

OK, I've got a tale of woe, intrigue, revenge, index=_*, and Python 3.7.

My tale begins a few weeks ago when the other Splunk admin and I were just like, "OK, I know searches can be slow, but EVERYTHING is just dragging." We opened a support ticket, talked about it with AOD, let our Splunk team know, and got told we might be under-provisioned for SVCs and indexers... no wait, over-provisioned... no wait, write better searches... no wait again, Skynet is like "why is your instance doing that?" We also got a Splunk engineer assigned to our case and were told our instance is fine. Le sigh. When I tell you I rabbled rabbled rabbled racka facka Mr. Krabs... I was definitely salty.

So I took it upon myself to dive deeper than I have ever EEEEEVER dived before...

index=_* error OR failed OR severe OR ( sourcetype=access_* ( 404 OR 500 OR 503 ) )

I know, I know, it was a rough one, BUT down the rabbit hole I went. I ran this search back as far as my instance would go, October 2022, and counted from there. I was trying to find any sort of spike or anomaly, something to show that our instance is not fine.

October 2022 - 2
November 2022 - 0
December 2022 - 0
January 2023 - 25
February 2023 - 0
March 2023 - 29
April 2023 - 15
May 2023 - 44
June 2023 - 1,843
July 2023 - 40,081
August 2023 - 569,004
September 2023 - 119,696,269
October 2023 - don't ask... ok fine, so far in October there are 21,604,091

The climb is real, and now I had to find what was causing it. From August back it was mostly connection/timeout errors from the UF on some endpoints, so nothing super weird, just a lot of them. SEPTEMBER, though, specifically 9/2/23 11:49:25.331 AM, this girl blew up! The 1st event_message was:

09-02-2023 16:49:25.331 +0000 ERROR PersistentScript [3873892 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-Zscaler_CIM/bin/TA_Zscaler_CIM_rh_settings.py persistent}: WARNING:root:Run function: get_password failed: Traceback (most recent call last):

The rest of the event messages that followed were these... see the 3 attached screenshots. I did a "last 15 min" search, but as September's count shows, this climbs into the millions. Also, I see it's not just one app; it's several of the apps we use to get logs into Splunk via API. But not all of the apps we use show up on the list (weird), and it's not limited to 3rd-party apps either; the Splunk Cloud admin app is on there, among others (see attached VSC doc). I also checked whether any of these apps might be out of date, and they are all on their current versions.

I did see one post on Community (https://community.splunk.com/t5/All-Apps-and-Add-ons/ERROR-PersistentScript-23354-PersistentScriptIo-From-opt-splunk/m-p/631008) but there was no reply. I also first posted on the Slack channel to see if anyone else was experiencing or had experienced this: https://splunk-usergroups.slack.com/archives/C23PUUYAF/p1696351395640639. And last but not least, I did open another support ticket, so hopefully I can give an update if I get some good deets!

Appreciate you 🙂

-Kelly
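
P.S. For anyone who wants to reproduce the month-by-month counts without running the search once per month, something like this single search should do it (just a sketch; adjust earliest= to whatever your retention actually goes back to, mine stopped at October 2022):

index=_* error OR failed OR severe OR ( sourcetype=access_* ( 404 OR 500 OR 503 ) ) earliest=-13mon@mon
| timechart span=1mon count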
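
P.P.S. If you want to see which apps are throwing these PersistentScript errors in your own instance, something along these lines should break it down by app (also just a sketch, assuming the errors land in _internal under sourcetype=splunkd, which is where splunkd's ERROR lines normally live; app_name is just the name I'm giving the rex extraction from the script path in the error):

index=_internal sourcetype=splunkd log_level=ERROR component=PersistentScript
| rex "etc/apps/(?<app_name>[^/]+)/bin"
| stats count by app_name
| sort - count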