Hello everyone,
I have inherited shared responsibility for a Splunk instance. We recently had a user depart, and one of the other Splunk admins changed that user's password so that they couldn't log in.
However, when I look in _audit I see that there is a failed login for that user exactly every 15 minutes, around the clock. I fear that they left a shell script behind that is trying to log in.
How can I find out the source of these failed attempts?
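For anyone tackling the same problem: a search over _audit along these lines may surface where the attempts originate. The exact field names (e.g. clientip) can vary by Splunk version, so treat this as a sketch and check your actual audit events:

index=_audit action="login attempt" info=failed user=TheDepartedUser
| stats count min(_time) as first_seen max(_time) as last_seen by user, clientip

If clientip points at one of your own servers, that narrows the hunt for a leftover script considerably.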
It turned out that there was a perl script attempting to use this person's login every 15 minutes to pull data. We are going to convert that functionality into a Splunk Alert.
Check if there is any saved search owned by that user. That may have an alert action setup or something.
| rest /servicesNS/DepartedUserNameHere/-/saved/searches splunk_server=local
It wasn't a scheduled search inside Splunk, but a script outside of Splunk that was trying to log in. We found this out because someone remembered which script it was.
If we didn't have this knowledge, does anyone know how I would have identified the script that was trying to use these credentials?
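One possibility, in case it helps others: a script pulling data will typically hit the management port, and those requests land in splunkd's access log in _internal. Something like the following might have identified the client, assuming the usual auto-extracted fields (clientip, status, uri) are present in your version:

index=_internal sourcetype=splunkd_access status=401
| stats count by clientip, uri

A 401 response from the same clientip every 15 minutes would point straight at the host running the script.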
Yeah, this sounds like scheduled search jobs more than anything.
You could go into Settings -> All configurations in the GUI and pick their username as the owner, if they are still listed. That should surface any knowledge objects still owned by them and confirm this.
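The same inventory can also be pulled with a REST search against the directory endpoint, which lists knowledge objects of all types. The endpoint and field names below are from memory, so verify them against your version's REST API reference:

| rest /servicesNS/-/-/directory splunk_server=local
| search eai:acl.owner="DepartedUserNameHere"
| table title, eai:type, eai:acl.app

This avoids paging through the GUI and gives you a list you can export before reassigning or deleting the objects.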