Hi
We have an issue where we sometimes receive very large files, or a host produces too much data, and we need to stop it coming in. By the time we notice, too much "bad data" has already been sent.
Is it possible to dynamically stop the data via the forwarder, via the indexers, or "somehow" when an alert is thrown?
Thanks in advance
Robert
Hi @robertlynch2020 ,
Not sure if this is a good idea in your environment, but you could probably write a wrapper script that stops the forwarder, and in the alert use the trigger action "Run a script" to call that script.
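A minimal sketch of such a wrapper, assuming the Splunk server has SSH access to the offending host and the forwarder sits in the default install path; the host name argument is hypothetical and would have to be derived from the alert's results:

```
#!/bin/bash
# stop_forwarder.sh - called by the alert's "Run a script" action.
# $1 is assumed to be the offending host name (how you extract it from the
# alert results is up to you); SPLUNK_HOME is the assumed default UF path.
HOST="$1"
SPLUNK_HOME="/opt/splunkforwarder"

# Stop the Universal Forwarder on that host so it stops sending data.
ssh "splunk@${HOST}" "${SPLUNK_HOME}/bin/splunk stop"
```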
The problem I see is that you would sometimes trigger this alert when there are simply a lot of "good" events, i.e. just a peak in "normal" traffic.
Maybe you can combine the number of events with some strings that are unique to the "bad" events, and only trigger the alert when both conditions are true.
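For example, something along these lines as the alert search (index, sourcetype, patterns and threshold are placeholders you would have to adapt):

```
index=main sourcetype=my_sourcetype ("BAD PATTERN 1" OR "BAD PATTERN 2")
| stats count BY host
| where count > 50000
```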
Hope it gives you an idea.
BR
Ralph
--
Karma and/or Solution tagging appreciated.
Hi
Thanks for the reply. In the end, we are going to try to use a transform to cut out the bad data.
Cheers
Rob
If you go down this path, make your life easier with a scripted alert action on your deployment server and an app for the transforms. That automates deployment to the parsing nodes and gives you some flexibility about where the filtering is applied.
Basic algorithm:
1. Search for high volume
2. Trigger update alert script
3. Update the app in deployment-apps: append the offending data to the transform's REGEX, i.e. just keep adding |NEW PATTERN on the end of the REGEX. If you only care about hosts, use [host::*] in props and SOURCE_KEY = MetaData:Host in your transform (see the config sketch after this list).
4. Restart Splunk on the deployment server. This is generally easier than trying to log in from scripts. You can also try doing this via the REST API (a curl sketch follows below): https://docs.splunk.com/Documentation/Splunk/8.0.5/RESTREF/RESTsystem#server.2Fcontrol.2Frestart
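A rough sketch of the host-based variant from step 3; the stanza names, host names and patterns are placeholders, and events matching the REGEX get routed to the nullQueue, i.e. dropped at parse time. As far as I remember, host metadata values carry a host:: prefix when matched via SOURCE_KEY, so the REGEX below assumes that:

```
# transforms.conf
[drop_bad_hosts]
SOURCE_KEY = MetaData:Host
# the alert script keeps appending |host::new-noisy-host to this REGEX
REGEX = host::noisy-host-01|host::noisy-host-02
DEST_KEY = queue
FORMAT = nullQueue

# props.conf
[host::*]
TRANSFORMS-dropbad = drop_bad_hosts
```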
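And for step 4, a restart via the REST endpoint linked above could look like this (host and credentials are placeholders):

```
# POST to the management port of the deployment server to restart splunkd
curl -k -u admin:changeme -X POST https://deployment-server:8089/services/server/control/restart
```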