Is it possible to implement event filtering (and/or routing) in a managed Splunk Cloud deployment without the usage of an on-prem Heavy Forwarder?
The scenario is:
- Running a managed Splunk Cloud instance
- Need to ingest AWS CloudTrail logs (preferably using the "AWS Add-on" app with an SQS-based S3 input)
- Need to filter out the majority of CloudTrail events before they reach the index and count against the license
- (Critical requirement) Need to avoid "external" self-managed components like Heavy Forwarders (main reason to get a managed Cloud instance)
If the answer is no, I am also open to other suggestions.
Note: I am aware that the AWS Add-on allows you to set up a "Generic S3" input for CloudTrail that supports "in-line" event blacklisting. Unfortunately, this is not an option in my scenario, as that input type is far too heavy on S3 operations.
Yes, you can do that with the props.conf and transforms.conf configuration below. It needs to go on the indexers of your Cloud instance, and it will stop matching events from being indexed.
props.conf:

```
[<source/sourcetype/host on which you want to filter the events>]
TRANSFORMS-filter_events = filter_events_tr
```

transforms.conf:

```
[filter_events_tr]
REGEX = <regex defining which events to filter out of _raw>
DEST_KEY = queue
FORMAT = nullQueue
```
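As a concrete illustration (the `aws:cloudtrail` sourcetype and the regex are assumptions; adjust both to your data), a filter that drops read-only `Describe*`/`List*`/`Get*` API calls might look like:

```
# props.conf
[aws:cloudtrail]
TRANSFORMS-filter_events = filter_events_tr

# transforms.conf
[filter_events_tr]
# Matches the eventName field in the raw CloudTrail JSON for
# read-only API calls; matching events are routed to the nullQueue
REGEX = "eventName"\s*:\s*"(?:Describe|List|Get)\w*"
DEST_KEY = queue
FORMAT = nullQueue
```

Note that the events are still received and parsed by the indexers; they are simply discarded before indexing, so they do not count against the license.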
Hope this helps!!!
Thank you @VatsalJagani !
Follow-up question - what's the best way to manage props/transforms files in a Cloud instance? I have heard you can manage those through vetted apps, but the process of updating an app through Splunk support can sometimes be tedious and slow...
@stefanovalentino - No, in Splunk Cloud the only way is to ask the support team to apply these configurations, since this requires backend access. Even if there were another way, I would still recommend going through the Cloud support team.
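Since each change has to go through support, it can save round-trips to validate the filtering regex locally before submitting it. A minimal sketch in Python (the regex and the sample events are assumptions, not real CloudTrail payloads):

```python
import re

# Hypothetical filter regex: drop read-only CloudTrail events
# (Describe*/List*/Get* API calls). Must match what you put in
# the REGEX setting of transforms.conf.
FILTER_REGEX = re.compile(r'"eventName"\s*:\s*"(?:Describe|List|Get)\w*"')

# Fabricated sample events for illustration only
events = [
    '{"eventName": "DescribeInstances", "eventSource": "ec2.amazonaws.com"}',
    '{"eventName": "RunInstances", "eventSource": "ec2.amazonaws.com"}',
]

for raw in events:
    # An event matching the regex would be sent to the nullQueue (dropped)
    action = "DROP" if FILTER_REGEX.search(raw) else "INDEX"
    print(f"{action}: {raw}")
```

Running this against a representative sample of your CloudTrail logs gives a quick sanity check that the regex drops what you expect and nothing more.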