Hi Splunk Community,

I'm looking for guidance on how to properly manage and organize lookup files to ensure they are always up to date, especially in the context of alerting. I've run into situations where an alert is triggered but the related lookup file hasn't been updated yet, resulting in missing or incomplete context at the time of the alert.

What are the best practices for ensuring that lookup files are refreshed frequently and reliably? Should I be using scheduled saved searches, external scripts, KV store lookups, or another mechanism to guarantee the most recent data is available for correlation in real time or near real time? Any advice or example workflows would be greatly appreciated.

Use case for context: I'm working with AWS CloudTrail data to detect when new ports are opened in Security Groups. When such an event is detected, I want to enrich it with additional context, for example, which EC2 instance the Security Group is attached to. This context is available from AWS Config and is ingested into a separate Splunk index. I'm currently generating a lookup that maps Security Group IDs to related assets, but sometimes the alert triggers before this lookup has been updated with the latest AWS Config data.

Thanks in advance!
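For reference, the workflow I'm describing looks roughly like this. All index, sourcetype, lookup, and field names below are illustrative placeholders, not my exact configuration.

A scheduled saved search (e.g. every 15 minutes) rebuilds the lookup from the AWS Config index:

```
index=aws_config sourcetype=aws:config resourceType="AWS::EC2::SecurityGroup"
| stats latest(relationships{}.resourceId) AS attached_instances BY resourceId
| rename resourceId AS security_group_id
| outputlookup sg_to_asset.csv
```

The alert search then enriches CloudTrail events via that lookup:

```
index=aws_cloudtrail sourcetype=aws:cloudtrail eventName=AuthorizeSecurityGroupIngress
| rename requestParameters.groupId AS security_group_id
| lookup sg_to_asset.csv security_group_id OUTPUT attached_instances
```

The problem is the race between these two searches: if the alert fires before the scheduled rebuild has run against the latest AWS Config data, `attached_instances` comes back empty.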