Hi Splunkheads,
Need some advice here. I have built a simple lookup table of known bad IP addresses and a simple search that runs against it. The search matches events against the lookup table and returns a table of any hits across the environment.
Here is my search:
| tstats summariesonly=t fillnull_value="MISSING" count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port, _time, All_Traffic.action, All_Traffic.bytes, index, sourcetype
| lookup ioc_ip.csv ioc_ip as All_Traffic.src OUTPUT ioc_ip as src_found
| lookup ioc_ip.csv ioc_ip as All_Traffic.dest OUTPUT ioc_ip as dest_found
| where isnotnull(src_found) OR isnotnull(dest_found)
| fields - src_found, dest_found
| sort -_time
I have been asked to auto-expire rows in the lookup after 30 days. What would the logic for something like that look like?
Since you are going to be doing some sort of calculation and comparison on the date, epoch time would be best, although you could store both the epoch value and a human-readable date just to make the file easier to check by eye; see the sketch below.
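For example, a minimal sketch of a row carrying both formats (the field names date_added and date_added_str are placeholders I am inventing for illustration, and 203.0.113.10 is just a documentation address):
| makeresults
| eval ioc_ip="203.0.113.10"
| eval date_added=now()
| eval date_added_str=strftime(date_added, "%Y-%m-%d %H:%M:%S")
Here date_added is the epoch value you would compare against, and date_added_str is only there for humans reading the CSV.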
The expiry logic would be something like:
| inputlookup ...
| where (now() - date) < (30*60*60*24)
| outputlookup ...
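Filled in for your case it might look like this (I am assuming the epoch field is called date_added to match the sketch above; substitute whatever your lookup actually stores):
| inputlookup ioc_ip.csv
| where (now() - date_added) < (30*24*60*60)
| outputlookup ioc_ip.csv
Saving that as a daily scheduled search will rewrite the file each run, keeping only the rows less than 30 days old.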
Note that this process can only ever shrink the lookup file, so to keep it current you will need to either fold new indicators into this same process or run a separate one that appends them.
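A sketch of what that feeding step could look like, merging a newly found indicator into the existing file (again, date_added and date_added_str are assumed field names, and 198.51.100.7 is just a documentation address):
| makeresults
| eval ioc_ip="198.51.100.7", date_added=now(), date_added_str=strftime(now(), "%Y-%m-%d %H:%M:%S")
| append [| inputlookup ioc_ip.csv]
| dedup ioc_ip
| table ioc_ip date_added date_added_str
| outputlookup ioc_ip.csv
Because dedup keeps the first occurrence per value and the new row comes before the appended lookup rows, re-adding an IOC you have seen before also refreshes its timestamp and resets its 30-day clock.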