Anyone out there doing time-based lookups with an external python script? How do you handle the time portion of the lookup configuration? Same as you would for a CSV lookup?
Per Steve Zhang, the Director of Search, there are two ways to do a time-based external lookup.
Option 1: Have Splunk's lookup mechanism handle the temporal aspect.
In this case, the external lookup returns all relevant matches over all time, and Splunk constrains the matches based on the time_field, time_format, and related settings specified in transforms.conf. This is analogous to how time-based CSV lookups work in Splunk: the time-based configuration performs the comparison on the time values and returns only the relevant results to the search, based on the lookup configuration.
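For this option, the lookup configuration ends up looking much like a time-based CSV lookup, just with external_cmd/external_type in place of a filename. A hedged sketch of a transforms.conf stanza follows; the stanza, script, and field names (temporal_external_lookup, temporal_lookup.py, ip, host, lookup_time) are placeholders I am assuming, and lookup_time is assumed to be an epoch value emitted by the script:

    [temporal_external_lookup]
    external_cmd = temporal_lookup.py ip host lookup_time
    external_type = python
    fields_list = ip, host, lookup_time
    # time_field / time_format tell Splunk which output column carries the timestamp,
    # just as they would for a time-based CSV lookup
    time_field = lookup_time
    time_format = %s
    # optionally bound how far a lookup row's time may differ from the event's _time
    max_offset_secs = 3600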
Option 2: Let the external script handle the temporal aspect implicitly by adding _time as a field to match on.
In this case, Splunk passes _time to the script along with the other lookup fields, and the script itself restricts its query to the rows that were valid at the event's timestamp.
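Below is a minimal sketch of what such a script might look like. It assumes the standard Splunk external lookup convention (CSV rows on stdin, the same rows with output fields filled in on stdout), a SQLite database, and placeholder table/column names (assignments, valid_from, valid_to); none of those specifics come from the original answer.

    #!/usr/bin/env python
    # External lookup sketch: Splunk passes a CSV of input rows on stdin and expects
    # the same rows back on stdout with the output fields filled in. Because _time is
    # one of the lookup fields, the script can restrict its query to rows that were
    # valid at the event's timestamp. Table and column names are placeholders.
    import csv
    import sqlite3
    import sys

    def lookup_host(conn, ip, event_time):
        # Return the host assigned to this ip at event_time, if any (placeholder schema).
        cur = conn.execute(
            "SELECT host FROM assignments "
            "WHERE ip = ? AND valid_from <= ? AND valid_to > ? "
            "ORDER BY valid_from DESC LIMIT 1",
            (ip, event_time, event_time),
        )
        row = cur.fetchone()
        return row[0] if row else ""

    def main():
        conn = sqlite3.connect("assignments.db")  # placeholder data source
        reader = csv.DictReader(sys.stdin)
        writer = csv.DictWriter(sys.stdout, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row.get("ip") and row.get("_time"):
                row["host"] = lookup_host(conn, row["ip"], float(row["_time"]))
            writer.writerow(row)

    if __name__ == "__main__":
        main()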
Which approach to implement depends on how often the matched field changes: in other words, how many different rows, over all time, contain a given value to be matched on. If the number of rows/changes is small, then either option above should be fine. If the number is large, then letting the script handle the time matching is likely better.
In either case, it would be wise to implement caching to reduce calls to the DB.
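One simple way to do that, sketched below, is to memoize the database call keyed on the match value plus a coarse time bucket, so repeated lookups within one search reuse an earlier answer instead of hitting the database again. The query_database helper, the hour-sized bucket, and the cache size are all illustrative assumptions rather than anything from the original answer.

    from functools import lru_cache

    def query_database(ip, event_time):
        # Placeholder standing in for the real database call (for example, the SQL
        # query in the script sketch above); replace with the actual lookup.
        return ""

    # Memoize results per (ip, hour bucket) so repeated lookups within one search
    # reuse an earlier answer instead of issuing another database query.
    @lru_cache(maxsize=4096)
    def cached_lookup(ip, hour_bucket):
        return query_database(ip, hour_bucket * 3600)

    def lookup_with_cache(ip, event_time):
        # Bucketing the event time to the hour is an arbitrary trade-off between
        # cache hit rate and temporal accuracy; tune it to how often the data changes.
        return cached_lookup(ip, int(event_time) // 3600)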