Deployment Architecture

Creating a lookup with search-head pooling

a212830
Champion

Hi,

We just implemented search-head pooling. I have a lookup that gets generated via wget. How do people handle situations like this? I don't want to tie the script to a specific server, but I also want to make sure it still runs if one of the servers is down. Is there a way to have Splunk run it without generating indexed data?


ewoo
Splunk Employee

One option: create a Python search command that updates your lookup, then schedule a search that invokes that command on the desired interval (e.g. every night at midnight). The search heads in your pool coordinate scheduled searches, so only one instance in the pool will run the search in any given interval.
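
A minimal sketch of what such a command might look like, assuming the legacy splunk.Intersplunk scripting interface; the URL, app name, and lookup path are hypothetical placeholders, and with search-head pooling the path would need to resolve to the pool's shared storage so every search head sees the updated file:

```python
# update_mylookup.py -- sketch of a custom search command that refreshes a
# lookup file. Assumes the legacy splunk.Intersplunk interface; the URL,
# app name, and lookup path are placeholders.
import os
import urllib2  # Python 2, as bundled with Splunk at the time

import splunk.Intersplunk

LOOKUP_URL = "http://example.com/mylookup.csv"  # source previously fetched via wget
LOOKUP_PATH = os.path.join(
    os.environ["SPLUNK_HOME"], "etc", "apps", "myapp", "lookups", "mylookup.csv"
)  # with search-head pooling this should point at the pool's shared storage

try:
    # Fetch the remote file (replacing the external wget step).
    data = urllib2.urlopen(LOOKUP_URL, timeout=60).read()

    # Write to a temp file first, then rename, so a half-written download
    # never becomes the live lookup.
    tmp_path = LOOKUP_PATH + ".tmp"
    with open(tmp_path, "wb") as f:
        f.write(data)
    os.rename(tmp_path, LOOKUP_PATH)

    # Emit a single status row; the scheduled search produces no indexed data.
    splunk.Intersplunk.outputResults([{"lookup": LOOKUP_PATH, "status": "updated"}])
except Exception as e:
    splunk.Intersplunk.outputResults([{"lookup": LOOKUP_PATH, "status": "error",
                                       "message": str(e)}])
```

You would then register the script in the app's commands.conf and schedule a search such as `| updatemylookup` on the interval you want. Because scheduled searches are coordinated across the pool, only one search head fetches the file per interval, and keeping the lookup on the shared storage makes the result visible to all of them.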
