Hi all,
We have some scripts that fill lookups via the Splunk lookup REST API.
We also have a search head cluster (SHC).
It would be great to use the SHC's capabilities to run our scripts on one of the live nodes.
The best candidate for this seems to be inputs.conf. We can not only run the script, but also collect its STDOUT and STDERR into an index (Docker style), for example:
[script://$SPLUNK_HOME/etc/apps/myapp/bin/lookup_fill.py]
interval = 50 23 * * *
sourcetype = lookup_fill
index = index_for_scripts_output
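For illustration, a minimal sketch of what such a lookup-filling script could look like when run this way; it assumes a hypothetical KV-store-backed lookup, and the app name, collection name, splunkd URL, and token are placeholders:
#!/usr/bin/env python
# Hypothetical sketch of lookup_fill.py run as a scripted input.
# The app, collection, splunkd URL, and token below are placeholders.
import json
import sys

import requests

SPLUNKD = "https://localhost:8089"
COLLECTION = SPLUNKD + "/servicesNS/nobody/myapp/storage/collections/data/my_lookup_collection"
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

def main():
    # Rows would normally come from an external source (DB, API, file, ...)
    rows = [{"host": "web01", "owner": "team-a"}]
    for row in rows:
        resp = requests.post(COLLECTION, headers=HEADERS,
                             data=json.dumps(row), verify=False)
        if resp.ok:
            # Anything printed to STDOUT is picked up by the scripted input
            print("inserted _key=%s" % resp.json().get("_key"))
        else:
            # Errors are written to STDERR
            sys.stderr.write("insert failed: %s %s\n" % (resp.status_code, resp.text))

if __name__ == "__main__":
    main()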
But when we use inputs.conf, our script starts on all SHC nodes.
Can you advise a way to run the script from inputs.conf only once, or suggest a better way to:
1. Run a custom script on one of the SHC nodes (ideally the least loaded one)
2. Collect the script's STDOUT and STDERR into an index.
Thank you.
You should run the script on a Heavy Forwarder (HF). It gives you a single place to run it, and the HF will forward the output to your indexers.
Configure the script on the HF the same way you would on a SH.
If this reply helps you, Karma would be appreciated.
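For example, a minimal sketch of the HF-side configuration would be the same inputs.conf stanza plus an outputs.conf pointing at the indexers (the indexer hostnames here are placeholders):
# inputs.conf on the HF (same stanza as on the SH)
[script://$SPLUNK_HOME/etc/apps/myapp/bin/lookup_fill.py]
interval = 50 23 * * *
sourcetype = lookup_fill
index = index_for_scripts_output

# outputs.conf on the HF (hypothetical indexer addresses)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997,idx2.example.com:9997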
That is the recommended solution, as I recall.
But it would be a single HF versus several search heads, so if the HF goes down, my script will not run until the HF is recovered.