Splunk Search

Custom search command called multiple times

Builder

I have made a custom search command which accepts some values, forms a network request and submits it. It works great - but the command is called 3 times per search. I tried hard to discourage Splunk from calling it more than once to no avail.
Here is the full commands.conf:

[sendcustomaudit]
filename = sendcustomaudit.py
streaming = false
run_in_preview = false
required_fields = id,manager
retainsevents = false
needs_empty_results = false
outputheader = true
clear_required_fields = true

And here is the search:

index=rivit host="rivit_ent" source="/var/log/enterprise-server/rivit.log" "assigned tag to" userUID=$username$ tunnel=$tunnel$ | eval facility=substr(tunnel,1,4) | eval System=substr(tunnel,5) | search facility=$facility_id$ | rename userUID AS User packageUID as token | eval row_weight=random() | eval manager="$manager_username$" | sort row_weight | head $total_tags$ | fields id token User tag tagCode facility System manager | stats values(id) AS id first(manager) AS manager | eval send_audit="$send_audit$"  |  search send_audit="true" | sendcustomaudit

The part up to and including | fields ... is the base search, which is used in two places: in a table showing the results prior to sending them over, and in a separate table, depending on the $send_audit$ token - that's why I have it in the search. The stats values(...) was my latest attempt to force the base search to complete before accepting its results and piping them through to the custom command.

Still - the command is called three times, and it sends the data over three times - each time with the full results! Is it possible to force Splunk to run the search just once?

1 Solution

Builder

Since I haven't seen any responses here and I really need to solve this, I decided to find a brute-force solution. Since there is a base search, I simply save its $job.sid$ inside <progress>, pass it as a field to the command, and use that field's value as a "lock file" name inside some agreed-upon folder. If the results are not empty and the file is successfully opened with os.O_CREAT | os.O_EXCL, I proceed to execute the rest; otherwise I just bail out of the script. The folder where those lock files are created is periodically swept to delete all files older than some number of minutes.
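For anyone wanting to try this, here is a minimal sketch of the lock-file idea. The folder path, and the names acquire_lock and sweep_locks, are my own illustrative choices; the sid would come from the field the dashboard passes into the command, and the sweep would run from a scheduled job or cron:

```python
import os
import time

# Hypothetical agreed-upon folder for the lock files.
LOCK_DIR = "/opt/splunk/var/run/sendcustomaudit_locks"

def acquire_lock(sid, lock_dir=LOCK_DIR):
    """Atomically create a lock file named after the search sid.

    Returns True if this invocation won the race (the file did not
    exist yet), False if another invocation already created it.
    """
    os.makedirs(lock_dir, exist_ok=True)
    path = os.path.join(lock_dir, sid)
    try:
        # O_CREAT | O_EXCL fails if the file already exists, so only
        # one of the concurrent invocations can succeed.
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    os.close(fd)
    return True

def sweep_locks(max_age_minutes=60, lock_dir=LOCK_DIR):
    """Delete lock files older than max_age_minutes (run periodically)."""
    cutoff = time.time() - max_age_minutes * 60
    for name in os.listdir(lock_dir):
        path = os.path.join(lock_dir, name)
        if os.path.getmtime(path) < cutoff:
            os.remove(path)
```

In the command script you would then call acquire_lock(sid) once up front and exit immediately when it returns False, so only the first of the repeated invocations actually submits the network request.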


Contributor

Did you find out why it's called multiple times? I am having a similar issue where my logs are written multiple times, but events come through in only one of the invocations.
