Splunk Search

How to stop a custom search command from being called multiple times?

arkadyz1
Builder

I have made a custom search command which accepts some values, builds a network request and submits it. It works great - except that the command is called 3 times per search. I have tried hard to discourage Splunk from calling it more than once, to no avail.
Here is the full commands.conf:

[sendcustomaudit]
filename = sendcustomaudit.py
streaming = false
run_in_preview = false
required_fields = id,manager
retainsevents = false
needs_empty_results = false
outputheader = true
clear_required_fields = true

And here is the search:

index=rivit host="rivit_ent" source="/var/log/enterprise-server/rivit.log" "assigned tag to" userUID=$username$ tunnel=$tunnel$
| eval facility=substr(tunnel,1,4)
| eval System=substr(tunnel,5)
| search facility=$facility_id$
| rename userUID AS User packageUID AS token
| eval row_weight=random()
| eval manager="$manager_username$"
| sort row_weight
| head $total_tags$
| fields id token User tag tagCode facility System manager
| stats values(id) AS id first(manager) AS manager
| eval send_audit="$send_audit$"
| search send_audit="true"
| sendcustomaudit

The part up to and including | fields ... is the base search, which is used in two places: in a table showing the results prior to sending them over, and in a separate table, depending on the $send_audit$ token - that's why I have it in the search. The stats values(...) was my latest attempt to force the base search to complete before its results are accepted and piped through to the custom command.

Still - the command is called three times and sends the data over three times, each time with the full results! Is it possible to force Splunk to run the search just once?

1 Solution

arkadyz1
Builder

Since I haven't seen any responses here and I really need to solve this, I decided to find a brute-force solution. Because there is a base search, I simply save its $job.sid$ inside <progress>, pass it as a field to the command, and use that field's value as a "lock file" name inside some agreed-upon folder. If the results are not empty and the file is successfully opened with os.O_CREAT | os.O_EXCL, I proceed to execute the rest; otherwise I just bail out of the script. The folder where those lock files are created is periodically swept to delete all files older than some number of minutes.
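
In case it helps anyone else, here is a minimal sketch of that guard (assumptions: the SID arrives in a field named sid, the lock directory path is illustrative, and send_audit stands in for the real network call):

import os

LOCK_DIR = '/opt/splunk/var/run/sendcustomaudit_locks'  # illustrative path

def first_invocation(sid):
    """Create a lock file named after the search job's SID.

    Only the invocation that creates the file first gets True; every
    later invocation of the same job hits FileExistsError and bails out.
    """
    lock_path = os.path.join(LOCK_DIR, sid)
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL)
        os.close(fd)
        return True
    except FileExistsError:
        return False

# In the command body, after collecting the results:
# if results and first_invocation(results[0]['sid']):
#     send_audit(results)  # send_audit is the hypothetical network call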


spunk_enthusias
Path Finder

I also have this issue with a chunked custom search command. I simply call the command in a search (Fast/Verbose doesn't make a difference) and the command is called twice. Preview is off.

The job inspector shows 2 invocations, but doesn't explain them.

My logs also show 2 invocations, with some differences:

  • The metadata field is a bit shorter on the first invocation
  • Only in the second invocation is the search_results_info field filled (with a lot of general information).

Interestingly, both times the action is getinfo, preview is False, and streaming_command_will_restart is True.

The fact that getinfo is the first call seems to be a feature of the v2 protocol, from my reading of the splunklib code.
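
For reference, here is a minimal sketch of how those observations can be surfaced (assuming splunklib's v2 searchcommands API; the class name is illustrative, and metadata and search_results_info are the SearchCommand properties referred to above):

import sys
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration

@Configuration()
class InspectCommand(StreamingCommand):
    def prepare(self):
        # Runs during the getinfo phase of each invocation; here the
        # action is 'getinfo' both times.
        self.logger.info('action=%s preview=%s',
                         self.metadata.action,
                         getattr(self.metadata, 'preview', None))
        # Empty on the first invocation, populated on the second.
        self.logger.info('search_results_info=%r', self.search_results_info)

    def stream(self, records):
        # Pass events through unchanged; the interest here is the logging.
        for record in records:
            yield record

dispatch(InspectCommand, sys.argv, sys.stdin, sys.stdout, __name__)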

Edit: At some point I get this message:

ERROR ChunkedExternProcessor [27114 ChunkedExternProcessorStderrLogger] - stderr: Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>

It should be noted that even though the file is loaded twice and the prepare method is called twice, the generate method is not.

Too bad that for my application, loading the file is exactly the problem.
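
Given that generate is only called once, one possible workaround (a sketch under that assumption; heavy_model is a hypothetical stand-in for whatever is expensive to load) is to move the costly loading from module level into generate:

import sys
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration

@Configuration()
class LazyLoadCommand(GeneratingCommand):
    def generate(self):
        # Imported here rather than at module level, so the cost is paid
        # only by the single invocation that actually reaches generate().
        import heavy_model  # hypothetical expensive import
        yield {'_raw': heavy_model.run()}

dispatch(LazyLoadCommand, sys.argv, sys.stdin, sys.stdout, __name__)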


harshpatel
Contributor

Did you find out why it's called multiple times? I am having a similar issue where my logs are written multiple times, but events only come through in one of the invocations.
