Splunk Dev

Piping in Splunk

asarolkar
Builder

Hi all:

Is it possible to pipe the results of a search into a temporary CSV file (using outputcsv), pass that file to a Python script located in $SPLUNK_HOME/etc/apps/search/bin, and then pipe the script's output back into Splunk?

Note that the CSV file itself is an argument to the Python script:

import csv

# read the temporary CSV produced by outputcsv
with open('data.csv', 'r', newline='') as f:
    reader = csv.reader(f)

So basically:

i) I want to take a search like this: sourcetype="access_combined" host="us-1" | outputcsv data.csv


ii) Then take the data.csv and pass it to a python script (not sure how to go about it)


iii) Then take the output of the python script (which looks like this) and put it back into Splunk:

ORG  PROFIT%
1    10
2    5
5    7

Any suggestions would be appreciated

1 Solution

gkanapathy
Splunk Employee

It sounds to me like you really want a custom search command, instead of doing what you described. See http://docs.splunk.com/Documentation/Splunk/latest/Developer/SearchScripts, any of the *.py files in $SPLUNK_HOME/etc/apps/search/bin/ for examples, and the docs for the commands.conf file. Basically, each of these scripts is called as if it were a search command in the Splunk search pipeline: it receives CSV (prefixed with a few lines of header) on stdin and is expected to produce CSV on stdout. Splunk handles moving the CSV between each part of the search query pipeline.
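
For illustration, here is a minimal sketch of such a script, assuming it is saved as $SPLUNK_HOME/etc/apps/search/bin/profitcalc.py and that the stanza sets enableheader = false so the script receives plain CSV (the command name, field name, and computation here are hypothetical):

# profitcalc.py -- hypothetical custom search command
import csv
import sys

def main():
    # Splunk pipes the search results to the script as CSV on stdin
    reader = csv.DictReader(sys.stdin)
    rows = list(reader)
    fieldnames = (reader.fieldnames or []) + ["profit_pct"]
    # write CSV back to stdout for the next stage of the search pipeline
    writer = csv.DictWriter(sys.stdout, fieldnames=fieldnames)
    writer.writeheader()
    for row in rows:
        row["profit_pct"] = "10"   # placeholder: compute your real value here
        writer.writerow(row)

if __name__ == "__main__":
    main()

A matching (hypothetical) commands.conf stanza in the app would look roughly like:

# commands.conf
[profitcalc]
filename = profitcalc.py
enableheader = false

After a restart you could then run something like: sourcetype="access_combined" host="us-1" | profitcalc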

In the examples, you'll see references to Intersplunk objects; you can use them, or you can ignore them and just read and process the CSV from stdin in your script.
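
As a rough alternative using the splunk.Intersplunk helpers that ship with Splunk (same hypothetical command and field name as above; note that this variant expects the default enableheader = true, since the helper parses the header itself, and you should check the bundled example scripts for the exact helpers on your version):

# profitcalc.py -- same idea, using the Intersplunk helpers
import splunk.Intersplunk as isp

# parse the header and CSV that Splunk sends on stdin
results, dummyresults, settings = isp.getOrganizedResults()
for result in results:
    result["profit_pct"] = "10"   # placeholder: compute your real value here
# write the results back to stdout as CSV for Splunk
isp.outputResults(results)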


Kate_Lawrence-G
Contributor

I think you can get close to what you want to do:

  1. You can configure a script to run when an alert or scheduled search fires; this can be the Python script you want to initiate. Splunk makes the results of the search available to the script via the SPLUNK_ARG_8 option, which points to a gzipped file that you have to read and load into a dictionary to use.
  2. Once your script is done doing its thing, you can write the results back to Splunk by dropping a file into the sinkhole directory (/opt/splunk/var/spool) so that they get indexed again and are available for searching. A rough sketch of both steps follows this list.
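
A minimal sketch of that flow, assuming an old-style alert script where the eighth command-line argument is the path to the gzipped results file (the output file name and field names are illustrative, and the spool path should be adjusted to your installation):

# alert_script.py -- hypothetical alert/scheduled-search script
import csv
import gzip
import os
import sys

results_file = sys.argv[8]                 # path to the gzipped search results
with gzip.open(results_file, "rt") as f:
    rows = list(csv.DictReader(f))         # load the results into dictionaries

# ... process `rows` here ...

# drop the processed output into the sinkhole/spool directory so Splunk indexes it
spool_dir = "/opt/splunk/var/spool/splunk"  # adjust to your installation
with open(os.path.join(spool_dir, "profit_results.csv"), "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["ORG", "PROFIT_PCT"])
    for row in rows:
        writer.writerow([row.get("ORG", ""), row.get("PROFIT_PCT", "")])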