I have created a custom search command, and I pipe the results of another SPL query into it. For small amounts of data it works fine, but when the preceding query produces larger results, my custom command returns incomplete data. I have only set the filename attribute in the commands.conf stanza for my custom command; could that be the reason?
I don't think it's chunked data.
What custom command type are you using? Are you accepting chunked data?
The custom command writes the output of the query preceding it (a table of roughly 1000-2000 rows) to a CSV file with a custom location and a custom file name. Basically, my query looks something like this:
index=your_index sourcetype=your_sourcetype
| search your_search_conditions | lookup your_lookup_table OUTPUTNEW additional_fields
| eval new_field = if(isnull(old_field), "default_value", old_field)
| table required_fields | exportcsv {file_path}
where exportcsv is my custom command. My commands.conf file looks like this:
[exportcsv]
filename = exportcsv.py
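On the commands.conf question from the first post: with only filename set, the command runs under the legacy (v1) protocol with its default header handling and input limits, which can matter for larger result sets. A fuller stanza for the v2 (chunked) protocol might look like the sketch below; the values are assumptions to adapt, and chunked = true also requires the script itself to speak the chunked protocol (e.g. via splunklib):

```
[exportcsv]
filename = exportcsv.py
chunked = true
python.version = python3
```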
What class is your command (from your python source)?
I am not using any class specifically; I just read the data from stdin and write it directly to my CSV.
It's a bit more complicated than that.
See https://dev.splunk.com/enterprise/docs/devtools/customsearchcommands/
I can see that we need to use the splunklib library to create a custom command, but when I try to install it I get an exception while it downloads its dependency pycrypto, which I understand is not supported in Splunk 9.x. Is there an alternate way to do it?
My guess is that your custom command is getting the first chunk and then stopping. Or perhaps your script is being called multiple times and overwriting the csv each time?
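If the script really is invoked once per chunk, opening the file in 'w' mode would truncate it on every invocation, leaving only the last chunk. A minimal stdlib sketch that appends instead, writing the header only once (the function name is hypothetical):

```python
import csv
import os

def append_rows(filepath, header, data_rows):
    """Append rows to a CSV, writing the header only when the file is
    new or empty, so repeated invocations accumulate rows instead of
    truncating the file each time."""
    write_header = not os.path.exists(filepath) or os.path.getsize(filepath) == 0
    with open(filepath, "a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(header)
        writer.writerows(data_rows)
```

Calling this twice with the same path leaves one header and all data rows in the file.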
It would be helpful if you could suggest a way to overcome this if that's the case
Try implementing it using the class model.
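For reference, a sketch of what the class model could look like with splunklib. This is not runnable as-is: it assumes splunklib is available (Splunk apps typically vendor it in the app's bin/ directory rather than pip-installing it), and the path option name is illustrative:

```python
import csv
import os
import sys
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option

@Configuration()
class ExportCsvCommand(StreamingCommand):
    # Target file, e.g. ... | exportcsv path="/opt/exports/out.csv"
    path = Option(require=True)

    def stream(self, records):
        records = list(records)
        if records:
            folder = os.path.dirname(self.path)
            if folder:
                os.makedirs(folder, exist_ok=True)
            # stream() is called once per chunk, so append, and only
            # write the header when the file does not exist yet.
            new_file = not os.path.exists(self.path)
            with open(self.path, "a", newline="") as f:
                writer = csv.DictWriter(f, fieldnames=list(records[0].keys()))
                if new_file:
                    writer.writeheader()
                writer.writerows(records)
        for record in records:
            yield record  # pass events through unchanged

dispatch(ExportCsvCommand, sys.argv, sys.stdin, sys.stdout, __name__)
```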
When I try to install splunklib I get an error because of pycrypto, so I couldn't follow that.
This is my script. Sorry, I am new to Splunk, so I'm not aware of what exactly you are asking:
import csv
import os
import sys

def exportcsv():
    try:
        if len(sys.argv) < 3:
            print("Usage: exportcsv.py <folder_path> <filename>", file=sys.stderr)
            sys.exit(1)
        folder_path = sys.argv[1]  # destination directory
        filename = sys.argv[2]     # output CSV file name
        os.makedirs(folder_path, exist_ok=True)

        # Read everything Splunk pipes in on stdin.
        # Note: naive comma-splitting breaks if a field value contains a comma.
        input_data = sys.stdin.read()
        rows = input_data.strip().split('\n')
        header = rows[0].split(',')
        data = [row.split(',') for row in rows[1:]]

        filepath = os.path.join(folder_path, filename)
        print(f"Writing to file: {filepath}", file=sys.stderr)
        with open(filepath, 'w', newline='') as file:  # 'w' truncates on every invocation
            writer = csv.writer(file)
            writer.writerow(header)
            writer.writerows(data)

        # Emit a one-column result table back to Splunk.
        print("Status")
        print("Export Successful to " + filepath)
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    exportcsv()
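One more note on the script above: splitting the input on raw commas breaks as soon as a field value itself contains a comma or a quoted newline, which would also show up as "incomplete" or misaligned data. The stdlib csv module handles the quoting correctly; a minimal sketch (the function name is hypothetical):

```python
import csv
import io

def parse_csv_text(text):
    """Parse CSV text with csv.reader so quoted fields containing
    commas or embedded newlines are split correctly; returns the
    header row and the remaining data rows."""
    rows = list(csv.reader(io.StringIO(text)))
    return rows[0], rows[1:]
```

For example, the line `"x,y",2` parses as two fields, `x,y` and `2`, whereas a plain split(',') would produce three.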