I am reviewing the scheduled jobs on our Splunk system and I noticed that several people are running the same query many times and extracting something slightly different each time.
Your subject mentions writing csv files, so I assume you really do want your data ultimately to come out of the Splunk system and go into several identical copies on your real filesystem.
I would run the one search and pipe it through a custom command that simply writes the data out to several output files.
Here are some samples to get you started in case you haven't worked with custom commands before (this is off the top of my head, so beware of the odd syntax problem; note also that the Splunk documentation on this topic is a bit convoluted and seems to have some problems):
Once the steps below are done, you would run your search and add | script perl mydistrib to the end, as in the example that follows.
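For example, with a hypothetical index and field set (myindex, host, and status are placeholders here), the full search would look something like:
index=myindex | table _time host status | script perl mydistrib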
To get set up:
Add an entry to your commands.conf file
[mydistrib]
filename=distrib.pl
type=perl
retainsevents=yes
streaming=no
enableheader=false
Then in your /ops/splunk/etc/searchscripts directory (that is, $SPLUNK_HOME/etc/searchscripts), create a script named the same as "filename" above.
#!/usr/bin/perl
use strict;
use warnings;

my @outfiles = ("/path1/file1", "/path2/file2", "/path3/file3");
my $main_out = "/path/to/primary/outfile";
open(OUTFILE, ">$main_out") or die "Cannot open $main_out for writing: $!\n";
# Copy everything Splunk sends via STDIN to a master output file
while (<>) {
    print OUTFILE $_;
}
close(OUTFILE);
# Now just duplicate the master file to each target.
foreach my $target (@outfiles) {
    system("cp $main_out $target");
}
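If you'd rather avoid the temporary copy step, here is a minimal single-pass sketch of the same idea (the paths are placeholders again) that opens every target up front and writes each incoming line to all of them:
#!/usr/bin/perl
use strict;
use warnings;

# Placeholder target paths -- adjust to your environment.
my @outfiles = ("/path1/file1", "/path2/file2", "/path3/file3");

# Open a handle for every target so each line is written once per
# destination in a single pass, with no master file or cp step.
my @handles;
foreach my $target (@outfiles) {
    open(my $fh, '>', $target) or die "Cannot open $target for writing: $!\n";
    push @handles, $fh;
}

# Copy everything Splunk sends via STDIN to every output file.
while (my $line = <STDIN>) {
    print {$_} $line for @handles;
}

close($_) for @handles;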
I fired the command in the search box but I am getting an error:
Error in 'script' command: Cannot find program 'mydistrib' or script 'mydistrib'.
I have copied distrib.pl into \splunk\etc\apps\search\scripts, and I have two commands.conf files: one under \Splunk\etc\system\default and the other under \Splunk\etc\apps\search\default.
index=(whatever your index is) | convert ctime(_time) as timestamp | table timestamp name signature src spt dst dpt | sendmail to=youremailaddress@email.com server=(your mail server) sendresults=true inline=false graceful=true
The table command takes whatever field set you want to bring back, and the sendmail options at the end are the main commands. Also, include | convert ctime(_time) as timestamp when you care about the timestamp: by default the time will not come out right when you output to CSV, hence the need for that command.
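If you want the CSV to land inside Splunk rather than go out by mail, a minimal sketch (index and fields are placeholders) is the same conversion followed by outputcsv, which writes the file under $SPLUNK_HOME/var/run/splunk/csv:
index=myindex | convert ctime(_time) as timestamp | table timestamp src dst | outputcsv my_results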