Hi,
I use summary indexing a lot in my custom app. Recently I created a second app and added a summary index. The scheduled search is working fine, and I can successfully view the 'recent results'.
The problem is that no data ends up in my summary index. How can I go about debugging this? There are no errors in splunkd.log that line up with the time the search is scheduled for. The search is definitely running, and I believe it's configured correctly, as the config is identical to the working summary indexes I have set up in my other app.
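For reference, the kind of savedsearches.conf stanza I mean is roughly the following (the search name, schedule, and base search here are just placeholders, not my actual config):
[My Summary Search]
enableSched = 1
cron_schedule = 0 6 * * *
search = index=main sourcetype=transactions | stats sum(amount) AS am_transactions, count AS no_transactions
action.summary_index = 1
action.summary_index._name = summary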
Thanks.
Sample output from the 'collect' example below
***SPLUNK*** index=summary
08/02/2010 06:00:00, info_min_time=1280692800.000, info_max_time=1280779200.000, info_search_time=1280840177.929, am_approved="-5771.97", am_transactions="-5771.97", no_approved=26, no_transactions=26
If the base search looks like it's giving you correct tabular results, then I usually debug using the collect command: http://www.splunk.com/base/Documentation/latest/SearchReference/Collect
This is actually what is called when you enable summary indexing. You can add it to the end of your base search. Where I usually start is:
... | collect addtime=true index=summary spool=false file=x.txt
This will cause the summary index contents file to be written to $SPLUNK_HOME/var/run/splunk/x.txt instead of the spool directory. This basically prevents the data from being indexed into the summary index by Splunk, and thereby prevents it from being deleted after indexing. You can inspect the file for anything that might be preventing Splunk from picking it up. (You ought to delete it by hand when you're done with it.)
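Once the file looks sane, you can put the spool behavior back (drop spool=false and file=x.txt) and check what actually landed with a quick search along these lines (swap in whatever index you collect into):
index=summary | stats count by source, sourcetype
If I remember right, summary-indexed events come in with sourcetype=stash and the name of the scheduled search as the source, so this gives you a per-search count to compare against the working app.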
Will do. Thanks for the help.
Well, if the file gets written (which I still assume it does, since it was written fine to the other location), then if you don't see it, that's because it was read, indexed, and then deleted by Splunk. So it should be in the summary index (well, whichever index was named at the top of the generated file). Not sure what's going on, especially since another summary is working. Might have to open a support case on this one.
I have removed the spool and file arguments, but nothing is written to var/spool/splunk that I can see, even though the GUI reports a .stash file has been written. Perhaps it's picked up too quickly? In any case, there's still nothing in my summary index.
The file looks right. I guess the next step is to see (when you remove the spool=false and file=x.txt arguments) that it is written to var/spool/splunk. From there, the default batch input should read and index the file. If it's not indexed, it won't be deleted; if it is indexed, it will be deleted.
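If you want to rule out the input side, the stanza that watches that directory is in $SPLUNK_HOME/etc/system/default/inputs.conf and, from memory, looks roughly like this:
[batch://$SPLUNK_HOME/var/spool/splunk]
move_policy = sinkhole
crcSalt = <SOURCE>
It's worth checking (e.g. with splunk cmd btool inputs list --debug) that nothing in an app's local inputs.conf overrides it with disabled = true.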
Thanks. I did as you suggested and the file was generated successfully. I've added the first two lines of the file to the question above. Does it look right to you?
Hmm, I'm not 100% sure how to debug this, as I've never run into a problem like this, but you can take a look at your scheduled searches and their actions in the _internal index.
index="_internal" sourcetype="scheduler" thread_id="AlertNotifier*" savedsearch_name="Your search name"
That should give you some data on what alert_actions were taken for your saved search. It would also be interesting to look at the status value -- maybe there are some failures?
Might be a good starting point.
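To get a quick overview of the recent runs you could also tack a stats onto that, something like the following (field names from memory, so adjust if your scheduler events look different):
index="_internal" sourcetype="scheduler" savedsearch_name="Your search name" | stats count by status, alert_actions
If the status shows success and alert_actions includes the summary index action, the scheduler side is probably fine and the problem is further downstream.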
Nope, nothing like that is enabled. This is a tricky one! It has to be something obvious.
This might point at a problem, yeah. You don't have the LWF or SplunkForwarder apps enabled, do you?
Yes, that is odd. Did you disable all the default inputs that pull in splunkd.log, etc? Any events in index=_audit?
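Two quick sanity checks, just to see whether internal/audit data is being indexed at all (both should return something on a healthy instance):
index=_audit | head 5
| metadata type=sourcetypes index=_internal
If both come back empty, that points at inputs or indexing being disabled rather than a permissions problem.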
Admin definitely has rights to 'All internal tables', but a search for index=_internal over the past 24 hours returns nothing. Is that odd?
The admin role can search all internal indexes by default. You can check which indexes a role can search by going to Manager > Access Controls > Roles > Role_Name > Indexes.
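If you prefer to check the config files instead of the UI, the same setting lives in authorize.conf; for the admin role it's something along these lines (the exact defaults may vary by version):
[role_admin]
srchIndexesAllowed = *;_*
srchIndexesDefault = main
The _* part is what covers the internal indexes such as _internal and _audit, since their names start with an underscore.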
Hmmm, is there something I need to do to be able to search the _internal index? I'm logged in as admin, and searching for
index="_internal"
gives me no results