Reporting

Saved search running for a very long time and not finishing?

saivijayr
Loves-to-Learn

Hi Folks,
For the last couple of weeks we have observed an issue in our newly developed Splunk app (Radware Bot Risk Scanner). The app schedules a saved search which runs every hour, extracts some data from indexes, forwards it to a custom search command we developed, and saves the results in a results index.

Flow: Splunk search -> custom search command (which performs a REST API call for each record) -> save results to a new index.
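
For context, here is a minimal sketch of that pattern, assuming the custom command is a Python streaming command built on splunklib; the class name, endpoint URL, and field names are placeholders, not our actual app code:

```python
# bot_risk_scan.py - illustrative streaming custom search command (placeholder names).
import sys

import requests
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration


@Configuration()
class BotRiskScanCommand(StreamingCommand):
    """For each incoming event, call a REST API and attach the result as a new field."""

    def stream(self, records):
        for record in records:
            # Hypothetical endpoint and field names; the real app's details differ.
            resp = requests.post(
                "https://api.example.com/bot-risk",
                json={"client_ip": record.get("src_ip")},
                timeout=10,
            )
            record["bot_risk_score"] = resp.json().get("score") if resp.ok else "error"
            yield record


if __name__ == "__main__":
    dispatch(BotRiskScanCommand, sys.argv, sys.stdin, sys.stdout, __name__)
```

The saved search pipes its results through this command and then writes them to the destination index (e.g. with the collect command).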

The saved searches get stuck in the Running state. When I try to stop one manually, it goes to the Finalizing state, not the Done state. The ideal time for this saved search to finish is ~2 minutes including all REST API calls, yet as you can see it often keeps running for a very long time. Please refer to the attached screenshot.
       BRS.PNG
I wanted to attach the search log as well but can't due to the message restriction.

Any help or ideas here are very much appreciated, thanks in advance 😊.

P.S.: A very important thing to note is that if I run the job for any hour manually, I don't face any issues at all 😁.


bowesmana
SplunkTrust

Have a look at the job inspector for the running jobs. If that search is scheduled to run hourly and has a 2-hour expiry, it looks like there are skipped searches.

Check whether you do. Also, you have a yellow warning triangle - does that tell you anything about the searches?

Do you have any logging from your custom search command that shows whether the REST commands are working or not?


saivijayr
Loves-to-Learn

Hi @bowesmana, thanks for responding.

I have gone through the search logs for a few of the pending jobs, but I didn't get any clues from them.

Yes, the yellow warning triangle indicates skipped searches: the currently running jobs have reached the concurrency threshold, so new ones are being skipped automatically.

I was actually debugging on the API side, since my custom search command uses multithreading to speed up the REST API calls. I was checking for scenarios like an unbounded queue or unhandled exceptions that might cause this.
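
For reference, this is roughly the shape of the fan-out I'm reviewing; it's an illustrative sketch with placeholder names rather than the actual app code. The points I'm checking are a bounded worker pool, a hard per-call timeout, and catching exceptions so a single bad call can't leave the command hanging (which would match the stuck Running/Finalizing behaviour above):

```python
# Illustrative sketch of a bounded, exception-safe REST fan-out (placeholder names).
from concurrent.futures import ThreadPoolExecutor, as_completed

import requests

API_URL = "https://api.example.com/bot-risk"  # placeholder endpoint
MAX_WORKERS = 8                               # fixed-size pool, so no unbounded thread growth


def score_record(record):
    """Call the REST API for one record; never raise, always return the record."""
    try:
        resp = requests.post(API_URL, json=record, timeout=10)  # hard per-call timeout
        resp.raise_for_status()
        record["bot_risk_score"] = resp.json().get("score")
    except Exception as exc:
        # Tag the failure instead of raising so one bad call can't wedge the whole search.
        record["bot_risk_score"] = "error: %s" % exc
    return record


def score_records(records):
    """Fan the REST calls out over the fixed-size pool and yield results as they finish."""
    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        futures = [pool.submit(score_record, r) for r in records]
        for future in as_completed(futures):
            yield future.result()
```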

Will add more info once I'm done with debugging.
