Splunk Search

Monitoring sources - better way?

Ultra Champion

Hello there

I'm trying to build a dashboard that will query indexes for the latest events during a given period (say, the last 30 minutes) from a list of event sources, and will warn users if the latest events are older than a given threshold (or maybe I'll apply some more sophisticated logic later; I don't know yet). I also want to know if there are no events whatsoever.

The problem is that I don't want to just query everything - I have a lookup that defines the event sources to monitor. Depending on the type of the source, I might identify it by an index/host pair or an index/source pair; there may be other methods in the future, but for now that's it.
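For illustration, such a lookup might look something like this (the file name and column names are made up for the example):

monitored_sources.csv:
type,index,host,source
syslog,network,fw01,
file,app,,/var/log/app/app.log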

So what is my problem now? 🙂

The problem is that I don't like my solution - it's kinda ugly.

I first need to run a subsearch with inputlookup to define a set of conditions for tstats, then I have to transform the results (and probably aggregate some of them, since - for example - file-based sources can yield multiple rows if I run tstats over the index/source/host trio), and after that I have to run inputlookup again to create a zero-valued fallback to aggregate with the tstats results.

So effectively I have something with general structure of:

| tstats [ | inputlookup
   | eval/whatever/prepare conditions]
| stats/transform/whatever
| append
   [ | inputlookup
     | eval/whatever/prepare ]
| stats sum and tidy the results
| check_for_zeros, check threshold and so on...
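To make the skeleton concrete, here's a rough sketch of what I mean, assuming a monitored_sources.csv lookup with index, host and source columns and a 30-minute (1800 s) staleness threshold - all names here are hypothetical:

| tstats count latest(_time) as latest_time
    where [ | inputlookup monitored_sources.csv
            | fields index host source ]
    by index host source
| stats sum(count) as count, max(latest_time) as latest_time by index host
| append
    [ | inputlookup monitored_sources.csv
      | eval count=0, latest_time=0 ]
| stats sum(count) as count, max(latest_time) as latest_time by index host
| eval status=case(count=0, "no events",
                   now() - latest_time > 1800, "stale",
                   true(), "OK")

(In practice the where-clause subsearch would need more massaging - e.g. dropping empty fields per source type - but that's the shape of it.)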

That's the general idea.

It should work, but I don't like the fact that I need to run the inputlookup subsearch twice, and the results of those two subsearches will be - I suppose - highly similar to each other.

Any idea if it can be performed in a more "tidy" way?


Revered Legend

Your use case requires a subsearch with inputlookup, since Splunk cannot know which hosts/sources you're expecting data from if the data isn't there:

I also want to know if there are no events whatsoever

Ultra Champion

Yes, I know 🙂

That's what the second inputlookup is for - to generate zero-valued "results" to sum with the tstats output. Maybe I wasn't clear enough about this. But thanks for the heads-up.

I suppose I can't remove either of the subsearches, because I can't "reuse" part of the results of the earlier subsearch - I need to re-run the inputlookup.

The good thing is that the inputlookup subsearches should be very quick compared to the main search, so I wouldn't be tying up search slots for long. (This is an environment with many scheduled searches, and even though I have a reasonably powerful search head cluster, I'm quite conscious of the search concurrency limits.)
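For the record, the zero-valued fallback pattern I'm describing is roughly this (lookup and field names are hypothetical):

| append
    [ | inputlookup monitored_sources.csv
      | eval count=0 ]
| stats sum(count) as count by index host

That way, sources with no events at all still come out as rows with count=0 instead of silently disappearing from the results.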



You could probably do the lookups at the end, after the tstats search and all the transforms:


| tstats ...
| search [ | inputlookup lookup1.csv | fields field1 | ... ]
| search [ | inputlookup lookup2.csv | ... ]


Ultra Champion

I'd still need two subsearches, and I would have to run tstats across all my indexes/sources/hosts. So that's not really an improvement 😉

But thanks for the idea.
