ES incident_review_lookup

ziax
New Member

Dear All,

In Splunk ES, is it possible to create a real-time alert for any update to the incident_review KV store? The search query (| inputlookup append=T incident_review_lookup) always lists the entire contents of the incident_review KV store, regardless of the selected time range. I want to use the KV store's time field as the search's reference time (_time). Any help is really appreciated.

Thank you,

1 Solution

martin_mueller
SplunkTrust

You could run a search every five minutes like this:

|`incident_review` | where _time >= relative_time(now(), "-6m@m") AND _time < relative_time(now(), "-m@m")

That'll look at each new entry exactly once. You rarely actually need real-time searches to fulfil requirements - if quick reaction matters, you can schedule this every minute instead and shrink the "indexing delay" shift:

|`incident_review` | where _time >= relative_time(now(), "-75s") AND _time < relative_time(now(), "15s")

Anyone reacting to that alert is hardly going to be quick enough for that 75-second maximum delay to matter.
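
If the incident_review macro isn't available to you, a raw equivalent might look roughly like this (just a sketch - it assumes the KV store's epoch field is named time; the macro's actual expansion may differ):

 | inputlookup append=T incident_review_lookup | eval _time = time | where _time >= relative_time(now(), "-6m@m") AND _time < relative_time(now(), "-1m@m")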


martin_mueller
SplunkTrust

If you have sufficient permissions you could try a different approach: updates to the incident review lookup should leave a trail in _audit. Building a real-time or scheduled search with the regular time range controls should then be quite simple.
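
For example, something along these lines (a sketch only - the exact audit events and fields depend on your ES version, so treat the incident_review filter term as an assumption):

 index=_audit sourcetype=audittrail "incident_review" | table _time user action info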


jkat54
SplunkTrust

Does this work?

 |`incident_review` | convert ctime(time) | eval _time=time

OR

 |`incident_review` | convert ctime(time) ctime(_time) | eval _time=time

I've had to do this so many times, but can't remember exactly how. I'd usually use strftime or strptime, but I can't make it work today... you might try those in place of my convert and evals...

 |`incident_review` | eval _time=strftime(time,"%Y-%m-%d %H:%M:%S.%3N")

ziax
New Member

I have tested all three search queries. No luck 😞 - each one still dumps the entire contents.


jkat54
SplunkTrust

How about this using lookup instead:

 | lookup incident_review_lookup OUTPUT time AS _time rule_id AS rule ....

ziax
New Member

To use the lookup command instead, we need a lookup input field, which we don't have.

Error: Error in 'lookup' command: Must specify one or more lookup fields.


jkat54
SplunkTrust

Very true... Interesting problem which has piqued my interest ;-D.

So could you send the inputlookup results to a summary index and then query the summary index directly, with some time = _time trickery?
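
For example, a scheduled search like this could feed the summary index (a sketch - it assumes the KV store's epoch field is named time and that a summary index called summary_ir exists):

 | inputlookup incident_review_lookup | eval _time = time | where _time >= relative_time(now(), "-6m@m") AND _time < relative_time(now(), "-1m@m") | collect index=summary_ir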


martin_mueller
SplunkTrust

That "summary index" already exists - it's called _audit.

jkat54
SplunkTrust

I guess you lose the real-time aspect then. So it's like you need to copy inputlookup.py to inputlookupRT.py and then dive into the Python so that it yields one result at a time instead of all results.

I am interested in helping with this custom Splunk command if you like.

You might get better and quicker results opening a ticket with Splunk though.


ziax
New Member

Thank you all for the response.

@jkat54,
Both the time field in the lookup and the _time field shown in the results are the same; only the format differs. Please find attached screenshot.
I searched over just the last minute, and it still lists the entire contents of the lookup table.


@martin_mueller
The solution you suggested was my workaround. I need a real-time search rather than a scheduled search, because I want to run a script for each search result (using script argument $8). I think it's only for real-time search alerts that the action can be triggered per search result (trigger condition "Per result").

Thanks


jkat54
SplunkTrust

Thanks for the detail (I don't have a Splunk ES license, so I needed to see). I will write the search ASAP. On mobile right now.



martin_mueller
SplunkTrust

Re your comment under the question: you can run per-result alert actions regardless of whether the search is real-time or scheduled.
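
In savedsearches.conf, a scheduled per-result alert could look roughly like this (a sketch - the stanza and script names are made up, and the script action shown is the legacy one):

 [incident_review_update_alert]
 enableSched = 1
 cron_schedule = * * * * *
 search = `incident_review` | where _time >= relative_time(now(), "-75s") AND _time < relative_time(now(), "15s")
 counttype = number of events
 relation = greater than
 quantity = 0
 # digest mode off = the action fires once per result
 alert.digest_mode = 0
 action.script = 1
 action.script.filename = my_update_script.py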


ziax
New Member

Sorry, my bad. Yes, we can run it for every result.
This will be my workaround if I'm not able to create a real-time alert.

Thanks,


martin_mueller
SplunkTrust

Just go with the "workaround" - it's extremely rare that someone actually needs a real-time alert as opposed to a frequently running scheduled alert.

Additionally, running a real-time search on a lookup feels really wrong. The real-time facility is there to intercept matching events during indexing; that doesn't happen with lookups.


jkat54
SplunkTrust

Lookups are not the same as the KV store. I think you've confused the terms. Lookups are CSV files on the filesystem; the KV store is a MongoDB instance.

It sounds like you want to use the time field that is in the lookup in conjunction with your time picker. If you give us an example of the time field name and a value returned from the lookup, then we can tell you how to use |eval _time=convert(lookupTimeField) or similar so that the time picker applies to the timestamp in the lookup.
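
In the meantime, here's the general pattern I was reaching for (a sketch - it assumes the lookup field is literally named time and holds epoch seconds; run it through strptime first if it's a string). Promote the lookup's timestamp to _time, then honour the time picker via addinfo:

 | inputlookup incident_review_lookup | eval _time = time | addinfo | where _time >= info_min_time AND (info_max_time == "+Infinity" OR _time <= info_max_time) | fields - info_min_time info_max_time info_sid info_search_time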
