Alerting

How do I include multiple indexes in one search, trigger an alert if any one index has no data, and include the index name in the alert?

varad_joshi
Communicator

I have configured 3 different alerts for 3 indexes, and I get an alert if there is no data in an index when the search runs. I am trying to consolidate the 3 searches into 1.

So out of 3 indexes (say xyz, abc, lmn), if 2 have data and 1 doesn't, then it should trigger an alert with the name of the index that didn't have data.

Any help with the logic?

1 Solution

acharlieh
Influencer

First of all, if you're using only default fields like index or splunk_server, you should be using metasearch, as that saves you from having to decompress the raw events.

If I wanted to alert based on the number of events in 3 indexes, my first pass at a search would look like:

| metasearch index=xyz OR index=abc OR index=lmn
| stats count by index
| append [ noop | stats count | eval index=split("xyz;abc;lmn",";") | mvexpand index ] 
| stats max(count) as count by index
| where count = 0

Let's break this down:

The metasearch is just like search but with only the metadata. We count the number of events per index. We then artificially append a record for each index with a count of 0 (offhand I'm not sure if the mvexpand is needed or not). Using stats again, we take the maximum count per index, which drops the appended 0 for any index whose real count is greater than 0 while keeping the 0 rows for the others. Finally, the where clause keeps only rows that fail our threshold, and we alert if we have any results.
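As a hypothetical example of the final output: if xyz and abc both returned events in the search window but lmn returned none, the only row to survive the where clause would be the one for the empty index, which is what the alert would report:

index, count
lmn, 0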

Yes, there are other optimizations that could be done. Interestingly, if you had a lookup that listed all the desired indexes to check, you could use inputlookup both in the append that generates the 0-count rows and in a subsearch that builds the metasearch arguments, so you don't have to write the index names multiple times. If you're interested in all or most Splunk indexes, the rest command against the /servicesNS/-/-/data/indexes endpoint could take the place of the two inputlookup commands.
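A rough sketch of that rest-based variant (assuming the endpoint returns each index name in the title field and that your role is allowed to read it) might look something like:

| metasearch [| rest /servicesNS/-/-/data/indexes | rename title as index | fields index]
| stats count by index
| append [| rest /servicesNS/-/-/data/indexes | rename title as index | fields index | eval count=0]
| stats max(count) as count by index
| where count = 0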

EDIT: If I had a lookup called report_indexes that looked like:

index, notify
xyz, alice@example.com
abc, bob@example.com
lmn, eve@example.com

My search could look like this:

| metasearch [inputlookup report_indexes | fields index]
| stats count by index
| append [ inputlookup report_indexes | eval count = 0 ]
| stats max(count) as count first(notify) as email_to by index
| where count = 0

Now your results include who to email when this happens for each index... You could additionally pipe to the sendresults command to have Splunk email the index owners that you specify in the lookup file for each index 🙂
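If you don't have such a lookup yet, one way to create it (a quick sketch; the index names and addresses below are just placeholders) is to build the rows with makeresults and write them out with outputlookup, then reference the resulting file, or a lookup definition pointing at it, from the searches above:

| makeresults count=3
| streamstats count as n
| eval index=case(n=1, "xyz", n=2, "abc", n=3, "lmn")
| eval notify=case(n=1, "alice@example.com", n=2, "bob@example.com", n=3, "eve@example.com")
| fields index, notify
| outputlookup report_indexes.csv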

somesoni2
SplunkTrust
SplunkTrust

Try something like this

| gentimes start=-1 
| eval index="xyz abc lmn" 
| table index 
| makemv index 
| mvexpand index 
| eval DataComing="N" 
| append [search index=xyz OR index=lmn OR index=abc earliest=-1h@h latest=now | dedup index | table index | eval DataComing="Y"] 
| stats values(DataComing) as DataComing by index 
| where mvcount(DataComing)=1

Raise an alert if the above search returns any rows.
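For example (hypothetical output), if xyz and abc had events in the last hour but lmn did not, lmn would only ever get the generated DataComing="N" row, so the search would return a single row and the alert would fire:

index, DataComing
lmn, N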

Pawlub1
Engager

I like this query, but I have indices with long names that incorporate underscores "_", and the split command is not working in this scenario. Quotations did not work, but asterisks did. I do not want to use asterisks, as I will be generating alerts and do not want extra characters in the message.

Please let me know how to use split with this naming convention for indices. 

| metasearch index=AB_123_CDE OR index=CD_345_EFG OR index=EF_678_HIJ
| stats count by index
| append [ noop | stats count | eval index=split("AB_123_CDE;CD_345_EFG;EF_678_HIJ",";") | mvexpand index ]
| stats max(count) as count by index
| where count = 0

varad_joshi
Communicator

Thanks for the answer. Metasearch was a new thing for me. I eventually ended up using inputcsv, but this was indeed informative.
Cheers.

drumster88
Explorer

If you need to check this hourly, then you can do something like this. The outer search finds events from the previous hour, and the NOT subsearch removes any index that also has events in the current hour, leaving the names of indexes that had data an hour ago but none now:

index=xyz OR index=lmn OR index=abc earliest=-2h@h latest=-1h@h NOT [search index=xyz OR index=lmn OR index=abc earliest=-1h@h latest=now | table index] | table index

jensonthottian
Contributor

This should resolve your issue.

index=xyz OR index=lmn OR index=abc earliest=-1h@h latest=now | dedup index | table index

Set the alert to trigger when the search returns fewer than 3 results (Alert Condition: number of results is less than 3).

varad_joshi
Communicator

Thanks @jensonthottian for the answer.

This will tell me if any of the indexers does not have events. However, what I would also like to know is which indexer didn't get the events. If there was no data in abc, then the alert should say that no data was found in abc.

ppablo
Retired

Hi @varad_joshi

Be sure not to confuse indexers (http://docs.splunk.com/Splexicon:Indexer) with indexes (http://docs.splunk.com/Splexicon:Index). These are two very different things. I edited your question content to clarify any confusion for other users.

varad_joshi
Communicator

Hi @ppablo, thank you so much for correcting that and clearing up the confusion. Cheers
