Alerting

How do I include multiple indexes in one search and trigger an alert, including the index name, if any one index has no data?

Communicator

I have configured 3 different alerts for 3 indexes. I get an alert if there is no data in an index when the search fires. I am trying to consolidate the 3 searches into 1.

So out of 3 indexes (say xyz, abc, lmn), if 2 have data and 1 doesn't, it should trigger an alert with the name of the index that didn't have data.

Any help with the logic?

1 Solution

Influencer

First of all, if you're using only default fields like index or splunk_server, you should be using metasearch, as that saves Splunk from having to retrieve and decompress the raw events.

If I wanted to alert based on the number of events in 3 indexes, my first pass at a search would look like this:

| metasearch index=xyz OR index=abc OR index=lmn
| stats count by index
| append [| stats count | eval index=split("xyz;abc;lmn",";") | mvexpand index ]
| stats max(count) as count by index
| where count = 0

Let's break this down:

The metasearch is just like search, but it works only with the metadata. We count the number of events per index. We then artificially append a record for each index with a count of 0 (offhand I'm not sure whether the mvexpand is needed or not). Using stats again, we take the maximum count per index, which discards the artificial 0 for any index whose real count is greater than 0 and keeps it for the others. Finally, we use the where clause to eliminate rows that do not meet our threshold, and alert if we have any results.
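If you want to sanity check what that appended row generator produces (and whether the mvexpand matters), you can run it on its own outside the append; a quick sketch:

| stats count
| eval index=split("xyz;abc;lmn",";")
| mvexpand index

You should get one row per index, each with a count of 0.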

Yes, there are other optimizations that could be made, but interestingly, if you had a lookup that listed all the indexes you want to check, you could use inputlookup both in the append (to generate the 0-count rows) and in a subsearch that builds the metasearch arguments, so you don't have to write the index names multiple times. If you're interested in all or most of your Splunk indexes, the rest command against the /servicesNS/-/-/data/indexes endpoint could take the place of the two inputlookup commands, as in the sketch below.
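For example, a rough, untested sketch of that rest-based variant might look like the following (the /servicesNS/-/-/data/indexes endpoint returns each index's name in its title field, which we rename to index):

| metasearch [| rest /servicesNS/-/-/data/indexes | fields title | rename title as index ]
| stats count by index
| append [| rest /servicesNS/-/-/data/indexes | fields title | rename title as index | eval count = 0 ]
| stats max(count) as count by index
| where count = 0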

EDIT: If I had a lookup called report_indexes that looked like:

index, notify
xyz, alice@example.com
abc, bob@example.com
lmn, eve@example.com

My search could look like this:

| metasearch [| inputlookup report_indexes | fields index ]
| stats count by index
| append [| inputlookup report_indexes | eval count = 0 ]
| stats max(count) as count first(notify) as email_to by index
| where count = 0

Now your results include who to email for each index when this happens... You could additionally pipe to the sendresults command to have Splunk email the index owners you specified in the lookup file for each index 🙂
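If you don't have the sendresults app installed, one rough alternative (a sketch only; the recipient address is a placeholder, and this sends a single email listing every empty index rather than notifying each owner separately) is the built-in sendemail command tacked onto the end of the search above:

| where count = 0
| sendemail to="splunk-admins@example.com" subject="No data found in one or more indexes" sendresults=true inline=true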



Revered Legend

Try something like this

| gentimes start=-1
| eval index="xyz abc lmn"
| table index
| makemv index
| mvexpand index
| eval DataComing="N"
| append [ search index=xyz OR index=lmn OR index=abc earliest=-1h@h latest=now | dedup index | table index | eval DataComing="Y" ]
| stats values(DataComing) as DataComing by index
| where mvcount(DataComing)=1

Raise an alert if the above search returns any rows.

Communicator

Thanks for the answer. Metasearch was a new thing for me. I eventually ended up using inputcsv; however, this was indeed informative.
Cheers.


Explorer

If you need to check this hourly, then you can do something like this:

index=xyz OR index=lmn OR index=abc earliest=-2h@h latest=-1h@h NOT [search index=xyz OR index=lmn OR index=abc earliest=-1h@h latest=now | table index] | table index

Contributor

This should resolve your issue.

index=xyz OR index=lmn OR index=abc earliest=-1h@h latest=now | dedup index | table index

Alert when the search returns fewer than 3 results, i.e. set the alert condition to trigger if the number of results is less than 3.

Communicator

Thanks @jensonthottian for the answer.

This will tell me whether any of the indexers did not have events. However, what I would also like to know is which indexer didn't get the events. If there was no data in abc, then the alert should say that no data was found in abc.


Community Manager

Hi @varad_joshi

Be sure not to confuse Indexers (http://docs.splunk.com/Splexicon:Indexer) with indexes (http://docs.splunk.com/Splexicon:Index). These are two very different things. I edited your question content to clarify any confusion for other users.

Communicator

Hi @ppablo, thank you so much for correcting that and clearing up the confusion. Cheers!
