Alerting

Use case and alerting

revanthammineni
Path Finder

Hello all! I’m looking to set up a daily alert based on comparing lookup data with a summary report.

— the lookup has all the hosts that should be reporting.

— the summary report has the hosts that are actually reporting each day, and it runs every midnight.

Example: the lookup has 2000 hosts and the summary report has 1000 hosts. I need to report the delta of 1000 hosts, which is what the alert would be set up for.

How can I achieve this? I’m trying with set and an outer join but haven’t been able to get the result.

MY SEARCH: index=summary source=Daily.report reporting=yes earliest=-2d latest=-1d | table host, ip

Lookup: hosts.csv
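
Roughly what I was trying with the outer join (this assumes the lookup columns are called host and ip); it didn’t give me the delta:

| inputlookup hosts.csv
| join type=outer host
    [ search index=summary source=Daily.report reporting=yes earliest=-2d latest=-1d
    | stats count AS reported BY host ]
| where isnull(reported) ```hosts present in the lookup but absent from the summary```
| table host ip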

 

Please help me find a solution. Thanks in advance!

 


gcusello
SplunkTrust

Hi @revanthammineni,

if you have the list of hosts to monitor in a lookup, the solution is described in hundreds of my answers!

Anyway, if the field in the lookup hosts.csv is called host (if it's different, you have to rename it in the search) and there's also the ip field, you could run something like this:

index=summary source=Daily.report reporting=yes earliest=-2d@d latest=-1d@d 
| eval host=lower(host)
| stats count BY host
| append [ | inputlookup hosts.csv | eval host=lower(host), count=0 | fields host ip count ]
| stats values(ip) AS ip sum(count) AS total BY host
| where total=0
| table host ip 

In this way, the hosts with total=0 are missing, while the ones with total>0 sent logs in the period.

If you like, you could also display all the hosts with their status in a dashboard.
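
For such a dashboard, a sketch of a variant of the same search that keeps every host and adds a status column instead of filtering:

index=summary source=Daily.report reporting=yes earliest=-2d@d latest=-1d@d 
| eval host=lower(host)
| stats count BY host
| append [ | inputlookup hosts.csv | eval host=lower(host), count=0 | fields host ip count ]
| stats values(ip) AS ip sum(count) AS total BY host
| eval status=if(total=0, "missing", "reporting") ```total=0 means the host appears only in the lookup```
| table host ip status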

Only one little hint: done this way, the alert that hosts aren't sending logs arrives too late. It could be better to run this alert not on the summary index but on the index where the logs are stored, and not with a frequency of one day but every 5-10 minutes, so you'll be able to intervene sooner.

Ciao.

Giuseppe


revanthammineni
Path Finder

This logic isn’t working for me because I don’t intend to display the count. I want to know the delta with hostnames.

example:

summary report has hosts: a, b, c, d, e

lookup has: a, b, c, d, e, f, g, h

so I need to compare the two, get the delta “f, g, h”, and have it reported.

Also, we have a massive set of data, on the order of millions of events every minute, so I cannot do this on the regular index. That's why I opted for a summary report.
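
For context, the midnight summary is populated by a scheduled search roughly along these lines (the source index and field names here are placeholders, not the real ones):

index=main earliest=-1d@d latest=@d ```placeholder for the real, very large index```
| stats count BY host ip
| eval reporting="yes"
| collect index=summary source="Daily.report"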

Thanks in advance!


gcusello
SplunkTrust

Hi @revanthammineni,

sorry, I wasn't so clear:

with my search you don't display the count (| table host ip); you use the count to identify the missing hosts from your lookup, because

  • the ones with total>0 are sending logs (we're not interested in how many events),
  • the ones with total=0 are missing, because the value 0 comes only from the lookup and not from the summary (the result you want!).

About the second issue: yes, I understand that you have many events and it isn't possible to analyze two days of logs, but I hinted at analyzing five minutes of logs (even a few million events can easily be managed by Splunk); this way you get the alert for a missing host in near real time and not after one day!

In addition, if you use the | metasearch command (which you can use because you only need the host field), you have a quicker search:

 

| metasearch index=your_index earliest=-5m@m latest=@m 
| eval host=lower(host)
| stats count BY host
| append [ | inputlookup hosts.csv | eval host=lower(host), count=0 | fields host ip count ]
| stats values(ip) AS ip sum(count) AS total BY host
| where total=0
| table host ip 

 

You can schedule it every five minutes and get the alert immediately when a host is missing.
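
If it helps, a sketch of the corresponding scheduled alert as a savedsearches.conf stanza (the stanza name and the email action are only examples):

# savedsearches.conf (sketch)
[Missing hosts - 5 minute check]
enableSched = 1
cron_schedule = */5 * * * *
search = | metasearch index=your_index earliest=-5m@m latest=@m | eval host=lower(host) | stats count BY host | append [ | inputlookup hosts.csv | eval host=lower(host), count=0 | fields host ip count ] | stats values(ip) AS ip sum(count) AS total BY host | where total=0 | table host ip
# trigger when at least one host is missing
counttype = number of events
relation = greater than
quantity = 0
alert.track = 1
action.email = 1
action.email.to = your_team@example.com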

Ciao.

Giuseppe
