Splunk Search

Alerting based on no events by server

ft_kd02
Path Finder

Hi all,

I'm setting up an alerting process that monitors different servers on a single index and sends an alert if no events are fired over a 24-hour period. It's set to run at midnight and look back over the last 24 hours. If no events are found for any of the hosts, it should send an email with the details of that host. I'd like to set up one alert, if possible, rather than one alert per host.

I should specify that host is internal, not the 'splunk_server', but each host is one of our servers.

Here's what I tried so far:

index=#### sourcetype=#### source=/usr/app/*/logs/#####.txt
| rex field=source "\/[^\/]+\/[^\/]+\/(?<env>[^\/]+)\/.*"   // extract the environment
| stats count by host                                       // count by host
| lookup ######.csv serverLower AS host output IP           // add the IP of the host to the table
| table env, host, IP, count                                // inline table passed to the alert

This gives a correct count by host, but it just returns that count to the email alert. I'd like to send an email only when the count is 0 for a specific host, and include only the details of that host with count=0.

Thanks


somesoni2
Revered Legend

Try something like this:

 

index=#### sourcetype=#### source=/usr/app/*/logs/#####.txt
| stats count by host
| append [| inputlookup ######.csv | rename serverLower AS host | eval count=0]
| stats max(count) as count by host | where count=0
| lookup ######.csv serverLower AS host output IP   // add the IP of the host to the table
| table env, host, IP, count 


ft_kd02
Path Finder

Hi @somesoni2, thanks for your response. Does this solution assume that the lookup has a count as well? I should have included the format: 

server serverLower IP ENV



somesoni2
Revered Legend

No, it doesn't. It creates a new field 'count' with value 0 for every row of the lookup. After the final stats, hosts still showing count=0 are the hosts that are not reporting any data.
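For intuition, the append + `stats max(count)` trick can be sketched outside Splunk. This Python snippet (hypothetical host names and counts, not from the thread) mimics the three steps: count events by host, append a count=0 row for every host in the lookup, then take the max count per host:

```python
# Hypothetical data standing in for the indexed events and the lookup rows.
# Hosts "web01" and "web02" reported events; "db01" did not.
event_counts = {"web01": 120, "web02": 45}   # result of: stats count by host
lookup_hosts = ["web01", "web02", "db01"]    # serverLower column from the CSV

# append: add a count=0 row for every host in the lookup
rows = list(event_counts.items())
rows += [(h, 0) for h in lookup_hosts]

# stats max(count) as count by host: a host that reported keeps its real
# count (max of the real count and 0); a silent host stays at 0
merged = {}
for host, count in rows:
    merged[host] = max(merged.get(host, 0), count)

# where count=0: only the silent hosts remain
silent = [h for h, c in merged.items() if c == 0]
print(silent)  # -> ['db01']
```

The appended zeros guarantee every known host has at least one row, so hosts missing from the main search can no longer silently drop out of the results.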

ft_kd02
Path Finder

I am newish to Splunk (so it's probably me), but I was having trouble getting your solution to work for my use case. I put together this SPL based on your suggestion, and it seems to return what I need. Is it functionally the same as your response?

| inputlookup #######-test.csv   // separated the lookup table into prod / test
| rename serverLower AS host
| eval count=0                   // input all servers, set count to 0
| append
[ search index=##### sourcetype=### source=/usr/app/*/logs/#####.txt
| stats count by host
]
| stats sum(count) AS eventCount by host
| where eventCount=0
| lookup ######-test.csv serverLower AS host output IP, ENV
| table ENV, host, IP, eventCount

This returns all hosts where the event count is 0, and I plan to set it up to run over the last 24 hours on a cron schedule. 


somesoni2
Revered Legend

It is a similar solution. I prefer running the indexed-data search as the primary search and not as a subsearch (like the search you're running as part of the "append" command), since subsearches generally have limitations. So you can basically just reverse the order of the search (run the indexed-data search as primary and the lookup table as the subsearch; it runs better since it's fetching static lookup data), and you should be good.

index=##### sourcetype=### source=/usr/app/*/logs/#####.txt
| stats count by host
| append
[ | inputlookup #######-test.csv   // separated the lookup table into prod / test
| rename serverLower AS host
| eval count=0                     // input all servers, set count to 0
]
| stats sum(count) AS eventCount by host
| where eventCount=0
| lookup ######-test.csv serverLower AS host output IP, ENV
| table ENV, host, IP, eventCount
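The `sum(count)` variant above works the same way as the `max(count)` version: a silent host contributes only its appended 0, so its total stays 0. A minimal Python sketch with hypothetical hosts:

```python
# Hypothetical data: "app01" reported 10 events, "app02" reported none;
# both appear in the lookup with an appended count of 0.
rows = [("app01", 10)]                 # stats count by host (main search)
rows += [("app01", 0), ("app02", 0)]   # appended lookup rows with count=0

# stats sum(count) AS eventCount by host
event_count = {}
for host, count in rows:
    event_count[host] = event_count.get(host, 0) + count

# where eventCount=0: only hosts with no indexed events remain
silent = [h for h, c in event_count.items() if c == 0]
print(silent)  # -> ['app02']
```

Sum and max are interchangeable here because the appended rows are always 0 and the main search contributes at most one row per host.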

ft_kd02
Path Finder

Thanks for the explanation.


ITWhisperer
SplunkTrust
index=#### sourcetype=#### source=/usr/app/*/logs/#####.txt
| stats count by host
| append [| inputlookup ######.csv | rename serverLower AS host | dedup host]
| stats count by host
| where count=1

ft_kd02
Path Finder

Hi ITWhisperer, thanks for your response. Would you be willing to explain how this works as opposed to something like | where count=0?

| rex field=source "\/[^\/]+\/[^\/]+\/(?<env>[^\/]+)\/.*"
| stats count by env, host
| where count=0   // return only host/env combinations where count=0


ITWhisperer
SplunkTrust

Splunk can only count events that exist. Since no events exist in the main search for some hosts, counts won't exist for hosts which don't have events, so you have to add events for the hosts that are missing. By adding an event for every host that exists in the csv file and then counting all the events, hosts with a count of 1 either have events in the main search but not in the csv file, or events in the csv file but not in the main search. Assuming your csv file is up to date, this will help you identify which hosts haven't had any events in the main search.
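The row-counting logic can be sketched in Python with hypothetical hosts. One row per reporting host comes from the main search, one row per known host comes from the CSV; counting rows per host then flags any host present in only one source:

```python
# Hypothetical hosts: "web01" reported events, "db01" did not,
# and the CSV knows about both.
reporting_hosts = ["web01"]        # one row per host after: stats count by host
csv_hosts = ["web01", "db01"]      # deduped hosts from the lookup

# append the CSV rows, then count rows per host:
# a host present in both sources gets 2, a host in only one gets 1
rows = reporting_hosts + csv_hosts
row_counts = {}
for host in rows:
    row_counts[host] = row_counts.get(host, 0) + 1

# where count=1: hosts in exactly one source
mismatched = [h for h, c in row_counts.items() if c == 1]
print(mismatched)  # -> ['db01']
```

Note that, unlike the count=0 approach, this also surfaces hosts that report events but are missing from the CSV, which is why an up-to-date lookup matters here.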

ft_kd02
Path Finder

Thanks for the explanation. Will try and see what I come up with. 
