Alerting

How do I merge the following searches together?

macadminrohit
Contributor

Hi Experts,

I have a tricky situation involving two searches:

1) We get a Windows event log (TYPE=1) when a server goes down, and we have a saved search that runs every 15 minutes. It looks at the last 15 minutes of data, and if it finds an alert it sets the status to RED, e.g. eval health="RED"

2) We have another search, which runs on every 16th minute, that looks for the resolved event (TYPE=2) saying the server has come back up. This saved search again looks at the last 15 minutes of data and sets the status to GREEN, e.g. eval health="GREEN"

I have two questions here:

1) How do I merge these two searches together?

2) Another scenario: the first search runs every 15 minutes, looks at the last 15 minutes of data, and turns the server RED. On its next run, if the event is no longer in the window, it will turn the server GREEN, even though the event is still there further back in time and no corresponding resolved event (TYPE=2) has arrived yet, so ideally the first search should keep it RED. One way to avoid this is to have the first search look at the last 1-2 hours of data, so it finds that event further back in time and keeps the server RED for a good 2 hours rather than just 15 minutes.
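To make 2) concrete, this is the kind of wider-window search I have in mind; it's a minimal sketch only (the 2-hour lookback and the stripped-down field list are placeholders, not what I actually run):

index="netcool" sourcetype="netcool_alerts" ALERTKEY="Failed to Connect to Computer" (TYPE=1 OR TYPE=2) NODE="ISP*" NOT NODE=ISP9* earliest=-2h
| rename NODE as host
| stats latest(TYPE) as TYPE by host
| eval health=if(TYPE=2,"GREEN","RED")

Because stats keeps only the latest event per host, a host would stay RED for the whole 2-hour window unless a TYPE=2 resolved event arrives after the TYPE=1 event.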

Here are my two searches:

index="netcool" sourcetype="netcool_alerts" ALERTKEY="Failed to Connect to Computer" TYPE=1 NODE="ISP*" NOT NODE=ISP9* 
| rename LOCATION as loc NODE as host 
| stats latest(TYPE) as TYPE,latest(_time) as _time by loc host 
| rex field=host "ISP(?<loc>\d+)(?<hostType>\w)$" 
| eval health=if(hostType="F","YELLOW","RED")
| append 
    [| inputlookup host_list.csv 
    | search NOT host=ISP9* 
    | rex field=host "ISP(?<loc>\d+)(?<hostType>\w)$" 
    | table loc host hostType ] 
| eventstats count as occurence_count by host 
| fillnull value=0 TYPE 
| where NOT (occurence_count=2 AND TYPE=0) 
| fillnull value="GREEN" health 
    | eventstats values(eval(case(hostType="A",health))) as A_Health by loc
    | eval A_Health=if(hostType="B",A_Health,"NA")
    | eval health=if(hostType="B" AND health="RED" AND A_Health="RED","RED",
                       if(hostType="B" AND health="RED" AND A_Health="GREEN","YELLOW",health))

Below is the search that turns the server GREEN when it finds a resolved alert for it.

index="netcool" sourcetype="netcool_alerts" ALERTKEY="Failed to Connect to Computer" TYPE=2 NODE="ISP*" NOT NODE=ISP9* 
| rename LOCATION as loc NODE as host 
| stats latest(TYPE) as TYPE, latest(_time) as _time by loc host 
| eval health="GREEN" 

As you can see, in the first search I am appending the full list of servers; that's because I want to treat all the other servers, which don't have any alert, as GREEN.
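In case the pattern isn't obvious, here is the append/dedupe trick on its own (same logic as above, reduced to the minimal fields):

index="netcool" sourcetype="netcool_alerts" ALERTKEY="Failed to Connect to Computer" TYPE=1 
| rename NODE as host 
| stats latest(TYPE) as TYPE by host 
| eval health="RED" 
| append 
    [| inputlookup host_list.csv 
    | table host ] 
| eventstats count as occurrence_count by host 
| fillnull value=0 TYPE 
| where NOT (occurrence_count=2 AND TYPE=0) 
| fillnull value="GREEN" health

A host with an alert appears twice after the append, so the where clause throws away its bare lookup copy (the one with TYPE=0); hosts that only came from the lookup survive with no health value and get filled in as GREEN.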


horsefez
Motivator
 index="netcool" sourcetype="netcool_alerts" ALERTKEY="Failed to Connect to Computer" (TYPE=1 OR TYPE=2) NODE="ISP*" NOT NODE=ISP9* 
 | rename LOCATION as loc NODE as host 
 | stats latest(TYPE) as TYPE,latest(_time) as _time by loc host 
 | rex field=host "ISP(?<loc>\d+)(?<hostType>\w)$" 
 | eval health=case(hostType="F","YELLOW",hostType!="F","RED",TYPE=2,"GREEN")
 | append 
     [| inputlookup host_list.csv 
     | search NOT host=ISP9* 
     | rex field=host "ISP(?<loc>\d+)(?<hostType>\w)$" 
     | table loc host hostType ] 
 | eventstats count as occurence_count by host 
 | fillnull value=0 TYPE 
 | where NOT (occurence_count=2 AND TYPE=0) 
 | fillnull value="GREEN" health 
     | eventstats values(eval(case(hostType="A",health))) as A_Health by loc
     | eval A_Health=if(hostType="B",A_Health,"NA")
     | eval health=if(hostType="B" AND health="RED" AND A_Health="RED","RED",
                        if(hostType="B" AND health="RED" AND A_Health="GREEN","YELLOW",health))