Splunk Cloud Platform

How to combine two different alerts into one alert with their respective alert names

raghunandan1
Engager

Hi All,

We have index=gems; in that index we have onboarded both GEMS servers and WMS servers, and we created one alert.
The alert name is CBSIT Alert GEMS NFS stale.
Now we want to create an alert for the WMS servers using the same alert.

So a single alert should carry the GEMS alert name when a GEMS server triggers it and the WMS alert name when a WMS server triggers it.

The index=gems contains 7 GEMS servers and 7 WMS servers.

For example, a GEMS server name: sclpisgpgemspapp001

A WMS server name: silpdb5300.ssdc.albert.com

We are using the SPL search below for CBSIT Alert GEMS NFS stale.

Alert name : CBSIT Alert GEMS NFS stale

 

index = "gems" source = "/tmp/unresponsive" sourcetype=cmi:gems_unresponsive | table host _raw| eval timestamp=strftime(now(),"%Y-%m-%d %H:%M:%S")
| eval correlation_id=timestamp.":".host
| eval assignment_group = "CBS IT - Application Hosting - Unix",impact=3, category="Application",subcategory="Repair/Fix" , contact_type="Event", customer="no573", state=4, urgency=3 , ci=host
| eval description = _raw , short_description = "NFS stale on ".host

 

Can you please help us here?

 


Richfez
SplunkTrust

The easiest approach is to just have two alerts.  There's practically *zero* downside to building a new search (you can start with your existing one!) and then creating an alert out of it once it looks like you've got the search sorted out.
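
For instance, a second "CBSIT Alert WMS NFS stale" alert could reuse your search almost verbatim and just swap in whatever sourcetype the WMS events carry - I'm only guessing at that sourcetype name below, so treat it as a placeholder:

index="gems" source="/tmp/unresponsive" sourcetype=cmi:wms_unresponsive ``` guessed sourcetype - use whatever the WMS events actually have ```
| table host _raw
| eval timestamp=strftime(now(),"%Y-%m-%d %H:%M:%S")
| eval correlation_id=timestamp.":".host
| eval assignment_group="CBS IT - Application Hosting - Unix", impact=3, category="Application", subcategory="Repair/Fix", contact_type="Event", customer="no573", state=4, urgency=3, ci=host
| eval description=_raw, short_description="NFS stale on ".host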

That being said, I think you might just need to change

source = "/tmp/unresponsive" sourcetype=cmi:gems_unresponsive

to be able to do both at once.  I'm not sure what you need to change that to though.

MAYBE - if /tmp/unresponsive is the source on both kinds of servers, all it might need is

source = "/tmp/unresponsive" ( sourcetype=cmi:gems_unresponsive OR sourcetype=<whatever the sourcetype is for the other servers> )

And honestly, I'd go back to that core piece of the search (index=foo, source=bar, sourcetype=baz) and *find the events* first.  It should make it more obvious how to get their data in there too.
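
If you really do want a single alert that covers both, here's a rough sketch - assuming the WMS events also land in /tmp/unresponsive, that the WMS sourcetype is fixed up to whatever it really is, and that every GEMS hostname contains "gems" like your example (the WMS one doesn't):

index="gems" source="/tmp/unresponsive" (sourcetype=cmi:gems_unresponsive OR sourcetype=cmi:wms_unresponsive) ``` second sourcetype is still a guess ```
| table host _raw
| eval timestamp=strftime(now(),"%Y-%m-%d %H:%M:%S")
| eval correlation_id=timestamp.":".host
| eval alert_name=if(match(host,"gems"), "CBSIT Alert GEMS NFS stale", "CBSIT Alert WMS NFS stale") ``` pick the name from the hostname pattern ```
| eval assignment_group="CBS IT - Application Hosting - Unix", impact=3, category="Application", subcategory="Repair/Fix", contact_type="Event", customer="no573", state=4, urgency=3, ci=host
| eval description=_raw, short_description="NFS stale on ".host

Then in the alert action you can reference $result.alert_name$ so the ticket or notification title changes depending on which kind of host triggered it.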
