Alerting

Alerts based on metadata command output

dearimranz
Engager

I am using the following search to see which hosts have stopped sending data to my Splunk server.

| metadata type=hosts index=* | where relative_time(now(), "-1d") > lastTime | convert ctime(lastTime) as Latest_Time | sort -lastTime | table host,Latest_Time

It returns some hosts that have stopped sending data to my Splunk server. For example, the response from this search is:

0 matching events
(This is always 0 when using the metadata command)

13 results over all time
(This is a list of 13 hosts along with time statistics about their events)

What is the difference between events and results when using the metadata command? I want to create an alert that fires when more than 0 results overall are returned, not one based on matching events, which are always 0.

I couldn't create an alert from this output, since alerts are triggered based on the "events" (which are always 0 in this case) and not on the "results" (which are not events but a kind of statistics about hosts). I have tried several ways to create an alert based on the above output but couldn't. Please describe the high-level steps (the logic) for creating an alert based on overall results, and not on matching events, using the Splunk Manager GUI, after I have received the above result from the search.

1 Solution

kristian_kolb
Ultra Champion

As you've noticed, the results presented are not based on the events. That is, the search process will not scan through a gazillion events to find those that match, e.g., a string like 'error'. Instead, the metadata command looks at the metadata for the indexes and buckets. This is a MUCH faster operation, and there will be no matching events, since the events were never searched. That is also why you only have a limited set of search options; there simply is not that much to search on.
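
To answer the alert part of your question directly: if you want to do this without an app, one approach that should work (a rough sketch only, not tested against your exact version; the missing_hosts field name is just an example) is to collapse the result rows into a single count and then trigger on that count with a custom alert condition, rather than on the matching-event count:

| metadata type=hosts index=* | where relative_time(now(), "-1d") > lastTime | stats count as missing_hosts

Save that as a scheduled search in Manager » Searches and reports, and set its alert condition to a custom condition along the lines of:

search missing_hosts > 0

The custom condition is itself a search that runs over the results of your scheduled search, so the alert fires whenever at least one host has gone quiet, independent of the matching-event count, which will always be zero for metadata.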

However, in your case it would probably be a good idea to look at the Splunk Deployment Monitor app, which can help you alert on missing forwarders, sourcetypes, etc.

http://splunk-base.splunk.com/apps/67836/splunk-deployment-monitor

Hope that helps.

/K

dearimranz
Engager

Thanks for the quick response, Kristian. Yes, I have enabled the Deployment Monitor app and was just curious whether it is possible to create alerts without this app.
