Getting Data In

How To Alert if SourceType is Not Logging?

skoelpin
SplunkTrust

I have a forwarder configured to monitor 5 directories. Each directory has its own sourcetype, and one of them recently stopped logging. I want to create an alert that fires if a sourcetype has not been indexed in the past 10 minutes. How can I do that?

1 Solution

JDukeSplunk
Builder

I don't know if it's the best way to do it, but I run a simple search against the data, count the events, and alert if the count falls below a threshold.

index=whatever sourcetype=yoursourcetype | stats count(_raw) AS COUNT

Or, if you have a field, count that.

index=whatever sourcetype=yoursourcetype | stats count(FOO) AS COUNT

Then, in the alert's trigger condition, set it to fire when the number of events is less than your threshold.
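For completeness, here is a minimal sketch of that alert as one scheduled search, reusing the placeholder names above. Schedule it every 10 minutes over the last 10 minutes:

    index=whatever sourcetype=yoursourcetype earliest=-10m@m latest=@m
    | stats count AS COUNT
    | where COUNT = 0

Because stats count returns a single row even when no events match, the where clause leaves exactly one result when nothing was indexed. Set the trigger condition to fire when the number of results is greater than zero.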

As a concrete case, I use a search just like this one to catch a daily update process that fails to run.
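Since the original question covers 5 sourcetypes, another option is a single tstats search that checks them all at once. A minimal sketch, assuming they all live in the same index (index=whatever is still a placeholder):

    | tstats latest(_time) AS lastSeen where index=whatever by sourcetype
    | eval minutesAgo=round((now() - lastSeen) / 60)
    | where minutesAgo > 10

Run it over a wide window (say, the last 24 hours) so a sourcetype that went quiet still shows up with an old lastSeen. One caveat: a sourcetype with no events at all in the window will not appear in the results, so this catches stale feeds rather than ones that never logged.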



Raghav2384
Motivator

Hey @skoelpin,

Setting up an alert helps if you only have complaints about one forwarder/host/sourcetype. It gets cumbersome when you have thousands of machines that can do the same thing at random (especially when you do not know which one might skip or break).

  1. Use this query to monitor the status of the forwarders that are reporting to your instance
  2. Run it for hosts that have deviated or look suspicious
  3. Have that data written to an event (see the sketch after the query below)
  4. Build alerts on those events. It is not going to be easy 🙂 I have done something similar, and at least I now know that a triggered alert denotes a data-forwarding issue.

    index=_internal source=*metrics.log group=tcpin_connections
    | eval sourceHost=if(isnull(hostname), sourceHost, hostname)
    | rename connectionType AS connectType
    | eval connectType=case(fwdType=="uf", "univ fwder", fwdType=="lwf", "lightwt fwder", fwdType=="full", "heavy fwder", connectType=="cooked" OR connectType=="cookedSSL", "Splunk fwder", connectType=="raw" OR connectType=="rawSSL", "legacy fwder")
    | eval version=if(isnull(version), "pre 4.2", version)
    | rename version AS Ver
    | fields connectType sourceIp sourceHost destPort kb tcp_eps tcp_Kprocessed tcp_KBps splunk_server Ver
    | eval Indexer=splunk_server
    | eval Hour=relative_time(_time, "@h")
    | stats avg(tcp_KBps) sum(tcp_eps) sum(tcp_Kprocessed) sum(kb) by Hour connectType sourceIp sourceHost destPort Indexer Ver
    | fieldformat Hour=strftime(Hour, "%x %H")
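To sketch steps 3 and 4 (my own wiring, so adjust the names to your environment): schedule the search above and append collect so the hourly stats land in a summary index. The index name summary and source name forwarder_health are placeholders:

    ... | collect index=summary source="forwarder_health"

Then an alert search like the following fires for any host whose stats have stopped arriving in the past hour:

    index=summary source="forwarder_health"
    | stats latest(_time) AS lastSeen by sourceHost
    | where now() - lastSeen > 3600

Set the trigger condition to fire when the number of results is greater than zero.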

Hope this helps and please do not forget to post if you find a better solution.

Thanks,
Raghav

