<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Log Ingestion Failure by Sourcetype in Monitoring Splunk</title>
    <link>https://community.splunk.com/t5/Monitoring-Splunk/Log-Ingestion-Failure-by-Sourcetype/m-p/757949#M11075</link>
    <description>&lt;P&gt;This solution is outside my current budgeting options.&lt;/P&gt;</description>
    <pubDate>Wed, 04 Feb 2026 13:15:15 GMT</pubDate>
    <dc:creator>b17gunnr</dc:creator>
    <dc:date>2026-02-04T13:15:15Z</dc:date>
    <item>
      <title>Log Ingestion Failure by Sourcetype</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Log-Ingestion-Failure-by-Sourcetype/m-p/757905#M11071</link>
      <description>&lt;P&gt;Hello folks,&lt;/P&gt;&lt;P&gt;I have a compliance control requirement to alert when there is a log ingestion failure to Splunk. The desire is to focus at the sourcetype level as opposed to the host level (too many false positives) or index level (loses granularity as sourcetypes increase). The keys to the requirement are that the monitoring must dynamically expand as new sourcetypes come online and that the results must consider the frequency of events on a per-sourcetype basis. For example, a generic 4-hour window wouldn't suffice for a sourcetype getting multiple events every second, nor would it properly handle a sourcetype that receives events once or twice per day.&lt;/P&gt;&lt;P&gt;I've tried the &lt;A href="https://splunkbase.splunk.com/app/2949" target="_self"&gt;Meta Woot&lt;/A&gt; app and while beneficial for other issues, it does not address the control requirements. Has anyone developed a query with reasonable performance times, or found another app to handle compliance logging failures that considers the variance in event frequency rather than an absolute threshold?&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
      <pubDate>Tue, 03 Feb 2026 19:36:20 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Log-Ingestion-Failure-by-Sourcetype/m-p/757905#M11071</guid>
      <dc:creator>b17gunnr</dc:creator>
      <dc:date>2026-02-03T19:36:20Z</dc:date>
    </item>
    <item>
      <title>Re: Log Ingestion Failure by Sourcetype</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Log-Ingestion-Failure-by-Sourcetype/m-p/757918#M11072</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/308735"&gt;@b17gunnr&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I think creating a search yourself might end up being cumbersome and struggle to cover the variance. Have you seen the &lt;A href="https://splunkbase.splunk.com/app/4621" target="_self"&gt;Splunkbase app TrackMe&lt;/A&gt;?&lt;/P&gt;&lt;P&gt;TrackMe is good for monitoring anomalies in ingestion (per host/sourcetype etc.) and looks at things like event count, size, frequency, lag/delay etc.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-unicode-emoji" title=":glowing_star:"&gt;🌟&lt;/span&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;Did this answer help you?&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;If so, please consider:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Adding karma to show it was useful&lt;/LI&gt;&lt;LI&gt;Marking it as the solution if it resolved your issue&lt;/LI&gt;&lt;LI&gt;Commenting if you need any clarification&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Your feedback encourages the volunteers in this community to continue contributing.&lt;/P&gt;</description>
      <pubDate>Tue, 03 Feb 2026 22:20:43 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Log-Ingestion-Failure-by-Sourcetype/m-p/757918#M11072</guid>
      <dc:creator>livehybrid</dc:creator>
      <dc:date>2026-02-03T22:20:43Z</dc:date>
    </item>
    <item>
      <title>Re: Log Ingestion Failure by Sourcetype</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Log-Ingestion-Failure-by-Sourcetype/m-p/757919#M11073</link>
      <description>&lt;P&gt;The TrackMe app is powerful and would do what you want - it requires a bit of investment in time to set up.&lt;/P&gt;&lt;P&gt;&lt;A href="https://splunkbase.splunk.com/app/4621" target="_blank"&gt;https://splunkbase.splunk.com/app/4621&lt;/A&gt;&lt;/P&gt;&lt;P&gt;I've rolled my own with a regular saved search that uses tstats to collect index/sourcetype pairs and saves the results to a lookup, calculating the average latency and min/max gaps between events for each. Alerts then run to check current ingestion against those metrics per index/sourcetype.&lt;/P&gt;&lt;P&gt;There's an investment in time either way - but TrackMe is a good place to start.&lt;/P&gt;</description>
      <pubDate>Tue, 03 Feb 2026 22:41:52 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Log-Ingestion-Failure-by-Sourcetype/m-p/757919#M11073</guid>
      <dc:creator>bowesmana</dc:creator>
      <dc:date>2026-02-03T22:41:52Z</dc:date>
    </item>
    <item>
      <title>Re: Log Ingestion Failure by Sourcetype</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Log-Ingestion-Failure-by-Sourcetype/m-p/757948#M11074</link>
      <description>&lt;P&gt;Unfortunately, this solution exceeds my budget.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 04 Feb 2026 13:14:44 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Log-Ingestion-Failure-by-Sourcetype/m-p/757948#M11074</guid>
      <dc:creator>b17gunnr</dc:creator>
      <dc:date>2026-02-04T13:14:44Z</dc:date>
    </item>
    <item>
      <title>Re: Log Ingestion Failure by Sourcetype</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Log-Ingestion-Failure-by-Sourcetype/m-p/757949#M11075</link>
      <description>&lt;P&gt;This solution is outside my current budgeting options.&lt;/P&gt;</description>
      <pubDate>Wed, 04 Feb 2026 13:15:15 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Log-Ingestion-Failure-by-Sourcetype/m-p/757949#M11075</guid>
      <dc:creator>b17gunnr</dc:creator>
      <dc:date>2026-02-04T13:15:15Z</dc:date>
    </item>
    <item>
      <title>Re: Log Ingestion Failure by Sourcetype</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Log-Ingestion-Failure-by-Sourcetype/m-p/757996#M11079</link>
      <description>&lt;P&gt;I hadn't realised they had switched to a licensed model.&lt;/P&gt;&lt;P&gt;The basic idea behind a roll-your-own technique is to have a lookup file that contains the index, the sourcetype, and the threshold in seconds within which you need to see data - you can create a simple example of all index/sourcetype pairs seen in the previous hour and give each a threshold of 10 minutes, e.g. like this&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| tstats latest(_time) as last_seen count where index=* earliest=-1h@h latest=@h by index sourcetype
| sort index sourcetype
| table index sourcetype
| eval threshold=600
| outputlookup monitor.csv&lt;/LI-CODE&gt;&lt;P&gt;Now you have a control set that you use to look for missing data outside the threshold.&lt;/P&gt;&lt;P&gt;Next, initialise the results file - the SPL is easier if all data is present there from the start.&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| inputlookup monitor.csv 
| fields - threshold
| eval last_seen=now(), missing_data=0
| outputlookup monitor_results.csv&lt;/LI-CODE&gt;&lt;P&gt;Then you can run this as a scheduled alert at the frequency you want - this example can run every minute.&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| tstats max(_time) as last_seen count where [ | inputlookup monitor.csv | fields index sourcetype ] earliest=-1m@m latest=@m by index sourcetype
``` We have data so reset missing indicator ```
| eval missing_data = 0
``` Grab all previous results and combine with what we found ```
| inputlookup monitor_results.csv append=t
| fields - threshold
| stats first(*) as * by index sourcetype
``` Get the threshold and see if the last seen exceeds the configured threshold ```
| lookup monitor.csv index sourcetype OUTPUT threshold
| eval exceeds_threshold = if(now() - last_seen &amp;gt; threshold, 1, 0)
``` Now work out if we need to alert - only alert the first time we exceed the threshold ```
| eval alert=if(exceeds_threshold = 1 AND missing_data = 0, 1, 0)
``` Increment the missing data counter to avoid continual alerts ```
| eval missing_data=if(exceeds_threshold = 1, missing_data + 1, missing_data)

``` Write out these results ```
| outputlookup monitor_results.csv
| where alert = 1&lt;/LI-CODE&gt;&lt;P&gt;The logic for that is&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Search data in the previous minute for the index/sourcetypes you want&lt;/LI&gt;&lt;LI&gt;Add in the results you have collected from previous searches&lt;/LI&gt;&lt;LI&gt;Take the first event by index/sourcetype, i.e. keep all the found results and retain the previous results only where you have no current result&lt;/LI&gt;&lt;LI&gt;Look up the threshold configured for this index/sourcetype - in seconds&lt;/LI&gt;&lt;LI&gt;Check whether last_seen is more than that threshold ago&lt;/LI&gt;&lt;LI&gt;Alert the first time the threshold is exceeded&lt;/LI&gt;&lt;LI&gt;Save all the results back to the results CSV&lt;/LI&gt;&lt;LI&gt;Then retain only those items you want to alert on&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;This will alert the first time an index/sourcetype has not been seen for its configured number of threshold seconds.&lt;/P&gt;&lt;P&gt;NB: This is a starting point, but it gives you the principles of how to manage it.&lt;/P&gt;&lt;P&gt;Hope this helps&lt;/P&gt;</description>
      <pubDate>Wed, 04 Feb 2026 22:10:29 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Log-Ingestion-Failure-by-Sourcetype/m-p/757996#M11079</guid>
      <dc:creator>bowesmana</dc:creator>
      <dc:date>2026-02-04T22:10:29Z</dc:date>
    </item>
    <item>
      <title>Re: Log Ingestion Failure by Sourcetype</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Log-Ingestion-Failure-by-Sourcetype/m-p/758174#M11080</link>
      <description>&lt;P&gt;Excellent starting point, very much appreciate the suggestion and the level of detail explaining the thought process.&lt;/P&gt;</description>
      <pubDate>Mon, 09 Feb 2026 13:32:02 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Log-Ingestion-Failure-by-Sourcetype/m-p/758174#M11080</guid>
      <dc:creator>b17gunnr</dc:creator>
      <dc:date>2026-02-09T13:32:02Z</dc:date>
    </item>
  </channel>
</rss>