<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: UF Data Interruption in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/UF-Data/m-p/751990#M119408</link>
    <description>&lt;P&gt;But if your data is destined for both output groups and one group blocks, the other one blocks as well.&lt;/P&gt;</description>
    <pubDate>Thu, 21 Aug 2025 17:31:20 GMT</pubDate>
    <dc:creator>PickleRick</dc:creator>
    <dc:date>2025-08-21T17:31:20Z</dc:date>
    <item>
      <title>UF Data</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/UF-Data/m-p/751863#M119380</link>
      <description>&lt;P&gt;.&lt;/P&gt;</description>
      <pubDate>Fri, 22 Aug 2025 14:11:48 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/UF-Data/m-p/751863#M119380</guid>
      <dc:creator>Priya70</dc:creator>
      <dc:date>2025-08-22T14:11:48Z</dc:date>
    </item>
    <item>
      <title>Re: UF Data Interruption</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/UF-Data/m-p/751871#M119381</link>
      <description>&lt;P class=""&gt;&lt;SPAN class=""&gt;Hi &lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/273171"&gt;@Priya70&lt;/a&gt;,w&lt;/SPAN&gt;&lt;SPAN class=""&gt;ithout seeing the actual splunkd.log entries during the stall periods, its hard to answer. However, based on your symptoms, the most&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class=""&gt;likely cause is &lt;/SPAN&gt;&lt;SPAN class=""&gt;backpressure&lt;/SPAN&gt;&lt;SPAN class=""&gt;.&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp; &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;Why backpressure fits your pattern:&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp; &lt;/SPAN&gt;- High-volume classic logs (Application/Security/System) pause first&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp; &lt;/SPAN&gt;- Lower-volume custom channels (Cisco VPN) continue uninterrupted&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp; &lt;/SPAN&gt;- Multiple input types affected simultaneously (monitor, registry, scripted)&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp; &lt;/SPAN&gt;- Automatic recovery after queues drain&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp; &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;To confirm, check splunkd.log during stall periods for:&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp; &lt;/SPAN&gt;- "queue is full" messages&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp; &lt;/SPAN&gt;- TCP connection errors to indexers&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp; &lt;/SPAN&gt;- Network timeout warnings&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN 
class=""&gt;&amp;nbsp; &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;Other possibilities to rule out:&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp; &lt;/SPAN&gt;- Windows Event Log API resource exhaustion&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp; &lt;/SPAN&gt;- UF memory pressure&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp; &lt;/SPAN&gt;- Windows Event Log service issues&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;index=_internal&lt;/SPAN&gt; host=&amp;lt;UF&amp;gt; source=*metrics.log* OR &lt;SPAN class=""&gt;source=*splunkd.log&lt;/SPAN&gt;* &lt;SPAN class=""&gt;tcpout&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&amp;nbsp;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp; &lt;/SPAN&gt;Hope this helps narrow it down!&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 19 Aug 2025 15:39:07 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/UF-Data/m-p/751871#M119381</guid>
      <dc:creator>sainag_splunk</dc:creator>
      <dc:date>2025-08-19T15:39:07Z</dc:date>
    </item>
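    <!--
The splunkd.log check suggested in the reply above can be sketched as a short script. This is a hypothetical helper, not part of any Splunk product; the exact log message wording varies by Splunk version, so the patterns here are assumptions:

```python
# Scan a local copy of splunkd.log for the backpressure indicators
# mentioned above: "queue is full" messages and tcpout connection errors.
import re

# Assumed patterns; verify against your Splunk version's actual log text.
PATTERNS = [
    re.compile(r"queue is full", re.IGNORECASE),
    re.compile(r"(connect|read) .* (failed|timed out)", re.IGNORECASE),
]

def find_stall_clues(path):
    """Return log lines that suggest output backpressure on the UF."""
    clues = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if any(p.search(line) for p in PATTERNS):
                clues.append(line.rstrip())
    return clues
```

Point it at a copy of $SPLUNK_HOME/var/log/splunk/splunkd.log taken from the affected UF while a stall is in progress.
    -->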
    <item>
      <title>Re: UF Data Interruption</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/UF-Data/m-p/751880#M119383</link>
      <description>&lt;P&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/273171"&gt;@Priya70&lt;/a&gt;&amp;nbsp;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It sounds like the UF might be hitting a resource bottleneck (CPU, memory, disk I/O, or handles), or the Windows Event Log channels may be overwhelmed. If the UF is forwarding to an indexer, intermittent network issues could also create backpressure and stall inputs.&lt;/P&gt;&lt;P&gt;I recommend checking $SPLUNK_HOME/var/log/splunk/splunkd.log for any warnings/errors around the time the data stops; this usually gives good clues on whether it’s resource, input, or connectivity related.&lt;/P&gt;</description>
      <pubDate>Wed, 20 Aug 2025 01:31:48 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/UF-Data/m-p/751880#M119383</guid>
      <dc:creator>kiran_panchavat</dc:creator>
      <dc:date>2025-08-20T01:31:48Z</dc:date>
    </item>
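    <!--
The backpressure check can also be run against the UF's own metrics.log, which periodically reports queue sizes. A hedged sketch: the field names used here (group=queue, name, max_size_kb, current_size_kb) reflect typical metrics.log lines but should be verified against your version:

```python
# Report the worst observed fill ratio per queue from a metrics.log copy.
# A queue that repeatedly sits near 1.0 is a likely backpressure point.
import re

NAME = re.compile(r"group=queue,\s*name=(\S+?),")
CUR = re.compile(r"current_size_kb=(\d+)")
MAX = re.compile(r"max_size_kb=(\d+)")

def queue_fill_ratios(path):
    """Map queue name to its worst observed fill ratio (0.0 to 1.0)."""
    worst = {}
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            n, c, mx = NAME.search(line), CUR.search(line), MAX.search(line)
            if not (n and c and mx):
                continue
            ratio = int(c.group(1)) / max(1, int(mx.group(1)))
            worst[n.group(1)] = max(worst.get(n.group(1), 0.0), ratio)
    return worst
```

If the tcpout queue fills first and the others back up behind it, that points at the network or the indexers rather than the inputs.
    -->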
    <item>
      <title>Re: UF Data Interruption</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/UF-Data/m-p/751980#M119405</link>
      <description>&lt;P&gt;.&lt;/P&gt;</description>
      <pubDate>Fri, 22 Aug 2025 14:10:30 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/UF-Data/m-p/751980#M119405</guid>
      <dc:creator>Priya70</dc:creator>
      <dc:date>2025-08-22T14:10:30Z</dc:date>
    </item>
    <item>
      <title>Re: UF Data Interruption</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/UF-Data/m-p/751990#M119408</link>
      <description>&lt;P&gt;But if your data is destined for both output groups and one group blocks, the other one blocks as well.&lt;/P&gt;</description>
      <pubDate>Thu, 21 Aug 2025 17:31:20 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/UF-Data/m-p/751990#M119408</guid>
      <dc:creator>PickleRick</dc:creator>
      <dc:date>2025-08-21T17:31:20Z</dc:date>
    </item>
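    <!--
The point above is about cloned routing: when events are destined for more than one tcpout group, a block in either group can stall the shared output pipeline. A minimal outputs.conf sketch of that shape; the host names are placeholders, and dropEventsOnQueueFull is shown only as the documented trade-off (it discards data instead of blocking):

```ini
# Hypothetical outputs.conf on the UF; server names are placeholders.
[tcpout]
# Events are cloned to BOTH groups below, so a block in either group
# stalls the shared output pipeline.
defaultGroup = primary_indexers, dr_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997

[tcpout:dr_indexers]
server = idx2.example.com:9997
# Trades data loss for liveness: wait N seconds, then drop events instead
# of blocking. The default (-1) blocks indefinitely.
# dropEventsOnQueueFull = 30
```
    -->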
    <item>
      <title>Re: UF Data Interruption</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/UF-Data/m-p/751996#M119411</link>
      <description>&lt;P&gt;.&lt;/P&gt;</description>
      <pubDate>Fri, 22 Aug 2025 14:10:48 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/UF-Data/m-p/751996#M119411</guid>
      <dc:creator>Priya70</dc:creator>
      <dc:date>2025-08-22T14:10:48Z</dc:date>
    </item>
  </channel>
</rss>

