<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Log inconsistently lagging behind in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/Log-inconsistantly-lagging-behind/m-p/707568#M116976</link>
    <description>&lt;P&gt;Yes, they're UFs. I already set&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;[thruput]

maxKBps = 0&lt;/LI-CODE&gt;
&lt;P&gt;in limits.conf in the app.&lt;/P&gt;</description>
    <pubDate>Thu, 26 Dec 2024 15:31:33 GMT</pubDate>
    <dc:creator>tungpx</dc:creator>
    <dc:date>2024-12-26T15:31:33Z</dc:date>
    <item>
      <title>Log inconsistently lagging behind</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Log-inconsistantly-lagging-behind/m-p/707530#M116972</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I have a case where the logs from 4 hosts are lagging behind. The reason I say "inconsistently" is that the lag varies from 5 to 30 minutes, and sometimes the logs don't show up at all. When the logs don't show up for 30 minutes or more, I go to forwarder management, disable/enable the apps, and restart splunkd; the logs then continue with only 1-2 seconds of lag.&lt;/P&gt;&lt;P&gt;The other hosts also lag behind at peak hours, but only by 1 or 2 minutes (at most 5 minutes for sources with a large volume of logs).&lt;/P&gt;&lt;P&gt;I admit that our indexer cluster does not meet the IOPS requirements, but for 4 particular hosts to visibly underperform is quite concerning.&lt;/P&gt;&lt;P&gt;Can someone show me the steps to debug and solve this problem?&lt;/P&gt;</description>
      <pubDate>Tue, 24 Dec 2024 10:50:32 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Log-inconsistantly-lagging-behind/m-p/707530#M116972</guid>
      <dc:creator>tungpx</dc:creator>
      <dc:date>2024-12-24T10:50:32Z</dc:date>
    </item>
    <item>
      <title>Re: Log inconsistently lagging behind</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Log-inconsistantly-lagging-behind/m-p/707539#M116973</link>
      <description>&lt;P&gt;1) Share which OS version, which UF version, and roughly how many inputs are on those hosts.&lt;/P&gt;&lt;P&gt;2) Search _internal for your hostname (IP) for error codes:&lt;/P&gt;&lt;P&gt;2.1) Is the UF generating errors?&lt;/P&gt;&lt;P&gt;2.2) Does the UF get indexing paused/congested reports back from the IDX tier?&lt;/P&gt;&lt;P&gt;2.3) Does the UF show round-robin to all IDX elements, or is there a discrepancy in outputs.conf?&lt;/P&gt;&lt;P&gt;Let's start with these.&lt;/P&gt;</description>
      <pubDate>Tue, 24 Dec 2024 16:09:51 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Log-inconsistantly-lagging-behind/m-p/707539#M116973</guid>
      <dc:creator>dural_yyz</dc:creator>
      <dc:date>2024-12-24T16:09:51Z</dc:date>
    </item>
    <item>
      <title>Re: Log inconsistently lagging behind</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Log-inconsistantly-lagging-behind/m-p/707545#M116974</link>
      <description>&lt;P&gt;After some investigation, the answers are:&lt;/P&gt;&lt;P&gt;1) The OS is Red Hat Linux 8 and the Splunk UF version is 9.1.1. We have 2 Splunk deployments, Splunk Enterprise and Splunk Security. On my end (Splunk Enterprise) there are only 2 inputs, but on the Security end there are a lot, with 2 apps, HG_TA_Splunk_Nix and TA_nmon (roughly 40 inputs each), over the 4 hosts.&lt;/P&gt;&lt;P&gt;2.1) There are some ERRORs, but nothing noteworthy. The errors are below:&lt;/P&gt;&lt;P&gt;+700 ERROR TcpoutputQ [11073 TcpOutEloop] - Unexpected event id=&amp;lt;eventid&amp;gt;&amp;nbsp;-&amp;gt; a benign ERROR as per Splunk dev&lt;/P&gt;&lt;P&gt;+700 ERROR ExecProcessor [32056 ExecProcessor] - message from "$SPLUNKHOME/HG_TA_Splunk_Nix/bin/update.sh" &lt;A href="https://repo.napas.local/centos/7/updates/x84_64/repodata/repomd.xml:" target="_blank"&gt;https://repo.napas.local/centos/7/updates/x84_64/repodata/repomd.xml:&lt;/A&gt;&amp;nbsp;[Errorno14] curl#7 - "Failed to connect to repo.napas.local:80; No route to host"&lt;/P&gt;&lt;P&gt;2.2) HealthReporter shows:&lt;/P&gt;&lt;P&gt;+700 INFO PeriodHealthReporter - feature="Ingestion latency" color=red/yellow indicator="ingestion_latency_gap_multiplier" due_to_threshold_value=1 measured_value=26684 reason=Events from tracker.log have not been seen for the last 26684 seconds, which is more than the red threshold ( 210 seconds ). This typically occurs when indexing or forwarding are falling behind or are blocked." node_type=indicator node_path=splunkd.file_monitor_input.ingestion_latency.ingestion_latency_gap_multiplier.&lt;/P&gt;&lt;P&gt;2.3) Searching _internal with |stats count by destIP shows:&lt;/P&gt;&lt;P&gt;idx1: 14248&lt;/P&gt;&lt;P&gt;idx2: 8014&lt;/P&gt;&lt;P&gt;idx3: 7963&lt;/P&gt;&lt;P&gt;idx4: 7809&lt;/P&gt;&lt;P&gt;This imbalance is more concerning than I thought it would be.&lt;/P&gt;&lt;P&gt;2.4) Another finding: the log is now lagging 1 hour behind, but is still being pulled/ingested. The internal log, however, has stopped; the time now is 9:08, but the last internal log is from 8:19, with no errors. Its last entry is:&lt;/P&gt;&lt;P&gt;+700 Metrics - group=thruput, name=uncooked_output, instantaneous_kbps=0.000, instantaneous_eps=0.000, average_kbps=0.000, total_k_processed=0.000, kb=0.000, ev=0, interval_sec=60&lt;/P&gt;</description>
      <pubDate>Wed, 25 Dec 2024 03:45:44 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Log-inconsistantly-lagging-behind/m-p/707545#M116974</guid>
      <dc:creator>tungpx</dc:creator>
      <dc:date>2024-12-25T03:45:44Z</dc:date>
    </item>
    <item>
      <title>Re: Log inconsistently lagging behind</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Log-inconsistantly-lagging-behind/m-p/707554#M116975</link>
      <description>&lt;P&gt;Are these UFs? Did you change the default thruput limit?&lt;/P&gt;</description>
      <pubDate>Wed, 25 Dec 2024 11:53:24 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Log-inconsistantly-lagging-behind/m-p/707554#M116975</guid>
      <dc:creator>PickleRick</dc:creator>
      <dc:date>2024-12-25T11:53:24Z</dc:date>
    </item>
    <item>
      <title>Re: Log inconsistently lagging behind</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Log-inconsistantly-lagging-behind/m-p/707568#M116976</link>
      <description>&lt;P&gt;Yes, they're UFs. I already set&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;[thruput]

maxKBps = 0&lt;/LI-CODE&gt;
&lt;P&gt;in limits.conf in the app.&lt;/P&gt;</description>
      <pubDate>Thu, 26 Dec 2024 15:31:33 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Log-inconsistantly-lagging-behind/m-p/707568#M116976</guid>
      <dc:creator>tungpx</dc:creator>
      <dc:date>2024-12-26T15:31:33Z</dc:date>
    </item>
    <item>
      <title>Re: Log inconsistently lagging behind</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Log-inconsistantly-lagging-behind/m-p/707728#M116988</link>
      <description>&lt;P&gt;Here is an excellent .conf presentation on how to find the reason for this lag:&amp;nbsp;&lt;A href="https://conf.splunk.com/files/2019/slides/FN1570.pdf" target="_blank"&gt;https://conf.splunk.com/files/2019/slides/FN1570.pdf&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Dec 2024 14:25:08 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Log-inconsistantly-lagging-behind/m-p/707728#M116988</guid>
      <dc:creator>isoutamo</dc:creator>
      <dc:date>2024-12-30T14:25:08Z</dc:date>
    </item>
  </channel>
</rss>