<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Ingesting delay and batch data sending in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/694501#M115396</link>
    <description>&lt;P&gt;Right. That was !=, not =.&lt;/P&gt;&lt;P&gt;You're mostly interested in&lt;/P&gt;&lt;PRE&gt;index=_internal component=AutoLoadBalancedConnectionStrategy host=&amp;lt;your_forwarder&amp;gt;&lt;/PRE&gt;</description>
    <pubDate>Sun, 28 Jul 2024 13:03:55 GMT</pubDate>
    <dc:creator>PickleRick</dc:creator>
    <dc:date>2024-07-28T13:03:55Z</dc:date>
    <item>
      <title>Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/694234#M115361</link>
<description>&lt;DIV&gt;Hello,&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;I have a problem with Linux UFs. It seems they are sending data in batches. The period between batches is about 9 minutes, which means the oldest messages in a batch arrive at the indexer with about a 9-minute delay.&lt;/DIV&gt;&lt;DIV&gt;It starts approximately 21 minutes after a restart; during those first 21 minutes the delay is constant and low.&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;All Linux UFs behave in a similar way: the problem starts 21 minutes after the UF restart, but the period differs.&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;The UF versions are 9.2.0.1 and 9.2.1.&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;I have checked:&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;- queue state in the internal logs; it looks OK&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;- UF thruput is set to 10240&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;I have independently tested that after restarting the UF the data comes in with a low and constant delay. After about 21 minutes it stops for about 9 minutes.&lt;/DIV&gt;&lt;DIV&gt;After those 9 minutes, a batch of messages arrives and is indexed, creating a sawtooth pattern in the graph.&lt;/DIV&gt;&lt;DIV&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="indexing_delay_2.png" style="width: 999px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/31883i25E5090A77D27D6F/image-size/large?v=v2&amp;amp;px=999" role="button" title="indexing_delay_2.png" alt="indexing_delay_2.png" /&gt;&lt;/span&gt;&lt;/DIV&gt;&lt;DIV&gt;It doesn't depend on the type of data; it behaves the same for internal UF logs and other logs.&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;I currently collect data using a file monitor input and a journald input.&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;I can't figure out what the problem is.&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;Thanks in advance for your help.&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;Michal&lt;/DIV&gt;
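&lt;P&gt;For reference, the delay graph above was produced with a search along these lines (a sketch; the index name and host are placeholders):&lt;/P&gt;&lt;PRE&gt;index=&amp;lt;your_index&amp;gt; host=&amp;lt;your_forwarder&amp;gt;
| eval lag = _indextime - _time
| timechart span=1m max(lag) AS indexing_delay_seconds&lt;/PRE&gt;</description>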
      <pubDate>Wed, 24 Jul 2024 16:49:35 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/694234#M115361</guid>
      <dc:creator>emzed</dc:creator>
      <dc:date>2024-07-24T16:49:35Z</dc:date>
    </item>
    <item>
      <title>Re: Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/694265#M115362</link>
<description>&lt;P&gt;Any errors on either side of the connection?&lt;/P&gt;
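&lt;P&gt;For example, a quick check like this on both hosts should come back mostly empty (a sketch; fill in your hostnames):&lt;/P&gt;&lt;PRE&gt;index=_internal (host=&amp;lt;your_forwarder&amp;gt; OR host=&amp;lt;your_indexer&amp;gt;) (log_level=ERROR OR log_level=WARN)&lt;/PRE&gt;</description>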
      <pubDate>Thu, 25 Jul 2024 06:31:56 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/694265#M115362</guid>
      <dc:creator>PickleRick</dc:creator>
      <dc:date>2024-07-25T06:31:56Z</dc:date>
    </item>
    <item>
      <title>Re: Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/694272#M115363</link>
<description>&lt;P&gt;UF host for the last 60 minutes with no errors or warnings:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2024-07-25 09_28_17-Search _ Splunk 9.2.0.1.png" style="width: 999px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/31886i84AC4171306789A0/image-size/large?v=v2&amp;amp;px=999" role="button" title="2024-07-25 09_28_17-Search _ Splunk 9.2.0.1.png" alt="2024-07-25 09_28_17-Search _ Splunk 9.2.0.1.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;IDX side:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2024-07-25 09_33_01-Search _ Splunk 9.2.0.1.png" style="width: 999px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/31887i6F3100DAF2579A2E/image-size/large?v=v2&amp;amp;px=999" role="button" title="2024-07-25 09_33_01-Search _ Splunk 9.2.0.1.png" alt="2024-07-25 09_33_01-Search _ Splunk 9.2.0.1.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Still a problem here. This morning we had to reboot the Splunk servers due to an operating system security patch; you can see it at the beginning of the graph. The connection between the UF and IDX had to be re-established, and after an IDX or UF restart there is no delay or batching for a while (about 20 minutes yesterday, about 10 minutes today).&lt;BR /&gt;&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2024-07-25 09_45_34-Search _ Splunk 9.2.0.1.png" style="width: 999px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/31888iE372BB95BF240C9C/image-size/large?v=v2&amp;amp;px=999" role="button" title="2024-07-25 09_45_34-Search _ Splunk 9.2.0.1.png" alt="2024-07-25 09_45_34-Search _ Splunk 9.2.0.1.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 25 Jul 2024 07:50:54 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/694272#M115363</guid>
      <dc:creator>emzed</dc:creator>
      <dc:date>2024-07-25T07:50:54Z</dc:date>
    </item>
    <item>
      <title>Re: Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/694365#M115372</link>
<description>&lt;P&gt;These errors are completely unrelated. You'd need to dig deeper to find something relevant regarding inputs on the receiving side or outputs on the sending side.&lt;/P&gt;&lt;P&gt;And the shape of your graph does look awfully close to a situation with a periodic batch input which then unloads over a thruput-limited connection.&lt;/P&gt;
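&lt;P&gt;One way to check for a thruput limit, assuming the standard thruput events in the forwarder's metrics.log (placeholder host):&lt;/P&gt;&lt;PRE&gt;index=_internal source=*metrics.log* host=&amp;lt;your_forwarder&amp;gt; group=thruput name=thruput
| timechart span=1m avg(instantaneous_kbps) AS inst_kbps avg(average_kbps) AS avg_kbps&lt;/PRE&gt;&lt;P&gt;If the limit is being hit, inst_kbps should plateau near the configured maxKBps.&lt;/P&gt;</description>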
      <pubDate>Fri, 26 Jul 2024 05:24:31 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/694365#M115372</guid>
      <dc:creator>PickleRick</dc:creator>
      <dc:date>2024-07-26T05:24:31Z</dc:date>
    </item>
    <item>
      <title>Re: Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/694391#M115383</link>
<description>&lt;P&gt;I know that these errors are unrelated. I tried to show that the internal logs are not full of "error" messages.&lt;/P&gt;&lt;P&gt;The situation is:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Thruput is not limited (thruput set to 10240)&lt;/LI&gt;&lt;LI&gt;The number of logs is low&lt;/LI&gt;&lt;LI&gt;Logs are written to the files continuously; I checked with "tail -f"&lt;/LI&gt;&lt;LI&gt;For approximately 20 minutes after a UF restart there is no problem&lt;/LI&gt;&lt;LI&gt;After that time the problem appears&lt;/LI&gt;&lt;LI&gt;The problem is:&lt;UL&gt;&lt;LI&gt;Data is buffered somewhere in front of the indexer for approximately 9 minutes. After I restarted the UF or dropped the TCP session, the data was suddenly sent to the indexer.&lt;/LI&gt;&lt;LI&gt;I believe it must be buffered on the UF side; I saw a no-data period and then a data burst on the indexer side.&lt;/LI&gt;&lt;LI&gt;The shape of the graph says the same thing: data sits somewhere for a period of time and is then flushed to the indexer. Older data shows a bigger time difference, newer data a smaller one.&lt;/LI&gt;&lt;/UL&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2024-07-26 12_26_38-Search _ Splunk 9.2.0.1.png" style="width: 999px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/31913iACCEDE2EB8283CBD/image-size/large?v=v2&amp;amp;px=999" role="button" title="2024-07-26 12_26_38-Search _ Splunk 9.2.0.1.png" alt="2024-07-26 12_26_38-Search _ Splunk 9.2.0.1.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Index time&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2024-07-26 12_26_09-Search _ Splunk 9.2.0.1.png" style="width: 999px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/31914i8AECF18063A8270B/image-size/large?v=v2&amp;amp;px=999" role="button" title="2024-07-26 12_26_09-Search _ Splunk 9.2.0.1.png" alt="2024-07-26 12_26_09-Search _ Splunk 9.2.0.1.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;SendQ&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2024-07-26 10_08_01-Clipboard.png" style="width: 890px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/31915iA039EBF60A74E09C/image-size/large?v=v2&amp;amp;px=999" role="button" title="2024-07-26 10_08_01-Clipboard.png" alt="2024-07-26 10_08_01-Clipboard.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;TCPout&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2024-07-26 10_18_32-Search _ Splunk 9.2.0.1.png" style="width: 999px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/31916iE1FFFF4202C3DF76/image-size/large?v=v2&amp;amp;px=999" role="button" title="2024-07-26 10_18_32-Search _ Splunk 9.2.0.1.png" alt="2024-07-26 10_18_32-Search _ Splunk 9.2.0.1.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Queues&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2024-07-26 11_13_09-Search _ Splunk 9.2.0.1.png" style="width: 999px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/31917iB9D56F3C7B849767/image-size/large?v=v2&amp;amp;px=999" role="button" title="2024-07-26 11_13_09-Search _ Splunk 9.2.0.1.png" alt="2024-07-26 11_13_09-Search _ Splunk 9.2.0.1.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Internal messages (clustered)&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2024-07-26 11_27_02-Search _ Splunk 9.2.0.1.png" style="width: 999px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/31918i30C15462F860EEBF/image-size/large?v=v2&amp;amp;px=999" role="button" title="2024-07-26 11_27_02-Search _ Splunk 9.2.0.1.png" alt="2024-07-26 11_27_02-Search _ Splunk 9.2.0.1.png" /&gt;&lt;/span&gt;&lt;/P&gt;
      <pubDate>Fri, 26 Jul 2024 10:40:37 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/694391#M115383</guid>
      <dc:creator>emzed</dc:creator>
      <dc:date>2024-07-26T10:40:37Z</dc:date>
    </item>
    <item>
      <title>Re: Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/694427#M115387</link>
<description>&lt;P&gt;Yeah, you're right. It was a sawtooth going the other way. It looks strange. Are you sure you don't have any network-level issues? And don't you see any other interesting stuff in _internal (outside of the Metrics component) for this forwarder?&lt;/P&gt;</description>
      <pubDate>Fri, 26 Jul 2024 15:06:29 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/694427#M115387</guid>
      <dc:creator>PickleRick</dc:creator>
      <dc:date>2024-07-26T15:06:29Z</dc:date>
    </item>
    <item>
      <title>Re: Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/694499#M115395</link>
<description>&lt;P&gt;I have two weeks off, so I'll continue troubleshooting after that.&lt;/P&gt;&lt;P&gt;In my opinion there is nothing interesting in the _internal log; you can see it in the screenshot. I used the cluster command to reduce the number of events, and there is component != metric in the SPL.&lt;/P&gt;
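&lt;P&gt;The search was along these lines (a sketch; placeholder host):&lt;/P&gt;&lt;PRE&gt;index=_internal host=&amp;lt;your_forwarder&amp;gt; component!=metric
| cluster showcount=true
| table cluster_count _raw
| sort -cluster_count&lt;/PRE&gt;</description>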
      <pubDate>Sun, 28 Jul 2024 11:33:07 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/694499#M115395</guid>
      <dc:creator>emzed</dc:creator>
      <dc:date>2024-07-28T11:33:07Z</dc:date>
    </item>
    <item>
      <title>Re: Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/694501#M115396</link>
      <description>&lt;P&gt;Right. That was !=, not =.&lt;/P&gt;&lt;P&gt;You're mostly interested in&lt;/P&gt;&lt;PRE&gt;index=_internal component=AutoLoadBalancedConnectionStrategy host=&amp;lt;your_forwarder&amp;gt;&lt;/PRE&gt;</description>
      <pubDate>Sun, 28 Jul 2024 13:03:55 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/694501#M115396</guid>
      <dc:creator>PickleRick</dc:creator>
      <dc:date>2024-07-28T13:03:55Z</dc:date>
    </item>
    <item>
      <title>Re: Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696009#M115532</link>
      <description>&lt;P&gt;I looked at the events for the component you mentioned and found that there is only one type of log entry.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2024-08-12 16_04_55-Search _ Splunk 9.2.0.1_autoloadbalancedconnectionstrategy.png" style="width: 999px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/32175iC42C6C468F0D3419/image-size/large?v=v2&amp;amp;px=999" role="button" title="2024-08-12 16_04_55-Search _ Splunk 9.2.0.1_autoloadbalancedconnectionstrategy.png" alt="2024-08-12 16_04_55-Search _ Splunk 9.2.0.1_autoloadbalancedconnectionstrategy.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;I also tried it for the "last 7 days" time range.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 12 Aug 2024 14:18:03 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696009#M115532</guid>
      <dc:creator>emzed</dc:creator>
      <dc:date>2024-08-12T14:18:03Z</dc:date>
    </item>
    <item>
      <title>Re: Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696033#M115537</link>
<description>&lt;P&gt;Which kinds of logs are you collecting? Is it possible that there is some log or input which stalls after it has been read, so the UF just waits for free resources before reading the next one?&lt;/P&gt;&lt;P&gt;Do you have only one pipeline in your UF, or several?&lt;/P&gt;&lt;P&gt;Do you have any performance data from the OS level, and which OS and version are you running?&lt;/P&gt;
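&lt;P&gt;For reference, pipelines are configured in server.conf on the UF; a sketch of the standard setting (the default is 1):&lt;/P&gt;&lt;PRE&gt;# $SPLUNK_HOME/etc/system/local/server.conf on the UF
[general]
parallelIngestionPipelines = 2&lt;/PRE&gt;</description>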
      <pubDate>Mon, 12 Aug 2024 15:57:08 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696033#M115537</guid>
      <dc:creator>isoutamo</dc:creator>
      <dc:date>2024-08-12T15:57:08Z</dc:date>
    </item>
    <item>
      <title>Re: Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696040#M115538</link>
<description>&lt;P&gt;I am collecting logs from some files in /var/log, plus sysmon from journald.&lt;/P&gt;&lt;P&gt;Last 90 minutes:&lt;/P&gt;&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;/opt/splunkforwarder/var/log/splunk/audit.log&lt;/TD&gt;&lt;TD&gt;41&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;/opt/splunkforwarder/var/log/splunk/health.log&lt;/TD&gt;&lt;TD&gt;39&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;/opt/splunkforwarder/var/log/splunk/metrics.log&lt;/TD&gt;&lt;TD&gt;8911&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;/opt/splunkforwarder/var/log/splunk/splunkd.log&lt;/TD&gt;&lt;TD&gt;598&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;/var/log/audit/audit.log&lt;/TD&gt;&lt;TD&gt;7&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;/var/log/messages&lt;/TD&gt;&lt;TD&gt;936&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;/var/log/secure&lt;/TD&gt;&lt;TD&gt;10&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;journald://sysmon&lt;/TD&gt;&lt;TD&gt;919&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;inputs.conf:&lt;/P&gt;&lt;PRE&gt;[monitor:///var/log/syslog]
disabled = 0
sourcetype = syslog
index = linux

[monitor:///var/log/messages]
disabled = 0
sourcetype = syslog
index = linux

[monitor:///var/log/secure]
disabled = 0
sourcetype = linux_secure
index = linux

[monitor:///var/log/auth.log]
disabled = 0
sourcetype = linux_secure
index = linux

[monitor:///var/log/audit/audit.log]
disabled = 0
sourcetype = linux_audit
index = linux

[journald://sysmon]
interval = 5
journalctl-quiet = true
journalctl-include-fields = PRIORITY,_SYSTEMD_UNIT,_SYSTEMD_CGROUP,_TRANSPORT,_PID,_UID,_MACHINE_ID,_GID,_COMM,_EXE
journalctl-exclude-fields = __MONOTONIC_TIMESTAMP,__SOURCE_REALTIME_TIMESTAMP
journalctl-filter = _SYSTEMD_UNIT=sysmon.service
sourcetype = sysmon:linux
index = linux&lt;/PRE&gt;&lt;P&gt;I did not change the number of pipelines; I think the default count is 1.&lt;/P&gt;&lt;P&gt;I will find out the OS version later. I do not have direct access to the OS. I think it is CentOS/RedHat 8 or 9, but I may be wrong.&lt;/P&gt;</description>
      <pubDate>Mon, 12 Aug 2024 16:26:40 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696040#M115538</guid>
      <dc:creator>emzed</dc:creator>
      <dc:date>2024-08-12T16:26:40Z</dc:date>
    </item>
    <item>
      <title>Re: Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696042#M115539</link>
<description>Not so many inputs, and nothing floody. Maybe you should still add another pipeline and check if it helps?&lt;BR /&gt;The number of entries from audit.log is quite low. Can you check whether there are really so few entries at the source?&lt;BR /&gt;If those are the entries from one Linux node over a 90-minute period, that node is really lightly used.</description>
      <pubDate>Mon, 12 Aug 2024 16:42:27 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696042#M115539</guid>
      <dc:creator>isoutamo</dc:creator>
      <dc:date>2024-08-12T16:42:27Z</dc:date>
    </item>
    <item>
      <title>Re: Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696113#M115546</link>
<description>&lt;P&gt;I got direct access to the server again and checked the OS version: it is Red Hat Enterprise Linux release 9.4 (Plow).&lt;/P&gt;&lt;P&gt;I will try to add a pipeline and check if it helps. I am also going to check whether there is something connected with sysmon.&lt;/P&gt;&lt;P&gt;You were right: there were only a few log entries in audit.log during that period; I checked it on the filesystem. After my SSH connection there are more log entries.&lt;/P&gt;&lt;P&gt;Last 90 minutes:&lt;/P&gt;&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;/opt/splunkforwarder/var/log/splunk/audit.log&lt;/TD&gt;&lt;TD&gt;2&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;/opt/splunkforwarder/var/log/splunk/conf.log&lt;/TD&gt;&lt;TD&gt;1&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;/opt/splunkforwarder/var/log/splunk/configuration_change.log&lt;/TD&gt;&lt;TD&gt;3&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;/opt/splunkforwarder/var/log/splunk/health.log&lt;/TD&gt;&lt;TD&gt;26&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;/opt/splunkforwarder/var/log/splunk/metrics.log&lt;/TD&gt;&lt;TD&gt;8975&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;/opt/splunkforwarder/var/log/splunk/splunkd-utility.log&lt;/TD&gt;&lt;TD&gt;10&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;/opt/splunkforwarder/var/log/splunk/splunkd.log&lt;/TD&gt;&lt;TD&gt;1055&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;/opt/splunkforwarder/var/log/watchdog/watchdog.log&lt;/TD&gt;&lt;TD&gt;3&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;/var/log/audit/audit.log&lt;/TD&gt;&lt;TD&gt;1337&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;/var/log/messages&lt;/TD&gt;&lt;TD&gt;9418&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;/var/log/secure&lt;/TD&gt;&lt;TD&gt;543&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;journald://sysmon&lt;/TD&gt;&lt;TD&gt;6482&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I found an interesting correlation. You can see a "gap", or change in behavior, in the graph; it starts after the UF is restarted. There are "Found currently active indexer. Connected to idx=X.X.X.X:9992:0, reuse=1." messages before the UF restart, and they come back 20 minutes after the restart.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2024-08-12 17_55_40-Search _ Splunk 9.2.0.1.png" style="width: 999px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/32191i60EF9D2CCC982741/image-size/large?v=v2&amp;amp;px=999" role="button" title="2024-08-12 17_55_40-Search _ Splunk 9.2.0.1.png" alt="2024-08-12 17_55_40-Search _ Splunk 9.2.0.1.png" /&gt;&lt;/span&gt;&lt;/P&gt;
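&lt;P&gt;The messages above can be charted with something like this (a sketch; placeholder host):&lt;/P&gt;&lt;PRE&gt;index=_internal host=&amp;lt;your_forwarder&amp;gt; component=AutoLoadBalancedConnectionStrategy "Found currently active indexer"
| timechart span=1m count&lt;/PRE&gt;</description>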
      <pubDate>Tue, 13 Aug 2024 09:53:20 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696113#M115546</guid>
      <dc:creator>emzed</dc:creator>
      <dc:date>2024-08-13T09:53:20Z</dc:date>
    </item>
    <item>
      <title>Re: Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696151#M115551</link>
<description>&lt;P&gt;I tried setting parallelIngestionPipelines = 2 in server.conf and the behavior did not change.&lt;BR /&gt;&lt;BR /&gt;I also tried stopping the sysmon daemon and disabling the sysmon journald input. It had no effect on the behavior described above.&lt;/P&gt;</description>
      <pubDate>Tue, 13 Aug 2024 15:46:02 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696151#M115551</guid>
      <dc:creator>emzed</dc:creator>
      <dc:date>2024-08-13T15:46:02Z</dc:date>
    </item>
    <item>
      <title>Re: Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696171#M115552</link>
<description>&lt;P&gt;Based on the number of your log events, it would have been a surprise if that had helped.&lt;/P&gt;&lt;P&gt;Have you looked at the network interface stats to see if there is something weird?&lt;/P&gt;&lt;P&gt;Is it so that this same issue occurs on all your Linux UF nodes? If yes, then it points heavily to some configuration issue!&lt;/P&gt;&lt;P&gt;Can you show your outputs.conf settings exported by btool with the --debug option?&lt;/P&gt;
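&lt;P&gt;That is, on the forwarder, something like:&lt;/P&gt;&lt;PRE&gt;$SPLUNK_HOME/bin/splunk btool outputs list --debug&lt;/PRE&gt;</description>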
      <pubDate>Tue, 13 Aug 2024 17:34:31 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696171#M115552</guid>
      <dc:creator>isoutamo</dc:creator>
      <dc:date>2024-08-13T17:34:31Z</dc:date>
    </item>
    <item>
      <title>Re: Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696187#M115554</link>
<description>&lt;P&gt;I did not find anything weird in the interface stats.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2024-08-13 21_42_03-projekty [SSH_ rockyforwork] - Visual Studio Code.png" style="width: 876px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/32204iF31D632BCA9D07CE/image-size/large?v=v2&amp;amp;px=999" role="button" title="2024-08-13 21_42_03-projekty [SSH_ rockyforwork] - Visual Studio Code.png" alt="2024-08-13 21_42_03-projekty [SSH_ rockyforwork] - Visual Studio Code.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2024-08-13 21_43_14-projekty [SSH_ rockyforwork] - Visual Studio Code.png" style="width: 952px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/32205i2CC5EA2DA8EFF1C0/image-size/large?v=v2&amp;amp;px=999" role="button" title="2024-08-13 21_43_14-projekty [SSH_ rockyforwork] - Visual Studio Code.png" alt="2024-08-13 21_43_14-projekty [SSH_ rockyforwork] - Visual Studio Code.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;A similar problem occurs on all Linux nodes, but the period/delay differs.&lt;/P&gt;&lt;P&gt;Here is the btool output configuration:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2024-08-13 21_56_32-projekty [SSH_ rockyforwork] - Visual Studio Code.png" style="width: 999px;"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/32206i3A1DF0F3A54EBC56/image-size/large?v=v2&amp;amp;px=999" role="button" title="2024-08-13 21_56_32-projekty [SSH_ rockyforwork] - Visual Studio Code.png" alt="2024-08-13 21_56_32-projekty [SSH_ rockyforwork] - Visual Studio Code.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 13 Aug 2024 20:06:29 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696187#M115554</guid>
      <dc:creator>emzed</dc:creator>
      <dc:date>2024-08-13T20:06:29Z</dc:date>
    </item>
    <item>
      <title>Re: Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696221#M115555</link>
<description>I cannot see anything special here.&lt;BR /&gt;Do you have UFs on other OSes like Windows or some Unix, and if so, do those have the same issue?&lt;BR /&gt;Can you post your indexer’s relevant inputs.conf output from btool too?
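&lt;BR /&gt;For example, on the indexer (assuming the splunktcp receiving input is the relevant one):&lt;BR /&gt;&lt;PRE&gt;$SPLUNK_HOME/bin/splunk btool inputs list splunktcp --debug&lt;/PRE&gt;</description>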
      <pubDate>Wed, 14 Aug 2024 05:08:15 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696221#M115555</guid>
      <dc:creator>isoutamo</dc:creator>
      <dc:date>2024-08-14T05:08:15Z</dc:date>
    </item>
    <item>
      <title>Re: Ingesting delay and batch data sending</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696280#M115562</link>
<description>&lt;P&gt;I found out what the problem was. There is a Cribl server between the UF and the indexer, which I mistakenly ruled out as the source of the problem during troubleshooting. I bypassed Cribl for a while and the problem disappeared.&lt;/P&gt;&lt;P&gt;The rest was pretty fast. I found that a persistent queue was enabled for the Linux input/source in "Always On" mode. The persistent queue was not turned on for the Windows input/source, and the Windows logs were OK the whole time. After turning it off for the Linux data, the problem disappeared.&lt;/P&gt;&lt;P&gt;I don't understand why the persistent queue behaves this way, but I don't have time to investigate further. Maybe it's a Cribl bug or a misunderstanding of the functionality. The input queue is not required in this project, so I can leave it off.&lt;/P&gt;&lt;P&gt;For me, it's currently resolved.&lt;/P&gt;&lt;P&gt;Thank you all for your help and your time.&lt;/P&gt;</description>
      <pubDate>Wed, 14 Aug 2024 14:12:11 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Ingesting-delay-and-batch-data-sending/m-p/696280#M115562</guid>
      <dc:creator>emzed</dc:creator>
      <dc:date>2024-08-14T14:12:11Z</dc:date>
    </item>
  </channel>
</rss>

