<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Inconsistent ingestion in Dashboards &amp; Visualizations</title>
    <link>https://community.splunk.com/t5/Dashboards-Visualizations/Why-am-I-experiencing-inconsistent-ingestion/m-p/611256#M50108</link>
    <description>&lt;P&gt;Hi&lt;/P&gt;&lt;P&gt;probably you have some issues to get data in?&amp;nbsp;&lt;BR /&gt;If you have MC, you could check there if there are missing forwarders etc.&lt;/P&gt;&lt;P&gt;Another way is check if that index contains data like&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| tstats prestats=t count where index=compare_items by _time, host span=1d
| timechart span=1d count by host&lt;/LI-CODE&gt;&lt;P&gt;Then select different time frame for it. That should show when events has stopped to come into splunk. Then just look from UF (host on previous query) side have there happened anything.&amp;nbsp;&lt;/P&gt;&lt;P&gt;r. Ismo&lt;/P&gt;</description>
    <pubDate>Tue, 30 Aug 2022 08:44:20 GMT</pubDate>
    <dc:creator>isoutamo</dc:creator>
    <dc:date>2022-08-30T08:44:20Z</dc:date>
    <item>
      <title>Why am I experiencing inconsistent ingestion?</title>
      <link>https://community.splunk.com/t5/Dashboards-Visualizations/Why-am-I-experiencing-inconsistent-ingestion/m-p/611253#M50107</link>
      <description>&lt;P&gt;Hello pls I have a problem with a search.&lt;/P&gt;
&lt;P&gt;if I run this search, it has inconsistent ingestion. Here is the search I ran:&lt;/P&gt;
&lt;P&gt;index=compare_items&amp;nbsp;&lt;/P&gt;
&lt;P&gt;if I put a time range of 60mins even 7days, I do not see results. But if I put 30days, I have like million events populated.&lt;/P&gt;
&lt;P&gt;Here is the error message I got from Splunk.:&lt;/P&gt;
&lt;P&gt;configuration for xyz/123/xxx/ took longer time than expected. This usually indicate problem with underlying storage performance.&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;can someone help me if you had similar experience. Thanks&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 30 Aug 2022 12:47:22 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Dashboards-Visualizations/Why-am-I-experiencing-inconsistent-ingestion/m-p/611253#M50107</guid>
      <dc:creator>Slimbanty1</dc:creator>
      <dc:date>2022-08-30T12:47:22Z</dc:date>
    </item>
    <item>
      <title>Re: Inconsistent ingestion</title>
      <link>https://community.splunk.com/t5/Dashboards-Visualizations/Why-am-I-experiencing-inconsistent-ingestion/m-p/611256#M50108</link>
      <description>&lt;P&gt;Hi&lt;/P&gt;&lt;P&gt;probably you have some issues to get data in?&amp;nbsp;&lt;BR /&gt;If you have MC, you could check there if there are missing forwarders etc.&lt;/P&gt;&lt;P&gt;Another way is check if that index contains data like&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| tstats prestats=t count where index=compare_items by _time, host span=1d
| timechart span=1d count by host&lt;/LI-CODE&gt;&lt;P&gt;Then select different time frame for it. That should show when events has stopped to come into splunk. Then just look from UF (host on previous query) side have there happened anything.&amp;nbsp;&lt;/P&gt;&lt;P&gt;r. Ismo&lt;/P&gt;</description>
      <pubDate>Tue, 30 Aug 2022 08:44:20 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Dashboards-Visualizations/Why-am-I-experiencing-inconsistent-ingestion/m-p/611256#M50108</guid>
      <dc:creator>isoutamo</dc:creator>
      <dc:date>2022-08-30T08:44:20Z</dc:date>
    </item>
  </channel>
</rss>