<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Could not send data to output queue (parsingQueue) in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49974#M9503</link>
    <description>Splunk Community thread in Getting Data In: Could not send data to output queue (parsingQueue).</description>
    <pubDate>Wed, 07 Sep 2016 19:46:47 GMT</pubDate>
    <dc:creator>amehta_splunk</dc:creator>
    <dc:date>2016-09-07T19:46:47Z</dc:date>
    <item>
      <title>Could not send data to output queue (parsingQueue)</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49966#M9495</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;

&lt;P&gt;I have universal forwarder monitoring a number of directories and forwarding to an indexer.&lt;BR /&gt;
On the forwarder, there are repeating entries in the splunkd.log file:&lt;/P&gt;

&lt;P&gt;03-04-2013 12:12:39.503 +0000 INFO  TailingProcessor - Could not send data to output queue (parsingQueue), retrying...&lt;BR /&gt;
03-04-2013 12:12:44.506 +0000 INFO  TailingProcessor -   ...continuing.&lt;BR /&gt;
03-04-2013 12:12:54.543 +0000 INFO  TailingProcessor - Could not send data to output queue (parsingQueue), retrying...&lt;BR /&gt;
03-04-2013 12:13:09.551 +0000 INFO  TailingProcessor -   ...continuing.&lt;BR /&gt;
03-04-2013 12:13:14.568 +0000 INFO  TailingProcessor - Could not send data to output queue (parsingQueue), retrying...&lt;BR /&gt;
03-04-2013 12:13:19.571 +0000 INFO  TailingProcessor -   ...continuing.&lt;BR /&gt;
03-04-2013 12:13:29.607 +0000 INFO  TailingProcessor - Could not send data to output queue (parsingQueue), retrying...&lt;BR /&gt;
03-04-2013 12:13:34.609 +0000 INFO  TailingProcessor -   ...continuing.&lt;BR /&gt;
03-04-2013 12:13:49.644 +0000 INFO  TailingProcessor - Could not send data to output queue (parsingQueue), retrying...&lt;BR /&gt;
03-04-2013 12:13:54.647 +0000 INFO  TailingProcessor -   ...continuing.&lt;/P&gt;

&lt;P&gt;etc.&lt;/P&gt;

&lt;P&gt;The main effect of this seems to be a delay of ~10 mins to data being searchable.&lt;/P&gt;

&lt;P&gt;I do not believe the indexer is the bottleneck. I have Splunk on Splunk installed, and according to that the indexer's queues are pretty much at zero.&lt;/P&gt;

&lt;P&gt;I have increased the persistent queue size to 100 MB on the forwarder but it still gets the error.&lt;BR /&gt;
The metrics.log on the forwarder shows that the queues don't seem to be near full (either the parsingqueue or the tcpout queue):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;03-04-2013 12:13:42.031 +0000 INFO  Metrics - group=queue, name=tcpout_sec-mgr-01_9997, max_size=512000, current_size=65736, largest_size=65736, smallest_size=0
03-04-2013 12:13:42.031 +0000 INFO  Metrics - group=queue, name=aeq, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=0, smallest_size=0
03-04-2013 12:13:42.031 +0000 INFO  Metrics - group=queue, name=aq, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=0, smallest_size=0
03-04-2013 12:13:42.031 +0000 INFO  Metrics - group=queue, name=auditqueue, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=0, smallest_size=0
03-04-2013 12:13:42.031 +0000 INFO  Metrics - group=queue, name=fschangemanager_queue, max_size_kb=5120, current_size_kb=0, current_size=0, largest_size=0, smallest_size=0
03-04-2013 12:13:42.031 +0000 INFO  Metrics - group=queue, name=indexqueue, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=0, smallest_size=0
03-04-2013 12:13:42.031 +0000 INFO  Metrics - group=queue, name=nullqueue, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=0, smallest_size=0
03-04-2013 12:13:42.031 +0000 INFO  Metrics - group=queue, name=parsingqueue, max_size_kb=102400, current_size_kb=101811, current_size=2434, largest_size=2556, smallest_size=2417
03-04-2013 12:13:42.031 +0000 INFO  Metrics - group=queue, name=tcpin_queue, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=0, smallest_size=0
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;CPU is low on both boxes.&lt;/P&gt;

&lt;P&gt;On forwarder, &lt;CODE&gt;splunk list monitor | wc -l&lt;/CODE&gt; gives 14264&lt;/P&gt;

&lt;P&gt;On indexer &lt;CODE&gt;metrics.log&lt;/CODE&gt; has no instances of &lt;CODE&gt;blocked&lt;/CODE&gt;&lt;BR /&gt;
On forwarder &lt;CODE&gt;metrics.log&lt;/CODE&gt; has a few instances of &lt;CODE&gt;blocked=true&lt;/CODE&gt;, but &lt;CODE&gt;current_size&lt;/CODE&gt; is always low compared to &lt;CODE&gt;max_size_kb&lt;/CODE&gt;:&lt;/P&gt;

&lt;P&gt;Example:&lt;BR /&gt;
    Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=102400, current_size_kb=102399, current_size=1682, largest_size=1689, smallest_size=1662&lt;/P&gt;
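&lt;P&gt;A note on reading these numbers: &lt;CODE&gt;current_size&lt;/CODE&gt; counts queued &lt;EM&gt;events&lt;/EM&gt; while &lt;CODE&gt;current_size_kb&lt;/CODE&gt; counts &lt;EM&gt;kilobytes&lt;/EM&gt;, so comparing the event count against &lt;CODE&gt;max_size_kb&lt;/CODE&gt; can be misleading. A minimal Python sketch, parsing the sample line above, makes the distinction concrete:&lt;/P&gt;

```python
# Sketch: parse one metrics.log queue line (sample copied from above) and
# compute how full the queue is. Key point: current_size counts events,
# while current_size_kb counts kilobytes -- a queue can look small by
# event count yet be completely full by bytes.
line = ("03-04-2013 12:13:42.031 +0000 INFO  Metrics - group=queue, "
        "name=parsingqueue, blocked=true, max_size_kb=102400, "
        "current_size_kb=102399, current_size=1682, largest_size=1689, "
        "smallest_size=1662")

# Split off the key=value section after "Metrics - " and build a dict.
fields = dict(kv.split("=", 1) for kv in line.split(" - ", 1)[1].split(", "))
fill_pct = 100 * int(fields["current_size_kb"]) / int(fields["max_size_kb"])
print(f"{fields['name']}: {fill_pct:.1f}% full by bytes, "
      f"{fields['current_size']} events queued")
# prints: parsingqueue: 100.0% full by bytes, 1682 events queued
```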

&lt;P&gt;Any ideas would be really appreciated. I don't know what is causing the slowness or how to fix it.&lt;/P&gt;</description>
      <pubDate>Mon, 28 Sep 2020 13:26:17 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49966#M9495</guid>
      <dc:creator>philyeo42</dc:creator>
      <dc:date>2020-09-28T13:26:17Z</dc:date>
    </item>
    <item>
      <title>Re: Could not send data to output queue (parsingQueue)</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49967#M9496</link>
      <description>&lt;P&gt;Out of interest, does blocked=true appear anywhere in the metrics.log on the indexer or forwarder?&lt;/P&gt;</description>
      <pubDate>Mon, 04 Mar 2013 12:49:39 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49967#M9496</guid>
      <dc:creator>Drainy</dc:creator>
      <dc:date>2013-03-04T12:49:39Z</dc:date>
    </item>
    <item>
      <title>Re: Could not send data to output queue (parsingQueue)</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49968#M9497</link>
      <description>&lt;P&gt;Also, how many files is the forwarder monitoring? On the forwarder, run this command&lt;/P&gt;

&lt;P&gt;&lt;CODE&gt;splunk list monitor&lt;/CODE&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 04 Mar 2013 13:14:31 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49968#M9497</guid>
      <dc:creator>lguinn2</dc:creator>
      <dc:date>2013-03-04T13:14:31Z</dc:date>
    </item>
    <item>
      <title>Re: Could not send data to output queue (parsingQueue)</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49969#M9498</link>
      <description>&lt;P&gt;On the indexer I do not see any blocked=true.&lt;BR /&gt;
On the forwarder there are a couple of entries over several days, but the numbers look odd: Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=102400, current_size_kb=102399, current_size=1682, largest_size=1689, smallest_size=1662&lt;/P&gt;

&lt;P&gt;On forwarder:&lt;BR /&gt;
splunk list monitor | wc -l&lt;BR /&gt;
14264&lt;/P&gt;

&lt;P&gt;(I had to raise ulimits on the OS and increase max_fd in limits.conf on the Splunk forwarder.)&lt;/P&gt;</description>
      <pubDate>Mon, 28 Sep 2020 13:26:19 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49969#M9498</guid>
      <dc:creator>philyeo42</dc:creator>
      <dc:date>2020-09-28T13:26:19Z</dc:date>
    </item>
    <item>
      <title>Re: Could not send data to output queue (parsingQueue)</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49970#M9499</link>
      <description>&lt;P&gt;If you are monitoring anywhere near 14,000 files on a forwarder - I'll bet that this is your problem. You can increase the file descriptors, etc. but you will probably still have performance issues. A ten minute delay in indexing is actually pretty darn good considering the work that Splunk is doing. I'll bet that the forwarder is consuming more CPU and memory than it should, too.&lt;/P&gt;

&lt;P&gt;Even if only a portion of these files are actively being updated, Splunk will monitor ALL of them. This means that Splunk will examine the mod time of each file in a round-robin fashion. Over and over again, even though nothing has (and maybe never will) change. Because Splunk can't know which files will or won't be updated.&lt;/P&gt;

&lt;P&gt;This is obviously a huge waste of machine time if most of the files are &lt;EM&gt;not&lt;/EM&gt; being updated. Here are some steps that you could take:&lt;/P&gt;

&lt;OL&gt;
&lt;LI&gt;Remove the older files.&lt;/LI&gt;
&lt;LI&gt;Rename the older files, perhaps to xyz.OLD. Blacklist files using the regex &lt;CODE&gt;\.OLD$&lt;/CODE&gt; and Splunk will skip them.&lt;/LI&gt;
&lt;LI&gt;Use the &lt;CODE&gt;ignoreOlderThan = &amp;lt;time window&amp;gt;&lt;/CODE&gt; in inputs.conf - but BE CAREFUL. &lt;CODE&gt;ignoreOlderThan&lt;/CODE&gt; causes the monitored input to stop checking files for updates if their modtime has passed this threshold. So if you set it for 14d, then you can't ever add a file older than 2 weeks into the directory. (Well, you can, but Splunk will ignore it.)&lt;/LI&gt;
&lt;/OL&gt;
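&lt;P&gt;For illustration, options 2 and 3 could be combined in inputs.conf along these lines (the monitored path here is hypothetical):&lt;/P&gt;

```ini
# inputs.conf -- illustrative sketch; the monitored path is hypothetical
[monitor:///var/log/syslog-archive]
# Skip files renamed with a .OLD suffix (option 2)
blacklist = \.OLD$
# Stop checking files whose modtime is older than 14 days (option 3)
ignoreOlderThan = 14d
```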

&lt;P&gt;If you &lt;EM&gt;must&lt;/EM&gt; monitor this many files, consider installing 2 copies of the forwarder. Split the monitoring between them by assigning them different directories. I would try to keep the total number of files being monitored by a forwarder under 5,000 if possible.&lt;/P&gt;</description>
      <pubDate>Mon, 04 Mar 2013 13:51:03 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49970#M9499</guid>
      <dc:creator>lguinn2</dc:creator>
      <dc:date>2013-03-04T13:51:03Z</dc:date>
    </item>
    <item>
      <title>Re: Could not send data to output queue (parsingQueue)</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49971#M9500</link>
      <description>&lt;P&gt;Thanks for the tips. Ideally this server (the raw syslog server) will KEEP a full set of raw logs, so I don't really want to delete them.&lt;/P&gt;

&lt;P&gt;I have already set &lt;CODE&gt;ignoreOlderThan = 2d&lt;/CODE&gt;, BUT it is interesting to note that the file list from "list monitor" still contains all the entries, including files from several days ago.&lt;/P&gt;

&lt;P&gt;There are ~260 logs each for today, yesterday and the day before, so a total of approx 800 logs that should be being monitored if it honours the ignoreOlderThan. I guess it still scans them all to check whether they are older than the threshold...&lt;/P&gt;

&lt;P&gt;Might have to go with option 2 then.&lt;/P&gt;</description>
      <pubDate>Mon, 04 Mar 2013 14:08:42 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49971#M9500</guid>
      <dc:creator>philyeo42</dc:creator>
      <dc:date>2013-03-04T14:08:42Z</dc:date>
    </item>
    <item>
      <title>Re: Could not send data to output queue (parsingQueue)</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49972#M9501</link>
      <description>&lt;P&gt;OK. This still doesn't work.&lt;BR /&gt;
There are now &amp;lt;1000 files monitored and the parsingqueue is mostly full (~200 MB). CPU usage is under 20% and Splunk is hardly using it. Why can't it keep up?&lt;/P&gt;</description>
      <pubDate>Mon, 04 Mar 2013 16:54:52 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49972#M9501</guid>
      <dc:creator>philyeo42</dc:creator>
      <dc:date>2013-03-04T16:54:52Z</dc:date>
    </item>
    <item>
      <title>Re: Could not send data to output queue (parsingQueue)</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49973#M9502</link>
      <description>&lt;P&gt;Wow, I am humbled to be so opinionated and yet so wrong. Still, I think that 14K files are a lot, and I am not sure why the ignoreOlderThan = 2d wasn't working for you.&lt;/P&gt;

&lt;P&gt;Could you be hitting the 256 KBps limit on the universal forwarder? The forwarder limits its use of the network to 256 KBps to avoid saturating the network on a production machine. You can change this by editing etc/system/local/limits.conf:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[thruput]
maxKBps = 0
# 0 means unlimited
&lt;/CODE&gt;&lt;/PRE&gt;
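&lt;P&gt;As a rough sanity check on scale: at the default cap, draining a full 100 MB queue takes several minutes, which is at least consistent with the ~10 minute indexing delay reported above. A back-of-envelope calculation:&lt;/P&gt;

```python
# Back-of-envelope: time to drain a full 100 MB parsing queue at the
# universal forwarder's default 256 KBps thruput cap.
queue_kb = 102400      # max_size_kb from the metrics.log output above
default_kbps = 256     # default maxKBps on a universal forwarder
drain_seconds = queue_kb / default_kbps
print(f"~{drain_seconds / 60:.1f} minutes to drain")
# prints "~6.7 minutes to drain"
```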

&lt;P&gt;If you continue to have problems, a call to Splunk Support might be next. You have certainly done your homework!&lt;/P&gt;</description>
      <pubDate>Mon, 04 Mar 2013 23:13:12 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49973#M9502</guid>
      <dc:creator>lguinn2</dc:creator>
      <dc:date>2013-03-04T23:13:12Z</dc:date>
    </item>
    <item>
      <title>Re: Could not send data to output queue (parsingQueue)</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49974#M9503</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Indexer discovery used in Multisite clustering&lt;/STRONG&gt;&lt;BR /&gt;
There can be many reasons for this failure, including the ones listed above.  &lt;/P&gt;

&lt;P&gt;An additional reason this message comes up is indexer discovery when using multisite clustering. When using multisite clustering, &lt;EM&gt;every forwarder must have a site&lt;/EM&gt;. If you wish to avoid site affinity, you may use &lt;EM&gt;site0&lt;/EM&gt;.&lt;/P&gt;

&lt;P&gt;The configuration looks like this:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# server.conf
[general]
site = site0
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;References:&lt;BR /&gt;
1. &lt;A href="http://docs.splunk.com/Documentation/Splunk/6.4.3/Indexer/indexerdiscovery#Use_indexer_discovery_in_a_multisite_cluster"&gt;http://docs.splunk.com/Documentation/Splunk/6.4.3/Indexer/indexerdiscovery#Use_indexer_discovery_in_a_multisite_cluster&lt;/A&gt;&lt;/P&gt;

&lt;BLOCKQUOTE&gt;
&lt;P&gt;"Important: When you use indexer discovery with multisite clustering, you must assign a site-id to all forwarders, whether or not you want the forwarders to be site-aware. If you want a forwarder to be site-aware, you assign it a site-id for a site in the cluster, such as "site1," "site2," and so on. If you do not want a forwarder to be site-aware, you assign it the special site-id of "site0". When a forwarder is assigned "site0", it will forward to peers across all sites in the cluster." &lt;/P&gt;
&lt;/BLOCKQUOTE&gt;</description>
      <pubDate>Wed, 07 Sep 2016 19:46:47 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Could-not-send-data-to-output-queue-parsingQueue/m-p/49974#M9503</guid>
      <dc:creator>amehta_splunk</dc:creator>
      <dc:date>2016-09-07T19:46:47Z</dc:date>
    </item>
  </channel>
</rss>

