<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: tcpin_cooked_pqueue blocking in Monitoring Splunk</title>
    <link>https://community.splunk.com/t5/Monitoring-Splunk/tcpin-cooked-pqueue-blocking/m-p/530414#M6376</link>
    <description>It definitely works better if you add a second pipeline.&lt;BR /&gt;I think this conf presentation will help you a lot: &lt;A href="https://conf.splunk.com/files/2019/slides/FN1402.pdf" target="_blank"&gt;https://conf.splunk.com/files/2019/slides/FN1402.pdf&lt;/A&gt;&lt;BR /&gt;</description>
    <pubDate>Mon, 23 Nov 2020 11:55:15 GMT</pubDate>
    <dc:creator>isoutamo</dc:creator>
    <dc:date>2020-11-23T11:55:15Z</dc:date>
    <item>
      <title>tcpin_cooked_pqueue blocking</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/tcpin-cooked-pqueue-blocking/m-p/437290#M6372</link>
      <description>&lt;P&gt;I've recently made a career change, so I have a new Splunk environment that leverages intermediary forwarders. Two of the intermediary forwarders are having their tcpin_cooked_pqueue fill up, which causes blocking. I would really appreciate some help troubleshooting and coming up with a suggested fix.&lt;/P&gt;

&lt;P&gt;1. Since the tcpin_cooked queue is very early in the pipeline, the first question is obviously whether later queues are filling and causing a backup; that's not the case, only the tcpin_cooked queue is filling. Also, parallel pipelines are enabled and set to 2.&lt;BR /&gt;
2. Once the business day is over, the queue quickly empties.&lt;BR /&gt;&lt;BR /&gt;
3. The intermediary forwarders (where the queue filling happens) are physical systems running SUSE Enterprise Server 11 with a load average around 2 during the day (1 processor, 16 cores, 32 threads), using about 5.5GB of the available 32GB of memory. Network-wise, each is receiving around 300KB/s and transmitting around 3005KB/s, with about 400 forwarders connected to it.&lt;BR /&gt;
4. In terms of ulimits:&lt;BR /&gt;
  virtual address space size: unlimited&lt;BR /&gt;
  data segment size: unlimited&lt;BR /&gt;
  resident memory size: unlimited&lt;BR /&gt;
  stack size: 8388608 bytes [hard maximum: unlimited]&lt;BR /&gt;
  core file size: 1024 bytes [hard maximum: unlimited]&lt;BR /&gt;
  data file size: unlimited&lt;BR /&gt;
  open files: 10240 files&lt;BR /&gt;
  user processes: 256476 processes&lt;BR /&gt;
  cpu time: unlimited&lt;BR /&gt;
  Linux transparent hugepage support, enabled="never" defrag="never"&lt;BR /&gt;
  Linux vm.overcommit setting, value="0"&lt;/P&gt;

&lt;P&gt;The key may be that the sending forwarders typically come in over fairly low-bandwidth connections, so there may be a lot of network connections relative to a fairly low data ingestion rate.&lt;/P&gt;
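
&lt;P&gt;For reference, here is a minimal sketch of the search I use to watch that queue from metrics.log (the host value is a placeholder, so adjust it for your environment):&lt;/P&gt;
&lt;PRE&gt;index=_internal source=*metrics.log* group=queue name=tcpin_cooked_pqueue
    host=YOUR_INTERMEDIATE_HF ```placeholder host; replace with your HF```
| eval pct_full = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(pct_full) AS queue_fill_pct BY host&lt;/PRE&gt;</description>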
      <pubDate>Tue, 29 Sep 2020 23:01:18 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/tcpin-cooked-pqueue-blocking/m-p/437290#M6372</guid>
      <dc:creator>triest</dc:creator>
      <dc:date>2020-09-29T23:01:18Z</dc:date>
    </item>
    <item>
      <title>Re: tcpin_cooked_pqueue blocking</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/tcpin-cooked-pqueue-blocking/m-p/530339#M6373</link>
      <description>&lt;P&gt;Hi!&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; I ran across this while researching parallelism on Heavy Forwarders. Did you ever get a resolution here? I was curious whether you increased your parallel value or not.&lt;BR /&gt;Thanks!&lt;/P&gt;&lt;P&gt;Stephen&lt;/P&gt;</description>
      <pubDate>Sun, 22 Nov 2020 17:38:52 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/tcpin-cooked-pqueue-blocking/m-p/530339#M6373</guid>
      <dc:creator>skirven</dc:creator>
      <dc:date>2020-11-22T17:38:52Z</dc:date>
    </item>
    <item>
      <title>Re: tcpin_cooked_pqueue blocking</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/tcpin-cooked-pqueue-blocking/m-p/530342#M6374</link>
      <description>Based on my experience, it's good to use parallel pipelines on physical machines. Do you have a bottleneck, or what prompted you to look into this?&lt;BR /&gt;Btw, you could add the HFs as indexers in the Monitoring Console (MC) to better analyze what is happening there. On ideas.splunk.com there is a proposal to add HF as its own role in the MC, which you can vote for if that is what you need.&lt;BR /&gt;
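In the meantime, you can check pipeline load directly from metrics.log. A rough sketch (the host value is a placeholder):&lt;BR /&gt;&lt;PRE&gt;index=_internal source=*metrics.log* group=pipeline
    host=YOUR_HF ```placeholder host; replace with your HF```
| stats sum(cpu_seconds) AS cpu_secs BY name, processor
| sort - cpu_secs&lt;/PRE&gt;
r. Ismo</description>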
      <pubDate>Sun, 22 Nov 2020 22:00:25 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/tcpin-cooked-pqueue-blocking/m-p/530342#M6374</guid>
      <dc:creator>isoutamo</dc:creator>
      <dc:date>2020-11-22T22:00:25Z</dc:date>
    </item>
    <item>
      <title>Re: tcpin_cooked_pqueue blocking</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/tcpin-cooked-pqueue-blocking/m-p/530411#M6375</link>
      <description>&lt;P&gt;For my use case, I'm actually trying to facilitate better Search Peer data distribution. So if my Intermediate HF (which is a VM &lt;span class="lia-unicode-emoji" title=":disappointed_face:"&gt;😞&lt;/span&gt;) had 2 pipelines, would it not then accept 2 streams and send to potentially 2 different indexers at the same time? So if I have 5 HFs, I could theoretically feed 10 Search Peers at the same time?&lt;/P&gt;&lt;P&gt;That may be slightly off topic here, so I may create a new topic. And I'll have to find the Idea for the HF on the DMC. That would be cool!&lt;/P&gt;
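&lt;P&gt;For context, this is roughly the outputs.conf I have in mind (a sketch only; the indexer names are placeholders). My understanding is that each ingestion pipeline runs its own output processor, so with 2 pipelines one HF can hold connections to 2 indexers at once:&lt;/P&gt;&lt;PRE&gt;# outputs.conf on the intermediate HF (sketch; server names are placeholders)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
# each pipeline load-balances across this list independently
autoLBFrequency = 30&lt;/PRE&gt;&lt;P&gt;Stephen&lt;/P&gt;</description>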
      <pubDate>Mon, 23 Nov 2020 11:36:09 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/tcpin-cooked-pqueue-blocking/m-p/530411#M6375</guid>
      <dc:creator>skirven</dc:creator>
      <dc:date>2020-11-23T11:36:09Z</dc:date>
    </item>
    <item>
      <title>Re: tcpin_cooked_pqueue blocking</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/tcpin-cooked-pqueue-blocking/m-p/530414#M6376</link>
      <description>It definitely works better if you add a second pipeline.&lt;BR /&gt;I think this conf presentation will help you a lot: &lt;A href="https://conf.splunk.com/files/2019/slides/FN1402.pdf" target="_blank"&gt;https://conf.splunk.com/files/2019/slides/FN1402.pdf&lt;/A&gt;&lt;BR /&gt;
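In practice it's one setting in server.conf on the HF (a minimal sketch; choose the count for your own hardware and restart Splunk afterwards):&lt;BR /&gt;&lt;PRE&gt;# server.conf on the intermediate HF (sketch)
[general]
# each additional pipeline set uses up to roughly one more CPU core
parallelIngestionPipelines = 2&lt;/PRE&gt;</description>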
      <pubDate>Mon, 23 Nov 2020 11:55:15 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/tcpin-cooked-pqueue-blocking/m-p/530414#M6376</guid>
      <dc:creator>isoutamo</dc:creator>
      <dc:date>2020-11-23T11:55:15Z</dc:date>
    </item>
    <item>
      <title>Re: tcpin_cooked_pqueue blocking</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/tcpin-cooked-pqueue-blocking/m-p/530415#M6377</link>
      <description>&lt;P&gt;Thanks! I was at .conf last year, and totally didn't see this! I was dealing with other tech debt at the time. We've made a lot of progress since then. &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt; I'll have to pull the talk and listen to it.&lt;/P&gt;&lt;P&gt;-Stephen&lt;/P&gt;</description>
      <pubDate>Mon, 23 Nov 2020 11:59:47 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/tcpin-cooked-pqueue-blocking/m-p/530415#M6377</guid>
      <dc:creator>skirven</dc:creator>
      <dc:date>2020-11-23T11:59:47Z</dc:date>
    </item>
  </channel>
</rss>

