<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Splunk Service Getting Down Suddenly in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/Splunk-Service-Getting-Down-Suddenly/m-p/568020#M197958</link>
    <description>&lt;P&gt;Is it hijacking? I was asking about the error messages pointed out by&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/222995"&gt;@medsy&lt;/a&gt;&amp;nbsp;above.&lt;BR /&gt;&lt;BR /&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/213957"&gt;@richgalloway&lt;/a&gt;&amp;nbsp;- do you know anything about them?&lt;/P&gt;</description>
    <pubDate>Wed, 22 Sep 2021 12:27:16 GMT</pubDate>
    <dc:creator>data_beast</dc:creator>
    <dc:date>2021-09-22T12:27:16Z</dc:date>
    <item>
      <title>Splunk Service Getting Down Suddenly</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Splunk-Service-Getting-Down-Suddenly/m-p/541932#M153476</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I have an issue with my Splunk Enterprise deployment. There are three instances in my architecture: a Search Head, an Indexer, and another Search Head dedicated to Splunk Enterprise Security.&lt;/P&gt;&lt;P&gt;The issue is that the Splunk service (splunkd) goes down suddenly. There are no errors in the deployment.&lt;/P&gt;&lt;P&gt;If anyone has an explanation or suggestion, I'm open to hearing it.&lt;/P&gt;</description>
      <pubDate>Tue, 02 Mar 2021 10:32:35 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Splunk-Service-Getting-Down-Suddenly/m-p/541932#M153476</guid>
      <dc:creator>medsy</dc:creator>
      <dc:date>2021-03-02T10:32:35Z</dc:date>
    </item>
    <item>
      <title>Re: Splunk Service Getting Down Suddenly</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Splunk-Service-Getting-Down-Suddenly/m-p/541966#M153491</link>
      <description>&lt;P&gt;splunkd.log should have a log message explaining the sudden exit.&amp;nbsp; If it does not, check /var/log/messages for OOM (Out Of Memory) Killer messages.&lt;/P&gt;</description>
      <pubDate>Tue, 02 Mar 2021 13:46:10 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Splunk-Service-Getting-Down-Suddenly/m-p/541966#M153491</guid>
      <dc:creator>richgalloway</dc:creator>
      <dc:date>2021-03-02T13:46:10Z</dc:date>
    </item>
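The advice above (check /var/log/messages for OOM Killer activity when splunkd.log shows nothing) can be sketched programmatically. This is a minimal illustration, not from the thread: the sample log line and the `find_oom_kills` helper are assumptions modeled on the typical kernel OOM-killer wording, and real syslog paths and formats vary by distribution.

```python
import re

# Typical kernel OOM-killer line: "Out of memory: Killed process <pid> (<name>) ..."
# (wording varies slightly across kernel versions; this pattern is an assumption).
OOM_RE = re.compile(r"Out of memory: Killed process (\d+) \((\S+)\)", re.IGNORECASE)

def find_oom_kills(log_text, process="splunkd"):
    """Return (pid, name) tuples for OOM kills of the given process."""
    return [(int(pid), name)
            for pid, name in OOM_RE.findall(log_text)
            if name == process]

# Illustrative sample line, not taken from the poster's system.
sample = ("Mar  2 13:40:01 host kernel: Out of memory: "
          "Killed process 1234 (splunkd) total-vm:8388608kB")
print(find_oom_kills(sample))  # [(1234, 'splunkd')]
```

In practice one would feed this the contents of /var/log/messages (or `dmesg`/`journalctl -k` output) covering the time window when splunkd disappeared.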
    <item>
      <title>Re: Splunk Service Getting Down Suddenly</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Splunk-Service-Getting-Down-Suddenly/m-p/541996#M153507</link>
      <description>&lt;P&gt;&lt;STRONG&gt;I got these messages in the Splunk log:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;JobsFeed - Custom progress indicator signaled progress of &amp;gt; 100% for sid=scheduler__admin_U0EtRW5kcG9pbnRQcm90ZWN0aW9u__RMD5a764635f76e33232_at_1614165600_20923&lt;BR /&gt;02-24-2021 13:43:08.726 +0100 WARN JobsFeed - Custom progress indicator signaled progress of &amp;gt; 100% for sid=scheduler__admin_U0EtRW5kcG9pbnRQcm90ZWN0aW9u__RMD5a6fe1e3b4418dcd2_at_1614132000_11964&lt;BR /&gt;02-24-2021 13:43:08.727 +0100 WARN JobsFeed - Custom progress indicator signaled progress of &amp;gt; 100% for sid=scheduler__admin_U0EtRW5kcG9pbnRQcm90ZWN0aW9u__RMD542c307ea0744c18c_at_1614049200_17022&lt;BR /&gt;02-24-2021 13:43:08.815 +0100 WARN JobsFeed - Custom progress indicator signaled progress of &amp;gt; 100% for sid=scheduler__admin_U0EtRW5kcG9pbnRQcm90ZWN0aW9u__RMD542c307ea0744c18c_at_1614135600_12923&lt;BR /&gt;02-24-2021 13:43:08.829 +0100 WARN JobsFeed - Custom progress indicator signaled progress of &amp;gt; 100% for sid=scheduler__admin_U0EtRW5kcG9pbnRQcm90ZWN0aW9u__RMD5a6fe1e3b4418dcd2_at_1614045600_15966&lt;BR /&gt;02-24-2021 13:43:09.068 +0100 WARN JobsFeed - Custom progress indicator signaled progress of &amp;gt; 100% for sid=scheduler__admin_U0EtRW5kcG9pbnRQcm90ZWN0aW9u__RMD5a764635f76e33232_at_1614165600_20923&lt;BR /&gt;02-24-2021 13:43:09.081 +0100 WARN JobsFeed - Custom progress indicator signaled progress of &amp;gt; 100% for sid=scheduler__admin_U0EtRW5kcG9pbnRQcm90ZWN0aW9u__RMD5a6fe1e3b4418dcd2_at_1614132000_11964&lt;BR /&gt;02-24-2021 13:43:09.082 +0100 WARN JobsFeed - Custom progress indicator signaled progress of &amp;gt; 100% for sid=scheduler__admin_U0EtRW5kcG9pbnRQcm90ZWN0aW9u__RMD542c307ea0744c18c_at_1614049200_17022&lt;BR /&gt;02-24-2021 13:43:09.117 +0100 WARN JobsFeed - Custom progress indicator signaled progress of &amp;gt; 100% for sid=scheduler__admin_U0EtRW5kcG9pbnRQcm90ZWN0aW9u__RMD542c307ea0744c18c_at_1614135600_12923&lt;BR 
/&gt;02-24-2021 13:43:09.124 +0100 WARN JobsFeed - Custom progress indicator signaled progress of &amp;gt; 100% for sid=scheduler__admin_U0EtRW5kcG9pbnRQcm90ZWN0aW9u__RMD5a6fe1e3b4418dcd2_at_1614045600_15966&lt;BR /&gt;02-24-2021 13:43:36.734 +0100 WARN LineBreakingProcessor - Truncating line because limit of 10000 bytes has been exceeded with a line length &amp;gt;= 10468 - data_source="/opt/splunk/var/log/splunk/audit.log", data_host="svlsplunkses", data_sourcetype="splunk_audit"&lt;BR /&gt;02-24-2021 13:44:06.793 +0100 WARN LocalAppsAdminHandler - Using deprecated capabilities for write: admin_all_objects or edit_local_apps. See enable_install_apps in limits.conf&lt;BR /&gt;02-24-2021 13:44:06.897 +0100 WARN LocalAppsAdminHandler - Using deprecated capabilities for write: admin_all_objects or edit_local_apps. See enable_install_apps in limits.conf&lt;BR /&gt;02-24-2021 13:44:08.345 +0100 WARN LocalAppsAdminHandler - Using deprecated capabilities for write: admin_all_objects or edit_local_apps. See enable_install_apps in limits.conf&lt;BR /&gt;02-24-2021 13:45:03.609 +0100 WARN DispatchManager - The instance is approaching the maximum number of historical searches that can be run concurrently.&lt;BR /&gt;02-24-2021 13:45:03.681 +0100 WARN DispatchManager - The instance is approaching the maximum number of historical searches that can be run concurrently.&lt;BR /&gt;02-24-2021 13:45:05.010 +0100 WARN DispatchManager - The instance is approaching the maximum number of historical searches that can be run concurrently.&lt;BR /&gt;02-24-2021 13:45:05.087 +0100 WARN DispatchManager - The instance is approaching the maximum number of historical searches that can be run concurrently.&lt;BR /&gt;02-24-2021 13:45:06.970 +0100 WARN LocalAppsAdminHandler - Using deprecated capabilities for write: admin_all_objects or edit_local_apps. 
See enable_install_apps in limits.conf&lt;BR /&gt;02-24-2021 13:45:07.041 +0100 WARN DispatchManager - The instance is approaching the maximum number of historical searches that can be run concurrently.&lt;BR /&gt;02-24-2021 13:45:07.149 +0100 WARN DispatchManager - The instance is approaching the maximum number of historical searches that can be run concurrently.&lt;BR /&gt;02-24-2021 13:45:07.152 +0100 WARN LocalAppsAdminHandler - Using deprecated capabilities for write: admin_all_objects or edit_local_apps. See enable_install_apps in limits.conf&lt;BR /&gt;02-24-2021 13:45:07.256 +0100 WARN DispatchManager - The instance is approaching the maximum number of historical searches that can be run concurrently.&lt;BR /&gt;02-24-2021 13:45:07.697 +0100 WARN LineBreakingProcessor - Truncating line because limit of 10000 bytes has been exceeded with a line length &amp;gt;= 10955 - data_source="/opt/splunk/var/log/splunk/audit.log", data_host="svlsplunkses", data_sourcetype="splunk_audit"&lt;BR /&gt;02-24-2021 13:45:08.477 +0100 WARN DispatchManager - The instance is approaching the maximum number of historical searches that can be run concurrently.&lt;BR /&gt;02-24-2021 13:45:08.547 +0100 WARN DispatchManager - The instance is approaching the maximum number of historical searches that can be run concurrently.&lt;BR /&gt;02-24-2021 13:45:09.609 +0100 WARN DispatchManager - The instance is approaching the maximum number of historical searches that can be run concurrently.&lt;BR /&gt;02-24-2021 13:45:09.672 +0100 WARN DispatchManager - The instance is approaching the maximum number of historical searches that can be run concurrently.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 02 Mar 2021 15:21:23 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Splunk-Service-Getting-Down-Suddenly/m-p/541996#M153507</guid>
      <dc:creator>medsy</dc:creator>
      <dc:date>2021-03-02T15:21:23Z</dc:date>
    </item>
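Among the logs above, the repeated DispatchManager warning means the instance is close to its concurrent historical-search ceiling. In limits.conf that ceiling is derived from base_max_searches plus max_searches_per_cpu times the CPU core count (6 and 1 by default, though both are tunable, so treat the numbers below as defaults, not a statement about this poster's system):

```python
# Sketch of Splunk's default concurrent historical-search limit
# (limits.conf defaults: base_max_searches=6, max_searches_per_cpu=1;
# check the deployment's own limits.conf, since both values are tunable).
def max_concurrent_searches(cpu_cores, base_max_searches=6, max_searches_per_cpu=1):
    """Approximate ceiling that the DispatchManager warning refers to."""
    return base_max_searches + max_searches_per_cpu * cpu_cores

print(max_concurrent_searches(8))  # an 8-core search head -> 14
```

If a search head regularly approaches this ceiling, the usual remedies are staggering scheduled searches, reducing concurrent ad-hoc load, adding cores, or (with care) raising the limits.conf values.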
    <item>
      <title>Re: Splunk Service Getting Down Suddenly</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Splunk-Service-Getting-Down-Suddenly/m-p/567994#M197950</link>
      <description>&lt;P&gt;Does anyone know anything about the errors below?&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;WARN JobsFeed - Custom progress indicator signaled progress of &amp;gt; 100%&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;Unfortunately, this is not explained in the docs &lt;span class="lia-unicode-emoji" title=":disappointed_face:"&gt;😞&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 22 Sep 2021 10:18:26 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Splunk-Service-Getting-Down-Suddenly/m-p/567994#M197950</guid>
      <dc:creator>data_beast</dc:creator>
      <dc:date>2021-09-22T10:18:26Z</dc:date>
    </item>
    <item>
      <title>Re: Splunk Service Getting Down Suddenly</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Splunk-Service-Getting-Down-Suddenly/m-p/568018#M197957</link>
      <description>&lt;P&gt;Please don't hijack threads.&amp;nbsp; Post a new question.&lt;/P&gt;</description>
      <pubDate>Wed, 22 Sep 2021 12:17:34 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Splunk-Service-Getting-Down-Suddenly/m-p/568018#M197957</guid>
      <dc:creator>richgalloway</dc:creator>
      <dc:date>2021-09-22T12:17:34Z</dc:date>
    </item>
    <item>
      <title>Re: Splunk Service Getting Down Suddenly</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Splunk-Service-Getting-Down-Suddenly/m-p/568020#M197958</link>
      <description>&lt;P&gt;Is it hijacking? I was asking about the error messages pointed out by&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/222995"&gt;@medsy&lt;/a&gt;&amp;nbsp;above.&lt;BR /&gt;&lt;BR /&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/213957"&gt;@richgalloway&lt;/a&gt;&amp;nbsp;- do you know anything about them?&lt;/P&gt;</description>
      <pubDate>Wed, 22 Sep 2021 12:27:16 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Splunk-Service-Getting-Down-Suddenly/m-p/568020#M197958</guid>
      <dc:creator>data_beast</dc:creator>
      <dc:date>2021-09-22T12:27:16Z</dc:date>
    </item>
    <item>
      <title>Re: Splunk Service Getting Down Suddenly</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Splunk-Service-Getting-Down-Suddenly/m-p/568065#M197971</link>
      <description>&lt;P&gt;Yes, it's a hijacking.&amp;nbsp; The OP is about Splunk going down, not about a specific log message.&lt;/P&gt;</description>
      <pubDate>Wed, 22 Sep 2021 14:39:51 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Splunk-Service-Getting-Down-Suddenly/m-p/568065#M197971</guid>
      <dc:creator>richgalloway</dc:creator>
      <dc:date>2021-09-22T14:39:51Z</dc:date>
    </item>
  </channel>
</rss>

