<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Any fixes or workarounds for these post 6.5.1 upgrade issues? in Deployment Architecture</title>
    <link>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269260#M10278</link>
    <description>&lt;P&gt;Looks like the scheduling issue is a carryover from 6.5.0 that was not fixed. Hoping there's a workaround somewhere: &lt;A href="https://answers.splunk.com/answers/456812/why-are-alerts-not-working-after-upgrade-to-splunk-1.html"&gt;https://answers.splunk.com/answers/456812/why-are-alerts-not-working-after-upgrade-to-splunk-1.html&lt;/A&gt;&lt;/P&gt;</description>
    <pubDate>Thu, 08 Dec 2016 23:50:15 GMT</pubDate>
    <dc:creator>twinspop</dc:creator>
    <dc:date>2016-12-08T23:50:15Z</dc:date>
    <item>
      <title>Any fixes or workarounds for these post 6.5.1 upgrade issues?</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269257#M10275</link>
      <description>&lt;P&gt;Upgraded my clusters from 6.4.4 to 6.5.1 last night. Things appeared okay, but this morning 2 problems surfaced:&lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt;Scheduled searches are not running on the SHC. [EDIT: If you open the saved search settings and click Save, they show a scheduled time, but they don't actually fire.]&lt;/LI&gt;
&lt;LI&gt;2 of our 10 clustered indexers have filled queues. A restart of Splunk gets things moving again for a few minutes, then it's back to full queues and blocked indexing. No errors are being logged, and there's no indication of why they're blocked, or why they work for 5-10 minutes and then stop.&lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;Anyone else?&lt;/P&gt;

&lt;P&gt;EDIT: The description above was mistaken. Fake-editing the scheduled search (opening it and clicking Save without changes) gives it a "scheduled time" in the future, but it doesn't fire.&lt;/P&gt;

&lt;P&gt;EDIT 2: The scheduling problem looks to be related to a known bug that was due to be fixed in 6.5.1 but apparently wasn't. &lt;A href="https://answers.splunk.com/answers/456812/why-are-alerts-not-working-after-upgrade-to-splunk-1.html"&gt;https://answers.splunk.com/answers/456812/why-are-alerts-not-working-after-upgrade-to-splunk-1.html&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;EDIT 3: The problem referenced in EDIT 2 above was not related, although the error message was similar. See the answer below.&lt;/P&gt;</description>
      <pubDate>Thu, 08 Dec 2016 19:23:34 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269257#M10275</guid>
      <dc:creator>twinspop</dc:creator>
      <dc:date>2016-12-08T19:23:34Z</dc:date>
    </item>
    <item>
      <title>Re: Any fixes or workarounds for these post 6.5.1 upgrade issues?</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269258#M10276</link>
      <description>&lt;P&gt;I have found that these kinds of questions involve more diagnosis than can normally be done in an Answers post.&lt;/P&gt;

&lt;P&gt;I recommend you contact Splunk Support for assistance with this.&lt;/P&gt;</description>
      <pubDate>Thu, 08 Dec 2016 19:35:42 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269258#M10276</guid>
      <dc:creator>bshuler_splunk</dc:creator>
      <dc:date>2016-12-08T19:35:42Z</dc:date>
    </item>
    <item>
      <title>Re: Any fixes or workarounds for these post 6.5.1 upgrade issues?</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269259#M10277</link>
      <description>&lt;P&gt;Done. Still waiting for help. &lt;span class="lia-unicode-emoji" title=":disappointed_face:"&gt;😞&lt;/span&gt; effectively down in the meantime.&lt;/P&gt;</description>
      <pubDate>Thu, 08 Dec 2016 23:25:32 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269259#M10277</guid>
      <dc:creator>twinspop</dc:creator>
      <dc:date>2016-12-08T23:25:32Z</dc:date>
    </item>
    <item>
      <title>Re: Any fixes or workarounds for these post 6.5.1 upgrade issues?</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269260#M10278</link>
      <description>&lt;P&gt;Looks like the scheduling issue is a carryover from 6.5.0 that was not fixed. Hoping there's a workaround somewhere: &lt;A href="https://answers.splunk.com/answers/456812/why-are-alerts-not-working-after-upgrade-to-splunk-1.html"&gt;https://answers.splunk.com/answers/456812/why-are-alerts-not-working-after-upgrade-to-splunk-1.html&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 08 Dec 2016 23:50:15 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269260#M10278</guid>
      <dc:creator>twinspop</dc:creator>
      <dc:date>2016-12-08T23:50:15Z</dc:date>
    </item>
    <item>
      <title>Re: Any fixes or workarounds for these post 6.5.1 upgrade issues?</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269261#M10279</link>
      <description>&lt;P&gt;Not related. Same error, but different cause.&lt;/P&gt;</description>
      <pubDate>Tue, 13 Dec 2016 01:03:44 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269261#M10279</guid>
      <dc:creator>twinspop</dc:creator>
      <dc:date>2016-12-13T01:03:44Z</dc:date>
    </item>
    <item>
      <title>Re: Any fixes or workarounds for these post 6.5.1 upgrade issues?</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269262#M10280</link>
      <description>&lt;P&gt;Answer for problem 1: The error in the _internal index, &lt;CODE&gt;vector::_M_range_check&lt;/CODE&gt;, led us to this more detailed error:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;12-08-2016 20:12:33.160 -0500 ERROR StatsProcessor - Error in 'stats' command: 3 duplicate rename field(s). Original renames: [c ftime ltime I_EMAIL I_CELL I_DIALCODE I_UID VIEW_ID ERROR_ID DELIVERY_METHOD CALLER OUTPUT_TYPE VIEW_ID ERROR_ID URI ORGID AOID ORG_NAME UID VALIDATED_CHANNEL_COUNT UID_RETRIEVED FN_NOT_MATCH R_FN DELIVERY_METHOD]. Duplicate renames: [DELIVERY_METHOD ERROR_ID VIEW_ID].
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;There was an accelerated saved search with multiple duplicated rename fields. As soon as I edited the search to remove the dupes, everything cleared up. 6.4.4 did not trip up on this, but 6.5.1 did. Big thanks to Terrance Lam @ Splunk Support for finding this.&lt;/P&gt;
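
&lt;P&gt;For anyone hitting the same thing, a stripped-down sketch of the shape of the problem (illustrative SPL only, reusing field names from the error above, not our actual search) is a stats clause that lists the same field more than once, e.g.:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;... | stats count BY VIEW_ID ERROR_ID DELIVERY_METHOD VIEW_ID ERROR_ID DELIVERY_METHOD&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;De-duplicating so each field appears only once was the fix.&lt;/P&gt;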

&lt;P&gt;Still dealing with problem 2. No closer to resolution there.&lt;/P&gt;

&lt;P&gt;EDIT - Answer for problem 2: The 2 indexers that were periodically blocking all indexing could not see our AD server for LDAP auth; the connection was timing out. This had always been happening, but 6.5.1 appears to handle it badly: the entire splunkd process occasionally blocks for long periods of time. As a quick workaround, I added an entry in /etc/hosts pointing the AD server's hostname at localhost. The connection is then immediately refused, and Splunk handles that much better.&lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 12:06:40 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269262#M10280</guid>
      <dc:creator>twinspop</dc:creator>
      <dc:date>2020-09-29T12:06:40Z</dc:date>
    </item>
    <item>
      <title>Re: Any fixes or workarounds for these post 6.5.1 upgrade issues?</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269263#M10281</link>
      <description>&lt;P&gt;for #2&lt;/P&gt;

&lt;P&gt;1) What's your indexing thruput like for those indexers, pre-upgrade and post-upgrade? (If you're maxed out on thruput, it could just be an excessive amount of data being forwarded there.)&lt;BR /&gt;
2) Are those indexers constantly creating hot buckets? (Rolling hot buckets and creating new ones slows down indexing rates a lot.)&lt;BR /&gt;
3) Any consistent ERRORs / WARNs in the logs for those 2 indexers?&lt;/P&gt;</description>
      <pubDate>Tue, 13 Dec 2016 20:22:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269263#M10281</guid>
      <dc:creator>dxu_splunk</dc:creator>
      <dc:date>2016-12-13T20:22:59Z</dc:date>
    </item>
    <item>
      <title>Re: Any fixes or workarounds for these post 6.5.1 upgrade issues?</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269264#M10282</link>
      <description>&lt;P&gt;1) In the 7 MB/s range. Unchanged from before. If it was load related, when the 2 dropped, I would expect the surfing members to be overwhelmed. They are not. They handle the additional load fine, up to 15 MB/s sometimes.&lt;BR /&gt;
2) BucketMover activity is no more or less than the other indexers. (When they're active. When they block for long periods of time, BucketMover activity disappears, as it should.)&lt;BR /&gt;
3) No. That would be nice! At least I'd have somewhere to start.&lt;/P&gt;
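
&lt;P&gt;If it helps anyone following along, one way I can watch for the blockage is the standard metrics.log queue search (assuming default _internal logging):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=_internal source=*metrics.log* group=queue blocked=true | timechart count BY host&lt;/CODE&gt;&lt;/PRE&gt;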

&lt;P&gt;Two pieces of curious info: the 2 usually block together; rarely does one start blocking without the other. And when they are blocked, they take an abnormally long time to restart. While waiting for the restart, with the "..." crawling across the screen, server load drops to 0, versus ~1 while running but blocked, and ~10 while running normally.&lt;/P&gt;</description>
      <pubDate>Tue, 13 Dec 2016 21:00:17 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269264#M10282</guid>
      <dc:creator>twinspop</dc:creator>
      <dc:date>2016-12-13T21:00:17Z</dc:date>
    </item>
    <item>
      <title>Re: Any fixes or workarounds for these post 6.5.1 upgrade issues?</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269265#M10283</link>
      <description>&lt;P&gt;contact support &lt;span class="lia-unicode-emoji" title=":winking_face:"&gt;😉&lt;/span&gt;&lt;/P&gt;

&lt;P&gt;Are there a lot of buckets on those 2 indexers?&lt;/P&gt;</description>
      <pubDate>Thu, 15 Dec 2016 19:38:35 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269265#M10283</guid>
      <dc:creator>dxu_splunk</dc:creator>
      <dc:date>2016-12-15T19:38:35Z</dc:date>
    </item>
    <item>
      <title>Re: Any fixes or workarounds for these post 6.5.1 upgrade issues?</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269266#M10284</link>
      <description>&lt;P&gt;See the ldap answer above&lt;/P&gt;</description>
      <pubDate>Thu, 15 Dec 2016 19:51:37 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Any-fixes-or-workarounds-for-these-post-6-5-1-upgrade-issues/m-p/269266#M10284</guid>
      <dc:creator>twinspop</dc:creator>
      <dc:date>2016-12-15T19:51:37Z</dc:date>
    </item>
  </channel>
</rss>

