<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Avoiding CSV lookup replication errors in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/Avoiding-CSV-lookup-replication-errors/m-p/402166#M116392</link>
    <description>&lt;P&gt;The problem isn’t going to be fixed by “dribbling in” the CSV one piece at a time.&lt;/P&gt;

&lt;P&gt;Depending on your version of Splunk, limits.conf on the search heads and indexers will have a default setting of 800MB or 2GB for search bundle replication (I think it’s 2GB since 6.6).&lt;/P&gt;

&lt;P&gt;You’re going over that limit by some number of megabytes when you upload the CSV, and that’s what causes the issue.&lt;/P&gt;

&lt;P&gt;There are several solutions documented for this:&lt;/P&gt;

&lt;OL&gt;
&lt;LI&gt;Increase the limits (see the search bundle replication settings in limits.conf; as far as I know you can’t do this via the UI). Just know that larger bundles cost network bandwidth.&lt;/LI&gt;
&lt;LI&gt;Reduce the size and/or number of existing lookups (you can probably do this).&lt;/LI&gt;
&lt;LI&gt;Index the data via the web UI and use join, append, etc. (an “OR” condition is better than a join; search for how to “join without join” in Splunk). You can probably do this too.&lt;/LI&gt;
&lt;/OL&gt;</description>
    <pubDate>Tue, 26 Jun 2018 12:54:58 GMT</pubDate>
    <dc:creator>jkat54</dc:creator>
    <dc:date>2018-06-26T12:54:58Z</dc:date>
    <item>
      <title>Avoiding CSV lookup replication errors</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Avoiding-CSV-lookup-replication-errors/m-p/402164#M116390</link>
      <description>&lt;P&gt;I've got a medium-sized (50MB) CSV lookup file with two columns (email address and server name) that I want to use.  I tried a straight upload and managed to take down our Splunk instance because replication failed and blocked all searches.  Can I dribble the file in 100K lines at a time using &lt;CODE&gt;outputlookup append=t&lt;/CODE&gt;?  Or does replication just take the whole lookup bundle and try to replicate everything?&lt;/P&gt;
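
&lt;P&gt;For reference, what I had in mind was uploading small slices as their own lookup files via Splunk Web and then folding each one in, something like this (the chunk file name is just a placeholder):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;| inputlookup chunk_01.csv
| outputlookup append=t email_server_map.csv&lt;/CODE&gt;&lt;/PRE&gt;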

&lt;P&gt;&lt;STRONG&gt;Please note&lt;/STRONG&gt;: I do not have access to the file system; whatever the solution is, I have to be able to do it from Splunk Web.&lt;/P&gt;

&lt;P&gt;Thanks!&lt;/P&gt;</description>
      <pubDate>Sat, 23 Jun 2018 20:37:31 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Avoiding-CSV-lookup-replication-errors/m-p/402164#M116390</guid>
      <dc:creator>Kenshiro70</dc:creator>
      <dc:date>2018-06-23T20:37:31Z</dc:date>
    </item>
    <item>
      <title>Re: Avoiding CSV lookup replication errors</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Avoiding-CSV-lookup-replication-errors/m-p/402165#M116391</link>
      <description>&lt;P&gt;Hi there,&lt;/P&gt;

&lt;P&gt;That seems a tad more than a medium-sized CSV you have there; how many records are in it? Have you looked into utilising a KV store instead?&lt;/P&gt;</description>
      <pubDate>Tue, 26 Jun 2018 11:01:19 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Avoiding-CSV-lookup-replication-errors/m-p/402165#M116391</guid>
      <dc:creator>paulbannister</dc:creator>
      <dc:date>2018-06-26T11:01:19Z</dc:date>
    </item>
    <item>
      <title>Re: Avoiding CSV lookup replication errors</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Avoiding-CSV-lookup-replication-errors/m-p/402166#M116392</link>
      <description>&lt;P&gt;The problem isn’t going to be fixed by “dribbling in” the CSV one piece at a time.&lt;/P&gt;

&lt;P&gt;Depending on your version of Splunk, limits.conf on the search heads and indexers will have a default setting of 800MB or 2GB for search bundle replication (I think it’s 2GB since 6.6).&lt;/P&gt;
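
&lt;P&gt;(On newer versions the bundle-size cap actually lives in distsearch.conf rather than limits.conf; the stanza and default below are from memory, so check the spec file for your version:)&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# $SPLUNK_HOME/etc/system/local/distsearch.conf on the search head
[replicationSettings]
# Maximum size of the replicated search bundle, in MB
maxBundleSize = 2048&lt;/CODE&gt;&lt;/PRE&gt;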

&lt;P&gt;You’re going over that limit by some number of megabytes when you upload the CSV, and that’s what causes the issue.&lt;/P&gt;

&lt;P&gt;There are several solutions documented for this:&lt;/P&gt;
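
&lt;P&gt;To sketch the indexing option below: once the csv rows are indexed (the index and sourcetype names here are made up), the usual “join without join” pattern is an OR of the two searches plus a stats by the common field:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;(index=mail_logs) OR (index=lookup_data sourcetype=email_server_map)
| stats values(server) AS server BY email&lt;/CODE&gt;&lt;/PRE&gt;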

&lt;OL&gt;
&lt;LI&gt;Increase the limits (see the search bundle replication settings in limits.conf; as far as I know you can’t do this via the UI). Just know that larger bundles cost network bandwidth.&lt;/LI&gt;
&lt;LI&gt;Reduce the size and/or number of existing lookups (you can probably do this).&lt;/LI&gt;
&lt;LI&gt;Index the data via the web UI and use join, append, etc. (an “OR” condition is better than a join; search for how to “join without join” in Splunk). You can probably do this too.&lt;/LI&gt;
&lt;/OL&gt;</description>
      <pubDate>Tue, 26 Jun 2018 12:54:58 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Avoiding-CSV-lookup-replication-errors/m-p/402166#M116392</guid>
      <dc:creator>jkat54</dc:creator>
      <dc:date>2018-06-26T12:54:58Z</dc:date>
    </item>
  </channel>
</rss>

