<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219331#M43080</link>
    <description>&lt;P&gt;So just to confirm: the same source, sourcetype, index, and host are producing duplicate log entries?&lt;/P&gt;

&lt;P&gt;If you count the number of lines per hour (or similar), does that add up to more than the number of lines in the source file on the server?&lt;/P&gt;

&lt;P&gt;If events are consistently duplicated, it would be worth checking the configuration of your outputs.conf ...&lt;/P&gt;</description>
    <pubDate>Fri, 13 Jan 2017 02:46:39 GMT</pubDate>
    <dc:creator>gjanders</dc:creator>
    <dc:date>2017-01-13T02:46:39Z</dc:date>
    <item>
      <title>Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219324#M43073</link>
      <description>&lt;P&gt;Hi All;&lt;/P&gt;

&lt;P&gt;Noticed something very interesting, and I can't seem to find the smoking gun.  Today one of my users alerted me that he saw some duplicate events, most recently at 2:20pm.  I took a look, and sure enough an event was duplicated.  I narrowed the result down to the following search:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=app_harmony sourcetype=harmony:*:access "[04/Jan/2017:14:20*" host=cimasked0047 | eval idxtime=_indextime | table _time idxtime host source splunk_server _raw
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;The results were as follows:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;_time   idxtime host    source  splunk_server   _raw
2017-01-04 14:20:02 1483557602  hostname    /usr/local/openresty/nginx/logs/access.log  splunkindex0006 xx.xx.xxx.xxx - - [04/Jan/2017:14:20:02 -0500] "GET /api/harmony/v1/User/G_IAS_6e680df53be6a06f9e11faf40812dc8c?domain=internal HTTP/1.1" 200 317 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36" "-"

2017-01-04 14:20:02 1483557602  hostname    /usr/local/openresty/nginx/logs/access.log  splunkindex0009 xx.xx.xx.xxx - - [04/Jan/2017:14:20:02 -0500] "GET /api/harmony/v1/User/G_IAS_6e680df53be6a06f9e11faf40812dc8c?domain=internal HTTP/1.1" 200 317 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36" "-"
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;The raw event is exactly the same, as is the index time.  The only difference between the two events is our indexer.  We have a cluster of 5 indexers with a RF of 3.&lt;/P&gt;
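
&lt;P&gt;As a sketch (using only the fields from the table above), a search like this should list every event that landed more than once, together with the indexers involved:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=app_harmony sourcetype=harmony:*:access host=cimasked0047
| eval idxtime=_indextime
| stats count dc(splunk_server) AS servers values(splunk_server) AS server_list by _raw, idxtime
| where count &amp;gt; 1
&lt;/CODE&gt;&lt;/PRE&gt;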

&lt;P&gt;I double checked the file, and there isn't more than 1 event at that time.&lt;/P&gt;

&lt;P&gt;I also verified that I don't see any stray '.filepart' or temp files appearing.&lt;/P&gt;

&lt;P&gt;The input for that particular file is pretty strict, as follows:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[monitor:///usr/local/openresty/nginx/logs/access.log]
index = app_harmony
sourcetype = harmony:openresty:access
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;I took a look at the internal logs, and didn't see any errors around that time.&lt;/P&gt;</description>
      <pubDate>Wed, 04 Jan 2017 20:46:39 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219324#M43073</guid>
      <dc:creator>paimonsoror</dc:creator>
      <dc:date>2017-01-04T20:46:39Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219325#M43074</link>
      <description>&lt;P&gt;Does the forwarder have a useACK=true in the outputs.conf?  &lt;/P&gt;

&lt;P&gt;One of the known side effects of this setting can be data duplication in rare cases.&lt;/P&gt;

&lt;P&gt;Search for this message in the _internal splunkd.log on the forwarder:&lt;/P&gt;

&lt;P&gt;WARN  TcpOutputProc - Possible duplication of events with &lt;/P&gt;</description>
      <pubDate>Wed, 04 Jan 2017 22:23:16 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219325#M43074</guid>
      <dc:creator>sjohnson_splunk</dc:creator>
      <dc:date>2017-01-04T22:23:16Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219326#M43075</link>
      <description>&lt;P&gt;Thanks for the quick response.  I checked for that yesterday around the time of the duplication, and didn't see anything.  Also tried a week long search to verify.&lt;/P&gt;</description>
      <pubDate>Thu, 05 Jan 2017 18:47:07 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219326#M43075</guid>
      <dc:creator>paimonsoror</dc:creator>
      <dc:date>2017-01-05T18:47:07Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219327#M43076</link>
      <description>&lt;P&gt;Does the log roll at this particular time ? Do you see any mentions of CRC error in the splunkd.log of the forwarder and a mention of re-reading the file ?&lt;/P&gt;</description>
      <pubDate>Thu, 05 Jan 2017 18:59:39 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219327#M43076</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2017-01-05T18:59:39Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219328#M43077</link>
      <description>&lt;P&gt;The file doesn't roll at all at the moment; it hasn't since December at least.&lt;/P&gt;

&lt;P&gt;As far as any messages in the log, nothing other than the typical:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;01-04-2017 14:19:49.884 -0500 INFO  TcpOutputProc - Connected to idx=****:9997 using ACK.
01-04-2017 14:20:29.781 -0500 INFO  TcpOutputProc - Closing stream for idx=****:9997
01-04-2017 14:20:29.781 -0500 INFO  TcpOutputProc - Connected to idx=****.108:9997 using ACK.
01-04-2017 14:20:41.640 -0500 INFO  HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_****_8089_****_****_268231C9-FA74-4B0E-8BE7-3A6C4AD83F2E
01-04-2017 14:21:09.647 -0500 INFO  TcpOutputProc - Closing stream for idx=****:9997
01-04-2017 14:21:09.647 -0500 INFO  TcpOutputProc - Connected to idx=****:9997 using ACK.
01-04-2017 14:21:41.645 -0500 INFO  HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_****_8089_****_****_268231C9-FA74-4B0E-8BE7-3A6C4AD83F2E
01-04-2017 14:21:49.498 -0500 INFO  TcpOutputProc - Closing stream for idx=****:9997
01-04-2017 14:21:49.498 -0500 INFO  TcpOutputProc - Connected to idx=****:9997 using ACK.
&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Thu, 05 Jan 2017 19:10:14 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219328#M43077</guid>
      <dc:creator>paimonsoror</dc:creator>
      <dc:date>2017-01-05T19:10:14Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219329#M43078</link>
      <description>&lt;P&gt;I took another look and noticed that I do have 'useACK' set to true.  This is the default on all of our forwarders:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[tcpout]
defaultGroup = companyPVSNewIndexers
# These two options below are required for forwarders when clustering.
# Max queue size ensures that the forwarder has enough of a buffer while
# waiting for the ACK from the indexer; without useACK, the search head
# will spout yellow warning banners in a clustered environment.
maxQueueSize = 7MB
useACK = true
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Should I try running without useACK for this one forwarder? &lt;/P&gt;</description>
      <pubDate>Tue, 10 Jan 2017 16:56:57 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219329#M43078</guid>
      <dc:creator>paimonsoror</dc:creator>
      <dc:date>2017-01-10T16:56:57Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219330#M43079</link>
      <description>&lt;P&gt;FYI, I have tried running the forwarder with useACK=false, with no success.  Any other ideas would be greatly appreciated, as this is still happening for this index &lt;span class="lia-unicode-emoji" title=":disappointed_face:"&gt;😞&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 12 Jan 2017 18:57:36 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219330#M43079</guid>
      <dc:creator>paimonsoror</dc:creator>
      <dc:date>2017-01-12T18:57:36Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219331#M43080</link>
      <description>&lt;P&gt;So just to confirm: the same source, sourcetype, index, and host are producing duplicate log entries?&lt;/P&gt;

&lt;P&gt;If you count the number of lines per hour (or similar), does that add up to more than the number of lines in the source file on the server?&lt;/P&gt;
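
&lt;P&gt;For example, a sketch along these lines (index, source, and host taken from your earlier search) gives an hourly event count to compare against a per-hour grep of the file:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=app_harmony source=/usr/local/openresty/nginx/logs/access.log host=cimasked0047
| timechart span=1h count
&lt;/CODE&gt;&lt;/PRE&gt;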

&lt;P&gt;If events are consistently duplicated, it would be worth checking the configuration of your outputs.conf ...&lt;/P&gt;</description>
      <pubDate>Fri, 13 Jan 2017 02:46:39 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219331#M43080</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2017-01-13T02:46:39Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219332#M43081</link>
      <description>&lt;P&gt;So here is what I got:&lt;/P&gt;

&lt;P&gt;In Splunk-&amp;gt; index=app_harmony source=/usr/local/openresty/nginx/logs/access.log "13/Jan/2017:08:"&lt;BR /&gt;
events = 4678&lt;/P&gt;

&lt;P&gt;Grepping the actual log file : less access.log | grep -o '13/Jan/2017:08' | wc -l&lt;BR /&gt;
lines = 1080&lt;/P&gt;</description>
      <pubDate>Fri, 13 Jan 2017 15:09:01 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219332#M43081</guid>
      <dc:creator>paimonsoror</dc:creator>
      <dc:date>2017-01-13T15:09:01Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219333#M43082</link>
      <description>&lt;P&gt;If you run a dedup in Splunk, so:&lt;BR /&gt;
index=app_harmony source=/usr/local/openresty/nginx/logs/access.log "13/Jan/2017:08:" | dedup _raw&lt;/P&gt;

&lt;P&gt;Do you see far fewer events? The numbers are strange: you have a lot more than double the number of events in Splunk. Also, I notice you did not narrow down to a host; are there multiple hosts this log comes from, or just one?&lt;/P&gt;</description>
      <pubDate>Mon, 16 Jan 2017 00:14:57 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219333#M43082</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2017-01-16T00:14:57Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219334#M43083</link>
      <description>&lt;P&gt;Ah Jeez!! You are 100% correct; sorry, retried:&lt;/P&gt;

&lt;P&gt;[splkadmn@cmasked0083 logs]$ less access.log | grep "13/Jan/2017:08:" | wc -l&lt;BR /&gt;
&lt;STRONG&gt;1140&lt;/STRONG&gt;&lt;/P&gt;

&lt;P&gt;index=app_harmony host=cmasked0083 source=/usr/local/openresty/nginx/logs/access.log "13/Jan/2017:08:"&lt;BR /&gt;
&lt;STRONG&gt;1,959 events&lt;/STRONG&gt;&lt;/P&gt;

&lt;P&gt;index=app_harmony host=cmasked0083 source=/usr/local/openresty/nginx/logs/access.log "13/Jan/2017:08:" | dedup _raw&lt;BR /&gt;
 &lt;STRONG&gt;1,140 events&lt;/STRONG&gt; &lt;/P&gt;</description>
      <pubDate>Mon, 16 Jan 2017 22:52:43 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219334#M43083</guid>
      <dc:creator>paimonsoror</dc:creator>
      <dc:date>2017-01-16T22:52:43Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219335#M43084</link>
      <description>&lt;P&gt;Interesting. During the time period where you have the duplicates, are there any warning/error messages from splunkd? CRC checksum or duplication errors in particular...&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jan 2017 00:00:39 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219335#M43084</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2017-01-17T00:00:39Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219336#M43085</link>
      <description>&lt;P&gt;Maybe it's worth checking the server settings in &lt;CODE&gt;outputs.conf&lt;/CODE&gt; as well, because if they are wrong this can also result in duplicated events.&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jan 2017 01:26:14 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219336#M43085</guid>
      <dc:creator>MuS</dc:creator>
      <dc:date>2017-01-17T01:26:14Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219337#M43086</link>
      <description>&lt;P&gt;So here is the whole outputs file.  To save space on this post I have removed all the settings that have been commented out.&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# Company BASE SETTINGS

[tcpout]
defaultGroup = companyPVSNewIndexers
# These two options below are required for forwarders when clustering.
# Max queue size ensures that the forwarder has enough of a buffer while
# waiting for the ACK from the indexer; without useACK, the search head
# will spout yellow warning banners in a clustered environment.
maxQueueSize = 7MB
useACK = true

[tcpout]
defaultGroup = companyPVSNewIndexers
indexAndForward = false

# When indexing a large continuous file that grows very large, a universal
# or light forwarder may become "stuck" on one indexer, trying to reach
# EOF before being able to switch to another indexer. The symptoms of this
# are congestion on *one* indexer in the pool while others seem idle, and
# possibly uneven loading of the disk usage for the target index.
# In this instance, forceTimebasedAutoLB can help!
# ** Do not enable if you have events &amp;gt; 64kB **
forceTimebasedAutoLB = true

# Correct an issue with the default outputs.conf for the Universal Forwarder
# or the SplunkLightForwarder app; these don't forward _internal events.
forwardedindex.2.whitelist = (_audit|_introspection|_internal)

[tcpout:companyPVSNewIndexers]
server = masked6.com:9997, masked7.com:9997, masked8.com:9997, masked9.com:9997, masked10.com:9997
autoLB = true
autoLBFrequency = 40
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;I checked the servers, and they are correct.&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jan 2017 13:28:09 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219337#M43086</guid>
      <dc:creator>paimonsoror</dc:creator>
      <dc:date>2017-01-17T13:28:09Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219338#M43087</link>
      <description>&lt;P&gt;I wish there were, but nothing in the splunkd on those forwarders &lt;span class="lia-unicode-emoji" title=":disappointed_face:"&gt;😞&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jan 2017 13:28:35 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219338#M43087</guid>
      <dc:creator>paimonsoror</dc:creator>
      <dc:date>2017-01-17T13:28:35Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219339#M43088</link>
      <description>&lt;P&gt;Actually, you have an unnecessary second tcpout stanza in there. I don't know if that's the problem, but you should remove one of them.&lt;/P&gt;

&lt;P&gt;Do you use a master server? If so, change your tcpout stanzas to something like this (plus the other settings you'd like to keep):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[tcpout]
defaultGroup = companyPVSNewIndexers
indexAndForward = false

[tcpout:companyPVSNewIndexers]
indexerDiscovery = YourIndexerCluster
# additional settings are required when SSL is enabled

[indexer_discovery:YourIndexerCluster]
pass4SymmKey = YourKey    # defined on the master
master_uri = &lt;A href="https://YourMasterServer:8089" target="test_blank"&gt;https://YourMasterServer:8089&lt;/A&gt;
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;If you don't use a master (I don't think this is recommended), just merge both tcpout stanzas into one. Not sure about the autoLB settings though.&lt;/P&gt;
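
&lt;P&gt;Merged, using only the settings you already posted, that could look something like this (a sketch; keep whichever comments you need):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[tcpout]
defaultGroup = companyPVSNewIndexers
indexAndForward = false
maxQueueSize = 7MB
useACK = true
forceTimebasedAutoLB = true
forwardedindex.2.whitelist = (_audit|_introspection|_internal)

[tcpout:companyPVSNewIndexers]
server = masked6.com:9997, masked7.com:9997, masked8.com:9997, masked9.com:9997, masked10.com:9997
autoLB = true
autoLBFrequency = 40
&lt;/CODE&gt;&lt;/PRE&gt;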

&lt;P&gt;Edit: typo&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jan 2017 13:45:46 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219339#M43088</guid>
      <dc:creator>skalliger</dc:creator>
      <dc:date>2017-01-17T13:45:46Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219340#M43089</link>
      <description>&lt;P&gt;Oh, very interesting!  We do have a cluster master.  We actually had a consultant set our outputs.conf up that way.  I wonder why he did it that way instead of relying on the CM to hand out the indexer list.&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jan 2017 15:14:47 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219340#M43089</guid>
      <dc:creator>paimonsoror</dc:creator>
      <dc:date>2017-01-17T15:14:47Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219341#M43090</link>
      <description>&lt;P&gt;I've had a number of issues with indexer discovery, and I prefer not to use it now!&lt;/P&gt;

&lt;P&gt;Your duplicate tcpout stanza might be an issue though...that's an easy fix.&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jan 2017 23:46:53 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219341#M43090</guid>
      <dc:creator>gjanders</dc:creator>
      <dc:date>2017-01-17T23:46:53Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219342#M43091</link>
      <description>&lt;P&gt;I think that was the issue, the duplicate stanza!  Thanks @skalliger for pointing that one out!  Sent a note to the admins to promote as the correct answer.&lt;/P&gt;</description>
      <pubDate>Thu, 19 Jan 2017 14:32:09 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219342#M43091</guid>
      <dc:creator>paimonsoror</dc:creator>
      <dc:date>2017-01-19T14:32:09Z</dc:date>
    </item>
    <item>
      <title>Re: Why Was My Event Indexed Twice? Same Host, Same File, Same _raw, Same Indextime, different splunk_server</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219343#M43092</link>
      <description>&lt;P&gt;Glad it helped you! Just wondering why you reported my comment though, haha.&lt;/P&gt;</description>
      <pubDate>Thu, 19 Jan 2017 14:40:26 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-Was-My-Event-Indexed-Twice-Same-Host-Same-File-Same-raw-Same/m-p/219343#M43092</guid>
      <dc:creator>skalliger</dc:creator>
      <dc:date>2017-01-19T14:40:26Z</dc:date>
    </item>
  </channel>
</rss>

