<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: How to avoid / delete duplicate events using routers logging to central syslog in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/How-to-avoid-delete-duplicate-events-using-routers-logging-to/m-p/80035#M16466</link>
    <description>&lt;P&gt;I've never used haproxy/keepalived, but I think that for practical purposes here they'd function similarly.&lt;/P&gt;</description>
    <pubDate>Tue, 28 Feb 2012 18:48:26 GMT</pubDate>
    <dc:creator>dwaddle</dc:creator>
    <dc:date>2012-02-28T18:48:26Z</dc:date>
    <item>
      <title>How to avoid / delete duplicate events using routers logging to central syslog</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-avoid-delete-duplicate-events-using-routers-logging-to/m-p/80031#M16462</link>
      <description>&lt;P&gt;Currently we are logging all our network device data from our routers to a single syslog host.&lt;BR /&gt;
This syslog host forwards to a central syslog logger, which our Splunk indexer monitors directly.&lt;/P&gt;

&lt;P&gt;However, we would like the routers to log to multiple syslog hosts instead of just one, but this would create a lot of duplicate entries on our central syslogger. Does anyone have a good approach for handling routers that log to multiple syslog hosts (for redundancy) while filtering out the duplicates before they are indexed by the Splunk indexers?&lt;/P&gt;

&lt;P&gt;Would rather not just pipe the search results to dedup:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;... | dedup _raw
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Hopefully there is a way to throw away the dupes, or an entirely new approach altogether.&lt;/P&gt;</description>
      <pubDate>Mon, 27 Feb 2012 23:39:00 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-avoid-delete-duplicate-events-using-routers-logging-to/m-p/80031#M16462</guid>
      <dc:creator>sonicZ</dc:creator>
      <dc:date>2012-02-27T23:39:00Z</dc:date>
    </item>
    <item>
      <title>Re: How to avoid / delete duplicate events using routers logging to central syslog</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-avoid-delete-duplicate-events-using-routers-logging-to/m-p/80032#M16463</link>
      <description>&lt;P&gt;You can collect your logs on as many syslog servers as you like, have those relay to a central syslog server, and then have the central server send to Splunk.  Syslog-ng is very configurable.&lt;/P&gt;</description>
      <pubDate>Tue, 28 Feb 2012 00:41:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-avoid-delete-duplicate-events-using-routers-logging-to/m-p/80032#M16463</guid>
      <dc:creator>jgedeon120</dc:creator>
      <dc:date>2012-02-28T00:41:59Z</dc:date>
    </item>
    <item>
      <title>Re: How to avoid / delete duplicate events using routers logging to central syslog</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-avoid-delete-duplicate-events-using-routers-logging-to/m-p/80033#M16464</link>
      <description>&lt;P&gt;Well, Splunk itself isn't going to be able to know those events coming from different syslog servers are actually duplicates.  So, there's no real way (within Splunk) to avoid the duplication.&lt;/P&gt;
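
&lt;P&gt;The closest thing within Splunk is search-time suppression, which the question already rules out, and for good reason. As a sketch (the index and sourcetype names here are placeholders, and it assumes the duplicate copies arrive byte-identical in _raw):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=network sourcetype=syslog | dedup _raw
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;That only hides duplicates per search; both copies are still indexed, and both count against license volume.&lt;/P&gt;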

&lt;P&gt;One viable alternative is to cluster your syslog servers - use a floating IP address between the two (Red Hat's piranha / pulse comes to mind) and send all of your log data to the floating IP.  Then you keep your high availability, but with only one copy of each event.&lt;/P&gt;</description>
      <pubDate>Tue, 28 Feb 2012 14:36:03 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-avoid-delete-duplicate-events-using-routers-logging-to/m-p/80033#M16464</guid>
      <dc:creator>dwaddle</dc:creator>
      <dc:date>2012-02-28T14:36:03Z</dc:date>
    </item>
    <item>
      <title>Re: How to avoid / delete duplicate events using routers logging to central syslog</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-avoid-delete-duplicate-events-using-routers-logging-to/m-p/80034#M16465</link>
      <description>&lt;P&gt;Thanks for the info, dwaddle. We were thinking of using haproxy and keepalived on two different syslog servers, basically doing a software VIP with load balancing. I'll check into piranha / pulse too.&lt;/P&gt;</description>
      <pubDate>Tue, 28 Feb 2012 17:04:54 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-avoid-delete-duplicate-events-using-routers-logging-to/m-p/80034#M16465</guid>
      <dc:creator>sonicZ</dc:creator>
      <dc:date>2012-02-28T17:04:54Z</dc:date>
    </item>
    <item>
      <title>Re: How to avoid / delete duplicate events using routers logging to central syslog</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-avoid-delete-duplicate-events-using-routers-logging-to/m-p/80035#M16466</link>
      <description>&lt;P&gt;I've never used haproxy/keepalived, but I think that for practical purposes here they'd function similarly.&lt;/P&gt;</description>
      <pubDate>Tue, 28 Feb 2012 18:48:26 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-avoid-delete-duplicate-events-using-routers-logging-to/m-p/80035#M16466</guid>
      <dc:creator>dwaddle</dc:creator>
      <dc:date>2012-02-28T18:48:26Z</dc:date>
    </item>
  </channel>
</rss>

