Splunk Search

duplicate events showing up in search

builder
Path Finder

We are running a Rails application and are using Splunk to parse our Rails logs. We have a search head and 2 indexers. On the indexers, I have added the following to /opt/splunk/etc/apps/search/local/props.conf to ensure that the logging for each Rails request is parsed as a single event:

[(?::){0}*rails]
LINE_BREAKER = ([\r\n]).* [\r\n]+Started (POST|GET)
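
My understanding is that SHOULD_LINEMERGE also has to be turned off so that the LINE_BREAKER alone controls where events start and end, so the full stanza ends up looking roughly like this (the SHOULD_LINEMERGE line is my addition based on the props.conf docs, not something I have confirmed is strictly required here):

[(?::){0}*rails]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]).* [\r\n]+Started (POST|GET)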

Each application server runs a forwarder. The Rails log for each project is added with a command like the following, so that the sourcetypes match the pattern in the stanza above.

/opt/splunkforwarder/bin/splunk add monitor -source '/home/builder/abitlucky/web/luckyonrails/log/production.log' -sourcetype project1-rails
/opt/splunkforwarder/bin/splunk add monitor -source '/home/builder/abitlucky/web/luckyonrails/log/production.log' -sourcetype project2-rails
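
For reference, each add monitor command just writes a monitor stanza into the forwarder's inputs.conf (the exact local directory depends on the app context the CLI writes to), something like:

[monitor:///home/builder/abitlucky/web/luckyonrails/log/production.log]
disabled = false
sourcetype = project1-rails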

The issue is that every single Rails request shows up twice in the search app. If I click 'Show Source' on both events for a duplicated request, I get strange results. In some cases it highlights the original request in the source for one of the events and a completely different request for the duplicate (i.e., the request it highlights does not match the request I clicked 'Show Source' for). In other cases it highlights the original request in the source for one event and a second listing of that same request for the other event, so I can see the request repeated in the source Splunk shows me. However, if I go back to the original log file, the request only appears once, so the repeated request in the source view is a phantom that was never in the actual log.

I'm not sure if that makes sense without actually seeing what I am referring to, but I explained it as best I could. Has anyone seen this behavior? What could be causing it?

1 Solution

builder
Path Finder

I got an answer from a Splunk employee on this. I had originally been instructed to add my load-balanced indexers using the following commands, as mentioned in my previous comment:

/opt/splunkforwarder/bin/splunk add forward-server splunki1.myhost.com:9997
/opt/splunkforwarder/bin/splunk add forward-server splunki2.myhost.com:9997

That resulted in the following outputs.conf file:

[tcpout:splunki1.myhost.com_9997]
server = splunki1.myhost.com:9997

[tcpout-server://splunki1.myhost.com:9997]

[tcpout]
defaultGroup = splunki1.myhost.com_9997,splunki2.myhost.com_9997
disabled = false

[tcpout:splunki2.myhost.com_9997]
server = splunki2.myhost.com:9997

[tcpout-server://splunki2.myhost.com:9997]

Apparently that configuration clones every event to both indexers independently, rather than load balancing between them. With the Splunk employee's help, I manually updated my outputs.conf to the following, and I am no longer getting duplicate events:

[tcpout:splunki]
server = splunki1.myhost.com:9997,splunki2.myhost.com:9997

[tcpout]
defaultGroup = splunki
disabled = false
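
One follow-up note: after editing outputs.conf I restarted each forwarder (/opt/splunkforwarder/bin/splunk restart) so the change took effect. If you want to control how often the forwarder switches between the two indexers, my understanding is you can also set autoLBFrequency on the group; as far as I know 30 seconds is the default, so this is optional:

[tcpout:splunki]
server = splunki1.myhost.com:9997,splunki2.myhost.com:9997
autoLBFrequency = 30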



gkanapathy
Splunk Employee

Perhaps you are cloning each event to each indexer, rather than splitting and load-balancing them between the indexers?
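
A quick way to check is to pick one of your sourcetypes over a time range you can compare against the log and count events per indexer (just a sketch; substitute whichever sourcetype you like):

sourcetype="project1-rails" | stats count by splunk_server

If the forwarder is cloning, each indexer will report roughly the full number of requests, so the total will be about double what is actually in the log file. With load balancing, the per-indexer counts should add up to the real total.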


builder
Path Finder

This turned out to be the case. See my accepted answer for the full details (I posted it as a separate answer rather than a comment so that I could format it).


builder
Path Finder

On my forwarder machines, I am adding the indexers using the commands:

/opt/splunkforwarder/bin/splunk add forward-server splunki1.myhost.com:9997
/opt/splunkforwarder/bin/splunk add forward-server splunki2.myhost.com:9997

I believe that should properly load balance them, no?
