Symptom of the problem:
When configuring an http_collector input with an outputs.conf group on an active Splunk indexer, it looks like the indexer in question forwards the events received from other sources to the indexers found in /opt/splunk/etc/apps/splunk_httpinput/outputs.conf. This is not the expected behaviour.
Only events from the HTTP collector endpoint should be forwarded to the indexers found in that outputs.conf; none of the events hitting receiving port 9997 should be forwarded.
Splunk Version: 6.4.1 Splunk Build: debde650d26e
Configs:
**/opt/splunk/etc/apps/splunk_httpinput/local**
**/opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf**
[http://Token for DMD Team]
disabled = 0
index = dmd
indexes = dmd
token = 1ZZWWA62-AGFA-43BF-9B29-41S0E39335GF
useACK = 0
[http]
disabled = 0
outputGroup = httpC
**/opt/splunk/etc/apps/splunk_httpinput/local/outputs.conf**
[tcpout:httpC]
server = server1:9997,serverm:9997
autoLB = true
autoLBFrequency = 30
**/opt/splunk/etc/apps/search/local/inputs.conf**
[splunktcp://9997]
connection_host = ip
disabled = 0
Has anyone experienced this issue? Or is there something wrong with the configs?
Thanks,
Lp
I solved this issue by setting up a dedicated virtual machine that acts only as the HTTP input collector and forwarder (a sketch of that setup is below). Otherwise, the indexers would have to be blacklisted from the output group, which makes the configuration awkward.
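A minimal sketch of that dedicated-collector setup, assuming a separate forwarder that receives only HEC traffic (the host names, token placeholder, and group name are carried over from the question for illustration, not a verified build):

**inputs.conf on the dedicated collector**
[http]
disabled = 0

[http://Token for DMD Team]
disabled = 0
index = dmd
token = <HEC token GUID>

**outputs.conf on the dedicated collector**
[tcpout]
# safe here: only HEC traffic arrives on this box
defaultGroup = httpC

[tcpout:httpC]
server = server1:9997,serverm:9997
autoLB = true

Because this box has no splunktcp://9997 input and does not index locally, sending everything through defaultGroup only ever forwards HEC events, so the indexers never need to be blacklisted.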
Generally, the btool output for outputs.conf will tell us more.
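For example, something along these lines lists every effective outputs.conf setting together with the file it comes from:

/opt/splunk/bin/splunk btool outputs list --debug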
If all events are being forwarded to the httpC output group, it is generally related to the defaultGroup attribute value; see the sketch below.
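For instance, if btool shows something like the following under [tcpout] (set locally or inherited from another app), every event the indexer processes, including the splunktcp://9997 traffic, is routed to that group:

[tcpout]
defaultGroup = httpC

With no defaultGroup under [tcpout], only inputs that explicitly select the group, such as the outputGroup = httpC setting on the [http] stanza in the question, should be routed to httpC. This is a sketch of the usual cause, not a confirmed diagnosis for this deployment.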
But it could be a different reason, of course. For a potential bug, it is worth filing a Support case and uploading a diag to it.
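A diag can be generated on the affected indexer with the standard command below; by default the archive is written under the Splunk installation directory:

/opt/splunk/bin/splunk diag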