We have a single inputs.conf stanza that sends the data from "targetLog.log" to a different indexer, "indexerB", than everything else sent from that forwarder. When the app is enabled, the Universal Forwarder (UF) also sends its internal logs to that same indexer, but it should only be sending the data from "targetLog.log", not the internal logs. When I noticed, I attempted to force the internal logs to go only to the correct indexer, "indexerA", through props.conf and transforms.conf. That didn't seem to work either, even though those are supposed to override inputs.conf, which in turn overrides outputs.conf. Why would it still send the internal logs to indexerB even when I've specifically told it not to? I don't have direct access to the UF, so certain troubleshooting methods prove challenging.
[monitor://targetLog.log]
disabled = 0
index = dest_index
sourcetype = dest_sourcetype
_TCP_ROUTING = indexerB
[tcpout:indexerB]
autoLBFrequency = 60
autoLB = true
compressed = false
server = indexerB_1:9997, indexerB_2:9997

[tcpout:indexerA]
server = indexerA_1:9997
[source::*/var/log/splunk/*.log]
TRANSFORMS-linux_internal_logs_to_indexerA = internal_logs_to_indexerA

[source::*\\var\\log\\splunk\\*.log]
TRANSFORMS-windows_internal_logs_to_indexerA = internal_logs_to_indexerA
[internal_logs_to_indexerA]
SOURCE_KEY = _MetaData:Index
REGEX = _.*
DEST_KEY = _TCP_ROUTING
FORMAT = indexerA
At this point, I have no idea what's causing it or whether it's even preventable. I tried pushing a custom app to retrieve btool info, but then I had to deal with Splunk not properly parsing the data: it would merge the output from one command but not from others, so I'd get a separate event for each line and couldn't tell which line belonged to which stanza, even though I specified "SHOULD_LINEMERGE = true". That's a problem in and of itself that I don't have time to mess with right now.
Anyone have any thoughts on why it would still send the internal logs when 1) I only told it to forward data from targetLog.log, and 2) I've even specifically told it not to forward the internal logs?
Could you share your forwarder's outputs.conf? The correct indexer addressing is managed in that file (see "Route inputs to specific indexers based on the data input" in http://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad).
So, to send some logs to one indexer and others to another, you have to configure:
In outputs.conf, create stanzas for each receiving indexer:
[tcpout:systemGroup]
server = server1:9997

[tcpout:applicationGroup]
server = server2:9997
In inputs.conf, use _TCP_ROUTING to specify which outputs.conf stanza each input should use for routing:
[monitor://.../file1.log]
_TCP_ROUTING = systemGroup

[monitor://.../file2.log]
_TCP_ROUTING = applicationGroup
I've updated the question to include the outputs.conf. I completely forgot to include it, and I mistakenly posted the ":9997" in the question's inputs.conf when it wasn't actually there. What's shown now is what's actually in the files. My apologies. It fully lines up with what you posted, though.
Internal logs are sent to all indexers by default; if you want to modify this, you have to set, on the internal log inputs:
_TCP_ROUTING = indexerA
In this way you send internal logs only to indexerA.
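For example, a minimal sketch, assuming the internal logs come in through the stock splunkd.log monitor and that you can push a file into the forwarder's $SPLUNK_HOME/etc/system/local:

# $SPLUNK_HOME/etc/system/local/inputs.conf (on the UF)
[monitor://$SPLUNK_HOME/var/log/splunk/splunkd.log]
# Route this input only to the indexerA tcpout group;
# system/local takes precedence over any app's settings.
_TCP_ROUTING = indexerA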
Yes, but if your problem is the correct addressing of the _internal logs, the only configuration file containing the _internal log inputs is in the $SPLUNK_HOME/etc/system folders.
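For reference, the stock internal-log monitor there usually looks something like this (version-dependent, so verify on your install):

[monitor://$SPLUNK_HOME/var/log/splunk]
index = _internal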
If instead you need to correctly address other logs, you can use btool:
./splunk cmd btool inputs list --debug
to understand which stanza is in use.
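The --debug flag prefixes each output line with the file it came from, so you can also see which line belongs to which stanza. The output looks roughly like this (the paths and app name are illustrative):

/opt/splunkforwarder/etc/apps/myapp/local/inputs.conf    [monitor://targetLog.log]
/opt/splunkforwarder/etc/apps/myapp/local/inputs.conf    _TCP_ROUTING = indexerB
/opt/splunkforwarder/etc/system/default/inputs.conf      [monitor://$SPLUNK_HOME/var/log/splunk]
/opt/splunkforwarder/etc/system/default/inputs.conf      index = _internal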
Check all available inputs.conf files on the forwarder. We saw an app called SplunkUniversalForwarder on our forwarders which had the settings below and was showing the same behavior:
[monitor://$SPLUNK_HOME/var/log/splunk/splunkd.log]
_TCP_ROUTING = *
index = _internal
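The "_TCP_ROUTING = *" there sends splunkd.log to every tcpout group, which would explain the behavior. If that app is the culprit, one option (a sketch, assuming you can deploy a file into that app) is to override the stanza from the app's local directory, which takes precedence over its default:

# $SPLUNK_HOME/etc/apps/SplunkUniversalForwarder/local/inputs.conf
[monitor://$SPLUNK_HOME/var/log/splunk/splunkd.log]
# Override the default "_TCP_ROUTING = *" so splunkd.log goes only to
# indexerA; the stanza name must match the default exactly to apply.
_TCP_ROUTING = indexerA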