Hello,
I'm playing around in the lab and have set up a configuration where a Splunk heavy forwarder receives Windows events from another computer and forwards them via syslog to a SIEM system. Looking at the traffic in Wireshark, I can see the event logs going across, but I'm also seeing lots of INFO Metrics events going over as well.
How do I stop the Info Metrics events?
On the computer with the Splunk universal forwarder, the Windows Security event logs are forwarded to the heavy forwarder using _TCP_ROUTING, and the other logs (perfmon etc.) should be going via the standard defaultGroup parameter to another server.
On the heavy forwarder, the received data is routed to the syslog server (SIEM) using the _SYSLOG_ROUTING parameter in inputs.conf, so only the events received there should be routed to the syslog group.
My Splunk heavy forwarder inputs/outputs conf files are as follows:
inputs.conf
# input for other servers
[splunktcp://9998]
disabled = 0
_SYSLOG_ROUTING = siem
[WinEventLog://Security]
_SYSLOG_ROUTING = siem
disabled = 0
index = wineventlog
outputs.conf
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = splunk.lab.local:9997
[syslog:siem]
server = siem.lab.local:514
type = udp
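For reference, the universal forwarder side is set up roughly like this (a sketch of what I described above; the hf-group stanza name and the hf.lab.local hostname are placeholders):

```ini
# Universal forwarder outputs.conf (sketch; group name and
# hf.lab.local are placeholders)
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = splunk.lab.local:9997

[tcpout:hf-group]
server = hf.lab.local:9998

# Universal forwarder inputs.conf: route only the Security
# event log to the heavy forwarder's splunktcp://9998 input
[WinEventLog://Security]
_TCP_ROUTING = hf-group
```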
Just wondering if you ever got a satisfactory answer to this? I have the same problem but with TCP forwarder:
[tcpout]
defaultGroup = logstash
disabled = false
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.blacklist = (_audit|_internal|_introspection)

[tcpout:logstash]
server = localhost:7777
sendCookedData = false
useACK = true
Seeing loads of messages like:
INFO Metrics - group=thruput, name=uncooked_output, instantaneous_kbps=0.176377, instantaneous_eps=0.096773, average_kbps=0.355449, total_k_processed=44.000000, kb=5.467773, ev=3.000000
INFO Metrics - group=thruput, name=thruput, instantaneous_kbps=0.176377, instantaneous_eps=0.096773, average_kbps=0.371606, total_k_processed=46.000000, kb=5.467773, ev=3.000000, load_average=0.030000
INFO Metrics - group=tcpout_connections, name=logstash:127.0.0.1:7777:0, sourcePort=8090, destIp=127.0.0.1, destPort=7777, _tcp_Bps=186.73, _tcp_KBps=0.18, _tcp_avg_thruput=0.39, _tcp_Kprocessed=46, _tcp_eps=0.10, kb=5.47
This is just a hunch, but maybe adding the setting sendCookedData = false to your outputs.conf is all you need. See docs here.
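Something like this, under the tcpout group that targets the non-Splunk system (the stanza name, hostname, and port are just examples):

```ini
# outputs.conf -- sketch; sendCookedData = false sends raw events,
# which only makes sense when the receiver is not a Splunk instance
[tcpout:siem_raw]
server = siem.lab.local:5140
sendCookedData = false
```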
Thanks for the suggestion Jeff. I didn't see that as an option under the syslog: stanza, but I gave it a go and it didn't do anything. I also tried putting it in the tcpout: stanza on the computer sending logs to the heavy forwarder, and it pretty much blocked everything: I could see some data between the computer and the heavy forwarder, but nothing was going to the SIEM. I assume that's because the heavy forwarder didn't know how to process the uncooked data.
Yeah, I wouldn't suggest turning this switch on when forwarding data to other Splunk instances. It's intended for sending data to third-party systems, as described here.
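If the goal is just to keep the metrics.log events out of the feed altogether, another option is to null-route them on the heavy forwarder with a props/transforms pair. This is only a sketch, and it assumes the events still carry their original metrics.log source; note that it discards them entirely (including from indexing), not just from one output:

```ini
# props.conf on the heavy forwarder -- match the forwarder's own
# metrics.log events by source
[source::...metrics.log*]
TRANSFORMS-null_metrics = drop_metrics

# transforms.conf -- send matching events to the nullQueue,
# i.e. drop them before they reach any output
[drop_metrics]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
```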