Getting Data In

Heavy Forwarder: send events to remote syslog

Communicator

I am being asked to forward events from a Heavy Forwarder to a remote ArcSight server as raw events. Our HFs receive events from UFs un-indexed, and the events pass through the HFs un-indexed. Is what I am trying to do possible? Below is the config from one of our HFs, with my basic forwarding config (to be refined once I get it working).

-----------------------------------------
OUTPUTS.CONF
-----------------------------------------
[tcpout:site-hub]
server = *********
sslPassword = password
sslCertPath = ********
sslRootCAPath = ********

[tcpout]
defaultGroup = site-hub
indexAndForward = false
useACK=true
maxQueueSize=128MB
useClientSSLCompression = true
sslVersions = tls1.1, tls1.2
heartbeatFrequency=167
autoLBFrequency = 10

[syslog:windows_events_alert]
server = <REMOTE SYSLOG IP>:5166
type = tcp

-----------------------------------------
PROPS.CONF
-----------------------------------------
[default]
TRUNCATE = 100000

[source::WinEventLog:Security]
TRANSFORMS-windows_security_events = send_to_arcsight
-----------------------------------------
TRANSFORMS.CONF
-----------------------------------------
[send_to_arcsight]
REGEX = EventCode=4725
DEST_KEY = _SYSLOG_ROUTING
FORMAT = windows_events_alert
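
For the refinement pass, the [syslog:...] stanza in outputs.conf also accepts a few optional settings for shaping the syslog output. The values below are illustrative only, not tested against this environment:

[syslog:windows_events_alert]
server = <REMOTE SYSLOG IP>:5166
type = tcp
# optional refinements (illustrative values):
priority = <13>                      # syslog PRI (facility/severity) to stamp on events
timestampformat = %b %e %H:%M:%S     # strftime format for the syslog timestamp
maxEventSize = 1024                  # truncate events larger than this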

Builder

You have the basic idea. I used inputs.conf, props.conf, and transforms.conf (this example is from a Windows HF):

inputs.conf:
[monitor://\dns.log]
disabled = 0
sourcetype = dns
_SYSLOG_ROUTING = my_syslog_group

props.conf
[dns]
TRANSFORMS-dns = send_to_syslog

transforms.conf
[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

and in my /etc/system/local/outputs.conf

[syslog:my_syslog_group]
server = IP:514
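
One caveat worth checking: if I remember right, syslog output in outputs.conf defaults to UDP, so if the receiver expects TCP (as in the original poster's config) the stanza needs it set explicitly:

[syslog:my_syslog_group]
server = IP:514
type = tcp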

I believe there is a Splunk doc for this that goes into more detail:
https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Forwarddatatothird-partysystemsd


Communicator

Thanks... I got my config (such as it is) from reading the referenced doc, and according to the doc it 'should' work. The difference between yours and mine: you have an inputs.conf that identifies a source of data, which is not discussed in the document 😕

On an HF, where data/events are a pass-through with no indexing or locally consumable content, how does the config in the referenced doc get the raw event for forwarding? If I DID in fact need an inputs.conf, what would it pull from?


Builder

I believe it needs the inputs.conf to get the data initially, but then you can tell it to also output to syslog.

The problem is that to route this source ONLY to syslog, you need to in effect turn off the default routing. In other words, you can ADD an additional route alongside the default route (which is usually your indexers) and have the data go to two different places. But to send data to EITHER the indexers OR the alternative route, you have to change outputs.conf: remove the default route, give the route to your indexers a name, and then reference that name for each source of input in your inputs.conf.
And if, like me, you have multiple apps handling many different inputs, doing this for only ONE input out of many is too much of a pain, because you have to edit inputs.conf, props.conf, and transforms.conf for every source. So I just keep a copy of this input in my Splunk as well as the copy I send to the alternative route.
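
The outputs.conf side of that either/or change might look something like this, with the group names made up for illustration (if I understand the docs right, omitting defaultGroup means nothing is forwarded unless a rule routes it somewhere):

[tcpout]
# no defaultGroup set

[tcpout:primary_indexers]
server = indexer1:9997, indexer2:9997

[syslog:my_syslog_group]
server = IP:514
type = tcp

With no defaultGroup, each input then has to name its route explicitly, either via _TCP_ROUTING in inputs.conf or via a props/transforms rule.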

The config for the data you want to go to Splunk would look something like this (don't copy mine verbatim, as it never worked quite right, but it should give you the idea):

inputs.conf
[monitor://\iis.log]
disabled = 0
ignoreOlderThan = 180d
sourcetype = iis
_TCP_ROUTING = primary_indexers

props.conf
[iis]
TRANSFORMS-iis = send_to_splunk

transforms.conf
[send_to_splunk]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = primary_indexers


Communicator

Regarding apps, forwarders, and indexers: 80 apps on 9 clustered HFs receiving from 10,000 UFs feeding 6 indexers. I don't want to route ONLY to syslog. In my originally posted configs, the long line of servers in outputs.conf (server=*****) was simply redacted. At the end of that conf is what I thought was the "additional" route stanza that would handle this.

Maybe the problem is not with my outputs.conf but with the other configs, or with the fact that my inputs.conf is simply a listener for the 10,000 UFs. Since indexing does not occur until further down the line, there is no "content", simply receive and forward. Am I overthinking this? 😞

INPUTS.CONF

[splunktcp-ssl:11001]
compressed=true
sslVersions = *,-ssl
queueSize = 128MB
persistentQueueSize = 30GB
disabled = false
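
On the question of what the props/transforms rules key on when the HF only has a splunktcp listener: props.conf stanzas match on the sourcetype, source, or host already attached to the incoming events, not on a local monitor input, and (as I understand the pipeline) they are applied when the HF parses the stream from the UFs. So a sketch for the original goal, reusing the stanza names from the first post, would be:

props.conf
[WinEventLog:Security]
TRANSFORMS-windows_security_events = send_to_arcsight

transforms.conf
[send_to_arcsight]
REGEX = EventCode=4725
DEST_KEY = _SYSLOG_ROUTING
FORMAT = windows_events_alert

Whether the events carry WinEventLog:Security as sourcetype or only as source depends on how the UF input is configured, so it's worth confirming with a quick search before choosing between [WinEventLog:Security] and [source::WinEventLog:Security].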