Hi community,
I'm wondering if it's possible to forward a specific index in Splunk to a third-party system or SIEM such as QRadar.
I have read that this is possible with a heavy forwarder (HF), but I don't fully understand it.
If yes, please give me an approach to do this.
thanks
@tscroggins I did the following ...
outputs.conf
[tcpout:tmao]
server = xxx.xxx.xxx.xxx:9997
# Forward data for the "tmao" index
forwardedindex.0.whitelist = tmao
sendCookedData = false
props.conf
[source::udp:1517]
TRANSFORMS-routing = route_to_tmao_index
transforms.conf
[route_to_tmao_index]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = tmao
Is my configuration good? I want to forward the tmao index to another third-party system ...
thanks
Hi @KhalidAlharthi,
Forwarding filter settings (forwardedindex.*) are only valid globally. See "Index Filter Settings" at https://docs.splunk.com/Documentation/Splunk/latest/Admin/Outputsconf. If you want to conditionally forward a set of indexes to every configured output, you can use this configuration. This is useful, for example, when an intermediate forwarder receives data from upstream forwarders that are outside your control, and you only need to forward a subset of events targeting specific indexes.
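For example, on an intermediate heavy forwarder you could forward only the tmao index to every downstream indexer with a global filter. A minimal sketch (the group name and server address are placeholders, and the default forwardedindex filters for the internal _* indexes still layer underneath):
# outputs.conf
[tcpout]
defaultGroup = downstream
# forwardedindex.* settings are only honored here, in the global [tcpout]
# stanza; they are ignored in [tcpout:<group>] stanzas.
forwardedindex.0.whitelist = tmao

[tcpout:downstream]
server = xxx.xxx.xxx.xxx:9997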
If you have a dedicated heavy forwarder receiving data on 1517/udp, and you want to clone that data to a syslog destination, you can add the _SYSLOG_ROUTING setting to the input stanza:
# inputs.conf
[udp:1517]
index = tmao
connection_host = none
no_priority_stripping = true
no_appending_timestamp = true
sourcetype = tmao_sourcetype
_SYSLOG_ROUTING = send-to-remotesiem
# outputs.conf
[syslog:send-to-remotesiem]
server = remotesiem:514
# Set type = tcp or type = udp; TLS is not supported.
# If you use a hostname in the server setting, I recommend using a local
# DNS resolver cache, e.g. bind (named), systemd-resolved, etc.
type = udp
# Use NO_PRI to indicate _raw already includes PRI header.
priority = NO_PRI
# syslogSourceType tells Splunk that tmao_sourcetype already has syslog
# headers. If you do not use the syslogSourceType setting, Splunk
# will prefix _raw with additional host and timestamp data.
syslogSourceType = sourcetype::tmao_sourcetype
# maxEventSize is the maximum payload for UDPv4. Use a value compatible
# with your receiver.
maxEventSize = 65507
Splunk does not differentiate between RFC 3164 and RFC 5424, and _raw should be maintained and forwarded in its native format. This is aided by the inputs.conf no_priority_stripping and no_appending_timestamp settings, which may conflict with your source or source type's expectations for parsing _raw at index and search time. Regardless, Splunk will inject a malformed host value in _raw, e.g.:
In: <0>Jun 9 11:15:00 host1 process[1234]: This is a syslog message.
Out: <0> splunk Jun 9 11:15:00 host1 process[1234]: This is a syslog message.
Your downstream receiver must handle the data appropriately. All of this can be mitigated using complex transforms to rewrite _raw into the format you need, but I recommend asking an additional question specific to that topic after you have syslog routing working.
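To give a flavor of what a _raw rewrite looks like (this is not the fix for the injected host value, just the general shape), here is a sketch that prepends a default <13> PRI header to any event that arrives without one, which keeps priority = NO_PRI safe; the sourcetype and stanza names are placeholders:
# props.conf
[tmao_sourcetype]
TRANSFORMS-ensure_pri = ensure_pri_header
# transforms.conf
[ensure_pri_header]
# Match only events that do not already start with a PRI header and
# rewrite _raw with <13> (user.notice) prepended.
REGEX = ^(?!<\d+>)(.*)$
DEST_KEY = _raw
FORMAT = <13>$1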
Note that Splunk's syslog output can block indexQueue, which also contains tcpout outputs, and prevent Splunk from indexing and forwarding data. The outputs.conf [syslog] stanza dropEventsOnQueueFull setting can help mitigate blocking at the expense of data loss. The syslog output queue size cannot be changed, and the only way to scale syslog output is to increase the server.conf [general] stanza parallelIngestionPipelines setting.
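A sketch of both settings (the values are arbitrary examples, not recommendations):
# outputs.conf
[syslog]
# Allow the syslog output to drop events instead of blocking indexing and
# forwarding when its queue is full.
dropEventsOnQueueFull = 5
# server.conf
[general]
# Each additional pipeline consumes additional CPU and memory.
parallelIngestionPipelines = 2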
Splunk wasn't designed specifically for syslog routing. If you need more control over syslog itself, you may want to use a syslog service like rsyslog or syslog-ng to receive and route the data to both Splunk and another downstream system. There are numerous third-party products with similar capability.
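For example, a minimal rsyslog relay that copies everything received on 1517/udp to both a Splunk heavy forwarder and the remote SIEM could look like this sketch (hostnames and ports are placeholders):
# /etc/rsyslog.d/tmao-relay.conf
module(load="imudp")
input(type="imudp" port="1517" ruleset="tmao")
ruleset(name="tmao") {
    # Send one copy to the Splunk heavy forwarder and another to the remote SIEM.
    action(type="omfwd" target="splunk-hf.example.com" port="1517" protocol="udp")
    action(type="omfwd" target="remotesiem" port="514" protocol="udp")
}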
Hi @KhalidAlharthi,
The basic process is documented at https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Forwarddatatothird-partysystemsd. Summarizing: define a transform that matches the events you want to route, set _TCP_ROUTING to the name of a tcpout group, and configure that group in outputs.conf.
For example, to redirect all index=foo events from a heavy forwarder to a remote SIEM on port 1234:
# props.conf
[default]
TRANSFORMS-routing = send_foo_to_remote_siem
# transforms.conf
[send_foo_to_remote_siem]
REGEX = foo
SOURCE_KEY = _MetaData:Index
DEST_KEY = _TCP_ROUTING
FORMAT = remote_siem
# outputs.conf
[tcpout:remote_siem]
server = remotesiem:1234
sendCookedData = false
If defined on an indexer, the events will be indexed locally and forwarded. Note that when using [default], all events will be inspected.
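If you'd rather not inspect every event, you can scope the props.conf stanza to a specific source or source type instead of [default], for example (the names are placeholders):
# props.conf
[source::udp:1517]
TRANSFORMS-routing = send_foo_to_remote_siem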
The exact settings you need depend on your Splunk architecture and the remote SIEM. I would start by reading Splunk Enterprise Forwarding Data at https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Aboutforwardingandreceivingdata and asking new questions as needed.
thanks for your reply
@tscroggins Can I forward using syslog instead of TCP, because the TCP handshake takes time? ...
thanks again....