Hi,
I am getting a weird issue. If the syslog server fails, it stops all data from being indexed by the default tcpout group, and then Splunk fills its buckets and falls over. Am I missing something to make it continue if it can't connect to one output?
cat outputs.conf
[syslog]
defaultGroup = xxxxx_indexers
[syslog:xxxxx_indexers]
server = xxx.xxx.xxx.xxx:9997
type = tcp
timestampformat = %Y-%m-%dT%T.%S
cat transforms.conf
[mehRouting]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = xxx_cluster_indexers
[Routing_firewalls]
SOURCE_KEY = MetaData:Sourcetype
REGEX = (fgt_traffic|fgt_utm)
DEST_KEY = _SYSLOG_ROUTING
FORMAT = xxxx_indexers
cat props.conf
[host::xxxxxxx1c]
TRANSFORMS-routing = mehRouting, Routing_firewalls
[host::xxxxxc]
TRANSFORMS-routing = mehRouting, Routing_firewalls
Hi @lukessi,
Can you please provide your full outputs.conf configuration? I can't see xxx_cluster_indexers in the outputs.conf you posted.
Confirmed: if I lose the 3rd-party syslog server, it stops forwarding to our indexers as well.
Based on the answer at https://answers.splunk.com/answers/556975/how-to-forward-data-from-an-indexer-to-a-3rd-party.html , this looks like a known issue when you send data to syslog over TCP: when the syslog server is down, Splunk stops sending data to the indexers as well. It is better to switch it to UDP or raise a case with Splunk Support.
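If it helps, a minimal sketch of what that change might look like in your outputs.conf, assuming the same stanza and server names; the port is kept from your paste, but it would need to match whatever UDP port the 3rd-party syslog server actually listens on (commonly 514):
[syslog]
defaultGroup = xxxxx_indexers
[syslog:xxxxx_indexers]
# UDP is fire-and-forget, so a down syslog server should no longer block the output pipeline
server = xxx.xxx.xxx.xxx:9997
type = udp
timestampformat = %Y-%m-%dT%T.%S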
Ah yes, it's picked up from the default index app we have...
[tcpout]
useACK = true
defaultGroup = xxx_cluster_indexers
disabled = false
[tcpout:xxx_cluster_indexers]
server = index1:9997,index2:9997