Getting Data In

Splunk to syslog with raw files

New Member

So here’s my situation:

Multiple CentOS boxes running Suricata IDS.

Suricata logs events to both:

/opt/log/suricata/eve.json (basically raw JSON objects)


/opt/log/suricata/fast.log (a syslog style summary of events)

The Suricata boxes have a UF on them that forwards the contents of those files to my indexers. That works fine: everything's indexed, searchable, and great.

However, I also need to send these same logs via syslog to a third destination. For reasons, the easiest way to do this is to set up syslog forwarding on the indexers (I know I can't do it on the UFs, and indexers are basically HFs with extra stuff), which is something I've done before for other things and it has worked fine.

Here are the relevant snippets of the config on the indexers:


# outputs.conf
[syslog]
#defaultGroup = syslogtest

[syslog:syslogtest]
type = tcp
server = 10.x.x.x:9997
priority = <182>
maxEventSize = 8192
timestampformat = %b %e %H:%M:%S

Couple of quick notes:

  • Yes, I know I'm using port 9997 as the output port. That's just me being creative with the firewall rules in place. There's an rsyslog instance listening on that server on port 9997, not Splunk.
  • The REGEX in transforms.conf is there to match specific hostnames. It behaves the same with REGEX = . (match everything).
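
For reference, the props/transforms pairing I'm describing looks roughly like this (the sourcetype and transform names here are illustrative, not my exact config):

```ini
# props.conf (sourcetype name is illustrative)
[suricata:eve]
TRANSFORMS-syslogroute = send_to_syslogtest

# transforms.conf
[send_to_syslogtest]
# in practice this matches specific hostnames; "." matches everything
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslogtest
```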

With this, the events from fast.log arrive on the destination syslog server just fine. However, the events from eve.json do not - they’re just nowhere to be found.

If I use the [syslog] stanza in outputs.conf with defaultGroup (the piece that's commented out), the events do come over, but as part of a giant flood of everything. Not great. But with the specific [syslog:foo] group stanzas, it just won't work.

Similarly, I have another application that writes its output to 'raw' files (not JSON objects, just raw log data in KV pairs). Those files show up in Splunk too, but the events don't get forwarded when I try to send them via syslog.

Any ideas? It smells like a bug to me but I don’t know if I’m missing anything.


Ultra Champion

A few thoughts:

Have you checked with a network sniffer (e.g. tcpdump) whether the data is actually being sent from the indexers and received at the destination? It could also be that the receiving syslog daemon doesn't like the data and silently drops it.

Is that json data single line, or multiple lines per event?

As an alternative, you could also consider configuring a syslog daemon on the Suricata boxes to read from the files and forward to the destination syslog server.
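
Something along these lines with rsyslog's imfile module, for example (file path aside, the tag, facility and destination here are illustrative):

```ini
# /etc/rsyslog.d/60-suricata.conf (illustrative)
module(load="imfile")

input(type="imfile"
      File="/opt/log/suricata/eve.json"
      Tag="suricata-eve:"
      Severity="info"
      Facility="local6")

# forward everything tagged above to the destination over TCP
local6.* @@syslog-dest.example.com:514
```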


New Member

I don't have access to the network traffic to see if the events are even being sent out of the indexers; I suspect not, because I do see them when I use defaultGroup in the [syslog] stanza. The receiver shouldn't be the issue: I can telnet to it and type nonsense and it records it...

Multiple lines per event, and they're definitely large (which is one of the reasons I have maxEventSize set fairly high).

I could set up rsyslog on the Suricata boxes, but I really want to avoid that if I can; it's not easy to manage cleanly even with Ansible, and I think in the past they've had issues between syslog, systemd and the event rate... To me, this should work as I've configured it, and I'd rather get it working this way before trying something else.



An indexer cannot forward logs the way a UF or HF does. There is the "Splunk App for CEF" as a method of forwarding already-indexed logs.


New Member

It definitely can forward logs via syslog: I have it working for Windows logs, regular syslog data, and even different files in the same directory on the original endpoint.

The raw JSON file should work just like those, but it doesn't.
