I want to send the same JSON-encoded structures over the HTTP Event Collector/REST API as well as over syslog UDP/TCP.
One of the fields in the structure is sourcetype=JSON, and I have a proper entry for JSON in props.conf.
Yet when syslog messages come in on udp:514, they are tagged sourcetype=udp:514, and the fields don't get extracted.
I suppose I could enable JSON parsing for udp:514, but this seems wrong, since the majority of syslog data is not structured.
How can I "deflect" these specific messages to be handled as a different sourcetype?
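For context, the kind of index-time override I had in mind would look roughly like this (a sketch I have not verified; the transform name and regex are my own guesses, and I understand this only takes effect at parse time on an indexer or heavy forwarder):

```ini
# props.conf -- run a transform on everything arriving on udp:514
[source::udp:514]
TRANSFORMS-set_json = force_json_sourcetype

# transforms.conf -- retag events that look like JSON objects
[force_json_sourcetype]
REGEX = ^\s*\{
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::JSON
```

Events not matching the regex would keep the default udp:514 sourcetype, so ordinary unstructured syslog traffic should be unaffected.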
(I was sure to copy-paste this...)
Thank you VERY MUCH for your speedy suggestion.
I tried dedicated syslog "data inputs" (with TCP:515, for instance), but no luck.
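For the record, the dedicated input I tried looked roughly like this in inputs.conf (a sketch, not my exact config):

```ini
# inputs.conf -- dedicated TCP input with an explicit sourcetype
[tcp://515]
sourcetype = JSON
connection_host = ip
```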
I ended up serializing the structures as an attribute=value list when sending to syslog, and with encode_json when sending over HTTP.
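As a sketch of the two serializations (the function names and sample event here are hypothetical, not my actual code):

```python
import json

def to_kv(event):
    """Serialize a flat dict as a field=value list for syslog."""
    return " ".join(f"{k}={v}" for k, v in event.items())

def to_json(event):
    """Serialize the same dict as JSON for the HTTP Event Collector."""
    return json.dumps(event)

event = {"sourcetype": "JSON", "action": "login", "user": "alice"}
print(to_kv(event))    # sourcetype=JSON action=login user=alice
print(to_json(event))  # {"sourcetype": "JSON", "action": "login", "user": "alice"}
```

Splunk extracts field=value pairs automatically at search time, so both encodings end up queryable by the same reports.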
I'll update this question later on if I come across a good way to accomplish my initial goal, or I become more fluent with Splunk.
Right. I did not see fields extracted.
Since I've been "talking Splunk" for ~two weeks only, I probably did things wrong.
Since the events now share common ground (and can all be picked up by the same set of reports), I am good to go.
Thank you so much
No worries - you'll save money by using field=value instead of JSON, since it takes up less space 🙂
After you get more comfortable, have a look on Splunk Answers for other questions relating to JSON field extractions; I am sure it will make more sense to you.
You should probably handle this with syslog itself. That is, have syslog send these messages to a different port, and create an alternative input on your Splunk indexer.
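For example, with rsyslog the routing side could be sketched like this (the filter condition, destination host, and port are assumptions, and note that $msg often carries a leading space):

```
# /etc/rsyslog.d/50-json-to-splunk.conf
# Forward messages that look like JSON objects to a dedicated Splunk TCP port
if $msg startswith ' {' or $msg startswith '{' then @@splunk-indexer:515
```

The matching Splunk input on port 515 can then assign sourcetype=JSON unconditionally, leaving udp:514 untouched for regular syslog traffic.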
Even better, have a server that collects all your syslog messages centrally, and install a universal forwarder on that host. That way, if something happens to Splunk, you can still collect the data.
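On that central collector, the universal forwarder input might look something like this (the file path and sourcetype are hypothetical; adjust to wherever your syslog daemon writes its files):

```ini
# inputs.conf on the universal forwarder
[monitor:///var/log/remote/*.log]
sourcetype = syslog
disabled = false
```

Because the syslog daemon keeps writing to disk even when Splunk is down, the forwarder simply picks up where it left off once the indexer is reachable again.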