Getting Data In

Sending JSON logs via rsyslog.

gitingua
Communicator

Hi colleagues, hope everyone is doing well! I need some advice.

I have a server that writes logs to /var/log/test_log.json.

On the Splunk side, I opened a port via "Data Input -> TCP".

The logs in test_log.json are written line by line. Example:

{"timestamp":"2025/02/27 00:00:15","description":"Event 1"}
{"timestamp":"2025/02/27 00:00:16","description":"Event 2"}
{"timestamp":"2025/02/27 00:00:17","description":"Event 3"}

Does anyone have a ready-made rsyslog configuration for reading this log file correctly?

The file is continuously updated, with each new entry on its own line. I want rsyslog to tail the file and send each newly appearing line to Splunk as a separate event.

Has anyone set this up before and could share a working configuration?

Thank you!


gcusello
SplunkTrust

Hi @gitingua ,

there is a little confusion here: do you want to ingest the logs using rsyslog or a TCP input?

They are two different ways to ingest syslogs:

Using rsyslog, you use rsyslog to receive the logs and write them to a text file, which you then read with a file monitor input in Splunk.

Using a TCP input, you configure the input in Splunk and forward the logs directly to it, without rsyslog.

The second solution is easier to implement, but it works only while Splunk is up; during a restart, you lose syslogs.

For this reason, the rsyslog solution is preferable, even though you have to configure both rsyslog and a file monitor input in Splunk; a sketch of the monitor input is below.
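If you go the rsyslog route, the Splunk side is just a file monitor. A minimal inputs.conf sketch for the forwarder could look like this (the path, index, and sourcetype names are illustrative):

# monitor the directory tree that rsyslog writes to
[monitor:///data/syslog/your_technology]
index = your_index
sourcetype = your_technology:json
disabled = false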

Finally, since the format is JSON, remember to add INDEXED_EXTRACTIONS = JSON to your props.conf; this way all the fields are extracted automatically.
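For your sample events, a props.conf sketch could look like this (the sourcetype name is illustrative; the timestamp settings match the format in your example lines):

[your_technology:json]
INDEXED_EXTRACTIONS = json
SHOULD_LINEMERGE = false
# take the event time from the "timestamp" field in each JSON line
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y/%m/%d %H:%M:%S

Note that INDEXED_EXTRACTIONS is applied where the file is first read, so this stanza belongs on the forwarder as well as on the search head.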

Ciao.

Giuseppe


gitingua
Communicator

@gcusello 

Sorry, I may have confused you. Let me try to lay it out clearly.

I have server.host — this is where the test_log.json log is being collected.
There is also splunk.test.host — I configured a Data Input, opened port 765 TCP, assigned it the index test_index, and set the sourcetype to _json.

The setup on the splunk.test.host side is complete, and all network access is in place.

Now, on the server.host side:
In /etc/rsyslog.d/, I created a file called send_splunk.conf.
In this config file, I specify the address splunk.test.host, port 765, and the TCP protocol.

However, I’m having trouble correctly configuring /etc/rsyslog.d/send_splunk.conf so that rsyslog reads the test_log.json file and sends each new line to Splunk as it appears in the file.
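For reference, this is roughly what I am aiming for in send_splunk.conf; a rough, untested sketch based on the rsyslog imfile and omfwd modules, so corrections are welcome:

# read newly appended lines from the JSON log file
module(load="imfile")

# forward only the raw line, one event per line
template(name="jsonLine" type="string" string="%msg%\n")

ruleset(name="toSplunk") {
    action(type="omfwd" target="splunk.test.host" port="765" protocol="tcp" template="jsonLine")
}

input(type="imfile" File="/var/log/test_log.json" Tag="test_log" ruleset="toSplunk")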


gcusello
SplunkTrust

Hi @gitingua ,

sorry if I wasn't clear: you have to choose one solution, a TCP input in Splunk or rsyslog, not both of them!

If you choose the first (and you already did!), you get the syslogs in Splunk.

If you want rsyslog, delete the other configuration and create a new one in /etc/rsyslog.d/. To configure rsyslog, you can follow the rsyslog documentation.

A sample of rsyslog configuration is:

# load the TCP listener module (the input below uses imtcp, so load imtcp, not imudp)
module(load="imtcp")

# write each received message to a per-source, per-date file that Splunk can monitor
ruleset(name="your_ruleset"){
        action(type="omfile" file="/data/syslog/your_technology/%fromhost-ip%/%$YEAR%/%$MONTH%/%$DAY%/your_technology.log" fileOwner="splunk" fileGroup="splunk" dirOwner="splunk" dirGroup="splunk")
}

# listen on TCP port 765 and route everything to the ruleset above
input(type="imtcp" port="765" ruleset="your_ruleset")

But read the documentation.

Ciao.

Giuseppe


isoutamo
SplunkTrust

I'm just wondering: if you already have those events in a file on a Linux node, why not read them directly with a Universal Forwarder (UF)? That's a much better solution than adding rsyslog as an extra component. Of course you can do it if it's mandatory for some reason, but personally I try to avoid that kind of additional component. A sketch of the direct approach follows.
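A minimal inputs.conf sketch for a UF on server.host (the index comes from your earlier post; the sourcetype name is illustrative, see my next point about it):

[monitor:///var/log/test_log.json]
index = test_index
sourcetype = test_app:json
disabled = false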

Another thing: you should never use _json as the sourcetype for any real data source. It's like syslog; it basically says that the file contains some JSON data, and nothing else. In Splunk the sourcetype should always tell you what the format of the event is. As you know, there are many different JSON schemas, and you should have a separate sourcetype name for each. Just copy the definition of the _json sourcetype into your own sourcetype, as sketched below.
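For example, a props.conf sketch that copies the essentials of the _json definition into a named sourcetype (test_app:json is an illustrative name):

[test_app:json]
INDEXED_EXTRACTIONS = json
# fields are already extracted at index time, so disable search-time KV extraction
KV_MODE = none
category = Structured
pulldown_type = 1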

You should already have defined naming standards for all your Splunk knowledge objects (KOs) to keep your environment in a manageable state.

There are several different naming standards out there; review them and use what is best for your company, or create your own by combining parts of others.
