Getting Data In

How do I configure a universal forwarder to forward two syslog streams from one IP address to two different indexes?

rene847
Path Finder

Hi,
I have a Linux server and syslog is working. Here is the UF config (inputs.conf):

[udp://xx.xx.xx.140:514]
        host = MyHostName
        acceptFrom = xx.xx.xx.140
        index = TI-LNXEVENTS
        disabled = false

Today there is a new project. They want to add, on the same server, the logs from a specific piece of software under the index "TI-lav". But I already send logs (events) to Splunk with the index "TI-LNXEVENTS".

I made a test with this (in inputs.conf):

[udp://xx.xx.xx.140:514]
        source = lav_syslog
        sourcetype = lav:prod
        host = MyHostName
        connection_host = none
        acceptFrom = xx.xx.xx.140
        disabled = false
        index = TI-lav

But it still sends everything to the index TI-LNXEVENTS.

My question: I would like to have two log streams and two indexes from this server. What is the best practice?

Best Regards

1 Solution

ekost
Splunk Employee

The best practice for ingesting the same log/stream/event twice is: don't do it. Unless the second destination index is completely isolated from being searched by most users (e.g. via limited role access), any search across both indexes would turn up duplicate events. That causes more havoc later with any search that needs to count those events. Use search-time knowledge to stitch the data together, as long as there are few or no security restrictions on who can search either index.
Example: use a macro that searches TI-LNXEVENTS for the syslog events (by host, source, or sourcetype, whichever is easier), and teach the other team how to integrate the macro into their searches. I'm sure there are other cool ways to do this as well.
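As a minimal sketch of that macro idea (the macro name and the sourcetype filter below are assumptions, not something given in this thread), a shared macro could be defined in macros.conf on the search head:

[lav_events]
        definition = index=TI-LNXEVENTS sourcetype="lav:prod"
        iseval = 0

The other team would then call it in their searches with backticks, for example:

        `lav_events` | stats count by host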

esix_splunk
Splunk Employee

It depends on the purpose and the reason behind this. As mentioned, you shouldn't double-index data unless there is a specific requirement for segregation of data where RBAC (role-based access control) won't be sufficient. (Double data = double license usage.)

While not best practice, the quickest way to achieve what you specifically want would be to install an additional UF instance on the machine, and configure an inputs.conf on that UF to monitor the application data and index it to the TI-lav index.

There is a trade-off here, of course: an additional client to manage and maintain on the host.
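Just as an illustration of what that second instance's configuration might look like (the log file path and sourcetype here are assumptions), its inputs.conf could contain a single monitor stanza pointed at the application's log file:

[monitor:///var/log/lav/lav.log]
        index = TI-lav
        sourcetype = lav:prod
        disabled = false

Keep in mind that a second UF instance on the same host needs its own installation directory and its own management port so it does not collide with the first one.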

esix_splunk
Splunk Employee

This is simple for Splunk.

First you need to identify how and where your application logs on the file system. Once you know that, create a new monitor stanza in inputs.conf.
In that monitor stanza you can define the index and the sourcetype for the application log files.
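As a rough sketch, assuming the application writes to a file such as /var/log/lav/lav.log (the path and sourcetype are assumptions), the existing UF could carry both inputs side by side in inputs.conf:

[udp://xx.xx.xx.140:514]
        host = MyHostName
        acceptFrom = xx.xx.xx.140
        index = TI-LNXEVENTS
        disabled = false

[monitor:///var/log/lav/lav.log]
        index = TI-lav
        sourcetype = lav:prod
        disabled = false

The syslog stream keeps going to TI-LNXEVENTS, while the file monitor sends the application's own log to TI-lav.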

rene847
Path Finder

Thank you for your reply, I appreciate it.

However, the Linux events are one thing (TI-LNXEVENTS); these are events specific to the operating system only.

The other index (TI-lav) is for a specific application, and this application generates connections and events of its own, never recorded in the Linux events.

That's my problem: one IP and two indexes from the same server.
Do you have any ideas?

ekost
Splunk Employee

Hello, you'll have to be very clear about your use case and the details to get a good response. I am guessing that:
1. All system and application events coming from an unnamed Linux host are being sent to Splunk via syslog.
2. You want to parse the syslog stream and separate application events from system events by index.
3. If possible, tag system and application events with unique sourcetypes.

If that's the case, you would:
1. Get the events to Splunk the same way you already are, but add a sourcetype in the inputs.conf.
2. On the main Splunk instance (or indexer), write a props.conf/transforms.conf combination that routes events to the destination index based on the sourcetype and a regex match. The setting you want is _MetaData:Index to change the destination index. There are partial examples in several places on Answers; a sketch follows below.
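A minimal sketch of that combination, assuming the syslog events arrive with sourcetype syslog and that the application's syslog program name is lav_daemon (both names are assumptions for illustration), placed on the indexer or a heavy forwarder:

props.conf:

[syslog]
        TRANSFORMS-route_lav = route_lav_to_index

transforms.conf:

[route_lav_to_index]
        REGEX = lav_daemon
        DEST_KEY = _MetaData:Index
        FORMAT = TI-lav

Events of sourcetype syslog whose raw text matches the regex get written to TI-lav; everything else keeps the index defined on the input (TI-LNXEVENTS).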

jeffland
SplunkTrust

If you want to distinguish the events of the Linux system from those of the application in a search, you could go by sourcetype, if that is what you were trying to accomplish.
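For example, assuming the application events end up tagged with the sourcetype lav:prod from the test config in the question (an assumption about how the inputs are finally set up), the two kinds of events can be separated at search time even when they live in the same index:

        index=TI-LNXEVENTS sourcetype="lav:prod"

for the application events, and

        index=TI-LNXEVENTS NOT sourcetype="lav:prod"

for the operating-system events.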
