Getting Data In

enforce linebreaking at Indexer after Heavy Forwarder override sourcetype

fernandoandre
Communicator

At the Indexer level, how can I force a props.conf line-breaking setup to be applied to a specific sourcetype for data arriving from a Heavy Forwarder?

I have a Heavy Forwarder (HF) which is monitoring a folder where syslog events are being written to a file. Afterwards the HF sends the events to the Indexer (IDX) to be indexed.

Since syslog events are arriving from several sources, we need to classify them with different sourcetypes. We use props.conf and transforms.conf on the HF to do so, as follows:

HF props.conf

[syslog]
TRANSFORMS-set = force_syslog_SW, force_syslog_FW, force_syslog_ESX

HF transforms.conf

[force_syslog_SW]
DEST_KEY = MetaData:Sourcetype
REGEX = myregex1
FORMAT = sourcetype::cisco_switch

[force_syslog_FW]
DEST_KEY = MetaData:Sourcetype
REGEX = myregex2
FORMAT = sourcetype::cisco_firewall

[force_syslog_ESX]
DEST_KEY = MetaData:Sourcetype
REGEX = myregex3
FORMAT = sourcetype::esxi

The above works very well. However, one of these sourcetypes (esxi) has multiline events that must be handled. I have tried to use props.conf at the Indexer to accomplish this, but it's not working and I can't figure out why!

Can anybody help or provide ideas to solve this and enforce line breaking? I have tried this both at the Indexer and at the HF, but neither works.

Indexer props.conf

[esxi]
MUST_BREAK_AFTER=\[[A-F0-9]{8}\s+\w+\s+\\\'
MUST_NOT_BREAK_AFTER=\-\->    
NO_BINARY_CHECK=1    
SHOULD_LINEMERGE=true
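
An alternative worth noting is to skip line merging entirely and define event boundaries directly with LINE_BREAKER, which is generally faster than SHOULD_LINEMERGE. This is only a sketch: the boundary regex shown is illustrative and not derived from real esxi data, so you would need to adapt it to whatever pattern actually starts each event.

```
[esxi]
# LINE_BREAKER's first capture group marks the event boundary in the raw
# stream; the text matched inside the group is discarded. This assumes
# each new event begins with an ISO-style timestamp (illustrative only).
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}T
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = 1
```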

Thank you in advance.

1 Solution

kristian_kolb
Ultra Champion

The problem is that when the HF sends the data to the indexer, it's already 'parsed' data and will not go through the parsing stage again. That is not a bug in any way, but rather the main idea behind having a HF at all - to offload the indexer.

As you might know from the docs, the parsing phase is where linebreaking takes place. http://wiki.splunk.com/Where_do_I_configure_my_Splunk_settings%3F

Why not do it all on the HF? Put the [esxi] settings from the indexer's props.conf into the HF's props.conf. Or... hmm... maybe that won't work. Splunk would have to go through props.conf twice - once to find out that it has to do transforms, and then again to apply props for the newly defined sourcetype. Then again, you're probably not the first person to have this problem, so perhaps it works anyway. I have not done this myself before, but it should be rather easy for you to test.

Another approach is to have the syslog-daemon split the incoming data stream into different files and apply the sourcetype in the monitor stanza for the file in question.
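
The second approach can be sketched as follows, assuming rsyslog and using hypothetical host addresses and file paths - adjust the filters to match your actual sources:

```
# /etc/rsyslog.d/10-split.conf (hypothetical addresses and paths)
if $fromhost-ip == '192.0.2.10' then /var/log/remote/esxi.log
& stop
if $fromhost-ip == '192.0.2.20' then /var/log/remote/cisco_fw.log
& stop
```

Then monitor each file with an explicit sourcetype in inputs.conf on the HF:

```
[monitor:///var/log/remote/esxi.log]
sourcetype = esxi

[monitor:///var/log/remote/cisco_fw.log]
sourcetype = cisco_firewall
```

With the sourcetype assigned at input time, no sourcetype-rewriting transforms are needed, and the [esxi] line-breaking settings in the HF's props.conf apply during the parsing phase as usual.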

Hope this helps,

Kristian



kristian_kolb
Ultra Champion

You are most welcome! /k


fernandoandre
Communicator

PROBLEM SOLVED.
You're completely right:
UniversalForwarder -> Heavy Forwarder -> Indexer
Input -> Parsing -> Indexing, Search
However, my goal was precisely to force another parsing pass at the Indexer level, since once data goes through "props.conf -> transforms.conf" it doesn't return to props.conf again at the HF level. I tested that, and I was unable to force a new parsing pass, either at the Indexer or at the HF.

I solved the problem with the second approach you suggested: I configured the syslog daemon to write the data arriving from different sources to different files. With that, I just tagged the data accordingly. Thank you.


splunk24
Path Finder

Hi, may I know how you split the data? What configuration changes did you make in the syslog daemon?
Please respond.


fernandoandre
Communicator

Just adding more information.

1) The regex for breaking multiline events works well. I have tested it.

2) Also, I collected several (and only) esxi events in a file and monitored that file on the HF with a test sourcetype. If I use props.conf at the HF for that particular test sourcetype, the events arrive at the Indexer broken as expected. If I disable the props.conf at the HF and enable it at the Indexer, the events don't get broken.

Still trying to figure out what's occurring here. Any ideas?
