Getting Data In

Problem with Line breaking between Splunk 6.2.3 vs 6.3.0

Communicator

We have a development environment (a replica of prod) running Splunk 6.2.3 (upgraded from 6.1.5). I am testing monitoring of a file containing SNMP traps received by net-snmp snmptrapd on a *nix platform.

Earlier this week I upgraded Splunk from 6.1.5 to 6.3.0 on a new standalone instance in the test environment to validate the new feature set. Importing the SNMP trap file was one of the things I tested.

I am noticing that line breaking doesn't seem to work on the upgraded 6.3.0 release. Is anyone else seeing this?

In the 6.2.3 release, only the first event breaks incorrectly; all other events break correctly, with or without the TA.

In the 6.3.0 release, the events are getting merged together.

Note: I added the events using the oneshot method.

To force line breaking on both releases, I created a props.conf with the default values below, but the behavior is the same:

[snmptrap:generic]
TIME_FORMAT = %Y-%m-%d %H:%M:%S
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true

Sample Traps logged as below:

`2015-09-25 11:30:13 10.11.12.13(via UDP: [trapforwarder]:162->[traprec]) TRAP, SNMP v1, community testing

    .1.3.6.1.4.1.6827.10.17.7.1 Enterprise Specific Trap (1035) Uptime: 22 days, 19:41:52.45

    .1.3.6.1.4.1.6827.10.17.3.1.1.1.1 = INTEGER: 1

2015-09-25 11:30:13 10.11.12.13(via UDP: [trapforwarder]:162->[traprec]) TRAP, SNMP v1, community testing

    .1.3.6.1.4.1.6827.10.17.7.1 Enterprise Specific Trap (1034) Uptime: 22 days, 19:41:53.07

    .1.3.6.1.4.1.6827.10.17.3.1.1.1.1 = INTEGER: 1

2015-09-25 11:30:14 10.11.12.13(via UDP: [trapforwarder]:162->[traprec]) TRAP, SNMP v1, community testing

    .1.3.6.1.4.1.6827.10.17.7.1 Enterprise Specific Trap (1035) Uptime: 22 days, 19:41:53.71

    .1.3.6.1.4.1.6827.10.17.3.1.1.1.1 = INTEGER: 1`
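For reference, the intended behavior of those defaults (SHOULD_LINEMERGE = true with BREAK_ONLY_BEFORE_DATE = true) can be modeled roughly like this. This is a toy Python sketch of the merging rule, not Splunk's actual parser, and the trap lines are trimmed-down versions of the samples above:

```python
import re

# A line that starts with a date like "2015-09-25 11:30:13" begins a new event.
DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

def merge_lines(lines):
    """Toy model of SHOULD_LINEMERGE + BREAK_ONLY_BEFORE_DATE:
    start a new event only when a line begins with a date; otherwise
    append the line to the current event."""
    events = []
    for line in lines:
        if DATE_RE.match(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

lines = [
    "2015-09-25 11:30:13 10.11.12.13 TRAP, SNMP v1, community testing",
    "    .1.3.6.1.4.1.6827.10.17.7.1 Enterprise Specific Trap (1035)",
    "2015-09-25 11:30:14 10.11.12.13 TRAP, SNMP v1, community testing",
    "    .1.3.6.1.4.1.6827.10.17.3.1.1.1.1 = INTEGER: 1",
]
print(len(merge_lines(lines)))  # expect 2 events
```

With the sample traps, every OID line should attach to the preceding dated line, yielding one event per timestamp; the reported 6.3.0 behavior (all events merged) is what you would see if the date at the start of line was not being recognized.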

1 Solution

Legend

I have noticed several other similar questions, so the answer may be "yes."

Where did you create the props.conf file? It should be on the indexer - or on the heavy forwarder if you are using one.

Are you sure that the name of the stanza matches the sourcetype of the incoming data?

It should not matter which type of input (oneshot, monitor, etc) you choose, as the line-breaking is done at parsing time, not input time.

Finally, if you are going to supply the props.conf anyway, I suggest that you add the following line to speed processing:

MAX_TIMESTAMP_LOOKAHEAD = 20



SplunkTrust

I'd recommend moving to `SHOULD_LINEMERGE = false` and using a `LINE_BREAKER`. Something like this should be much more consistent:

[snmptrap:generic]
TIME_FORMAT = %Y-%m-%d %H:%M:%S
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}
MAX_TIMESTAMP_LOOKAHEAD = 20

This will look for one or more newline characters followed by your timestamp, and will then break the event on the newline characters. I find this to be significantly more consistent and more performant.
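A quick way to sanity-check that LINE_BREAKER regex outside Splunk is to split sample data with it in Python. One caveat: Splunk discards only the first capture group (the newlines) and keeps the timestamp with the next event, so the sketch below uses a lookahead for the timestamp to emulate that. Treat it as an approximation, not Splunk's actual breaker:

```python
import re

# LINE_BREAKER from the props.conf above, with the timestamp moved into a
# lookahead so the split consumes only the newlines (as Splunk does with
# the first capture group).
LINE_BREAKER = r"([\r\n]+)(?=\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2})"

sample = (
    "2015-09-25 11:30:13 10.11.12.13 TRAP, SNMP v1, community testing\n"
    "    .1.3.6.1.4.1.6827.10.17.7.1 Enterprise Specific Trap (1035)\n"
    "2015-09-25 11:30:14 10.11.12.13 TRAP, SNMP v1, community testing\n"
    "    .1.3.6.1.4.1.6827.10.17.3.1.1.1.1 = INTEGER: 1\n"
)

# re.split keeps captured groups in its output; drop the bare newline
# separators so only the events remain.
events = [e for e in re.split(LINE_BREAKER, sample) if e.strip()]
for e in events:
    print("--- event ---")
    print(e)
```

The indented OID lines do not match the lookahead, so they stay attached to the dated line above them, which is exactly the breaking behavior the original poster wants.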

Legend

I agree, and it should be the fastest mechanism as well.

But I don't often suggest it because of the complexity of the regex in the LINE_BREAKER. If there is any variability in the format, the regex can be fragile and hard to debug.



Communicator

Lisa,

My upgraded test system is all in one server and props.conf is created at $SPLUNK_HOME/etc/apps/TA-snmptrap/local directory.

I can confidently say my configs are correct, as I copied/rsynced them to the 6.2.3 release and they work there. Moreover, based on the sample traps/events, my expectation is that Splunk should have auto-extracted the date and time and broken the events, since according to the documentation SHOULD_LINEMERGE and BREAK_ONLY_BEFORE_DATE default to true.

I had MAX_TIMESTAMP_LOOKAHEAD = 20 in my props, but I removed it, as I read it was best used together with TIME_PREFIX, and in my events the timestamp starts at the beginning of the line.

Anyway, I will continue to test the config this morning.


Legend

You do not need a TIME_PREFIX for MAX_TIMESTAMP_LOOKAHEAD to work, although they are often used together. TIME_PREFIX merely establishes the starting point for timestamp extraction; without it, the starting point is the beginning of the line.

MAX_TIMESTAMP_LOOKAHEAD prevents Splunk from scanning further into the event for a "better" timestamp. While generally not required, it will always make processing faster, because by default Splunk examines the first 150 characters of each event.
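The effect of the lookahead can be illustrated with a small sketch: the timestamp in these traps occupies exactly the first 19 characters, so restricting the scan window to 20 characters loses nothing while skipping the rest of the default 150-character window. This is a simplified Python model of the idea, not Splunk's timestamp extractor:

```python
from datetime import datetime

def extract_timestamp(event, lookahead=20, fmt="%Y-%m-%d %H:%M:%S"):
    """Parse the event's timestamp, scanning only the first `lookahead`
    characters, mirroring MAX_TIMESTAMP_LOOKAHEAD = 20. The trap timestamp
    ("2015-09-25 11:30:13") is exactly 19 characters long."""
    window = event[:lookahead]
    return datetime.strptime(window[:19], fmt)

event = "2015-09-25 11:30:13 10.11.12.13(via UDP: ...) TRAP, SNMP v1"
print(extract_timestamp(event))
```

Anything beyond the window, such as device uptimes or OID values that happen to look date-like, can never be mistaken for the event timestamp.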


Communicator

Several restarts of Splunk Enterprise 6.3.0 with just TIME_FORMAT in my props.conf seem to have resolved the issue. Based on the suggestions above, I added MAX_TIMESTAMP_LOOKAHEAD back into props.conf.

However, I still haven't figured out why events were getting merged right after the upgrade even though props.conf was in place.
