Getting Data In

Log File not breaking correctly

Splunk Employee

Log is similar to this but with many more lines:

Tue Sep 21 00:01:07 MDT 2010 No filename specified, using '*'.
Tue Sep 21 00:01:07 MDT 2010 starting ftp client
---- Resolving host address...
---- 3 addresses found

Props includes:

[getftps]
LINE_BREAKER=(\v+)[^\v]+starting
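For context, Splunk treats the first capture group in LINE_BREAKER as the event delimiter: the whole regex must match at the boundary, but only the text captured by group 1 is discarded between events; the rest of the match stays at the start of the next event. A rough Python sketch of that semantics (Python's re has no PCRE-style \v vertical-whitespace class, so [\r\n] stands in for it, and a lookahead mimics "consume only group 1" — this is an illustration, not Splunk's actual implementation):

```python
import re

# Approximate Splunk LINE_BREAKER semantics: group 1 is the delimiter
# that gets consumed; the lookahead text is required for a match but
# remains part of the next event. [\r\n] approximates PCRE's \v.
line_breaker = re.compile(r"([\r\n]+)(?=[^\r\n]+starting)")

raw = (
    "Tue Sep 21 00:01:07 MDT 2010 No filename specified, using '*'.\n"
    "Tue Sep 21 00:01:07 MDT 2010 starting ftp client\n"
    "---- Resolving host address...\n"
    "---- 3 addresses found\n"
)

# re.split keeps captured delimiters in its result; drop those pieces.
pieces = line_breaker.split(raw)
events = [p for p in pieces if not re.fullmatch(r"[\r\n]+", p)]

for event in events:
    print(repr(event))
```

Run against the sample above, this splits the log into two events: the lone "No filename specified" line, and everything from the "starting ftp client" line onward.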

I have verified that both the regex and the syntax are correct. (Also note that "starting" appears ONLY on the line where we actually want to break, and nowhere else.)
Also, in my testing this file gets broken correctly, but for the customer it doesn't. What gives?

Adding to the question a bit (the props above is one of the few that actually work for me but not for the customer). This props, similar to the one above, also works for me but not for the customer:

[source::/.../getftps.log]
BREAK_ONLY_BEFORE = .*starting.*
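One detail worth checking: BREAK_ONLY_BEFORE only takes effect while line merging is enabled (SHOULD_LINEMERGE defaults to true), so a more explicit version of the stanza might look like this — a sketch, with the path elided as in the original:

```ini
[source::/.../getftps.log]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = .*starting.*
```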

The question is: I can break the log where I want it to break, so why does the same props not work for the customer?

1 Solution

Splunk Employee

OK folks, working with the customer we were able to find out what the issue is:
The events in the log do not get written (appended) to the file all at once. The first line gets appended, then a batch of lines (about 10-20), and lastly the final 30 lines, with a pause of about 3 seconds between each batch.
To test this we did two things:
- We shut down Splunk for around 10 minutes and waited for the log to be populated with new events; when we restarted Splunk, the events got parsed correctly and had the correct breaking.
- We tailed the file, and the customer could see the 3-second delays.

We have already suggested that the customer configure some kind of delay in the reading of the data in order to overcome the 3-second pauses.
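One concrete way to add such a delay — an assumption about their setup, not something confirmed with the customer — is to raise time_before_close in inputs.conf, so the file monitor waits longer than the roughly 3-second write pauses before treating the file as finished:

```ini
# inputs.conf -- monitor path is illustrative
[monitor:///.../getftps.log]
# time_before_close defaults to 3 seconds, which is about the same
# length as the writer's pauses; give it more headroom
time_before_close = 10
```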



Path Finder

I had a similar issue with a props.conf not working at a customer's site, and found a misleading rule in the "learned" app: etc/apps/learned/local/*

It seems that if you do not clean up after first-time testing, there can still be some conflicting transformations left over from automatic sourcetype recognition in this app.

Splunk Employee

And of course: is there a heavy forwarder, a light forwarder, a weird parsing queue, or routing involved?


Splunk Employee

Note that filename may matter, as there are default rules in props.conf based on source:: stanzas.


Influencer

How does the customer's environment differ from the one you've tested on? Different OS? Monitoring/indexing on the same machine, or is the data forwarded?


Communicator

You may want to try setting

SHOULD_LINEMERGE = false

in props.conf
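For reference, SHOULD_LINEMERGE = false is typically paired with an explicit LINE_BREAKER, so the stanza from the question might become — a sketch reusing the original regex:

```ini
[getftps]
SHOULD_LINEMERGE = false
LINE_BREAKER = (\v+)[^\v]+starting
```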

Splunk Employee

Refer to the case update. The issue here is mainly that the props works for me but not for the customer...
