Getting Data In

Is there a way to delay the Splunk universal forwarder from monitoring specific files?

swamysanjanaput
Explorer

Hello,

We have an issue monitoring os_metrics logs. The log entries are generated by the Windows wmic command and written to files under this path: D:\catmstarFiles\systems\main\logs\os_metrics*.log

The problem is that events are distorted even after placing the props below on our heavy forwarders and search head cluster. The same set of files is read correctly if we copy it to a test server and monitor it from there; in real time, however, the events do not break as expected.

So I just wanted to know: is there an attribute that can be used in inputs.conf to slow down how quickly the universal forwarder reads/monitors these files? Or is this something to be handled at the source end, by delaying the writes to this particular path?
Can anyone please advise? If it needs to be handled at the source end, I will reach out to the team concerned and get it discussed. Thanks in advance.

[sourcetype]
SHOULD_LINEMERGE=true
NO_BINARY_CHECK=true
CHARSET=AUTO
BREAK_ONLY_BEFORE=\w+\s+\d+\/\d+\/\d+\s+\d+:\d+:\d+\.\d+
disabled=false
TIME_PREFIX=\w+\s
TIME_FORMAT=%m/%d/%Y %H:%M:%S.%N 

woodcock
Esteemed Legend

The docs (https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf) say this:

time_before_close = <integer>
* The amount of time, in seconds, that the file monitor must wait for
  modifications before closing a file after reaching an End-of-File
  (EOF) marker.
* Tells the input not to close files that have been updated in the
  past 'time_before_close' seconds.
* Default: 3

multiline_event_extra_waittime = <boolean>
* By default, the file monitor sends an event delimiter when:
  * It reaches EOF of a file it monitors and
  * The last character it reads is a newline.
* In some cases, it takes time for all lines of a multiple-line event to
  arrive.
* Set to "true" to delay sending an event delimiter until the time that the
  file monitor closes the file, as defined by the 'time_before_close' setting,
  to allow all event lines to arrive.
* Default: false
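
Applied to the path from the question, a minimal inputs.conf sketch for the universal forwarder could look like the following. The sourcetype placeholder and the 300-second wait are assumptions to adjust for your environment; the wait just needs to be longer than wmic takes to finish writing a batch of lines.

# inputs.conf on the universal forwarder (sketch, not verbatim from the thread)
[monitor://D:\catmstarFiles\systems\main\logs\os_metrics*.log]
sourcetype = <your_sourcetype>
disabled = false
# keep the file open for a while after EOF so late-arriving lines are still read
time_before_close = 300
# hold the event boundary until the file is closed, so multi-line events arrive whole
multiline_event_extra_waittime = true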

joesrepsolc
Communicator

Thanks woodcock. We just put this in place and it solved a problem we'd been struggling with. Adding these two lines to our monitor stanza worked like a champ (the log can take up to 5 minutes to fully populate from the wave of scripts).

time_before_close = 300
multiline_event_extra_waittime = true

Much appreciated!

swamysanjanaput
Explorer

Thank you so much. I just added the following attributes to inputs.conf and it worked like a charm:

time_before_close = 30
multiline_event_extra_waittime = true

arjunpkishore5
Motivator

What exactly do you mean by "events are distorted"? Do you mean they are not in the same order as in the source? If the timestamp format is correct and a match is found in the file, Splunk should break and timestamp the events correctly; if not, Splunk falls back to the index time. I would upload the file through the UI (Add Data) and verify that the timestamp format and the other settings in props.conf break the events and extract the data the way you expect.
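
If uploading a copy through Add Data is awkward, one rough way to see how events are actually breaking in the index is to look at the distribution of lines per event. The index and sourcetype names below are placeholders:

index=<your_index> sourcetype=<your_sourcetype> earliest=-1h
| stats count by linecount
| sort - count

Merged or truncated events usually show up as unexpectedly large or small linecount values.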
