Good morning everyone,
I am trying to ingest a log that does not roll over on its own; it only rolls when the service that writes the log is restarted. We have done some testing using crcSalt, and so far that has not helped us continually monitor the file as it is written.
Any advice would be appreciated.
inputs.conf
[monitor://E:\Tomcat 9.0\logs\tomcat9-stdout.*.log]
sourcetype = test
index = test
blacklist = \.(gz|bz2|z|zip)$
disabled = false
crcSalt = <SOURCE>
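If crcSalt = <SOURCE> alone does not distinguish the file after a restart (for example, when the first 256 bytes of the new file are identical to the old one), raising initCrcLength is another setting worth testing. A hedged sketch of the same stanza; the length value is purely illustrative:
[monitor://E:\Tomcat 9.0\logs\tomcat9-stdout.*.log]
sourcetype = test
index = test
blacklist = \.(gz|bz2|z|zip)$
disabled = false
crcSalt = <SOURCE>
initCrcLength = 1024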
props.conf
[test]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = true
CHECK_FOR_HEADER = false
CHARSET = AUTO
EXTRACT-SessionID = (?<=SessionID:)(?P<SessionID>.+)
EXTRACT-Result = (?<=VerificationResult:)(?P<Result>.+)
EXTRACT-UserName = (?<=User:)(?P<UserName>.+)
EXTRACT-Response = (?<=Account Response:)(?P<Response>.+)
EXTRACT-Second_Response = (?<=Verification_test:)(?P<Second_Response>.+)
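One note on the extractions above: each pattern captures everything from the label to the end of the line. If the values are single tokens (an assumption on my part, since no sample events are shown), bounded captures are less likely to swallow neighboring fields. A hedged sketch:
EXTRACT-SessionID = SessionID:\s*(?P<SessionID>\S+)
EXTRACT-Result = VerificationResult:\s*(?P<Result>\S+)
EXTRACT-UserName = User:\s*(?P<UserName>\S+)
EXTRACT-Response = Account Response:\s*(?P<Response>\S+)
EXTRACT-Second_Response = Verification_test:\s*(?P<Second_Response>\S+)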
@richgalloway We are starting to see data ingested in real time from the log. No changes were made to either the application or the conf file. Before posting this question we did restart the forwarder, but we were not seeing the logs ingested at that point.
Next I need to work on the regex to help parse the fields, but I can open a separate question for that.
Thank you for your help.
What is the problem you are having? Does monitoring stop? If so, when? Are there any messages in splunkd.log? Is the file being monitored by a universal forwarder, a heavy forwarder, or the local Splunk instance?
The file is being monitored by a UF. The problem is that after the initial log is ingested, anything additional written to the log is not ingested.
The log source itself does not roll after a certain size, and I am told the file does not get that big, as the service is restarted monthly for maintenance.
I'm not seeing any errors in the logs, but I also might be looking for the wrong thing. Any tips/advice here?
UFs normally read a monitored file continuously, so new data is picked up almost immediately. The crcSalt setting usually takes care of the exceptions. Does restarting the UF help?
How big is "not that big" after a month?
I would look in splunkd.log on the UF for the name of the monitored file/directory.
How many files are being monitored in that directory? If it's too many, the UF may lose track of them.
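If it helps, two hedged ways to check (the host name, install path, and component values are assumptions based on a default Windows UF setup):
From a search head, if the UF forwards its _internal logs:
index=_internal host=<uf_hostname> sourcetype=splunkd (component=TailReader OR component=WatchedFile OR component=TailingProcessor) "tomcat9-stdout"
On the UF itself, list what the tailing processor is currently tracking:
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" list inputstatus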
"Not that big" is, from what I understand, under 100GB.
We have restarted the UF with no success. I'll have to review the logs today to see if I can find the issue, and I will share my findings. Thank you for the help thus far.
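One more hedged check while reviewing: confirm the settings the UF actually applies to that stanza with btool (the install path below is assumed to be the Windows default):
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool inputs list --debug
and look for the [monitor://E:\Tomcat 9.0\logs\tomcat9-stdout.*.log] stanza in the output.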
100GB in a single file seems pretty big to me, but it's all relative.
See if this answer https://community.splunk.com/t5/Splunk-Search/maximum-file-size/m-p/85157 sheds any light on the matter.
We haven't gotten to prod yet, and that was an estimate; I imagine it will be much less. Thanks for the replies and the help thus far. More testing is scheduled for Monday at this point.