About
The log file is overwritten each time the command runs, so the MUST_NOT_BREAK_AFTER in the current definition does work, but I suspect there are better solutions. The problem is that on 1 out of 5 servers the event gets broken. My guess is that this is caused by a delay in the output, since the file is written incrementally by a command as it executes.
Current status
I have created a custom sourcetype like this:
[fn:vwtool:loadstatus]
# Look up to 100 characters past the TIME_PREFIX match for the timestamp
MAX_TIMESTAMP_LOOKAHEAD = 100
# Do not break the event after a line matching the dashed timestamp banner
MUST_NOT_BREAK_AFTER = -+\w+,\s
SHOULD_LINEMERGE = true
# The timestamp follows the dashes and the day name, e.g. "---Mon, "
TIME_PREFIX = -+\w+,\s
# Events are long: disable truncation and raise the merged-line limit
TRUNCATE = 0
MAX_EVENTS = 10000
An event looks like this:
---------------------------Mon, 20 Oct 2014 05:00:13---------------------------
vwtool : FILENETPE11 [Server (DbDBType.Oracle Blob 1 MB) -- {pe460.000.1010.101} en ]
Outputting to file 'd:\logs\vwtool\loadstatus_region3.log' and the terminal
<vwtool:3>[ For Region 3 from: Thu, 16 Oct 2014 18:46:05, To: Mon, 20 Oct 2014 05:00:13 ]
[ Total seconds: 296048, minutes: 4934.13, hours: 82.24 ]
Total Average Average
Count Per Min Per Hour
# Executed Regular Steps: 49555 10.04 602.60
# Executed System Steps: 115642 23.44 1406.23
# Java RPCs: 0 0.00 0.00
# Object Service RPCs: 0 0.00 0.00
# Work Object Inject RPCs: 6327 1.28 76.94
# Queue Query RPCs: 2358261 477.95 28676.90
# Roster Query RPCs: 22258 4.51 270.66
# Lock Work Object RPCs: 22710 4.60 276.16
# Update Work Object RPCs: 50099 10.15 609.21
# Invoke Web Service Instructions: 0 0.00 0.00
# Receive Web Service Instructions: 0 0.00 0.00
# Lock work object errors: 61 0.01 0.74
# email notification errors: 0 0.00 0.00
# Transaction deadlock errors: 0 0.00 0.00
# Database reconnect: 0 0.00 0.00
# Timer manager update errors: 660 0.13 8.03
# Work objects skip due to sec errors: 0 0.00 0.00
# Exceed the Work Space Cache: 0 0.00 0.00
# Exceed the Isolated Region Cache: 0 0.00 0.00
# Authentication errors: 0 0.00 0.00
# Authentication token timeouts: 66 0.01 0.80
<vwtool:3>Output to file turned off
---------------------------Mon, 20 Oct 2014 05:00:14---------------------------
I solved it by updating the script. It now creates the log file in a temporary location first; once the command has finished executing and everything has been written to the file, the file is moved to its permanent location. This fixed my issue: all events are now indexed as one.
Best practice? Not sure, but it solved my problem.
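In essence, the updated script does something like this (the paths and the command invocation below are placeholders, not my actual script):

# Sketch only: the temp path, final path, and the command that produces
# the output stand in for whatever your real script runs.
$tempFile  = 'd:\logs\vwtool\tmp\loadstatus_region3.log'   # not monitored by Splunk
$finalFile = 'd:\logs\vwtool\loadstatus_region3.log'       # monitored by Splunk

# 1. Write the complete output to the temporary path first. Splunk never
#    sees the file while it is still being appended to.
& 'd:\scripts\run-vwtool.cmd' | Out-File -FilePath $tempFile -Encoding ascii

# 2. Move the finished file into the monitored directory. On the same
#    volume a move is a rename, so Splunk only ever sees a complete file.
Move-Item -Path $tempFile -Destination $finalFile -Force

Keeping the temporary file on the same volume matters: a cross-volume move degrades to copy-and-delete, which reintroduces the partial-file window.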
Well, this sure worked when nothing else I tried did. Until someone comes up with a better answer, this is how it's done! 😉
Is this output coming from a command being run on some schedule? Could Splunk be tasked with running the command to retrieve the results directly?
The output is coming from a scheduled task on the server. Reading between the lines, are you suggesting a scripted input? The script used today is a PowerShell script.
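If that is the idea, I suppose the built-in PowerShell input on Windows forwarders could run the script on a schedule and index its output directly, skipping the intermediate log file entirely. A rough inputs.conf sketch (the stanza name, script path, and schedule are just assumptions):

[powershell://vwtool_loadstatus]
script = & "d:\scripts\loadstatus.ps1"
schedule = 0 5 * * *
sourcetype = fn:vwtool:loadstatus

The cron schedule here mirrors the 05:00 run visible in the sample event; the script would write its results to stdout instead of a file.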