Splunk Search

CRL logs not being properly displayed

Engager

I'm running CRL expiration checks and using Splunk to read the logs so I can track when each check last ran and when each CRL is next updated. However, some entries aren't displayed properly. It's always CA-41, 42, 43, and 44 that get truncated; they occasionally display correctly, but it's sporadic. I can fix the problem by cleaning the index's eventdata, but I have to do that after every single CRL check. How would I go about getting the proper display each time without having to clean the eventdata and restart the server every time?

Log file data:

07/15/2019 11:01:42 CA-49.crl nextUpdate=Jul 19 17:00:00 2019 GMT
07/15/2019 11:01:37 CA-44.crl nextUpdate=Jul 19 17:00:00 2019 GMT

Properly displayed Splunk data:

07/15/2019 11:01:42 CA-49.crl nextUpdate=Jul 19 17:00:00 2019 GMT
host = SPLUNKSERVER     source = C:\crl_expiration.log     sourcetype = crl

Improperly displayed Splunk data:

07/15/2019 11:01:37 CA-44.crl 
host = SPLUNKSERVER     source = C:\crl_expiration.log     sourcetype = crl
1 Solution

Esteemed Legend

We see problems like this from time to time, and the cause is usually that the app writing the file pauses for longer than 3 seconds at those points, so Splunk breaks the event there and the remainder lands in a separate event. The best fix is to make sure the app writing the file flushes to disk more frequently, especially when writing those events/strings. Short of that, you can stage the file: write it to a non-monitored directory and, once it is complete, have a script/cron job move it into the monitored directory (which introduces an availability delay). Lastly, you can try these settings in inputs.conf:

time_before_close = <integer>
* The amount of time, in seconds, that the file monitor must wait for
  modifications before closing a file after reaching an End-of-File
  (EOF) marker.
* Tells the input not to close files that have been updated in the
  past 'time_before_close' seconds.
* Default: 3.

multiline_event_extra_waittime = <boolean>
* By default, the file monitor sends an event delimiter when:
  * It reaches EOF of a file it monitors and
  * The last character it reads is a newline.
* In some cases, it takes time for all lines of a multiple-line event to
  arrive.
* Set to "true" to delay sending an event delimiter until the time that the
  file monitor closes the file, as defined by the 'time_before_close' setting,
  to allow all event lines to arrive.
* Default: false.
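Applied to the monitor stanza for this log, those two settings might look like the following (the 10-second value is an illustrative choice, not a documented recommendation):

```
[monitor://C:\crl_expiration.log]
sourcetype = crl
index = crl
time_before_close = 10
multiline_event_extra_waittime = true
```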
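On the producer side, "flushing to disk more frequently" amounts to writing each event as one complete line and forcing it out immediately, so the file monitor never sees a half-written event during a multi-second pause. A minimal sketch in Python (the file path and log format mirror the thread; the `log_crl_check` helper is hypothetical, not part of any Splunk API):

```python
import os
import time

def log_crl_check(path, ca_name, next_update):
    """Append one complete CRL-check line, then force it to disk."""
    line = "%s %s.crl nextUpdate=%s\n" % (
        time.strftime("%m/%d/%Y %H:%M:%S"), ca_name, next_update)
    with open(path, "a") as f:
        f.write(line)          # the whole event goes out in one write
        f.flush()              # push Python's buffer to the OS
        os.fsync(f.fileno())   # push the OS buffer to disk

log_crl_check("crl_expiration.log", "CA-44", "Jul 19 17:00:00 2019 GMT")
```

Because every line is flushed and fsynced as soon as it is complete, a long pause between checks can no longer leave a partial event on disk for the monitor to pick up.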



Engager

You're awesome! Thanks!


SplunkTrust

What is the expected display? What are the sporadic improper displays?

---
If this reply helps you, an upvote would be appreciated.

Engager

The CA-49 Splunk output is what I'm looking for; the CA-44 output is what I sometimes get, but only for CA-41, 42, 43, and 44. I've updated the post to separate the two different Splunk results.


SplunkTrust

What is your query?


Engager

I'm just searching "index=crl" (no quotes in the actual search). Under the search app, this is the indexes.conf data:
[crl]
homePath = $SPLUNK_DB\crl\db
coldPath = F:\Splunk\crl\colddb
thawedPath = F:\Splunk\crl\thaweddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
maxHotBuckets = 25
maxWarmDBCount = 20
maxDataSize = 10000
maxTotalDataSizeMB = 452000
disabled = 0


SplunkTrust

If all you want to see is "CA-49" then that needs to be part of your query. Try index=crl "CA-49".
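Once the truncation issue is resolved and the events are indexed whole, a search along these lines could report the latest nextUpdate per CA (the `ca` and `next_update` field names are created by the rex here, not fields that already exist):

```
index=crl
| rex "(?<ca>CA-\d+)\.crl nextUpdate=(?<next_update>.+)"
| stats latest(_time) AS last_check, latest(next_update) AS next_update BY ca
```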


Engager

I don't just want the specific CA-x label; I also want to see the nextUpdate information and everything else from the log file. I need to create a report of all active CAs and an alert for when a CA is revoked. Everything listed for CA-49 in the original post has to show for every single CA I'm using (around 20 CAs total), but CA-41, CA-42, CA-43, and CA-44 almost never show that information despite being identical in the actual log file (apart from the CA-x label). And as I said in the original post, if I clean the event data for crl, it displays everything for all CAs as I need, but that requires, entirely through the command line, stopping Splunk, cleaning the crl index of its eventdata, and starting Splunk again. That is not a solution that will work for my needs.
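For reference, that clean-and-restart cycle corresponds to these Splunk CLI commands, run from the Splunk bin directory:

```
splunk stop
splunk clean eventdata -index crl
splunk start
```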
