Getting Data In

What's the best-performing way to create log files while avoiding file locks?

gunnist
Explorer

For example, when using the NLog File target:

https://github.com/NLog/NLog/wiki/File-target

 

What's the best-performing way to create log files for the Forwarder? One record per file (timestamp + guid.json)... which would create a lot of files.

Or perhaps logging every second (multiple records per log file)? But what about file locks? I don't want the Splunk Forwarder to fight with NLog over who holds a lock on the file.

 


codebuilder
Influencer

I'm not sure about your concern with file locks, but for log files that are actively being written to, just use the "monitor" parameter in the forwarder's inputs.conf.

If you want to consume historical log files, use "batch" mode.
The Forwarder is extremely efficient in both cases. I will say, though, that if you use batch mode, you will generally see better performance with multiple smaller files rather than one extremely large one. The latter will consume more memory and CPU time.
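
For illustration, a minimal inputs.conf sketch of both approaches (the paths and index name are just placeholders):

[monitor://C:\MyApp\logs\*.json]
# Tail files that may still be written to; Splunk keeps track of its read position.
index = myindex
disabled = 0

[batch://C:\MyApp\archive\*.json]
# One-shot ingestion of files that are no longer being written to.
# With move_policy = sinkhole, Splunk deletes each file after indexing it.
index = myindex
move_policy = sinkhole
disabled = 0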

----
An upvote would be appreciated and Accept Solution if it helps!

gunnist
Explorer

I want Splunk to delete the data after it reads it in.

This works; it eats the files (removes them after reading them in):

[batch://C:\Splunk\Local\*.json]
index=myindex
move_policy = sinkhole
disabled = 0

I guess batch is the way to go if I want Splunk to delete the files afterwards.

I guess I have to find a way to tell the log system (NLog):

"Write your file and then release the lock and leave it alone"

Do you agree?
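
In NLog terms I'm thinking of something like this: a rough sketch, assuming the File target's keepFileOpen setting and the ${date} layout renderer (the path is just an example):

<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- A new file name every second; keepFileOpen="false" closes the file
         (and releases the lock) after each write. -->
    <target xsi:type="File" name="splunkDrop"
            fileName="C:\Splunk\Local\${date:format=yyyy-MM-dd HH-mm-ss}.json"
            keepFileOpen="false" />
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="splunkDrop" />
  </rules>
</nlog>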


codebuilder
Influencer

Batch mode will not attempt to ingest files that are being actively written to.
As soon as your application rotates its log file(s), Splunk will pick them up for ingestion and delete them if you have sinkhole configured.

Aside from the log rotation schedule, you don't need to configure anything special on your application. There's no issue with file contention.

----
An upvote would be appreciated and Accept Solution if it helps!

gunnist
Explorer

I'm thinking of creating a log file every second:

$2021-08-26 08-56-40.json

$2021-08-26 08-56-41.json

etc.

...with one or more records in each.


Then after reading this:

https://docs.splunk.com/Documentation/Splunk/latest/admin/Inputsconf

[batch://C:\Splunk\Local\*.json]
index=myIndex
move_policy = sinkhole
time_before_close = 1
multiline_event_extra_waittime = true
disabled = 0

 

That should make the Splunk batch input leave the file alone long enough (1 sec) that there won't be any conflict. Right?


gunnist
Explorer

On second thought, just keeping it simple seems to be enough:

# 'batch' reads in the file and indexes it, and then deletes the file on disk.
[batch://C:\Splunk\Local\*.json]
index=myIndex
move_policy = sinkhole
disabled = 0

The Splunk Forwarder waits roughly 3 seconds before it deletes the file. I think I can just use the default batch settings and create a log file every second, e.g.:

2021-08-26 08-56-40.json

Thanks 🙂
