For example, when using the NLog file target:
https://github.com/NLog/NLog/wiki/File-target
What's the most efficient way to create log files for the Forwarder? One record per file (timestamp + guid.json)? That would create a lot of files.
Or perhaps logging every second (multiple records per log file)? But what about file locks? I don't want the Splunk Forwarder fighting with NLog over who holds a lock on the file.
In short: what's the best-performing approach to creating log files that avoids file locks?
Batch mode will not attempt to ingest files that are being actively written to.
As soon as your application rotates its log file(s), Splunk will pick it up for ingestion, and delete it if you have sinkhole configured.
Aside from the log rotation schedule, you don't need to configure anything special on your application. There's no issue with file contention.
I'm not sure about your concern with file locks, but for log files that are actively being written to, just use a "monitor" stanza in the forwarder's inputs.conf.
If you want to consume historical log files, use "batch" mode.
The Forwarder is extremely efficient in both cases. Though I will say that if you use batch mode, you will generally see better performance with multiple smaller files rather than one extremely large one. The latter will consume more memory and CPU time.
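For reference, a minimal monitor stanza looks something like this (the path and index name here are placeholders):

[monitor://C:\Logs\MyApp\*.log]
index = myindex
disabled = 0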
I want Splunk to delete the data after it reads it in.
This works; it eats the files (removes them after reading them in).
[batch://C:\Splunk\Local\*.json]
index = myindex
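# sinkhole: delete each file from disk once it has been indexed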
move_policy = sinkhole
disabled = 0
I guess batch is the way to go if I want Splunk to delete the files afterwards.
I guess I have to find a way to tell the logging framework (NLog):
"Write your file and then release the lock and leave it alone"
Do you agree?
I'm thinking of creating a log file every second:
2021-08-26 08-56-40.json
2021-08-26 08-56-41.json
etc.
...with one or more records in each.
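On the NLog side, a file target along these lines should produce that pattern (just a sketch; the target name and layout are placeholders I made up, and keepFileOpen="false" is the part that makes NLog open, append, and close the file on every write instead of holding the handle):

<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- The rendered fileName changes every second, so NLog starts a new file
         each second on its own. keepFileOpen="false" releases the file handle
         after each write, so no lock lingers between writes. -->
    <target xsi:type="File" name="splunkBatchFeed"
            fileName="C:\Splunk\Local\${date:format=yyyy-MM-dd HH-mm-ss}.json"
            layout="${message}"
            keepFileOpen="false" />
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="splunkBatchFeed" />
  </rules>
</nlog>

Opening and closing the file on every write costs some throughput, but it guarantees the lock is released between writes; and since batch mode skips files that are actively being written to (per the answer above), the brief lock held during a write shouldn't cause contention.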
Then after reading this:
https://docs.splunk.com/Documentation/Splunk/latest/admin/Inputsconf
[batch://C:\Splunk\Local\*.json]
index = myIndex
move_policy = sinkhole
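# close a file 1 second after the last write instead of the default 3 seconds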
time_before_close = 1
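# wait a bit longer before closing a file so multiline events aren't split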
multiline_event_extra_waittime = true
disabled = 0
That should make Splunk's batch input leave the file alone long enough (1 second) that there won't be any conflict. Right?
On second thought, just keeping it simple seems to be enough:
# 'batch' reads in the file and indexes it, and then deletes the file on disk.
[batch://C:\Splunk\Local\*.json]
index = myIndex
move_policy = sinkhole
disabled = 0
The Splunk forwarder waits roughly 3 seconds (the default time_before_close) before it deletes a file. I think I can just use the default batch settings and create a log file every second, e.g.:
2021-08-26 08-56-40.json
Thanks 🙂