Deployment Architecture

Does the forwarder send data in chunks of 64 KB?

ddrillic
Ultra Champion

Does the forwarder send data in chunks of 64 KB? Does it introduce issues with event breaking? Is there a way to change this setting? Thinking about it, in one of the Splunk courses it was explained that things get trickier with acknowledgements...

1 Solution

lakshman239
Influencer

Yes, the input file processing reads in 64k chunks. I believe it's by design, and I am not sure there is a setting to change it. However, with the changes in 6.5+, we can increase pipeline/parallel processing if we have enough resources. I am sure you are aware of the following:

https://docs.splunk.com/Documentation/Splunk/7.2.4/Indexer/Pipelinesets
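For reference, the pipeline-set change is made in server.conf on the forwarder. A minimal sketch, assuming you have spare CPU cores (the value 2 is purely illustrative, not a tuning recommendation):

    # server.conf (on the forwarder)
    [general]
    # Number of ingestion pipeline sets; each set gets its own
    # copy of the ingestion queues and processors. Default is 1.
    parallelIngestionPipelines = 2

Note that this adds whole extra pipelines; each pipeline still reads files in 64KB chunks, so it raises throughput rather than changing the chunk size.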



ddrillic
Ultra Champion

An interesting thread: How often/quickly does a Splunk universal forwarder read a file?

@dshakespeare_splunk said -

My understanding is that interval only applies to modular and scripted inputs. Monitor inputs are basically "as fast as we can".

There are two main elements to monitor inputs: the File Update Notifier (FUN) and the "Reader".
- FUN will notify the TailReader when it sees a change in a file at OS level. FUN will then stop monitoring the file until the readers (Tail or Batch) have finished reading it.
- The TailReader looks at the file size, calculates the bytes to be read, and passes the file to the BatchReader if need be (>20MB).
- The readers read the file in 64KB chunks per iteration, HOWEVER the readers will keep reading (keep iterating) until they find the end of the file.
- If the file has more data (more than the first time we saw a change in the file), we read that newly appended data.
- This means that if the file is growing really fast, we read more data than we actually planned to. That is why we sometimes see >100% reads on the input status page for files that are growing very fast.
- The other thing to note is that the readers block on the parsing queue to insert pipeline data. That is, if the queue is full, the readers will wait until the queue frees up before reading more data.
- Once the readers have read to EOF, they notify FUN to start monitoring for changes again.

So the "interval" is really the iteration time, which depends on:
- How "active" the files are, how much data is written to them, and the speed of CPU and disk.
- How fast we process the pipeline data further down the pipeline.
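As a side note on the >20MB hand-off described above: that threshold appears to be the min_batch_size_bytes setting in limits.conf. A hedged sketch, with the documented default shown (changing it is rarely needed):

    # limits.conf
    [inputproc]
    # Files larger than this many bytes are handed to the BatchReader
    # instead of the TailReader. Default: 20971520 (20MB).
    min_batch_size_bytes = 20971520

The 64KB read size itself is not exposed as a setting, as noted in the accepted answer.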



skramp
SplunkTrust

You can configure the number of pipelines in server.conf and the maxKBps in limits.conf to send more data from a forwarder.
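To make that concrete, a minimal sketch of the limits.conf side, assuming a universal forwarder (the pipeline setting in server.conf is shown in the accepted answer above; the value here is illustrative):

    # limits.conf (on the forwarder)
    [thruput]
    # Maximum KB per second the forwarder will send.
    # The universal forwarder default is 256; 0 removes the cap.
    maxKBps = 0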
