
What happens after the Splunk universal forwarder reaches its throughput limit?

Zane
Explorer

Hi,

I want to know what happens after the Splunk universal forwarder reaches its throughput limit, because I found that my universal forwarder stops ingesting data at a certain moment every day, and I don't know what happened here. I just set the thruput in limits.conf (roughly as sketched below) and restarted the UF, and the remaining data was then collected, although I'm not sure whether that will still work next time...
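This is the kind of stanza I mean (the value here is just an example; the UF default for maxKBps is 256):

# limits.conf on the universal forwarder
[thruput]
# maximum KB/s this instance will process; 0 means unlimited
maxKBps = 1024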

So once the throughput limit is reached, will the Splunk UF stop collecting data until the next restart?


Zane
Explorer

I ask this question because I ran into an issue with UF collection.

I have some folders named by date, for example 2023-08-31, and the log files are placed there, and so on. The log file names may be the same, but the contents are different. I noticed a very strange phenomenon: collection always stops at the previous day. For example, if today is 2023-09-01, it stops at yesterday, 2023-08-31, and does not collect the logs generated today. The file names in the two folders are the same, but the content, size, and modification time are different. I have also added the crcSalt parameter. It collects data again after I restart the UF, and this cycle repeats every day until I restart.

So is there a parameter for this? Thanks so much.

 

My inputs.conf is below:

[monitor:///mnt/business/pvc-6e1ed89e/privopen/*/open-test*.log]
disabled = 0
host = myhost
index = test_index
sourcetype = test_business
crcSalt = <SOURCE>
_TCP_ROUTING = azure_hf


isoutamo
SplunkTrust

Hi

Could it be that the start of the file is the same every day? That way Splunk could see it as the same file. You could try adding

initCrcLength = <integer>
* How much of a file, in bytes, that the input reads before trying to
  identify whether it has already seen the file.
* You might want to adjust this if you have many files with common
  headers (comment headers, long CSV headers, etc) and recurring filenames.
* Cannot be less than 256 or more than 1048576.
* CAUTION: Improper use of this setting causes data to be re-indexed. You
  might want to consult with Splunk Support before adjusting this value - the
  default is fine for most installations.
* Default: 256 (bytes)

into inputs.conf to tackle that. But this is probably not enough if only the content has changed. Then you can also try adding CHECK_METHOD into props.conf:

File checksum configuration

CHECK_METHOD = [endpoint_md5|entire_md5|modtime]
* Set CHECK_METHOD to "endpoint_md5" to have Splunk software perform a checksum
  of the first and last 256 bytes of a file. When it finds matches, Splunk
  software lists the file as already indexed and indexes only new data, or
  ignores it if there is no new data.
* Set CHECK_METHOD to "entire_md5" to use the checksum of the entire file.
* Set CHECK_METHOD to "modtime" to check only the modification time of the
  file.
* Settings other than "endpoint_md5" cause Splunk software to index the entire
  file for each detected change.
* This option is only valid for [source::<source>] stanzas.
* This setting applies at input time, when data is first read by Splunk
  software, such as on a forwarder that has configured inputs acquiring the
  data.
* Default: endpoint_md5

initCrcLength = <integer>
* See documentation in inputs.conf.spec.
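Put together, a minimal sketch of the two stanzas on the UF could look like this (the initCrcLength value is just an example; pick something longer than any header your files share, and note the re-indexing cautions quoted above):

inputs.conf:

[monitor:///mnt/business/pvc-6e1ed89e/privopen/*/open-test*.log]
crcSalt = <SOURCE>
# read more of the head of each file before deciding it is a duplicate
initCrcLength = 1024

props.conf:

[source::/mnt/business/pvc-6e1ed89e/privopen/*/open-test*.log]
# modtime checks only the file's modification time; values other than
# endpoint_md5 make Splunk re-index the whole file on each change
CHECK_METHOD = modtime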

I hope that those will help you.

r. Ismo

Zane
Explorer

I have set this method, but it's still not working.

Let me explain my situation below.

I need to monitor folders obtained by mounting Azure's "file share" (like pvc-xxxx). The log generation policy, as I mentioned before, creates a new folder named with today's date:

/mnt/xxx/2023-09-20

/mnt/xxx/2023-09-21

....

 

and the log naming policy is 

/mnt/xxx/2023-09-20/open-development-abcd.log

/mnt/xxx/2023-09-20/open-development-efgh.log

/mnt/xxx/2023-09-21/open-development-abcd.log

/mnt/xxx/2023-09-21/open-development-efgh.log

The log names are the same, but the contents are different.

It always stalls ingesting data on the next day, and I need to restart the UF; then the data is collected. Today, 2023-09-21, I tried manually placing some test files in the 2023-09-21 folder before restarting, but the UF seemed unable to detect them, so I finally restarted it and the data was collected.

My inputs.conf is below:

[monitor:///mnt/xxx/*/open-development*.log]
disabled = 0
host = xxxx
index = xxx
sourcetype = xxx
_TCP_ROUTING = xxx
crcSalt = <SOURCE>

Please help me locate the root cause. Thanks so much!


isoutamo
SplunkTrust

If you have the access rights set up correctly (you should, since this works after a restart), I don't see any reason why it didn't work! I'd say your next step is to create a support case (bug report) with Splunk Support to solve this issue.
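Before that, you could also check what the tailing processor thinks about those files the next time ingestion stalls, without restarting. The UF's CLI has a "list inputstatus" command for that (the path assumes a default Linux UF install):

# run on the forwarder host while ingestion is stalled
/opt/splunkforwarder/bin/splunk list inputstatus

The output shows, per monitored file, whether it is open, finished, or being ignored, which would be useful evidence for the support case.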


richgalloway
SplunkTrust

What do you mean by "throughput limit"?  The UF has a rate limit (maxKBps in limits.conf) which defaults to 256KBps.  The UF will read data at that rate until it catches up (if ever), but it will not stop reading.

Tell us more about the symptoms so we can offer suggestions.
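One way to check whether the forwarder is being throttled (rather than stopped) is to search its own metrics in _internal. This is a sketch; the group and field names are the ones metrics.log normally reports for the thruput processor:

index=_internal host=<your_uf_host> source=*metrics.log* group=thruput name=thruput
| timechart avg(instantaneous_kbps) AS avg_kbps max(instantaneous_kbps) AS peak_kbps

If avg_kbps sits pinned at your maxKBps value, the UF is rate-limited, not stopped.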

---
If this reply helps you, Karma would be appreciated.