Hi @gcusello,
We've observed slow Splunk indexing on one of our webMethods servers (aexxxxxx), causing certain test cases to fail. Normally indexing completes in 1-2 seconds, but on this server it sometimes takes much longer (up to 2 minutes).
Could you please help me with this?
Regards,
Rahul
Hi @rahul2gupta,
if you have this problem only on these logs, it is probably the one hinted at by @scelikok.
First, check whether you have delays in the indexing queues (you can see this in the Splunk Monitoring Console).
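For example, a search along these lines against metrics.log should surface blocked queues (adapt the time range to when you saw the delays):
index=_internal source=*metrics.log* group=queue blocked=true | stats count by host, name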
Then, did you check the maxKBps parameter on the UF or the Indexer?
The problem could be that, by default, a UF has maxKBps = 256 for this parameter.
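You can verify the effective value (and which .conf file it comes from) on the UF with btool, assuming a standard install path:
$SPLUNK_HOME/bin/splunk btool limits list thruput --debug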
Ciao.
Giuseppe
Hi @gcusello,
I checked these parameters on the UF (aeaxxxx), because we are facing this issue on this server only.
Please let me know if we need to change maxKBps = 0 to some higher value.
Regards,
Rahul
Hi @rahul2gupta,
maxKBps = 0 means unlimited, so there is no need to change it.
Can you check the internal logs to see whether there are errors from the TailReader component, like "File Descriptor Cache Full"?
index=_internal host=aeaxxxx component=TailReader
If there are too many monitored files, the UF cannot keep up with all of them, and that may cause delays.
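For example, narrowing the search to that message should show whether the forwarder is hitting this limit (the exact wording may vary slightly between versions):
index=_internal host=aeaxxxx component=TailReader "file descriptor cache is full"
You can also run $SPLUNK_HOME/bin/splunk list inputstatus on the forwarder to see how many files the TailReader is currently tracking.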
Hi @scelikok,
I checked with the following query and found that the latest events were generated on 20/01/2021.
index=_internal host=aeaxxxx component=TailReader
So what is the solution for this? 😅
We still see a delay in indexing. 😓
Regards,
Rahul
Hi @rahul2gupta,
If you are getting logs using a Universal Forwarder, the delay could be caused by the thruput setting on the Universal Forwarder. The bandwidth limit defaults to 256 KBytes per second; if the server creates more than 256 KBytes of logs per second, you may experience delays.
This can be confirmed using the internal logs of that Universal Forwarder:
index=_internal component=ThruputProcessor
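If throttling is happening, those events should contain warnings that the configured maxKBps has been reached (the exact message text can vary by version); for example, to see when it occurs:
index=_internal host=aeaxxxx component=ThruputProcessor "maxKBps" | timechart count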
You can increase this bandwidth limit in limits.conf on the Universal Forwarder, for example to 1024 KBytes/s:
[thruput]
maxKBps = 1024
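For example, you could place that stanza in $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder (or, better, deploy it in an app through your deployment server, if you use one) and then restart the UF so the new value takes effect:
$SPLUNK_HOME/bin/splunk restart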
If this reply helps you, an upvote is appreciated.