Hello, I have a question regarding log ingestion from Azure. At the moment, I'm using the REST API to onboard logs to an on-premise heavy forwarder, which then sends the data to indexes located on Splunk Cloud.
For some reason there's a huge delay between event creation time and indexing time: we are still receiving logs that are three months old, and new logs are being delayed. What could be the reason for such a delay? Is this normal behavior for an Azure-to-Splunk integration?
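For context, this is how I'm measuring the lag in search; the index name is just a placeholder for wherever the Azure data lands:

index=azure* earliest=-24h
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) AS avg_lag max(lag_seconds) AS max_lag BY sourcetype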
Thank you in advance.
@antnovo I would check the throughput restriction Splunk forwarders have by default. It throttles how much data Splunk can send to 256 KBps. This is set in limits.conf:
[thruput]
# setting this to 0 makes it unlimited (be careful: a single forwarder can overwhelm an indexer)
maxKBps = 0
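To confirm the value actually in effect on the heavy forwarder (and which file it comes from), you can use btool; this is just a sketch assuming a default $SPLUNK_HOME install:

# show the effective [thruput] settings and the file each one comes from
$SPLUNK_HOME/bin/splunk btool limits list thruput --debug

# a change to maxKBps only takes effect after a restart
$SPLUNK_HOME/bin/splunk restart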
Hi,
Thank you for the suggestion. I checked limits.conf and maxKBps was already set to 0. Could it be related to the use of REST? I haven't found a single issue like this on Splunk Answers related to log ingestion via the Splunk Add-on for Microsoft Cloud Services.
I'm considering switching from REST to a direct integration of Splunk and Azure via the app, but I'm not sure whether it will solve the problem.
Have a great weekend
What is the interval you are querying the API at?
Hi, I was querying the API at a 1-hour interval; I changed it this morning to 5 minutes.
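For reference, the interval is set per input, in seconds, in the add-on's inputs.conf; the stanza name below is hypothetical, since the actual prefix depends on which Microsoft Cloud Services input type is in use:

# hypothetical input stanza; the real prefix/name depends on the add-on input type
[mscs_azure_audit://azure_audit_logs]
interval = 300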
Thanks