Hello, I have a question regarding log ingestion from Azure. At the moment, I'm using the REST API to onboard logs to an on-premises heavy forwarder, which sends the data to indexes located on Splunk Cloud.
For some reason there's a huge delay between event creation time and indexing time: I'm still receiving logs that are three months old, and new logs are being delayed. What could cause such a delay? Is this normal behavior for an Azure and Splunk integration?
Thank you in advance.
I know this thread is old, but this information may still help.
As specified in the Microsoft Learn portal: "Microsoft doesn't guarantee a specific time after an event occurs for the corresponding audit record to be returned in the results of an audit log search. For core services (such as Exchange, SharePoint, OneDrive, and Teams), audit record availability is typically 60 to 90 minutes after an event occurs. For other services, audit record availability might be longer. However, some issues that are unavoidable (such as a server outage) might occur outside of the audit service that delays the availability of audit records. For this reason, Microsoft doesn't commit to a specific time."
@antnovo I would check the throughput restriction Splunk applies by default. It throttles how much data a forwarder can send to 256 KBps. This is set in limits.conf:
[thruput]
# setting this to 0 makes it unlimited (be careful: a single forwarder can overwhelm an indexer)
maxKBps = 0
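One way to confirm whether the forwarder is actually hitting a thruput limit is to look at the thruput group in the forwarder's metrics.log via the _internal index. This is a common diagnostic search, not an official procedure; run it against the heavy forwarder's host:

index=_internal source=*metrics.log* group=thruput host=<your_heavy_forwarder>
| timechart avg(instantaneous_kbps) AS avg_kbps max(instantaneous_kbps) AS peak_kbps

If peak_kbps plateaus at roughly 256 while maxKBps is still the default, throttling is the likely cause; if it stays well below the limit, the bottleneck is probably upstream (the API pull itself).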
Hi,
Thank you for the suggestion. I checked limits.conf and it was already set to 0. Could it be related to the use of REST? I haven't found a single issue like this on Splunk Answers related to log ingestion via the Splunk Add-on for Microsoft Cloud Services.
I'm considering switching from the REST input to a direct integration of Splunk and Azure via an app, but I'm not sure it will solve the problem.
Have a great weekend
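Before switching inputs, it may help to quantify the delay itself. A common approach is to compare each event's index time with its event time; the index name below is a placeholder for your Azure index:

index=<your_azure_index> earliest=-24h
| eval lag_seconds=_indextime-_time
| stats avg(lag_seconds) max(lag_seconds) count BY sourcetype

If the lag is large even for freshly created events, the delay is on the collection side (API interval, add-on backlog); if only old events show huge lag, the input is likely still working through a backfill.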
What is the interval you are querying the API at?
Hi, I was querying the API at a 1-hour interval; I changed it this morning to 5 minutes.
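For reference, modular inputs in Splunk typically take the polling interval in seconds in inputs.conf. The stanza name below is illustrative only; the actual stanza and parameters depend on the specific input type configured in the Splunk Add-on for Microsoft Cloud Services:

# inputs.conf (stanza name illustrative)
[mscs_azure_audit://my_azure_audit_input]
interval = 300

A shorter interval means each pull has less backlog to work through, but very short intervals can run into API rate limits on the Azure side, so 300 seconds is a reasonable starting point rather than a hard recommendation.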
Thanks