We are trying to get Palo Alto Traps logs into Splunk, but logs arriving at the indexer are delayed by around 2 to 4 hours from their actual generation time. This is causing inconvenience in report generation and daily monitoring.
We have confirmed the following architecture:
Palo Alto device -> UF -> HF -> indexer
(Palo Alto device and UF are on the same server)
We have tried running a search over the last 15 minutes to verify the delay in the logs and confirm whether there is any lag between the forwarder and the index.
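The original search was not included, but a common way to measure ingestion lag is to compare each event's index time with its event time. A sketch (the index name `pan_logs` is an assumption; substitute your Palo Alto index):

```
index=pan_logs earliest=-15m
| eval delay_sec = _indextime - _time
| stats min(delay_sec) AS min_delay, avg(delay_sec) AS avg_delay, max(delay_sec) AS max_delay by host
```

If `avg_delay` is consistently in the thousands of seconds, the events really are arriving hours late rather than merely carrying wrong timestamps.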
Have you determined where the bottleneck in the data pipeline is?
In the Monitoring Console, go to Indexing Performance: Instance/Deployment; the panels there give a good understanding of indexing performance across all components in the indexing pipeline set. The Median Fill Ratio of Data Processing Queues panel is especially helpful in locating the bottleneck.
You can also take a closer look at metrics.log, which samples Splunk activity every 30 seconds and reports the top 10 items in each category, revealing the whole picture across the topology, including forwarding throughput and indexing throughput.
index=_internal source=*metrics.log* host=xyz
The log has a variety of inspection information:
group – indicates the data type: pipeline, queue, thruput, tcpout_connections, udpin_connections, and mpool
group=pipeline – plots the frequency and the duration of the pipeline process machinery
group=queue – displays the data waiting to be processed
* current_size can help identify which queues are the bottlenecks
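To turn the group=queue entries into a bottleneck view, a typical search averages each queue's fill ratio (a sketch using the standard metrics.log queue fields `current_size_kb` and `max_size_kb`):

```
index=_internal source=*metrics.log* group=queue
| eval fill_ratio = current_size_kb / max_size_kb
| stats avg(fill_ratio) AS avg_fill, perc90(fill_ratio) AS p90_fill by name
| sort - avg_fill
```

As a rule of thumb, the furthest-downstream queue that stays near full is the actual bottleneck; queues upstream of it fill up as a consequence of back-pressure.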