
Delay in logs from Universal Forwarder approx 4 hours for Palo Alto App

risgupta_splunk
Splunk Employee

Hi,

We are trying to get Palo Alto Traps logs into Splunk; however, the logs arriving at the indexer are delayed by around 2 to 4 hours from their actual time of generation. This is making report generation and daily monitoring difficult.

We have confirmed the following architecture:

Palo Alto device -> UF -> HF -> indexer
(Palo Alto device and UF are on the same server)

We have tried the query below to verify the delay in the logs:

1. Run the following search over the last 15 minutes to confirm whether there is any delay from the forwarder to the indexer.

index=_internal host=NESWPR10APACL01
|eval diff = _indextime - _time
|eval i_time = _indextime
|eval e_time = _time
|convert timeformat="%Y-%m-%d %H:%M:%S" ctime(i_time)
|convert timeformat="%Y-%m-%d %H:%M:%S" ctime(e_time)
|sort -diff
|table _raw, i_time, e_time, diff

The delay there is less than 5 minutes; the largest diff is 0.299 seconds.
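We can also measure the lag on the Palo Alto data itself rather than only on the forwarder's _internal events. A minimal sketch, assuming the Traps events land in an index named pan_logs with sourcetype pan:traps (both names are assumptions for illustration; substitute whatever the inputs actually use):

index=pan_logs sourcetype=pan:traps earliest=-15m
| eval lag_sec = _indextime - _time
| stats min(lag_sec) AS min_lag median(lag_sec) AS median_lag max(lag_sec) AS max_lag BY host

If the lag here is consistently 2 to 4 hours while the _internal lag stays under a second, the delay is most likely introduced before or at parsing time rather than in forwarding, for example by the device writing the log late or by a time zone / timestamp-extraction mismatch adding a fixed offset.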

Please help us with this issue.


hunters_splunk
Splunk Employee

Hi risgupta,

Have you determined where the bottleneck is in the data pipeline?

In the Monitoring Console, go to Indexing Performance: Instance (or Deployment). The panels there give you a good understanding of indexing performance across all the components in the indexing pipeline set, and the Median Fill Ratio of Data Processing Queues panel is very helpful for pinpointing the bottleneck.
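If you want to query the same information directly, here is a rough sketch against metrics.log (the current_size and max_size fields appear in the sample events further down; this approximates the panel, it is not the exact search the Monitoring Console runs):

index=_internal source=*metrics.log* group=queue
| eval fill_ratio = current_size / max_size
| stats median(fill_ratio) AS median_fill_ratio BY host, name
| sort - median_fill_ratio

Queues that sit near a fill ratio of 1 on a given host are usually the first place to look, since everything upstream of a full queue backs up behind it.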

You can also take a closer look at metrics.log, which samples Splunk activity every 30 seconds and reports the top 10 items in each category. It reveals the whole picture across the topology, including forwarding throughput and indexing throughput.

index=_internal source=*metrics.log* host=xyz

The log has a variety of inspection information:
* group – indicates the data type: pipeline, queue, thruput, tcpout_connections, udpin_connections, and mpool
* group=pipeline – reports the frequency and duration of the pipeline processing machinery
* group=queue – shows the data waiting to be processed
  * current_size can identify which queues are the bottlenecks (see the sketch after this list)
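For example, a quick way to see which queues on which hosts have been reporting as blocked (the exact raw token varies by Splunk version, so both spellings are searched as literal phrases in this sketch):

index=_internal source=*metrics.log* group=queue ("blocked=true" OR "blocked!!=true")
| stats count BY host, name
| sort - count

Sample metrics.log events for the pipeline and queue groups look like this: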

 09-07-2016 17:07:21.416 +0000 INFO Metrics - group=pipeline, name=parsing, processor=utf8, cpu_seconds=0.000000, executes=23, cumulative_hits=691835
 09-07-2016 17:07:21.416 +0000 INFO Metrics - group=queue, name=parsingqueue, blocked!!=true, max_size=1000, filled_count=0, empty_count=8, current_size=0, largest_size=2, smallest_size=0
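To put numbers on the forwarding and indexing throughput mentioned above, something along these lines can be charted per instance. This is a sketch using the overall group=thruput, name=thruput events that forwarders and indexers write to metrics.log, with an arbitrary 5-minute span:

index=_internal source=*metrics.log* group=thruput name=thruput
| timechart span=5m per_second(kb) AS KBps BY host

A forwarder whose curve flat-lines at a constant rate while the delay grows would suggest throttling (for example the limits.conf maxKBps setting on a universal forwarder) rather than an indexing bottleneck.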

Hope this helps. Thanks!
Hunter
