All Apps and Add-ons

How to deploy the Palo Alto app in an Indexer Cluster environment

dmartinez_splun
Splunk Employee

Hey everyone,

I'm having trouble deploying the Palo Alto Networks app (4.2.2) in Splunk Enterprise (6.2.2). The setup is 1x Search Head, 1x Cluster Master, 2x Indexers, receiving data from a separate Universal Forwarder that reads off a directory populated by syslog-ng.

The Palo Alto app was deployed as a Distributed Configuration Bundle from the Search Head, and I confirmed it was successfully deployed to both indexers.

The Universal Forwarder has an inputs.conf stanza for the PAN data with:
index = pan_logs
sourcetype = pan_log
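Put together, the stanza looks roughly like this (the monitor path is just a placeholder for wherever syslog-ng writes its files):

 [monitor:///var/log/syslog-ng/pan]
 index = pan_logs
 sourcetype = pan_log
 disabled = false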

Data is coming into the system, and when searching (from Search Head):
index=pan_logs sourcetype=pan_log (shows every event)
index=pan_logs sourcetype=pan_config (shows no events)

In fact, I can see only one sourcetype in that index: pan_log, so the data is not getting correctly parsed. I tried loading the syslog-ng data on my laptop running the PAN app in an all-in-one deployment and it worked fine: the sourcetype fields populate correctly there. That tells me the data coming out of syslog-ng is correct.

I can also see the /slave-apps/ and /master-apps/ directories replicated correctly on the indexers. I haven't modified the transforms.conf or props.conf files, but I can see they are there and contain the necessary rules to correctly assign the events' sourcetypes.

I think that for some reason the PAN app's transforms.conf and props.conf are not getting picked up by the indexers, so the events never receive the correct sourcetypes.

I'm at a loss on how to troubleshoot this further. Any ideas would be greatly appreciated.

Dan.

1 Solution

btorresgil
Builder

Hello,

The app needs to be installed on all search heads, indexers, and heavy forwarders. Since the sourcetype of the events is still pan_log, the events are not getting parsed by the app. Nine times out of ten this is because the logs have been subtly modified by syslog-ng, so the props/transforms cannot recognize them. So...

  1. Make sure the app is installed on all necessary Splunk nodes
  2. Verify syslog-ng isn't adding any characters to the logs or modifying them in any way
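For step 2, a quick way to check is to pull a few raw events from the Search Head and compare them against the native PAN-OS syslog format:

 index=pan_logs sourcetype=pan_log | head 5 | table _raw

Any extra header, hostname, or timestamp that syslog-ng prepends will show up in _raw, and even a single added character can keep the app's transforms from matching.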

Hope that helps!

Update: an earlier version of this answer said the sourcetype was pan_logs, but it is pan_log. This has been corrected. (thanks mbonsack)


esix_splunk
Splunk Employee

If you're ingesting on an HF, then you need the app there and don't need it on the indexers.

That's not always the case, but here the HF is cooking the data and forwarding it to the indexers with the relevant index-time fields and metadata.


dmartinez_splun
Splunk Employee

The theory now is that, given the forwarder is a Heavy Forwarder, the app needs to be installed on the forwarder as well.

This link (http://wiki.splunk.com/Where_do_I_configure_my_Splunk_settings%3F) explains how both the Input and Parsing stages are performed in the Forwarder - when using a Heavy Forwarder:

Heavy Forwarder: Input, Parsing
Indexer: Indexing, Search

And how transforms.conf is looked at during the Parsing phase:

Parsing
- props.conf
- transforms.conf
- datetime.xml

and not during the Indexing stage:

Indexing
- props.conf
- indexes.conf
- segmenters.conf
- multikv.conf

Trying this next... will update.


dmartinez_splun
Splunk Employee

Could it be that in a distributed, clustered environment I need to set the permissions in SplunkforPaloAltoNetwork/metadata/default.meta to global rather than none?

I didn't have to do that in the all-in-one deployment, in which they all appear as none by default.

Default:

 ###PROPS
 [props]
 export = none
 ###TRANSFORMS
 [transforms]
 export = none
 [lookups]
 export = none

Proposed Change:

 ###PROPS
 [props]
 export = system
 ###TRANSFORMS
 [transforms]
 export = system
 [lookups]
 export = system
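If that turns out to be the fix, the override would conventionally live in SplunkforPaloAltoNetwork/metadata/local.meta rather than an edited default.meta, so an app upgrade doesn't revert it:

 [props]
 export = system
 [transforms]
 export = system
 [lookups]
 export = system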

As a test in the customer environment, I'm planning to make the change on the Search Head and on the Cluster Master, then restart both after editing the default.meta file.


rtoloczk
Explorer

David,

Did this resolve your issue?

Thanks,

Robert


dmartinez_splun
Splunk Employee

It didn't. Developing another theory now...

The forwarders are full Splunk instances, as in, Heavy Forwarders.

I wonder whether the app must be installed on the forwarder only in the heavy-forwarder case. I'm planning to test that next.
