All Apps and Add-ons

How to deploy the Palo Alto app in an Indexer Cluster environment

Splunk Employee

Hey everyone,

I'm having trouble deploying the Palo Alto Networks app (4.2.2) in Splunk Enterprise (6.2.2). The setup is 1x Search Head, 1x Cluster Master, 2x Indexers, receiving data from a separate Universal Forwarder that reads off a directory populated by syslog-ng.

The Palo Alto app was deployed as a Distributed Configuration Bundle from the Search Head, and I confirmed it deployed successfully to both indexers.

The Universal Forwarder has an inputs.conf stanza for the PAN data with:
index = pan_logs
sourcetype = pan_log

Data is coming into the system, and when searching (from Search Head):
index=pan_logs sourcetype=pan_log (shows every event)
index=pan_logs sourcetype=pan_config (shows no events)

In fact, I can see only one sourcetype in that index: pan_log, so the data is not getting correctly parsed. I tried loading the syslog-ng data on my local laptop running the PAN app and it worked fine; the sourcetype fields populated correctly. That means the data coming out of syslog-ng is correct.

I can also see the app replicated correctly from /master-apps/ on the Cluster Master to the /slave-apps/ directories on the indexers. I haven't modified the transforms.conf or props.conf files, but I can see they are there and contain the necessary rules to correctly assign the events' sourcetypes.

I think that for some reason, the transforms.conf and props.conf for the PAN app is not getting picked up by the indexers, thus not getting the correct sourcetypes.

I'm at a loss on how to troubleshoot this further. Any ideas would be greatly appreciated.

Dan.

1 Solution

Builder

Hello,

The app needs to be installed on all search heads, indexers, and heavy forwarders. Since the sourcetype on the events is still pan_log, the events are not getting parsed by the app. Nine times out of ten this is because the logs have been subtly modified by syslog-ng, so the props/transforms cannot recognize them. So...

  1. Make sure the app is installed on all necessary Splunk nodes
  2. Verify syslog-ng isn't adding any characters to the logs or modifying them in any way
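For step 2, a quick way to check whether syslog-ng has altered the lines is to test whether the CSV body of each event still sits where the app's transforms expect it. A rough Python sketch; the regexes and sample lines here are simplified, hypothetical stand-ins for the app's real rules:

```python
import re

# A PAN syslog event is a syslog header followed by a CSV body whose fourth
# field is the log type (TRAFFIC, THREAT, CONFIG, SYSTEM). The app's
# transforms key off that layout, so anything syslog-ng inserts breaks them.
PAN_CSV = re.compile(r'^[^,]*,[^,]*,[^,]*,(TRAFFIC|THREAT|CONFIG|SYSTEM),')

def pan_log_type(line):
    """Return the PAN log type if the CSV body is intact, else None."""
    # Strip a plain RFC 3164-style syslog header if present (simplified).
    body = re.sub(r'^(<\d+>)?\w{3} +\d+ [\d:]+ \S+ ', '', line)
    m = PAN_CSV.match(body)
    return m.group(1) if m else None

# Intact line: the CSV body survives the header strip.
ok = '<14>May  5 10:00:00 fw01 1,2015/05/05 10:00:00,0001,TRAFFIC,end,rest'
# Tampered line: syslog-ng inserted its own field, shifting the CSV columns.
tampered = '<14>May  5 10:00:00 fw01 [origin id],1,2015/05/05 10:00:00,0001,TRAFFIC,end'
```

Running `pan_log_type` over a sample of the monitored files should return a log type for every line; a run of `None` results points at syslog-ng mangling.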

Hope that helps!

Update: an earlier version of this answer said the sourcetype was pan_logs, but it is pan_log. This has been corrected. (thanks mbonsack)



Builder

Very glad it's working now. Thanks for letting us know!


Explorer

Hi Brian, the answer solves the case of using a Heavy Forwarder as the means of getting the data into Splunk; however, it hasn't solved the issue of a UF on a syslog-ng server that sends the data on to the indexers.

I am still confused why the sourcetyping works correctly when the app is installed in $SPLUNK_HOME/etc/apps on the indexer, but DOES NOT WORK when the app is in the $SPLUNK_HOME/etc/slave-apps/_cluster directory on a clustered indexer.


Splunk Employee

Installed the app on the SH, Cluster Master, and Heavy Forwarders, and everything started working as expected. So copying the app folder onto the HF and restarting did the job.

A little confused, though... all the data seems to be parsed into the right sourcetypes now, including last week's data, which hadn't been parsed correctly before. I was expecting only today's data to be parsed correctly. (Not that I'm complaining, ha! But I'd like to understand why.)


Splunk Employee

The app applies field extractions and parsing at search time, not index time. This is why historical data is working correctly.

Where on the CM did you put this?


Splunk Employee

Thanks Eric,

It's in the /master-apps/ in the CM, but it was pushed there via the UI, using the Distributed Configuration Bundle from the Search Head. It's replicated correctly in the /slave-apps/ directories.


Splunk Employee

Is the sourcetype pan_log or pan_logs (plural)? This is very confusing and could be the source of the problem. The index is pan_logs, correct?


Builder

The sourcetype is pan_log

The index is pan_logs

I initially typed it wrong in my answer, but corrected it now. Thanks for the catch.

Yes, this is confusing, and it will be changed in the next version of the app (version 5.0) coming out soon.


Explorer

The sourcetype is set to pan_log. The following is the inputs.conf stanza on my syslog-ng server.

Push PaloAlto Syslog to Splunk:

[monitor:///var/log/syslog-ng/paloalto]
disabled = false
index = pan_logs
sourcetype = pan_log
no_appending_timestamp = true
host_segment = 5
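As an aside, host_segment = 5 tells the UF to use the fifth segment of the monitored file's path as the host field. A small sketch of that segment arithmetic; the path below is hypothetical:

```python
def host_from_path(path, host_segment):
    """Mimic Splunk's host_segment: 1-indexed path segments, leading '/' ignored."""
    segments = [s for s in path.split('/') if s]
    return segments[host_segment - 1]

# With host_segment = 5, the directory under .../paloalto/ becomes the host.
print(host_from_path('/var/log/syslog-ng/paloalto/fw01/traffic.log', 5))  # fw01
```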

The architecture is now:
- PA sends syslog to syslog-ng server with UF
- UF forwards to indexer cluster load balancing between three indexers
- Each indexer has the app in the $SPLUNK_HOME/etc/slave-apps/_cluster directory
- The search head has the app in $SPLUNK_HOME/etc/apps

Our 2nd site with just PA --> UF --> Standalone indexer <-- Search Head parses the logs just fine with the correct sourcetypes.

Our primary site did have a stand alone indexer and it worked. When we changed to the indexer cluster is when the parsing/sourcetyping stopped working.


Explorer

As a test, I copied the app from .../etc/slave-apps/_cluster to .../etc/apps on one of the clustered indexers and rebooted the box.

The sourcetyping is now correct for indexer02, but incorrect on indexer01 and indexer03.

There appears to be something unique about the clustered indexer setup.


Splunk Employee

Permissions?


Explorer

The app in both /etc/apps and /etc/slave-apps has the same OS-level permissions and the same Splunk app permissions, as listed below.

Application-level permissions

[]
access = read : [ * ], write : [ admin, power ]

EVENT TYPES

[eventtypes]
export = system

PROPS

[props]
export = system

TRANSFORMS

[transforms]
export = system

[lookups]
export = system

OTHER

[savedsearches]
export = none

[commands]
export = system

Splunk Employee

The PAN app works at search time to break out all the different sourcetypes. These are search-time transforms, so as long as your data is getting indexed properly into pan_logs, they will apply.

So it does depend on your ingest framework. If you're ingesting via UDP directly on indexers, you need the inputs and the props/transforms from the App.

If you're ingesting via syslog on a UF, you don't need the app on the UF; just set the sourcetype to pan_log. The indexers will need the app, though.

If you're ingesting via syslog / UDP on a HF, you need the props/transforms, same as an indexer.

Then on the SH, you also need the app. The recognition of the pan:traffic|system|threat|log|config sourcetypes is done at search time, not index time.

index=pan_logs earliest=-1h@h | stats count by sourcetype

Assuming you're using the standard configuration, that search should return the different groups of PAN sourcetypes.
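If it helps to see the mechanics, here is a rough Python simulation of how a search-time props/transforms pair rewrites pan_log events into per-type sourcetypes. The regexes are simplified, hypothetical stand-ins, not the app's actual rules:

```python
import re

# Simplified stand-ins for transforms.conf rules: each pairs a REGEX on the
# raw event with a replacement sourcetype (as in FORMAT sourcetype::...).
TRANSFORMS = [
    (re.compile(r',TRAFFIC,'), 'pan_traffic'),
    (re.compile(r',THREAT,'),  'pan_threat'),
    (re.compile(r',CONFIG,'),  'pan_config'),
    (re.compile(r',SYSTEM,'),  'pan_system'),
]

def search_time_sourcetype(raw, indexed_sourcetype='pan_log'):
    """First matching rule wins; otherwise the indexed sourcetype sticks."""
    for rx, new_st in TRANSFORMS:
        if rx.search(raw):
            return new_st
    return indexed_sourcetype

events = [
    '1,2015/05/05 10:00:00,0001,TRAFFIC,end,rest',   # rewritten
    '1,2015/05/05 10:00:01,0001,CONFIG,set,rest',    # rewritten
    'line that syslog-ng mangled beyond recognition', # stays pan_log
]
```

Because nothing is rewritten at index time, the raw events on disk keep their indexed sourcetype; the rewrite happens on every search, which is also why fixing the app installation retroactively "fixed" historical data.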


Splunk Employee

Hi Eric,

So, just to clarify, when you say: "If you're ingesting via syslog / UDP on a HF, you need the props/transforms, same as an indexer."

Do you mean that one would need the Palo Alto App in the Search Head, Indexers, as well as the Heavy Forwarder? My previously proposed answer has my thinking on the why.


Explorer

Thank you both.

However, I am still not getting the correct sourcetypes.

I have my PAs sending their syslogs to a Syslog-NG server with a UF. The UF's inputs.conf sets the index to pan_logs and the sourcetype to pan_log. Both the Search Heads and the Indexers have the app installed.

The standalone (non-clustered) Indexer is reassigning the correct sourcetypes (pan_traffic, pan_config, etc), however the indexers in the cluster are not. I only see the original set pan_log.

Is there a different required config for a clustered instance?


Splunk Employee

Configuration for clustered indexers needs to be applied via the cluster bundle under $SPLUNK_HOME/etc/master-apps/ on the cluster master.


Explorer

My configurations/apps are deployed via $SPLUNK_HOME/etc/master-apps/. The PA app's transforms that change the sourcetype are NOT working when they reside in the $SPLUNK_HOME/etc/slave-apps directory; they only seem to work when in the $SPLUNK_HOME/etc/apps directory.


Explorer

Did you ever get this figured out? I'm trying to set up PA for the first time in a clustered environment, with a universal forwarder on a syslog-ng server forwarding Cisco and PA logs to the cluster master/indexers. None of the dashboards are populating.


Explorer

Hi, where do you have the components?

On the cluster master I have
$SPLUNK_HOME/etc/master-apps
/SplunkforPaloAltoNetworks
/TA-paloalto

On the SyslogNG server I have

$SPLUNK_HOME/etc/apps
/SplunkforPaloAltoNetworks
/TA-paloalto

The mistake I made originally was on the cluster master I put all my apps in

$SPLUNK_HOME/etc/master-apps/_cluster/

Only local configuration should go in _cluster; whole apps belong directly under master-apps.
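If you want to sanity-check a cluster master for that mistake, a small script can flag whole apps that ended up under master-apps/_cluster. The function and directory names below are illustrative, not part of Splunk:

```python
import os

def misplaced_apps(master_apps_dir):
    """Return app directories sitting under _cluster that should be its siblings."""
    cluster = os.path.join(master_apps_dir, '_cluster')
    if not os.path.isdir(cluster):
        return []
    # _cluster should hold only local/ and default/ config, not whole apps.
    return [d for d in sorted(os.listdir(cluster))
            if os.path.isdir(os.path.join(cluster, d))
            and d not in ('local', 'default')]
```

Run it against $SPLUNK_HOME/etc/master-apps; anything it returns is an app that will be replicated as _cluster content rather than as its own app on the peers.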


Explorer

I do not have the app installed on the syslog server. The TA was the only one installed.
/opt/splunkforwarder/etc/apps/Splunk_TA_paloalto.

My inputs.conf looks like:

#Palo Alto Devices
[monitor:///var/log/data/palo/.../*]
disabled = 0
host_segment = 6
sourcetype = pan:log
no_appending_timestamp = true
ignoreOlderThan = 1d
index = pan_logs
blacklist = .gz$

The TA was also placed on the deployment server in deployment apps and deployed to the syslog server, the indexers (via the cluster master), and the search head.

Still nothing. I had it working and then it stopped. Not sure what I'm doing wrong, either in syslog-ng or Splunk.
