
Palo Alto Logs to UF/syslog Server Listener?

nychawk
Communicator

Can I still use the index pan_logs? (My deployment is an upgrade.) Is there any benefit to not using it?

At the moment, I am sending logs to a listener I've set up on my syslog server with UFs, as follows:

~etc/system/local/inputs.conf

[udp://5514]
connection_host = ip
sourcetype = pan:log
# index = pan_logs
no_appending_timestamp = true

I've also tried declaring my index explicitly by removing the comment above.

When I replace the pan_logs queries with index=pan_logs, some of my search queries work, but the dashboards do not.

pan_traffic queries work, but the "Traffic Dashboard" yields no data.
pan_url queries work, but the "URL Filtering Dashboard" & "Web Activity Report" yield no data.

I should mention that I did not install the TA portion of the app onto my universal forwarders; the instructions stated heavy forwarders only.

Any drawbacks to just removing the app altogether and re-installing?
(My indexers contain no 'lookups' sub-directories, which seems to have been an issue for others.)


nychawk
Communicator

I upgraded Splunk on my indexers from splunk-6.3.0 to splunk-6.3.2, and am now able to see the following sourcetypes:

pan:traffic
pan:log
pan:threat
pan:system
pan:config

Since I have no plans to revert my version of Splunk, this is not repeatable.

Incidentally, my index is still pan_logs, and my next step is to place my entries in my UFs' inputs.conf rather than in the add-on's.

Posting in case someone else experiences a similar issue.

UPDATE:

My indexers are in a multi-site indexer cluster. Today, I installed another, non-PA-related indexer add-on, and after a rolling restart my Palo Alto app again began to exhibit the same issues I had thought were resolved by the Splunk upgrade.

In retrospect, the upgrade fixing things never made sense; I pored through the errata for the upgrades, and nothing closely resembling my issues seemed to fit.

After looking at the logs, I installed the PA add-on onto my deployment server and pushed my apps once more to my indexers, followed by several rolling restarts, TA installs and removals, and so on. I am now fairly sure my issue is that the indexer cluster is unable to install the add-on bundled with the PA app: it installed once and was later removed. This deserves a closer look.

Regards,

-mi


bsachitano
Explorer

Would you be able to give some hints? I have our PAs reporting to a syslog-ng server, and a UF then sends the data to the cluster master/indexers. The dashboards on the SH are empty. 😕


nychawk
Communicator

On my UFs, I've installed the TA, disabled the inputs.conf there, and added my inputs to my UF's own inputs.conf, although I found no difference versus just using the TA's inputs.conf. For me, it was just easier in case I needed to recreate a UF.

On my indexers, which are also in a cluster like yours, I also added the TA.

On my search heads, which incidentally are in an SHC (not sure if yours are), I needed to deploy the TA as an entirely separate app; otherwise my dashboards were empty.

So, are you pretty much set up in the same way?

What does a search like "index=pan* sourcetype=pan*" yield for sourcetypes? More than one sourcetype? Anything at all?
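
Something like this answers that in one shot (a sketch; widen the time range as needed):

index=pan* sourcetype=pan* | stats count by sourcetype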


bsachitano
Explorer

I did notice two things. First, my inputs.conf monitor stanza had the sourcetype pan:logs instead of pan:log; I corrected that on the deployment server and re-deployed.

Secondly, as the admin user I did not have a default index set. Once I set it to pan_logs, the dashboards began to populate with a few entries. Still figuring out what is missing, but data in the dashboards is a start.

I have syslog-ng set not to prepend a timestamp to the logs, and my monitor stanza looks like this:
#Palo Alto Devices
[monitor:///var/log/data/palo/.../*]
disabled = 0
host_segment = 6
sourcetype = pan:log
ignoreOlderThan = 1d
index = pan_logs
blacklist = .gz$

index=pan* shows results for pan_logs as the index.
sourcetype=pan* revealed pan:log
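
For reference, the syslog-ng side of a setup like that can write the raw message without a prepended receive timestamp by using a file destination template. This is only a minimal sketch; the source name s_udp and the exact directory layout under /var/log/data/palo are assumptions:

destination d_palo {
    # ${MESSAGE} is just the message body, so syslog-ng does not prepend its own timestamp
    file("/var/log/data/palo/firewalls/${HOST}/pan.log"
         template("${MESSAGE}\n") create_dirs(yes));
};
log { source(s_udp); destination(d_palo); };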


nychawk
Communicator

Are your search heads in a cluster too? If so, what apps do you have in your deployer's search head apps directory?

Did you uncomment the inputs in your TAs (assuming you are using your UFs for input)? Ditto for your indexers' TAs.


nychawk
Communicator

Here is what I am presently doing:

My UF is listening via the TA; I am no longer receiving through syslog-ng. The inputs.conf in my TA reads:

[udp://5514]
connection_host = ip
sourcetype = pan:log
index = pan_logs
no_appending_timestamp = true

If you are running a Linux UF, test the above with:

$ netstat -a | grep 5514

The above should yield:
udp 0 0 0.0.0.0:5514 0.0.0.0:*
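
If netstat is not installed on the box, ss performs the same check (assuming the iproute2 ss utility is present):

$ ss -lun | grep 5514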

If you opt to receive through your syslog-ng server, be sure you have no inputs defined in /opt/splunkforwarder/etc/apps/Splunk_TA_paloalto/local/, or have them commented out. I found no difference either way, but going back to my PAs to switch destination ports was more work, and I wanted to see my dashboards working.

On my indexers, which are clustered like yours, I have both the TA and the app deployed inside the master-apps directory:
Splunk_TA_paloalto
SplunkforPaloAltoNetworks

On my deployer for my SHC, I have both the app AND the TA deployed in /opt/splunk/etc/shcluster/apps:

Splunk_TA_paloalto
SplunkforPaloAltoNetworks

The SplunkforPaloAltoNetworks app still contains Splunk_TA_paloalto inside its "install" directory, but I never saw the TA properly deployed to my SHC, which is why I had to deploy it as a separate app for my search head cluster. Because the TA was not deploying properly, my dashboards did not render. Once I pushed the TA out to my SHC and verified it was present on my SHC members, the dashboards began to render properly. I am guessing this is your problem too. Please note, you may have to perform a rolling restart of your search head members.
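
For completeness, these are the pushes I would expect to run after staging the apps (a sketch; the target URI and credentials are placeholders):

On the cluster master, after placing the apps in master-apps:
$ splunk apply cluster-bundle --answer-yes

On the SHC deployer, after placing the apps in shcluster/apps:
$ splunk apply shcluster-bundle -target https://sh1.example.local:8089 -auth admin:changeme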

Let me know how this goes.


bravon
Communicator

Q: When I replace pan_logs queries with index=pan_logs, some of my search queries work; but not dashboards.
A: Look at that search's results in Verbose mode - in which index is the data hiding?

Q: pan_traffic queries work, however "Traffic Dashboard" yields no data.
A: Look at the Traffic Dashboard's underlying search (click the magnifier icon) - what does the search string say?

Q: pan_url queries work, however "URL Filtering Dashboard" & "Web Activity Report" yields no data.
A: What I would recommend is ignoring all the field-extraction details for now and just searching your environment for "sourcetype=pan*"
over the last 30 days. Then look at which index/sourcetype the data really has. This is an important part of your setup.
Most likely you are either not receiving any data, or you are missing the transforms.conf/props.conf for the data.
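
A concrete version of that check (a sketch; set the last 30 days in the time range picker):

| tstats count where index=* sourcetype=pan* by index sourcetype

If the only sourcetype that comes back is pan:log, the index-time sourcetype rewriting is not happening.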


nychawk
Communicator

Thank you, will try further debugging.


nychawk
Communicator

Based on the feedback, it seems I should remove my sourcetype definition on my UFs, as stated above?

sourcetype = pan:log

Leaving just:

[udp://5514]
connection_host = ip
index = pan_logs
no_appending_timestamp = true

nychawk
Communicator

Never mind; I reread the docs, and the listener wants sourcetype = pan:log.

Still without proper data.


nychawk
Communicator

Update: (problems still persist...)

I've removed my inputs.conf entries from my universal forwarder's /opt/splunkforwarder/etc/system/local/inputs.conf and moved them into the TA's directory at /opt/splunkforwarder/etc/apps/Splunk_TA_paloalto/local/inputs.conf.

My app's inputs.conf contains the same entries that I had under Splunk's ~etc/system/local/inputs.conf:

[udp://5514]
connection_host = ip
sourcetype = pan:log
index = pan_logs
no_appending_timestamp = true

pan:log is thus far the ONLY sourcetype being created, although at least some of my raw logs show signs that they should have been tagged as sourcetype pan:threat; see the sample below:

Jan 20 11:02:31 PALO-ALTO1.somedomain.local 1,2016/01/20 11:02:31,007801001089,THREAT,url,0,2016/01/20 11:02:31,192.168.2.100,74.125.226.186,0.0.0.0,0.0.0.0,Guest Outbound,,,google-base,vsys1,Guest Trusted,Guest Untrusted,ethernet1/10,ethernet1/9,PAN Log Forwarding,2016/01/20 11:02:31,263889,1,55319,80,0,0,0xc000,tcp,alert,"googleads.g.doubleclick.net/pagead/gen_204?id=wfocus&gqid=la-fVtmbFYKLFGL_s-AD&qqid=CNXIw-fhuMoCFdEsTFodTv0Dlg&bglotd=1",(9999),web-advertisements,informational,client-to-server,938841822,0x0,192.168.0.0-192.168.255.255,US,0,text/html,0,,,5,,,,,,,,0
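
For context, the splitting of pan:log into pan:traffic, pan:threat, and so on is typically done with index-time sourcetype-rewriting transforms that run on the indexers or a heavy forwarder, not on a UF. The stanzas below are only an illustrative sketch of that mechanism; the regex and stanza names are not the TA's actual contents:

props.conf:
[pan:log]
TRANSFORMS-sourcetype_rewrite = pan_threat_sourcetype

transforms.conf:
[pan_threat_sourcetype]
# match events whose fourth comma-separated field is THREAT (illustrative)
REGEX = ^[^,]+,[^,]+,[^,]+,THREAT,
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::pan:threat

Since these run at parse time, the TA has to be present and intact on the indexers for anything other than pan:log to appear, which is consistent with the installer errors below.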

I've confirmed that I am indeed receiving traffic (tcpdump), verified the data is being sent to the indexers, and finally verified that the indexers are also receiving this traffic (also via tcpdump).

My /opt/splunk/var/log/splunk/paloalto_ta_installer.log on my indexer contains the following:

2016-01-15 13:18:51,507 [INFO] Splunk App for Palo Alto Networks Dependency Manager: Exiting...
2016-01-19 17:46:25,073 [INFO] Splunk App for Palo Alto Networks Dependency Manager: Starting...
2016-01-19 17:46:25,130 [INFO] dependency Splunk_TA_paloalto not found - installing...
2016-01-19 17:46:25,130 [ERROR] unable to copy /opt/splunk/etc/apps/SplunkforPaloAltoNetworks/install/Splunk_TA_paloalto to /opt/splunk/etc/appsSplunk_TA_paloalto
2016-01-19 17:46:25,130 [ERROR] cannot copy tree '/opt/splunk/etc/apps/SplunkforPaloAltoNetworks/install/Splunk_TA_paloalto': not a directory
Traceback (most recent call last):
  File "/opt/splunk/etc/slave-apps/SplunkforPaloAltoNetworks/bin/scripted_inputs/deploy_splunk_ta_paloalto.py", line 40, in install_dependency
    dir_util.copy_tree(src, dst)
  File "/opt/splunk/lib/python2.7/distutils/dir_util.py", line 128, in copy_tree
    "cannot copy tree '%s': not a directory" % src
DistutilsFileError: cannot copy tree '/opt/splunk/etc/apps/SplunkforPaloAltoNetworks/install/Splunk_TA_paloalto': not a directory
2016-01-19 17:46:25,137 [INFO] Splunk App for Palo Alto Networks Dependency Manager: Exiting...

My indexer DOES NOT contain an /opt/splunk/etc/slave-apps/Splunk_TA_paloalto directory; however, my search heads do contain /opt/splunk/etc/apps/Splunk_TA_paloalto. Perhaps it was not installed correctly, or at all?
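
A quick sanity check on a cluster peer, to confirm whether the bundled TA directory actually exists where the installer script expects it (a sketch, using the paths from the log above):

$ ls -ld /opt/splunk/etc/slave-apps/SplunkforPaloAltoNetworks/install/Splunk_TA_paloalto
$ ls -ld /opt/splunk/etc/apps/SplunkforPaloAltoNetworks/install/Splunk_TA_paloalto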

Thank you


btorresgil
Builder

You can still use the pan_logs index. For data from pan_logs to show up, it must be an index that is searched by default for your user. This setting is in your 'role' settings.

Instructions are in the App 5.0 upgrade guide:
http://pansplunk.readthedocs.org/en/latest/upgrade.html
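
If you prefer the configuration view of that role setting, it lives in authorize.conf (a sketch; the role name and the existing index list are assumptions, and in the UI it is Settings > Access controls > Roles > your role > Indexes searched by default):

[role_admin]
srchIndexesDefault = main;pan_logs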


nychawk
Communicator

Yup, that index is assigned to my admin role.
