
Splunk_TA_ontap only fetches a subset of volumes


The "Splunk Add-on for NetApp Data ONTAP" is only fetching performance information for the first 50 volumes on a cluster. Changing the "perf_chunk_size_cluster_mode" value in ta_ontap_collection.conf changes that count: if I set it to 53, I get performance data for the first 53 volumes (alphabetically) on the cluster. The value can't be set arbitrarily large, though; at 10000, data collection fails outright.
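For reference, this is the kind of local override I'm using (the stanza header below is illustrative; perf_chunk_size_cluster_mode is the exact attribute name, but check your own copy of the conf for the correct stanza):

```ini
# local/ta_ontap_collection.conf -- stanza name illustrative
[global]
# Default behaves like 50. Raising it extends how many volumes get
# perf data, but very large values (e.g. 10000) make collection fail.
perf_chunk_size_cluster_mode = 53
```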

The chunking mechanism is part of the add-on's collection script and normally iterates over multiple queries until it has collected data for all volumes. This worked for years but has been broken for several months now, which may or may not line up with our upgrade to Splunk Enterprise 8.2.2 and Python 3.
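To illustrate what I mean by the chunking mechanism (this is my own sketch, not the add-on's actual code): the script appears to use a tag-style iterator, where each query returns one chunk plus a continuation tag, and the loop repeats until no tag comes back. If the loop only ever ran once, you'd see exactly the symptom here: only the first chunk-size volumes collected.

```python
CHUNK_SIZE = 50  # analogous to perf_chunk_size_cluster_mode

def fetch_chunk(volumes, tag, chunk_size):
    """Stand-in for one API call; 'tag' acts as the continuation cursor."""
    start = tag or 0
    chunk = volumes[start:start + chunk_size]
    end = start + chunk_size
    next_tag = end if end < len(volumes) else None  # None == no more data
    return chunk, next_tag

def iterate_all(volumes, chunk_size=CHUNK_SIZE):
    """Loop until the API stops returning a tag, collecting every chunk.
    If the tag were dropped after the first pass, only the first
    chunk_size volumes would ever be collected."""
    collected, tag = [], None
    while True:
        chunk, tag = fetch_chunk(volumes, tag, chunk_size)
        collected.extend(chunk)
        if tag is None:
            return collected

# With 120 volumes and the default chunk size, three passes are needed.
vols = [f"vol{i:03d}" for i in range(120)]
assert len(iterate_all(vols)) == 120
```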

The add-on is the latest version (3.0.2), and the filers are running ONTAP 9.7. I went back through the install manual and verified all the steps and add-on data. Inventory/capacity information works without issue; it is only performance metrics that are a problem. Collection throws the log warning "No instances found for object type volume" from line 461 in the script.

It seems like the "next_tag" mechanism in the script is failing, but I can't work out how to run it from the command line, so I don't know how to troubleshoot any further.

Splunk_TA_ontap shows as "unsupported" and developed by "Splunk Works"; the last release was June 2021. I could really use some pointers on how to resolve this, or on how I could move forward troubleshooting it myself.

