I'm in the process of installing the Splunk App for NetApp Data ONTAP. I have configured a data collection node and added a filer, and both show up as "valid" on the configuration screen. However, no data ever appears in Splunk. When I search the _internal index for hydra errors, I see the following messages over and over:
ERROR [ta_ontap_collection_scheduler://nidhogg] Problem with hydra scheduler ta_ontap_collection_scheduler://nidhogg:
maximum recursion depth exceeded while calling a Python object
Traceback (most recent call last):
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_scheduler.py", line 1522, in run
collection_manifest.sprayReadyJobs(self.node_manifest)
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_scheduler.py", line 408, in sprayReadyJobs
node_manifest.sprayJobSet(reassign_jobs)
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_scheduler.py", line 1147, in sprayJobSet
self.sprayJobSet(reassign_jobs)
ERROR [ta_ontap_collection_scheduler://nidhogg] [HydraCollectionManifest] failed to assign batch of jobs for node=datacollection.domain.local:8089, marking dead and reassigning jobs to others, may cause job duplication
Traceback (most recent call last):
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_scheduler.py", line 1138, in sprayJobSet
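For what it's worth, the traceback itself hints at the failure mode: sprayJobSet (line 1147) calls itself to reassign jobs from a node it has marked dead, so if no node ever accepts the batch, the scheduler recurses until Python's limit is hit. The sketch below is my guess at the pattern, not the actual SA-Hydra code; the names spray_job_set, accepts, and DeadNode are all hypothetical:

```python
def spray_job_set(jobs, live_nodes):
    """Hypothetical sketch of the reassignment loop: if every node
    rejects the batch, the same job set is re-sprayed recursively.
    With no reachable node left, this never terminates and Python
    raises 'maximum recursion depth exceeded'."""
    for node in live_nodes:
        if node.accepts(jobs):
            return node
    # No node accepted the batch: mark it dead and try again
    # with the same inputs -- unbounded recursion.
    return spray_job_set(jobs, live_nodes)

class DeadNode:
    """Stands in for a collection node that rejects every batch,
    e.g. because its management port (8089) is unreachable."""
    def accepts(self, jobs):
        return False

try:
    spray_job_set(["job1"], [DeadNode()])
except RecursionError:
    print("maximum recursion depth exceeded")
```

If that guess is right, the "valid" status on the config screen may only reflect saved credentials, while the scheduler still can't hand jobs to the collection node at runtime.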
I've made sure Python is installed on both the indexer and the data collection node, and I've verified that the splunk user owns all of the relevant files and directories. Any idea what I'm missing?
Using Splunk version 6.1.2 on RHEL 6.5.