NetApp data collection hydra errors

bsteiner
Engager

I'm in the process of installing the Splunk App for NetApp Data ONTAP. I have configured a data collection node and added a filer, and both show up as "valid" on the configuration screen. However, data never appears in Splunk. When I search the _internal index for hydra errors, I see the following messages over and over:

ERROR [ta_ontap_collection_scheduler://nidhogg] Problem with hydra scheduler ta_ontap_collection_scheduler://nidhogg:
maximum recursion depth exceeded while calling a Python object
Traceback (most recent call last):
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_scheduler.py", line 1522, in run
collection_manifest.sprayReadyJobs(self.node_manifest)
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_scheduler.py", line 408, in sprayReadyJobs
node_manifest.sprayJobSet(reassign_jobs)
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_scheduler.py", line 1147, in sprayJobSet
self.sprayJobSet(reassign_jobs)

ERROR [ta_ontap_collection_scheduler://nidhogg] [HydraCollectionManifest] failed to assign batch of jobs for node=datacollection.domain.local:8089, marking dead and reassigning jobs to others, may cause job duplication
Traceback (most recent call last):
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_scheduler.py", line 1138, in sprayJobSet

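For anyone hunting for the same messages, a search along these lines should surface the scheduler errors (the *hydra* source pattern is an assumption based on the scheduler's log file name):

index=_internal source=*hydra* ERROR
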
I've made sure Python is installed on both the indexer and the data collection node, and I've verified that the splunk user owns all files and directories. Any idea what I'm missing?

Using Splunk version 6.1.2 on RHEL 6.5.

hiteshkanchan
Communicator

I installed a heavy forwarder (instead of a universal forwarder) and enabled forwarding to the indexer/search head. It is working for me, and Python is listening on port 8008.
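
For anyone setting this up the same way, forwarding can be enabled from the CLI with something like the following (the indexer host, port, and credentials are placeholders for your environment):

/opt/splunk/bin/splunk add forward-server indexer.domain.local:9997 -auth admin:changeme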

bsteiner
Engager

I have figured out the issue. The documentation states that a heavy or a light forwarder may be used for the data collection node.

http://docs.splunk.com/Documentation/NetApp/latest/DeployNetapp/InstalltheSplunkAppforNetAppDataONTA...

I have found this not to be the case. When running as a light forwarder, Splunk only opens port 8089. I disabled the SplunkLightForwarder app and enabled the SplunkForwarder app, then verified that outputs.conf was forwarding to my indexer.
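
As a rough sketch, the relevant outputs.conf looked something like this (the group name and indexer address are placeholders, not the exact values from my environment):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer.domain.local:9997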

After doing this, netstat on the data collection node showed Python listening on port 8008. I added the data collection node and my filers to the app configuration and started the scheduler, and data began populating in the app.
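
The check was along these lines (run as root so -p can report the owning process):

netstat -tlnp | grep 8008

Once the worker is healthy you should see a python process listening on that port.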

hiteshkanchan
Communicator

Did you install Python separately on the universal forwarder, since the universal forwarder doesn't ship with Python? I am seeing issues with the universal forwarder and getting the following error, even though I installed Python with yum install python, so I was curious to know.

2015-07-24 19:59:42,832 ERROR [ta_ontap_collection_scheduler://nidhogg] Problem with hydra scheduler ta_ontap_collection_scheduler://nidhogg:
'job_aggregate_execution_info'
Traceback (most recent call last):
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_scheduler.py", line 2136, in run
collection_manifest.sprayReadyJobs(self.node_manifest)
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_scheduler.py", line 735, in sprayReadyJobs
available_work_load, balanced_load = self._calculateLoadDistribution(active_workers, ready_jobs, node_job_infos)
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_scheduler.py", line 535, in _calculateLoadDistribution
self._update_execution_time(node_job_info["job_aggregate_execution_info"])
KeyError: 'job_aggregate_execution_info'

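In the meantime, one way to keep the scheduler from dying on this KeyError might be to guard the lookup, a sketch only, assuming a missing key simply means the node never reported execution stats:

# sketch of a defensive lookup in SA-Hydra/bin/hydra/hydra_scheduler.py,
# inside _calculateLoadDistribution (line 535 in the traceback above);
# assumption: a missing key means the node has not reported stats yet,
# so we skip the execution-time update instead of crashing
exec_info = node_job_info.get("job_aggregate_execution_info")
if exec_info is not None:
    self._update_execution_time(exec_info)

That would only mask the symptom, though; as bsteiner found above, the real fix is running the data collection node on a full Splunk instance acting as a heavy forwarder rather than a universal forwarder.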