All Apps and Add-ons

Splunk App for NetApp Data ONTAP: Why am I getting a data collection "job_aggregate_execution_info" error?

hiteshkanchan
Communicator

I am getting the following error when I execute:

index=_internal sourcetype=splunk_ta_ontap_api* OR sourcetype=hydra* ERROR

I don't see any data in index=ontap or in any of the NetApp dashboards. I am using a Universal Forwarder (instead of a light or heavy forwarder) as the Data Collection Node (DCN) and installed Python on it myself. Is that fine?

I also enabled port 8008 on the DCN to receive the data from the storage devices.

2015-07-24 19:59:42,832 ERROR [ta_ontap_collection_scheduler://nidhogg] Problem with hydra scheduler ta_ontap_collection_scheduler://nidhogg: 'job_aggregate_execution_info'
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_scheduler.py", line 2136, in run
    collection_manifest.sprayReadyJobs(self.node_manifest)
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_scheduler.py", line 735, in sprayReadyJobs
    available_work_load, balanced_load = self._calculateLoadDistribution(active_workers, ready_jobs, node_job_infos)
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_scheduler.py", line 535, in _calculateLoadDistribution
    self._update_execution_time(node_job_info["job_aggregate_execution_info"])
KeyError: 'job_aggregate_execution_info'
1 Solution

hiteshkanchan
Communicator

I installed a Heavy Forwarder (instead of the UF) and enabled forwarding to the indexer/search head.
I had previously been enabling port 8008 explicitly, but I stopped listening on that port. It is working for me now. Thanks.
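For anyone reproducing this fix: forwarding from a heavy forwarder to the indexing tier is configured in outputs.conf. A minimal sketch is below; the hostname and group name are placeholders, and 9997 assumes the indexers' default receiving port.

```
# $SPLUNK_HOME/etc/system/local/outputs.conf on the heavy forwarder (DCN)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Replace with your indexer host(s); 9997 is the conventional receiving port
server = indexer1.example.com:9997
```

Restart the forwarder after the change so the new output group takes effect.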




Masa
Splunk Employee
  1. I would not try a UF with your own Python if I were you. You take on responsibility for any unexpected issues, such as performance problems, Python version changes, unsupported modules, or SSL issues related to your Python package.
  2. If you need to stick with a UF and your own Python package for some reason, I still recommend deploying the recommended configuration first, to confirm your deployment works before trying this approach. Whenever you hit a problem with the UF and your own Python, go back to the recommended deployment to verify that it works fine.
  3. The error message indicates the DCN is running the scheduler instead of the workers. Something is wrong there.
  4. "Enabled port 8008 on the DCN to receive the data from the storage devices": port 8008 is used to communicate with the scheduler, not to retrieve data from the filer nodes. So something in the configuration might be wrong around here, too.
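To check point 3 above, you can look at what the hydra framework is logging on each node. A search along these lines (building on the sourcetypes from the original question; exact log wording may vary by app version) should show whether scheduler or worker activity is coming from the DCN's host:

```
index=_internal sourcetype=hydra* host=<your_dcn_host> (scheduler OR worker)
| stats count by host, sourcetype
```

If only scheduler entries appear from the DCN, the scheduler/worker roles are likely misassigned in the app's configuration.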