Hi All,
Facing an issue: I just installed the Splunk Add-on for VMware on my dev environment and I'm not able to fetch any data. What happens:
1. Entered the Splunk user - validated, green checkmark.
2. Entered the vCenter hostname, username and password - validated, green checkmark, and the inventory was fetched (including all ESXi servers).
When I enable data collection, nothing happens. Checking the internal logs, I found this message:
2019-10-19 17:28:51,087 ERROR [ta_vmware_collection_worker://alpha:25993] Problem with hydra worker ta_vmware_collection_worker://alpha:25993: ('Could not login to target=%s with username=%s after num_retries=%s', 'vcenter.host', 'domain\\vcenter_user', 3)
Traceback (most recent call last):
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_worker.py", line 618, in run
self.assignJobToHandler(cur_job)
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_worker.py", line 452, in assignJobToHandler
session = self.getSessionForTarget(job_tuple.target, config["username"], config["realm"])
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_worker.py", line 420, in getSessionForTarget
session_stanza = self.updateSessionStanza(session_stanza, target, username, realm)
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_worker.py", line 349, in updateSessionStanza
session = self.loginToTarget(target, username, password, realm)
File "/opt/splunk/etc/apps/Splunk_TA_vmware/bin/ta_vmware_collection_worker.py", line 55, in loginToTarget
raise Exception("Could not login to target=%s with username=%s after num_retries=%s", target, user, retry_count)
Exception: ('Could not login to target=%s with username=%s after num_retries=%s', 'vcenter.host', 'domain\\vcenter_user', 3)
Of course, vcenter.host and domain\vcenter_user are the correct values. The dev instance is a single all-in-one instance, so that should rule out any DCN, time sync, etc. issues. The weirdest thing is that it fetches the inventory but then fails to log in. On the same host, the NetApp add-on works perfectly. Any help/hints appreciated. Thank you.
Splunk: v7.1.4
Splunk Add-on for VMware: v3.4.5
vCenter: v6.5
Update 20-10-2019.
Found in the documentation that the user should be specified in the user@domain format, changed that. Behaviour is still the same.
Went through every single log file and all the Python scripts, and found a dependency on reading the "/tmp/suds/version" file/folder. Splunk runs as the splunk user, but that folder was owned by another user from who knows when, so the read was failing. As soon as I allowed read permission, everything suddenly started to work. Just 3 days of effort down the drain 🙂 anyway, it works now! Hope it helps someone else 🙂
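For anyone hitting the same thing, here is a quick check you can run as the splunk user (a minimal sketch; it assumes the suds cache sits at /tmp/suds as in my case, so adjust the path and user to your environment):

import os
import pwd

cache_dir = "/tmp/suds"  # the suds cache folder from the error above
st = os.stat(cache_dir)
owner = pwd.getpwuid(st.st_uid).pw_name
mode = oct(st.st_mode & 0o777)
print("%s is owned by %s, mode %s" % (cache_dir, owner, mode))
# Run this as the splunk user: can it actually read and traverse the folder?
print("readable by current user: %s" % os.access(cache_dir, os.R_OK | os.X_OK))

If the owner isn't splunk or the readable check comes back False, fixing the ownership/permissions on that folder is what resolved it for me.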
I was also getting errors like these, and the problem turned out to be that Splunk/Splunk_TA_vcenter doesn't fully respect the /etc/hosts file. You'll have to hard-code the IP address in the collection configuration page, or presumably set up actual DNS (a quick resolution check is sketched after the traceback below).
The errors I was getting:
2021-08-19 08:56:13,564 ERROR [ta_vmware_collection_worker://gamma:9861] Problem with hydra worker ta_vmware_collection_worker://gamma:9861: ('Could not login to target=%s with username=%s after num_retries=%s', 'vcenter6.local', 'administrator@vsphere.local', 3)
Traceback (most recent call last):
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_worker.py", line 623, in run
self.assignJobToHandler(cur_job)
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_worker.py", line 457, in assignJobToHandler
session = self.getSessionForTarget(job_tuple.target, config["username"], config["realm"])
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_worker.py", line 425, in getSessionForTarget
session_stanza = self.updateSessionStanza(session_stanza, target, username, realm)
File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_worker.py", line 354, in updateSessionStanza
session = self.loginToTarget(target, username, password, realm)
File "/opt/splunk/etc/apps/Splunk_TA_vmware/bin/ta_vmware_collection_worker.py", line 55, in loginToTarget
raise Exception("Could not login to target=%s with username=%s after num_retries=%s", target, user, retry_count)
Exception: ('Could not login to target=%s with username=%s after num_retries=%s', 'vcenter6.local', 'administrator@vsphere.local', 3)
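If you want to see what the collection node resolves the hostname to, here is a quick sketch (not part of the add-on; replace the hostname with the one from your collection configuration page) that you can run as the splunk user:

import socket

target = "vcenter6.local"  # hostname as entered on the collection configuration page
try:
    print("%s resolves to %s" % (target, socket.gethostbyname(target)))
except socket.gaierror as err:
    print("resolution failed: %s" % err)

Note that Python's resolver here does consult /etc/hosts, so this check can succeed even while the add-on's login fails; if that's what you see, hard-coding the IP address (or setting up real DNS) is the workaround.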