Splunk UBA is down
The Splunk UBA search head is down.
Even after restarting the UI service, the status shows as active in the CLI, but the GUI is not available.
Commands used to stop/start the UI service:
sudo service caspida-ui stop
sudo service caspida-ui start
Status when checked in CLI:
● caspida-ui.service
Loaded: loaded (/etc/init.d/caspida-ui; bad; vendor preset: enabled)
Active: active (exited) since Fri 2021-09-03 05:53:12 UTC; 6min ago
I also tried rebooting the VM, but it didn't help.
Can I please get a suggestion on how to fix this?
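(Note that for an init.d-wrapped unit like this, "active (exited)" only reflects the wrapper script's exit status, not whether the UI process is still alive. A minimal sketch for checking that directly; the process name pattern is an assumption:)
# Check whether a caspida-ui process is actually running (name pattern is an assumption)
ps aux | grep -i '[c]aspida-ui'
# Review recent messages logged for the service
sudo journalctl -u caspida-ui.service --since "1 hour ago"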

Did this setup work in the past? If so, have there been any changes to IP/host/DNS resolution and/or firewall/connectivity? It looks like a connectivity/resolution issue.
@lakshman239 I suspect so too. However, there is no confirmation from the network team regarding any connectivity changes with respect to the firewall, etc.

@snisaxena One option would be to stop and start all services, so they start gracefully. Please refer to https://docs.splunk.com/Documentation/UBA/5.0.4.1/Admin/CLICommands
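A minimal sketch of that sequence, assuming the standard Caspida CLI location referenced in the docs link above (typically run as the caspida user on the management node):
# Stop all UBA services across the deployment
/opt/caspida/bin/Caspida stop-all
# Start them again once stop-all has completed cleanly
/opt/caspida/bin/Caspida start-all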
@lakshman239 I ran /opt/caspida/bin/Caspida stop-all and it has been running for more than 2 hours now.
I tried to exit and run /opt/caspida/bin/Caspida start-all. It aborted with the message below:
failed to check/update system configuration: aborting. see /var/vcap/sys/log/caspida/caspida.out
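When start-all aborts like this, the log it points at usually shows which configuration check failed. A rough sketch for pulling the relevant lines from that log; the grep patterns are assumptions:
# Show the most recent entries from the log referenced in the error
tail -n 100 /var/vcap/sys/log/caspida/caspida.out
# Narrow down to likely failure messages (patterns are assumptions)
grep -iE 'error|fail|abort' /var/vcap/sys/log/caspida/caspida.out | tail -n 50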

stop-all running for a long time does indicate an underlying issue in the cluster.
Have you run the pre-check and post health checks using the latest available scripts? If not, please run them, and perhaps raise a case with Support, attaching the output.
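A rough sketch of capturing the health-check output for a support case; the script name and path are assumptions about a typical UBA install, so confirm them against the latest scripts provided by Splunk:
# Script path/name below is an assumption -- use the latest health-check script from Splunk
/opt/caspida/bin/utils/uba_health_check.sh 2>&1 | tee /tmp/uba_health_check_$(date +%F).log
# Attach the captured /tmp/uba_health_check_<date>.log file to the support case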
@lakshman239 I did run a health check before running stop-all and observed the error below:
ui connect: <hostname> <= curl failed to ui <hostname>
curl: (7) Failed to connect to <hostname> port 443: Connection refused
ui connect: sc2-splunk-uba-1 <= curl failed to ui <hostname>
curl: (7) Failed to connect to <hostname> port 443: Connection refused
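"Connection refused" on port 443 usually points at nothing listening on that port on the UBA node, i.e. the UI process being down, rather than a firewall silently dropping packets (a drop would typically time out instead). A minimal sketch for confirming this locally with standard Linux tools; iptables is an assumption, since your host may use firewalld or nftables:
# Is anything listening on 443 on this node?
sudo ss -tlnp | grep ':443'
# Does a local HTTPS request get past the TCP connect? (-k skips certificate validation)
curl -vk https://localhost/ -o /dev/null
# Check for local firewall rules on 443 (iptables is an assumption; firewalld/nftables hosts differ)
sudo iptables -L -n | grep 443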
