All Topics

Hi Splunkers, I am currently trying to create a pie chart that gets its data from a token:

host=* | eval $Overview$ | chart sum(Warning) as "Warnings" sum(Violation) as Violations sum(Alerts) as Alerts over sum(ALL)

I am looking to create a pie chart that shows these three values. Any help is appreciated.

Thank you, Marco
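A minimal sketch of one possible approach, assuming Warning, Violation, and Alerts are numeric fields in the events. Note that chart ... over expects a split-by field rather than an aggregation, which is likely why the original search fails; stats plus transpose produces the category/value pairs a pie chart can render:

host=* | stats sum(Warning) as Warnings sum(Violation) as Violations sum(Alerts) as Alerts
| transpose column_name=Category
| rename "row 1" as Count

Pointing the pie chart at Category and Count should then show the three slices.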
Team, I would like to generate TPS based on two different search criteria, but both have to run in a single report that populates both TPS values.

Query 1:
index=abc "String 1" | bin _time span=1s | chart count as TPS by _time | timechart max(TPS) as peakTPS eval(round(avg(TPS),2)) as avgTPS span=1h

Query 2:
index=abc "String1" OR "String 2" | bin _time span=1s | chart count as TPS by _time | timechart max(TPS) as peakTPS eval(round(avg(TPS),2)) as avgTPS span=1h

Here query 1 finds TPS and peak TPS based on one particular string, and query 2 finds TPS and peak TPS based on the string used in query 1 plus another string on top of it. Now I would like to merge both of them into a single query so that one report is enough for providing the metrics.
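One hedged way to merge them, reusing the index and literal strings from the question: search for the superset once, flag the events that match query 1's narrower criterion with searchmatch() (the inner quotes are escaped so the phrase matches as a phrase), and aggregate both series in one timechart:

index=abc "String 1" OR "String 2"
| eval q1=if(searchmatch("\"String 1\""), 1, 0)
| bin _time span=1s
| stats count as TPS_all sum(q1) as TPS_q1 by _time
| timechart span=1h max(TPS_all) as peakTPS_all eval(round(avg(TPS_all),2)) as avgTPS_all max(TPS_q1) as peakTPS_q1 eval(round(avg(TPS_q1),2)) as avgTPS_q1

This is a sketch, not a drop-in answer; adjust the field names and rounding to taste.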
Greetings, Within my infrastructure, I am hoping to set up a single Splunk instance to ingest uploaded logs for analysis and correlation purposes. The instance would not require forwarders. Is it possible to set Splunk up this way? Any help or direction is appreciated!
index=<<My_index>> earliest="12/23/2020:10:00:00" latest="12/23/2020:11:00:00" "<<url>>" | eval MyFeild=replace(MyFeild,"\d{1}\d+","") | search MyFeild=*sample_search* | stats count by MyFeild

When I use the above search in the UI, it gives a table in the Statistics tab with MyFeild and its total count. When I use the Splunk API, for some reason I am not getting the consolidated output:

https://<<mydomain for my splunkapi>>/services/search/jobs/export?search=search index=<<My_index>> earliest="12/23/2020:10:00:00" latest="12/23/2020:11:00:00" "<<url>>" | eval MyFeild=replace(MyFeild,"\d{1}\d+","") | search MyFeild=*sample_search* | stats count by MyFeild

Not sure if I am missing anything here.
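Two things commonly bite here: the search string must be URL-encoded (literal pipes and quotes in a raw GET URL are often mangled before they reach splunkd), and the export endpoint streams results for a transforming search, so intermediate chunks may appear before the final stats table at the end of the stream. A hedged curl sketch with placeholder credentials and port, POSTing the encoded search and requesting CSV:

curl -k -u admin:yourpassword https://<<mydomain for my splunkapi>>:8089/services/search/jobs/export \
  --data-urlencode 'search=search index=<<My_index>> earliest="12/23/2020:10:00:00" latest="12/23/2020:11:00:00" "<<url>>" | eval MyFeild=replace(MyFeild,"\d{1}\d+","") | search MyFeild=*sample_search* | stats count by MyFeild' \
  -d output_mode=csv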
Good afternoon, I'm looking for this PDF; does anyone know where I can download it from? Regards.
Hi, Has anyone run into this issue before? I tried finding other Splunk questions on this and couldn't find anything that could lead me in the right direction for resolving it. Can anyone shed light on how this can be fixed? I've googled the issue and there are various tips, but I don't want to mess up Splunk by just poking around and changing things. This is happening on the cluster nodes; all 4 servers are getting the same error.

Log Name:      System
Source:        Application Popup
Date:          11/25/2020 9:52:40 AM
Event ID:      26
Task Category: None
Level:         Information
Keywords:      Classic
User:          N/A
Computer:      [host.fqdn]
Description:   Application popup: streamfwd.exe - Entry Point Not Found : The procedure entry point ResolveLocaleName could not be located in the dynamic link library KERNEL32.dll.

Log Name:      Application
Source:        Application Error
Date:          11/25/2020 9:52:40 AM
Event ID:      1000
Task Category: (100)
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      [host.fqdn]
Description:   Faulting application streamfwd.exe, version 0.0.0.0, time stamp 0x5de811bd, faulting module KERNEL32.dll!ResolveLocaleName, version 6.0.6003.20708, time stamp 0x5e0acab3, exception code 0xc0000139, fault offset 0x00000000000b6688, process id 0x267c, application start time 0x01d6c353c4b4bd11.
I am using the app "SA-cim_vladiator" and this message appears indicating that it has found unexpected values. At the moment it is only analyzing and detecting the logs where the field action=allowed; those whose action value is Accept, success, or pass are not being detected. The same happens for blocked, where drop and deny are not detected in the action field. How can I solve this situation?
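SA-cim_vladiator validates fields against the CIM's expected value lists; for the Network Traffic model the action field is expected to use a small normalized set (allowed/blocked/teardown, approximately -- treat the exact list as an assumption). A hedged search-time sketch to confirm what you have and preview the normalization before making it permanent (the value lists here are illustrative):

index=firewall
| eval action=case(lower(action) IN ("allowed","accept","authorize","pass","success"), "allowed",
                   lower(action) IN ("blocked","deny","drop"), "blocked",
                   true(), lower(action))
| stats count by sourcetype action

Once the mapping looks right, the same case() expression can be made permanent as a calculated field (EVAL-action) in props.conf per sourcetype.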
Hi all, this app is very useful for my needs, because I have to open or close panels in my dashboards, but I don't want the "Dashboard Slider Details", the little panel at the bottom right. How can I disable it? Thank you in advance. Ciao. Giuseppe
Hi Everyone, I am not able to see data in any of my dashboards and am getting the errors below.

The limit has been reached for log messages in info.csv. 1 messages have not been written to info.csv. Refer to search.log for these messages or limits.conf to configure this limit.
The limit has been reached for log messages in info.csv. 185 messages have not been written to info.csv. Refer to search.log for these messages or limits.conf to configure this limit.
Unable to distribute to peer named HPNPLEMTWA140.gso.bxp.com at uri https://10.18.94.61:8001 because replication was unsuccessful. ReplicationStatus: Failed - Failure info: failed_because_BUNDLE_SIZE_EXCEEDS_MAX_SIZE. Verify connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available. See the Troubleshooting Manual for more information.
Unable to distribute to peer named HPNPLEMTWA142.gso.bxp.com at uri https://10.18.94.63:8006 because replication was unsuccessful. ReplicationStatus: Failed - Failure info: failed_because_BUNDLE_SIZE_EXCEEDS_MAX_SIZE. Verify connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available. See the Troubleshooting Manual for more information.
Unable to distribute to peer named HPNPLEMTWA143.gso.bxp.com at uri https://10.18.94.66:8007 because replication was unsuccessful. ReplicationStatus: Failed - Failure info: failed_because_BUNDLE_SIZE_EXCEEDS_MAX_SIZE. Verify connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available. See the Troubleshooting Manual for more information.
Unable to distribute to peer named HPNPLEMTWA144.gso.bxp.com at uri https://10.18.94.67:8003 because replication was unsuccessful. ReplicationStatus: Failed - Failure info: failed_because_BUNDLE_SIZE_EXCEEDS_MAX_SIZE. Verify connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available. See the Troubleshooting Manual for more information.

I haven't done anything. I don't know why these errors are coming. Can someone guide me on this?
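The failed_because_BUNDLE_SIZE_EXCEEDS_MAX_SIZE part usually means the knowledge bundle the search head replicates to its peers has grown past the configured limit, most often because of large lookup files (check the biggest files under $SPLUNK_HOME/etc/apps/*/lookups). A hedged sketch of distsearch.conf on the search head -- the lookup path is a placeholder, and you would normally pick one of the two approaches, not both:

# $SPLUNK_HOME/etc/system/local/distsearch.conf (search head)
[replicationSettings]
# raise the bundle size ceiling (value in MB)
maxBundleSize = 4096

[replicationBlacklist]
# or exclude a known-huge lookup from replication (regex against the path)
hugeLookup = apps[/\\]search[/\\]lookups[/\\]my_big_lookup\.csv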
Issue

When configured to use Azure SAML on our Enterprise Security search head (no Authentication Extension yet specified), I discovered that Enterprise Security 6.4.0's Incident Review "Run Adaptive Response" returned "Unexpected token < in JSON at position 0" when attempting to run any response (even Ping), with no data passed to the response. It was an immediate failure. Support noted a HAR capture showed it was because credentials weren't being passed, and pointed to a lack of AQR support by Azure as the reason.

Backstory

While it was surprising that Enterprise Security had a feature relying on AQR (the only one discovered so far), the lack of AQR support in Azure was not a surprise, as I'd been exposed to it when attempting to set up the Secure Gateway as well (which we gave up on as a secondary priority to finishing our installation). That same exposure also led us to discover, in the Secure Gateway documentation, a sample script to be used as a SAML Authentication Extension to overcome this lack of Azure support. Unfortunately, at the time, the script didn't seem to work -- after actually looking at its code I could tell why: the Splunk-provided sample expected an Azure API key.

Solution

WARNING for Production Environments: If you attempt to use the Authentication Extension script, be advised that as long as it is enabled and not working, your web session will time out after the User Time To Live period regardless of activity, because it cannot re-validate your identity (e.g. 3600s by default -- 1 hour). When it times out, your cookie may be well and truly hosed and you'll need to clear cookies & cache to get back to the login page. Worst case scenario, you'll need to edit $SPLUNK_HOME/etc/system/local/authentication.conf manually to comment out or remove getUserInfoTtl, scriptFunctions, scriptPath, scriptSecureArguments, and scriptTimeout, then use $SPLUNK_HOME/bin/splunk restart to get back to the login page.

Additional Warning: Splunk Support does not support any of the following (they won't even try; they'll direct you to your Account Team), so if anything happens you can curse my name, but I take no responsibility etc. etc. Still, this is the only way anyone (including Splunk's own documentation, see above) has mentioned to deal with Azure's lack of AQR support.

Azure Prerequisites:

If you have not already, go to portal.azure.com and under Azure Active Directory > App Registrations create an app for your Splunk instance. Please see Splunk's documentation regarding setting up SAML -- but be sure to download the certificate and XML file from Azure! That XML file can be uploaded into Splunk's SAML Configuration page to auto-populate almost everything.

For those without an Azure API key, we'll use the Client Secret method. In portal.azure.com, where SAML was configured for Splunk (Azure Active Directory > App Registrations > All Applications > search for your app name here):

- Ask your Azure Admin to create a Client Secret under "Certificates & Secrets".
- Ask your Azure Admin to then add the "Microsoft Graph" API permissions User.Read.All (vital), Group.Read.All (unconfirmed if needed), and GroupMember.Read.All (unconfirmed if needed) under "API Permissions", then provide Admin Consent on the same page (essentially clicking a button that applies these changes).

For those with an Azure API key, unfortunately I can't provide a lot of detail below.
Enterprise Security Command Line:

- For those with an Azure API Key (may require special permission to request one), use the provided sample script at $SPLUNK_HOME/share/splunk/authScriptSamples/azureScripted.py (confirmed for Splunk 8.1).
- For those with a Client Secret (assigned to your Splunk SAML application, much easier to acquire), use the script from https://gist.github.com/vprasanth87/5bd091f0eb24c4919b938f0528ee93bc
- Place a copy of one of the above scripts into $SPLUNK_HOME/etc/auth/scripts

Have a Web Proxy/Gateway? For those with web proxies not using a global proxy value by some other means:

- Open the $SPLUNK_HOME/etc/auth/scripts/azureScripted.py file in a file editor.
- Find this line near the top of the script: USER_ENDPOINT = 'https://graph.microsoft.com/v1.0/users/'
- Below that, add the following lines: proxies = { "http" : "http://IPADDRESS:PORT", "https" : "http://IPADDRESS:PORT" }
- Search the .py for any reference to "requests" and add "proxies=proxies" as an argument. Example: access_token_response = requests.post(token_url, data=payload, verify=False, allow_redirects=False, headers=headers, proxies=proxies)

Want a Debug Log for the Azure Script's Execution? For those wanting to create a log file so you can see what the script is doing and where it's failing:

- Between the "USER_ENDPOINT =" line and the "def getUserInfo(args):" line, add: logging.basicConfig(filename='azureScripted.log', level=logging.DEBUG)
- Then throughout the script you can add logging.LEVEL() calls to see what is happening at any given point and isolate where the script stops (errors out). Example: logging.info('Header: %s', access_token_response.headers) Example: logging.debug('Token: %s', tokens['access_token'])

Implementing the Azure SAML AQR Workaround:

For Client Secret users, edit the azureScripted.py file and make the following changes (I haven't tested that they're all mandatory, I just know they work) within the "def getUserInfo(args):" definition:

- Above the line token_url = "https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token" add:
  azure_tenant = args['tenantId']
  client_id = args['clientId']
  client_secret = args['clientSecret']
- Replace the token_url line with: token_url = "https://login.microsoftonline.com/{}/oauth2/v2.0/token".format(azure_tenant)
- Comment out or remove the following lines:
  client_id = '${AZURE_SPLUNK_SSO_APP_ID}'
  client_secret = '{AZURE_SSO_APP_API_KEY}'

Then in the Enterprise Security Web UI, go to Settings > Authentication Methods and click the SAML Settings link. Click the SAML Configuration button in the top right. Scroll down until you see the "Authentication Extensions" section header, and click the arrow to expand the section. Then set:

- Script Path: azureScripted.py
- Script Timeout: if left blank it will default to 10s, which seems to be sufficient
- Get User Info Time to Live: if left blank it will default to 3600s, which seems to be sufficient
- Script Functions: getUserInfo
- Script Secure Arguments: enter the key name in the left column and the value in the right column. For Client Secret, the keys are clientId, clientSecret, and tenantId. For Azure API, the key is azureKey.

Click Save.

Validating

- Open the Enterprise Security app and go to Incident Review.
- Click the down arrow next to any Notable entry you'd like to test with, then click Run Adaptive Response.
- Choose a Response and fill it out, then click Run.
- If the script is working as intended, you should get a message about the response being successful instead of a complaint about the token.
If that works, wait over 1 hour with your session still open/active and verify it doesn't kick you out, or at least not in a way that requires anything harder than clicking 'Refresh.' This depends on other settings in your environment, but as long as you don't get anything weird like it trying to launch the 'None' app (which doesn't exist) or throwing an HTTP 500 response, you should be good to go.
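For reference, a hedged sketch of roughly what these Authentication Extension settings end up looking like in $SPLUNK_HOME/etc/system/local/authentication.conf once saved through the UI. The key names come from the rollback warning above, but the exact stanza and the scriptSecureArguments separator format are assumptions (and the secure values get encrypted by Splunk after a restart):

[saml]
scriptPath = azureScripted.py
scriptFunctions = getUserInfo
scriptTimeout = 10
getUserInfoTtl = 3600
scriptSecureArguments = clientId:<your-app-client-id>;clientSecret:<your-client-secret>;tenantId:<your-tenant-id>

Knowing what these keys look like on disk makes the worst-case rollback (commenting them out and restarting) much easier.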
I need help with how I can compare one day's security metric to another day's, and also generate a metric report that shows low and high and compares it to the security metric in the spreadsheet. Below is the Splunk query I have:

index=security sourcetype="Computers" "Computer Status"=Enabled earliest=-12mon@mon
| bin _time span=1day
| dedup _time sAMAccountName
| timechart span=1day count
| stats avg(count) AS avg stdev(count) AS stdev min(count) AS min max(count) AS max latest(count) AS latest_count
| eval min_thres=5000, max_thres=7500
| eval alert=if((latest_count<min_thres OR latest_count>max_thres), 1, 0)
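A hedged sketch of the day-over-day piece, reusing the thresholds from the query above: keep one row per day instead of collapsing with stats, use delta to compute the change from the previous day, and classify each day as low/high/normal. Comparing against the spreadsheet itself would mean exporting it as a CSV lookup and joining it in with | lookup.

index=security sourcetype="Computers" "Computer Status"=Enabled earliest=-12mon@mon
| bin _time span=1day
| dedup _time sAMAccountName
| timechart span=1day count
| delta count as day_over_day
| eval status=case(count<5000, "low", count>7500, "high", true(), "normal")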
I have an index called firewall with sourcetypes for Palo Alto, Checkpoint, and Fortinet devices. The configuration was carried out according to the documentation's recommendations regarding the sourcetype and add-on for each vendor. The "action"-type fields that indicate an accepted connection have different names depending on the brand of the device: allowed, accept, Authorize, pass. Taking into account that "accept" and "allowed" mean the same thing, should I normalize them, or are they already recognized as-is by the correlation searches that Splunk ES has created?
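ES correlation searches generally read from the CIM data models, which expect a small normalized value set for action (allowed/blocked), so normalizing is usually worthwhile unless the vendor add-on already does it for you. A hedged sketch of a permanent search-time normalization as a calculated field in props.conf -- the stanza name is a placeholder; apply one stanza per sourcetype that needs it:

# props.conf (search head, or the add-on's local directory)
[your:firewall:sourcetype]
EVAL-action = case(lower(action) IN ("allowed","accept","authorize","pass"), "allowed", lower(action) IN ("blocked","deny","drop"), "blocked", true(), lower(action))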
Hi Splunkers, Happy Holidays!!! I am trying to create a dashboard for log volume monitoring. I am using the ML Toolkit and need help with my search:

| tstats count WHERE index=index_name BY index _time span=1h
| eval date=strftime(_time,"%m/%d/%Y")
| lookup Paid_Holidays.csv holiday_date as date OUTPUT is_holiday
| eval day_of_week = strftime(_time,"%A")
| where NOT (day_of_week="Saturday" OR day_of_week="Sunday")
| where NOT is_holiday=1
| `forecastviz(245, 240, "count", 93)`
| eval isOutlier = if(prediction!="" AND 'count' != "" AND ('count' < 'lower95(prediction)' OR 'count' > 'upper95(prediction)'), 1, 0)
| where isOutlier=1
| eval today = relative_time(now(),"-1h@h")
| where isOutlier=1 AND _time >= today
| where count < 'lower95(prediction)'
| fields - isOutlier

The last few lines (the outlier and time filtering) are where I am having the issue. I need to alert only when the count is less than the predicted value in the next hour as well. The current search alerts frequently, and I need to constrain it so it alerts only when the count stays low continuously into the following hour. Can someone help me with my query?
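One hedged way to require two consecutive low hours instead of one, assuming the forecast output has one row per hour: compute a "low" flag per hour, then use streamstats over a two-row window and fire only when both rows are low. This would replace everything after the `forecastviz` line:

| eval low = if(prediction!="" AND 'count'!="" AND 'count' < 'lower95(prediction)', 1, 0)
| sort 0 _time
| streamstats window=2 sum(low) as consecutive_low
| where consecutive_low=2 AND _time >= relative_time(now(),"-1h@h")

With the alert set to trigger when the number of results is greater than 0, it only fires when the most recent hour and the hour before it are both below the lower prediction bound.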
Splunk version: 8.1.1
OS: CentOS 7.9

My indexes.conf file looks like this:

[default]
tsidxWritingLevel = 4

[mynewindex]
coldPath = $SPLUNK_DB/mynewindex/colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB/mynewindex/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/mynewindex/thaweddb
tsidxWritingLevel = 4

What is my goal? I want to ensure that tsidxWritingLevel = 4 is actually set up and working. So I ran an API query in Splunk:

| rest /services/data/indexes

But the tsidxWritingLevel field is empty all the way down. Why is that? How can I check that Splunk is actually using tsidxWritingLevel = 4?

Output from btool:

[root@localhost local]# /opt/splunk/bin/splunk btool indexes list --debug | grep local
/opt/splunk/etc/system/local/indexes.conf journalCompression = zstd
/opt/splunk/etc/system/local/indexes.conf tsidxWritingLevel = 4
/opt/splunk/etc/system/local/indexes.conf [_internal]
/opt/splunk/etc/system/local/indexes.conf journalCompression = zstd
/opt/splunk/etc/system/local/indexes.conf tsidxWritingLevel = 4
/opt/splunk/etc/system/local/indexes.conf journalCompression = zstd
/opt/splunk/etc/system/local/indexes.conf tsidxWritingLevel = 4
/opt/splunk/etc/system/local/indexes.conf journalCompression = zstd
/opt/splunk/etc/system/local/indexes.conf tsidxWritingLevel = 4
/opt/splunk/etc/system/local/indexes.conf journalCompression = zstd
/opt/splunk/etc/system/local/indexes.conf tsidxWritingLevel = 4
/opt/splunk/etc/system/local/indexes.conf journalCompression = zstd
/opt/splunk/etc/system/local/indexes.conf tsidxWritingLevel = 4
/opt/splunk/etc/system/local/indexes.conf journalCompression = zstd
/opt/splunk/etc/system/local/indexes.conf tsidxWritingLevel = 4
/opt/splunk/etc/system/local/indexes.conf [default]
/opt/splunk/etc/system/local/indexes.conf journalCompression = zstd
/opt/splunk/etc/system/local/indexes.conf tsidxWritingLevel = 4
/opt/splunk/etc/system/local/indexes.conf journalCompression = zstd
/opt/splunk/etc/system/local/indexes.conf tsidxWritingLevel = 4
/opt/splunk/etc/system/local/indexes.conf journalCompression = zstd
/opt/splunk/etc/system/local/indexes.conf tsidxWritingLevel = 4
/opt/splunk/etc/apps/search/local/indexes.conf [mynewindex]
/opt/splunk/etc/apps/search/local/indexes.conf coldPath = $SPLUNK_DB/mynewindex/colddb
/opt/splunk/etc/apps/search/local/indexes.conf enableDataIntegrityControl = 0
/opt/splunk/etc/apps/search/local/indexes.conf enableTsidxReduction = 0
/opt/splunk/etc/apps/search/local/indexes.conf homePath = $SPLUNK_DB/mynewindex/db
/opt/splunk/etc/system/local/indexes.conf journalCompression = zstd
/opt/splunk/etc/apps/search/local/indexes.conf maxTotalDataSizeMB = 512000
/opt/splunk/etc/apps/search/local/indexes.conf thawedPath = $SPLUNK_DB/mynewindex/thaweddb
/opt/splunk/etc/apps/search/local/indexes.conf tsidxWritingLevel = 4
/opt/splunk/etc/system/local/indexes.conf journalCompression = zstd
/opt/splunk/etc/system/local/indexes.conf tsidxWritingLevel = 4
/opt/splunk/etc/system/local/indexes.conf journalCompression = zstd
/opt/splunk/etc/system/local/indexes.conf tsidxWritingLevel = 4
/opt/splunk/etc/system/local/indexes.conf journalCompression = zstd
/opt/splunk/etc/system/local/indexes.conf tsidxWritingLevel = 4
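The empty column may simply mean that version's data/indexes REST handler doesn't surface this setting (hedged -- I haven't verified this against 8.1.1 specifically). A sketch of an alternative check against the generic configs endpoint, which reads the merged configuration directly:

| rest /services/configs/conf-indexes/mynewindex splunk_server=local
| fields tsidxWritingLevel

Since btool already shows tsidxWritingLevel = 4 winning at every layer, the setting is almost certainly in effect; also note that it only applies to newly written .tsidx files, so buckets created before the change keep their old level.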
Hi Team/Kamlesh, @kamlesh_vaghela Below is my JSON object. I want to find the count of exception_type values equal to "Application Exception" in the last 1 hour (exception_type is a child element of the error object) and display the count in a table. I am very new to Splunk, so please help me with the Splunk query. THANK YOU SO MUCH in advance.

{
  "class_name": "com.verizon.vsib.addressval.services.CameoClient",
  "VSAD_ID": "GYEV",
  "True_ip": "10.118.142.156",
  "log_message": "Missing Company Code",
  "server_port": "443",
  "error": {
    "exception_type": "Application Exception",
    "exception_code": "P0106",
    "exception_details": "Missing Company Code"
  },
  "user_agent": "PostmanRuntime/7.25.0",
  "@timestamp": "2020-12-24T05:41:18.181Z",
  "log_time_stamp": 1608788478110,
  "status_code": 500,
  "api_url": "https://vsib-dev.ebiz.verizon.com/addressValidation/validateAddress?null",
  "log_level": "info",
  "server_host": "10.118.143.141",
  "app_environment": "dev",
  "@version": "1",
  "requestId": "TestSplunk-17",
  "vast_id": 25439,
  "log_date": "",
  "logger_class": "com.verizon.vsib.addressval.services.CameoClient",
  "time": 1608788478.181,
  "app_name": "VSIB",
  "function_name": "pushApplicationError"
}

Regards, Murali P.
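A hedged sketch, with index and sourcetype as placeholders: if the events are ingested as JSON, Splunk may already extract the nested field as error.exception_type, but spath makes the extraction explicit either way.

index=your_index sourcetype=your_sourcetype earliest=-1h
| spath path=error.exception_type output=exception_type
| search exception_type="Application Exception"
| stats count as "Application Exception Count"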
Hello All, I need to construct an alert that fires whenever there is a change in my query result. My query and result are as follows.

Query:
sourcetype="my_sourcetype" | dedup indicator | table indicator

Result:
indicator
a
b

So I need to get an alert whenever my set [indicator(a,b)] changes, i.e. an addition [indicator(a,b,c)] or a deletion [indicator(a)] happens. Please suggest an approach for the above requirement.
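One hedged pattern for this: snapshot the current set into a lookup (the file name below is a placeholder), then alert when the live set and the snapshot disagree. First, save the baseline once:

sourcetype="my_sourcetype" | dedup indicator | table indicator | outputlookup indicator_baseline.csv

Then schedule this as the alert search, triggering when the result count is greater than 0; any indicator present in only one of the two sets is an addition or a deletion (remember to refresh the baseline after each accepted change):

sourcetype="my_sourcetype" | dedup indicator | table indicator | eval source_set="current"
| append [| inputlookup indicator_baseline.csv | eval source_set="baseline"]
| stats values(source_set) as sets by indicator
| where mvcount(sets) < 2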
Looking for a Splunk SPL query to monitor memory utilization of Splunk servers.
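A hedged sketch using the introspection data Splunk collects about its own hosts. It assumes the _introspection index is populated, which requires the introspection inputs to be enabled and, for remote servers, the data to be forwarded:

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval mem_used_pct = round('data.mem_used' / 'data.mem' * 100, 2)
| timechart span=5m avg(mem_used_pct) by host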
Hello Members, I need some suggestions and help. As per my screenshot, you can see my events were skipped from 7:17 PM to 8:17 PM, and because of that a user complained about not getting any alerts during that period. Please suggest how I can further troubleshoot this issue and find out the reason.
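Assuming "skipped" here refers to scheduled searches (the screenshot isn't visible), the scheduler logs in _internal record a reason for every skip, which is usually the fastest way to narrow this down. A hedged starting point:

index=_internal sourcetype=scheduler status=skipped earliest=-24h
| stats count by savedsearch_name reason
| sort - count

Common reasons include hitting the maximum concurrent search limit or the maximum number of concurrent instances of the same search.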
Hi Team, Below is my JSON object. I want to read the error object's subfield exception_type and display the count over the last 1 hour in table format where exception_type="Application Exception". Please suggest the Splunk query; I am very new to Splunk. Thank you so much in advance.

{
  "class_name": "com.verizon.vsib.addressval.services.CameoClient",
  "VSAD_ID": "GYEV",
  "True_ip": "10.118.142.156",
  "log_message": "Missing Company Code",
  "server_port": "443",
  "error": {
    "exception_type": "Application Exception",
    "exception_code": "P0106",
    "exception_details": "Missing Company Code"
  },
  "user_agent": "PostmanRuntime/7.25.0",
  "@timestamp": "2020-12-24T05:41:18.181Z",
  "log_time_stamp": 1608788478110,
  "status_code": 500,
  "api_url": "https://vsib-dev.ebiz.verizon.com/addressValidation/validateAddress?null",
  "log_level": "info",
  "server_host": "10.118.143.141",
  "app_environment": "dev",
  "@version": "1",
  "requestId": "TestSplunk-17",
  "vast_id": 25439,
  "log_date": "",
  "logger_class": "com.verizon.vsib.addressval.services.CameoClient",
  "time": 1608788478.181,
  "app_name": "VSIB",
  "function_name": "pushApplicationError"
}
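This is the same shape of problem as the similar post above; a hedged variant that also breaks the count out by exception_code, again with placeholder index and sourcetype:

index=your_index sourcetype=your_sourcetype earliest=-1h
| spath path=error.exception_type output=exception_type
| spath path=error.exception_code output=exception_code
| search exception_type="Application Exception"
| stats count by exception_type exception_code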
Why is the Splunk UBA UI not starting in a cluster?

[caspida@xxxxxxx bin]$ ./Caspida status
IP - xxxxxxx
Mon Dec 28 10:38:48 IST 2020: Running: ./Caspida status
checking status of: caspida-jobmanager
Caspida Job Manager is running [ OK ]
checking status of: caspida-ui
caspida-ui is dead and pid file exists [FAILED]
caspida-ui status: return value: 1
[caspida@xxxxxxx bin]$ ./Caspida start-service caspida-ui
IP - xxxxxxx
Mon Dec 28 10:39:29 IST 2020: Running: ./Caspida start-service caspida-ui