All Topics

Hello, after installing an app in Splunk Cloud, clicking on the app shows only a loading icon. Can anyone help me out, please? Thank you.
I have displayed two sample XML files below. I have to check whether an XML file has <customer-job-id> and <submission> tags. If both tags are present we take the file; otherwise we leave the XML file and move on to the next one.

Sample XML file (has both tags):

<?xml version="1.0" encoding="UTF-8"?>
<message>
  <customer-job-id>cust1</customer-job-id>
  <submission>
    <job>
    </job>
  </submission>
</message>

Sample XML file (missing <submission>):

<?xml version="1.0" encoding="UTF-8"?>
<message>
  <customer-job-id>cust2</customer-job-id>
</message>
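If each XML file is indexed as a single event, one way to keep only the files that carry both tags is to test for them with spath. The index and sourcetype names below are placeholders — this is a sketch, not a tested solution:

```
index=my_xml_index sourcetype=my_xml_sourcetype
| spath path=message.customer-job-id output=customer_job_id
| spath path=message.submission output=submission
| where isnotnull(customer_job_id) AND isnotnull(submission)
```

Events missing either tag get a null field from spath and are dropped by the where clause.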
Problem: I have a dashboard with an element that presents a result for a specific date. To make it as easy as possible for the user, I would like a sticky calendar right beside the results, so that the user can simply click another day to get the results for that specific day. Instead of using external JS libraries, is it possible to reuse the internal calendar that is already there, without the whole shebang around it? Looking at the included image, I would like just the calendar part as a sticky calendar in the dashboard, without the whole "select time" framework around it.

Question: Is this possible? How? If not, any suggestions on how to achieve similar results without using external libraries? JS and/or CSS in the XML file is OK.

Photos to better explain what I am thinking about: take just this calendar (reusing code already included in Splunk?) and use it in a dashboard like this. The goal is that the user only clicks on the date and then gets the result for that day (this is just a mock-up to illustrate what I am looking for).
I'm creating a query using 4 sourcetypes and want to search across a different time range for each of them. For example:

| multisearch
    [search index=idx_A sourcetype=a earliest=-30d latest=@d]
    [search index=idx_A sourcetype=b earliest=-24h@h]
    [search index=idx_A sourcetype=c earliest=-24h@h]
    [search index=idx_A sourcetype=d earliest=-24h@h]

I saw these two solutions but they didn't really help in my case:
https://community.splunk.com/t5/Splunk-Search/Is-it-possible-to-use-earliest-twice-in-one-search/td-p/198386
https://community.splunk.com/t5/Splunk-Search/How-to-search-for-two-source-types-each-in-different-time-ranges/m-p/141215

I've tried using both multisearch and join. Is there a way I can get the entire result set?
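For reference, multisearch does accept a per-subsearch time range, but the subsearches must contain only streaming commands. When that restriction gets in the way, append is a common alternative — a rough sketch using the index and sourcetypes from the question (subject to append's subsearch result and runtime limits):

```
index=idx_A sourcetype=a earliest=-30d latest=@d
| append [ search index=idx_A sourcetype=b earliest=-24h@h ]
| append [ search index=idx_A sourcetype=c earliest=-24h@h ]
| append [ search index=idx_A sourcetype=d earliest=-24h@h ]
```

Each appended leg keeps its own earliest/latest, and the combined events can then be aggregated in one pipeline.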
Hi everybody, I need a little help with statistics. I use this search to list all Calling_Station_IDs. In the example table below I can find two different values of RadiusFlowType for some IDs, and one value for others:

mysearch
| fields RadiusFlowType, Calling_Station_ID
| dedup Calling_Station_ID, RadiusFlowType
| table Calling_Station_ID RadiusFlowType

Calling_Station_ID   RadiusFlowType
A1                   Wired802
A1                   MAB
A2                   Wired802
A2                   MAB
A3                   MAB
A4                   MAB
A5                   Wired802

So I want to get a distinct count of how many Calling_Station_IDs do Wired802, and that number as a percentage of ALL Calling_Station_IDs. If I do it like this, I can count the number of Calling_Station_IDs which do Wired802, which in this case would be three:

mysearch
| fields RadiusFlowType, Calling_Station_ID
| dedup Calling_Station_ID, RadiusFlowType
| table Calling_Station_ID RadiusFlowType
| stats count(eval(RadiusFlowType="Wired802_1x")) AS Wired_Clients

So that is fine 🙂 But I also want to calculate the total of all unique Calling_Station_IDs (in this example it would be 5), so that I can calculate the percentage of clients who do either one or both flow types. Somehow I fail to count the total in the same search and output it in a table. How can I get an output in a table like this?

Total distinct count Calling_Station_ID | Total Wired Clients | Percentage_wired | MAB Only Clients | Percentage_mab
5                                       | 3                   | 3/5*100          | 2                | 2/5*100

Thanks
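One way to get all of these numbers in a single search is to collect each client's flow types with stats values() and classify per client before a final aggregation. This is a sketch; it assumes the wired value is literally "Wired802" (the post mixes "Wired802" and "Wired802_1x", so adjust the string to match the real data):

```
mysearch
| stats values(RadiusFlowType) as flows by Calling_Station_ID
| eval is_wired=if(isnotnull(mvfind(flows, "Wired802")), 1, 0)
| eval is_mab_only=if(mvcount(flows)=1 AND flows="MAB", 1, 0)
| stats count as total_clients sum(is_wired) as wired_clients sum(is_mab_only) as mab_only_clients
| eval Percentage_wired=round(wired_clients/total_clients*100, 1)
| eval Percentage_mab=round(mab_only_clients/total_clients*100, 1)
```

The first stats gives one row per client with a multivalue list of its flow types; the second stats collapses those rows into the totals, from which the percentages follow.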
I need pricing and feature comparison details on a basic installation of Splunk from package versus Splunk Enterprise, etc. I just want to configure and install Splunk on my ec
I have a lookup table with scheduled tasks called scheduled_tasks, with columns Command and Arguments. I need to do a search that only displays results where the Arguments and Command fields in events DO NOT contain a value in the scheduled_tasks lookup table. Where is it going wrong? Thank you! My query is:

(index IN (index1, index2)) EventCode=4698 NOT [|inputlookup scheduled_tasks | fields Arguments, Command]
| fillnull Arguments value="-"
| rex field=_raw "(?P<Command>((?<=\bCommand>).*(?=<)))"
| rex field=_raw "(?P<Arguments>((?<=\bArguments>).*(?=<)))"
| table Command, Arguments
| dedup Command, Arguments

My lookup table:
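One likely culprit: the NOT [...] subsearch is applied at search time, before the rex commands have created the Command and Arguments fields, so there is nothing for the exclusion to match against. A hedged sketch of the reordered search — extract first, then filter — keeping the original field names:

```
(index IN (index1, index2)) EventCode=4698
| rex field=_raw "(?P<Command>((?<=\bCommand>).*(?=<)))"
| rex field=_raw "(?P<Arguments>((?<=\bArguments>).*(?=<)))"
| fillnull value="-" Arguments
| search NOT [| inputlookup scheduled_tasks | fields Command, Arguments]
| table Command, Arguments
| dedup Command, Arguments
```

The subsearch expands to a set of (Command="..." Arguments="...") pairs, which only makes sense once those fields exist on the events.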
Hello, in the search below, which displays a timechart, I stats events excluding the weekend. This search displays events for the last 5 days. What I would like in my timechart is to not display the line chart for Saturday and Sunday, because they are equal to 0. So, for example, instead of having Wednesday, Thursday, Friday, Saturday and Sunday in the timechart, I need to display Wednesday, Thursday, Friday, Monday and Tuesday (5 days). Is it possible to do this, please?

`CPU`
| bin _time span=5h
| eval slottime = strftime(_time, "%H%M")
| eval week = strftime(_time, "%w")
| where (slottime >= 900 AND slottime <= 1700) AND (week >= 1 AND week <= 5)
| eval cpu_range=case(process_cpu_used_percent>0 AND process_cpu_used_percent<=20,"0-20",
    process_cpu_used_percent>20 AND process_cpu_used_percent<=40,"20-40",
    process_cpu_used_percent>40 AND process_cpu_used_percent<=60,"40-60",
    process_cpu_used_percent>60 AND process_cpu_used_percent<=80,"60-80",
    process_cpu_used_percent>80 AND process_cpu_used_percent<=100,"80-100")
| stats avg(process_cpu_used_percent) as process_cpu_used_percent by host, _time, cpu_range, SITE
| timechart span=1d dc(host) by cpu_range
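One possible approach, sketched below: let timechart produce all the days, then drop the rows whose day-of-week is Saturday or Sunday. Caveat: on a continuous time axis the chart may still render gaps where the weekend rows were removed, so this is a starting point rather than a guaranteed fix:

```
`CPU`
| bin _time span=5h
... (existing eval/where/stats pipeline from the question) ...
| timechart span=1d dc(host) by cpu_range
| eval dow=strftime(_time, "%w")
| where dow!="0" AND dow!="6"
| fields - dow
```

If the gaps are unacceptable, charting over a string day label (eval day=strftime(_time, "%a %d")) instead of _time gives a categorical x-axis with only the kept days.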
Hi Team, I am trying to get violations data from AppDynamics via APIs. Currently I am using this API: https://appdynamics.com/controller/rest/applications/application_name/problems/healthruleviolations?&time-range-type=BEFORE_NOW&duration-in-mins=1440&output=json&rollup=false. The issue is that it only gets health rule events, and I want all event data, including errors, slow transactions, synthetic performance and all other events, but I didn't find a specific API to do that. Could you please help me get all event data? Thanks in advance.
Hello! I am using Ansible to automate Splunk operations. I have a handler that simply runs:

- name: restart splunk
  shell: "{{ splunk_home }}/bin/splunk restart"
  become: yes
  become_user: "{{ splunk_nix_user }}"

Occasionally (1 out of every 5 runs), this command returns a non-zero exit code with the following output:

[...]
All preliminary checks passed.
Starting splunk server daemon (splunkd)...
Done
Waiting for web server at https://127.0.0.1:8443 to be available.....
WARNING: web interface does not seem to be available!

This causes my Ansible Tower job to fail and send out failure notifications to the team. Seconds later, the web interface is available and all is well (except for my pride, because of the failure notification). Is there any way to have Splunk wait longer when checking for the web UI to come up? Or possibly a way for Ansible to treat this exit code as success? Any help is much appreciated.
Trying to do a lookup at ingest time according to https://docs.splunk.com/Documentation/Splunk/8.1.3/Data/IngestLookups and I can't get it to work. If I do a simple transform like

[my-test-eval]
INGEST_EVAL = test=spath(_raw,"Event.System.Computer")

I properly get the test field populated with a value extracted from the event (as you probably guessed, it's a typical Windows XML-formed event). But if I want to use the value retrieved from the event to perform a lookup... sorry, won't happen. And I have no clue why.

[my-test-eval]
INGEST_EVAL = test=coalesce(json_extract(lookup("test.csv", json_object("Key", spath(_raw,"Event.System.Computer")), json_array("Value")), "Value"), "default")

There is no test field in the ingested event — not even with the "default" value. I tried giving the lookup name as defined in transforms.conf as well as the CSV filename itself. I tried putting the lookup in the app context as well as in system/local. Nothing works. To make things more interesting: if I made a mistake in defining the lookup (like giving a wrong column name), I'd get an error in splunkd.log, so it would be obvious that something is not right. But the problem is I don't get any errors; the transform therefore should be working, but it isn't. So I'm completely stuck here. How do I debug this thing?
Hi, during validate_input I want to check whether the request is OK. But when I add a data input and save, this error shows:

I need help with the error.

Thanks, Emily
Hi, I use regex to extract fields. My query is:

| rex field=_raw "(?P<Command>((?<=\bCommand>).*(?=<)))"
| rex field=_raw "(?P<Arguments>((?<=\bArguments>).*(?=<)))"
| table Task_Name, ComputerName, Command, _time, Arguments
| dedup Task_Name, ComputerName, Command, _time, Arguments

How can I return results if the Arguments field does not exist? For example:

...some xml log....
<Command>C:\Windows\System32\wevtutil.exe</Command>
<Arguments>sl Microsoft-Windows-PrintService/Operational /e:true</Arguments>
...some xml log....

is OK, and

...some xml log....
<Command>C:\Windows\System32\wevtutil.exe</Command>
...some xml log....

is not OK.
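If the goal is to keep events where the <Arguments> tag is absent, the missing field can be filled with a placeholder after the rex, so table and dedup treat every event uniformly. A sketch of that variant of the query:

```
| rex field=_raw "(?P<Command>((?<=\bCommand>).*(?=<)))"
| rex field=_raw "(?P<Arguments>((?<=\bArguments>).*(?=<)))"
| fillnull value="-" Arguments
| table Task_Name, ComputerName, Command, _time, Arguments
| dedup Task_Name, ComputerName, Command, _time, Arguments
```

Alternatively, | where isnull(Arguments) after the rex would show only the events that lack the tag.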
I have a requirement where I have to create a dashboard with an input field called account ID; after entering the account ID, it should give the desired results.

I have two payloads: one payload contains accountId along with transId, and the other payload contains transId, merchant name, etc.

My first query fetches the transId after the account ID is given as input. My second query needs that transId to search the other fields for the dashboard.

How can I achieve this? Please, someone help. In the end, I want to use the output of the first query as input to the second query, and both queries need to be in the same place.
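The usual pattern for this is a subsearch: the inner search resolves the account ID token to transId values, and the outer search uses them. All index, sourcetype and field names below are placeholders for illustration; $account_id$ would be the dashboard's text-input token:

```
index=my_index sourcetype=payload_two
    [ search index=my_index sourcetype=payload_one accountId=$account_id$
      | fields transId ]
| table transId, merchantname
```

The subsearch expands to (transId="..." OR transId="..."), so the second query is filtered to exactly the transactions found by the first — both in one search, as required.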
I am preparing to upgrade a distributed and clustered Splunk Enterprise install from 7.3.3 to 8.1.2, but the install guides are not clear on the correct method. My current plan is to upgrade in the following order:

1. Deployment servers (primary and standby)
2. Cluster masters / license masters (primary and standby)
3. Search head cluster
4. 2-site indexer cluster

and afterward, all the HFs and UFs (many of each).

Where I'm not clear is the SH/IDX process. SH: rolling or all at once? IDX: one site at a time, or all at once? I have found documentation that says we can do a rolling upgrade of SHs and can do IDXs one site at a time, but other documentation implies I have to do all SHs and IDXs in one big hit (because 7.3.3 > 8.1.2 is more than a single version jump). My colleagues are in conflict over which is correct. Any clues to the real answer here? Thanks for any help.
index=a0_payservutil_generic_app_audit_prd sourcetype="npp:pom:stdout" eventCode="fundsReservationManualInterventionNeeded"

index=prod_payments OR index=a0_payservutil_generic_app_audit_prd sourcetype="npp:cbis:techevent" $selected_ID$
| rex mode=sed field=_raw "s/\x1f/|/g"
| eval fields=split(_raw,"instructionReceiptIdentification=")
| eval RN=mvindex(fields,1)
| eval Recp=split(RN, "'")
| eval ReceiptNumber=mvindex(Recp, 2)
| eval EntityID=mvindex(Recp, 6)
| dedup ReceiptNumber
| table _time, ReceiptNumber, EntityID

I have the two queries above. The first query identifies the entity ID that is flagged 'manualInterventionNeeded', and the second query gets the receipt number using that entity ID. EntityID in the first query and instructionReceiptIdentification in the second query have the same values. By matching these two values, I want to get one query to create an alert. Could you please help me with this?
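Assuming EntityID is an extracted field in the first sourcetype (an assumption — adjust to the real field name), one way to combine the two is to make the first query a subsearch that feeds EntityID values into the second. A sketch reusing the parsing from the second query:

```
index=prod_payments OR index=a0_payservutil_generic_app_audit_prd sourcetype="npp:cbis:techevent"
| rex mode=sed field=_raw "s/\x1f/|/g"
| eval fields=split(_raw, "instructionReceiptIdentification=")
| eval RN=mvindex(fields, 1)
| eval Recp=split(RN, "'")
| eval ReceiptNumber=mvindex(Recp, 2)
| eval EntityID=mvindex(Recp, 6)
| search [ search index=a0_payservutil_generic_app_audit_prd sourcetype="npp:pom:stdout"
           eventCode="fundsReservationManualInterventionNeeded"
         | fields EntityID ]
| dedup ReceiptNumber
| table _time, ReceiptNumber, EntityID
```

The | search after the evals filters on the calculated EntityID, so the alert would fire only for receipts whose entity ID appears in the manual-intervention events.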
We are trying to deploy the Java Agent in our WildFly 10.1.0, using Java Agent version [Server Agent #21.3.0.32281 v21.3.0]. We have a SaaS controller.

[AD Thread Pool-Global1] 09 Apr 2021 09:29:59,670 INFO ConfigurationChannel - Sending Registration request with: Application Name [SFA], Tier Name [SEVLSUPP], Node Name [guvwdapsfa01], Host Name [GUVWDAPSFA01] Node Unique Local ID [guvwdapsfa01], Version [Server Agent #21.3.0.32281 v21.3.0 GA compatible with 4.4.1.0 rcd7d7317f698cefb9003377c1af0ff3ef81ee922 release/21.3.0]
[AD Thread-Metric Reporter0] 09 Apr 2021 09:30:00,000 WARN MetricHandler - Metric Reporter Queue full. Dropping metrics.
[AD Thread Pool-Global1] 09 Apr 2021 09:30:00,427 WARN ConfigurationChannel - ResponseReadException creating Response Wrapper [AppAgentConfigurationBinaryRequest], : com.singularity.ee.rest.ResponseReadException: java.lang.UnsupportedOperationException: Deserialization not allowed for class com.singularity.ee.controller.api.constants.AgentType
[AD Thread Pool-Global1] 09 Apr 2021 09:30:00,427 WARN ConfigurationChannel - Unable to get AppAgentConfigurationResponse from controller
[AD Thread Pool-Global0] 09 Apr 2021 09:30:04,185 ERROR NetVizAgentRequest - Fatal transport error while connecting to URL [http://127.0.0.1:3892/api/agentinfo?timestamp=0&agentType=APP_AGENT&agentVersion=1.3.0]: org.apache.http.conn.ConnectTimeoutException: Connect to 127.0.0.1:3892 [/127.0.0.1] failed: connect timed out
[AD Thread Pool-Global1] 09 Apr 2021 09:30:46,921 WARN EventGenerationService - The retention queue is at full capacity [5]. Dropping events for timeslice [Fri Apr 09 09:25:00 IST 2021] to accomodate events for timeslice [Fri Apr 09 09:30:00 IST 2021]
[AD Thread-Metric Reporter1] 09 Apr 2021 09:31:00,000 WARN MetricHandler - Metric Reporter Queue full. Dropping metrics.
[AD Thread Pool-Global1] 09 Apr 2021 09:31:00,474 INFO ConfigurationChannel - Container id retrieval enabled: true
[AD Thread Pool-Global1] 09 Apr 2021 09:31:00,475 INFO ConfigurationChannel - No Container ID found in /proc/self/cgroup : does not exist or is not readable
[AD Thread Pool-Global1] 09 Apr 2021 09:31:00,475 INFO ConfigurationChannel - Agent node meta-info thus far: ProcessID;12164;appdynamics.ip.addresses;192.168.76.60;appdynamicsHostName;GUVWDAPSFA01
[AD Thread Pool-Global1] 09 Apr 2021 09:31:00,475 INFO ConfigurationChannel - Detected node meta info: [Name:ProcessID, Value:12164, Name:appdynamics.ip.addresses, Value:192.168.76.60, Name:appdynamicsHostName, Value:GUVWDAPSFA01, Name:supportsDevMode, Value:true]
[AD Thread Pool-Global1] 09 Apr 2021 09:31:00,475 INFO ConfigurationChannel - Sending Registration request with: Application Name [SFA], Tier Name [SEVLSUPP], Node Name [guvwdapsfa01], Host Name [GUVWDAPSFA01] Node Unique Local ID [guvwdapsfa01], Version [Server Agent #21.3.0.32281 v21.3.0 GA compatible with 4.4.1.0 rcd7d7317f698cefb9003377c1af0ff3ef81ee922 release/21.3.0]
[AD Thread Pool-Global1] 09 Apr 2021 09:31:04,779 ERROR NetVizAgentRequest - Fatal transport error while connecting to URL [http://127.0.0.1:3892/api/agentinfo?timestamp=0&agentType=APP_AGENT&agentVersion=1.3.0]: org.apache.http.conn.ConnectTimeoutException: Connect to 127.0.0.1:3892 [/127.0.0.1] failed: connect timed out
[AD Thread-Metric Reporter1] 09 Apr 2021 09:32:00,001 WARN MetricHandler - Metric Reporter Queue full. Dropping metrics.
[AD Thread Pool-Global0] 09 Apr 2021 09:32:01,280 INFO ConfigurationChannel - Container id retrieval enabled: true
[AD Thread Pool-Global0] 09 Apr 2021 09:32:01,280 INFO ConfigurationChannel - No Container ID found in /proc/self/cgroup : does not exist or is not readable
[AD Thread Pool-Global0] 09 Apr 2021 09:32:01,280 INFO ConfigurationChannel - Agent node meta-info thus far: ProcessID;12164;appdynamics.ip.addresses;192.168.76.60;appdynamicsHostName;GUVWDAPSFA01
[AD Thread Pool-Global0] 09 Apr 2021 09:32:01,280 INFO ConfigurationChannel - Detected node meta info: [Name:ProcessID, Value:12164, Name:appdynamics.ip.addresses, Value:192.168.76.60, Name:appdynamicsHostName, Value:GUVWDAPSFA01, Name:supportsDevMode, Value:true]
[AD Thread Pool-Global0] 09 Apr 2021 09:32:01,280 INFO ConfigurationChannel - Sending Registration request with: Application Name [SFA], Tier Name [SEVLSUPP], Node Name [guvwdapsfa01], Host Name [GUVWDAPSFA01] Node Unique Local ID [guvwdapsfa01], Version [Server Agent #21.3.0.32281 v21.3.0 GA compatible with 4.4.1.0 rcd7d7317f698cefb9003377c1af0ff3ef81ee922 release/21.3.0]
[AD Thread Pool-Global1] 09 Apr 2021 09:32:46,921 WARN EventGenerationService - The retention queue is at full capacity [5]. Dropping events for timeslice [Fri Apr 09 09:27:00 IST 2021] to accomodate events for timeslice [Fri Apr 09 09:32:00 IST 2021]
[AD Thread-Metric Reporter1] 09 Apr 2021 09:33:00,000 WARN MetricHandler - Metric Reporter Queue full. Dropping metrics.
[AD Thread Pool-Global1] 09 Apr 2021 09:33:02,081 INFO ConfigurationChannel - Container id retrieval enabled: true
[AD Thread Pool-Global1] 09 Apr 2021 09:33:02,082 INFO ConfigurationChannel - No Container ID found in /proc/self/cgroup : does not exist or is not readable
[AD Thread Pool-Global1] 09 Apr 2021 09:33:02,082 INFO ConfigurationChannel - Agent node meta-info thus far: ProcessID;12164;appdynamics.ip.addresses;192.168.76.60;appdynamicsHostName;GUVWDAPSFA01
[AD Thread Pool-Global1] 09 Apr 2021 09:33:02,082 INFO ConfigurationChannel - Detected node meta info: [Name:ProcessID, Value:12164, Name:appdynamics.ip.addresses, Value:192.168.76.60, Name:appdynamicsHostName, Value:GUVWDAPSFA01, Name:supportsDevMode, Value:true]
[AD Thread Pool-Global1] 09 Apr 2021 09:33:02,082 INFO ConfigurationChannel - Sending Registration request with: Application Name [SFA], Tier Name [SEVLSUPP], Node Name [guvwdapsfa01], Host Name [GUVWDAPSFA01] Node Unique Local ID [guvwdapsfa01], Version [Server Agent #21.3.0.32281 v21.3.0 GA compatible with 4.4.1.0 rcd7d7317f698cefb9003377c1af0ff3ef81ee922 release/21.3.0]
Hello friends!

I am faced with a challenge. I will be uploading two CSV files to Splunk, which represent two different tables of information.

Table A has the information on purchases in a clothing store during a period of time t, with the variables: name of client, date of purchase, agent, and product purchased.

NAME   PRODUCT  AGENT  DATE_PURCHASE
Karen  M_14     X_1    8-25-2021 18:21:28
Jean   M_78     X_3    8-26-2021 18:11:06
Jean   M_71     X_4    8-26-2021 18:21:01
Jean   M_64     X_4    8-27-2021 20:21:59
Keith  M_57     X_4    8-27-2021 20:21:02
Alba   M_50     X_1    8-28-2021 20:21:03
Alba   M_43     X_3    8-29-2021 20:21:04
Alex   M_36     X_2    8-25-2021 20:21:05

Table B has the information on clients who have called the CX SERVICE line of the company during a period of time t, and stores the variables name of client, date of call, and type of call.

NAME   TYPE        DATE_OF_CALL         DATE_PURCHASE
Karen  COMPLAIN    8-26-2021 18:21:28   8-25-2021 18:21:28
Jean   CX_SERVICE  8-27-2021 18:11:06   8-26-2021 18:11:06
Jean   COMPLAIN    8-28-2021 18:21:01   8-26-2021 18:21:01
Jean   CX_SERVICE  8-29-2021 20:21:59   8-27-2021 20:21:59
Keith  CX_SERVICE  8-29-2021 20:21:02   8-27-2021 20:21:02
Alba   COMPLAIN    8-30-2021 20:21:03   8-28-2021 20:21:03
Alex   CX_SERVICE  8-25-2021 21:21:05   8-29-2021 20:21:04

I have to build a table showing, by NAME, the very last product purchased by the customer prior to their very last call to the customer service line. It should include the variables NAME, LAST_PRODUCT_PURCHASED, AGENT, DATE_PURCHASE, TYPE, DATE_OF_CALL, and look something like this:

RESULTS
NAME   LAST_PRODUCT_PURCHASED  AGENT  DATE_PURCHASE        TYPE        DATE_OF_CALL
Karen  M_14                    X_1    8-25-2021 18:21:28   COMPLAIN    8-26-2021 18:21:28
Jean   M_64                    X_4    8-27-2021 20:21:59   CX_SERVICE  8-29-2021 20:21:59
Keith  M_57                    X_4    8-27-2021 20:21:02   CX_SERVICE  8-29-2021 20:21:02
Alba   M_43                    X_3    8-29-2021 20:21:04   COMPLAIN    8-30-2021 20:21:03
Alex   M_36                    X_2    8-25-2021 20:21:05   CX_SERVICE  8-25-2021 21:21:05

For example, the second row shows the desired result: Jean's very last call on the line was TYPE=CX_SERVICE at 8-29-2021 20:21:59, and the very last product she purchased before that call was M_64.

I have been trying to come up with a solution, but I find myself in need of help. I kindly thank you for your assistance, tips, or references to documentation that can help me achieve my results. Thank you so much.

PS: What if we also added a column that counts how many times the customer (NAME) has called prior to their most recent call on the line? Thanks a lot!
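Assuming the two CSVs are loaded as lookups (the names tableA.csv and tableB.csv below are placeholders, as is the date format string), one sketch: reduce Table B to each customer's last call, join it to Table A, keep only purchases before that call, and take the latest purchase per NAME:

```
| inputlookup tableA.csv
| eval ptime=strptime(DATE_PURCHASE, "%m-%d-%Y %H:%M:%S")
| join type=inner NAME
    [ | inputlookup tableB.csv
      | eval call_time=strptime(DATE_OF_CALL, "%m-%d-%Y %H:%M:%S")
      | sort 0 NAME, -call_time
      | dedup NAME
      | fields NAME, TYPE, DATE_OF_CALL, call_time ]
| where ptime < call_time
| sort 0 NAME, -ptime
| dedup NAME
| rename PRODUCT as LAST_PRODUCT_PURCHASED
| table NAME, LAST_PRODUCT_PURCHASED, AGENT, DATE_PURCHASE, TYPE, DATE_OF_CALL
```

For the PS, a prior-call count could be produced inside the subsearch with eventstats count as total_calls by NAME before the dedup — untested here, and subject to the usual join row limits.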
Hi all,

endswith=(notificationType="TestCompleted" OR notificationType="TestCancelled" OR notificationType="TestRejected")

This is part of my query. Normally, when we use booleans like AND and OR, they are shown in orange, but when using them inside endswith, the booleans are grey. Is it right that the colour of the booleans is different? If not, how do I correct it?
Hi All, Splunk Enterprise version 8.1.3, running on CentOS 7. I am trying to install an add-on from Splunkbase, using Splunk Web for that. After I press the install button, I get a failure error message. When pressing the retry button, I get a message that the server is disconnected. Checking the status:

[root@SPLUNKSERVER bin]# ./splunk status
splunkd 117486 was not running.
Stopping splunk helpers...
[ OK ]
Done.
Stopped helpers.
Removing stale pid file... done.

Any idea how to troubleshoot and/or resolve this? Cheers!