
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Has anyone used this extension? Confluence Cloud Audit Log Ingester
We installed it, then set up the inputs for Organization ID, Organization URL, and API key, but are unable to collect any logs.
Organization ID: {orgId created when creating API Key}
Organization URL: https://zfnsconfluence.atlassian.net/ (does this need to be https://zfnsconfluence.atlassian.net/wiki ?)
API Key: created via the Confluence admin settings screen and valid for a year
Do we need to use a different URL or organization ID? I've attached a screenshot of the API key we created. Any help would be appreciated.
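A quick way to narrow this down is to test the key and org ID outside Splunk first. A minimal sketch, assuming the add-on pulls from the standard Atlassian Organizations audit API (the endpoint and header are my assumption, not taken from the add-on's docs):

# Hypothetical sanity check; <orgId> and <API key> are the values from admin.atlassian.com
curl -s -H "Authorization: Bearer <API key>" "https://api.atlassian.com/admin/v1/orgs/<orgId>/events"

If that returns events, the credentials are fine and the problem is on the input side. Note that this API expects an organization-level API key created at admin.atlassian.com, not a per-user Confluence token, which may also mean the Organization URL should be the bare site URL rather than the /wiki one.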
I just want a sanity check to see whether this is possible before I go through the effort. I am currently restricted to searching back <=90 days in Splunk, but I have access to the >90-day data in the source database. To circumvent this restriction, I figure I can convert the old data into a lookup table file, set the time range to "all time", and append to the lookup table. Has anyone tried this before, or is this theoretically possible?
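This is theoretically possible. A minimal sketch of the pattern (the lookup name and field list are placeholders):

index=my_index earliest=-90d@d latest=now
| table _time host message
| outputlookup append=true historical_events.csv

The archive can then be searched with | inputlookup historical_events.csv regardless of the time range picker, since lookups are not time-bounded. Two caveats: append=true will happily write duplicates if the same window is exported twice (dedup on a unique key before appending), and very large CSV lookups get slow, so a summary index is the more conventional solution if one is available.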
Hi Team,
We are ingesting data from syslog to Splunk using the CyberArk app. Data is going on and off even though data is available in /var/log/Cyberark.
Can you suggest what the issue might be?
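One way to tell whether the gaps are in collection or in the data itself is to chart counts over time and check splunkd's file-tailing logs. A sketch (the index name is a placeholder):

| tstats count where index=cyberark by _time span=15m

index=_internal sourcetype=splunkd (component=TailReader OR component=TailingProcessor) "/var/log/Cyberark"

Flat spots in the first search that line up with warnings in the second usually point at the forwarder (file rotation, permissions, or blocked queues) rather than at the CyberArk app.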
Getting the error message "Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'url')" in the console, and it is breaking the APIs currently running. adrum-latest version: v23.3.1.4273. I tried removing the app agent (adrum-latest.js) from index.html and the application works fine. Please suggest an approach to handle these types of errors, as they are causing the application to crash.
Syslog often sends the timestamp in the older format (e.g. Jul 11 14:23:32). Unfortunately, that format has neither a year nor a timezone. I know that Splunk has logic to 'figure it out', but I need it reformatted to the following: YYYY-MM-DDTHH:mm:ss<GMT offset>. Is there a way to accomplish this with INGEST_EVAL or another method? If so, how is it done? This should change the _raw event (that is, this is not a search-time question). Kind of like a mask.
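One index-time approach is a transform that rewrites the leading syslog timestamp in _raw using the _time Splunk has already parsed. A sketch, assuming the sourcetype name is yours and that _time is being extracted correctly before the transform runs:

props.conf
[my_syslog_sourcetype]
TRANSFORMS-rewrite_ts = rewrite_syslog_ts

transforms.conf
[rewrite_syslog_ts]
# Replace "Jul 11 14:23:32" at the start of the event with e.g. "2023-07-11T14:23:32-0400"
INGEST_EVAL = _raw=replace(_raw, "^[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}", strftime(_time, "%Y-%m-%dT%H:%M:%S%z"))

This only applies to new data as it is indexed; existing events keep their original _raw.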
Hello, I am trying to ingest data specifically relating to Chromebook devices that have been enrolled in Google Workspace. Has anyone been able to get this set up successfully? I tried using the Google Workspace Splunk add-on but haven't been able to ingest data related to Chromebooks. I think I am missing something in the configuration settings.
I want to create an alert, and I am writing a search query for it, but I am unable to filter using the time range picker. Since the events contain a Unix timestamp, I tried to convert it, but the time range picker still doesn't filter correctly. Can you help me see what is wrong here?

Query:
index=isilon sourcetype="emc:isilon:rest" "memory threshold"
| eval "Start Time" = strftime('events.start', "%d/%m/%Y %I:%M:%S %p")
| table "Start Time" events.message

Ideally, when I run this query with the time range picker set to June 12th, there should be NO results, but the results contain June 8th events (attachment provided).

Sample events:
{"events": {"devid": 8, "event": 400020001, "id": "8.794044", "lnn": 8, "message": "The SMB server (LWIO) is throttling due to current memory threshold settings. Current memory usage is 90% (23556 MB) and the memory threshold is set to 90%.", "resolve_time": 1686266238, "severity": "critical", "specifier": {"PercentMemoryUsed": 90, "PercentThreshold": 90, "ProcessMemConsumedInMB": 23556, "antime": 1686266290.600042, "devid": 8, "extime": 1686266290.490373, "kmtime": 1686266238.984405, "lnn": 8, "val": 90.0}, "time": 1686266238, "value": 90.0}, "timestamp": "2023-06-12 23:46:57", "node": "0.0.0.0", "namespace": "event"}
{"events": {"devid": 8, "event": 400020001, "id": "8.793138", "lnn": 8, "message": "The SMB server (LWIO) is throttling due to current memory threshold settings. Current memory usage is 90% (23556 MB) and the memory threshold is set to 90%.", "resolve_time": 1686248504, "severity": "critical", "specifier": {"PercentMemoryUsed": 90, "PercentThreshold": 90, "ProcessMemConsumedInMB": 23556, "antime": 1686248570.519368, "devid": 8, "extime": 1686248570.447457, "kmtime": 1686248504.901769, "lnn": 8, "val": 90.0}, "time": 1686248504, "value": 90.0}, "timestamp": "2023-06-12 23:46:57", "node": "0.0.0.0", "namespace": "event"}
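The picker filters on _time, which in this sourcetype comes from the outer "timestamp" key (June 12 here), not from the epoch fields inside events (June 8). If the goal is to filter on the embedded epoch, one sketch is to compare it against the search window explicitly:

index=isilon sourcetype="emc:isilon:rest" "memory threshold"
| addinfo
| where 'events.time' >= info_min_time AND 'events.time' <= info_max_time
| eval "Start Time" = strftime('events.start', "%d/%m/%Y %I:%M:%S %p")
| table "Start Time" events.message

addinfo exposes the picker's boundaries as info_min_time/info_max_time; swap 'events.time' for whichever epoch field ('events.start', 'events.resolve_time', ...) should drive the filter.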
I am trying to create a second panel based on the results of the first panel. There are 3 columns with different values (including null) on which the second panel needs to be populated. I have created 3 different tokens to store the results of each column. Here is how the first panel looks:

ID    Name    Col1      Col2    Col3
111   ABC     null      null    Value1
123   DEF     Value2    null    null
456   GHI     Value3    null    null
789   JKL     null      null    Value4

The second panel should process the results from Col1, Col2, and Col3 and populate the related IDs based on the column values while ignoring the null values.

index=* sourcetype=source Col1="$C1$" OR Col2="$C2$" OR Col3="$C3$"
| fields + ID, Name
| stats count by ID, Name

Currently it searches with just the first value (Value1) and returns results based on that, but I need it to search through all the values (skipping nulls) and display the IDs corresponding to those values. Can someone help me with this?
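One sketch of the pattern, assuming each token is populated from the first panel as a comma-separated, quoted list with nulls already filtered out (the mvjoin step below shows that for Col1; C1/C2/C3 are the token names from the question):

| where isnotnull(Col1)
| stats values(Col1) as C1
| eval C1="\"".mvjoin(C1, "\",\"")."\""

Second panel:

index=* sourcetype=source (Col1 IN ($C1$) OR Col2 IN ($C2$) OR Col3 IN ($C3$))
| stats count by ID, Name

With IN, each token can expand to any number of values, instead of the single value a plain Col1="$C1$" comparison allows.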
Hi, I've been trying to piece together a query that a power user could run to report the GB/day of data indexed for a particular index, without having to access the license usage data (which a power user wouldn't have access to). I've been trying to leverage the dashboards in the Monitoring app, but nothing seems to be quite what I need. I'd like to get the deployment-wide GB/day indexed for a single index, which seems like it should be easy, but so far I haven't been able to crack it. Any suggestions?
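If access to license_usage.log is off the table, an approximation straight from the index itself may be close enough. A sketch (the index name is a placeholder; len(_raw) approximates raw bytes, which is roughly what licensing measures, but not the exact licensed volume):

index=my_index earliest=-30d@d
| eval bytes=len(_raw)
| timechart span=1d sum(eval(bytes/1024/1024/1024)) as GB_indexed

Because it scans the raw events, this can be slow on a busy index; narrowing the time range or sampling may be needed.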
I have the below query and the sample output shown:

index=<search_string> earliest=-30d@d
| timechart span=1m aligntime=earliest count(eval(searchmatch("from"))) as HotCount by TestMQ
| where tonumber(strftime(_time, "%H")) >= 2 AND tonumber(strftime(_time, "%H")) < 4

_time                  TestMQ1  TestMQ2  TestMQ3
2023-07-04 02:00:00    20       30       45
2023-07-04 02:01:00    30       80       20
2023-07-04 02:02:00    50       20       25
and so on...

My requirement is to get the sum of these HotCount values and show it as TotalHotCount in day-wise rows. I tried modifying the query to compute the total and store the results day-wise as below:

index=<search_string> earliest=-30d@d
| timechart span=1m aligntime=earliest count(eval(searchmatch("from"))) as HotCount by TestMQ
| where tonumber(strftime(_time, "%H")) >= 2 AND tonumber(strftime(_time, "%H")) < 4
| eval Day=strftime(_time, "%Y-%m-%d")
| stats sum(HotCount) as TotalHotCount by Day

But this is not giving me any results (blank output). What is missing here, and how can it be modified to achieve the expected results below? Kindly suggest.

Day          TestMQ1  TestMQ2  TestMQ3
2023-07-04   120      170      210
2023-07-05   90       180      120
2023-07-06   150      120      125
and so on...
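The blank output happens because after timechart ... by TestMQ there is no field named HotCount any more; the columns are literally TestMQ1, TestMQ2, TestMQ3, so sum(HotCount) has nothing to sum. One sketch that turns the columns back into rows before aggregating:

index=<search_string> earliest=-30d@d
| timechart span=1m aligntime=earliest count(eval(searchmatch("from"))) as HotCount by TestMQ
| where tonumber(strftime(_time, "%H")) >= 2 AND tonumber(strftime(_time, "%H")) < 4
| untable _time TestMQ HotCount
| eval Day=strftime(_time, "%Y-%m-%d")
| chart sum(HotCount) over Day by TestMQ

untable converts the wide timechart result into (_time, TestMQ, HotCount) rows, and the final chart pivots them back into one column per queue, summed per day.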
I have the following table:

Timestamp
2021-08-09 12:26:55.7852
2021-08-09 12:26:56.2278
2021-08-09 12:26:56.2278
2021-08-09 12:26:56.3939
2021-08-09 12:26:39.2861
2021-08-09 12:26:40.3430
2021-08-09 12:26:41.3482
2021-08-09 12:26:41.4832
2021-08-09 12:26:56.8794
2021-08-09 12:26:57.8846
2021-08-09 12:26:58.9398
2021-08-09 12:26:59.9450
2021-08-09 12:26:59.9700
2021-08-09 12:26:59.9700
2021-08-09 12:27:00.8201
2021-08-09 12:27:00.8401
2021-08-09 12:27:01.0352
2022-03-30 10:09:25.6406
2022-03-30 10:09:25.8007
2022-03-30 10:09:26.8109
2022-03-30 10:09:27.5961
2022-03-30 10:09:27.5961

I have extracted the timestamp manually using regex instead of the default timestamp. I have different device_ids, and each device_id has a log file. I have the following macro query to remove the events whose timestamps decrease (for instance, in the list above, the run from 2021-08-09 12:26:39.2861 through 12:26:41.4832 falls back in time after 12:26:56.3939):

index="xxxx" source="*$Device_ID$*xxxx*"
| eval Device_ID=mvindex(split(source,"/"),5)
| rex field=_raw "(?<timestamp>[^|]+)"
| table Device_ID timestamp
| streamstats count as s_no by Device_ID
| sort 0 - s_no
| table Device_ID s_no timestamp
| streamstats current=f last(timestamp) as last_timestamp by Device_ID
| eval last_timestamp_h=last_timestamp, timestamp_h=timestamp
| eval last_timestamp=strptime(last_timestamp,"%Y-%m-%d %H:%M:%S.%4N")
| eval timestamp=strptime(timestamp,"%Y-%m-%d %H:%M:%S.%4N")
| eval diff=timestamp-last_timestamp
| eval ref=if(diff<0,last_timestamp,null())
| filldown ref
| eval ref_diff=timestamp-ref
| fillnull ref_diff value=0
| search ref_diff>=0
| fields Device_ID s_no timestamp_h

But when I try to run it for all devices, some values are missing and the results get mixed up. How can I run it for each device_id separately and store the results?
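One thing that can mix devices together here is filldown, which has no by clause, so a ref value can leak from one Device_ID into the next. A sketch that keeps the carry-forward per device (replacing the | filldown ref line; streamstats last() ignores rows where the field is null, which gives a per-group filldown):

| streamstats last(ref) as ref by Device_ID

With that change the rest of the pipeline should be safe to run across all devices at once, since every other stateful step already uses by Device_ID. To store the result, | outputlookup (or collect into a summary index) can be appended at the end.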
Hi all,
When I run this query, it does not generate any alerts for the policies marked in red. What changes do we need so that it raises an alert whenever anyone violates these policies?

index=dlp sourcetype=netskope ((policy="[DLP] - All Apps - Internal" OR policy="[SMTP] - Gmail - Internal " OR policy="[DLP] - GDrive - Internal ") AND ((policy="All DLP Policies") OR (policy="[SMTP] - All Apps - Pwd Protected Files - Alert") OR (alert_type=uba alert_name=" Potential suspicious activity: uploads")))

Thanks
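One structural issue to check: a single event's policy field normally holds one value, so a condition that requires policy to equal one of the first three names AND one of the next two can never match unless policy is multivalue. The trailing spaces inside some of the quoted names also have to exist in the data exactly. A flattened sketch with everything OR'd, if the intent is "alert on any of these":

index=dlp sourcetype=netskope (policy="[DLP] - All Apps - Internal" OR policy="[SMTP] - Gmail - Internal" OR policy="[DLP] - GDrive - Internal" OR policy="All DLP Policies" OR policy="[SMTP] - All Apps - Pwd Protected Files - Alert" OR (alert_type=uba alert_name="Potential suspicious activity: uploads"))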
This is what is happening:

/opt/splunkforwarder/bin # ./splunk add forward-server <splunk-server-ip>:9997

It asks for credentials, and after that it says:

Can't create directory "/opt/splunk/.splunk": No such file or directory

How do I fix this? Please help.
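The error mentions /opt/splunk/.splunk even though the forwarder lives under /opt/splunkforwarder, which suggests SPLUNK_HOME is pointing at the wrong path for this shell (echo $SPLUNK_HOME to confirm). As an alternative to the CLI, the same forwarding can be configured directly in a file; a sketch (the group name is arbitrary):

/opt/splunkforwarder/etc/system/local/outputs.conf:

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = <splunk-server-ip>:9997

Then restart with ./splunk restart.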
I have an inputs.conf that monitors all the files in a folder:

[monitor:///mydata/my_folder/ToSplunk/*.(mylogfile|edi.mylogfile|edi)]
index = xyz
_TCP_ROUTING = dev_indexers,qa_indexers
sourcetype = XYZ_SRCTYPE
crcSalt = <SOURCE>

I also want to monitor the same directory for different files as a different sourcetype; the two stanzas will have their respective props:

[monitor:///mydata/my_folder/ToSplunk/*.ABC.xml.ToSplunk.edi]
index = xyz
sourcetype = XYZ_SRCTYPE:ABC
_TCP_ROUTING = dev_indexers,qa_indexers
crcSalt = <SOURCE>
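Note that monitor paths only understand the * and ... wildcards, not (a|b) regex alternation; that part belongs in a whitelist, which is matched as a regex against the full path. A sketch of the first stanza rewritten that way (the regex is my reading of the intended endings; .edi.mylogfile is already covered because it ends in .mylogfile):

[monitor:///mydata/my_folder/ToSplunk]
whitelist = \.(mylogfile|edi)$
index = xyz
sourcetype = XYZ_SRCTYPE
_TCP_ROUTING = dev_indexers,qa_indexers
crcSalt = <SOURCE>

Also be aware that two monitor stanzas matching the same files tend to conflict, since Splunk only monitors a given file under one input; the whitelists should be made mutually exclusive.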
Hello, I have a folder with different types of files in it and want to monitor the whole folder as one sourcetype with different props.conf settings.

inputs.conf:

[monitor:///mydata/my_folder/ToSplunk/*.(mylogfile|edi.mylogfile|edi)]
index = xyz
_TCP_ROUTING = dev_indexers,qa_indexers
sourcetype = XYZ_SRCTYPE
crcSalt = <SOURCE>

props.conf:

[XYZ_SRCTYPE]
SHOULD_LINEMERGE=false
LINE_BREAKER=(\~|\r\n)ST\*834\*
NO_BINARY_CHECK=true
TRUNCATE=999999
CHARSET=UTF-8
priority = 1

As I said, I have different files, so I wrote different props.conf stanzas for specific log structures to break the events:

[source::/mysource/ToSplunk/*.xml.*.edi]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n\s]+)\<Policy\>[\r\n\s]+
NO_BINARY_CHECK=true
TRUNCATE=999999
CHARSET=UTF-8
priority = 5

[source::/mysource/ToSplunk/*.COMPARE.xml.*.edi]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n\s]+)\<CompareMissing\>[\r\n\s]+
NO_BINARY_CHECK=true
TRUNCATE=999999
CHARSET=UTF-8
priority = 6

[source::/mysource/ToSplunk/*.SBS*.xml.edi]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n\s])+\<Policy\s+
NO_BINARY_CHECK=true
TRUNCATE=999999
CHARSET=UTF-8
priority = 7

[source::/mysource/ToSplunk/*.RCNO*.P.OUT.*]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
TRUNCATE=999999
CHARSET=UTF-8
priority = 8

The line breaking in the first stanza declared for the sourcetype works fine, but none of the [source::...] stanzas are breaking the events correctly.
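One thing that would explain "none of the source:: stanzas work": the patterns use /mysource/ToSplunk/... while the monitored path is /mydata/my_folder/ToSplunk/..., and a [source::...] stanza only applies when its pattern matches the event's actual source value. A sketch of the first override with the path aligned to the input:

[source::/mydata/my_folder/ToSplunk/*.xml.*.edi]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n\s]+)\<Policy\>[\r\n\s]+
NO_BINARY_CHECK=true
TRUNCATE=999999
CHARSET=UTF-8
priority = 5

(The same path correction would apply to the other three stanzas.)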
Hello! I'm trying to set up a map in Dashboard Studio and am not able to find any documentation on how to stop the map from zooming on scroll. In Classic, there is an XML option for this, and I've tried converting it to JSON with no luck. Any help is appreciated!
Hi team, I am currently getting Splunk logs as shown below:

2023-07-11 02:31:43.207 [INFO ] [pool-2-thread-1] FileSensor - Total msg processed for trim reage file:254
host = lgposput503.gso.com
source = abs-upstreamer.log
sourcetype = 600000304_gg_abs_ipc2

I want to fetch this keyword from the Splunk logs: "Total msg processed for trim reage file:{}". Can someone also guide me on how to create a query that presents it as a bar chart? As of now I have created a query like this:

index="abc*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/abs-upstreamer/logs/abs-upstreamer.log" "Total msg processed for trim reage file"

But I am not able to render it in any chart/bar form. Can someone help me out with the queries? Thanks in advance.
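A sketch that extracts the numeric count and charts it over time (msg_count is an arbitrary field name; once the search returns this table, switch the Visualization type to Bar or Column):

index="abc*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/abs-upstreamer/logs/abs-upstreamer.log" "Total msg processed for trim reage file"
| rex "Total msg processed for trim reage file:(?<msg_count>\d+)"
| timechart span=1h sum(msg_count) as total_processed

Chart options only become meaningful once the results are statistical (a table), which is why the raw-event search alone cannot be rendered as a bar chart.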
Hello Everyone, I am stuck with a dropdown issue. My requirement is that whenever the user selects a different value in an earlier dropdown, the later dropdown values should be unset/cleared/set to All (the default value). I have 9 dropdowns; the first through fourth work fine, but the fifth, sixth, and seventh dropdowns are not unsetting their values when the user changes an earlier dropdown. Could someone please help?

<fieldset submitButton="false">
  <input type="dropdown" token="token1" searchWhenChanged="true">
    <label>PSL</label>
    <fieldForLabel>First</fieldForLabel>
    <fieldForValue>First</fieldForValue>
    <change>
      <set token="token1">$value$</set>
      <unset token="form.token6"></unset>
      <unset token="form.token2"></unset>
      <unset token="form.token3"></unset>
      <unset token="form.token4"></unset>
      <unset token="form.token5"></unset>
      <unset token="form.token7"></unset>
    </change>
    <search>
      <query>| inputlookup Filters_Data_PLM2B05.csv | table First | dedup First</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
    <default>DPP</default>
    <initialValue>DPP</initialValue>
  </input>
  <input type="dropdown" token="token2" searchWhenChanged="true">
    <label>second</label>
    <default>$form.token_from_another_dashboard1$</default>
    <initialValue>$form.token_from_another_dashboard1$</initialValue>
    <fieldForLabel>portfolio</fieldForLabel>
    <fieldForValue>portfolio</fieldForValue>
    <selectFirstChoice>true</selectFirstChoice>
    <change>
      <set token="token2">$value$</set>
      <unset token="form.token6"></unset>
      <unset token="form.token3"></unset>
      <unset token="form.token4"></unset>
      <unset token="form.token5"></unset>
      <unset token="form.token7"></unset>
    </change>
    <search>
      <cancelled>
        <unset token="form.token_from_another_dashboard1"></unset>
      </cancelled>
      <query>| inputlookup Filters_Data_PLM2B05.csv | search First=$token1$ | table portfolio | dedup portfolio | sort portfolio</query>
      <earliest>-4h@h</earliest>
      <latest>now</latest>
    </search>
  </input>
  <input type="dropdown" token="token3" searchWhenChanged="true">
    <label>Third</label>
    <default>$form.token_from_another_dashboard2$</default>
    <initialValue>$form.token_from_another_dashboard2$</initialValue>
    <fieldForLabel>application_name</fieldForLabel>
    <fieldForValue>token3</fieldForValue>
    <change>
      <set token="token3">$value$</set>
      <unset token="form.token6"></unset>
      <unset token="form.token4"></unset>
      <unset token="form.token5"></unset>
      <unset token="form.token7"></unset>
    </change>
    <search>
      <query>| inputlookup Filters_Data_PLM2B05.csv |search portfolio="$token2$" CONTROL_M="Yes"|eval application_name=token3."-".application_name |table application_name token3 |sort token3</query>
      <earliest>-4h@h</earliest>
      <latest>now</latest>
      <cancelled>
        <unset token="form.token_from_another_dashboard2"></unset>
        <unset token="form.token3"></unset>
      </cancelled>
    </search>
  </input>
  <input type="dropdown" token="token4" searchWhenChanged="true">
    <label>Job Name</label>
    <fieldForLabel>Forth</fieldForLabel>
    <fieldForValue>Forth</fieldForValue>
    <change>
      <set token="token4">$value$</set>
      <unset token="form.token6"></unset>
      <unset token="form.token5"></unset>
      <unset token="form.token7"></unset>
    </change>
    <search base="basesearch">
      <query>|stats values(Forth) as Forth |mvexpand Forth</query>
    </search>
    <choice value="*">All</choice>
    <default>*</default>
    <initialValue>*</initialValue>
  </input>
  <input type="dropdown" token="token5" searchWhenChanged="true">
    <label>Fifth</label>
    <fieldForLabel>token5</fieldForLabel>
    <fieldForValue>token5</fieldForValue>
    <change>
      <set token="token5">$value$</set>
      <unset token="form.token6"></unset>
      <unset token="form.token7"></unset>
    </change>
    <search base="basesearch">
      <query>|stats values(token5) as token5 |mvexpand token5</query>
    </search>
    <choice value="*">All</choice>
    <default>*</default>
    <initialValue>*</initialValue>
  </input>
  <input type="dropdown" token="token6" searchWhenChanged="true">
    <label>sixth</label>
    <fieldForLabel>job_frequency</fieldForLabel>
    <fieldForValue>job_frequency</fieldForValue>
    <change>
      <set token="token6">$value$</set>
      <unset token="form.token7"></unset>
    </change>
    <search base="basesearch">
      <query>|stats values(job_frequency) as job_frequency |mvexpand job_frequency</query>
    </search>
    <choice value="*">All</choice>
    <default>*</default>
    <initialValue>*</initialValue>
  </input>
  <input type="dropdown" token="token7" searchWhenChanged="true">
    <label>seventh</label>
    <fieldForLabel>token7</fieldForLabel>
    <fieldForValue>token7</fieldForValue>
    <change>
      <set token="token7">$value$</set>
    </change>
    <search base="basesearch">
      <query>|stats values(token7) as token7 |mvexpand token7</query>
    </search>
    <choice value="*">All</choice>
    <default>*</default>
    <initialValue>*</initialValue>
  </input>
  <input type="dropdown" token="timepicker_token" searchWhenChanged="true">
    <label>Time Picker</label>
    <choice value="Today">Today</choice>
    <choice value="week_to_date">Week to date</choice>
    <choice value="business_week_to_date">Business Week to date</choice>
    <choice value="month_to_date">Month to date</choice>
    <choice value="year_to_date">Year to date</choice>
    <choice value="Yesterday">Yesterday</choice>
    <choice value="Previous_week">Previous week</choice>
    <choice value="Previous_business_week">Previous business week</choice>
    <choice value="Previous_month">Previous month</choice>
    <choice value="Previous_two_month">Previous two month</choice>
    <choice value="Previous_year">Previous year</choice>
    <choice value="Last_60_mins">Last 60 minutes</choice>
    <choice value="Last_4_hours">Last 4 hours</choice>
    <choice value="Last_7_days">Last 7 days</choice>
    <choice value="Last_30_days">Last 30 days</choice>
    <choice value="Last_3_months">Last 3 months</choice>
    <choice value="Last_6_months">Last 6 months</choice>
    <choice value="Last_9_months">Last 9 months</choice>
    <choice value="Last_1_year">Last 1 year</choice>
    <change>
      <condition value="Last_60_mins">
        <set token="earliest_time">-60m@m</set>
        <set token="latest_time">@s</set>
      </condition>
      <condition value="Last_4_hours">
        <set token="earliest_time">-4h@m</set>
        <set token="latest_time">@s</set>
      </condition>
      <condition value="Today">
        <set token="earliest_time">@d</set>
        <set token="latest_time">@s</set>
      </condition>
      <condition value="week_to_date">
        <set token="earliest_time">@w0</set>
        <set token="latest_time">now</set>
      </condition>
      <condition value="business_week_to_date">
        <set token="earliest_time">@w1</set>
        <set token="latest_time">now</set>
      </condition>
      <condition value="month_to_date">
        <set token="earliest_time">@mon</set>
        <set token="latest_time">now</set>
      </condition>
      <condition value="year_to_date">
        <set token="earliest_time">@y</set>
        <set token="latest_time">now</set>
      </condition>
      <condition value="Yesterday">
        <set token="earliest_time">-1d@d</set>
        <set token="latest_time">@d</set>
      </condition>
      <condition value="Last_7_days">
        <set token="earliest_time">-7d@h</set>
        <set token="latest_time">-1s@</set>
      </condition>
      <condition value="Last_30_days">
        <set token="earliest_time">-31d@d</set>
        <set token="latest_time">-1s@</set>
      </condition>
      <condition value="Last_3_months">
        <set token="earliest_time">-93d@d</set>
        <set token="latest_time">-1s@</set>
      </condition>
      <condition value="Last_6_months">
        <set token="earliest_time">-182d@d</set>
        <set token="latest_time">-1s@</set>
      </condition>
      <condition value="Last_9_months">
        <set token="earliest_time">-274d@d</set>
        <set token="latest_time">-1s@</set>
      </condition>
      <condition value="Last_1_year">
        <set token="earliest_time">-1y@d</set>
        <set token="latest_time">-1s@</set>
      </condition>
      <condition value="Previous_week">
        <set token="earliest_time">-7d@w0</set>
        <set token="latest_time">@w0</set>
      </condition>
      <condition value="Previous_business_week">
        <set token="earliest_time">-6d@w1</set>
        <set token="latest_time">-1d@w6</set>
      </condition>
      <condition value="Previous_month">
        <set token="earliest_time">-1mon@mon</set>
        <set token="latest_time">@mon</set>
      </condition>
      <condition value="Previous_two_month">
        <set token="earliest_time">-2mon@mon</set>
        <set token="latest_time">@mon</set>
      </condition>
      <condition value="Previous_year">
        <set token="earliest_time">-1y@y</set>
        <set token="latest_time">@y</set>
      </condition>
    </change>
    <initialValue>Last_4_hours</initialValue>
    <default>Last_4_hours</default>
  </input>
  <input type="text" token="desc_token" searchWhenChanged="true">
    <label>Description search</label>
    <default>*</default>
    <initialValue>*</initialValue>
  </input>
</fieldset>
Hello Team,
I have a bar graph representing data. When I keep the timechart span=15m and run the search for 1h, the value for the last 15 minutes shows high, and if I run the same search again after some time, the value shows normal. Is this expected behaviour, and why does it happen like this? How can I fix it? Any help is appreciated.

E.g.:
9:00 - 9:15  30
9:15 - 9:30  36
9:30 - 9:45  45
9:45 - 10:00 180

After some time:
9:45 - 10:00 49
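Hard to say for certain without the full search, but one frequent cause is an unsnapped window: with a plain "last 60 minutes" range, the first and last 15-minute buckets cover partial intervals that shift as "now" moves, and events still being indexed change recent buckets between runs. A sketch that pins the window to whole buckets (index name is a placeholder):

index=my_index earliest=-60m@m latest=@m
| timechart span=15m count

If the last bucket still changes between runs, comparing _time with _indextime on those events would show whether the data is simply arriving late.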
Hello, (I am very new to Splunk!)
I have a standalone Splunk Enterprise instance running on a Linux VM that I am using to practice splunking. I would like to completely reset/empty the instance and start fresh without uninstalling/reinstalling Splunk in my VM. Is this possible?
Some background: I used the clean eventdata command from the CLI in my VM, but it didn't remove the indexes. When I go to the web UI and navigate from Settings to Indexes, my indexes are all still present in the list and cannot be deleted manually (I get an error that the token can't be found).
If I can essentially empty the instance and start fresh, that would be ideal. Thank you in advance.
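For a full reset of a standalone test instance, the CLI clean commands are the usual route; run from $SPLUNK_HOME/bin with Splunk stopped (-f skips the confirmation prompt):

./splunk stop
./splunk clean all -f
./splunk start

Note that clean eventdata / clean all remove data, not index definitions: the indexes still appear under Settings > Indexes because they are defined in indexes.conf. Indexes created through the UI can be removed by deleting their stanzas from the indexes.conf where they were created (typically $SPLUNK_HOME/etc/apps/search/local/ or etc/system/local/) and restarting.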