All Topics

Hi, in our Splunk architecture the indexers were set up in 2015, and now we need to put one of the indexers into manual detention, but I am not able to do this because I don't know the admin password. Can someone please help me recover or reset it? B.R.
This is my sample data:

10.3.31.252 - - 15/Mar/2021:14:06:28 +0000 "POST /usenames/rest/sessionscookie dest oamdashboard-oamdashboard.myapp.com/usenames/rest/sessionscookie location usenames upstream_host 10.3.58.247:80 response_from_above 401 user- - - - - myuser myuser 1

I want to extract the status code (401 here) and the user value (myuser) from this string. How should I write a rex for this in a Splunk search query? Note that the status code may not contain any value: instead of 401 it can simply be a hyphen (-). Also, the number of hyphens after the user field may vary, and I want the word to match only when there are exactly 5 hyphens, otherwise not. I tried to achieve this with:

| rex "response_from_above (?<status>\d+) user - - - - - (?<userid>\w+)"

but I am not able to figure this out.
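A sketch of a rex that might work for this layout, assuming the status is either digits or a lone hyphen, and that "user-" is followed by exactly four more space-separated hyphens before the user name (as in the sample, where "user-" carries the first of the 5 hyphens):

```
| rex "response_from_above\s+(?<status>\d+|-)\s+user-(?: -){4}\s+(?<userid>\w+)"
```

The {4} quantifier is what enforces the hyphen count; adjust it if the real layout differs from the sample above.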
Hello all, I added our ES SHC to our monitoring console, and the instance (host) name is the same for all 3 search head cluster nodes, although the instance hosts (server names) are all unique. How do I resolve this issue? Thanks, Ed
Hello everyone, I want to export all stack traces of a particular exception type from the UI. Does anyone have an idea for an alternative way to do this? We also raised a support ticket for this enhancement, but were told it needs to be discussed in the community forum first. Please add your suggestions on this. Thanks in advance.
Hey, I have a dashboard similar to the attached one, also with a sparkline. Now I want a drilldown to another dashboard, and I want to pass the Location to that dashboard. Example:
1. Click on the single value for New York
2. Drill down to the other dashboard
3. The dropdown in the other dashboard has New York selected
Does anybody know how this is possible? Thanks in advance

<dashboard>
  <label>testme</label>
  <row>
    <panel>
      <single>
        <search>
          <query>
<![CDATA[
| makeresults | eval Location="New York", value=1
| append
    [| makeresults
    | eval Location="Berlin", value=2 ]
| timechart avg(value) by Location
| search Berlin=*
]]>
          </query>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">all</option>
        <option name="height">150</option>
        <option name="numberPrecision">0.0</option>
        <option name="rangeColors">["0x555","0x53a051","0xf8be34","0xdc4e41"]</option>
        <option name="rangeValues">[$service_003.threshold_normal$,$service_003.threshold_medium$,$service_003.threshold_critical$]</option>
        <option name="refresh.display">progressbar</option>
        <option name="showSparkline">$measurements.showSparkline$</option>
        <option name="showTrendIndicator">$measurements.showTrendIndicator$</option>
        <option name="trellis.enabled">1</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">small</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="trendInterval">auto</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
        <option name="useThousandSeparators">1</option>
        <drilldown>
          <link target="_blank">/app/hsy_ops_da_servicetrace/hsy_kpi_dynamic_2?form.service=$service_003.name$</link>
        </drilldown>
      </single>
    </panel>
  </row>
</dashboard>
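One way this is often done with trellis single-value panels: the split-by value of the clicked tile is exposed in the $trellis.name$ token, which can be passed into a form token on the target dashboard. A sketch only — the target path and the token name form.tok_location are placeholders, not taken from the dashboard above:

```xml
<drilldown>
  <link target="_blank">/app/search/target_dashboard?form.tok_location=$trellis.name$</link>
</drilldown>
```

On the target dashboard, a dropdown input whose token is tok_location should then come up with the clicked Location (e.g. New York) pre-selected.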
Hi, if you have (for argument's sake) 10 alerts set up in Splunk Cloud, is there a way to toggle all of them off/on without having to disable each one individually?
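There is no toggle-all button in the UI, but saved searches (alerts included) can be flipped through the saved/searches REST endpoint on Splunk Enterprise; on Splunk Cloud the management port is generally not reachable directly, so this may require Splunk Support or the Admin Config Service instead. A sketch only, with placeholder host, credentials, and alert names:

```sh
# disable two alerts by name (placeholders); use -d disabled=0 to re-enable
for name in "My%20Alert%201" "My%20Alert%202"; do
  curl -k -u admin:changeme \
    "https://splunk.example.com:8089/servicesNS/nobody/search/saved/searches/$name" \
    -d disabled=1
done
```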
I am trying to restore data from frozen buckets to a Splunk indexer. If I restart the indexer after rebuilding each bucket, the indexer works fine; but if I restart it after rebuilding multiple buckets, the indexer hangs. Does anyone know how to run the rebuild operation for multiple buckets instead of having to do it one bucket at a time?
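One approach, assuming a *nix indexer and that the thawed buckets live under the index's thaweddb directory: run splunk rebuild over each bucket in a loop and restart only once at the end. The paths and index name below are placeholders:

```sh
# rebuild all thawed buckets for index "myindex", then restart once
for bucket in /opt/splunk/var/lib/splunk/myindex/thaweddb/db_*; do
  /opt/splunk/bin/splunk rebuild "$bucket"
done
/opt/splunk/bin/splunk restart
```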
Hi, I'm looking to create a real-time alert, but I don't see the 'Real-time' alert type option as shown below. We are using Splunk Cloud; does anyone know if this feature works on Splunk Cloud? Cheers, Rob
Hi Community. My customer is ingesting two sources of data: one from an IDP and another from a firewall. Both are CIM compliant and already ingesting fine. Both sources define a "high" event category, but the IDP sends the value as "high" while the FW sends it as "High" (different capitalization). If I correlate both sources in one chart, I get one line for high and another for High. My question is:
- would you change the values at parse time (props.conf, SEDCMD) so correlation is easier for all future incidents?
- or would you change the values at search time (props.conf), or even in SPL (an eval to make every "high" look like "High")?
I'm weighing the future consequences of both approaches, efficiency-wise and usability-wise. Thanks in advance.
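For comparison, a search-time normalization sketch: a calculated field in props.conf on the search head lower-cases the value for one of the sourcetypes. The sourcetype name here is a placeholder, and "severity" stands in for whichever CIM field actually carries the value:

```ini
# props.conf (search time, on the search head)
[idp:events]
EVAL-severity = lower(severity)
```

The equivalent one-off fix in SPL would be | eval severity=lower(severity) before charting; the props.conf version applies it to every future search without touching the indexed data.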
Hi, I am very new to Splunk. I would like to know how to search just the latest log file from the below screenshot (i.e. the current day's file only). At the moment I have the below search query, but it pulls in all the files, and I'm not sure of the syntax for adding the current day's date string. Ultimately I am looking to find errors in real time and send an alert.

source="d:\\logs\\gmoaisfabricsync\\fabricsyncservice-*.txt"

Cheers, Rob
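One possible approach, assuming the date is embedded in the file name: build today's date string with strftime and keep only sources containing it. The date format below (%Y-%m-%d) is a guess; change it to match the real file-naming pattern:

```
source="d:\\logs\\gmoaisfabricsync\\fabricsyncservice-*.txt"
| eval today=strftime(now(), "%Y-%m-%d")
| where like(source, "%" . today . "%")
```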
Hello, I use a scheduled search where I aggregate events like this:

| stats last(LastReboot) as "Last reboot date" by host CATEGORY DEPARTMENT

At the moment, the DEPARTMENT field is empty for a lot of events. In the dashboard, I call my scheduled search and use token filters:

| loadjob savedsearch="admin:SA_XXX_sh:LogLogon"
| search CATEGORY=$tok_filtercategory|s$
| search DEPARTMENT=$tok_filterdepartment$

What I don't understand is why the events are not displayed when the DEPARTMENT field is empty. Thanks
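stats ... by drops any event whose group-by field is null, and rows with an empty DEPARTMENT also fail a DEPARTMENT=<value> filter. A common workaround is to give empty departments a placeholder value before the stats, e.g.:

```
| fillnull value="unknown" DEPARTMENT
| stats last(LastReboot) as "Last reboot date" by host CATEGORY DEPARTMENT
```

The dashboard filter token can then match the literal value unknown (or *) to include those rows.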
Hi everyone, I have installed and configured the following 2 apps:
http://apps.splunk.com/app/3662
http://apps.splunk.com/app/3663
based on the instructions on this page: https://www.cisco.com/c/en/us/td/docs/security/firepower/670/api/eStreamer_enCore/eStreamereNcoreSplunkOperationsGuide_409.html#_Toc529958486
The configuration went pretty well and I got a successful connection to the eStreamer. But when I search for sourcetype="cisco:estreamer:data" there is no data coming in. I can prove that a lot of data is sent to Splunk with the command: tcpdump port 8302
When I look at index=_internal estreamer (log_level=ERROR OR log_level=WARN) there are a lot of error messages like this:
ERROR [604f2bfe5a7f42306d1990] appnav:186 - Unable to parse nav XML for app=eStreamer-Dashboard; Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration.
Could someone please help me? I have no idea why I'm getting this error. Thank you very much
Hi Splunkers, can anyone help? I need to count the Flag field where the value is 0. I've tried this command: "streamstats count(Flag=0) as Results_0 | table Results_0", but the table is blank. Please advise. Thanks
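count(Flag=0) does not filter: the aggregation argument must be a field, so a comparison needs an eval wrapper. A sketch that counts the events where Flag is 0:

```
| stats count(eval(Flag=0)) as Results_0
| table Results_0
```

The same count(eval(...)) form also works inside streamstats if a running count per event is what you need.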
Hey team! I want to access the "Splunk IT Service Intelligence" free trial, but unfortunately the page didn't respond. I want to work with Splunk long term and use it in different scenarios, so I need the free trial to understand it. Please check the issue, resolve it as soon as possible, and let me know. Here is a snapshot of the error. Link to the page: https://www.splunk.com/getsplunk/itsi_sandbox Regards, Hammad Umer
I have a dashboard with multiple panels, each requiring its own time input, but the dashboard also needs a common time input. The requirements are:
1. When the dashboard loads, every panel should use the same time range, i.e. the default range of the common time range picker.
2. If the time range picker of a specific panel is changed, it should override the common time range and rerun only that panel for the new range.
3. If the common time range picker is changed, it should override the panel-specific ranges and every panel should run again.
Basically, the latest change should always override the previous state. The only catch is that a panel-specific change affects only that panel, while a change to the common time range picker affects all panels. Here is the dashboard XML I am currently using, but it's not giving me what I need.

<form>
  <label>EDTC Reporting</label>
  <description>EDTC requirements for new application</description>
  <fieldset submitButton="false" autoRun="true">
    <input type="time" token="field1" searchWhenChanged="true">
      <label></label>
      <default>
        <earliest>0</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <input type="time" token="time1" searchWhenChanged="true">
        <label></label>
        <default>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
        </default>
      </input>
      <chart>
        <title>Average Response Time per Module</title>
        <search>
          <query>index=main | rex field=processingTime "\[(?&lt;responseTime&gt;\d*)" | stats avg(responseTime) by module</query>
          <earliest>$time1.earliest$</earliest>
          <latest>$time1.latest$</latest>
        </search>
        <option name="charting.axisTitleX.text">Module</option>
        <option name="charting.axisTitleY.text">Average Response Time</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.chart">column</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <input type="time" token="time2" searchWhenChanged="true">
        <label></label>
        <default>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
        </default>
      </input>
      <chart>
        <title>Module wise Error Rate</title>
        <search>
          <query>index=main | where isnotnull(errorCode) | stats count as count2 by module | join type=inner module [ search index=main | stats count as count1 by module ] | eval errorRate = (count2/count1*100) | table module errorRate</query>
          <earliest>$time2.earliest$</earliest>
          <latest>$time2.latest$</latest>
        </search>
        <option name="charting.axisTitleY.text">Error Rate</option>
        <option name="charting.axisY.maximumNumber">100</option>
        <option name="charting.axisY.minimumNumber">0</option>
        <option name="charting.chart">column</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <input type="time" token="time3" searchWhenChanged="true">
        <label></label>
        <default>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
        </default>
      </input>
      <chart>
        <title>Module wise Security Exceptions</title>
        <search>
          <query>index=main | where match(displayMessage,"Resource Unavailable") | stats count by module</query>
          <earliest>$time3.earliest$</earliest>
          <latest>$time3.latest$</latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <input type="time" token="time4" searchWhenChanged="true">
        <label></label>
        <default>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
        </default>
      </input>
      <table>
        <title>Unique Users</title>
        <search>
          <query>index=main | dedup unique_name | table unique_name</query>
          <earliest>$time4.earliest$</earliest>
          <latest>$time4.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <input type="time" token="time5" searchWhenChanged="true">
        <label></label>
        <default>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
        </default>
      </input>
      <table>
        <title>Most Used Features</title>
        <search>
          <query>index=main | dedup _raw | top limit=10 path</query>
          <earliest>$time5.earliest$</earliest>
          <latest>$time5.latest$</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <input type="time" token="time6" searchWhenChanged="true">
        <label></label>
        <default>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
        </default>
      </input>
      <table>
        <title>Least Used Feature</title>
        <search>
          <query>index=main | dedup _raw | rare limit=10 path</query>
          <earliest>$time6.earliest$</earliest>
          <latest>$time6.latest$</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>

Below is what I get when I first open the dashboard.
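One pattern that may give the override behaviour described above: keep the per-panel tokens (time1 … time6) on the searches, and add a <change> block to the global picker that pushes its range into every panel token whenever it changes. A sketch for the fieldset only, under the assumption that the rest of the dashboard stays as-is (repeat the pair of <set> elements for each panel token):

```xml
<fieldset submitButton="false" autoRun="true">
  <input type="time" token="field1" searchWhenChanged="true">
    <label></label>
    <default>
      <earliest>0</earliest>
      <latest>now</latest>
    </default>
    <change>
      <!-- a global change overrides every panel's range -->
      <set token="time1.earliest">$field1.earliest$</set>
      <set token="time1.latest">$field1.latest$</set>
      <set token="time2.earliest">$field1.earliest$</set>
      <set token="time2.latest">$field1.latest$</set>
      <!-- and likewise for time3 through time6 -->
    </change>
  </input>
</fieldset>
```

A panel's own picker still only touches its own token, so only that panel reruns. One caveat: the displayed value of a panel picker may not visually follow a token set this way, even though its search does.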
Hi all, I have only started working with Splunk recently and I am stuck on one query. I have JSON data like below:

catDevices: [
  { model: A1_1234
    Name: ZASNJHCDNA }
  { model: A1_5678
    Name: JNDIHUEDHNJ }]
Devices : [
  JNDIHUEDHNJ
  NVBBVUYVBHI ]

I want to compare "Devices" with catDevices{}.Name, and if it matches I want to display the Devices and model list. I tried this query:

index=main sourcetype=device
| rename Devices{} as success
| mvexpand success
| dedup success
| rename catDevices{}.model as Model, catDevices{}.Name as device_name
| eval zip = mvzip(Model, device_name)
| fields - _raw
| mvexpand zip
| rex field=zip "(?<MODEL>.*),(?<DEVICE>.*)"
| fields - zip
| eval Status = if(match(MODEL,"A1*"), if(success == DEVICE, success, "NO MATCH"), "NO MATCH")
| table success, MODEL, Status
| where Status != "NO MATCH"
| stats count(success)

It worked, but as the data grows the result becomes inaccurate because of the mvexpand memory threshold. Can you please tell me how to correct my query, or suggest a different solution? Any help would be appreciated. Thanks in advance.
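A sketch that avoids mvexpand entirely by using mvmap (Splunk 8.0+): join all catDevices names into one string, then map over Devices{} and keep only the entries found in that string. It assumes device names never appear as substrings of one another:

```
index=main sourcetype=device
| eval names=mvjoin('catDevices{}.Name', ",")
| eval matched=mvmap('Devices{}', if(like(names, "%" . 'Devices{}' . "%"), 'Devices{}', null()))
| eval match_count=mvcount(matched)
| table matched match_count
```

Because nothing is expanded into extra rows, the mvexpand memory limit no longer applies.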
First issue: I've been trying to ingest one XML file into exactly one event in Splunk, but Splunk always splits it into 2 events.

Example XML file:
##################
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<IntegrationTransaction>
<TransactionMetaData xmlns="">
<SourceSystemName>SystemNameSource</SourceSystemName>
<TransactionType>ValidTrans</TransactionType>
<UniqueTransactionID>DFGDFGFG</UniqueTransactionID>
<TransactionDateTime>2021-03-12T17:38:02.725+01:00</TransactionDateTime>
</TransactionMetaData>
<Payload xmlns="">
<ValidatedSalesTransactions>
<Transaction>
<RetailID>XZ0051</RetailID>
</Transaction>
</ValidatedSalesTransactions>
</Payload>
</IntegrationTransaction>
##################

transforms.conf
##################
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[accept_xml_files]
REGEX = <?xml version
DEST_KEY = queue
FORMAT = indexQueue
##################

props.conf
##################
[test_XML_sourcetype]
BREAK_ONLY_BEFORE = goblygook
MAX_EVENTS = 200000
DATETIME_CONFIG = NONE
CHECK_METHOD = modtime
pulldown_type = true
LEARN_MODEL = false
SHOULD_LINEMERGE = true
TRUNCATE = 0
kv_mode = xml
TRANSFORMS-set = setnull, accept_xml_files
##################

inputs.conf
##################
[monitor:///tmp/testXML/*.xml]
index = test_XML_index
sourcetype = test_XML_sourcetype
crcSalt = <SOURCE>
##################

Result in Splunk:
##################
First event:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<IntegrationTransaction>
<TransactionMetaData xmlns="">
<SourceSystemName>SystemNameSource</SourceSystemName>
<TransactionType>ValidTrans</TransactionType>
<UniqueTransactionID>DFGDFGFG</UniqueTransactionID>
##################
Second event:
<TransactionDateTime>2021-03-12T17:38:02.725+01:00</TransactionDateTime>
</TransactionMetaData>
<Payload xmlns="">
<ValidatedSalesTransactions>
<Transaction>
<RetailID>XZ0051</RetailID>
</Transaction>
</ValidatedSalesTransactions>
</Payload>
</IntegrationTransaction>
##################
Note: the second event always starts with <TransactionDateTime>.

Second issue: Splunk is not indexing the files in real time; sometimes it takes 30-45 minutes for them to be available in Splunk. Thank you.
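The split right before <TransactionDateTime> is the classic symptom of line merging breaking in front of a line that looks like a timestamp. A props.conf sketch that disables line merging and breaks events only after the closing root tag instead, assuming exactly one <IntegrationTransaction> document per event:

```ini
[test_XML_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = </IntegrationTransaction>([\r\n]+)
TRUNCATE = 0
KV_MODE = xml
TRANSFORMS-set = setnull, accept_xml_files
```

This needs to live on the first heavy forwarder or indexer that parses the data. The delay in the second issue may be unrelated (for example, the file still being written when the monitor first sees it); checking splunkd.log on the forwarder would be the next step.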
Hello guys, we would like to run a group project/contest where several SHs are connected to one IDX cluster. Each SH belongs to one contestant, who should solve several search tasks on the indexed data. Is there a possibility to "hide" the searches from the indexer cluster or any other Splunk server, so that the contestants cannot cheat by looking into the _internal index? I know we could restrict access to _internal, but let us assume all of them have admin rights AND access to the shell of the underlying OS. My guess is that it is not possible to hide such a search from the rest of the Splunk servers (except the executing SH), because the indexers have to run the search themselves. Even not forwarding the internal data to the indexers does not help here. Am I right? Thanks in advance. BR, Tom
Hello, I wanted to know how a React app can call the splunkd REST API directly (usually running on port 8089, possibly on another instance). Right now when I do so, an error is thrown stating "ERR_CONNECTION_RESET". What we usually do is use the URL built by createRESTURL from the 'splunk-utils' library, which works; but we need another way, since the UI also has to call the API of different instances. Kindly suggest a solution.
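If the reset happens because splunkd rejects the cross-origin request, the splunkd side can be opened up with crossOriginSharingPolicy in server.conf on each instance the UI needs to call. A sketch; the origin below is a placeholder:

```ini
# server.conf on the splunkd instance being called
[httpServer]
crossOriginSharingPolicy = https://my-react-app.example.com
```

A restart of that instance is needed afterwards. If port 8089 is blocked by a firewall or not exposed at all (common for managed instances), no CORS setting will help and a proxy in between is the usual fallback.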
Hi, I am trying to build a dashboard with some status indicators and uptime gauges. Below are a few sample logs:

2021-02-21 13:48:42,744 (DEBUG) Thread_^[OP].* BATCH ID(31673) response (Internal Server Error, 500)
2021-02-21 13:48:42,741 (DEBUG) Thread_^[KL].* BATCH ID(62422) response (Internal Server Error, 500)
2021-02-21 13:48:31,620 (DEBUG) Thread_^[UV].* BATCH ID(40284) response (OK, 200)
2021-02-21 13:47:41,991 (DEBUG) Thread_^[OP].* BATCH ID(31672) response (OK, 200)

I created a status indicator for the last 10 minutes with the query below:

index="abc"
| eval Indicator=if(Response=="(OK, 200)", "UP", "DOWN")
| stats count(eval(if(Indicator="UP", 1, null()))) as UP_count count(eval(if(Indicator="DOWN", 1, null()))) as DOWN_count count(Indicator) as "TotalCount"
| eval SI = case(UP_count>0,"UP", UP_count==0,"DOWN")
| table SI

The result is either UP or DOWN. Now I am trying to create an uptime gauge that displays how long SI has been UP. For example, if SI was DOWN for some time, the connection was then restored, and SI has been UP for the last 2 days, the uptime should read something like: 2 days, 30 minutes, 40 seconds. Can anyone please help me with this?
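A sketch for the gauge: find the timestamp of the most recent DOWN response in the search window and format the elapsed time since then as a duration. It assumes the search spans far enough back to include the last DOWN; if no DOWN exists in the window, it falls back to the earliest event seen:

```
index="abc"
| eval Indicator=if(match(_raw, "\(OK, 200\)"), "UP", "DOWN")
| stats max(eval(if(Indicator="DOWN", _time, null()))) as last_down min(_time) as first_seen
| eval up_since=coalesce(last_down, first_seen)
| eval uptime=tostring(now() - up_since, "duration")
| table uptime
```

tostring(X, "duration") renders seconds in a days+HH:MM:SS style; for the exact "2 days, 30 minutes, 40 seconds" wording, an extra eval that decomposes the seconds would be needed.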