All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Splunkers, how do I change the threat intelligence download interval in Splunk ES? Currently the information is downloaded every 24h. Is it possible to change that interval, and how can I check the download history?
Hello Splunkers! I created the regexes below from the raw events, and I want to create an alert that shows the event in one column only.

| rex field=_raw "Site\|\_\_SYSTEM\__(?<ServiceName>[A-Za-z]+)"
| rex field=_raw "Message\s\=\s(?<Error_Message>.+\:\s[A-Za-z0-9]+)"
| rex field=_raw "failed:\s(?<OrderNumber>[A-Za-z0-9]+)"
| rex field=_raw "httpStatusCode\s\=\s(?<ResponseTime>[0-9]+)"
| rex field=_raw "ResponseTime\s\=\s(?<Reason>.+)"

Using all of these fields, I want a single-column result like: ServiceName Error_Message OrderNumber Reason ResponseTime. Please let me know how to concatenate them and use the makemv command, and if there is another approach, please guide me.
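A minimal sketch of one way to combine the extracted fields into a single multivalue column (the delimiter and output field name are arbitrary choices, not anything required by SPL):

```spl
| eval combined = ServiceName . "," . Error_Message . "," . OrderNumber . "," . Reason . "," . ResponseTime
| makemv delim="," combined
| table combined
```

The eval concatenates with the `.` operator, and makemv then splits the string back into one multivalue field, which renders as stacked values in a single table column.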
Short description: When a consumer orders groceries online, I provide the picker (the person who picks the items for the order) with an estimated number of boxes needed for that order, and that estimate is stored in a database. The estimated-box functionality generally works, although occasionally it fluctuates higher or lower. If the picker uses more or fewer boxes than estimated, the actual box count is stored in the data; if the estimate is correct, the actual count is never stored in the database. Expected output: I want to find out what percentage/average of the actual values is missing. I am not sure how to evaluate null/undefined actual boxes. This is my attempt; I'm not sure it is correct:

| spath path=data{}.actual_totes{}.finalBoxAmount output=finalBoxes
| spath path=data{}.estimated_totes{}.box output=estimatedBox
| stats sum(estimatedBox) as totalEstimatedBox, sum(finalBoxes) as totalFinalBoxes
| eval diff = (totalFinalBoxes - totalEstimatedBox) * 100 / totalFinalBoxes
| table diff

This is my Splunk data table image. As you can see in the table, some of my actual box values are null/undefined/empty objects (not sure which). In the JSON, this is how I get actual_totes:

data: { actual_totes: { }, estimated_totes: { box: 4 } }

PS: I'm a rookie with Splunk, so my grasp of its syntax is limited. Please walk me through how to display the values in a pie chart. The pie chart should show: Estimated Boxes, Actual Boxes used, and missing actual values as a percentage. Thank you.
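A sketch of one way to count how many orders are missing an actual value (the spath path is copied from the question; treat it as an assumption about the data shape):

```spl
| spath path=data{}.actual_totes{}.finalBoxAmount output=finalBoxes
| eval hasActual = if(isnull(finalBoxes), 0, 1)
| stats count as totalOrders, sum(hasActual) as ordersWithActual
| eval missingPct = round((totalOrders - ordersWithActual) * 100 / totalOrders, 2)
```

The key idea is isnull(): an order with an empty actual_totes object yields a null finalBoxes, so counting nulls against the total gives the missing percentage, which can then feed a pie chart.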
I made a dashboard and I'm showing it to my colleagues. Problem: when my coworkers or I open that Splunk dashboard link (after clearing cookies), the dashboard immediately opens in edit mode. I only want to show view mode; I don't want edit mode. PS: only I can grant permissions; others cannot.
Hi, I am a student and new to Splunk. I really need help creating a table like this: the goal is to detect different users that authenticated using the same clientIP, with different httpmethods, different status codes, and the corresponding sessionid. I used the query below, which yielded no results.

index=* sourcetype=* httpmethod=* httpstatus=*
| table clientip, httpmethod, statuscode, sessionid
| eval mv_field = clientip . "," . httpmethod . "," . statuscode . "," . sessionid
| makemv delim="," mv_field
| table mv_field

Desired output:

clientIP | HTTPMETHOD | STATUS CODE | SESSION
clientIP 1 | GET POST HEAD | 200s 400s 300s 500s | sessionid
clientIP 2 | POST | 400s 200s | sessionid
clientIP 3 | GET POST | 200s | sessionid
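A sketch of one possible approach using stats values() to group everything per clientIP (the index/sourcetype filters and the `user` field are assumptions about your data; adjust to your actual field names):

```spl
index=* sourcetype=* httpmethod=* httpstatus=*
| stats values(httpmethod) as HTTPMETHOD, values(httpstatus) as "STATUS CODE",
        values(sessionid) as SESSION, dc(user) as userCount by clientip
| where userCount > 1
```

stats values() collects the distinct methods, statuses, and sessions into multivalue cells per IP, and dc(user) > 1 keeps only IPs shared by more than one authenticated user.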
Hello, I am a bit unclear from the readings about the meaning of 'latest_day' and '1day_before'. I have attached a screenshot where I am comparing a particular event that occurred over 7 days. Now, if I want to compare, say, latest_day and 1day_before: is latest_day like yesterday, and 1day_before the 4th of October? I am confused. My query:

index="AB" earliest=-8d@d latest=@d
| search status="OTP_REQUIRED"
| timechart span=1h count
| timewrap d
| fields _time latest_day, 1day_before (when I want to compare days)

Thank you
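As I understand timewrap (worth verifying against your own results), the series names are relative to the most recent wrapped period in the search window. With latest=@d the window ends at today's midnight, so latest_day would be yesterday and 1day_before the day before that (4 October, in the example). A sketch keeping only those two series:

```spl
index="AB" status="OTP_REQUIRED" earliest=-8d@d latest=@d
| timechart span=1h count
| timewrap 1d
| fields _time, latest_day, 1day_before
```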
We are new to Splunk and we are trying to learn about all the vast capabilities that Splunk offers. So here is the scenario: we have a repository containing 100s of zip files that are accessible through a network share. Each zip file contains a CSV file with the data we need to ingest. So the question is: can Splunk ingest just the CSV within these individual zip files without having to unzip the entire archive first?
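Splunk's file monitor can generally index compressed archives directly, decompressing them as part of ingestion, so an explicit unzip step is usually unnecessary. A minimal inputs.conf sketch (the share path and index name here are placeholders, not anything from the question):

```
# inputs.conf (sketch; path and index are assumptions)
[monitor://\\fileserver\share\archives\*.zip]
index = csv_index
sourcetype = csv
disabled = false
```

One caveat worth testing: archive members are read in full each time the archive changes, so if the zips are rewritten regularly this can re-index data.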
I'd like to save a Splunk dashboard with its mouseover effects (it shows the data when I hover over the graph). How can this be done? When I save it as HTML using Chrome and two Chrome extensions (Save Page WE, Save Page Offline), it doesn't have the mouseover effect. Note: when inspecting the page, there are no console errors.
Hello, I have to manipulate some data from an API and send those events to Splunk. One set of the API data has to go to a normal index, but a subset has to go to a metrics index, which is defined as an input in the add-on configuration. However, when I try to send events to the metrics index, nothing shows up there. I have tried the following:
- Prepending "metric_name:" to the field name for the metric
- Making a new add-on that only sends data to metrics (very simple: create an event and send it)
- In that same add-on, creating the event and sending it to the index defined in the config, with my metrics index defined in that config
None of these worked. Is there a special way to send events to metrics indexes?
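If the data goes through HTTP Event Collector, metrics indexes expect the dedicated metrics payload shape rather than an ordinary event: the event body must be the literal string "metric", and the measurements go in "fields" with "metric_name:" prefixes. A sketch (the metric name, value, index, and dimension are invented for illustration):

```json
{
  "time": 1664995200,
  "event": "metric",
  "index": "my_metrics_index",
  "fields": {
    "metric_name:api.box_count": 4,
    "region": "us-east"
  }
}
```

Anything in "fields" without the metric_name: prefix becomes a dimension. If the add-on instead writes via the modular-input event writer, the same principle applies: the event must reach the metrics index in metric form, not as a plain log line.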
| makeresults count=1
| eval list_split_failure_1 = "fail:,searching old data:,searching new"
| eval list_split_failure_2 = "fail:,searching old ata:,searching new"
| eval list_split_success = "fail:,searching old qata:,searching old dta:,searching old ta:,searching new"
| eval list_split_failure_1 = split(list_split_failure_1, ",")
| eval list_split_failure_2 = split(list_split_failure_2, ",")
| eval list_split_success = split(list_split_success, ",")

Can someone help me understand why the split function fails for list_split_failure_1 and list_split_failure_2 but succeeds for list_split_success?
Isn't a hyphen a minor breaker? I'm wondering why values containing a hyphen get double-quoted when doing summary indexing; this breaks tstats TERM and PREFIX usage. Assume I have the following data:

_time: 2022-10-05 22:22:22
field1: what-not
field2: whatnot

This ends up in the summary event index as:

10/05/2022 22:22:22, field="what-not", field=whatnot

What have I missed when populating my summary index? :-)
Static data with one common field app Name as splunk query.
Right now we have:

Splunk Enterprise version 8.0.5.0
Java 8 Update 333
Java SE Development Kit 8 Update 291

Due to vulnerabilities, we need to update the Java version. Can we upgrade to the latest version, and if not, can you please tell me the latest version of Java we can upgrade to?
How do I specify the time zone in an alert search where I need to exclude a specific time period?
- I want to exclude the period from midnight to 12:20am UTC
- I want to be able to change my time zone in my preferences as needed
- I don't want to change the owner of my alert to "nobody"
After my basic search criteria I have this, which works as long as my profile is set to UTC:

| eval Hour=strftime(_time,"%H")
| eval Minute=strftime(_time,"%M")
| search NOT ( (Hour=00 AND Minute >= 00) AND (Hour=00 AND Minute <= 20) )
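Since _time is epoch seconds, which are anchored to UTC regardless of the viewer's profile time zone, one sketch of a time-zone-independent variant is to compute the minute-of-day arithmetically instead of via strftime (which renders in the profile's zone):

```spl
| eval min_utc = floor((_time % 86400) / 60)
| where NOT (min_utc >= 0 AND min_utc <= 20)
```

_time % 86400 is seconds since UTC midnight, so min_utc 0 through 20 covers 00:00:00 to 00:20:59 UTC, mirroring the Hour/Minute logic above without depending on any profile setting.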
I have a lookup with a field of time values (in 24-hour time, e.g. 00:30, 13:45, 23:15) that tells my dashboard the scheduled start time of jobs. I have a number of jobs that run hourly, and as such they need every hour as their start time (XX:00). I've tried adding wildcard functionality to the desired fields in the lookup definition like this:

WILDCARD(Field_Name_1),WILDCARD(Field_Name_2)

Unfortunately this has not worked as I'd hoped: the wildcards do not match every hour when searching for all jobs set to run this hour. Any ideas on how I can best implement this within the lookup?
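One thing worth double-checking (a sketch; the stanza, file, and field names below are assumptions): match_type lives in the lookup definition in transforms.conf, and the wildcard character must appear in the lookup rows themselves, e.g. a start time of *:00 for hourly jobs:

```
# transforms.conf (sketch; names are assumptions)
[job_schedule]
filename = job_schedule.csv
match_type = WILDCARD(start_time)

# job_schedule.csv rows would then look like:
#   job_name,start_time
#   nightly_backup,00:30
#   hourly_sync,*:00
```

With that in place, a lookup against start_time="14:00" should match both the exact 14:00 rows and the *:00 hourly rows.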
I know I can change the color of a .panel-title, but I haven't found the code to change the .input-label color. This works:

<panel id="panel2">
  <title>title</title>
  <input type="text" token="commentpicker">
    <label>Add any additional comments below</label>
    <default></default>
  </input>
  <html>
    <style>
      #panel2 .dashboard-panel .panel-title {
        color: #EC102B;
      }
    </style>
  </html>
</panel>

But this won't:

<input type="multiselect" token="newenabledpicker" searchWhenChanged="false" id="input1">
  <label>Modify or Disable (REQUIRED)?</label>
  <choice value="modify">Modify</choice>
  <choice value="disable">Disable</choice>
  <delimiter> </delimiter>
</input>
<html>
  <style>
    #input1 .dashboard-panel .input-label {
      color: #EC102B;
    }
  </style>
</html>
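One possibility worth trying (a guess, since the rendered DOM would need checking in the browser inspector): a standalone input may not be nested inside a .dashboard-panel element at all, so the descendant selector never matches anything. Dropping that class and targeting the label directly may help:

```
<html>
  <style>
    #input1 .input-label, #input1 label {
      color: #EC102B;
    }
  </style>
</html>
```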
I've discovered an issue with the WebLogic add-on (1.0.0) for Splunk, and I am having a hard time figuring out how to fix props.conf to parse the timestamp correctly. My Splunk instance is in CEST. There is a time-zone string after the timestamp; CEST and GMT are parsed correctly, but UTC is not.

Server #1 - correct:

####<Oct 5, 2022 5:00:21 PM CEST> <Info> ...
Parsed timestamp: 10/5/22 5:00:21.000 PM

Server #2 - correct:

####<Oct 5, 2022 3:24:11 PM GMT> <Info> ...
Parsed timestamp: 10/5/22 5:24:11.000 PM

Server #3 - incorrect:

####<Oct 5, 2022 4:30:23 PM UTC> <Info>
Parsed timestamp: 10/5/22 4:30:23.000 PM
Should be: 10/5/22 6:30:23.000 PM
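A sketch of an explicit timestamp spec in props.conf for this event shape (the sourcetype name is an assumption; %Z tells Splunk to read the zone abbreviation from the event):

```
# props.conf (sketch; the sourcetype name is an assumption)
[weblogic:log]
TIME_PREFIX = ^####<
TIME_FORMAT = %b %d, %Y %I:%M:%S %p %Z
MAX_TIMESTAMP_LOOKAHEAD = 40
```

If the bare "UTC" abbreviation still fails to parse even with %Z, a per-host TZ override (a [host::...] stanza with TZ = UTC for the affected server) is another avenue to test.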
Hello Splunkers!! We are seeing some internal communications and are wondering about their cause, and whether this is normal (not an error). Can anyone explain when this internal communication happens?

Interface: lo
Internal communications:
127.0.0.1:8089 => 127.0.0.1:XXXXX
127.0.0.1:XXXXX => 127.0.0.1:8089
127.0.0.1:25 => 127.0.0.1:XXXXX
172.16.18.23:XXXXX => 172.16.17.23:XXXXX

Thank you in advance for any help.
The new version of an app created using Splunk Add-on Builder 4.1.1 on Splunk Enterprise 9.0.1 breaks on upgrade because the old password constant in 'aob_py3/splunktaucclib/rest_handler/credentials.py' was six '*' characters and the new one is eight. It now expects the password setting in inputs.conf to use the new format, which makes the Input page fail to load, raising error 500. Any ideas on how to solve this problem so end users can seamlessly upgrade the app and keep their older input configurations? We are trying to work with this solution: https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-Addon-Builder-4-package-resetting-password-conf-entries/td-p/584187 But we don't like that we need to patch credentials.py.
So I've read the docs on how to properly format a monitor stanza on Windows, and I am trying to monitor a directory full of CSV files. Here's the stanza:

# Windows Log Processor
[monitor://C:\Users\user\Desktop\ICTExports\*.csv]
disabled = false
crcSalt = <SOURCE>
ignoreOlderThan = 2d
index = it_app_ict
sourcetype = csv

I added the crcSalt bit because without it, the files in the monitored directory were generating seekptr errors, since the first few lines of all the files are identical. And here's what appears in splunkd.log:

10-05-2022 10:25:25.768 -0500 INFO TailingProcessor [3624 MainTailingThread] - Parsing configuration stanza: monitor://C:\Users\user\Desktop\ICTExports\*.csv.
10-05-2022 10:25:25.768 -0500 INFO TailReader [3624 MainTailingThread] - State transitioning from 1 to 0 (initOrResume).
10-05-2022 10:25:25.768 -0500 INFO TailReader [3624 MainTailingThread] - State transitioning from 1 to 0 (initOrResume).
10-05-2022 10:25:25.768 -0500 INFO TailingProcessor [3624 MainTailingThread] - Adding watch on path: C:\Users\user\Desktop\ICTExports.

It's been ~5 minutes since I last restarted the service, and there's no further mention of the monitor path or any of the .csv files within it. There is one file that falls within the 2d period, so I'm expecting it to be read. What can I do? Thanks!
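One variation worth trying (a sketch of the same input, not a confirmed fix): monitor the directory itself and restrict matching with a whitelist instead of a wildcard in the path, which some monitor setups handle more reliably. Note also that ignoreOlderThan is based on file modification time and permanently ignores a file once it is skipped, so it may be safer to drop it while testing:

```
# inputs.conf (sketch)
[monitor://C:\Users\user\Desktop\ICTExports]
whitelist = \.csv$
disabled = false
crcSalt = <SOURCE>
index = it_app_ict
sourcetype = csv
```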