All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, sorry again for a beginner question. I want to add a drilldown to my TableView so that when I click a cell in Table A, Table B is shown, and when I click a cell in Table B, it is hidden again. I want to see how this is done in SplunkJS. Thank you.
I have the following log:

    data=123 params="{"limit":200,"id":["123"] someotherdata

How can I parse the params field into a table so that the final output is:

    data    params
    123     "{"limit":200,"id":["123"]

If I try `| table data params`, it ends up being:

    data    params
    123     {
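One possible extraction (a sketch; it assumes the params value itself never contains a space, as in the sample, so everything from the opening quote up to the next whitespace can be captured):

    | rex "params=(?<params>\"\{\S+)"
    | table data params

The automatic key=value extraction stops at the inner quote, which is why the built-in params field only contains `{`.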
Is it possible to use the collect command to send data to multiple different summary indexes? For example, let's say my search produces the following results:

    date        org          field1    field2    field3
    03-15-22    Finance      valueA1   ValueA2   ValueA3
    03-15-22    Maintenance  valueB1   ValueB2   ValueB3

I want to use collect to send the results for org=Finance to a specific summary index (FinanceSummary) and similarly send the results for org=Maintenance to another summary index (MaintenanceSummary). The syntax I have for the collect command is:

    | collect index=[the target summary index]

My question is: is there a way I can do something like this?

    | where org=Finance | collect index=FinanceSummary
    | where org=Maintenance | collect index=MaintenanceSummary

I was not sure if this was possible and was hoping to check before I pollute my summary indexes with bad results. Unfortunately, the documentation does not explicitly address this question: https://docs.splunk.com/Documentation/Splunk/8.2.5/SearchReference/Collect

Thanks in advance!
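As written, the two where clauses cannot both work in one pipeline: after `| where org=Finance`, the Maintenance rows are gone, so the second collect receives nothing. One pattern that keeps a single search (a sketch; note that each appendpipe also appends the filtered rows back onto the displayed results, which may or may not matter for a scheduled search):

    ... base search ...
    | appendpipe [ where org="Finance" | collect index=FinanceSummary ]
    | appendpipe [ where org="Maintenance" | collect index=MaintenanceSummary ]

The simpler alternative is two scheduled searches, each with its own where filter and collect target.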
I have a search in which I segregated the results into 1-hour spans using:

    | bin _time span=1h

I use the predict command to compare the actual data captured to the predicted values. I would like Splunk to check the results hourly and alert me if Actual_Percent < Predicted_Percent. I also want to evaluate only results that fall within specific hours of the day, so I added:

    | eval date_hour=strftime(_time, "%H")
    | search date_hour>=8 date_hour<=23
    | where Actual_Percent < Predicted_Percent

Now I have 3 columns of data:

    _time    Actual_Percent    Predicted_Percent
    8:00     60                58
    9:00     75                80
    10:00    85                80
    11:00    90                95

I need an alert per individual time slot as the job executes, so the alert should trigger for any slot where Actual_Percent < Predicted_Percent (in this case 9:00 and 11:00), but I don't want repeated alerts for a slot that has already alerted. If I set the alert to send an email whenever the result count is greater than 0, it sends an email as soon as it first sees a matching result (i.e. 9:00) and keeps sending them for the rest of the day. I want only one alert per time slot where Actual_Percent < Predicted_Percent. Is there a way to restrict the where statement to only look at data from the past 1-hour time slot?
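A sketch of scoping each scheduled run to the most recent complete hour by snapping the time range, so every slot is evaluated exactly once (this assumes the comparison fields can be computed from that window):

    earliest=-1h@h latest=@h ... base search ...
    | bin _time span=1h
    | eval date_hour=tonumber(strftime(_time, "%H"))
    | where date_hour>=8 AND date_hour<=23
    | where Actual_Percent < Predicted_Percent

The tonumber() matters because strftime returns a zero-padded string ("08"), and string-versus-number comparisons can behave unexpectedly. Alert throttling (under Trigger Conditions) is another way to suppress repeats for a slot that has already fired.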
Dear Community,

I am looking for a way to add a static and a dynamic value at the end of a search to track the status of the (saved) search. I would like the dynamic value to be extracted from a CSV file.

    | ...base search...
    | table index, sourcetype, _time .....
    | append
        [ makeresults
        | eval status="completed"
        | eval ID = missionID<field from input.csv> ]

Any help is appreciated.
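makeresults cannot read a file, but a subsearch can start with inputlookup. A sketch, assuming input.csv is uploaded as a lookup file and contains a missionID column (both names taken from the question, otherwise hypothetical):

    | ...base search...
    | table index, sourcetype, _time
    | append
        [| inputlookup input.csv
         | eval status="completed"
         | rename missionID as ID
         | table status, ID ]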
I've got an alert I put together and am trying to rex multiple pieces of it out to their own columns. This is against the Splunk internal logging. I had no problem pulling errorCode, since it has a clearly defined field-within-a-field, but I'm not able to pull a substring from another part of the message.

Query:

    index=_internal sourcetype=sfdc:object:log log_level=ERROR OR log_level=WARNING
    | rex "\"errorCode\":\"(?<errorCode>[^\s]+)\""
    | stats count(stanza_name) by stanza_name, log_level, errorCode, message

I've got message at the end just to give me the query error, but what I'd like to do is rex that out as its own column too, like I did for errorCode. Below is a sample message, with the part in bold being what I'd like to rex out to its own column. I can't find an example of doing that where there isn't a clear delineation within the message like "errorCode":"<error>".

    [{"message":"\nFoo,Bar,FooBar,FooBar2\n ^\nERROR at Row:1:Column:232\nNo such column 'FooBar2' on entity 'MyAwesomeObject'. If you are attempting to use a custom field, be sure to append the '__c' after the custom field name. Please reference your WSDL or the describe call for the appropriate names.","errorCode":"INVALID_FIELD"}]
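If the target is the "No such column ... on entity ..." detail, the literal surrounding text is itself a usable delimiter. A sketch (the capture-group names are hypothetical):

    | rex "No such column '(?<missing_column>[^']+)' on entity '(?<entity>[^']+)'"

For the sample message this yields missing_column=FooBar2 and entity=MyAwesomeObject; events without that phrasing simply leave the fields null.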
Hi Team,

We have Splunk Cloud in the production environment (indexer and search head), and the customer now wants a UAT environment on-premises (indexer and search head) for testing purposes. The purpose of the UAT environment is to test all log source onboarding, use case development, and testing in UAT first, and then promote to the production Splunk Cloud.

So my questions are:

1. Are there any challenges, issues, or dependencies with respect to apps, add-ons, or anything else if we have on-premises ES in the UAT environment?
2. What is the exact difference between Splunk Cloud and Splunk ES on-premises? We have Splunk Cloud in the production environment, and the customer wants Splunk ES on-premises for UAT purposes.
After a successful saved-search run, the results can be found in the directory `$SPLUNK_HOME/var/run/splunk/dispatch/scheduler__...`. We know that the result of the search is named `results.csv.gz`. How do we read this with OS-level tools? Untarring it using `tar -xzvf` does not work.

Thanks
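The file is a gzip-compressed CSV rather than a tar archive, which is why `tar -xzvf` fails; plain gzip decompression is what applies here. If the goal is just to reuse a previous run's results from inside Splunk instead of on disk, `loadjob` may also be worth a look (the saved search name below is a placeholder):

    | loadjob savedsearch="admin:search:my_saved_search"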
I was looking to implement a search described in this article: threathunting-spl/Detecting_Beaconing.md at master · inodee/threathunting-spl · GitHub

TL;DR: The above link shows a search that lets a data source such as firewall connection data be used to identify connections that are suspiciously uniform in their intervals. It uses both streamstats and eventstats to calculate the standard deviation of the time intervals between connections for each unique combination of src and dst IPs.

The issue is that the data I would be using is enormous - I'm looking to run the above search across 24 hours of data, but it fails due to memory limits. I do have an accelerated data model for the filtered firewall data, but I don't know how to combine tstats with the search in the above link. Summary indexing wouldn't work, since I still want the stats calculated over the larger time frame (i.e. running the search every hour over the last hour of data might miss whether a connection truly has a low standard deviation).

Has anyone successfully combined tstats with streamstats/eventstats, or built a search that works around just how resource-intensive this one is?
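A sketch of one way to chain them, assuming a CIM-style Network_Traffic data model (the model name, field names, and thresholds below are assumptions; adjust to the actual accelerated model). Because tstats pre-aggregates per second, streamstats sees far fewer rows than the raw events:

    | tstats summariesonly=true count from datamodel=Network_Traffic
        by _time span=1s, All_Traffic.src, All_Traffic.dest
    | rename "All_Traffic.*" as *
    | streamstats current=f last(_time) as prev_time by src, dest
    | eval delta=_time - prev_time
    | eventstats avg(delta) as avg_delta, stdev(delta) as stdev_delta, count as conn_count by src, dest
    | where stdev_delta < 5 AND conn_count > 10

The final thresholds are illustrative only; the linked article's own scoring logic can be substituted after the eventstats line.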
I have an accelerated data model with a field created via a lookup. What I need is for the field to be populated at the time the acceleration search runs, rather than at the time I consume the data in my query. The lookup file is re-created each day, so what matters to me is that the value in the field reflects the lookup file as it was when the acceleration ran, not when I query it. Is that possible, or even the default behaviour?

Thanks
Hello,

We are currently working with two sets of data that have similar fields. We would like to align matching events in one row (payment amount, category/source, and account number) while also keeping the values that do not match, for failed processing. Below are some screenshots of what the data looks like now in four rows, as well as what we're hoping to visualize in three rows. Any assistance would be greatly appreciated!

Below is our current search:

    index="index1" Tag="Tag1"
    | stats values(PaymentAmount) as PaymentAmount by PaymentChannel, AccountId, PaymentCategory, ResponseStatus, StartDT
    | rename AccountId as AccountNumber
    | rename PaymentChannel as A_PaymentChannel
    | rename PaymentCategory as A_PaymentCategory
    | rename ResponseStatus as A_ResponseStatus
    | rename StartDT as A_Time
    | append
        [ search index="index2" sourcetype="source2"
        | rename PaymentAmount as M_PayAmt
        | eval PayAmt = tonumber(round(M_PayAmt,2))
        | rex field=source "M_(?<M_Source>\w+)_data.csv"
        | rename "TERMINAL ID" as M_KioskID
        | rename "ResponseStatus" as "M_ResponseStatus"
        | rename "KIOSK REPORT TIME" as M_Time
        | eval _time = strptime(M_Time,"%Y-%m-%d %H:%M:%S.%3Q")
        | addinfo
        | where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")
        | stats values(PayAmt) as M_PayAmt latest(M_Time) by AccountNumber, M_Source, M_ResponseStatus, M_KioskID
        | rename latest(M_Time) as M_Time
        | table M_PayAmt, AccountNumber, M_Source, M_KioskID, M_ResponseStatus, M_Time
        | mvexpand M_PayAmt ]
    | eval A_PaymentTotal = "$" + PaymentAmount
    | eval M_PayAmt = "$" + M_PayAmt
    | eval joiner = AccountNumber
    | table AccountNumber, A_PaymentChannel, M_KioskID, A_PaymentCategory, M_Source, A_PaymentTotal, M_PayAmt, A_ResponseStatus, M_ResponseStatus, A_Time, M_Time
    | eval M_PayAmt=if(isnull(M_PayAmt),"Unknown",M_PayAmt)
    | eval A_PaymentTotal=if(isnull(A_PaymentTotal),"Unknown",A_PaymentTotal)
    | eval A_Time=if(isnull(A_Time), M_Time, A_Time)
    | eval M_Time=if(isnull(M_Time), A_Time, M_Time)
    | sort - M_Time
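A sketch of one way to merge the matched rows after the append (it assumes that when a payment appears in both sources, the dollar-formatted amounts are equal, so the amount can serve as part of the group key):

    ...
    | eval match_amt=coalesce(A_PaymentTotal, M_PayAmt)
    | stats values(*) as * by AccountNumber, match_amt

Rows where both sources agree on account and amount collapse into one; an amount present in only one source keeps its own row, preserving the failed-processing cases.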
I've created an alert for Account Expired. However, the triggered alert disappears when I restart Splunk. Is there any way to prevent this alert from disappearing? Any config setting?

In case you wanted to know, the alert information is:

- Settings:
    - Alert Type = Scheduled
    - Runs every day at 23:00
    - Expires 24 hours
- Trigger Conditions:
    - Trigger alert when Number of Results is greater than 0
    - Trigger Once
- Trigger Action:
    - Add to Triggered Alerts with Severity Critical
I have a dashboard with a multiselect that is populated dynamically using a search. When "All" is selected, I'm setting a different token to "*" (I'm using the hack found here to remove "All"). My multiselect is populated with choices dependent on other inputs. I want "All" to effectively be all of the current choices, as opposed to "*", since "*" can go beyond the scope of the available choices. Is there any way I can do that - set a different token to all of the choices every time "All" is selected?
I know this is a commonly asked question due to its complexity, but I cannot figure out how to get emails to send via a Splunk alert. I created a simple search to find a specific string and created an alert with the following:

- App: Search
- Permissions: Private. Owned by admin.
- Alert Type: Real-Time
- Trigger Condition: Per-Result
- Actions: Send email / Add to Triggered Alerts

I see it being triggered, but it never sends the email. I've tried sending it to two different email addresses: one to my work email, and another to my phone as a text (phoneNumber@mms.att.net); neither works. The trigger appears in the list, though. I have tried multiple mail hosts in the configuration, but the current one is the default that appeared when I opened it:

- Mail host: smtp-mail.outlook.com:587
- Email security: I have tried all three options
- No user/pass currently configured
- Allowed Domains: mms.att.net
- Send Emails As: SplunkAlert@test.edu

I've been sifting through the Splunk documentation for hours now and can't seem to get it right. Any ideas? Thanks
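When the trigger fires but no mail arrives, the SMTP errors usually land in Splunk's own logs. A sketch of a troubleshooting search (the exact source of the sendemail log lines varies somewhat by version):

    index=_internal (sendemail OR sendmodalert) (ERROR OR WARN)

An authentication error here would point at the empty user/pass against smtp-mail.outlook.com:587, which typically requires authenticated, TLS-enabled submission.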
Hello,

I'm trying to create a summary index. I scheduled a search and edited its summary index settings, but I cannot run a new search over the results that the scheduled search has already produced.
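A sketch of reading back summary-indexed results, assuming the default summary index and that the scheduled search has already run at least once (the search name is a placeholder; the saved search's name usually lands in the source and/or search_name field of the written events):

    index=summary source="my_scheduled_search"

This is also the easiest way to confirm whether anything has been written to the summary index at all.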
We log job status messages in Splunk. When a job runs successfully, a success message is logged. When a job errors out, an error message is logged. Both types of messages include hostname as a field. But when the underlying service fails to run a job, no message is logged.

I need to find hostnames that are missing success messages. If I could use dataset literals, I might search something like this:

    | FROM <list of expected hostnames as dataset literal> NOT [subsearch for success message hostnames]

But Splunk Cloud Platform apparently does not support the use of dataset literals, so I've resorted to a more convoluted process using stats, as suggested by several Internet authors:

    <search for success message hostnames>
    | eval expected = split("<list of expected hostnames>"," ")
    | stats values(hostname) as hostname by expected
    | where NOT match(hostname,expected)

This approach works if some, but not all, expected hostnames are missing. However, in the case where all the expected hostnames are missing, the search comes back empty. I understand why it comes back empty. What I need is a "correct" way to find these missing hostnames that works in all cases.
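A pattern that still returns rows when every host is missing is to generate the expected list itself as the base search, then subtract the hosts that did log success (the hostnames below are placeholders):

    | makeresults
    | eval hostname=split("host1 host2 host3", " ")
    | mvexpand hostname
    | search NOT [ search <search for success message hostnames>
        | stats count by hostname
        | fields hostname ]

Because the base rows come from makeresults rather than from the success events themselves, the search returns all expected hostnames even when none of them logged success.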
Hi all,

I was wondering if someone could help with a sort-ordering issue I have. I am looking for a way to sort the instance names of my computers alphanumerically, like this:

    a100pc1
    a100pc2
    a100pc3
    a100pc10
    a100pc20

instead of lexicographically, like this:

    a100pc1
    a100pc10
    a100pc2
    a100pc20
    a100pc3
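A sketch of a numeric-aware sort, assuming every name ends in a run of digits (the field name instance is a placeholder):

    ...
    | rex field=instance "^(?<base>.*?)(?<num>\d+)$"
    | eval num=tonumber(num)
    | sort 0 base, num
    | fields - base, num

Splitting each name into its text prefix and trailing number lets sort compare the numeric part as a number, producing a100pc2 before a100pc10.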
Hi,

I'm developing an app that supplies a scripted input to Splunk. When it's run (on Linux machines), it reads the session key from stdin:

    import sys

    session_key = sys.stdin.readlines()[0]

This does not seem to work for Windows-based deployments. Does anyone have an idea of how to do this on Windows?
In Splunk ES we have correlation searches creating notable events. The timestamp of the notable event, and thus the timestamp of the incident in "Incident Review", is the time at which the correlation search ran. Is there any way to change this timestamp to a custom one, i.e. the time of the actual log event in Splunk that triggered the notable event?

I know one solution is to run the correlation search very often, like every minute, which would make the timestamps quite precise, but not perfect, and this would not be optimal for performance. We could also change the default time parsing of notable events in Splunk ES and use our own time field, e.g. "my_time_field", for time parsing instead, but then all out-of-the-box correlation searches in Splunk ES would stop working properly, so it is in general not a good solution.

We've made a temporary workaround by adding a new "Incident Review Event Attribute" field called "Alert Time", which adds the "real" timestamp to the incidents, but it's not ideal, as the time of the incident itself is still the same. Is there any other way?
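One thing that may be worth testing (a sketch only; event_time stands in for whatever field carries the triggering event's own timestamp, and whether the notable actually inherits this depends on the ES version): overriding _time in the correlation search's results before the notable is created.

    ... correlation search ...
    | eval _time=event_time

If the notable framework picks _time up from the result rows, the incident would carry the original event's timestamp without changing ES-wide time parsing.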
Good afternoon,

I am attempting to create a panel that shows me the unique URIs that have been accessed by a specific IP, with counts associated with each URI. I'm trying to get it to tell me something like this: 10.20.30.40 accessed www<.>google<.>com 40 times.

Here is my current query:

    index=nsm
    | stats list(uri) by src_ip

This displays what I want, but with duplicates, and it provides no counts. I tried adding | dedup with it, which shows everything only once, but again no counts.

    index=nsm
    | chart count by src_ip, uri

This provides the information I'm looking for, but the display is not ideal, and it doesn't show all URIs since it caps at OTHER.

Any information would be greatly appreciated.
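A sketch that keeps one row per source IP while attaching a count to each URI (field names are taken from the question):

    index=nsm
    | stats count by src_ip, uri
    | eval uri_count=uri." (".count.")"
    | stats list(uri_count) as accessed_uris by src_ip

Each src_ip row then lists every distinct URI with its hit count, e.g. www<.>google<.>com (40). If the chart form is preferred instead, `| chart count by src_ip, uri limit=0 useother=f` lifts the OTHER cap.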