All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi everyone, I am trying to pull a result per customer showing the URLs he/she has visited, ordered by time. I did something like this and got a result, but it is in alphabetical order, whereas what I am looking for is time order:

my_search | transaction user_id startswith=http_uri="/" endswith=http_uri="random.html" | table user_id http_uri

Also, is there any other way to do this besides transaction? I am not sure. Please guide me on how this can be achieved. Thank you.
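A hedged sketch of one transaction-free alternative, assuming each event carries _time, user_id, and http_uri (note that stats list() preserves arrival order but caps at 100 values per user):

```spl
my_search
| sort 0 user_id _time
| stats list(http_uri) as visited_urls by user_id
```

The startswith/endswith boundaries from the original transaction would still need separate filtering if they matter.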
How do I enable the Splunk App for AWS with the Detailed Billing Report with resources and tags? I want to automate this on a monthly basis. Please tell me the step-by-step procedure.
In our enterprise, another team has already set up Splunk search heads and indexers in their own AWS account (say A). We are planning to index and store new data in our AWS account (say B). For our dashboards, we would also like to pull in data indexed in account A, so we are trying to determine the best approach here.

1. Is it possible to set up search heads in account B and attach indexers to them from both account B and account A?
1.1. In that case, would the existing setup in account A be affected in any way?

Overall, is it possible to share indexers across multiple AWS accounts while each team keeps its own search heads and dashboard UI? As we are different teams, we would like independence in maintaining our dashboards and Splunk Enterprise instances, and we also do not want to share confidential indexed data.

The documentation below lists commands to edit the indexer cluster configuration, but not how to add a new search head from another AWS account, so it would be helpful to know whether sharing indexers across AWS accounts is possible. https://docs.splunk.com/Documentation/Splunk/8.2.5/DistSearch/SHCandindexercluster
Hi, in my dashboard I use two similar searches. In the first, I am doing a dc of "s":

index=test earliest=@d+7h latest=@d+19h
| search rtt >= 150
| stats count as Pb by s
| search Pb >= 5
| stats dc(s)

The result is 12. In the second search, I use the same base search, but I need to group events by "s" and also by _time:

index=test type=* earliest=@d+7h latest=@d+19h
| bin span=1h _time
| search rtt >= 150
| stats count as Pb by s _time
| search Pb >= 5
| timechart dc(s) as sam span=1h
| where _time < now()
| eval time = strftime(_time, "%H:%M")
| stats sum(s) as nbs by time
| rename time as Heure

The problem I have is that the result is not 12 but 6. Why can't I retrieve the same result as in the first search, please?
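For comparison, a hedged sketch of the second search with the final sum pointed at the field timechart actually produces (sam, not s). Note also that the two searches measure different things: the per-hour threshold Pb >= 5 is stricter than the per-day one, and summing hourly distinct counts need not equal the overall dc, since the same s can be counted in several hours:

```spl
index=test type=* earliest=@d+7h latest=@d+19h
| bin span=1h _time
| search rtt >= 150
| stats count as Pb by s _time
| search Pb >= 5
| timechart dc(s) as sam span=1h
| where _time < now()
| eval Heure = strftime(_time, "%H:%M")
| stats sum(sam) as nbs by Heure
```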
Hi Team, We noticed that the page below is no longer available: https://www.splunk.com/en_us/product-security/announcements-archive.html Can you share the new link with us?
Hi everyone, I'm on the Victoria Experience and want to perform a self-installation of the Microsoft Sentinel Add-On for Splunk. However, I cannot see the app in the self-installation list. How can I get it installed?
Can someone please explain what "expires" does when setting up an alert? I cannot find an explanation in the manuals I searched.
Dears, I have installed Splunk UF v8.1.3 on a Solaris SPARC server running v11.5. We are not getting any logs from those servers apart from _internal logs. We did the checks below:
1. Connection is fine: telnet connects.
2. splunkd.log shows it connects to the HF, then the connection is refused after a few seconds.
3. The directory path in inputs.conf is correct.
4. Nothing was found in the HF audit log.
5. Firewall logs show server reset and client reset.
6. Debug logs were collected and shared with the support team; no root cause was found.
Can you please help with this? What could be the issue? Is there any configuration that needs to be modified? BR, Jakir
I have configured an SMTP domain; why does the alert mailbox not receive any emails?
Hello, sorry again for a beginner question. I want to add a drilldown to my table view so that when I click a cell in Table A, Table B is shown; and if I click a cell in Table B, it is hidden. I want to see how this is done in Splunk JS. Thank you.
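A sketch of one way to do this with a SimpleXML JavaScript extension, using a token to show and hide the second panel. The component ids tableA/tableB and the token name show_b are assumptions; Table B's panel would declare depends="$show_b$" in the dashboard XML:

```javascript
require([
    "splunkjs/mvc",
    "splunkjs/mvc/simplexml/ready!"
], function (mvc) {
    var tokens = mvc.Components.get("default");
    var tableA = mvc.Components.get("tableA");  // id of Table A in the XML
    var tableB = mvc.Components.get("tableB");  // id of Table B in the XML

    // Clicking a cell in Table A sets the token, which reveals Table B
    // because its panel declares depends="$show_b$".
    tableA.on("click", function (e) {
        e.preventDefault();  // suppress the default drilldown
        tokens.set("show_b", "true");
    });

    // Clicking a cell in Table B unsets the token, hiding Table B again.
    tableB.on("click", function (e) {
        e.preventDefault();
        tokens.unset("show_b");
    });
});
```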
I have the following log:

data=123 params="{"limit":200,"id":["123"] someotherdata

How can I parse the params field into a table so that the final output is:

data    params
123     "{"limit":200,"id":["123"]

If I try:

| table data params

it ends up being:

data    params
123     {
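A hedged sketch: since the params value in the sample contains no spaces, a rex over _raw can grab everything from params= up to the next whitespace (your_search is a placeholder):

```spl
your_search
| rex field=_raw "params=(?<params>\S+)"
| table data params
```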
Is it possible to use the collect command to send data to multiple different summary indexes? For example, let's say my search produces the following results:

date      org          field1    field2    field3
03-15-22  Finance      valueA1   ValueA2   ValueA3
03-15-22  Maintenance  valueB1   ValueB2   ValueB3

I want to use collect to send the results for org=Finance to a specific summary index (FinanceSummary), and similarly send the results for org=Maintenance to another summary index (MaintenanceSummary). The syntax I have for collect is:

| collect index=[the target summary index]

My question is whether there is a way I can do something like:

| where org=Finance | collect index=FinanceSummary | where org=Maintenance | collect index=MaintenanceSummary

I was not sure if this was possible and wanted to check before I pollute my summary indexes with bad results. Unfortunately, the documentation itself does not explicitly address this question: https://docs.splunk.com/Documentation/Splunk/8.2.5/SearchReference/Collect Thanks in advance!
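One commonly suggested pattern (a hedged sketch, untested here) wraps each filter-plus-collect in appendpipe, so each collect sees only its own subset while the main result stream continues unfiltered; note that appendpipe also appends the subpipeline's rows to the visible output:

```spl
... base search ...
| appendpipe [ | where org="Finance"     | collect index=FinanceSummary ]
| appendpipe [ | where org="Maintenance" | collect index=MaintenanceSummary ]
```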
I have a search in which I segregated the results into one-hour spans using:

| bin _time span=1h

I use the predict command to compare the results from the search against the predicted values for the actual data captured. I would like to have Splunk check the results hourly and alert me if Actual_Percent < Predicted_Percent. I would like to evaluate only results that fall within specific hours of the day, so I added:

| eval date_hour=strftime(_time, "%H")
| search date_hour>=8 date_hour<=23
| where Actual_Percent < Predicted_Percent

Now I have three columns of data:

_time   Actual_Percent   Predicted_Percent
8:00    60               58
9:00    75               80
10:00   85               80
11:00   90               95

I need to get an alert per individual time slot as the job executes: the alert should trigger for any value where Actual_Percent < Predicted_Percent (in this case 9:00 and 11:00), but I don't want new alerts subsequent to the original alert for that time slot. If I set up the alert to send an email on any result count greater than 0, it will send an email as soon as it first sees a result set matching the criteria (i.e. 9:00), and will continue throughout the rest of the day. However, I want only one alert per time slot where Actual_Percent < Predicted_Percent. Is there a way to restrict the "where" statement to only look at data for the past one-hour time slot?
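A hedged sketch of restricting the comparison to the most recently completed hour, assuming the alert is scheduled hourly shortly after the top of the hour:

```spl
... base search ...
| where _time >= relative_time(now(), "-1h@h") AND _time < relative_time(now(), "@h")
| where Actual_Percent < Predicted_Percent
```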
Dear community, I am looking for a way to add a static and a dynamic value at the end of a search to track the status of the (saved) search. I would like the dynamic value to be extracted from a CSV file.

|...base search...
| table index, sourcetype, _time.....
| append
    [ makeresults
    | eval status="completed"
    | eval ID = missionID<field from input.csv> ]

Any help is appreciated.
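A hedged sketch using inputlookup instead of makeresults, assuming the CSV is available as a lookup named input.csv with a missionID column:

```spl
|...base search...
| table index, sourcetype, _time
| append
    [| inputlookup input.csv
     | eval status="completed"
     | table missionID status]
```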
I've got an alert I put together, and I am trying to rex multiple pieces of it out into their own columns. This is against the Splunk internal logging. I had no problem pulling errorCode, since it has a clearly defined field-within-a-field, but I'm not able to pull a substring of another part of the message.

Query:

index=_internal sourcetype=sfdc:object:log log_level=ERROR OR log_level=WARNING
| rex "\"errorCode\":\"(?<errorCode>[^\s]+)\""
| stats count(stanza_name) by stanza_name, log_level, errorCode, message

I've got message at the end just to give me the query error, but what I'd like to do is rex it as well, like I did to get errorCode as its own column. Below is a sample message, with the part in bold being what I'd like to rex out into its own column. I can't find an example of doing that where there isn't a clear delineation within the message like "errorCode":"<error>".

[{"message":"\nFoo,Bar,FooBar,FooBar2\n ^\nERROR at Row:1:Column:232\nNo such column 'FooBar2' on entity 'MyAwesomeObject'. If you are attempting to use a custom field, be sure to append the '__c' after the custom field name. Please reference your WSDL or the describe call for the appropriate names.","errorCode":"INVALID_FIELD"}]
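A hedged sketch that anchors on the literal phrases visible in the sample message (so it only matches this particular error shape; the capture names bad_column and entity are illustrative):

```spl
index=_internal sourcetype=sfdc:object:log (log_level=ERROR OR log_level=WARNING)
| rex "\"errorCode\":\"(?<errorCode>[^\s]+)\""
| rex field=message "No such column '(?<bad_column>[^']+)' on entity '(?<entity>[^']+)'"
| stats count by stanza_name, log_level, errorCode, bad_column
```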
Hi Team, we have Splunk Cloud in the production environment (indexers and search heads), and the customer now wants a UAT environment on premises (indexers and search heads) for testing purposes. The purpose of the UAT environment is to test all log source onboarding, use case development, and testing in UAT first, and then roll it out to the production Splunk Cloud. So my questions are:
1. Are there any challenges, issues, or dependencies with respect to apps, add-ons, or anything else if we have on-premises ES in the UAT environment?
2. What is the exact difference between Splunk Cloud and Splunk ES on premises? We have Splunk Cloud in the production environment and the customer wants Splunk ES on premises for UAT purposes.
After a successful saved-search run, the results can be found in the directory `$SPLUNK_HOME/var/run/splunk/dispatch/scheduler__...`. We know that the result of the search is named `results.csv.gz`. How do we read this at the OS level? Untarring it using `tar -xzvf` does not work. Thanks
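`tar -xzvf` fails because `results.csv.gz` is a plain gzip-compressed CSV, not a tar archive; at the shell, `zcat results.csv.gz` (or `gunzip -k`) is enough. A minimal, self-contained Python sketch (the sample rows here are made up; in practice, point `path` at the real dispatch file):

```python
import csv
import gzip

# Illustrative only: write a small sample results.csv.gz so the example
# runs standalone. In practice, set `path` to the file under
# $SPLUNK_HOME/var/run/splunk/dispatch/scheduler__.../results.csv.gz
path = "results.csv.gz"
with gzip.open(path, "wt", newline="") as f:
    csv.writer(f).writerows([["_time", "count"],
                             ["2022-03-15T10:00:00", "42"]])

# An ordinary gzip-compressed CSV: gzip.open in text mode feeds
# straight into csv.reader, no tar extraction involved.
with gzip.open(path, "rt", newline="") as f:
    rows = list(csv.reader(f))

print(rows)  # [['_time', 'count'], ['2022-03-15T10:00:00', '42']]
```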
I was looking to implement a search described in this article: threathunting-spl/Detecting_Beaconing.md at master · inodee/threathunting-spl · GitHub

TL;DR: the link above shows a search that lets a data source like firewall connection data be used to identify connections that are suspiciously uniform in their intervals. It uses both streamstats and eventstats to calculate the standard deviation of the time intervals between connections for unique combinations of source and destination IPs.

The issue is that the data I would be using is enormous: I'm looking to run the above search on 24 hours' worth of data, but it fails due to memory limits. I do have an accelerated data model for the filtered firewall data, but I don't know how to combine tstats with the search in the link above. Summary indexing wouldn't work, since I would still want the stats calculations over the larger time frame (i.e., running the search every hour over the last hour of data might miss whether a connection truly does have a low standard deviation). Has anyone successfully combined tstats with streamstats/eventstats, or built a search that works around just how resource-intensive this is?
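A hedged sketch of feeding tstats output into streamstats/eventstats. The data model and field names (Network_Traffic, All_Traffic.src, All_Traffic.dest) are illustrative CIM-style assumptions, not taken from the article:

```spl
| tstats summariesonly=true count
    from datamodel=Network_Traffic
    where nodename=All_Traffic
    by _time span=1s All_Traffic.src All_Traffic.dest
| rename All_Traffic.src as src All_Traffic.dest as dest
| sort 0 src dest _time
| streamstats current=f last(_time) as prev_time by src dest
| eval delta = _time - prev_time
| eventstats stdev(delta) as delta_stdev avg(delta) as delta_avg by src dest
```

Aggregating with tstats first shrinks the event volume before the window functions run, which is usually what relieves the memory pressure.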
I have an accelerated data model with a field created using a lookup. What I need is for the field to be populated at the time the acceleration search runs, rather than at the time I consume the data in my query. The lookup file is re-created each day, so the important thing for me is that the value in the field is the one the lookup file held at the time the summary was built, not when I use it. Is that possible, or even the default behaviour? Thanks
Hello, we are currently working with two sets of data that have similar fields. We would like to align matching events into one row (payment amount, category/source, and account number) while also keeping the values that do not match, for failed processing. Below are some screenshots of what the data looks like now in four rows, as well as what we're hoping to visualize in three rows. Any assistance would be greatly appreciated!

Below is our current search:

index="index1" Tag="Tag1"
| stats values(PaymentAmount) as PaymentAmount by PaymentChannel, AccountId, PaymentCategory, ResponseStatus, StartDT
| rename AccountId as AccountNumber
| rename PaymentChannel as A_PaymentChannel
| rename PaymentCategory as A_PaymentCategory
| rename ResponseStatus as A_ResponseStatus
| rename StartDT as A_Time
| append
    [search index="index2" sourcetype="source2"
    | rename PaymentAmount as M_PayAmt
    | eval PayAmt = tonumber(round(M_PayAmt,2))
    | rex field=source "M_(?<M_Source>\w+)_data.csv"
    | rename "TERMINAL ID" as M_KioskID
    | rename "ResponseStatus" as "M_ResponseStatus"
    | rename "KIOSK REPORT TIME" as M_Time
    | eval _time = strptime(M_Time,"%Y-%m-%d %H:%M:%S.%3Q")
    | addinfo
    | where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")
    | stats values(PayAmt) as M_PayAmt latest(M_Time) by AccountNumber, M_Source, M_ResponseStatus, M_KioskID
    | rename latest(M_Time) as M_Time
    | table M_PayAmt, AccountNumber, M_Source, M_KioskID, M_ResponseStatus, M_Time
    | mvexpand M_PayAmt]
| eval A_PaymentTotal = "$" + PaymentAmount
| eval M_PayAmt = "$" + M_PayAmt
| eval joiner = AccountNumber
| table AccountNumber, A_PaymentChannel, M_KioskID, A_PaymentCategory, M_Source, A_PaymentTotal, M_PayAmt, A_ResponseStatus, M_ResponseStatus, A_Time, _Time
| eval M_PayAmt=if(isnull(M_PayAmt),"Unknown",M_PayAmt)
| eval A_PaymentTotal=if(isnull(A_PaymentTotal),"Unknown",A_PaymentTotal)
| eval A_Time=if(isnull(A_Time), M_Time, A_Time)
| eval M_Time=if(isnull(M_Time), A_Time, M_Time)
| sort by M_Time desc
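One common pattern (a hedged sketch, not a tested fix for the search above) replaces the final table with a stats rollup by the shared key, so rows from both sources that share an AccountNumber collapse into one while unmatched values survive as empty cells:

```spl
... both result sets appended as above ...
| stats values(A_PaymentChannel) as A_PaymentChannel
        values(M_KioskID) as M_KioskID
        values(A_PaymentCategory) as A_PaymentCategory
        values(M_Source) as M_Source
        values(A_PaymentTotal) as A_PaymentTotal
        values(M_PayAmt) as M_PayAmt
        values(A_ResponseStatus) as A_ResponseStatus
        values(M_ResponseStatus) as M_ResponseStatus
        by AccountNumber
```

If one account can legitimately have several payments, the amount (or a transaction id) would also need to go into the by clause to keep them apart.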