All Topics


This was my original query to get the list of APIs that failed for a client. I have more details about the client in the lookup table. How can I include those in the `chart`?

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* returncode=Error
| rex field=message "Message=.* \((?<apiName>\w+?) -"
| lookup My_Client_Mapping client OUTPUT ClientID ClientName Region
| chart count over ClientName by apiName

This shows the data like:

ClientName | RetrievePaymentsA | RetrievePaymentsB | RetrievePaymentsC
Client A | 2 | 1 | 4
Client B | 2 | 0 | 3
Client C | 5 | 3 | 1

How can I add other fields to the output, like this?

ClientId | ClientName | Region | RetrievePaymentsA | RetrievePaymentsB | RetrievePaymentsC

Any help will be appreciated.
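A common pattern here (a sketch, untested against your data, reusing the field and lookup names from the question) is to fold the extra lookup fields into the chart's row key with eval, run the chart, then split the key back out:

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* returncode=Error
| rex field=message "Message=.* \((?<apiName>\w+?) -"
| lookup My_Client_Mapping client OUTPUT ClientID ClientName Region
| eval rowKey=ClientID."|".ClientName."|".Region
| chart count over rowKey by apiName
| eval ClientID=mvindex(split(rowKey,"|"),0), ClientName=mvindex(split(rowKey,"|"),1), Region=mvindex(split(rowKey,"|"),2)
| fields - rowKey
| table ClientID ClientName Region *

This works because chart only accepts a single "over" field; the pipe-delimited key carries ClientID and Region through the chart, and split/mvindex recovers them afterwards.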
What is the best approach to run Splunk queries?
So, I have data like this after I ran a query. For each aggregator, if the aggregator_status is Error and, within 15 minutes, the aggregator_status becomes Up, the alert should not fire. But if the aggregator_status is still Error, or no new event arrives, the alert should trigger. The Time field is epoch time, which I am thinking can be used to find the difference between the Up and Error status times. How do I create such a query for the alert? I am thinking of using the foreach command or some sort of streamstats, but I am unable to resolve this issue. The alert needs to run once every 24 hours.
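One sketch of an approach (hedged: untested, and it assumes each event carries aggregator, aggregator_status, and the epoch Time field; adjust the base search to your data) is to reduce each aggregator to its most recent status and its age, then alert when the latest status is still Error after 15 minutes:

index=your_index sourcetype=your_sourcetype earliest=-24h
| stats latest(aggregator_status) as last_status latest(Time) as last_time by aggregator
| eval age_min=round((now()-last_time)/60,0)
| where last_status="Error" AND age_min>=15

An aggregator that errored and then recovered shows last_status="Up" and is excluded; one that errored and went silent keeps Error as its latest status, which also covers the "no new event comes" case. Scheduling this once every 24 hours with a "number of results > 0" trigger matches the requirement.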
So I'm unable to get HEC logs into Splunk Cloud (version 9.1.2312.102). When I test the HECs in Postman via (obviously I didn't enter my domain or token for privacy reasons):

POST https://http-inputs-mydomain.splunkcloud.com:443/services/collector/raw

with the Authorization header of "Splunk mytoken", it works as expected and I receive a "text": "Success", "code": 0 response, which is good. I can also see the event in Splunk when I search for it. I did this individually for each HEC that I've created, and they all work. However, whenever I go to set up the actual HECs via the applications I'm trying to integrate, I get nothing. I'm trying to send logs from Dashlane, FiveTran, Knowbe4, and OneTrust. All of these support native Splunk integrations; I enter the information as requested in their external logging setup and nothing shows in Splunk. I'm not sure what to do here. Any guidance would be awesome! Thanks in advance!
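One thing worth checking (a suggestion, assuming your Splunk Cloud role can search the _internal index) is whether splunkd is receiving but rejecting the third-party payloads; HEC parsing errors land in the HTTP input handler's logs:

index=_internal source=*splunkd.log* HttpInputDataHandler

Also worth confirming, since the Postman test used /services/collector/raw: some vendor integrations post JSON to /services/collector/event instead, and a payload or endpoint mismatch can fail silently from the sender's side.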
Hi All, I've just started a new role and haven't been introduced to Splunk in any previous jobs, so this is completely new to me. We have a user that is constantly getting account lockouts. All our domain controller security logs are extracted into Splunk every fifteen minutes. I am attempting to run a search from the Splunk Enterprise New Search field, but I can only extract the information below, which tells me the user, source, and host, and that the user has an audit failure. Could someone please point me to how I would go about extracting which machine the user is getting the account lockout from? I see quite a few messages on the internet, but they never say where the actual search should be entered. Is it directly into the New Search field? Any help would be very much appreciated.
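As a starting point (a sketch; the index and sourcetype names are assumptions, so adjust them and the username to your environment), Windows records account lockouts as EventCode 4740, and that event typically includes the name of the computer the lockout originated from. And yes, this goes directly into the New Search field:

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4740 user="the_locked_user"
| table _time user src Caller_Computer_Name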
Hi, how does Splunk ES create incidents from notable events? I'm aware that a correlation search in Splunk ES creates a notable event in the "notable" index, but how exactly does it get from there to the "Incident Review" dashboard in Splunk ES? As far as I know, the incidents exist in a KV store collection, and I would then assume that there is some scheduled job that takes notable events from the "notable" index and puts them in the KV store collection. The reason I'm asking is that we are missing incidents in our "Incident Review" dashboard, but the corresponding notable events exist in the notable index. So it looks like the "notable event to incident" job has failed somehow. Is this documented somewhere in more detail?
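For what it's worth (hedged, from my understanding of ES rather than a definitive statement): Incident Review is not fed by a copy job. The dashboard runs a search built on ES's `notable` macro, which reads index=notable directly and enriches each event at search time with status, owner, and disposition data held in KV store lookups. So if notables exist in the index but not in Incident Review, comparing the raw index against what the macro returns is a useful first check:

index=notable | stats count

`notable` | stats count

If the macro returns fewer events, suspect the enrichment lookups or event filtering (for example notable suppression) rather than a failed transfer job.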
I am using a query as below:

index="test" sourcetype="reports"
| bin _time span=1m
| stats values(a) as a values(b) as b values(c) as c values(d) as d values(e) as e values(f) as f values(g) as g by par1, _time
| append [search (index="test" sourcetype=reports_metadata) | table par1,par2,par3,par4,par5,par6,par7,par8,par9,par10,par11,par12]
| eventstats values(par2) as par2, values(par3) as par3, values(par4) as par4, values(par5) as par5, values(par6) as par6, values(par7) as par7, values(par8) as par8, values(par9) as par9, values(par10) as par10, values(par11) as par11, values(par12) as par12, values(a) as a values(b) as b values(c) as c values(d) as d values(e) as e values(f) as f values(g) as g by par1
| search par2 IN ("*") par3 IN ("*") par4 IN ("*") par5 IN ("*") par6 IN ("*") par7 IN ("*") par8 IN ("*") par9 IN ("*") par10 IN ("*")
| search par1="*" par2 IN ("*") par3 IN ("*") par4 IN ("*") par5 IN ("*") par6 IN ("*") par7 IN ("*") par8 IN ("*") par9 IN ("*") par10 IN ("*") par11 IN ("*") par12 IN ("*")
| timechart span=15m values(a) by par1 limit=0

In this query I am able to use any of the values ranging from a to g and plot a time series graph. I need help plotting a time series for one or more of these values, and with how the value can be picked from a drop-down filter. #timeseries #timechart #xyseries #multiseries #multivalue
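To drive which value gets plotted from a dashboard drop-down (a sketch, assuming a Simple XML input whose token is named field_tok and whose options are the field names a through g), you can substitute the token straight into the aggregations:

index="test" sourcetype="reports"
| bin _time span=1m
| stats values($field_tok$) as selected by par1, _time
| timechart span=15m values(selected) by par1 limit=0

For plotting more than one value at a time, a multiselect input can supply several field names, but each one then needs its own values() clause, so separate panels or a foreach-based expansion may end up cleaner.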
After upgrading to 9.2.0, the Splunk App for AWS gets the error "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details." and I am unable to open the dashboards. Looking at the developer console, the error below is showing. How can I get the Splunk App for AWS dashboards back? The current version of the Splunk App for AWS is 6.0.2.
Hello Splunkers, I'd like to schedule a query twice a day, for example once at 12:00 PM and once at 7:00 PM, and then receive a report from each run. This would save me from having to run the query manually each time. Is it possible, and if so, how can I do it? The query in question is:

(index="index1" Users=* IP=*) OR (index="index2" tag=1)
| where NOT match(Users, "^AAA-[0-9]{5}$")
| where NOT match(Users, "^AAA[A-Z0-9]{10}$")
| eval ip=coalesce(IP, srcip)
| stats dc(index) AS index_count values(Users) AS Users values(destip) AS destip values(service) AS service earliest(_time) AS earliest latest(_time) AS latest BY ip
| where index_count>1
| eval earliest=strftime(earliest,"%Y-%m-%d %H:%M:%S"), latest=strftime(latest,"%Y-%m-%d %H:%M:%S")
| table Users, ip, destip, service, earliest, latest

Thanks in advance!
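Yes: save the search as a report (Save As > Report), then edit its schedule, choose a cron schedule, and add "Send email" as the scheduled action so each run mails you the results. One report with a single cron expression covers both times; for noon and 7:00 PM server time the expression would be:

0 12,19 * * *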
Hi Everyone, if I lower the index retention and tell it to use the archive, what happens to the logs with longer retention? Example: we currently have 1 year of retention. If we move to 6 months of retention + 18 months of archiving, what happens to logs older than 6 months?
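For context (hedged: the mechanics differ between self-managed Splunk and Splunk Cloud's DDAA archiving, so treat this as the self-managed view): retention is enforced per bucket, and once every event in a bucket is older than frozenTimePeriodInSecs the bucket is "frozen," which means deleted unless an archive destination is configured. With archiving in place, existing data older than the new 6-month limit is rolled to the archive rather than lost, typically shortly after the new setting takes effect. A minimal indexes.conf sketch:

[my_index]
frozenTimePeriodInSecs = 15552000
coldToFrozenDir = /archive/my_index

Here 15552000 seconds is 180 days, and coldToFrozenDir tells the indexer to copy frozen buckets aside instead of deleting them.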
Hello, Splunkers! I am learning Splunk ES and trying to understand how the urgency value is assigned to notables generated from correlation searches. I went over this article: How urgency is assigned to notable events in Splunk Enterprise Security - Splunk Documentation. So, if severity is assigned in the settings of the correlation search, where do we assign the priority to assets? Can someone please explain, or provide a documentation page describing how this process (assigning priority) is done exactly? Specifically, I would really appreciate it if someone could share where this should be configured, whether in Enterprise Security itself or elsewhere, and whether it is done through the GUI or requires manually editing some config files. Also, a slightly stupid question: can we also assign priority to identities, for example to give admin accounts higher priority than regular accounts? Thank you for taking the time to read and reply to my post.
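From my understanding (hedged: menu paths vary a little by ES version), asset priority is set in the lookups that feed the Asset and Identity framework, managed in the GUI under Enterprise Security > Configure > Data Enrichment > Asset and Identity Management; no manual config-file editing is needed. And the second question is not stupid at all: identity lookups have a priority field too, so admin accounts can be ranked above regular ones. A sketch of the relevant columns in an asset lookup CSV (the hosts and values are hypothetical):

ip,nt_host,dns,priority,category
10.0.0.5,DC01,dc01.corp.example.com,critical,domain_controller
10.0.1.23,WS-1042,ws-1042.corp.example.com,medium,workstation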
I obtained an AppDynamics account to install the on-premises AppDynamics platform on a trial basis. However, when I search for downloads in "AppDynamics and Observability Platform", the Platform is not displayed. I forget what steps I took to register for the account, but maybe it's because I created it using a SaaS trial license. Is it possible to install the on-premises AppDynamics platform from this state, or is there no way other than recreating the account?
I have a log and I am able to fetch all the error codes, which are in the same format, but I am not able to fetch logs for one error code:

{"stream":"stderr","logtag":"P","log":"10/May/2024:09:31:53 +1000 [dgbttrfr] [correlationId=] [subject=], ERROR au.com.jbjcbdj.o.fefewgr.logging.LoggingUtil - severity = \"ERROR\", DateTimestamp = \"09/May/2024 23:31:53\", errorCode = \"PAY_STAT_ERR_0017\", errorMessage = \"Not able to fetch error\","hostname":"ip-101-156-185.ap-southeast-2.internal","host_ip":"10.56","cluster":"nod/pmn08"}

I tried fetching it as a key-value pair using this:

| rex field=log "errorCode\s=\s*(?<errorCode>[^,\s]+)"

but I am not able to fetch the value, whereas I can fetch all the other codes except this one. Can anyone help? Thanks in advance.
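A sketch of a more tolerant extraction (untested against your full data set): the value in this event is wrapped in backslash-escaped quotes, so the original character class captures the escaping along with the code. Matching only word characters sidesteps that:

| rex field=log "errorCode\W+(?<errorCode>\w+)"

Here \W+ consumes the equals sign and the escaped quote between the key and the value, and \w+ captures just PAY_STAT_ERR_0017.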
Hi, I am new to AppD. I want to use Method Invocation Data Collectors to collect data for a specific method, System.Net.Sockets.Socket:DoConnect, and show it in my business transaction snapshots. [The original post attached screenshots of the configuration and of the result.] I got nothing in the data collector tab. Why? Did I set something wrong? Thanks!
Hello! I'm looking to set the index parameter of the collect command with the value of a field from each event. Here's an example:

| makeresults count=2
| streamstats count
| eval index = case(count=1, "myindex1", count=2, "myindex2")
| collect index=index testmode=true

This search creates two events. Both events have the index field, one with "myindex1" as the value and the other with "myindex2". I would like to use these values to set the index in the collect command.
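As far as I know, collect takes a literal index name rather than a field reference, so the index can't be set per event in a single pass. A workaround sketch (assuming the set of target indexes is small and known up front) is to filter and collect once per destination:

| makeresults count=2
| streamstats count
| eval target=case(count=1, "myindex1", count=2, "myindex2")
| where target="myindex1"
| collect index=myindex1 testmode=true

Repeating the where/collect pair per index (or scheduling one saved search per destination) covers each value; for a truly dynamic set of indexes, the map command can substitute a field value into a per-row search, at the cost of running one search per result.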
Hello everyone, how can I correlate two alerts into a third one? For instance: I have alert 1 and alert 2, both with medium severity. I need the following validation in alert 3: if, in the 6 hours after alert 1 was triggered, alert 2 is triggered as well, generate alert 3 with high severity.
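One way to sketch this (hedged: it assumes alert 1 and alert 2 are saved searches whose firings appear in the scheduler's _internal logs, and that your role can search _internal) is to make alert 3 a scheduled search over the trigger records themselves, firing only when both alerts triggered inside the window:

index=_internal sourcetype=scheduler alert_actions=* savedsearch_name IN ("alert 1", "alert 2") earliest=-6h
| stats dc(savedsearch_name) as distinct_alerts
| where distinct_alerts=2

Scheduling this regularly with a "number of results > 0" trigger and high severity approximates "alert 2 fired within 6 hours of alert 1"; a summary index populated by each alert's actions would work the same way.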
My output in Splunk is as below:

<error code #> IP Address is x.y.z.a

I want to extract only the x.y.z.a and its count, ignoring duplicates. Can someone please assist?
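A sketch (assuming the raw event text literally contains "IP Address is" followed by the address; swap in your own base search):

index=your_index "IP Address is"
| rex "IP Address is (?<ip>\d{1,3}(?:\.\d{1,3}){3})"
| stats count by ip

stats by ip collapses duplicates on its own, giving one row per distinct address with its count.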
Hello, I have created a dashboard, and it is public within my group. I want the end users to be able to open the main Splunk link and see all the team's dashboards. We have most of the dashboards linked to the app, but I don't know how to add the one I just made. I've added a picture.
Hi, all. I'm using a timechart visualization (line graph) to display the number of events, by hour, over six weeks, using timewrap to overlay the weeks on top of each other, then showing the last two weeks along with a six-week average in order to spot anomalies at a glance. The problem I'm having: if I mouse over a data point from the current week, it shows the appropriate date, but it shows that same date if I mouse over the previous week's data point too, or the week before that. For example, if I mouse over 12:00 on Wednesday for "latest_week," the tooltip shows "May 8th, 2024 12:00 PM." If I mouse over 12:00 on Wednesday for "1week_before," the tooltip still shows "May 8th, 2024 12:00 PM." Is there any way to get the tooltip to show the proper date on mouse-over? I know that's not going to work on the six-week average, but it'd be nice for the current and previous weeks. It's a minor inconvenience, granted, but this is going into a dashboard for not-so-tech-savvy customers, and if I don't have to make them do math in their heads, we'll all be a lot better off. Here's my query, in case it helps (and feel free to direct me toward something more efficient if I'm doing something stupid; you aren't going to hurt my feelings):

| tstats count where <my_index> <data_field1> <data_field2> by _time span=1h prestats=t
| timechart span=1h count by <data_field2>
| rename <data_field2> as tot
| timewrap 1w
| addtotals
| eval avg=round((Total/6),0)
| table _time tot_1week_before tot_latest_week avg
| rename avg as "6 Week Average" tot_latest_week as "Current Week" tot_1week_before as "Previous Week"
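The tooltip date comes from the shared _time axis: after timewrap, every series sits on the latest week's timestamps, so the tooltip can't tell them apart. I don't know of a way to change the tooltip itself, but one workaround sketch (hedged: series-naming options depend on your Splunk version) is to label each wrapped series with its real date, so the series name shown in the legend and tooltip carries the week:

| timewrap 1w series=exact time_format="week of %b %d"

Note that with series=exact the column names change from tot_latest_week and tot_1week_before to date-based names, so the later table and rename clauses would need adjusting to match.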