
All Posts

A co-worker suggested building a new dashboard in 9.3.2 and comparing the source code. I ended up stripping out the tabs and layout definitions. It looks like the syntax changed from 9.3 to 9.4. With those removed and the rest of the tags lined up under layout, the page works. You got me looking in the right place, so I'll mark this as accepted.
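For anyone hitting the same 9.3 to 9.4 difference, a rough sketch of the pre-tabs shape being described: a Dashboard Studio definition where every visualization sits directly under a single layout. The IDs (viz_example, ds_example) are placeholders, and the exact schema here is an illustrative assumption, not the authoritative tab syntax added in 9.4:
{
  "visualizations": { "viz_example": { "type": "splunk.singlevalue", "dataSources": { "primary": "ds_example" } } },
  "dataSources": { "ds_example": { "type": "ds.search", "options": { "query": "index=_internal | stats count" } } },
  "layout": {
    "type": "grid",
    "structure": [
      { "item": "viz_example", "type": "block", "position": { "x": 0, "y": 0, "w": 1200, "h": 400 } }
    ]
  }
}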
Hi @tkwaller1 That's fine, we can make this work with the global time. Apologies, I'm still not 100% sure what you mean by "if its 24 hours I would expect a 24 hour average over the timespan". So, if 24 hours is selected you want one number; is this:
A) The number of times in that 24 hours that /event/Submission_Event was hit? (Thus, count)
B) The number of times in that 24 hours, divided by the number of hours (in this case, 24)?
If B, what would the count be divided by for a time range of 5 hours? 7 days? Or 14 days? 60 days? (For example.) I'm keen to help you get to the bottom of this, so please let me know and we can work out the best search to get your answer. Thanks
This is built in a Dashboard Studio dashboard. The $global_time$ token is the time range selector on the dashboard, so I cannot hardcode the search to a specific time range. No matter what range they select, it should function and give me an average. So if it's 24 hours I would expect a 24-hour average over the timespan; the same if they selected last 7 days, a single average for the time range.
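One sketch that satisfies this for any selected range, building on the search from this thread, derives the window length with addinfo instead of passing the token into a span. The per-hour divisor of 3600 is an assumption; use 86400 for a per-day average:
index=poc channel="/event/Submission_Event"
| stats count
| addinfo
| eval hours=(info_max_time - info_min_time) / 3600
| eval AverageCount=round(count / hours, 2)
| fields AverageCount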
Hi @hammadraza, In Splunk Cloud, especially on a trial, certain capabilities and settings are restricted for security and operational reasons. The edit_webhook_allow_list capability is one of those that typically requires sc_admin access but unfortunately isn't available in trial environments. If you need to configure webhooks for a trial, I recommend contacting Splunk Support or your Splunk representative to discuss potential workarounds or to explore options for a full-featured environment where you have the necessary permissions.
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @tkwaller1 I may be getting the wrong end of what you're looking for here, but it sounds like you don't need to use the bin command (I suspect this is what you meant instead of the bucket command?). If you're only searching 7 days, do you want the average per day, or the count across the 7 days?
For average per day:
index=poc channel="/event/Submission_Event"
| timechart span=1d count
| stats avg(count) as AverageCount
If you take out the span=1d then it will determine an automatic span based on the timeframe, which might be unexpected. If you want the count across the 7 days (or whatever global time is set) then you could do:
index=poc channel="/event/Submission_Event"
| stats count as AverageCount
If I've got the wrong end of the stick then please clarify and I can update the reply.
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Maybe a dumb question, but it's been making me mad; maybe I'm overthinking it. I have a very simple search:
index=poc channel="/event/Submission_Event"
| bucket _time span=$global_time.earliest$
| stats count by _time
| stats avg(count) as AverageCount
I just want the avg(count) over the time range that is selected. So if they picked 7 days it would give the 7-day average; if they picked 24h it would give the average over the 24-hour span, so I can use it in a single value visualization. I keep getting:
Error in 'bucket' command: The value for option span (-24h@h) is invalid. When span is expressed using a sub-second unit (ds, cs, ms, us), the span value needs to be < 1 second, and 1 second must be evenly divisible by the span value.
because the token time is something like 24h@h, which isn't feasible as a span value. How can I work around this? Any ideas? Thanks so much for the help!
Is there a way to enable the edit_webhook_allow_list capability on a Splunk Cloud trial? I'm unable to find this setting under Settings -> Server Settings.
See if this gets you started.
<<your search for events>>
```Determine if the event is from today or yesterday```
| eval day=if(_time >= relative_time(now(), "@d"), "today", "yesterday")
```Keep the most recent event for each URL on each day```
| dedup url, day
```List the actions for each URL```
| stats list(action) as actions, values(*) as * by url
```Keep the URLs whose action differs between the two days```
| where mvcount(actions) = 2
```Events arrive newest first, so index 0 is today's action: keep URLs blocked today that were allowed yesterday```
| where (mvindex(actions,0)="blocked" AND mvindex(actions,1)="allowed")
Thank you @livehybrid. Just one more piece of information: in the Cisco ISE TACACS live logs I do have the option of filtering on both Authorization and Authentication. However, in Splunk I also tried to filter for Authorization and Authentication, but was only able to filter for type Accounting.
Hi @abhijeets, The itsi_summary index is a special-purpose index used by Splunk IT Service Intelligence (ITSI) to store KPI data, service analyzer data, and other ITSI-specific summary information.
Key points about itsi_summary:
- Stores calculated KPI results and service health scores
- Contains event-based and metric-based KPI data
- Used for historical analysis and reporting in ITSI
- Data retention follows standard index configuration settings
For detailed documentation:
- Overview: https://docs.splunk.com/Documentation/ITSI/4.20.0/Configure/IndexOverview
- itsi_summary specifically: https://docs.splunk.com/Documentation/ITSI/4.20.0/Configure/IndexRef
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
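To see what actually lands in that index, a rough exploratory search may help. The field names here (itsi_service_id, itsi_kpi_id, alert_value) are assumptions based on typical ITSI KPI summary events, so verify them in your own environment:
index=itsi_summary
| stats count, avg(alert_value) as avg_value by itsi_service_id, itsi_kpi_id
| sort - count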
Hello Experts, Is there any document available that can give me more in-depth knowledge about the itsi_summary index?
Hello Experts, I'm looking for a query where I can find the list of URLs blocked today that were allowed yesterday under a different category. Fields: url, url-category, action (values: allowed, blocked) and time (to compare between yesterday and today). Thank you in advance.
Ah, this is a shame; it looks like it doesn't allow \n characters either. Unfortunately I think using this approach isn't going to work for you due to the way that Teams processes the webhook. Instead I would recommend checking out https://splunkbase.splunk.com/app/4855 / https://github.com/guilhemmarchand/TA-ms-teams-alert-action by the mighty @guilmxm, which does support Markdown for your MS Teams alerts!
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @netmart I don't have any ISE events in Splunk to verify the exact SPL for this, however you could start by doing your stats by "Type" and then limit this in your SPL once you know which you want:
<ISE Server> Type=*
| stats count by Type
This is a high-level search but might get you started. If you're able to provide examples of the events I'd be happy to update the search accordingly.
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Yes, the data has valid JSON in these events. For the post, I exported the raw events and sanitized them for the community; if that wasn't the best way to go about it, let me know. I'm only starting to post on the Splunk community. When I ran the extended search, the one field that ties them all together is return_value with a value of 0. Try running the stats on return_value?
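In SPL, that suggestion might look something like the sketch below, where <<your search for these events>> stands in for the base search from this thread:
<<your search for these events>>
| stats count, values(host) as hosts by return_value
| sort - count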
Hi @lux209 Wow - 18446744073709551615 is presumably an error somewhere! (That value is 2^64 - 1, and 2^64 bytes is 16384 petabytes, so it looks like an unset or overflowed 64-bit counter.) I realised I left my host=macdev in the previous search, but I guess you noticed and fixed that. If you have a distributed environment then you should set host=<YourLicenseServer> - which makes me wonder - are you in Splunk Cloud? If you're in Splunk Cloud then the license limit might be measured slightly differently (and might explain the "16384 petabyte" poolsz value). Either way, at this point you might be best with the following:
index=_internal source=*license_usage.log type=RolloverSummary
| stats latest(poolsz) as total_license_limit_gb, sum(b) as total_usage_bytes by _time
| eval total_usage_gb=round(total_usage_bytes/1024/1024/1024,3)
| rename _time as Date total_usage_gb as "Daily License Usage"
| eval limitGB=300
| where 'Daily License Usage' > limitGB
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
@netmart wrote: "I wanted to filter Cisco ISE Logging Activities by authentication, authorization, and accounting. So far, I've been able to filter by Accounting, but not the other two."
Why not the other two? What happens when you try?
"In Cisco ISE Logging, TACACS Accounting has been linked to the Splunk server."
What does it mean to link to a Splunk server?
"And I am running the following filter: <ISE Server> AND "CISE_TACACS_Accounting" AND Type=Accounting | stats count(_time) by _time."
Have you tried adding the other types to the query?
<ISE Server> AND ("CISE_TACACS_Accounting" OR <<name for ISE authentication>> OR <<name for ISE authorization>>) AND (Type=Accounting OR Type=Authentication OR Type=Authorization)
BTW, there's little value in counting events by timestamp.
Hi @JJCO To audit the usage of a lookup table in Splunk, you can search the audit logs to find any queries using it. Use the following SPL to search for references to your lookup table:
index=_audit action=search info=completed search="*your_lookup_table_name*"
Replace your_lookup_table_name with the actual name of your lookup table. This will show you any search queries that include your lookup table, indicating its usage. For more details, you can refer to Splunk's documentation on auditing: Audit Logs in Splunk. This should help you determine if the lookup table is being utilized elsewhere.
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
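Note that the index=_audit approach only catches searches that actually ran within the audit retention window. As a complementary check, a sketch (reusing the your_lookup_table_name placeholder from the reply above) that lists saved searches, including correlation searches, whose SPL references the lookup via REST:
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search search="*your_lookup_table_name*"
| table title, eai:acl.app, search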
I've got a question about lookup tables and how to audit them. I have a rather large lookup table that's being recreated daily from a scheduled correlation search. I don't know if any other correlation searches or anything else are actually using that lookup table. I wanted to see if there was a way to audit its use so I can delete the table and remove the correlation search if needed.
Hello, I wanted to filter Cisco ISE Logging Activities by authentication, authorization, and accounting. So far, I've been able to filter by Accounting, but not the other two. In Cisco ISE Logging, TACACS Accounting has been linked to the Splunk server, and I am running the following filter:
<ISE Server> AND "CISE_TACACS_Accounting" AND Type=Accounting | stats count(_time) by _time
Any advice is much appreciated. Thanks.