
All Posts

Hi @tkwaller1  I may be getting the wrong end of what you're looking for here, but it sounds like you don't need to use the bin command (I suspect this is what you meant instead of the bucket command?). If you're only searching 7 days, do you want the average per day, or the count across the 7 days?

For the average per day:

index=poc channel="/event/Submission_Event"
| timechart span=1d count
| stats avg(count) as AverageCount

If you take out the span=1d, timechart will determine an automatic span based on the timeframe, which might be unexpected.

If you want the count across the 7 days (or whatever global time is set), then you could do:

index=poc channel="/event/Submission_Event"
| stats count as AverageCount

If I've got the wrong end of the stick then please clarify and I can update the reply.

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
Maybe a dumb question, but it's been making me mad; maybe I'm overthinking it. I have a very simple search:

index=poc channel="/event/Submission_Event"
| bucket _time span=$global_time.earliest$
| stats count by _time
| stats avg(count) as AverageCount

I just want the avg(count) over the timerange that is selected. So if they picked 7 days it would give the 7-day average; if they picked 24h it would give the average over the 24-hour span, so I can use it in a single value visualization.

I keep getting:

Error in 'bucket' command: The value for option span (-24h@h) is invalid. When span is expressed using a sub-second unit (ds, cs, ms, us), the span value needs to be < 1 second, and 1 second must be evenly divisible by the span value.

because the token time is something like -24h@h, which isn't a feasible span setting. How can I work around this? Any ideas? Thanks so much for the help!
Is there a way to enable the edit_webhook_allow_list capability on a Splunk Cloud trial? I'm unable to find this setting under Settings -> Server Settings.
See if this gets you started.

<<your search for events>>
```Determine if the event is from today or yesterday```
| eval day=if(_time >= relative_time(now(), "@d"), "today", "yesterday")
```Keep the most recent event today and yesterday for each URL```
| dedup url, day
```List the actions for each URL```
| stats list(action) as actions, values(*) as * by url
```Keep the URLs with two different actions```
| where mvcount(actions) = 2
```Keep the URLs where today's action (listed first, since results are most recent first) is 'blocked' and yesterday's was 'allowed'```
| where (mvindex(actions,0)="blocked" AND mvindex(actions,1)="allowed")
Thank you @livehybrid. Just one more piece of information: in the Cisco ISE TACACS live logs I have the option of filtering on both Authorization and Authentication. In Splunk, however, I also tried to filter for Authorization and Authentication, but was only able to filter for type Accounting.
Hi @abhijeets,

The itsi_summary index is a special-purpose index used by Splunk IT Service Intelligence (ITSI) to store KPI data, service analyzer data, and other ITSI-specific summary information.

Key points about itsi_summary:
- Stores calculated KPI results and service health scores
- Contains event-based and metric-based KPI data
- Used for historical analysis and reporting in ITSI
- Data retention follows standard index configuration settings

For detailed documentation:
- Overview: https://docs.splunk.com/Documentation/ITSI/4.20.0/Configure/IndexOverview
- itsi_summary specifically: https://docs.splunk.com/Documentation/ITSI/4.20.0/Configure/IndexRef

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
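To quickly explore what the index actually contains, here is a minimal sketch; the field names (itsi_service_id, itsi_kpi_id, alert_value) are the usual ITSI KPI summary fields, but verify them against your own events:

index=itsi_summary
| stats count, avg(alert_value) as avg_alert_value by itsi_service_id, itsi_kpi_id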
Hello Experts,

Is there any document available which can give me more in-depth knowledge about the itsi_summary index?
Hello Experts,

I'm looking for a query to find the list of URLs blocked today which were allowed yesterday under a different category.

Fields: url, url-category, action (values: allowed, blocked) and time (to compare between yesterday and today).

Thank you in advance.
Ah, this is a shame; it looks like it doesn't allow \n characters either. Unfortunately I think this approach isn't going to work for you due to the way that Teams processes the webhook. Instead I would recommend checking out https://splunkbase.splunk.com/app/4855 / https://github.com/guilhemmarchand/TA-ms-teams-alert-action by the mighty @guilmxm, which does support Markdown for your MS Teams alerts!

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
Hi @netmart

I don't have any ISE events in Splunk to verify the exact SPL for this, however you could start by looking at doing your stats by "Type" and then limit this in your SPL once you know which you want:

<ISE Server> Type=*
| stats count by Type

This is a high-level search but might get you started. If you're able to provide examples of the events I'd be happy to update the search accordingly.

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
Yes, the data has valid JSON in it for these events. For the post, I exported the raw events and sanitized them for the community. If that wasn't the best way to go about it, let me know; I'm only starting to post on the Splunk community.

When I ran the extended search, the one event that ties them all together is return_value with a value of 0. Try running the stats on return_value?
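In SPL terms, that suggestion would look something like the sketch below, where <your base search> is a stand-in for the original search from this thread:

<your base search>
| stats count by return_value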
Hi @lux209

Wow - 18446744073709551615 is presumably an error somewhere! I realised I left my host=macdev in the previous search, but I guess you noticed and fixed that.

If you have a distributed environment then you should set host=<YourLicenseServer> - which makes me wonder - are you in Splunk Cloud? If you're in Splunk Cloud then the license limit might be measured slightly differently (and might explain the "16384 petabyte" poolsz value).

Either way, at this point you might be best with the following:

index=_internal source=*license_usage.log type=RolloverSummary
| stats latest(poolsz) as total_license_limit_gb, sum(b) as total_usage_bytes by _time
| eval total_usage_gb=round(total_usage_bytes/1024/1024/1024,3)
| rename _time as Date total_usage_gb as "Daily License Usage"
| eval limitGB=300
| where 'Daily License Usage' > limitGB

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
@netmart wrote: I wanted to filter Cisco ISE Logging Activities by authentication, authorization, and accounting. So far, I've been able to filter by Accounting, but not the other two.

Why not the other two? What happens when you try?

@netmart wrote: In Cisco ISE Logging, TACACS Accounting has been linked to splunk server.

What does it mean to link to a Splunk server?

@netmart wrote: And I am running the following filter: <ISE Server> AND "CISE_TACACS_Accounting" AND Type=Accounting | stats count(_time) by _time.

Have you tried adding the other types to the query?

<ISE Server> AND ("CISE_TACACS_Accounting" OR <<name for ISE authentication>> OR <<name for ISE authorization>>) AND (Type=Accounting OR Type=Authentication OR Type=Authorization)

BTW, there's little value in counting events by timestamp.
Hi @JJCO

To audit the usage of a lookup table in Splunk, you can search the audit logs to find any queries using it. Use the following SPL to search for references to your lookup table:

index=_audit action=search info=completed search="*your_lookup_table_name*"

Replace your_lookup_table_name with the actual name of your lookup table. This will show you any search queries that include your lookup table, indicating its usage.

For more details, you can refer to Splunk's documentation on auditing: Audit Logs in Splunk

This should help you determine if the lookup table is being utilized elsewhere.

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
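As a complementary check, you can also inspect saved-search definitions rather than search history; this catches scheduled or correlation searches that reference the lookup but haven't run within your audit window. A rough sketch using the saved/searches REST endpoint (again, substitute your lookup's name):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search search="*your_lookup_table_name*"
| table title, eai:acl.app, is_scheduled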
I've got a question about lookup tables and how to audit them. I have a rather large lookup table that's being recreated daily from a scheduled correlation search. I don't know if any other correlation searches or anything else are actually using that lookup table. I wanted to see if there was a way to audit its use so I can delete the table, and remove the correlation search if needed.
Hello,

I wanted to filter Cisco ISE Logging Activities by authentication, authorization, and accounting. So far, I've been able to filter by Accounting, but not the other two.

In Cisco ISE Logging, TACACS Accounting has been linked to the Splunk server, and I am running the following filter:

<ISE Server> AND "CISE_TACACS_Accounting" AND Type=Accounting | stats count(_time) by _time

Any advice is much appreciated. Thanks.
Hi livehybrid,

Thank you for your help and the information! I did some tests and it is looking promising; I just have an issue with the poolsz field. The value I get for total_license_limit_gb or "Daily License Limit" is 18446744073709551615, and my license limit is 300GB. Do I need to change something to get the 300GB limit? I guess this is something around ", sum(b) as total_usage_bytes by _time" to change, but I'm not sure what.

Thank you!
@bowesmana, thanks for sharing one way of doing it. I made some changes, but the result is returned in epoch form (that value is coming from the outer _time), because when I convert it again to a readable format it gets converted to request_time.

index=aws_XXX "Method response body after transformations:" sourcetype="aws:apigateway" business_unit=XX aws_account_alias="XXXXX" network_environment=qa source="API-Gateway-Execution-Logs*" (application="XXX" OR application="XXXX")
| rex field=_raw "Method response body after transformations: (?<json>[^$]+)"
| spath input=json path="header.messageGUID" output=messageGUID
| spath input=json path="payload.statusType.code" output=status
| spath input=json path="payload.statusType.text" output=text
| where status=200 and messageGUID="af2ee9ec-9b02-f163-718a-260e83a877f0"
| rename _time as request_time
| fieldformat request_time=strftime(request_time, "%F %T")
| table messageGUID, request_time
| join type=inner messageGUID
    [ search kubernetes_cluster="XXXX*" index="aws_xXXX" sourcetype="kubernetes_logs" source=*XXXX*
    | rex field=_raw "sendData: (?<json>[^$]+)"
    | spath input=json path="header.messageGUID" output=messageGUID
    | where messageGUID="af2ee9ec-9b02-f163-718a-260e83a877f0"
    | rename _time as pubsub_time
    | fieldformat pubsub_time=strftime(pubsub_time, "%F %T")
    | table messageGUID, pubsub_time ]
| table messageGUID, request_time, pubsub_time

When I run the inner search on its own, I am getting:

kubernetes_cluster="eks-XXXX" index="aws_XXXX" sourcetype="kubernetes_logs" source=*da_XXXXX* "sendData"
| rex field=_raw "sendData: (?<json>[^$]+)"
| spath input=json path="header.messageGUID" output=messageGUID
| where messageGUID="af2ee9ec-9b02-f163-718a-260e83a877f0"
| rename _time as pubsub_time
| fieldformat pubsub_time=strftime(pubsub_time, "%F %T")
| table messageGUID, pubsub_time

Also, I would like to understand the option you have provided: can I run it for multiple datasets?
We are not Splunk Support - we're users like you. To properly troubleshoot an issue using regular expressions, we need to see some sample (sanitized) data. Currently, I'm concerned that events with "22" in the timestamp will be sent to nullQueue.

The preferred way to specify the index for data is to put the index name in inputs.conf. If the index name is absent from inputs.conf, data will go to the default index.
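For illustration, a minimal monitor stanza with an explicit index might look like the following; the path, index, and sourcetype names here are hypothetical placeholders:

[monitor:///var/log/myapp/app.log]
index = my_index
sourcetype = myapp:log
disabled = 0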
Hi @zafar,

The event you're seeing indicates that the WMI input on your Universal Forwarder (UF) has performed a clean shutdown. This typically happens when the Splunk service is stopped/restarted, OR can happen if you have your WMI input set up on an interval - in which case it is notifying you that it has completed.

Here are some steps you can take to investigate further:

1. Check your ingested data: Are you actually missing any WMI events from the UF?
2. Check the Splunk service: Ensure that the Splunk service is running on the Windows machine. You can check this in the Windows Services console or by running the following command in a command prompt: sc query SplunkForwarder
3. Review splunkd.log: Look for any errors or warnings related to splunk-wmi.exe in the splunkd.log file, located in the $SPLUNK_HOME\var\log\splunk directory (a searchable version of this check is sketched at the end of this reply).
4. Check for resource issues: Make sure the Windows machine has sufficient resources (CPU, memory, disk space) to run the Splunk UF. Insufficient resources can lead to unexpected shutdowns or process halting.
5. Review scheduled restarts: If you have any scheduled tasks or scripts that restart the Splunk service or Windows machine, ensure they are configured correctly and not causing unintended shutdowns.

If you continue to experience issues, please provide more details about your environment, including the Splunk version, operating system, and any relevant configuration settings. This will help in further troubleshooting.

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
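P.S. If the UF forwards its own internal logs, the splunkd.log check in step 3 can also be run from Splunk rather than on the host. A rough sketch; replace <your_uf_host> with the UF's hostname, and note that the component and log_level fields assume the standard splunkd.log extractions:

index=_internal host=<your_uf_host> source=*splunkd.log* "WMI" (log_level=ERROR OR log_level=WARN)
| stats count by component, log_level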