
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

As I said before - you _can_ use search-time fields, but your users can bypass them if they know about it and know how.
Hello, I see there are lots of Cisco event-based detections and not many Palo Alto or Check Point (FW, IDS/IPS, threats) events. Is everyone just creating their own event-based detections for these two vendors? I do have all the TA apps installed and connectors for both vendors. I'm just not seeing any event-based detections that have already been set up.
I'm going to have to go down the regex path, as the Networking team doesn't want to change how their side is set up. I want to double-check: this would go on the indexer, correct? I missed the "on the first HF/indexer" part.
@PickleRick will this work for me - what @splunklearner has given above?
I mean we have 100 roles already assigned to the users (AD groups), and we can see only 5 roles when running that search... We want to see all roles assigned to each user... An AD group consists of many members.
1. It's an old thread. It often happens that people aren't active on Answers anymore after several years.
2. An index is just a place for event "storage". Whether props/transforms work or not is not index-specific (OK, it _can_ be made index-specific, but you have to work to explicitly make it so; you can safely assume that's a very unlikely case). So if your index-time mechanism doesn't work, it's either defined in the wrong place (where do you have your settings defined?) or is not written properly.
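For reference, a minimal sketch of what an index-time field definition could look like - the stanza names, source type, and regex here are made up for illustration. Both files have to sit on the first full (parsing) Splunk instance in the path - a heavy forwarder or the indexer - not on a universal forwarder:

# props.conf (on the first HF/indexer)
[my:sourcetype]
TRANSFORMS-add_user = add_user_indexed_field

# transforms.conf
[add_user_indexed_field]
REGEX = user=(\w+)
FORMAT = user::$1
WRITE_META = true

For the field to be used efficiently at search time, you'd typically also declare it in fields.conf on the search head ([user] with INDEXED = true).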
Why would you create indexed fields in the first place? You have nice space-delimited entries; if you just want performance, use TERM() in your searches.
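For illustration, a hedged sketch of the TERM() idiom (index name and token are made up) - it matches the raw token directly against the index without needing an indexed field:

index=my_index TERM(web01)
| stats count

This only works well when the value sits between major breakers (spaces, newlines) and contains none itself, which space-delimited entries guarantee.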
Wait. Are you saying that you're getting only a handful of results, meaning that you don't see all users? (That's usually the case @livehybrid described - problematic settings in role definitions cause users to not show up properly in some places.) Or do you mean that you have 100 roles defined in your system and only see 5 roles assigned to the users? This case is actually normal, because Splunk doesn't expand inherited roles. You can see all effective capabilities per user, but you can't see any "intermediate" roles - just the ones explicitly assigned to a user.
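If you want to list the explicitly assigned roles for every user, a minimal sketch using the REST endpoint (assuming you have permission to call it):

| rest /services/authentication/users splunk_server=local
| table title realname roles

And to see which roles a given role inherits:

| rest /services/authorization/roles splunk_server=local
| table title imported_roles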
You can cheat a bit with a normal line chart by choosing not to fill gaps and possibly generating empty rows with no values between valid data points. But there should be better ways to do it.
| eval row=mvrange(0,2)
| mvexpand row
| eval tmp=if(row==0,tmp,null())
| eval min_w=if(row==0,min_w,null())
| eval max_w=if(row==0,max_w,null())
| fields - row

Use a line chart with no joining for nulls.
No, the question wasn't about the logs from the Trellix solution. The question is whether you're getting any of the Splunk forwarder's own events into your _internal index from the hosts from which you will also want to pull Trellix events. Also - where and how are you putting those inputs.conf settings?
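A sketch of the kind of sanity check being asked about - substitute one of your Trellix hosts for the placeholder:

index=_internal sourcetype=splunkd host=<your_trellix_host>
| stats latest(_time) AS last_seen by host

No results here would mean the forwarder on that host isn't talking to the indexers at all, regardless of the Trellix inputs.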
I want to configure Federated Search so that Deployment A can search Deployment B, and Deployment B can also search Deployment A. I understand that Federated Search is typically unidirectional (local search head → remote provider). Is it possible to configure it for true bidirectional searches in a single architecture (i.e., create two separate unidirectional configurations, A→B and B→A)? Has anyone implemented this setup successfully? Any best practices or caveats would be appreciated. Also, has anyone implemented this along with ITSI - what are the takeaways and dos and don'ts?
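For what it's worth, the usual way to build this is indeed two mirrored one-way setups. A rough sketch of the A→B half as it might look in federated.conf on Deployment A's search head - stanza and parameter names here are from memory and should be checked against the federated.conf spec for your version; all values are placeholders:

[provider:deploymentB]
type = splunk
ip = splunk-b.example.com
splunkd_port = 8089
serviceAccount = fed_svc_account
password = <service_account_password>

A federated index referencing this provider is then defined on top of it, and Deployment B gets the mirror-image configuration pointing at A.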
Team, do you know where I can find information about certifications like ISO 27001 that apply to our agents, such as the OpenTelemetry Collector (Splunk Distribution), UF, and HF?
You can set it up this way and test it, and it will more or less work, but be aware this is not secure, you know. Below is the role created for non-prod:

[role_abc]
srchIndexesAllowed = non_prod
srchIndexesDefault = non_prod
srchFilter = (index=non_prod)

Below is the role created for prod:

[role_xyz]
srchIndexesAllowed = prod;opco_summary
srchIndexesDefault = prod
srchFilter = (index=prod) OR (index=opco_summary AND (service=juniper-prod OR service=juniper-cont))

I think this can help you.
Many thanks for the replies, guys. That was what I was missing.
So, I have been struggling with this for a few days. I have thrown it at generative AI and am not getting exactly what I want. We have a requirement to ensure a percentage of timely critical-event investigation completion per month for Critical and High notable events in Splunk ES.

I have this query, which gives me the numerator and denominator for the events but does not break them out by Urgency/Severity:

| inputlookup incident_review_workflow_audit
| where notable_time > relative_time(now(), "-1mon@mon") AND notable_time < relative_time(now(), "@mon")
| eval EventOpenedEpoch = notable_time, TriageStartedEpoch = triage_time, ResolutionEpoch = notable_time + new_to_resolution_duration, DaysInNewStatus = round(new_duration/86400,2), DaysToResolution = round(new_to_resolution_duration/86400,2)
| where new_to_resolution_duration>0
| eval "Event Opened" = strftime(EventOpenedEpoch, "%Y-%m-%d %H:%M:%S"), "Triage process started" = strftime(TriageStartedEpoch, "%Y-%m-%d %H:%M:%S"), "Event Resolved" = strftime(ResolutionEpoch, "%Y-%m-%d %H:%M:%S")
| rename rule_id AS "Event ID"
| table "Event ID", "Event Opened", "Triage process started", "Event Resolved", DaysInNewStatus, DaysToResolution
| sort - DaysToResolution

Sample output:

Event ID | Event Opened | Triage process started | Event Resolved | DaysInNewStatus | DaysToResolution
4160DC1A-7DF2-4F18-A229-2BA45F1ED9FA@@notable@@e90ff7db7d8ff92bbe8aa4566c1bab37 | 2025-07-05 02:02:13 | 2025-07-07 09:39:07 | 2025-07-21 13:26:26 | 2.32 | 16.48
7C412294-C46A-448A-8170-466CE301D56A@@notable@@0feff824336394dbe4dcbedcbf980238 | 2025-07-05 02:02:08 | 2025-07-07 09:39:07 | 2025-07-21 13:26:26 | 2.32 | 16.48

This query does give me the Urgency for events, but does not give me time to resolution:

`notable`
| search (urgency=critical)
| eval startTime=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table startTime, rule_id, source, comment, urgency, reviewer, status_description, owner_realname, status_label

Sample output:

startTime | rule_id | source | comment | urgency | reviewer | status_description | owner_realname | status_label
2025-07-29 09:30:16 | 4160DC1A-7DF2-4F18-A229-2BA45F1ED9FA@@notable@@5ebbdf0e0821b477785b018e29d44973 | Endpoint - ADFS Smart Lockout Events - Rule | | critical | | Event has not been reviewed. | unassigned | New
2025-07-29 09:30:12 | AD72F249-8457-4D5E-9557-9621E2F5D3FF@@notable@@3043a1f3a2fbc3f92f67800a066ada66 | Endpoint - ADFS Smart Lockout Events - Rule | | critical | | Event has not been reviewed. | unassigned | New
2025-07-29 07:15:18 | 7C412294-C46A-448A-8170-466CE301D56A@@notable@@54a0ffabacbf083cb7f2e370937fc2bf | Endpoint - ADFS Smart Lockout Events - Rule | The event has been triaged | critical | abcde00 | Initial analysis of threat | John Doe | Triage

Trying to combine them to get time to resolution plus urgency (so I can filter on urgency) has been a complete mess. If I do manage to combine them by trimming around the Event ID / rule_id, it doesn't give me the expected numbers, or half the time the urgency is missing. Is there something I am missing, or is this even possible? Thanks in advance.
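For what it's worth, one hedged way to combine the two is to join on the rule GUID instead of the full rule_id, since the @@notable@@<hash> suffix differs between the workflow-audit lookup and the notable index (visible in the two sample outputs above). A sketch - the 5-day "timely" threshold is an assumption; substitute your own SLA:

| inputlookup incident_review_workflow_audit
| where notable_time > relative_time(now(), "-1mon@mon") AND notable_time < relative_time(now(), "@mon")
| where new_to_resolution_duration > 0
| eval rule_guid = mvindex(split(rule_id, "@@"), 0)
| eval DaysToResolution = round(new_to_resolution_duration / 86400, 2)
| join type=left rule_guid
    [ search `notable` urgency=*
      | eval rule_guid = mvindex(split(rule_id, "@@"), 0)
      | stats latest(urgency) AS urgency by rule_guid ]
| stats count AS total_events count(eval(DaysToResolution <= 5)) AS timely_events by urgency
| eval pct_timely = round(100 * timely_events / total_events, 1)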
Sir, when I run a query (index=_internal) looking for records from any of the logs, there are no results.
Hi, can anybody help me create a dot chart?
x-axis: _time
y-axis: points for the values of the fields tmp, min_w, max_w
Here is the input table: [screenshot]
Here is the desired chart: [screenshot]
As @livehybrid noted, explicit type conversion does not make a difference here. If you need numbers sorted as strings, then you must use the str() operator in sort.
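A minimal demonstration (field name and values are made up):

| makeresults count=3
| streamstats count
| eval ver = case(count=1, "2", count=2, "10", count=3, "1")
| sort 0 str(ver)

This returns "1", "10", "2" - lexicographic order - whereas a plain | sort 0 ver would return 1, 2, 10.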
ES 8.1.1 solved this for us!