All Posts

What exactly is not working? Please share the search where you are using the token.
Hi @pdgill314  You could start with the `notable` search and then do a lookup on Rule_ID/event_id, however there might be an easier way. I believe the KV Store incident_review_lookup has an urgency field... Try:

| inputlookup incident_review_workflow_audit
| lookup incident_review_lookup rule_id OUTPUT urgency
| where urgency="critical"
``` then the rest as before ```
| where notable_time > relative_time(now(), "-1mon@mon") AND notable_time < relative_time(now(), "@mon")
| eval EventOpenedEpoch = notable_time, TriageStartedEpoch = triage_time, ResolutionEpoch = notable_time + new_to_resolution_duration, DaysInNewStatus = round(new_duration/86400,2), DaysToResolution = round(new_to_resolution_duration/86400,2)
| where new_to_resolution_duration>0
| eval "Event Opened" = strftime(EventOpenedEpoch, "%Y-%m-%d %H:%M:%S"), "Triage process started" = strftime(TriageStartedEpoch, "%Y-%m-%d %H:%M:%S"), "Event Resolved" = strftime(ResolutionEpoch, "%Y-%m-%d %H:%M:%S")
| rename rule_id AS "Event ID"
| table "Event ID", "Event Opened", "Triage process started", "Event Resolved", DaysInNewStatus, DaysToResolution, urgency
| sort - DaysToResolution

I'm not in front of an ES deployment at the minute, so sorry I can't test completely!
Apologies if I wasn't previously clear - the purpose of the check in the _internal index is to verify that your forwarder is successfully sending its own internal logs to your indexer(s) - this lets us establish whether the cause is a forwarding issue on the forwarder, or a problem reading in the data. Do you see your forwarder host sending *any* logs (not specific to Trellix) in the _internal index?
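For reference, a minimal sketch of such a check (the host name is a placeholder - substitute your forwarder's actual hostname) could be:

```
index=_internal host=<your_forwarder_host> earliest=-1h
| stats count by sourcetype
```

If this returns nothing, the forwarder's own logs aren't reaching the indexers, which points to a forwarding problem rather than an input problem.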
Ah my apologies, I misunderstood previously. As @PickleRick said, Splunk will only become aware of any changes/additions/removals of groups when a user logs in, so this is worth considering, although it isn't the issue here. Can I clarify - the 100s of roles that you're referring to here, are these *all* roles that also exist in Splunk, where each AD group is mapped to a specific (unique) Splunk role? The REST endpoints will only return the Splunk roles for a user, not all their AD groups - I just want to make sure we're on the same page before I dig deeper! Thanks
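As a side note, a minimal sketch for listing the Splunk roles currently assigned to each user via REST (run on the search head; assumes your role can query this endpoint) might be:

```
| rest /services/authentication/users splunk_server=local
| table title roles
```

Here `title` is the username and `roles` contains only the Splunk roles as mapped at the user's last login - not the underlying AD groups.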
Hi @daniela1  You can gain access to documents and certifications relating to ISO 27001 via the Customer Trust portal at https://customertrust.splunk.com/ - note that you may need to request access, which can be done from the portal itself. Generally these certificates are not publicly available; however, as @richgalloway mentioned, you can see on that URL some of the products which hold ISO 27001 certifications.
Hi @meetmshah  Yes, two different deployments can be federated search clients for each other - however, the connections will not really know of each other. I don't know too much about the best practices here, but per the docs: *Federated Search for Splunk supports Splunk IT Service Intelligence version 4.16.0 and higher, for transparent mode federated search only*. Note - the federated search docs suggest engaging your account team and/or support when using federated search with premium apps such as ITSI.
Thanks for your reply. I have checked the case and the special characters; I even changed the token name from search_criteria to searchcriteria.
Thanks for the info, yes, I have gone through ESCU and the Detections on the Splunk page. Looks like I'm going to have to create my own detections - if I end up with something solid I will definitely contribute it. Thanks again for replying.
I am sorry for the typo. The token name is searchcriteria, with no difference in case. I know a case mismatch can cause this, but I have made sure there is none, and I am still having the same issue.
Hi @hl  Have you already explored https://research.splunk.com/detections/ and/or Enterprise Security Content Update (ESCU)? I haven't worked with PA events for a while now; when I did, I had to create my own custom detections, but I think more and more are being added to ESCU. If you do end up with additional detections for Palo Alto, it's worth contributing them to https://github.com/splunk/security_content so that others can benefit from them - this also demonstrates that the wider community needs Palo Alto detections, and thus helps grow the case for Splunk teams to build them.
You say you found no warm buckets, but what about cold buckets?  Did you find any of those? If you're not running out of disk space then maxVolumeDataSizeMB is not too high. The current settings have buckets spanning 90 days.  Therefore, you should have 5 "generations" of buckets - 0-90 days (hot), 91-180 days, 181-270 days, 271-360 days, and 361-450 days.  That last one is because a bucket won't be frozen until *all* events in it exceed the retention time. Set maxHotSpanSecs to 86400 so each bucket only contains a single day of data and retention should improve.
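For illustration, a minimal indexes.conf sketch along these lines (the stanza name your_index is a placeholder for your actual index) might be:

```
[your_index]
# roll hot buckets after at most one day of data
maxHotSpanSecs = 86400
# freeze buckets once all their events exceed ~1 year
frozenTimePeriodInSecs = 31536000
```

With one-day buckets, the oldest bucket freezes roughly a day after its newest event passes the retention period, instead of up to 90 days later.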
Hi @jkamdar  Since things are defined in different places here, it might be best to run btool to see the exact effective configuration (including default values). Please could you run:

$SPLUNK_HOME/bin/splunk cmd btool indexes list --debug <yourIndexName>

When you talk about buckets "not rolling" - do you mean from Hot->Warm, or Cold->Frozen?
https://www.splunk.com/en_us/about-splunk/splunk-data-security-and-privacy/compliance-at-splunk.html?locale=en_us
Having a lot of indexes can work against you. It means the UI can take longer to load. It also means indexers have to open and unzip more files. It may also lead to more buckets for the Cluster Manager to track. Some are tempted to create a new index for each data source. Resist that temptation. A new index is needed only if: 1) Some data has different access requirements 2) Some data has different retention requirements 3) Data volume is high enough that searches for low-volume data in the same index are affected (see the sketch below for a quick way to check volume by index)
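If it helps with point 3, a quick sketch for gauging event volume per index (this counts indexed events, not raw bytes) could be:

```
| tstats count where index=* by index
| sort - count
```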
Same here - I see that there are missing models too. Even when I went to the ONNX GitHub, I still can't find the models that the Splunk query is using for MLTK.

```
| tstats `summariesonly` dc(All_Traffic.src) as src_count, count as total_count from datamodel=Network_Traffic.All_Traffic
| apply app:network_traffic_src_count_30m [|`get_qualitative_upper_threshold(extreme)`]
| apply app:network_traffic_count_30m [|`get_qualitative_upper_threshold(extreme)`]
| search "IsOutlier(src_count)"=1 OR "IsOutlier(total_count)"=1
```

Where is this located?
I have a small Splunk Enterprise deployment in an air-gapped lab. I set up this deployment about 18 months ago. Recently I noticed I am not rolling any data. I want to set a retention period of 1 year for all the data. After checking the configuration, it looks like I have the number of Hot buckets set to auto (which is 3 by default, I assume) but I don't find any Warm buckets. So everything is in Hot buckets. I am looking at a few settings - maxHotSpanSecs, frozenTimePeriodInSecs and maxVolumeDataSizeMB - that should roll data to warm and then cold buckets eventually.

Under /opt/splunk/etc/system/local/indexes.conf:
maxHotSpanSecs = 7776000
frozenTimePeriodInSecs = 31536000
maxVolumeDataSizeMB (not set)

Under /opt/splunk/etc/apps/search/indexes.conf:
maxHotSpanSecs not set
frozenTimePeriodInSecs = 31536000 (for all the indexes)
maxVolumeDataSizeMB (not set)

Shouldn't frozenTimePeriodInSecs take precedence? Maybe my maxVolumeDataSizeMB is set too high. Do I need to change it? How do frozenTimePeriodInSecs and maxVolumeDataSizeMB affect each other? I thought frozenTimePeriodInSecs would override maxVolumeDataSizeMB.
@PickleRick ok, got it. So the secure approach is creating a separate index per application. But we have nearly 500 indexes in overall scope, and as of now we have created 100+ indexes, which means 50 apps (2 indexes per app: non-prod and prod). If I create summary indexes for these, it would mean even more indexes. Ideally, how many indexes should there be in an environment? We are also using volumes and SmartStore. Will it be very difficult to manage these indexes in the future?
They are logging in daily but still can't see their name and title.
Ok. As I said - you will only see the roles directly assigned by group mappings - no inherited roles. That's one thing. Another thing - as far as I remember, the user is assigned roles from the LDAP mapping at the time they log in. After that, the provisioned user stays the way it is until the user logs in again, LDAP gets contacted, and the user's roles are synchronized to the LDAP groups. So if - for example - your users last logged in a month ago but you added them to various LDAP groups last week, you won't see that in Splunk until they log in again.
As I said before - you _can_ use search-time fields, but your users can bypass that restriction if they know about it and know how.