All Posts
You can use the "Splunk App for SOAR": https://splunkbase.splunk.com/app/6361
@Mit  Can you check these threads?
https://community.splunk.com/t5/Deployment-Architecture/streamfwd-app-error-in-var-log-splunk-streamfwd-log/m-p/658283
https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-uninstall-Independent-Stream-Forwarder/m-p/278073
@anthonyi  First, identify the sourcetype of your Cisco ISE logs. A common sourcetype for ISE is cisco:ise:syslog. You can confirm this in Splunk's GUI or CLI.

GUI: Go to Search & Reporting, run a search like index=<indexname>, and check the sourcetype field in the events to note the exact name (e.g., cisco:ise:syslog).

CLI: The sourcetype for Cisco ISE can be identified in the inputs.conf file. You can also review the effective props.conf settings with:

/opt/splunk/bin/splunk btool props list --debug

Next, edit props.conf to truncate events. For instance, if your props.conf is located in /opt/splunk/etc/system/local:

vi /opt/splunk/etc/system/local/props.conf

Add the following stanza to set the TRUNCATE parameter for your ISE logs:

[your_sourcetype]
TRUNCATE = 2000

For example, if your ISE logs have a sourcetype of cisco:ise:syslog, the stanza would be:

[cisco:ise:syslog]
TRUNCATE = 2000

This setting truncates any event exceeding 2000 bytes, reducing the size of each event stored in Splunk. After saving the changes to props.conf, restart your Splunk instance to apply the new configuration:

/opt/splunk/bin/splunk restart

To check whether your events actually exceed that limit, see the search sketch below.

NOTE: If the Cisco ISE add-on is not yet installed, please proceed with the installation: https://splunkbase.splunk.com/app/1915

Reference for sourcetypes: https://splunk.github.io/splunk-add-on-for-cisco-identity-services/Sourcetypes/
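A quick way to gauge whether events exceed the limit before changing it (and to confirm truncation afterwards) is to measure raw event length. This is a minimal sketch, and the index name is a placeholder for your environment; note that TRUNCATE only affects newly indexed events, not data already on disk:

index=<indexname> sourcetype=cisco:ise:syslog
| eval event_bytes=len(_raw)
| stats max(event_bytes) AS max_bytes, avg(event_bytes) AS avg_bytes

If max_bytes stays at or below 2000 after the restart, the new TRUNCATE setting is taking effect.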
Seems like this is much more involved than I initially thought. Before you delve into crevices, maybe check something more obvious: rex or regex autoextraction by itself does not filter results. You still need a filter to do that.

index=accounting sourcetype=linux_admin
| rex field=_raw "(?<ssh>\bssh\b)"

Have you tried adding a filter after rex, like this?

index=accounting sourcetype=linux_admin
| rex field=_raw "(?<ssh>\bssh\b)"
| where isnotnull(ssh)

This tells Splunk to return only those events in which the regex has a match. If you use autoextraction as your props.conf shows, you need something like this to apply the filter:

index=accounting sourcetype=linux_admin ssh=*

But here is another obvious mismatch. Your props.conf contains:

[linux_audit]
TRANSFORMS-changesourcetype = change_sourcetype_authentication

This stanza applies to sourcetype linux_audit, NOT linux_admin as suggested in your original search. Is this a typo from when you set up the autoextraction?
Great, in that case you should be able to make the changes in the UI if preferred. Did this work for you?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @dtsao

I'm afraid you lost me at transaction - I don't think I've seen a good use case for transaction in a number of years where stats wouldn't be much better. The way I would approach this is to use something like foreach to loop through your array/multivalue field and set a fixed field holding the value you are trying to transaction against. Once you've got this, you should be able to do things with stats like:

| stats range(_time) as timeRange, count, etc BY yourField

(See the sketch after this post for one way to wire that up.)

If you're able to provide some sample data (redacted if needed) then I'd be happy to create a full query for you.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
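As a rough illustration of the foreach approach described above: this is a minimal sketch, assuming Splunk 9.0+ (for foreach mode=multivalue) and a hypothetical multivalue field ids in which one value per event matches a recognisable transaction-key pattern (here, a TXN- prefix):

| foreach mode=multivalue ids
    [ eval txnKey=if(match('<<ITEM>>', "^TXN-"), '<<ITEM>>', txnKey) ]
| stats range(_time) AS timeRange, count BY txnKey

On older Splunk versions, | mvexpand ids followed by | where match(ids, "^TXN-") produces the same grouping key before the stats.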
Hi @fraserphillips

I haven't got access to ES today to check this, however it could be the context of the app you are using for the search. In the video, can you see which app they are in when they run the search? Are you in the same app when you run yours? (I'm assuming Mission Control or the ES app.)

If you're able to share a link to the video I can check for you, although I have a feeling that this is an ES7 feature that might not be in ES8 (yet?). The more I think about it, the more I think this behaviour is different in ES8 and you're expected to create Investigations from the Analyst Queue and then work from there.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @k1green97

Check out the following: if your Field2 is a multivalue field you should be good with a 'where IN':

| where Field1 IN (Field2)

Full example:

| windbag
| head 25
| streamstats count as Field1
| table _time Field1
| eval Field2=split("27,33,17,22,24,31,29,08,36",",")
| where Field1 IN (Field2)

HOWEVER, if (as it looks in the table you posted, where Row 1 has Field1=17 and Field2=27) you want to check whether Field1 is in the combined list of Field2 values across all rows, then you will need to group them together first using eventstats:

| eventstats values(Field2) as Field2
| where Field1 IN (Field2)

Full example:

| makeresults count=9
| streamstats count as _n
| eval Field1=case(_n=1, 17, _n=2, 24, _n=3, 36)
| eval Field2=case(_n=1, 27, _n=2, 33, _n=3, 17, _n=4, 22, _n=5, 24, _n=6, 31, _n=7, 29, _n=8, 8, _n=9, 36)
| fields - _time
``` finished data sample ```
| eventstats values(Field2) as Field2
| where Field1 IN (Field2)

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @kn450

I'll try and keep this brief!

Best practice for endpoint data collection varies by organisation, industry, size, and threat profile (e.g., insider threats, advanced attackers, commodity malware). The right approach is to define your critical security use cases (such as lateral movement, privilege escalation, or unauthorised access) and then determine what data to collect to support those cases. Relying on tuned Windows event logs provides robust coverage for most behavioral detection; additional Splunk_TA_windows inputs should only be enabled if tied directly to your specific use cases.

Essential Windows event types commonly used for security monitoring:
- Logon/logoff events: 4624 (logon), 4634 (logoff), 4625 (failed logon)
- Process creation: 4688 (Windows Security log), Sysmon Event ID 1 (with command line)
- Network connections: Sysmon Event ID 3
- File creation/modification: Sysmon Event ID 11
- Registry changes: Sysmon Event ID 13 (collect selectively, only if relevant to a use case)
- Security group/user changes: 4720-4732 (account and group modifications)
- Service creation/modification: 7045 (filtered for suspicious activity)
- Image/driver loads: Sysmon Event ID 7 (high volume; enable only if a use case needs it)

(For one way to scope the Security log to a list like this, see the inputs.conf sketch after this post.)

The free Splunk Security Essentials app can help you map your currently collected data to known security use cases and identify gaps, aligning your ingestion strategy with what provides actionable security value. I would highly recommend installing this and having a look!

Avoid enabling all data sources by default. Regularly review event types and volumes, filtering or disabling sources that do not support your prioritised detection requirements, to control storage and license use.

Are you using Splunk Enterprise Security, or are you planning to create your own rules? There is a lot we haven't covered here, such as making sure data is CIM compliant/mapped, hardware sizing, etc. This is the sort of thing which would usually be architected out at the start to ensure you have the right data and resources available.

I'd recommend checking out the following links too:
- Data source planning for Splunk Enterprise Security
- Lantern - Getting data onboarded to Splunk Enterprise Security

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
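To make the scoping concrete, here is a minimal inputs.conf sketch for the whitelist approach mentioned above, assuming collection via Splunk_TA_windows on a Universal Forwarder; the event codes mirror the list in the post and should be adapted to your own use cases:

[WinEventLog://Security]
disabled = 0
# Assumption: only the IDs called out above are needed; extend as your use cases require
whitelist = 4624,4625,4634,4688,4720-4732

A whitelist like this drops everything but the listed event codes at collection time, which is usually far cheaper than filtering later at the indexing tier.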
There is no single "best practice" here. It all depends on what you want to achieve. If you have specific use cases and detections, you might want to limit your data to only the data directly contributing to those detections and nothing else. And generally, the extent of the data gathered will differ depending on your detections. But if you want to have data for subsequent forensics or threat hunting, then you might want to have as much data as possible. As long as you know what your data is about (and that's the most important thing with data onboarding - don't onboard unknown data just because "it might come in handy one day"), you want as much data as you can afford given your environment size and license constraints.
@kn450

Configuring Windows event logs for Enterprise Security use:
https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Configuring_Windows_event_logs_for_Enterprise_Security_use

Also ensure that data sent from endpoints to Splunk is encrypted using SSL/TLS:
Configure Splunk indexing and forwarding to use TLS certificates - Splunk Documentation
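For reference, forwarder-to-indexer TLS is configured in outputs.conf on the forwarder. This is a minimal sketch only - the group name, host, port, certificate path, and password are placeholders, and the indexer side needs a matching [splunktcp-ssl://...] input as described in the linked documentation:

[tcpout:tls_indexers]
server = indexer.example.com:9997
# Placeholder paths/credentials - use your own certificates
clientCert = $SPLUNK_HOME/etc/auth/mycerts/client.pem
sslPassword = <certificate_password>
sslVerifyServerCert = true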
@kn450

Utilize heavy forwarders (HF) to filter and route data based on event types, reducing unnecessary data ingestion:
Route and filter data - Splunk Documentation
Data collection architecture - Splunk Lantern
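As an illustration of event-type routing on a heavy forwarder, here is a minimal props.conf/transforms.conf sketch. It assumes classic (non-XML) Windows event rendering, and the transform name and the wineventlog_auth target index are hypothetical:

props.conf:

[WinEventLog:Security]
TRANSFORMS-route_auth = route_auth_events

transforms.conf:

[route_auth_events]
# Send logon/failed-logon events to a dedicated index (index name is an assumption)
REGEX = EventCode=(4624|4625)
DEST_KEY = _MetaData:Index
FORMAT = wineventlog_auth

Events whose raw text matches the regex are rewritten to the wineventlog_auth index at parse time; everything else keeps its original destination.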
@kn450

Security log: critical for detecting login attempts, privilege escalation, and account changes (e.g., Event IDs 4624, 4648, 4672, 4720). Filter out noisy events like 4663 (file access audits) unless specifically needed.

https://community.splunk.com/t5/All-Apps-and-Add-ons/How-do-I-collect-basic-Windows-OS-Event-Log-data-from-my-Windows/m-p/440187
https://community.splunk.com/t5/Splunk-Enterprise-Security/What-s-the-best-practice-to-configure-a-windows-system-to/m-p/467532

Reference for event codes: https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/

System log: useful for system-level events like service changes or crashes (e.g., Event IDs 7036, 7045). Limit to high-value events to reduce volume.

Avoiding redundancy:

Firewall logs provide network traffic visibility (e.g., source/destination IPs, ports, protocols). Avoid collecting redundant network data from endpoints (e.g., excessive DNS or connection logs) unless it provides unique context, like process-level details from Sysmon: https://lantern.splunk.com/Data_Descriptors/Firewall_data

WinRegistry and Service: these are high-volume sources. Limit to specific keys (e.g., Run keys, AppInit_DLLs) and events (e.g., new service creation) to avoid collecting redundant or low-value changes. (A sketch for dropping a noisy event code is shown after this post.)

https://www.splunk.com/en_us/blog/security/threat-hunting-sysmon-event-codes.html
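Following up on the note above about filtering noisy events such as 4663: one common pattern is to drop them at the heavy forwarder or indexer before they are written. This is a minimal sketch assuming classic (non-XML) event rendering; the transform name is illustrative:

props.conf:

[WinEventLog:Security]
TRANSFORMS-drop_noise = drop_eventcode_4663

transforms.conf:

[drop_eventcode_4663]
# Route matching events to the nullQueue so they are never indexed
REGEX = EventCode=4663
DEST_KEY = queue
FORMAT = nullQueue

Events sent to the nullQueue are discarded before indexing, so they also do not count against your license.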
@kn450

Clearly define the security use cases (e.g., threat detection, incident response, compliance) to determine which data sources are necessary. Avoid collecting all available data without a purpose, as this increases storage and processing overhead. For example, focus on data that supports MITRE ATT&CK tactics like Execution, Persistence, or Credential Access.

https://riversafe.co.uk/resources/tech-blog/mastering-data-onboarding-with-splunk-best-practices-and-approaches/

Also check this thread on registry-monitoring volume: https://community.splunk.com/t5/Splunk-Dev/Splunk-registry-monitor-splunk-regmon-generating-too-much-data/m-p/371705
Dear Splunk Community,

I am currently working on a project focused on identifying the essential data that should be collected from endpoints into Splunk, with the goal of avoiding data duplication and ensuring efficiency in both performance and storage.

Here's what has been implemented so far:
- The Splunk_TA_windows add-on has been deployed.
- The inputs.conf file has been configured to include all available data.
- Sysmon has been installed on the endpoints.
- The Sysmon inputs.conf path has been added to be collected using the default configuration from the Splunk_TA_windows add-on.

In addition, we are currently collecting data from firewalls and network switches. I've attached screenshots showing the volume of data collected from one endpoint over a 24-hour period. The data volume is quite large, especially in the following categories:
- WinRegistry
- Service

Upon reviewing the data, I noticed that some information gathered from endpoints may be redundant or unnecessary, especially since we are already collecting valuable data from firewalls and switches. This has led me to consider whether we can reduce the amount of endpoint data being collected without compromising visibility.

I would appreciate your input on the following:
- What are Splunk's best practices for collecting data from endpoints?
- What types of data are considered essential for security monitoring and analysis?
- Is relying solely on Sysmon generally sufficient in most security environments?
- Is there a recommended framework or guideline for collecting the minimum necessary data while maintaining effective monitoring?

I appreciate any suggestions, experiences, or insights you can share. Looking forward to learning from your expertise.
I second @richgalloway's doubts - your description of the problem is confusing.
OK. And those "fields" are...? Values of a multivalue field in a single event? Or just multiple values returned from a "stats values" command? Something else? Do you have any other fields in your data? Do you want them preserved?
@k1green97

To find values of Field1 that appear in Field2, you can create a query that uses eval and where to filter the matching values. (The makeresults command can also be used to quickly generate a sample data set for testing - see the sketch below.) You can try this query, replacing the index and sourcetype with your own:

index=my_index sourcetype=my_sourcetype
| stats values(Field1) as Field1_values, values(Field2) as Field2_values
| mvexpand Field1_values
| where Field1_values IN (Field2_values)
| table Field1_values
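For a self-contained test of the same idea, here is a makeresults sketch using the sample values from the question (assuming, as other replies note, that where ... IN accepts a multivalue field on the right-hand side):

| makeresults
| eval Field1=split("17,24,36", ","), Field2=split("27,33,17,22,24,31,29,08,36", ",")
| mvexpand Field1
| where Field1 IN (Field2)
| stats values(Field1) AS matching_values

This should return 17, 24, and 36, since all three Field1 values appear in the Field2 list.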
I am not sure where to start on this. I have 2 fields. Field1 only has a few values while Field2 has many. How can I return the values of Field2 that also appear in Field1?

Field 1   Field 2
17        27
24        33
36        17
          22
          24
          31
          29
          08
          36
We need more information. Please say more about the problem you are trying to solve.  It would help to see sample data and desired output.