Hi @livehybrid I tried version 8.4 and the same issue occurs. Luiz
@MrLR_02, a 1-hour frozenTimePeriodInSecs will not affect buckets which are "hot" - i.e. they are actively open and being written to. If your buckets aren't rolling from hot → warm → cold within an hour, retention will appear longer. The reason a restart causes them to roll to frozen is that the indexer closes the hot bucket when it restarts, so it becomes warm and can then be frozen out.

To enforce deletion 1 hour after ingestion, you may need to review some of the following settings; I've included some examples below. Force hot buckets to roll faster by setting:

[your_index]
maxHotSpanSecs = 3600          # Hot bucket rolls to warm after 1h
maxHotIdleSecs = 60            # Rolls if idle for 1 min
maxDataSize = auto_high_volume # Or lower to cap hot-bucket size

These ensure hot buckets roll to warm based on time, not just size. It's worth understanding these and configuring as required - check https://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf for more info.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
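If you want to confirm that hot buckets are the culprit, a dbinspect search along these lines should show each bucket's state and age (a sketch - swap in your own index name):

```
| dbinspect index=your_index
| eval age_hours = round((now() - startEpoch) / 3600, 1)
| table bucketId state startEpoch endEpoch age_hours
```

Buckets still in state=hot well past the hour would explain the retention you're seeing.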
Hello Experts,

I am trying to set up panels whose output comes from two different queries, based on a filter. I am using the change-on-condition option:

<input type="dropdown" token="spliterror_1" searchWhenChanged="true">
  <label>Splits</label>
  <choice value="*">All</choice>
  <choice value="false">Exclude</choice>
  <choice value="true">Splits Only</choice>
  <prefix>isSplit="</prefix>
  <suffix>"</suffix>
  <default>$spliterror_1$</default>
  <change>
    <condition label="All">
      <set token="ShowAll">*</set>
      <unset token="ShowTrue"></unset>
      <unset token="ShowFalse"></unset>
    </condition>
    <condition label="Exclude">
      <unset token="ShowAll"></unset>
      <set token="ShowFalse">false</set>
      <unset token="ShowTrue"></unset>
    </condition>
    <condition label="Splits Only">
      <unset token="ShowAll"></unset>
      <unset token="ShowFalse"></unset>
      <set token="ShowTrue">true</set>
    </condition>
  </change>
</input>

Setting/unsetting the tokens displays the panels accordingly, but in the backend all 3 queries run simultaneously. Is there a way to make only the selected condition's query run?

Nishant
# Version Information

Splunk Security Essentials version: 3.8.1
Splunk Security Essentials build: 1889
Splunk Enterprise Version: 9.3.2
Current MITRE ATT&CK Ver: 16.1

# Issue Description

After an update to the MITRE ATT&CK framework, the Data Sources ID column breaks. It becomes vertically shifted by 4, leaving the first 4 rows without an ID, and the subsequent rows are off by 4. There are no additional IDs at the end of the lookup. This lookup is correctly formatted upon a clean install, as demonstrated below (first and last 5 rows of the `mitre_data_sources.csv` lookup located at `$SPLUNK_HOME/etc/apps/Splunk_Security_Essentials/lookups/mitre_data_sources.csv`).

## Clean Install - First 5

Id Name Data_Source Description Data_Component Data_Component_Description

DS0014 Pod Pod: Pod Creation A single unit of shared resources within a cluster, comprised of one or more containers(Citation: Kube Kubectl)(Citation: Kube Pod) Pod Creation Initial construction of a new pod (ex: kubectl apply|run)

DS0014 Pod Pod: Pod Modification A single unit of shared resources within a cluster, comprised of one or more containers(Citation: Kube Kubectl)(Citation: Kube Pod) Pod Modification Changes made to a pod, including its settings and/or control data (ex: kubectl set|patch|edit)

DS0014 Pod Pod: Pod Metadata A single unit of shared resources within a cluster, comprised of one or more containers(Citation: Kube Kubectl)(Citation: Kube Pod) Pod Metadata Contextual data about a pod and activity around it such as name, ID, namespace, or status

DS0014 Pod Pod: Pod Enumeration A single unit of shared resources within a cluster, comprised of one or more containers(Citation: Kube Kubectl)(Citation: Kube Pod) Pod Enumeration An extracted list of pods within a cluster (ex: kubectl get pods)

DS0032 Container Container: Container Creation A standard unit of virtualized software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another(Citation: Docker Docs Container) Container Creation Initial construction of a new container (ex: docker create <container_name>)

## Clean Install - Last 5

DS0018 Firewall Firewall: Firewall Metadata A network security system, running locally on an endpoint or remotely as a service (ex: cloud environment), that monitors and controls incoming/outgoing network traffic based on predefined rules(Citation: AWS Sec Groups VPC) Firewall Metadata Contextual data about a firewall and activity around it such as name, policy, or status

DS0018 Firewall Firewall: Firewall Disable A network security system, running locally on an endpoint or remotely as a service (ex: cloud environment), that monitors and controls incoming/outgoing network traffic based on predefined rules(Citation: AWS Sec Groups VPC) Firewall Disable Deactivation or stoppage of a cloud service (ex: Write/Delete entries within Azure Firewall Activity Logs)

DS0018 Firewall Firewall: Firewall Rule Modification A network security system, running locally on an endpoint or remotely as a service (ex: cloud environment), that monitors and controls incoming/outgoing network traffic based on predefined rules(Citation: AWS Sec Groups VPC) Firewall Rule Modification Changes made to a firewall rule, typically to allow/block specific network traffic (ex: Windows EID 4950 or Write/Delete entries within Azure Firewall Rule Collection Activity Logs)

DS0018 Firewall Firewall: Firewall Enumeration A network security system, running locally on an endpoint or remotely as a service (ex: cloud environment), that monitors and controls incoming/outgoing network traffic based on predefined rules(Citation: AWS Sec Groups VPC) Firewall Enumeration An extracted list of available firewalls and/or their associated settings/rules (ex: Azure Network Firewall CLI Show commands)

DS0011 Module Module: Module Load Executable files consisting of one or more shared classes and interfaces, such as portable executable (PE) format binaries/dynamic link libraries (DLL), executable and linkable format (ELF) binaries/shared libraries, and Mach-O format binaries/shared libraries(Citation: Microsoft LoadLibrary)(Citation: Microsoft Module Class) Module Load Attaching a module into the memory of a process/program, typically to access shared resources/features provided by the module (ex: Sysmon EID 7)

## After triggering a `Force Update` of Security Content - First 5

Id Name Data_Source Description Data_Component Data_Component_Description

Pod Pod: Pod Enumeration A single unit of shared resources within a cluster, comprised of one or more containers(Citation: Kube Kubectl)(Citation: Kube Pod) Pod Enumeration An extracted list of pods within a cluster (ex: kubectl get pods)

Pod Pod: Pod Metadata A single unit of shared resources within a cluster, comprised of one or more containers(Citation: Kube Kubectl)(Citation: Kube Pod) Pod Metadata Contextual data about a pod and activity around it such as name, ID, namespace, or status

Pod Pod: Pod Creation A single unit of shared resources within a cluster, comprised of one or more containers(Citation: Kube Kubectl)(Citation: Kube Pod) Pod Creation Initial construction of a new pod (ex: kubectl apply|run)

Pod Pod: Pod Modification A single unit of shared resources within a cluster, comprised of one or more containers(Citation: Kube Kubectl)(Citation: Kube Pod) Pod Modification Changes made to a pod, including its settings and/or control data (ex: kubectl set|patch|edit)

DS0014 Container Container: Container Metadata A standard unit of virtualized software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another(Citation: Docker Docs Container) Container Metadata Contextual data about a container and activity around it such as name, ID, image, or status

## After triggering a `Force Update` of Security Content - Last 5

DS0009 Firewall Firewall: Firewall Rule Modification A network security system, running locally on an endpoint or remotely as a service (ex: cloud environment), that monitors and controls incoming/outgoing network traffic based on predefined rules(Citation: AWS Sec Groups VPC) Firewall Rule Modification Changes made to a firewall rule, typically to allow/block specific network traffic (ex: Windows EID 4950 or Write/Delete entries within Azure Firewall Rule Collection Activity Logs)

DS0009 Firewall Firewall: Firewall Disable A network security system, running locally on an endpoint or remotely as a service (ex: cloud environment), that monitors and controls incoming/outgoing network traffic based on predefined rules(Citation: AWS Sec Groups VPC) Firewall Disable Deactivation or stoppage of a cloud service (ex: Write/Delete entries within Azure Firewall Activity Logs)

DS0009 Firewall Firewall: Firewall Metadata A network security system, running locally on an endpoint or remotely as a service (ex: cloud environment), that monitors and controls incoming/outgoing network traffic based on predefined rules(Citation: AWS Sec Groups VPC) Firewall Metadata Contextual data about a firewall and activity around it such as name, policy, or status

DS0009 Firewall Firewall: Firewall Enumeration A network security system, running locally on an endpoint or remotely as a service (ex: cloud environment), that monitors and controls incoming/outgoing network traffic based on predefined rules(Citation: AWS Sec Groups VPC) Firewall Enumeration An extracted list of available firewalls and/or their associated settings/rules (ex: Azure Network Firewall CLI Show commands)

DS0018 Module Module: Module Load Executable files consisting of one or more shared classes and interfaces, such as portable executable (PE) format binaries/dynamic link libraries (DLL), executable and linkable format (ELF) binaries/shared libraries, and Mach-O format binaries/shared libraries(Citation: Microsoft LoadLibrary)(Citation: Microsoft Module Class) Module Load Attaching a module into the memory of a process/program, typically to access shared resources/features provided by the module (ex: Sysmon EID 7)

---

This has occurred consistently on both existing and fresh Splunk installations. I suspect it's due to an update to MITRE, and the JSON parsers haven't been updated to handle the changes accordingly; this is purely conjecture. I have been reading through the Python scripts located at `~/etc/apps/Splunk_Security_Essentials/bin`, but have come across nothing conclusive so far.

---

Please let me know if this is an issue that anyone else has been facing, and whether it also affects any of the other MITRE lookups that I haven't yet noticed. If it affected more important lookups such as Detections or Threat Groups, this would considerably affect the app's functionality.

If anybody has any suggestions, or requires any more information, please let me know.

Thanks
Stanley
Hello,

I have defined a frozenTimePeriodInSecs of 1 hour on my IDX for a certain index, so that the logs it contains are only kept for 1 hour. The frozenTimePeriodInSecs was set in the indexes.conf in the system/local directory.

The problem I have, however, is that the frozenTimePeriodInSecs config only takes effect when the IDX is restarted. Otherwise, the logs remain in this index beyond the defined retention period.

Has anyone had the same problem and can help me with this? Thanks in advance.
Thanks for your suggestion, but it does not seem to be repFactor. It's set to auto for all indexes.

[splunk@servername ~]$ splunk btool indexes list _internal | grep repFactor
repFactor = auto
Hi @Braagi

I would use eval for this rather than set. Have a look at the example below, which uses a timechart as you mentioned and then sets the earliest/latest for a stats table on the left:

<dashboard version="1.1" theme="light">
  <label>AnswersTesting</label>
  <row>
    <panel>
      <table>
        <search>
          <query>|tstats count where index=_internal earliest=$form.earliest$ latest=$form.latest$ by host</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel>
      <title>ClickVal = $form.earliest$ - $form.latest$</title>
      <chart>
        <search>
          <query>|tstats count where index=_internal by _time, host span=1m | timechart span=1m sum(count) as count by host</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.drilldown">all</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <eval token="form.earliest">$click.value$-300</eval>
          <eval token="form.latest">$click.value$+300</eval>
        </drilldown>
      </chart>
    </panel>
  </row>
</dashboard>

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @kiwiglen

To address your requirement of aggregating CPU metrics across all tasks (parents and children) by USER type, ensure your SPL includes all events without filtering on parent/child relationships. Use stats directly on USER to calculate avg, max and sum - I wasn't sure if you wanted avg/max or sum?

index=your_index
| stats avg(USRCPUT_MICROSEC) as avg_cpu, sum(USRCPUT_MICROSEC) as total_cpu, max(USRCPUT_MICROSEC) as max_cpu by USER

This aggregates CPU for all tasks (parents + children) by their USER type. If you need hierarchical sums (e.g., parent CPU + child CPU per root transaction), clarify your data structure, as recursive aggregation would require a different approach (e.g., transaction grouping or hierarchical data modeling). If you do need per-transaction level instead of USER, then the following might work:

index=your_index
| eval txID = coalesce(PHTRANNO, TRANNO)
| stats avg(USRCPUT_MICROSEC) as avg_cpu, sum(USRCPUT_MICROSEC) as total_cpu, max(USRCPUT_MICROSEC) as max_cpu by txID

Obviously this depends a lot on what the raw data actually looks like - let me know if this is close to what you need, and maybe provide some anonymised sample data if possible?

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @rukshar

Do you have the Splunk_TA_windows installed as well?

If you're not making the changes in Splunk_TA_windows/local and are using your own custom app with a default directory rather than local, then you need to make sure the custom app has a higher order of precedence than the Splunk_TA_windows app, e.g. 100_yourOrg_wininputs (precedence goes 0-9, A-Z, a-z).

For more info on precedence check out https://docs.splunk.com/Documentation/Splunk/latest/admin/Wheretofindtheconfigurationfiles

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
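As a sketch, assuming a hypothetical custom app named 100_yourOrg_wininputs, the safest pattern is to put the override in that app's local directory so it wins the layering (the index name here is just an example):

```
# $SPLUNK_HOME/etc/apps/100_yourOrg_wininputs/local/inputs.conf
[WinEventLog://Security]
index = wineventlog
disabled = 0
```

Settings in any app's local directory take precedence over any app's default directory, which sidesteps most app-name ordering issues.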
I'd definitely recommend looking at the SVA models that @PickleRick linked to. I like the diagram-as-code approach with Mermaid, but they aren't the prettiest! If you do go down the draw.io route then there are some icons available at https://github.com/livehybrid/splunk_drawio_icons

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @Gryphus,

you should analyze the indexes.conf on your Cluster Manager (and therefore on your Indexers) and verify that you configured repFactor = auto for those indexes, as described at https://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf

Ciao.
Giuseppe
Your requirements and the description of your data are a bit unclear. If your data is a tree structure represented only as separate vertices, without a full path to the root, you can't do it reasonably using SPL alone. It's similar to the anti-patterns in SQL - since you don't have recursion at your disposal, you can't reconstruct an arbitrarily long path just from parent-child node pairs. So please be a bit more verbose about your problem.
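To illustrate what is (and isn't) possible: one fixed level of parent-child resolution can be sketched with a join against the same data (all index and field names below are hypothetical), but every extra level of depth needs another join, so an arbitrary depth stays out of reach in pure SPL:

```
index=your_index
| join type=left parent_id
    [ search index=your_index
      | rename node_id as parent_id, name as parent_name
      | fields parent_id parent_name ]
| table node_id name parent_id parent_name
```

This resolves each node's immediate parent only; there is no SPL equivalent of SQL's recursive CTE to walk to the root.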
I have 2 indexers in a cluster. One is down and one is up. All buckets are present on the indexer that is up, but still not all indexes are searchable. Why is this, and what can I do?
What do you mean by "interactive" here? Why not just use SVA diagrams? https://docs.splunk.com/Documentation/SVA/current/Architectures/About
1. Why are you using a custom sourcetype? There are already well-defined knowledge objects for the standard Windows eventlog sourcetypes which come with TA_windows.

2. You can't define two separate instances of the same input (in your case - WinEventLog://Security). So check with btool what the effective settings for your input are after layering your own app and the Windows built-in stuff (and possibly TA_windows).
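For example, a btool check along these lines (run on the forwarder) shows the merged, effective stanza and, with --debug, which file each setting comes from:

```
$SPLUNK_HOME/bin/splunk btool inputs list WinEventLog://Security --debug
```

If two apps define the same stanza, this output makes the winning layer obvious.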
Thank you for your support. I found an error bucket in the bucket status, removed it directly from the CLI, and restarted to fix the problem. The RF & SF are finally met.

The bucket with the error had been created as a file rather than a folder.
We are trying to configure event monitoring for Security Event ID 4624 (successful login) and Event ID 4625 (unsuccessful login) for an account. We have created the app with the below stanza in the inputs.conf file:

[WinEventLog://Security]
index = wineventlog
sourcetype = Security:AD_Sec_entmon
disabled = 0
start_from = oldest
current_only = 1
evt_resolve_ad_obj = 1
checkpointInterval = 300
whitelist = EventCode="4624|4625"
#renderXml=false

However, there is no data even though the app has been successfully deployed. Please assist me with this issue.
Figured it out; | eval reg_date=strptime(newdate, "%Y%m%d")
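In case it helps anyone else, the conversion can be sanity-checked on dummy data with makeresults (which generates a single test event):

```
| makeresults
| eval newdate="20220508"
| eval reg_date=strptime(newdate, "%Y%m%d")
| eval check=strftime(reg_date, "%Y-%m-%d")
```

The check field should render back as 2022-05-08 if the epoch value in reg_date is correct.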
Hi SMEs, I'd like to convert the following date format into epoch: yyyymmdd, e.g. 20220508. Any assistance would be appreciated!
Further Update: Splunk fixed the bug and expects it to be released to Splunk Cloud in the next couple of weeks.