All Posts


Here are my findings from a case I opened on this issue a while back; this fixed it for me. Splunk verifies the TLS certificates using SHA-1 cryptography, and the default crypto policy on the Linux server needed to be updated to allow SHA-1:

update-crypto-policies --set DEFAULT:SHA1

See https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/using-the-system-wide-cryptographic-policies_security-hardening
Here is a sample dashboard showing how to set the colours for the pie charts using CSS - note that the order of the pie charts in the trellis is assumed to be fixed.

<row>
  <panel depends="$alwayshide$">
    <html>
      <style>
        #trellis_pie div.facets-container div.viz-panel:nth-child(1) g.highcharts-series path {
          fill: red !important;
        }
        #trellis_pie div.facets-container div.viz-panel:nth-child(2) g.highcharts-series path {
          fill: green !important;
        }
        #trellis_pie div.facets-container div.viz-panel:nth-child(3) g.highcharts-series path {
          fill: blue !important;
        }
        #trellis_pie div.facets-container div.viz-panel:nth-child(4) g.highcharts-series path {
          fill: yellow !important;
        }
      </style>
    </html>
  </panel>
  <panel>
    <chart id="trellis_pie">
      <search>
        <query>| makeresults count=100
| fields - _time
| eval Computer_Name=mvindex(split("ABCDE",""),random()%5).mvindex(split("ABCDE",""),random()%5)
| eval Category__Names_of_Patches=mvindex(split("XYZ",""),random()%3)
| stats count(Category__Names_of_Patches) as totalNumberOfPatches by Computer_Name
| eval exposure_level = case(
    totalNumberOfPatches &gt;= 1 AND totalNumberOfPatches &lt;= 5, "Low Exposure",
    totalNumberOfPatches &gt;= 6 AND totalNumberOfPatches &lt;= 9, "Medium Exposure",
    totalNumberOfPatches &gt;= 10, "High Exposure",
    totalNumberOfPatches == 0, "Compliant",
    1=1, "&lt;not reported&gt;")
| stats sum(totalNumberOfPatches) as total by exposure_level
| eval category=exposure_level
| xyseries category exposure_level total</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <option name="charting.axisTitleX.visibility">collapsed</option>
      <option name="charting.axisTitleY.visibility">collapsed</option>
      <option name="charting.axisTitleY2.visibility">collapsed</option>
      <option name="charting.chart">pie</option>
      <option name="charting.drilldown">none</option>
      <option name="charting.legend.placement">none</option>
      <option name="trellis.enabled">1</option>
    </chart>
  </panel>
</row>
I am working on a playbook and I'm facing a challenge in synchronizing and comparing the outputs from two different actions, in particular domain reputation checks via the VirusTotal and Cisco Umbrella apps, executed on multiple artifacts within a container (the mentioned apps are just an example).

Below are the two challenges that I'm facing:

1. Synchronizing and comparing action outputs: my main issue is obtaining an output that allows me to verify and compare which IOCs have been flagged as malicious by both the VirusTotal and Cisco Umbrella apps. The current setup runs both actions on each artifact in the container, but I'm struggling with how to effectively gather and compare these results to determine which IOCs are high-risk (flagged by both apps) versus low-risk (flagged by only one app).

2. Filtering logic limitation in Splunk SOAR: the SOAR filtering logic is applied at the container level, not at the individual artifact level. This is problematic when a container has multiple IOCs, as benign IOCs might get included in the final analysis even after the filter is applied. I need an effective method to ensure that only artifacts identified as potentially malicious are shown in the final output.

Below is an example of the scenario and the desired output:
- A container contains multiple artifacts.
- Actions executed: VirusTotal and Cisco Umbrella reputation checks on all artifacts.
- Expected output: a list or summary indicating which artifacts are flagged as malicious by both apps (classified as high-risk) and which are flagged by only one app (classified as low-risk).

I am looking for advice on how to structure the playbook to efficiently filter and analyze these artifacts, ensuring an accurate severity assessment based on the app results. Do you have any insights, examples, or best practices on how to define the filtering logic and analysis process in Splunk SOAR?
Thank you for your help
Hello, I manage a hybrid Splunk deployment (cloud SH, on-premise DS, HF, etc.). I have a task to create custom roles and RBAC. I have a few questions and I would be thankful if you could help me clarify them: 1) Do custom roles propagate between Splunk instances? For example, if I create a role on a cloud SH, will it propagate automatically to the other cloud SHs and the on-premise DS? Or do I have to create the roles and assign users manually everywhere? 2) Is there a set of Splunk best practices for role creation? 3) What is the difference between creating roles in the web GUI vs the backend (on on-prem instances)? Is the final result the same?
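For context on the GUI-vs-backend part of this question: on an on-prem instance, a custom role created in the web GUI is stored as a stanza in authorize.conf, so editing that file directly produces the same kind of result. A minimal sketch of such a stanza (the role name, index names, and quota here are hypothetical examples, not taken from any real environment):

```
# authorize.conf (on-prem instance) - hypothetical example role
[role_soc_analyst]
importRoles = user
srchIndexesAllowed = security_*
srchIndexesDefault = security_main
srchJobsQuota = 10
```

Whichever way the role is created, it exists only on that instance unless you distribute the configuration yourself.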
The sort is unnecessary. By default Splunk returns results in reverse chronological order, so they are already sorted (ok, the other way around, but that's not much of a problem). Transaction might indeed work, but remember that transaction is a tricky command: it's resource-intensive and has its limitations.
Splunk Enterprise (which shows the correct version) automatically redirects to Splunk Cloud (unknown version) after logging in again. Is there any way to stop this redirect?
sorry, category = exposure_level
Where is category coming from? You only have totalNumberOfPatches, Computer_Name and exposure_level
exactly, and it will show the count for the specific category
So, each pie chart would be all one colour?
separate ones
Are you wanting a separate pie chart for each exposure level or a single pie chart where all the counts for each exposure level are combined?
You need to add (or subtract) the timezone offset to or from the times. To do this, you should parse the time strings to epoch datetimes with strptime(), adjust the time appropriately, and then reformat them with strftime().
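As a minimal sketch for one of the fields from the query in this thread (assuming a fixed GMT-to-CST offset of -6 hours; use -5 during daylight saving, or handle it properly via the user's timezone settings):

```
| eval seq_epoch = strptime(seqTimestamp, "%Y-%m-%dT%H:%M:%SZ")
| eval seqTimestamp_cst = strftime(seq_epoch - 6*3600, "%Y-%m-%d %H:%M:%S")
```

The same two-step pattern applies to each of the other timestamp fields.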
Thanks, it looks better. I just want to color the pies in different colors:
"Low Exposure" - blue
"Medium Exposure" - yellow
"High Exposure" - red
"Compliant" - green
<not reported> - gray
I couldn't find an option to do it.
Hi, I want to find out how many license warnings there are in the current 60-day rolling window. Why is there not an easy way to find this? Surely this should be included in the license usage report? Regards, Knut
Try something like this

index="report"
| stats count(Category__Names_of_Patches) as totalNumberOfPatches by Computer_Name
| eval exposure_level = case(
    totalNumberOfPatches >= 1 AND totalNumberOfPatches <= 5, "Low Exposure",
    totalNumberOfPatches >= 6 AND totalNumberOfPatches <= 9, "Medium Exposure",
    totalNumberOfPatches >= 10, "High Exposure",
    totalNumberOfPatches == 0, "Compliant",
    1=1, "<not reported>")
| xyseries Computer_Name exposure_level totalNumberOfPatches

Then set your trellis to be by exposure_level
I am trying to convert GMT time to CST time. I am able to get the desired data using the query below. Now I am looking for a query to convert GMT time to CST.

index=test AcdId="*" AgentId="*" AgentLogon="*" chg="*" seqTimestamp="*" currStateStart="*" currActCodeOid="*" currActStart="*" schedActCodeOid="*" schedActStart="*" nextActCodeOid="*" nextActStart="*" schedDate="*" adherenceStart="*" acdtimediff="*"
| eval seqTimestamp=replace(seqTimestamp,"^(.+)T(.+)Z$","\1 \2")
| eval currStateStart=replace(currStateStart,"^(.+)T(.+)Z$","\1 \2")
| eval currActStart=replace(currActStart,"^(.+)T(.+)Z$","\1 \2")
| eval schedActStart=replace(schedActStart,"^(.+)T(.+)Z$","\1 \2")
| eval nextActStart=replace(nextActStart,"^(.+)T(.+)Z$","\1 \2")
| eval adherenceStart=replace(adherenceStart,"^(.+)T(.+)Z$","\1 \2")
| table AcdId, AgentId, AgentLogon, chg, seqTimestamp, seqTimestamp1, currStateStart, currActCodeOid, currActStart, schedActCodeOid, schedActStart, nextActCodeOid, nextActStart, schedDate, adherenceStart, acdtimediff

Below are the results I am getting:
pie chart
I got some help from a co-worker which looks to solve my issue; here is the query that he provided me with. All the credit goes to him btw!

| tstats summariesonly=true count from datamodel="Authentication" WHERE Authentication.action="failure" AND Authentication.user="*" AND Authentication.src="*" AND Authentication.user!=*$ by _time span=1d,Authentication.user
| `drop_dm_object_name("Authentication")`
| sort 0 - _time
| eval date=strftime(_time,"%Y-%m-%d %H:%M:%S")
| transaction user maxpause=24h mvlist=true
| stats max(eventcount) as maxeventcount by user
| where maxeventcount>5
| rename maxeventcount as DaysInRow

It uses the transaction command instead (which I need to study a bit to understand) and also works around the issue with the sort command's 10000-event limit. I will let him know about the approach you're suggesting and see what he thinks about it.
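For comparison, the same consecutive-days logic can be sketched without transaction by numbering streaks with streamstats over the daily buckets. This is an untested sketch against the same tstats base as above (the field names are taken from that query; the 86400 assumes exact one-day buckets):

```
| tstats summariesonly=true count from datamodel="Authentication"
    WHERE Authentication.action="failure" AND Authentication.user!=*$
    by _time span=1d, Authentication.user
| `drop_dm_object_name("Authentication")`
| sort 0 user _time
| streamstats current=f last(_time) as prev_time by user
| eval new_streak = if(isnull(prev_time) OR _time - prev_time > 86400, 1, 0)
| streamstats sum(new_streak) as streak_id by user
| stats count as days by user, streak_id
| stats max(days) as DaysInRow by user
| where DaysInRow > 5
```

streamstats is a streaming command, so it avoids both the transaction memory limits and the sort event limit mentioned above.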
Where is this information coming from?