All Posts

I am trying to fine-tune the use case "Suspicious Event Log Service Behaviour". Below is the rule logic:

(`wineventlog_security` EventCode=1100)
| stats count min(_time) as firstTime max(_time) as lastTime by dest Message EventCode
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `suspicious_event_log_service_behavior_filter`
| collect index=asx sourcetype=asx marker="mitre_id=T1070.001, execution_type=adhoc, execution_time=1637664004.675815"

but the rule is currently too noisy. Is it possible to set a 5-minute bin between the stop-logging and start-logging events, so that if logging starts again within 5 minutes the alert is ignored? Alternatively, I have seen a field named dvc_priority; can we raise alerts only for high or critical? Please help me with the query.
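For the 5-minute suppression idea, one hedged sketch: bring in the corresponding service-start events, pair each stop event (EventCode=1100) with the next event per dest, and drop stops where logging resumed within 300 seconds. The start EventCode (shown as 6005 here) and the availability of dvc_priority on these events are assumptions you should verify against your own data before relying on this.

```spl
(`wineventlog_security` (EventCode=1100 OR EventCode=6005))
| sort 0 dest, -_time
| streamstats current=f window=1 last(_time) as next_time last(EventCode) as next_code by dest
| where EventCode=1100 AND NOT (next_code=6005 AND (next_time - _time) <= 300)
| stats count min(_time) as firstTime max(_time) as lastTime by dest Message EventCode
```

Sorting newest-first per dest means streamstats sees, for each stop event, the chronologically *next* event on that host. For the priority idea, if dvc_priority is actually populated on these events, appending something like `| where dvc_priority IN ("high", "critical")` would restrict the alerts further.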
Thanks, I did try the method here, but it doesn't seem effective: Solved: User-specific browser session timeout? - Splunk Community. Are there other ways, regardless of which timeout settings I use, to ensure my particular dashboard user does not get logged out?
You could try using a dashboard with a charting option:

<dashboard version="1.1" theme="light">
  <label>Test</label>
  <row>
    <panel>
      <chart>
        <search>
          <query>| makeresults format=csv data="StudentID,Name,GPA,Percentile,Email
101,Student1,4,100%,Student1@email.com
102,Student2,3,90%,Student2@email.com
103,Student3,2,70%,Student3@email.com
104,Student4,1,40%,Student4@email.com"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.data.fieldShowList">[Name,GPA]</option>
      </chart>
    </panel>
  </row>
</dashboard>
The Hyper-V add-on is using the C:\Windows\Temp folder; that is the location I want to change.
"The server hosting Splunk Enterprise does not have unrestricted Internet access, for security reasons. We need to install and update Splunk Enterprise Security, but I would like to know which FQDNs or IPs it needs to communicate with to obtain updates. This information is needed so we can add those destinations to the firewall, so that the communication is not blocked and the updates can proceed without problems."
I have the Hyper-V add-on installed and configured on the servers hosting Hyper-V VMs, but McAfee (Trellix) Endpoint Security is blocking the creation of executable files to be run within the Windows directory. It appears a DLL is being created by PowerShell.exe as part of the add-on, and the 'Access Protection' component of McAfee sees this as a threat and blocks it. If I disable Access Protection or add PowerShell.exe to the exclusion list within McAfee, then the add-on creates a tmp file (but no visible DLL) and the configured logs are available within Splunk Enterprise. I do not want to do either of these things in McAfee; I would instead prefer to change the location used by the Hyper-V add-on to somewhere outside the Windows directory, so it would not be considered a threat. Is this possible, or is there a better way?
Hi @PickleRick. I do have a cluster manager that handles all my licenses. The issue I am having is that my first cluster manager AWS instance just stopped connecting out of the blue, so I terminated the AWS instance. Now it seems like that terminated instance is somehow still attached to the license, and my new cluster manager's error says the license is being used by the terminated AWS instance.
We have recently configured this app on our heavy forwarder and are hitting this error: "('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))". Our Splunk environment is on-prem and the endpoint starts with https. What could be the issue? Thanks.
I have a web app that makes about 90 API calls to one domain and another 50 or so API calls to a different domain. I would like to have metrics for both, but it is becoming too cluttered. I would like the calls for the second domain to go into an application container of their own, instead of all the API calls going into the same application container in EUM. Is this possible? Thanks, Greg
Hi, the newer Splunk versions have added their own monitor for macOS's logd. You should use it. https://lantern.splunk.com/Data_Descriptors/Mac_OS/Collecting_Mac_OS_log_files r. Ismo
If you have several duplicate email addresses in the "to" field, you could add dedup or something similar (stats + values) to remove them.
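For example, a self-contained sketch of the multivalue approach (the field name recipient and the sample addresses are made up):

```spl
| makeresults
| eval recipient = "a@example.com,a@example.com,b@example.com"
| makemv delim="," recipient
| eval recipient = mvdedup(recipient)
```

mvdedup removes duplicate values from a multivalue field while keeping the remaining order, which is usually what you want for a recipient list.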
I'm not very experienced with Splunk, but I've been asked to set up syslog forwarding from our UPSs to our Splunk server. I've configured it with the default settings and pointed it towards our syslog server on the default syslog port. I'm able to get test logs of any severity to go through without issue, but I am unable to see any other type of logs.

NMC: AP9641

Syslog settings on the device:
Port: 514
Protocol: UDP
Message Generation: Enabled
Facility Code: User (I've tried all the other options but I was still unable to see any logs)

Severity Mapping:
Critical: Critical
Warning: Warning
Informational: Informational
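If Splunk itself should listen for these messages (rather than a separate syslog server writing files that a forwarder monitors), a minimal inputs.conf sketch for a UDP 514 input might look like the following; the index name is an assumption, and you would need to create it first:

```ini
[udp://514]
sourcetype = syslog
index = network
connection_host = ip
```

`connection_host = ip` sets the host field from the sender's IP address, which makes it easier to tell the UPSs apart when they all forward to the same port.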
Hello, how do I create a bar chart using two fields while keeping all fields in the statistics table? The column chart automatically created the chart below. My intention is to create a report emailed periodically with all the fields, but the column chart shows only two fields. If I use the table command to show only Name and GPA, it shows the graph, but it removes the rest of the fields. Please suggest. Thanks

StudentID  Name      GPA  Percentile  Email
101        Student1  4    100%        Student1@email.com
102        Student2  3    90%         Student2@email.com
103        Student3  2    70%         Student3@email.com
104        Student4  1    40%         Student4@email.com

| makeresults format=csv data="StudentID,Name,GPA,Percentile,Email
101,Student1,4,100%,Student1@email.com
102,Student2,3,90%,Student2@email.com
103,Student3,2,70%,Student3@email.com
104,Student4,1,40%,Student4@email.com"

[Current graph]
[Expected result]
Because your desired result is an aggregation, stats is the tool of choice.

| stats max(_time) as _time values(*) as * by id
| foreach * [eval changed = mvappend(changed, if(mvcount(<<FIELD>>) > 1, "changed field \"<<FIELD>>\"", null()))]
| table _time changed
| eval changed = mvjoin(changed, ", ")

Your sample events give

_time                changed
2024-01-25 10:20:56  changed field "c"
2024-01-25 10:22:56  changed field "a", changed field "b"

Here is an emulation you can play with and compare with real data

| makeresults
| eval data = split("10:20:30 25/Jan/2024 id=1 a=1534 b=253 c=384 ...
10:20:56 25/Jan/2024 id=1 a=1534 b=253 c=385 ...
10:20:56 25/Jan/2024 id=2 a=something b=253 c=385 ...
10:21:35 25/Jan/2024 id=2 a=something b=253 c=385 ...
10:22:56 25/Jan/2024 id=2 a=xyz b=- c=385 ...", "
")
| mvexpand data
| rename data as _raw
| extract
| rex "(?<_time>\S+ \S+)"
| eval _time = strptime(_time, "%H:%M:%S %d/%b/%Y")
``` data emulation above ```
Hello @richgalloway, thanks for your help. It's odd that I didn't receive a notification when you responded.
1) It looks like it also works if I do the index first, then the DBX query.
2) How do I put the company ID inside the brackets of the DBX query dynamically? Something like:

eval variable = ..... A, B, C, ... Z (Company ID)
where companyID in $variable$

index=company
| append [ | dbxquery query="select * from employee where companyID in (A,B,C)"
| stats values(*) as * by CompanyID
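One pattern sometimes used to make the IN list dynamic is building the SQL string in a subsearch and substituting it with return. This is only an untested sketch: the connection name is a placeholder, the companyID values are assumed to be strings, and the quoting around the substituted value is notoriously fiddly, so expect to experiment:

```spl
| dbxquery connection="my_connection" query=[
    search index=company
    | stats values(companyID) as id
    | eval q = "\"select * from employee where companyID in ('" . mvjoin(id, "','") . "')\""
    | return $q ]
```

The subsearch collects the distinct company IDs, joins them into a quoted SQL IN list, and `return $q` substitutes the finished string into the outer dbxquery before it runs.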
Thanks for your reply. It returns multiple results because there's more than one tag in the array per event. "stats count by tags{}.name" returns 1 count for each tag.

os_system_name: Microsoft Windows
os_type: Workstation
os_vendor: Microsoft
os_version: 22H2
risk_score: 747.0674438476562
severe_vulnerabilities: 1
tags: [
  { name: Asset_Workstation, type: CUSTOM }
  { name: Dept_Finance, type: SITE }
]
total_vulnerabilities: 1

Results:
tags{}.name        count
Asset_Workstation  1
Dept_Finance       1

I wasn't able to run eval or where operations on tags{}.name without getting an error, so I was stuck. I just stumbled on my answer, but I appreciate your time looking at this. I knew it had to be a simple query, I just wasn't initially able to put it together. Feel free to offer a better, more efficient way to get the results below.

(index="index_name")
| dedup id
| stats count by tags{}.name
| rename tags{}.name AS dept
| where (dept like "Dept_%")

Results:
dept          count
Dept_Finance  1
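One alternative sketch, using the field names from your sample: rename the multivalue field first (eval and where choke on the literal braces in tags{}.name), expand it to one row per tag, and then aggregate. Untested against your data:

```spl
(index="index_name")
| rename tags{}.name as dept
| mvexpand dept
| search dept="Dept_*"
| dedup id dept
| stats count by dept
```

Deduplicating on both id and dept (rather than id alone) avoids undercounting when one asset carries several matching tags.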
Use spath for JSON data:

| spath input=properties
| spath input=response.result custom_tags
| spath input=custom_tags
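A runnable sketch of the same idea on made-up data (the JSON shape and field names here are hypothetical, not taken from the thread):

```spl
| makeresults
| eval properties = "{\"response\": {\"result\": \"ok\"}}"
| spath input=properties output=result path=response.result
```

`input=` tells spath which field holds the JSON to parse, so chaining spath calls lets you peel nested or doubly encoded JSON one layer at a time.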
Hello, I am brand new to Splunk, and after watching a short tutorial to get started, I saw that Settings => Data inputs => Local Event Log Collection does not appear in my version of Splunk Enterprise. I have it on macOS Monterey and it seems to work fine, but I know most people use it on Windows. Can someone please help me find out how to collect local event logs with Splunk on a Mac? Thank you for your help. Noé
@ITWhisperer suggested adding it to the by clause (also known as groupby in Splunk lingo). I literally just added it after by. Something like:

`mbp_ocp4` kubernetes.container.name=*service* level=NG_SERVICE_PERFORMANCE SERVICE!=DPTDRetrieveArrangementDetail*
| eval resp_time_exceeded = if(EXETIME > 3000, "1", "0")
| bin span=30m _time bins=2
| stats count as "total_requests", sum(resp_time_exceeded) as long_calls by _time kubernetes.namespace.name, kubernetes.container.name
| eval Percent_Exceeded = (long_calls / total_requests) * 100
| where total_requests > 200 AND Percent_Exceeded > 5
Thank you for asking a nicely nuanced question. Is that "hostname" the built-in Splunk field "host"? It doesn't matter to the solution, though.

| stats values(result) as result by hostname
| where mvcount(result) == 1 AND result == "1"