All Posts



Great, thanks, that works.
For us, the upgrade worked when we added the parameter USE_LOCAL_SYSTEM=1; the service was then started as Local System. When we did an uninstall of 9.1.2 and a fresh install of 9.1.3 without the parameter, the service was installed with the user NT SERVICE\SplunkForwarder.
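For anyone repeating this, a full install command would be along these lines (a sketch only: the package filename is illustrative, and AGREETOLICENSE is the standard Splunk MSI property for accepting the license in silent installs):

```
msiexec.exe /i splunkforwarder-9.1.3-x64-release.msi USE_LOCAL_SYSTEM=1 AGREETOLICENSE=Yes /quiet
```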
Yes, that was the documentation I was going on as well. As soon as I switched the old host to Standalone mode and configured Distributed mode on the new host, the indexers appeared, so that's the first part of the migration done anyway.
Hello @ITWhisperer, I wonder why I didn't get a notification when you responded. Is it possible to display only a 2-field column chart in a weekly report, but with all fields in the statistics table? Thank you for your help.
Hello, indeed the links provided are the ones I'm following. This issue has been resolved internally. We found out that Django manages the web server itself, rather than it being run through manage.py with Python. Thank you for your response.
I am switching from local auth to SAML authentication, and when logging in, the username is now a random string. How do I get it to be the "nickname" or friendly name that is provided in the SAML response? Is there a way to override the field in the saml stanza in the authentication.conf file? Changing the realName field in the authenticationResponseAttrMap_SAML stanza in authentication.conf doesn't actually change the username. If it is not possible, how would I transfer knowledge objects to the "new" users?
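For context, a typical attribute map looks like the sketch below (the claim URIs are illustrative and depend on your IdP). Note that realName only changes the displayed full name; the login username itself is taken from the NameID in the SAML subject, which is why remapping it usually has to happen on the IdP side rather than in authentication.conf:

```
[authenticationResponseAttrMap_SAML]
role     = http://schemas.microsoft.com/ws/2008/06/identity/claims/role
realName = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/displayname
mail     = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress
```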
I am trying to fine-tune the use case "Suspicious Event Log Service Behaviour". Below is the rule logic:

(`wineventlog_security` EventCode=1100)
| stats count min(_time) as firstTime max(_time) as lastTime by dest Message EventCode
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `suspicious_event_log_service_behavior_filter`
| collect index=asx sourcetype=asx marker="mitre_id=T1070.001, execution_type=adhoc, execution_time=1637664004.675815"

but the rule is currently too noisy. Is it possible to set a 5-minute window between the stop-logging and start-logging events? If logging starts again within 5 minutes, I want to ignore the alert. Alternatively, I have seen a field named dvc_priority; can we set the alerts only for high or critical? Help me with the query, please.
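One way to sketch the 5-minute suppression (an assumption-laden sketch, not the official detection: it treats any later Security-log event from the same host as evidence that logging resumed, and hard-codes 300 seconds as the window):

```
`wineventlog_security`
| sort 0 dest - _time
| streamstats current=f window=1 last(_time) as next_event_time by dest
| where EventCode=1100
| eval gap = next_event_time - _time
| where isnull(gap) OR gap > 300
| stats count min(_time) as firstTime max(_time) as lastTime by dest Message EventCode
```

With events sorted newest-first per dest, next_event_time is the timestamp of the event that followed the stop; if that gap is 300 seconds or less, logging resumed quickly and the stop is dropped. As for dvc_priority: that field is typically populated by the Enterprise Security Assets & Identities framework when the host is in your asset list, so adding something like `| where dvc_priority IN ("high", "critical")` could narrow the alerts to important hosts, provided the field is actually populated in your events.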
Thanks, I did try the method here, but it doesn't seem effective: Solved: User-specific browser session timeout? - Splunk Community. Are there other ways, regardless of which timeout settings, that I can use to ensure my particular dashboard user does not get logged out?
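For reference, the relevant browser-session timeouts are per instance (not per user) and live in web.conf on the search head; a sketch with illustrative values:

```
# web.conf, [settings] stanza -- values are in minutes and purely illustrative
[settings]
tools.sessions.timeout = 480
ui_inactivity_timeout = 480
```

Because these are global, keeping a single dashboard user logged in indefinitely without raising the timeout for everyone generally isn't possible through these settings alone.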
You could try using a dashboard with a charting option:

<dashboard version="1.1" theme="light">
  <label>Test</label>
  <row>
    <panel>
      <chart>
        <search>
          <query>| makeresults format=csv data="StudentID,Name,GPA,Percentile,Email
101,Student1,4,100%,Student1@email.com
102,Student2,3,90%,Student2@email.com
103,Student3,2,70%,Student3@email.com
104,Student4,1,40%,Student4@email.com"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.data.fieldShowList">[Name,GPA]</option>
      </chart>
    </panel>
  </row>
</dashboard>
The Hyper-V add-on is using C:\Window\Temp folder as the location I want to change from.
"The server hosting Splunk Enterprise does not have unrestricted Internet access, for security reasons. We need to install and update Splunk Enterprise Security, but I would like to know which FQDNs or IPs it needs to communicate with to obtain updates. This information is needed so those destinations can be added to the firewall, so that the communication is not blocked and updates can be performed without issues."
I have the Hyper-V add-on installed and configured on the servers hosting Hyper-V VMs, but McAfee (Trellix) Endpoint Security is blocking the creation of executable files to be run within the Windows directory. It appears a DLL is being created by PowerShell.exe as part of the add-on, and the 'Access Protection' component of McAfee sees this as a threat and blocks it. If I disable Access Protection or add PowerShell.exe to the exclusion list within McAfee, then the add-on creates a tmp file (but no visible DLL) and the configured logs are available within Splunk Enterprise. I do not want to do either of these options with McAfee and would instead prefer to change the location used by the Hyper-V add-on to be outside the Windows directory, so it would not be considered a threat. Is this possible, or is there a better way?
Hi @PickleRick, I do have a cluster manager that handles all my licenses. The issue I am having is that my first cluster manager AWS instance just stopped connecting out of the blue, so I terminated the AWS instance. Now it seems like that terminated instance is somehow still attached to the license, and my new cluster manager's error is that the license is being used by the terminated AWS instance.
We have recently configured this app on our heavy forwarder and are hitting this error: "('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))". Our Splunk environment is on-prem and the endpoint starts with https. What could be the issue? Thanks.
I have a webapp that makes about 90 API calls to one domain, and another 50 or so API calls to a different domain. I would like to have metrics for both, but it is becoming too cluttered. I would like the calls for the second domain to go into an application container of their own, instead of all the API calls going into the same application container in EUM. Is this possible? Thanks, Greg
Hi, the newer Splunk versions have added their own monitor for macOS's logd. You should use it. https://lantern.splunk.com/Data_Descriptors/Mac_OS/Collecting_Mac_OS_log_files r. Ismo
If you have several duplicate email addresses in the to field, then you could add dedup or something similar (stats + values) to remove those.
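A minimal sketch of the stats + values approach (the field name `to` comes from the thread; the sample data is invented for illustration):

```
| makeresults format=csv data="subject,to
hello,a@example.com
hello,a@example.com
hello,b@example.com"
| stats values(to) as to by subject
```

values() drops the duplicate address while keeping the distinct ones; `| dedup to` would instead keep only the first event per address.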
I'm not very experienced with Splunk, but I've been asked to set up syslog forwarding from our UPSs to our Splunk server. I've configured it with the default settings and pointed it towards our syslog server on the default syslog port. I'm able to get test logs of any severity to go through without issue, but I am unable to see any other type of logs.

NMC: AP9641

Syslog settings on device:
Port: 514
Protocol: UDP
Message Generation: Enabled
Facility Code: User (I've tried all the other options but was still unable to see any logs)

Severity Mapping:
Critical: Critical
Warning: Warning
Informational: Informational
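If the syslog destination is the Splunk server itself, a minimal UDP input on the receiving instance would look like this sketch (ports below 1024 require Splunk to run with sufficient privileges, and any host firewall must allow UDP 514):

```
# inputs.conf on the receiving Splunk instance -- port must match the NMC setting
[udp://514]
sourcetype = syslog
connection_host = ip
```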
Hello, how do I create a bar chart using two fields while keeping all fields in the statistics table? The column chart automatically created the chart below. My intention is to create a report, emailed periodically, with all the fields, but with the column chart showing only two fields. If I use the table command to show only Name and GPA, it shows the two-field graph but removes the rest of the fields. Please suggest. Thanks.

StudentID  Name      GPA  Percentile  Email
101        Student1  4    100%        Student1@email.com
102        Student2  3    90%         Student2@email.com
103        Student3  2    70%         Student3@email.com
104        Student4  1    40%         Student4@email.com

| makeresults format=csv data="StudentID,Name,GPA,Percentile,Email
101,Student1,4,100%,Student1@email.com
102,Student2,3,90%,Student2@email.com
103,Student3,2,70%,Student3@email.com
104,Student4,1,40%,Student4@email.com"

(Current graph and expected result screenshots omitted.)
Because your desired result is an aggregation, stats is the tool of choice.

| stats max(_time) as _time values(*) as * by id
| foreach * [eval changed = mvappend(changed, if(mvcount(<<FIELD>>) > 1, "changed field \"<<FIELD>>\"", null()))]
| table _time changed
| eval changed = mvjoin(changed, ", ")

Your sample events give:

_time                changed
2024-01-25 10:20:56  changed field "c"
2024-01-25 10:22:56  changed field "a", changed field "b"

Here is an emulation you can play with and compare with real data:

| makeresults
| eval data = split("10:20:30 25/Jan/2024 id=1 a=1534 b=253 c=384 ...
10:20:56 25/Jan/2024 id=1 a=1534 b=253 c=385 ...
10:20:56 25/Jan/2024 id=2 a=something b=253 c=385 ...
10:21:35 25/Jan/2024 id=2 a=something b=253 c=385 ...
10:22:56 25/Jan/2024 id=2 a=xyz b=- c=385 ...", "
")
| mvexpand data
| rename data as _raw
| extract
| rex "(?<_time>\S+ \S+)"
| eval _time = strptime(_time, "%H:%M:%S %d/%b/%Y")
``` data emulation above ```