All Topics



Hi, here is my scenario: there are many Windows servers whose Windows service information flows into my Splunk Enterprise, and a Phantom instance is also available. I would like to run a playbook on Phantom once a given service's status is "stopped". Could you please share documentation or a sample playbook for achieving this? Regards
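As a sketch of the Splunk side, an alert like the following could detect stopped services, with the Phantom App for Splunk's "Send to Phantom" alert action attached so a playbook runs on the resulting container. The index, sourcetype, and field names below are assumptions based on a WinHostMon-style service input, and "My Critical Service" is a hypothetical service name; adjust all of them to your data:

```
index=wineventlog sourcetype=WinHostMon type=Service
| stats latest(State) as State by host, DisplayName
| where State="Stopped" AND DisplayName="My Critical Service"
```

Saved as an alert that triggers when the result count is greater than zero, each matching host would produce an event that the Phantom alert action can forward.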
Hello Splunkers, my sole intention is to verify whether events are being fetched correctly from UFs, as part of testing after deploying a TA through the DS. I don't need indexer clustering or a distributed search head. I'm thinking of provisioning a test server, installing Splunk Enterprise with a personalized dev/test license, and placing it within the prod server network for testing and debugging. Will this suffice for my need, and how reliable is a personalized dev/test license for such testing or development monitoring? Will this decision hold up longer term, considering I'd renew the license every 6 months? What other factors do I need to understand? I'm trying to find an alternative to extending our prod license, since our prod license isn't large. Any suggestions to help me make this decision would be appreciated.
Over the weekend we bounced our indexers, and we found that the data model accelerations take over an hour to stabilize after such bounces. Their CPU is close to 100% for a while, searches take very long to complete, and we don't fully trust the system when the CPU stays that high for so long. Any thoughts on how to improve the situation?
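One way to watch the rebuild catch up after a restart is the summarization REST endpoint that the data model audit dashboards use. This is a sketch only; the exact field names (e.g. `summary.complete`) vary by Splunk version, so verify them against your own output:

```
| rest /services/admin/summarization by_tstats=t splunk_server=local count=0
| eval pct_complete = round('summary.complete' * 100, 1)
| table summary.id, pct_complete
```

Running this periodically after a bounce shows which accelerations are still rebuilding and roughly how far along they are.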
We are ingesting lots of Oracle audit data from various instances into Splunk. We wonder if there are any dashboards/apps for this audit data out there?
I have the add-on "TA Microsoft Windows Defender" installed on our UFs via a deployment server. The configuration is identical on all UFs, but some are working (sending logs to Splunk Cloud) and the others are not. I can see that all servers successfully send the other event log channels (System, Application, and Security), but some do not send Windows Defender logs. Core functionality is working, with no errors related to the Defender TA either. This is on Windows Server 2016. Thanks!
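A hedged first troubleshooting step is to compare the UF's internal logs on a working host against a broken one. In this sketch, `<affected_uf>` is a placeholder for one of the hosts that is not sending Defender events:

```
index=_internal sourcetype=splunkd host="<affected_uf>" (log_level=WARN OR log_level=ERROR)
| search "*WinEventLog*" OR "*Defender*"
| stats count by host, component
| sort - count
```

If the broken hosts show warnings or errors for the Defender event log channel that the working hosts don't, that usually narrows the problem to channel permissions or input configuration on those machines.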
Hi, I just can't get the upgrade from 7.3.3 to 8.0.6 working. I'm trying to upgrade a clustered Splunk deployment (master node), and I enabled some extra logging in web.conf:

[settings]
appServerProcessLogStderr = true

Then I could see at least the following suspicious lines:

10-05-2020 19:46:46.301 +0300 WARN X509Verify - X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) should not be used, as it is issued by Splunk's own default Certificate Authority (CA). This puts your Splunk instance at very high risk of the MITM attack. Either commercial-CA-signed or self-CA-signed certificates must be used; see: http://docs.splunk.com/Documentation/Splunk/latest/Security/Howtoself-signcertificates
10-05-2020 19:46:46.306 +0300 INFO TailReader - Registering metrics callback for: batchreader0
10-05-2020 19:46:46.306 +0300 INFO TailReader - Starting batchreader0 thread
10-05-2020 19:46:46.323 +0300 INFO UiHttpListener - Limiting UI HTTP server to 1365 sockets
10-05-2020 19:46:46.323 +0300 INFO UiHttpListener - Limiting UI HTTP server to 631 threads
10-05-2020 19:46:46.324 +0300 INFO UiAppServer - Starting stderr collecting thread
10-05-2020 19:46:46.626 +0300 INFO UiAppServer - From appserver: Traceback (most recent call last):
10-05-2020 19:46:46.626 +0300 INFO UiAppServer - From appserver: File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/root.py", line 10, in <module>
10-05-2020 19:46:46.626 +0300 INFO UiAppServer - From appserver: from cherrypy import expose
10-05-2020 19:46:46.626 +0300 INFO UiAppServer - From appserver: File "/opt/splunk/lib/python3.7/site-packages/cherrypy/__init__.py", line 76, in <module>
10-05-2020 19:46:46.626 +0300 INFO UiAppServer - From appserver: from . import _cprequest, _cpserver, _cptree, _cplogging, _cpconfig
10-05-2020 19:46:46.626 +0300 INFO UiAppServer - From appserver: File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpserver.py", line 6, in <module>
10-05-2020 19:46:46.626 +0300 INFO UiAppServer - From appserver: from cherrypy.process.servers import ServerAdapter
10-05-2020 19:46:46.626 +0300 INFO UiAppServer - From appserver: File "/opt/splunk/lib/python3.7/site-packages/cherrypy/process/__init__.py", line 13, in <module>
10-05-2020 19:46:46.626 +0300 INFO UiAppServer - From appserver: from .wspbus import bus
10-05-2020 19:46:46.626 +0300 INFO UiAppServer - From appserver: File "/opt/splunk/lib/python3.7/site-packages/cherrypy/process/wspbus.py", line 66, in <module>
10-05-2020 19:46:46.626 +0300 INFO UiAppServer - From appserver: import ctypes
10-05-2020 19:46:46.626 +0300 INFO UiAppServer - From appserver: File "/opt/splunk/lib/python3.7/ctypes/__init__.py", line 551, in <module>
10-05-2020 19:46:46.626 +0300 INFO UiAppServer - From appserver: _reset_cache()
10-05-2020 19:46:46.626 +0300 INFO UiAppServer - From appserver: File "/opt/splunk/lib/python3.7/ctypes/__init__.py", line 273, in _reset_cache
10-05-2020 19:46:46.626 +0300 INFO UiAppServer - From appserver: CFUNCTYPE(c_int)(lambda: None)
10-05-2020 19:46:46.626 +0300 INFO UiAppServer - From appserver: MemoryError

I can also deliver more logs if needed. Any ideas what could be wrong?
We are using Puppet to upgrade our Splunk UFs to version 8.0.6. We recently updated successfully from 6.x to 7.1.1, and are now trying to update again to 8.0.6 using splunkforwarder-8.0.6-152fb4b2bb96-linux-2.6-x86_64.rpm. Our RHEL6 systems are working correctly; however, we are getting the following errors on RHEL5. Is this package installable on RHEL5? If so, can you tell us how to resolve these errors? Thank you.

Error: Could not update: Execution of '/usr/bin/yum -d 0 -e 0 -y install splunkforwarder-8.0.6-152fb4b2bb96' returned 1: ERROR with rpm_check_debug vs depsolve: rpmlib(FileDigests) is needed by splunkforwarder-8.0.6-152fb4b2bb96.x86_64
Error: /Stage[main]/Bw_packages::Splunkforwarder/Package[splunkforwarder]/ensure: change from 7.1.1-8f0ead9ec3db to 8.0.6-152fb4b2bb96 failed: Could not update: Execution of '/usr/bin/yum -d 0 -e 0 -y install splunkforwarder-8.0.6-152fb4b2bb96' returned 1: ERROR with rpm_check_debug vs depsolve: rpmlib(FileDigests) is needed by splunkforwarder-8.0.6-152fb4b2bb96.x86_64
Hello dear Splunkers, hope you're doing well! My organization has over 20k servers, including Windows and Linux, and we have a lot of data coming in every day. We are currently trying to build some use cases for the Windows AD logs. Are there any apps you can recommend for a distributed environment that I can leverage for the AD logs? Also, we have Splunk ES, and we are trying to use its capabilities with our overall logs. Please provide some recommendations. Thanks in advance.
Does the use of HEC require traversing the public internet to get data into Splunk? For example, suppose my customer were the government, and the data passed through Firehose into Splunk must not touch the internet.
Hi all, I have this query:

index=checkpoint sourcetype=opsec:anti_virus OR sourcetype=opsec:anti_malware Protection_Name=* NOT action=blocked NOT te_action=block
| stats count dc(Protection_Name), values(Protection_Type), values(Destination_DNS_Hostname), values(te_action), values(malware_action), values(file_name), values(file_md5), values(dest), values(Protection_Name) by src
| `map_notable_fields`
| rename values(Protection_Type) as "Protection_Type", values(Destination_DNS_Hostname) as "Name_Resolved", values(te_action) as Action, values(malware_action) as "Malicious_Intent", values(file_name) as "File_Name", values(file_md5) as "File_Hash", values(dest) as Dest, src as Source, values(Protection_Name) as "Protection_Names", dc(Protection_Name) as "Infection_Count"
| where Infection_Count>1
| table "Infection_Count", Source, "Protection_Names", "Protection_Type", Dest, "Name_Resolved", "File_Name", "File_Hash", "Malicious_Intent", Action

How can I know how many notable events I would have gotten historically, per time window? I mean not a single query over the last 2 hours, but the result of running the query every 2 hours over the last month. In other words, if the rule runs every 2 hours, how many notables per day should we expect?
I tried to do that, but it's not working:

| gentimes start=-20 end=0 increment=1d
| map maxsearches=90 index=checkpoint sourcetype=opsec:anti_virus OR sourcetype=opsec:anti_malware Protection_Name=* NOT action=blocked NOT te_action=block
| stats count dc(Protection_Name), values(Protection_Type), values(Destination_DNS_Hostname), values(te_action), values(malware_action), values(file_name), values(file_md5), values(dest), values(Protection_Name) by src
| `map_notable_fields`
| rename values(Protection_Type) as "Protection_Type", values(Destination_DNS_Hostname) as "Name_Resolved", values(te_action) as Action, values(malware_action) as "Malicious_Intent", values(file_name) as "File_Name", values(file_md5) as "File_Hash", values(dest) as Dest, src as Source, values(Protection_Name) as "Protection_Names", dc(Protection_Name) as "Infection_Count"
| where Infection_Count>1
| table "Infection_Count", Source, "Protection_Names", "Protection_Type", Dest, "Name_Resolved", "File_Name", "File_Hash", "Malicious_Intent", Action

What can I do? Thanks
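A sketch that avoids `map` entirely: take the last 30 days of the same events, bin them into fixed 2-hour buckets, apply the per-src threshold inside each bucket, and then roll the bucket counts up per day. The index, sourcetypes, and field names are taken from the rule above; the 30-day window and the fixed-bucket approximation of scheduled 2-hour runs are assumptions:

```
index=checkpoint sourcetype=opsec:anti_virus OR sourcetype=opsec:anti_malware Protection_Name=* NOT action=blocked NOT te_action=block earliest=-30d@d
| bin _time span=2h
| stats dc(Protection_Name) as Infection_Count by _time, src
| where Infection_Count > 1
| stats count as notables by _time
| timechart span=1d sum(notables) as notables_per_day
```

Note this is an approximation: fixed 2-hour buckets aligned to the clock are not exactly the same windows a scheduled search would have seen, but they give a usable per-day estimate of notable volume.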
If I created an Investigation in Splunk ES via the Investigation Workbench from the Investigations page ("Create new Investigation"), could I also create a new notable event associated with that Investigation? I'm trying to see if there is a way to display an investigation on the Incident Review dashboard, since we are leveraging that dashboard for reporting purposes.
Hello Splunkers, I have a report (apple_weekly_report) which runs every week, and I receive the report by email. The issue is that a date stamp is automatically appended to the report name (apple_weekly_report-2020-09-01.csv), but I do not want the date stamp appended; I just need apple_weekly_report.csv as my report name. Is there any way to disable this date stamp? Thank you
@chrisyounger Is it possible to reset the value of $flow_map_viz-drilldown$ when clicking on the white space of the diagram? I'm using it to control the granularity of a drilldown based on clicking a node, and I'd like to reset the granularity when the user clicks the whitespace in the diagram, for example.
Hi all, I am trying to use the regex below in my Splunk SPL. It works fine in Rubular but does not work as SPL: |rex field=_raw (?<Severity?\s\w{7,8}\;) Please suggest.
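For reference, a version with the two syntax problems fixed might look like this: the capture-group name needs a closing `>` (`(?<Severity>...)`), and `rex` requires the regex to be wrapped in double quotes. Moving `\s` outside the group is an optional tweak that keeps the leading whitespace out of the captured value:

```
| rex field=_raw "\s(?<Severity>\w{7,8});"
```

Rubular accepts the pattern loosely, but SPL's `rex` is strict about the `(?<name>...)` form and the quoting, which is the usual reason a regex "works in Rubular but not in Splunk".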
I have a base search that returns a column whose values I sum as a metric, and that metric is used as a KPI of a service with entity rules for one field (fieldX). This field matches around 10 entities, each having a different value for the field. However, when I enter fieldX as the split-by-entity field in my base search, my results in the Service Analyzer and in my glass tables all become 0. When I remove the split by entity (by choosing the option "No"), the values all work as expected. I'm wondering why splitting by entity causes all my results to return 0; I've redone this again and again with the same result.
We are using EMR Spark, and all the logs go to Splunk; there are multiple types of jobs running in the cluster. I want to set up a Splunk alert: if more than 5% of the total number of jobs fail, we get the alert.
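As a sketch of the alert search, where the index name, sourcetype, lookback window, and `status` field are all hypothetical placeholders for whatever your Spark logs actually contain:

```
index=emr sourcetype=spark:jobs earliest=-24h
| stats count as total_jobs, count(eval(status="FAILED")) as failed_jobs
| eval failed_pct = round(failed_jobs / total_jobs * 100, 2)
| where failed_pct > 5
```

Saved as a scheduled alert that triggers when the number of results is greater than 0, this fires only when the failure rate over the window exceeds 5%.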
I want to monitor our Linux Splunk instances, so I'm using the Splunk Add-on for Nix to collect metrics data, sending it to the em_metrics index, and monitoring the hosts as SAI entities via SAI on the search head. We have a clustered environment: I'm trying to get data from a 3-member indexer cluster and a 3-member SH cluster, with universal forwarders installed on all of our instances. I tried to deploy the add-on to the UFs, but I couldn't see the entities in SAI (no data coming through from the hosts). Now I have installed the Nix add-on on the indexer cluster and SH cluster and changed the inputs to send data to the em_metrics index used by SAI. The issue I'm facing is that for the indexer/SH cluster, SAI displays only the indexer master and SH cluster captain as entities. I can see the IPs of the other members in the entities' dimensions, but I want each host as a separate entity. Could anyone please guide me on whether I am doing something wrong, or how I can achieve what I want to see in the SAI app? I have been stuck for a few days, so any help is appreciated. Thanks.
In my cloud, different tools such as Jira and ServiceNow are available, and I can send alert notifications to those tools. How can I add Splunk, and how do I get syslog data from Prisma Cloud into Splunk? What do I have to do for that? Is there anyone who can help with this?
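A minimal sketch of the receiving side, assuming Prisma Cloud is configured to send syslog to a Splunk forwarder over UDP port 514. The port, index, and sourcetype values here are assumptions, and in production a dedicated syslog server (writing files that a UF monitors) is generally preferred over a direct Splunk syslog input:

```
# inputs.conf on the receiving Splunk instance (hypothetical values)
[udp://514]
sourcetype = prisma:cloud:syslog
index = cloud_security
```

On the Prisma Cloud side, you would point its syslog/notification integration at this host and port; alert notifications to Jira and ServiceNow can keep working in parallel.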
Hello, guys. I'm having trouble with the output of the lookup command. I know the right syntax of the command: ...| lookup <lookup-table-name> <lookup-field1> AS <event-field1>, <lookup-field2> AS <event-field2> OUTPUTNEW <lookup-destfield1> AS <event-destfield1>, <lookup-destfield2> AS <event-destfield2> And I'm sure that the described fields are in the lookup. However, I still get an error message. Any idea what it could be? P.S. I also tried with OUTPUTNEW; nothing changed.
Hi all, I have been trying to build a search where I can monitor expired user accounts. So far I have this:

| ldapsearch search="(&(objectClass=user)(!(objectClass=computer)))" attrs="*" | table accountExpires sAMAccountName

My problem is the time output. For example, in AD the time is set to, say, 10/5/2020 12:00:00 AM, and Splunk gives 2020-10-04T22:00:00Z as output. I know Splunk gives it in UTC, but we are in GMT+2. Is there a way, without modifying any settings, to get the same output as in AD? Thank you very much, Sasquatchatmars
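One hedged workaround is to re-render the timestamp with a fixed +2h offset in SPL. This sketch assumes `accountExpires` always arrives in the UTC `%Y-%m-%dT%H:%M:%SZ` shape shown above, that the search head's clock is UTC (since `strptime` interprets the string in the search head's local time), and it ignores DST changes:

```
| ldapsearch search="(&(objectClass=user)(!(objectClass=computer)))" attrs="accountExpires,sAMAccountName"
| eval expires_local = strftime(strptime(accountExpires, "%Y-%m-%dT%H:%M:%SZ") + 7200, "%m/%d/%Y %I:%M:%S %p")
| table expires_local sAMAccountName
```

The `+ 7200` adds two hours in seconds; a cleaner long-term fix is usually setting the user's timezone preference in Splunk rather than hard-coding an offset, but that counts as modifying settings.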