All Topics


Good afternoon,

I have a monitoring architecture with three nodes running Splunk Enterprise. One node acts as search head, one as indexer, and one holds all other roles. I have a HEC input on the indexer node to receive data from third parties. The sourcetype configured to store the data is as follows:

[integration]
DATETIME_CONFIG = CURRENT
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
description = test
disabled = false
pulldown_type = 1
INDEXED_EXTRACTIONS = none
KV_MODE = json

My problem is that when I fetch the data, some events have their fields extracted in duplicate while others have them extracted only once. Please, can you help me?

Best regards, thank you very much
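A common cause of doubled fields is JSON being parsed twice: once at index time (for example when events arrive on HEC with fields already indexed) and again at search time via KV_MODE = json. A minimal sketch of a search-head-side props.conf change that disables the second pass — this assumes the fields really are arriving as indexed fields, so verify that first; the stanza values are illustrative, not a confirmed fix:

```
[integration]
# If the JSON fields already arrive as indexed fields via HEC,
# turn off search-time JSON extraction to avoid duplicates:
KV_MODE = none
AUTO_KV_JSON = false
```

Conversely, if only some events carry indexed fields, that would also explain why only some events show duplicates.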
"Why is my Correlation Search not showing up in Incident Review?" "How do I determine why a Correlation Search isn't creating a notable event?"
Hello family, here is a concern I am experiencing: I have correlation searches that are activated/enabled, and to verify that they are receiving the CIM-compliant data required to make them work, I search their names one by one on a Splunk Enterprise Security dashboard panel to make sure the dashboard populates properly, but nothing comes out. Yet when I run the query behind these correlation searches in the Search & Reporting app, I do see the events populate. I have already gone through the Splunk documentation on CIM compliance and watched some YouTube videos, but I still don't get it. Any extra sources from anyone that can help me understand will be very welcome. Thanks and best regards.
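ES dashboards and most correlation searches read from CIM data models rather than raw indexes, so a useful check is to query the data model directly. A hedged sketch, using the Authentication data model purely as an example — substitute whichever model your correlation search maps to:

```
| datamodel Authentication Authentication search
| head 10
```

If this (or the accelerated equivalent, `| tstats count from datamodel=Authentication by sourcetype`) returns nothing while the raw search returns events, the data is not being CIM-mapped — typically missing tags or eventtypes — or data model acceleration has not caught up yet.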
Hi Team, I have been observing one skipped search error on my CMC. The error is: "The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached". The percentage skipped is 33.3%. I saw many solutions online stating to increase the user/role search job limit or to make changes in limits.conf (which I don't have access to), but I couldn't figure it out or find clear explanations. Can someone help me solve this? Also, the saved search in question runs on a cron of 2-59/5 * * * * for a time range of 5 minutes. Please suggest.
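That particular message usually means a single run of the search occasionally takes longer than the 5-minute schedule interval, so the next run collides with the still-running one and is skipped. The per-search cap lives in savedsearches.conf, which may be editable even when limits.conf is not. A hedged sketch — the stanza name is a placeholder and the values are illustrative, not a confirmed fix:

```
[my_scheduled_search]
# Allow one overlapping run if a single execution occasionally
# exceeds the 5-minute interval (default is 1):
max_concurrent = 2
# Or let the scheduler delay the run within a window
# instead of skipping it outright:
schedule_window = auto
```

It is also worth checking actual runtimes first (e.g. in the _internal scheduler logs) to confirm the search really overruns its interval.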
Hi, I'm trying to clean up an old Splunk Cloud instance. One thought that occurred to me is to find scheduled searches that reference indexes that no longer exist; the indexes have long since been cleaned up and deleted, but scheduled searches may still exist against them. I tried looking in the internal indexes but don't see any sort of warning message for "index does not exist". Does anyone know if such a warning message shows up that I could then use to deactivate these searches? Thanks in advance, Dave
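Rather than waiting for warning messages, one approach is to inventory which indexes each scheduled search references by pulling the search strings over REST and extracting the index= tokens. A hedged sketch — the rex pattern is a rough illustration and will miss macros, subsearches, and quoted variants:

```
| rest /servicesNS/-/-/saved/searches
| search is_scheduled=1
| rex field=search max_match=0 "index\s*=\s*\"?(?<referenced_index>[\w\-\*]+)"
| table title referenced_index
| mvexpand referenced_index
```

The resulting list can then be compared against the indexes that actually exist, e.g. from `| eventcount summarize=false index=* | stats count by index`.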
IHAC running a large C11 on-prem stack. They are in a bit of a pickle due to unsupported RHEL 7, are halfway through an upgrade from 9.3.x to 9.4.x, and are seeking advice on the recent CVEs. My question is: what version of golang is installed with their particular version of Splunk? This is in response to SVD-2025-0603 | Splunk Vulnerability Disclosure. It is not clear how to verify this.
Hi, I need to generate a server metric report which gives server availability and other hardware metrics. I tried DEXTER, but it doesn't give the machine availability metric; correct me if any changes can be made to get this metric in DEXTER. I tried the API POST call, but it returns the report at 10-minute granularity. For example, if I fetch the last 1 hour, it splits the data into six 10-minute buckets, which makes it very complex to derive the desired metric. I tried a dashboard, but I have to create multiple widgets or dashboards to achieve this, and even then the generated reports are not clear. Kindly suggest a way to get the machine availability metric and other hardware metrics from AppDynamics as a report.
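On the granularity point: the AppDynamics Controller metric REST API can collapse the whole time range into a single aggregated value via the rollup flag, which avoids the per-interval buckets. A hedged sketch — the application name and metric path below are placeholders to adapt:

```
GET /controller/rest/applications/<app>/metric-data
    ?metric-path=Application Infrastructure Performance|*|Hardware Resources|CPU|%Busy
    &time-range-type=BEFORE_NOW
    &duration-in-mins=60
    &rollup=true
    &output=JSON
```

With rollup=true the controller returns one data point aggregated over the full range instead of one per granularity interval, which is much easier to turn into an availability-style report.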
Hi, I am experiencing an issue with the SA-ldapsearch TA.

I am using this search to validate the timestamp:

index=<index name>
| eval bucket=_bkt
| eval diff = _indextime - _time
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S")
| eval capturetime=strftime(_time,"%Y-%m-%d %H:%M:%S")
| table indextime capturetime diff _raw

I can see that indextime = 2025-06-08 05:00:20 but capturetime = 2020-01-13 10:00:01. Splunk is ingesting the latest LDAP events, but the _time field carries timestamps from 2020. In the raw event, there are multiple timestamps available:

"whenCreated":"2018-06-05 10:43:19+00:00
"whenChanged":"2024-02-11 13:52:37+00:00
"pwdLastSet":"2019-07-24T06:41:44.698530Z
"lastLogonTimestamp":"2019-07-24T06:41:44.282975Z

but I am not able to understand how the TA is extracting the 2020 timestamp from the raw event, as there is no such timestamp in it.
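When no explicit timestamp configuration exists for a sourcetype, Splunk's timestamp processor scans the raw event and can latch onto an unintended number sequence. One way to make the behavior deterministic is to pin the sourcetype's timestamp handling in props.conf. A hedged sketch — the stanza name is a placeholder for the actual sourcetype, and which field should drive _time is a judgment call:

```
[ldap:query]
# Option 1: use ingestion time and ignore timestamps in the raw event
DATETIME_CONFIG = CURRENT

# Option 2 (instead of option 1): anchor extraction to a known field
# TIME_PREFIX = "whenChanged":"
# TIME_FORMAT = %Y-%m-%d %H:%M:%S
# MAX_TIMESTAMP_LOOKAHEAD = 25
```

Checking which props stanza actually applies (e.g. with btool) would also confirm whether the TA or another app is supplying conflicting timestamp settings.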
I am using the Synthetics browser test to track availability of our Citrix client application endpoints. The user journey:

1. Access the public URL
2. Sign into the account (username, click next, password, click sign-in)
3. Click the application icon
4. A new window loads with the application

Everything works great up to step 3. I cannot figure out how we can track the new window. This is the key part; I need to know if it loads successfully. I suspect it is not possible based on my reading of the documentation, but has anyone had a similar issue and successfully solved it?
Hello, I am getting the below error on two of my indexers. The indexers in question are on a different site (site 2) from the other two indexers and the license manager in the cluster (site 1). Site 1 is working correctly with the same configuration as the site 2 indexers. My guess is networking, but both indexers can connect to the LM on this port and there are no issues showing on the firewall between the two. All the troubleshooting I have tried shows no connectivity issues. Has anyone come across this problem and found a solution?

#######################################################################
HttpClientRequest [2156984 LMTrackerExecutorWorker-0] - Returning error HTTP/1.1 502 Error connecting: Connection reset by peer
ERROR LMTracker [2156984 LMTrackerExecutorWorker-0] - failed to send rows, reason='Unable to connect to license manager=https://****:8089 Error connecting: Connection reset by peer'
#######################################################################
I am using Splunk 9.3.2. I have visualization panels added to my dashboard with multiple queries. I use a base search with a global time picker defaulting to 48 hours, and subsequently use chain searches. I need my entire dashboard to refresh every 5 minutes. I tried "refresh": 300 but it doesn't work. I'm not sure what I am missing here.

{
  "visualizations": {},
  "dataSources": {},
  "defaults": {},
  "inputs": {},
  "layout": {
    "type": "absolute",
    "options": {
      "height": 2500,
      "backgroundColor": "#000000",
      "display": "fit-to-width",
      "width": 1550
    }
  },
  "description": "",
  "title": "My Dashboard",
  "refresh": 300
}
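In Dashboard Studio (unlike Simple XML), auto-refresh is not a top-level dashboard property; it is configured per data source. A hedged sketch of a base search with a refresh interval — the data source name and query are placeholders, and chain searches re-run when their base refreshes:

```
"dataSources": {
    "ds_base": {
        "type": "ds.search",
        "options": {
            "query": "index=main | stats count by host",
            "refresh": "5m",
            "refreshType": "delay"
        }
    }
}
```

With "refreshType": "delay", the 5-minute countdown starts when the previous run completes; "interval" would instead refresh on a fixed schedule.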
I am using Okta to configure SAML for Splunk. Following the setup instructions, I created a SAML group in Splunk and a group with the same name in Okta, and made a role mapping.
https://saml-doc.okta.com/SAML_Docs/How-to-Configure-SAML-2.0-for-Splunk-Cloud.html
When the setup was finished, the logon page went through Okta, but I got the error message "Saml response does not contain group information" after filling in the user email and password on the Okta logon page. Attached is the output of the saml-tracer add-on. Did I miss something?
I'm stuck deploying the RUM app provided by the Guided Onboarding (https://github.com/signalfx/microservices-demo-rum). Is anyone else having the same issue? I tried both the quickstart method and the local build method. I'm using minikube on an M1 Max Mac, and the containers won't run. Since adoptopenjdk/openjdk8:alpine-slim doesn't exist anymore, I'm using openjdk:8-jdk-alpine as the base, and I'm stuck with a Gradle error.
Hi All, I am new to Splunk and looking for assistance in creating a dashboard showing certificate expiry details for multiple Windows servers. This will help us proactively install certificates to avoid any performance issues. Kindly assist.
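Once certificate inventory data is being ingested (for example from a scheduled PowerShell input that dumps each server's certificate store), a dashboard panel can compute days to expiry in SPL. A hedged sketch — the index, sourcetype, and field names below are assumptions to adapt to whatever your input actually produces:

```
index=windows sourcetype=cert:inventory
| eval expires_epoch=strptime(NotAfter, "%Y-%m-%d %H:%M:%S")
| eval days_left=round((expires_epoch - now()) / 86400)
| stats min(days_left) as days_left by host, Subject
| where days_left < 60
| sort days_left
```

The `where days_left < 60` threshold is illustrative; the same search with a color-by-value table makes a simple at-a-glance expiry dashboard.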
I have a puzzle with a Linux host running RHEL 8.10 and Splunk Universal Forwarder 9.4.1, configured to forward data from the local syslog files /var/log/secure and /var/log/messages to our Splunk indexers in Splunk Cloud. Events from /var/log/secure are found there as expected, but no events are found from /var/log/messages. To troubleshoot, I found these messages in the _internal index from the host:

06-02-2025 15:01:05.507 -0400 INFO WatchedFile [3811453 tailreader0] - Will begin reading at offset=13553847 for file='/var/log/messages'.
06-01-2025 03:21:02.729 -0400 INFO WatchedFile [2392 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/var/log/messages'.

So the file was read, but no events are found in Splunk?

[Edit 2025-06-09] The file inputs are configured with a simple stanza in a custom TA:

[monitor:///var/log]
whitelist = (messages$|secure$)
index = os
disabled = 0

As the stanza shows, two files are forwarded: /var/log/messages and /var/log/secure. With this search:

| tstats count where index=os host=server-name-* by host source

I get these results:

host           source             count
server-name-a  /var/log/secure    39795
server-name-b  /var/log/messages  112960
server-name-b  /var/log/secure    21938

Server a and b are a pair running the same OS, patches, applications, etc.
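One common reason monitored events appear missing is that they are indexed with an unexpected _time and fall outside the search window, rather than not arriving at all. A hedged sketch that searches by index time while leaving the event-time window wide open (host and time values are illustrative):

```
index=os source=/var/log/messages host=server-name-a earliest=1 latest=+10y _index_earliest=-24h
| stats count min(_time) as first_time max(_time) as last_time
```

If this returns events with _time far in the past or future, the problem is timestamp extraction on /var/log/messages, not the forwarder input.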
Hi all. Having an issue with hostname override for SNMP logs. I created this props and transforms configuration to get the agent hostname from the logs to override the host (syslog011) for these SNMP trap logs, but it doesn't seem to have worked. Not sure what the mistake is here.

transforms.conf:

[snmptrapd_kv]
DELIMS = "\n,", "="

[snmp_hostname_change]
DEST_KEY = MetaData:Host
REGEX = Agent_Hostname\s=\s(.*)
FORMAT = host::$1

props.conf:

[snmptrapd]
disabled = false
LINE_BREAKER = ([\r\n]+)Agent_Address\s=
MAX_TIMESTAMP_LOOKAHEAD = 30
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIME_PREFIX = Date\s=\s
EXTRACT-node = ^[^\[\n]*\[(?P<node>[^\]]+)
REPORT-snmptrapd = snmptrapd_kv
TRANSFORMS-snmp_hostname_change = snmp_hostname_change
I have successfully set up AME and tested the tenant connection, getting back "connector is healthy". I can also send a test event from the tenant setup page and see it in the default index. But if I go to Events, there is no test event nor any of the alerts I have configured to send to AME, even though I can see them in the traditional triggered alerts, as they are still configured as well. Looking in _internal I do see the below error:

2025-06-06T11:24:06.612+00:00 version=3.4.0 log_level=ERROR pid=1615220 s=AbstractHECWrapper.py:send_chunk:304 uuid=***************** action=sending_event reason="[Errno 111] Connection refused"

This seems to suggest there is an issue with HEC, but the tenant shows green/healthy and the test event arrives in the index. Any assistance would be appreciated. Also, if I create an event from the Events page, that does show up in the app.
Does Splunk support fill-forward, or "last observation carried forward"? I want to create daily-based monitoring. One example is getting the version of all reported items: I receive the version only when it changes, but for each day I need the last available version of each item. How can this be realized in Splunk to produce a line chart? Thank you in advance, Markus
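SPL can carry the last observed value forward with the filldown command. A hedged sketch that buckets events by day and fills the gaps per item — the index and field names (item, version) are assumptions to adapt:

```
index=inventory
| timechart span=1d latest(version) as version by item
| filldown
```

After timechart, each item becomes a column with nulls on days without a change; filldown replaces each null with the most recent prior value, which renders as a continuous line chart. For non-timechart layouts, `streamstats latest(version) as version by item` achieves a similar carry-forward.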
I have promoted multiple events into a case. From the case, I will run a playbook. I understand that I can use the following container automation calls to set the status to closed:

phantom.update()
phantom.close()
phantom.set_status()

However, these three calls only set the case's status to closed. Is it possible to also set the status of the promoted events within the case to closed? For example, I have Event #1, Event #2, and Event #3. When these three events are promoted to a case and I run the playbook from the case, is it possible to set the status of both the case and the three events to closed?