All Posts


Hi All, I am new to Splunk and looking for assistance in creating a dashboard showing certificate expiry details for multiple Windows servers. This will help us proactively install certificates before they expire and avoid any performance issues. Kindly assist.
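For reference, a minimal SPL sketch of the kind of panel involved, assuming the certificate details are already being collected into Splunk (for example via a scripted or PowerShell input); the index, sourcetype, and field names below (NotAfter, Subject) are placeholders that would need to match your own data:

index=windows sourcetype=cert_inventory
| eval expires_epoch = strptime(NotAfter, "%Y-%m-%dT%H:%M:%S")
| eval days_left = round((expires_epoch - now()) / 86400, 0)
| stats min(days_left) AS days_left latest(NotAfter) AS expires BY host Subject
| where days_left < 30
| sort days_left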
Hi @jcm Are you already sending PKI Spotlight data into Splunk? This add-on applies CIM compliance to the data that is sent from PKI Spotlight. If you're already ingesting the data, then simply install the app for free on your Splunk instance. If you aren't already sending the PKI Spotlight data to Splunk, then this can be configured in PKI Spotlight (from the PKI Spotlight controller, go to Settings > Integrations > Splunk) to set up HEC. For more information see the Installation tab on https://splunkbase.splunk.com/app/6875

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
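As a rough illustration (not the official PKI Spotlight procedure), once HEC is enabled on the Splunk side you can verify the endpoint and token with a simple curl; the hostname, port, and token here are placeholders:

curl -k https://your-splunk-host:8088/services/collector/event \
  -H "Authorization: Splunk <your-HEC-token>" \
  -d '{"event": "PKI Spotlight HEC connectivity test", "sourcetype": "pkispotlight:test"}'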
Hi @ww9rivers Given that the log reports it is starting from offset=13553847, it sounds like the file is present and Splunk has permission to read it, so we can rule that out. The other thing to check is that it is in fact still being written to (when was the last event in that file?). You were able to see the _internal logs for the host, so that rules out connectivity issues. I'm wondering if perhaps it's ending up somewhere else, in an index you aren't expecting. How have you configured the input? Is this within the Splunk Add-on for Unix and Linux, or is this a custom monitor stanza in an inputs.conf? If you have the Splunk Add-on for Unix and Linux but also configured your own input, there is a chance the destination index is being overwritten by the Linux TA - it's worth doing a btool to check this. On the UF try: $SPLUNK_HOME/bin/splunk btool inputs list --debug monitor

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
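As a hypothetical illustration of the kind of conflict meant above (these stanza contents are made up, not your actual config), two stanzas can match the same file and precedence decides which index wins:

# Custom TA
[monitor:///var/log/messages]
index = os

# Splunk Add-on for Unix and Linux, if its inputs are enabled, may also match the file
[monitor:///var/log]
whitelist = messages|secure
index = main
disabled = 0

The btool output will show which configuration file each effective setting comes from, so you can see which stanza actually applies.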
Have you checked that there is actually something to forward from that file? Modern RHELs don't write there by default, relying on journald instead.
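A quick sanity check on the host (a sketch; adjust the path if your setup differs) to confirm the file exists and is still being updated:

ls -l /var/log/messages
stat -c '%y %s' /var/log/messages
tail -n 1 /var/log/messages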
Hi @ww9rivers , which add-on are you using to read these files, Splunk_TA_nix or a custom add-on? In any case, you can debug your configurations using btool (https://help.splunk.com/en/splunk-enterprise/administer/troubleshoot/9.0/first-steps/use-btool-to-troubleshoot-configurations). Probably there's a conflict in your configurations. Ciao. Giuseppe
Check this out.   https://docs.splunk.com/Documentation/Splunk/9.0.0/Admin/Inputsconf#MONITOR:  
I have a puzzle with a Linux host running RHEL 8.10, which is running Splunk Universal Forwarder 9.4.1, configured to forward data from the local syslog files /var/log/secure and /var/log/messages to our Splunk indexers in Splunk Cloud. Events from /var/log/secure are found there as expected, but no events are found from /var/log/messages. To troubleshoot, I did find these messages in the _internal index from the host:

06-02-2025 15:01:05.507 -0400 INFO WatchedFile [3811453 tailreader0] - Will begin reading at offset=13553847 for file='/var/log/messages'.
06-01-2025 03:21:02.729 -0400 INFO WatchedFile [2392 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/var/log/messages'.

So the file was read, but no events are found in Splunk?

[Edit 2025-06-09] The file inputs are configured with a simple stanza in a custom TA:

[monitor:///var/log]
whitelist = (messages$|secure$)
index = os
disabled = 0

As the stanza shows, two files are forwarded: /var/log/messages and /var/log/secure. With this search:

| tstats count where index=os host=server-name-* by host source

I get these results:

host           source             count
server-name-a  /var/log/secure    39795
server-name-b  /var/log/messages  112960
server-name-b  /var/log/secure    21938

Server a and b are a pair running the same OS, patches, applications, etc.
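For reference, one way to check from the Splunk side whether the forwarder is actually reading and sending that file (a sketch; the host filter is an assumption) is the per-source throughput recorded in metrics.log:

index=_internal source=*metrics.log* host=server-name-* group=per_source_thruput series="/var/log/messages"
| timechart span=1h sum(kb) AS kb BY host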
Since you're posting this in the Cloud section I'm assuming you're going to want to send the events to Cloud. There is no way to send such data directly to Cloud, so you will need at least one server on-prem to gather data from your local environment. Depending on your sources and what you want to collect (and the goal of your data collection) you might use one of various possible syslog receiving methods (a dedicated syslog daemon - rsyslog or syslog-ng - either writing to files or sending directly to a HEC input, or SC4S). There are multiple methods of handling SNMP (the SNMP modular input, SC4SNMP, a self-configured external SNMP collecting script and/or snmptrapd). And API endpoints can be handled by some existing TAs, or you can try to handle them on your own with external scripts or Add-on Builder-created API inputs. So there is a plethora of possibilities.
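For example, a minimal rsyslog sketch (the port and paths are assumptions) that receives syslog over UDP and writes one file per sending host, which a forwarder can then monitor and send on to Cloud:

# /etc/rsyslog.d/10-splunk-intake.conf
module(load="imudp")
input(type="imudp" port="514")
$template PerHostFile,"/var/log/remote/%HOSTNAME%/syslog.log"
*.* ?PerHostFile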
The app's description on Splunkbase doesn't list any specific external licensing requirements, so you should be able to just download the app from there and use it.
I did run 'splunk resync shcluster-replicated-config'. I left it overnight and somehow SH3 sync'd itself. It also became the captain, which I changed back. Ran a sync on SH1 and all is good now. No clue how or why it resync'd itself after many failed tries and cleanups.
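For anyone hitting the same issue, a quick way to confirm the members agree again is the standard cluster status command, run from any member:

$SPLUNK_HOME/bin/splunk show shcluster-status --verbose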
Please use code block or preformatted paragraphs for config snippets. It greatly improves readability. And your config's syntax is completely off. If that is your actual config, use splunk btool check to verify your config. If not, please copy-paste your literal settings.
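For example (a standard CLI invocation, nothing environment-specific assumed):

$SPLUNK_HOME/bin/splunk btool check --debug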
There are several possible approaches to such a problem. One is filldown, mentioned already by @richgalloway. Another is streamstats or autoregress. Or you might simply reformulate your problem to avoid this altogether. It all depends on the particular use case.
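A minimal run-anywhere sketch of the first two options (the field name user is made up). filldown copies the last non-null value into the following null rows:

| makeresults count=5
| streamstats count AS row
| eval user=case(row=1, "alice", row=4, "bob")
| filldown user

Swapping the final line for | streamstats last(user) AS user_carried does the same carry-forward into a new field instead of overwriting user.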
Hi @harpr86 Unfortunately this isn't possible. I think this is the same when using the UI - e.g. you create a search, it starts as private, and then you have to update the permissions to share it. I hope this helps, sorry it isn't the answer you might have hoped for!
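For reference, a sketch of the two REST calls involved (the standard saved-search endpoints; the app, owner, credentials, role, and alert name below are placeholders):

# 1. Create the saved search / alert - it starts as private to the creating user
curl -k -u admin:changeme https://localhost:8089/servicesNS/admin/search/saved/searches \
  -d name=my_alert \
  --data-urlencode search="index=_internal | head 1"

# 2. Update its ACL to share it and grant read/write to a role
curl -k -u admin:changeme https://localhost:8089/servicesNS/admin/search/saved/searches/my_alert/acl \
  -d sharing=app -d owner=admin -d perms.read=some_role -d perms.write=some_role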
It would help to know more about your use case, including any existing SPL you're using. Have you looked at the filldown command?
Hi @ayomotukoya , I suppose that the transforms.conf isn't correct:

[snmp_hostname_change]
DEST_KEY = MetaData:Host
REGEX = Agent_Hostname\s*\=\s*(.*)
FORMAT = host::$1

I can be more detailed and sure if you could share a sample of your logs. Ciao. Giuseppe
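And just to show the other half of the wiring (a sketch, not your actual file): the props.conf stanza must reference that transform, and both files need to be deployed where parsing happens (heavy forwarder or indexer), not on a universal forwarder:

[snmptrapd]
TRANSFORMS-snmp_hostname_change = snmp_hostname_change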
Seeing events now. The default template needed a notification assigned, and that notification needed to be defined as there was none. The error mentioned above is still showing, but I am not sure if it is causing any visible issues.
Hi all. Having an issue with hostname override for SNMP logs. I created the props and transforms below to get the Agent_Hostname from the logs and override the host (syslog011) for these SNMP trap logs, but it doesn't seem to have worked. Not sure what the mistake is here.

transforms.conf:

[snmptrapd_kv]
DELIMS = "\n,", "="

[snmp_hostname_change]
DEST_KEY = MetaData:Host
REGEX = Agent_Hostname = (.*)
FORMAT = host::$1

props.conf:

[snmptrapd]
disabled = false
LINE_BREAKER = ([\r\n]+)Agent_Address\s=
MAX_TIMESTAMP_LOOKAHEAD = 30
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIME_PREFIX = Date\s=\s
EXTRACT-node = ^[^\[\n]*\[(?P<node>[^\]]+)
REPORT-snmptrapd = snmptrapd_kv
TRANSFORMS-snmp_hostname_change = snmp_hostname_change
@livehybrid thanks for your response. But I am looking to perform the two operations in a single API call. For example, at the time of creating a Splunk alert, the alert should have read and write permissions for the user.
I have successfully set up AME and tested the tenant connection, and it reports the connector is healthy. I can also send a test event from the tenant setup page and can see it in the default index. If I go to Events, there is no test event nor any of the alerts I have configured to send to AME, even though I can see them in the traditional triggered alerts as they are still configured as well. Looking in _internal I do see the below error:

2025-06-06T11:24:06.612+00:00 version=3.4.0 log_level=ERROR pid=1615220 s=AbstractHECWrapper.py:send_chunk:304 uuid=***************** action=sending_event reason="[Errno 111] Connection refused"

This seems to suggest there is an issue with HEC, but the tenant shows green/healthy and the test event reaches the index. Any assistance would be appreciated. Also, if I create an event from the Events page, that does show up in the app.
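For reference, a quick way to test the HEC endpoint the app is pointing at (the host, port, and token below are placeholders):

curl -k https://127.0.0.1:8088/services/collector/health
curl -k https://127.0.0.1:8088/services/collector/event \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "AME HEC connectivity test"}'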