All Posts


thanks a lot
Hi @ayomotukoya

Can I check which method you are using to ingest the SNMP data? Is it using Splunk Connect for SNMP (https://splunk.github.io/splunk-connect-for-snmp/main/)? If so, this sends the data to Splunk via HEC, so it would require a different approach to this problem. Or is this monitoring local files which are saved to disk from SNMP? If so, is this on a Universal Forwarder (UF) or a heavy forwarder (HF)?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @raushank26

As @richgalloway has mentioned, there are a number of apps on Splunkbase which give the capability to check certificates, such as SSL Certificate Checker. Once the app is set up to pull in data about your certificates, you can use the events to generate an appropriate dashboard, highlighting things like upcoming expiries (see the example search sketched below).

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
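Once one of those apps is pulling in certificate data, a dashboard panel could be driven by a search along these lines. This is only a rough sketch: the index, sourcetype, and field names (expiry_epoch, common_name) are placeholders, so substitute whatever your chosen app actually writes.

index=cert_checks sourcetype=ssl_cert_info
``` expiry_epoch is assumed to be the certificate expiry time as a Unix timestamp; adjust to the fields your app provides ```
| eval days_left = floor((expiry_epoch - now()) / 86400)
| stats min(days_left) AS days_left, latest(expiry_epoch) AS expiry_epoch BY host, common_name
| eval expiry_date = strftime(expiry_epoch, "%Y-%m-%d")
| table host, common_name, expiry_date, days_left
| where days_left <= 30
| sort days_left

The same search, with the where clause adjusted, could also back an alert that fires when days_left drops below your renewal threshold.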
I'm stuck deploying the RUM app provided by the Guided Onboarding (https://github.com/signalfx/microservices-demo-rum). Is anyone else having the same issue? I tried it with both the quickstart method and the local build method. I'm using minikube on an M1 Max Mac, and the containers won't run. Since adoptopenjdk/openjdk8:alpine-slim doesn't exist anymore, I'm using openjdk:8-jdk-alpine as the base, and I'm stuck with the Gradle error.
Hi!

If the logs produced by AME cannot be sent to the index, you will not get any alert data when expanding events. It would be easiest if you could open a support case in our support portal and provide the output of the following search as a CSV export:

index=_internal source=*ame* ERROR
| table _time host source _raw

Regards,
Simon
Thanks a lot for those details. I am also looking to create a dashboard for all certificates where we can see the expiry dates; it helps management in further decision making. If possible, kindly let me know the steps.
There are several apps on Splunkbase to help with that. See https://splunkbase.splunk.com/apps?page=1&keyword=certificate
Hi All,

I am new to Splunk and looking for assistance in creating a dashboard showing certificate expiry details for multiple Windows servers. This will help us proactively install certificates to avoid any performance issues. Kindly assist.
Hi @jcm

Are you already sending PKI Spotlight data into Splunk? This add-on applies CIM compliance to the data that is sent from PKI Spotlight. If you're already ingesting the data, then simply install the app for free on your Splunk instance. If you aren't already sending the PKI Spotlight data to Splunk, then this can be configured in PKI Spotlight (from the PKI Spotlight controller, go to Settings > Integrations > Splunk) to set up HEC. For more information see the Installation tab on https://splunkbase.splunk.com/app/6875 (a quick verification search is sketched below).

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
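Once the HEC integration is enabled, a quick sanity check from the Splunk side could look something like the search below; the index name is a placeholder for whichever index your HEC token is configured to write to.

``` replace pki_spotlight with the index your PKI Spotlight HEC token writes to ```
index=pki_spotlight earliest=-4h
| stats count BY sourcetype, source

If events show up there, the add-on (which applies CIM compliance to the PKI Spotlight data, as noted above) takes care of the field mappings.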
Hi @ww9rivers

Given that the log reports it is starting from offset=13553847, it sounds like the file is present and the forwarder has permission to read it, so we can rule that out. The other thing to check is that it is in fact still being written to (when was the last event in that file?). You were able to see the _internal logs for the host, so that rules out connectivity issues.

I'm wondering if perhaps it's ending up somewhere else, in an index you aren't expecting (see the tstats sketch below for one way to check). How have you configured the input? Is this within the Splunk Add-on for Unix and Linux, or is this a custom monitor stanza in an inputs.conf? If you have the Splunk Add-on for Unix and Linux but also configured your own input, then there is a chance the destination index is being overwritten by the Linux TA. It's worth doing a btool check; on the UF, try:

$SPLUNK_HOME/bin/splunk btool inputs list --debug monitor

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
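As one way to check whether the events landed in a different index than expected, a tstats search across everything your role can see works well. This is just a sketch: replace the host value with the forwarder's actual hostname.

``` the host value is a placeholder; tstats only searches indexes your role has access to ```
| tstats count WHERE index=* host=your-forwarder-host source="/var/log/messages" BY index, sourcetype
| sort - count

If that returns nothing at all, the data most likely never made it to the indexers, and the btool output above should show which stanza and index actually apply to /var/log/messages.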
Have you checked that there is actually something to forward from that file? Modern RHELs don't write there by default, relying on journald instead.
Hi @ww9rivers,

Which add-on are you using to read these files, Splunk_TA_nix or a custom add-on? Anyway, you can debug your configurations using btool (https://help.splunk.com/en/splunk-enterprise/administer/troubleshoot/9.0/first-steps/use-btool-to-troubleshoot-configurations). Probably there's a conflict in your configurations.

Ciao.
Giuseppe
Check this out.   https://docs.splunk.com/Documentation/Splunk/9.0.0/Admin/Inputsconf#MONITOR:  
I have a puzzle with a Linux host running RHEL 8.10, which is running Splunk Universal Forwarder 9.4.1, configured to forward data from the local syslog files "/var/log/secure" and "/var/log/messages" to our Splunk indexers in Splunk Cloud. Events from /var/log/secure are found there as expected, but no events are found from /var/log/messages.

To troubleshoot, I did find these messages in the _internal index from the host:

06-02-2025 15:01:05.507 -0400 INFO WatchedFile [3811453 tailreader0] - Will begin reading at offset=13553847 for file='/var/log/messages'.
06-01-2025 03:21:02.729 -0400 INFO WatchedFile [2392 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/var/log/messages'.

So the file was read, but no events are found in Splunk?

[Edit 2025-06-09] The file inputs are configured with a simple stanza in a custom TA:

[monitor:///var/log]
whitelist = (messages$|secure$)
index = os
disabled = 0

As the stanza shows, two files are forwarded: /var/log/messages and /var/log/secure. With this search:

| tstats count where index=os host=server-name-* by host source

I get these results:

host           source             count
server-name-a  /var/log/secure    39795
server-name-b  /var/log/messages  112960
server-name-b  /var/log/secure    21938

Servers a and b are a pair running the same OS, patches, applications, etc.
Since you're posting this in the Cloud section, I'm assuming you're going to want to send the events to Splunk Cloud. There is no way to send such data directly to Cloud, so you will need at least one server on-prem to gather data from your local environment.

Depending on your sources and what you want to collect (and the goal of your data collection), you might use one of various possible syslog receiving methods: a dedicated syslog daemon (rsyslog or syslog-ng) either writing to files or sending directly to a HEC input, or SC4S. There are multiple methods of handling SNMP (the SNMP modular input, SC4SNMP, a self-configured external SNMP collecting script and/or snmptrapd). And API endpoints can be handled by some existing TAs, or you can try to handle them on your own with external scripts or Add-on Builder-created API inputs. So there is a plethora of possibilities.
The app's description on Splunkbase doesn't list any specific external licensing requirements, so you should be able to just download the app from there and use it.
I did run 'splunk resync shcluster-replicated-config'. I left it overnight and somehow SH3 synced itself. It also became the captain, which I changed back. I ran a sync on SH1 and all is good now. No clue how or why it resynced itself after many failed tries and clean-ups.
Please use a code block or preformatted paragraphs for config snippets. It greatly improves readability.

And your config's syntax is completely off. If that is your actual config, use splunk btool check to verify it. If not, please copy-paste your literal settings.
There are several possible approaches to such a problem. One is filldown, mentioned already by @richgalloway. Another is streamstats or autoregress. Or you might simply reformulate your problem to avoid this altogether. It all depends on the particular use case.
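As a minimal sketch of the first two options, assuming the goal is to carry the last non-null value of a field forward onto later events (status is a placeholder field name, and your_base_search stands for whatever search produces the events):

your_base_search
``` events normally come back newest-first, so sort oldest-first before filling down ```
| sort 0 _time
| filldown status

your_base_search
``` roughly the same idea with streamstats, which also supports a BY clause (e.g. BY host) ```
| sort 0 _time
| streamstats last(status) AS status_filled

autoregress tends to be the better fit when you only need the values from a fixed number of immediately preceding events, since it copies prior values into new fields rather than filling gaps.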