All Posts

@L_Petch  Check these: https://community.splunk.com/t5/Splunk-Enterprise-Security/Unable-to-connect-to-license-master-since-certification/m-p/488156 and https://community.splunk.com/t5/Security/Getting-an-issue-where-Splunk-hosts-can-t-reach-the-License/m-p/455396
Hello, I am getting the below error on two of my indexers. The indexers in question are on a different site (Site 2) to the other two indexers and the license manager in the cluster (Site 1). Site 1 is working correctly with the same configuration as the Site 2 indexers. My guess is networking, but both indexers can connect to the LM on this port and there are no issues showing on the firewall between the two sites. None of the troubleshooting I have tried shows any connectivity issues. Has anyone come across this problem and found a solution?
#######################################################################
HttpClientRequest [2156984 LMTrackerExecutorWorker-0] - Returning error HTTP/1.1 502 Error connecting: Connection reset by peer
ERROR LMTracker [2156984 LMTrackerExecutorWorker-0] - failed to send rows, reason='Unable to connect to license manager=https://****:8089 Error connecting: Connection reset by peer'
#######################################################################
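One concrete check from each Site 2 indexer is to hit the license manager's management endpoint directly and compare the TLS settings between sites; a connection reset during the handshake would point at certificate/sslConfig differences rather than the network path. The commands below are only a sketch (the license manager hostname is redacted in the post, so substitute your own):
    curl -k -u admin https://<license-manager>:8089/services/server/info
    $SPLUNK_HOME/bin/splunk btool server list sslConfig --debug
If curl succeeds from the indexer OS but splunkd still logs "Connection reset by peer", comparing the btool sslConfig output on a working Site 1 indexer against a failing Site 2 indexer is a quick way to spot a mismatch.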
I have a base search and multiple chain searches. Can you add refresh only to the base search? Will that refresh the other panels?
@tgulgund  Unfortunately, I don't think Dashboard Studio can set auto-refresh for the entire dashboard in a single config; auto-refresh is set per data source. You must define the refresh interval for each relevant data source in the dataSources section of your dashboard JSON, e.g.:
    "dataSources": {
      "myDataSource": {
        "type": "ds.search",
        "options": {
          "query": "your search here",
          "queryParameters": { "earliest": "-48h@h", "latest": "@h" },
          "refresh": "5m",
          "refreshType": "delay"
        }
      }
    }
https://docs.splunk.com/Documentation/Splunk/9.4.2/DashStudio/dsOpt
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
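For the base-plus-chain setup described in the question, a minimal sketch would put the refresh options on the base ds.search only; the ds.chain data sources that extend it are post-process searches, so their panels should update when the refreshed base results come back. The data source names below are made up for illustration:
    "dataSources": {
      "baseSearch": {
        "type": "ds.search",
        "options": {
          "query": "index=main sourcetype=my_sourcetype | stats count by host",
          "queryParameters": { "earliest": "-48h@h", "latest": "@h" },
          "refresh": "5m",
          "refreshType": "delay"
        }
      },
      "chainByHost": {
        "type": "ds.chain",
        "options": {
          "extend": "baseSearch",
          "query": "| where count > 100"
        }
      }
    }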
I am using Splunk 9.3.2. I have visualisation panels added to my dashboard with multiple queries. I use a base search with the global time picker defaulting to 48 hours and subsequently use chain searches. I need my entire dashboard to refresh every 5 minutes. I tried "refresh": 300 but it doesn't work. Not sure what I am missing here.
    {
      "visualizations": { },
      "dataSources": { },
      "defaults": { },
      "inputs": { },
      "layout": {
        "type": "absolute",
        "options": {
          "height": 2500,
          "backgroundColor": "#000000",
          "display": "fit-to-width",
          "width": 1550
        }
      },
      "description": "",
      "title": "My Dashboard",
      "refresh": 300
    }
@ww9rivers  The message "File too small to check seekcrc, probably truncated. Will re-read entire file" indicates that Splunk detected that /var/log/messages was truncated. Would you mind checking your logrotate setup? Splunk interprets file truncation as a reduction in size and will restart reading from the start of the file. Troubleshooting: 1 - Review your logrotate config, or temporarily disable log rotation for this file to verify whether it is causing the issue. 2 - Add a test message to /var/log/messages and verify whether Splunk is ingesting the new entry (a sample check is sketched below).
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
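A minimal way to run that second check, assuming the logger utility is available and syslog actually writes to /var/log/messages on this host (on journald-only systems it may not):
    logger "splunk-ingest-test-$(date +%s)"
    grep splunk-ingest-test /var/log/messages
Then search Splunk over the last 15 minutes for the same marker, e.g.:
    index=* source=/var/log/messages "splunk-ingest-test-*"
If the marker reaches the file but never shows up in Splunk, the input or index routing is the problem rather than log rotation.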
I am using Okta to configure SAML for Splunk. Following the introduction steps, I created a SAML group in Splunk and the same group name in Okta, and made a role mapping. https://saml-doc.okta.com/SAML_Docs/How-to-Configure-SAML-2.0-for-Splunk-Cloud.html When the setup was finished, the login page went through Okta, but I got the error below after filling in the user email and password on the Okta login page: "Saml response does not contain group information". Attached is the output of the saml-tracer add-on. Did I miss something?
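That error generally means the SAML assertion coming back from Okta does not include any group/role attribute, so Splunk has nothing to feed into its role mapping. Two places worth checking, sketched here with example names rather than values taken from the post: the Okta application's Group Attribute Statements (the attribute name must match the role alias configured in Splunk's SAML settings, commonly "role"), and the role mapping stanza in authentication.conf on the Splunk side, which looks like:
    [roleMap_SAML]
    admin = SplunkAdmins
    user = SplunkUsers
The group names on the right-hand side must match exactly what Okta sends in the assertion; the saml-tracer output will show whether any group attribute is present at all.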
Thanks a lot!
Hi @ayomotukoya  Can I check which method you are using to ingest the SNMP data? Is it Splunk Connect for SNMP (https://splunk.github.io/splunk-connect-for-snmp/main/)? If so, it sends the data to Splunk via HEC, so it would require a different approach to this problem. Or is this monitoring local files which are saved to disk from SNMP? If so, is this on a universal forwarder (UF) or a heavy forwarder (HF)?  Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @raushank26  As @richgalloway has mentioned, there are a number of apps on Splunkbase which provide the capability to check certificates, such as SSL Certificate Checker. Once the app is set up to pull in data about your certificates, you can use the events to generate an appropriate dashboard, highlighting things like upcoming expiries.  Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
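As a rough illustration of the kind of panel this enables once the certificate data is ingested (the index, sourcetype and field names here are assumptions that depend entirely on which app you choose):
    index=certificates sourcetype=ssl_certificate_check
    | eval days_left = floor((cert_expiry_epoch - now()) / 86400)
    | where days_left < 60
    | table host cert_common_name cert_expiry_date days_left
    | sort days_left
A table like this, driven by the app's actual field names, gives management an at-a-glance view of which certificates need renewing first.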
I'm stuck deploying the RUM app provided by the Guided Onboarding (https://github.com/signalfx/microservices-demo-rum). Is anyone else having the same issue? I tried it with both the quickstart method and the local build method. I'm using minikube on an M1 Max Mac, and the containers won't run. Since adoptopenjdk/openjdk8:alpine-slim doesn't exist anymore, I'm using openjdk:8-jdk-alpine as the base, and I'm stuck with a Gradle error.
Hi! If the logs produced by AME cannot be sent to the index, you will not get any alert data when expanding events. It would be easiest if you could open a support case in our support portal and provide the output of the following search as a CSV export:
    index=_internal source=*ame* ERROR | table _time host source _raw
Regards, Simon
Thanks a lot for the details. I am also looking to create a dashboard for all certificates where we can see the expiry date; it helps management in further decision making. If possible, kindly let me know the steps.
There are several apps on Splunkbase to help with that. See https://splunkbase.splunk.com/apps?page=1&keyword=certificate
Hi all, I am new to Splunk and looking for assistance in creating a dashboard of certificate expiry details for multiple Windows servers. This will help us proactively install certificates and avoid any performance issues. Kindly assist.
Hi @jcm  Are you already sending PKI Spotlight data into Splunk? This add-on applies CIM compliance to the data that is sent from PKI Spotlight. If you're already ingesting the data, simply install the app for free on your Splunk instance. If you aren't already sending the PKI Spotlight data to Splunk, this can be configured in PKI Spotlight (from the PKI Spotlight controller, go to Settings > Integrations > Splunk) to set up HEC. For more information, see the Installation tab on https://splunkbase.splunk.com/app/6875  Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
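If you go the HEC route, a quick sanity check that the endpoint and token work before pointing PKI Spotlight at them is a test event from the command line; the hostname, port 8088 and token below are placeholders, and your HEC port or SSL settings may differ:
    curl -k https://your-splunk-host:8088/services/collector/event -H "Authorization: Splunk <your-hec-token>" -d '{"event": "hec connectivity test", "sourcetype": "pki:spotlight:test"}'
A healthy endpoint responds with {"text":"Success","code":0}; once that works, the same URL and token go into the PKI Spotlight Splunk integration settings.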
Hi @ww9rivers  Given that the log reports it is starting from offset=13553847, it sounds like the file is present and Splunk has permission to read it, so we can rule that out. The other thing to check is that it is in fact still being written to (when was the last event in that file)? You were able to see the _internal logs for the host, so that rules out connectivity issues. I'm wondering if perhaps it's ending up in an index you aren't expecting? How have you configured the input? Is this within the Splunk Add-on for Unix and Linux, or is this a custom monitor stanza in an inputs.conf? If you have the Splunk Add-on for Unix and Linux but also configured your own input, there is a chance the destination index is being overwritten by the Linux TA; it's worth doing a btool to check this (a sample stanza is sketched below). On the UF try: $SPLUNK_HOME/bin/splunk btool inputs list --debug monitor  Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
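For reference, a minimal monitor stanza for this file looks something like the following (the index name is only an assumption for illustration); whichever copy of the stanza wins in the btool output determines where the events land:
    [monitor:///var/log/messages]
    disabled = 0
    sourcetype = syslog
    index = os_linux
The --debug flag in the btool command above also prints which app each effective setting comes from, which makes an index override by the Linux TA easy to spot.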
Have you checked that there is actually something to forward from that file? Modern RHEL releases don't write there by default, relying on journald instead.
Hi @ww9rivers, which add-on are you using to read these files, Splunk_TA_nix or a custom add-on? In any case, you can debug your configurations using btool (https://help.splunk.com/en/splunk-enterprise/administer/troubleshoot/9.0/first-steps/use-btool-to-troubleshoot-configurations). There is probably a conflict in your configurations. Ciao. Giuseppe
Check this out.   https://docs.splunk.com/Documentation/Splunk/9.0.0/Admin/Inputsconf#MONITOR: