All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


We suddenly observed that /opt/data was unmounted and its ownership had changed from splunk to root. We mounted it back and restarted the service, but the search factor (SF) and replication factor (RF) are still not met. We restarted the instance from AWS, still with no change. We have 3 indexers in this cluster. We tried a rolling restart of the remaining indexers; when I restarted the second indexer, Splunk stopped, and /opt/data's ownership changed and it was unmounted again. After remounting, the same thing happened on the 1st indexer too; I did not touch the 3rd indexer. So, of the 3 indexers, 2 were down; I restarted them, started Splunk on them, and mounted /opt/data as well, but we still do not see the SF and RF being met.
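A minimal troubleshooting sketch, assuming a default /opt/splunk install and that the status command is run on the cluster manager (paths and flags are illustrative; verify against your version):

```shell
# Fix ownership of the remounted volume before starting Splunk
chown -R splunk:splunk /opt/data

# On the cluster manager: per-peer and per-index RF/SF status
/opt/splunk/bin/splunk show cluster-status --verbose
```

The pending bucket-fixup queue can also be inspected in the cluster manager UI under Settings > Indexer Clustering, or via the REST endpoint /services/cluster/master/fixup.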
What does the following indicate?

06-19-2025 11:09:33.046 +0000 ERROR AesGcm [65605 MainThread] - Text decryption - error in finalizing: No errors in queue
06-19-2025 11:09:33.046 +0000 ERROR AesGcm [65605 MainThread] - AES-GCM Decryption failed!
06-19-2025 11:09:33.047 +0000 ERROR Crypto [65605 MainThread] - Decryption operation failed: AES-GCM Decryption failed!
06-19-2025 11:09:33.081 +0000 ERROR AesGcm [65605 MainThread] - Text decryption - error in finalizing: No errors in queue
06-19-2025 11:09:33.081 +0000 ERROR AesGcm [65605 MainThread] - AES-GCM Decryption failed!
06-19-2025 11:09:33.081 +0000 ERROR Crypto [65605 MainThread] - Decryption operation failed: AES-GCM Decryption failed!
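One common cause of AES-GCM decryption failures (an assumption, not the only possibility) is a secret mismatch: an encrypted value in a conf file was written under a different splunk.secret than the one present on this instance, for example after a restore or a copied configuration. A quick check, assuming a default install path:

```shell
# Compare this checksum across instances that share configuration;
# a mismatch means values encrypted on one cannot be decrypted on the other
md5sum /opt/splunk/etc/auth/splunk.secret
```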
Please help by sharing queries to check:
> network logs and firewall blocks for a specific host machine
> LDAP login failures (wrong password) for a specific user account
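A starting-point sketch for the LDAP piece (index, action, and field names can vary by Splunk version, so treat these as assumptions to verify). The first search looks at the audit trail for failed logins by one account; the second looks at splunkd's own LDAP authentication errors:

```spl
index=_audit action="login attempt" info=failed user=<username>
| stats count by user, clientip

index=_internal sourcetype=splunkd component=AuthenticationManagerLDAP (ERROR OR WARN)
```

For network and firewall blocks, Splunk can only search what has been ingested: assuming firewall logs are onboarded into a hypothetical index, something like `index=<firewall_index> (src=<host_ip> OR dest=<host_ip>) action=blocked` is the usual shape, with the index and field names depending entirely on your firewall add-on.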
If anyone knows, could you please advise on the following? Our Splunk Enterprise system runs on AWS EC2. We use AWS S3 for Splunk SmartStore, and we take backups of EBS and S3 (SmartStore) every day. But the cost of the S3 backup is very high, so we are planning to stop backing up S3, since we have S3 versioning turned on for the SmartStore bucket. If we had to restore the EC2 instance, could we restore from S3 versioning? Since the timing of the EBS backup and the S3 versioning differ, I think there may be a problem if I restore EBS and then restore S3 from a previous version taken at a different point in time. Since I want to prioritize cost reduction, it is not a problem if some data on the EBS side is missing, as long as what remains can be read without any problems. Please let me know if you have any ideas on how to reduce backup costs in a similar environment.
Hello Splunk Community,

We are currently transitioning a live dashboard from Splunk Enterprise to Splunk Cloud. The original dashboard was built using HTML, CSS, and JavaScript to monitor three critical KPIs: Success, Response, and Availability. In our Splunk Enterprise version, we implemented:
- A custom alarm beep and a red colour flash when any KPI drops below 99%.
- Pagination to manage and display data effectively.

However, while recreating this dashboard in Splunk Cloud with Dashboard Studio, we've encountered limitations:
1. Studio doesn't seem to support custom pagination features.
2. We're unable to add the alarm beep and visual alerts the way we did with HTML/CSS/JS in Enterprise.
3. Federated search limitations: can federated searches be used for this requirement, and how do they affect license usage?

Could someone please clarify:
- What are the limitations of using HTML, CSS, and JavaScript in Splunk Cloud and Studio dashboards?
- Are there any workarounds or supported methods to implement these types of visual and audio alerts in Splunk Cloud?
- Any suggestions or examples on how to handle pagination and threshold-based alerts within Studio?

Your insights or best practices would be greatly appreciated. Thank you in advance for your support!
I am a newbie to this environment and I'm trying to understand some logs regarding a Linux server troubleshoot. A server stopped sending metrics to Splunk (event logs are fine). To troubleshoot, I searched the error logs around that timestamp. These are the logs I got:

15:02:44.000: collectd[909]: processmon plugin: Error reading /proc/3605381/stat
15:12:53.000: runsvc.sh[968]: Error reported in diagnostic logs. Please examine the log for more details.
15:12:53.000: runsvc.sh[968]: 2025-06-13 19:12:53Z: Agent connect error: The HTTP request timed out after 00:01:00.. Retrying until reconnected.
15:31:07.000: splunk[3844643]: ERROR - Failed opening "/opt/splunkforwarder/var/log/splunk/splunkd.log": No such file or directory

Please help me understand the issue and the troubleshooting steps for it (if possible). Thank you in advance.
Hi all, I’m using the Splunk Universal Forwarder on Windows to collect event logs. My inputs.conf includes the following configurations:

[WinEventLog://Security]
disabled = 0
index = win_log

[WinEventLog://System]
disabled = 0
index = win_log

[WinEventLog://Application]
disabled = 0
index = win_log

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = true
index = win_log

The first three (Security, System, and Application) work perfectly and show readable, structured logs. However, when I run:

index=win_log sourcetype=*sysmon*

I get logs in unreadable binary or hex format like:

\x00\x00**\x00\x00 \x00\x00@ \x00\x00\x00\x00\x00\x00\xCE....

How can I fix this and get properly parsed Sysmon logs (with fields like CommandLine, ParentProcess, etc.)?
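For what it's worth, the usual fix in common setups (an assumption; verify against the add-on's documentation) is to install the Splunk Add-on for Sysmon on the search tier/indexers and have the input use the add-on's XML sourcetype, so its parsing and field extractions apply. A sketch of the stanza shape that pairs with the add-on:

```ini
[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = true
index = win_log
source = XmlWinEventLog:Microsoft-Windows-Sysmon/Operational
sourcetype = XmlWinEventLog:Microsoft-Windows-Sysmon/Operational
```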
Why am I seeing these issues? I was trying to upgrade Splunk Enterprise.

Checking prerequisites...
        Checking http port [8000]: open
        Checking mgmt port [8089]: open
        Checking appserver port [127.0.0.1:8065]: open
        Checking kvstore port [8191]: open
        Checking configuration... Done.
        Checking critical directories...        Done
        Checking indexes...
                Validated: _audit _configtracker _dsappevent _dsclient _dsphonehome _internal _introspection _metrics _metrics_rollup _telemetry _thefishbucket history main summary
        Done
Bypassing local license checks since this instance is configured with a remote license master.
        Checking filesystem compatibility...  Done
        Checking conf files for problems...
                Invalid key in stanza [email] in /opt/splunk/etc/apps/search/local/alert_actions.conf, line 2: show_password (value: True).
                Invalid key in stanza [cloud] in /opt/splunk/etc/apps/splunk_assist/default/assist.conf, line 14: http_client_timout_seconds (value: 30).
                Invalid key in stanza [setup] in /opt/splunk/etc/apps/splunk_secure_gateway/default/securegateway.conf, line 16: cluster_monitor_interval (value: 300).
                Invalid key in stanza [setup] in /opt/splunk/etc/apps/splunk_secure_gateway/default/securegateway.conf, line 20: cluster_mode_enabled (value: false).
                Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
        Done
        Checking default conf files for edits...
        Validating installed files against hashes from '/opt/splunk/splunk-9.3.4-30e72d3fb5f7-linux-2.6-x86_64-manifest'
        All installed files intact.
        Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
Done

Waiting for web server at https://127.0.0.1:8000 to be available.............splunkd 261927 was not running.
Stopping splunk helpers...
Done. Stopped helpers.
Removing stale pid file... done.

WARNING: web interface does not seem to be available!
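The "Invalid key" lines are warnings about settings the new version's spec files do not recognize. A sketch of how to chase them down, assuming a default install path:

```shell
# Lists every setting that fails spec validation, with the file defining it
/opt/splunk/bin/splunk btool check --debug
```

Keys in a `local` file (such as show_password in alert_actions.conf) can simply be removed by editing that file; keys in an app's `default` directory are shipped by the app and are generally left alone until the app itself is updated.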
Hey Folks, we are ingesting ZPA logs into Splunk using the Zscaler LSS service. I believe the configuration is correct based on the documentation; however, the sourcetype is coming up as the sc4s fallback and the logs are unreadable. It's confirmed that the logs are streaming to the HF. Can anyone who has done a similar configuration advise?
In ITSI, when using NEAP to trigger email alerts, Splunk ITSI automatically appends footer text to the email body. Even though we remove the footer text via the email alert action configuration and save it successfully, it reappears when the configuration is reopened. This issue persists despite the general email settings in Splunk not containing any footer text. We also need to keep "HTML & Plain Text" on because we are using tokens and multiple links (service analyzer and viewing the episode), and without it they will not render correctly. We cannot use just "Plain Text" for that reason. If anyone has any ideas or suggestions, that would be much appreciated.
We have a case of a customer who developed a dashboard within Splunk that has 10 panels (visualizations). He uses the dashboard from outside Splunk with a Java program that does web scraping, and then produces the appropriate real-time reports in his code. Now, due to authentication issues, we would like to convert this to use the REST API. Do we have any other options, like embedding or anything else?
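As a sketch of the REST route (host, credentials, and the search string are placeholders), each panel's underlying query can be run server-side via the export endpoint instead of scraping the rendered page:

```shell
curl -k -u <user>:<password> \
  "https://<splunk-host>:8089/services/search/jobs/export" \
  -d search="search index=main | head 10" \
  -d output_mode=json
```

Token-based authentication (an `Authorization: Bearer <token>` header) avoids embedding a password in the Java program. Report embedding ("Embed in web page" on a scheduled report) is another option, though embeds only show results from the report's schedule, not ad hoc real-time data.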
Hello, in Splunk Dashboard Studio, is it possible to change the font size of a specific column?

Thanks
Hello, I set up a distributed search environment in VirtualBox with a search head, an indexer, and a deployment server. Initially the forwarders were showing up in the deployment server as having phoned home, but after restarting I see that no clients are coming up in the DS; instead they are showing up in the search head's Forwarder Management. I checked deploymentclient.conf and the IP points to the deployment server. I tried removing deployment-apps on the search head and restarting, but I think that because it's in distributed search mode, the folder is automatically recreated.
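For reference, this is the client-side stanza shape (host is a placeholder); running `splunk btool deploymentclient list --debug` on each forwarder shows which copy of the file actually wins, since a stray deploymentclient.conf in another app can silently override the one you edited:

```ini
[target-broker:deploymentServer]
targetUri = <deployment-server-ip>:8089
```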
Hello, is it possible with Splunk HEC, fed from Kafka, to receive raw events on the HF in order to parse fields with add-ons? It seems we can only receive JSON data with an "event" field, and may not be able to extract fields with standard add-ons. The HEC event may also contain the target index and sourcetype. Thanks.
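HEC does have a raw endpoint alongside the JSON /event endpoint, and Splunk Connect for Kafka can be pointed at it (via the connector's `splunk.hec.raw` setting; verify against the connector documentation, as this is from memory). A sketch of what the HF would receive, with host, token, index, and sourcetype as placeholders:

```shell
curl -k "https://<hf-host>:8088/services/collector/raw?index=<index>&sourcetype=<sourcetype>" \
  -H "Authorization: Splunk <hec-token>" \
  -d "raw event text exactly as the add-on expects it"
```

Events arriving via /raw go through normal line breaking and parsing for the given sourcetype, so add-on extractions keyed to that sourcetype should apply.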
I want to set two tokens at the same time when a value is selected from a dropdown input. I have a dynamic dropdown input. Let's say the possible values are A, B, and C. If A is selected, I want to set two tokens: token1=X and token2=Y. I will then use those tokens in different panels. I tried the below, but one token is not getting its value:

<change>
  <condition value="A">
    <set token="T1">myRegex1(X)</set>
  </condition>
</change>
<change>
  <condition value="A">
    <set token="T2">myRegex2(Y)</set>
  </condition>
</change>

Thanks
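For comparison, Simple XML allows several <set> elements inside a single <condition>, which is the usual shape for setting two tokens from one selection (token names copied from the question; values are placeholders):

```xml
<change>
  <condition value="A">
    <set token="T1">myRegex1(X)</set>
    <set token="T2">myRegex2(Y)</set>
  </condition>
</change>
```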
Hello Splunkers!! How can I efficiently use the mvexpand command to expand multiple multi-value fields, given that it is a resource-intensive and expensive command? Please guide me.
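One widely used pattern (field names here are illustrative) is to zip the multi-value fields together, expand once, then split them back out, so mvexpand runs a single time instead of once per field:

```spl
... | eval zipped=mvzip(field1, field2, "|")
| mvexpand zipped
| eval field1=mvindex(split(zipped, "|"), 0),
       field2=mvindex(split(zipped, "|"), 1)
| fields - zipped
```

Filtering rows with where/search before the mvexpand also keeps memory usage down, since the command materializes one output row per multi-value entry.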
Good day everyone, I am new to Splunk and I need some suggestions. We are sending our Mendix logs to Splunk Cloud, but our logs arrive in Splunk as a single event. Is it possible for me to extract the fields from the message part? Example:

Module:SplunkTest
Microflow: ACT_Omnext_Create
latesteror_message:Access denied..
http status: 401
Http reasonphrase Access denied...

Or should this data be structured in Mendix before being sent to Splunk? Thanks for any suggestions.
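As a search-time sketch (the regexes assume the message text looks exactly like the example above, so adjust them to the real format):

```spl
... | rex field=_raw "Microflow:\s*(?<microflow>\S+)"
| rex field=_raw "http status:\s*(?<http_status>\d+)"
```

The same patterns can be made permanent as EXTRACT-* entries in props.conf for the sourcetype, which avoids repeating the rex in every search.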
So I am using this script. What it does is create a Solved button in each row of a dashboard drilldown; when that button is clicked, it changes the value of the solved field from 0 to 1. I want to know what's wrong with this script: the button can be seen, but when it is clicked nothing happens, even if I refresh the drilldown. This is the drilldown with the button in the rowKey field. This is the script:

require([
    'splunkjs/mvc',
    'splunkjs/mvc/searchmanager',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!',
    'jquery'
], function(mvc, SearchManager, TableView, ignored, $) {
    // Define a simple cell renderer with a button
    var ActionButtonRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            return cell.field === 'rowKey';
        },
        render: function($td, cell) {
            $td.addClass('button-cell');
            var rowKey = cell.value;
            var $btn = $('<button class="btn btn-success">Mark Solved</button>');
            $btn.on('click', function(e) {
                e.preventDefault();
                e.stopPropagation();
                var searchQuery = `| inputlookup sbc_warning.csv | eval rowKey=tostring(rowKey) | eval solved=if(rowKey="${rowKey}", "1", solved) | outputlookup sbc_warning.csv`;
                var writeSearch = new SearchManager({
                    id: "writeSearch_" + Math.floor(Math.random() * 100000),
                    search: searchQuery,
                    autostart: true
                });
                writeSearch.on('search:done', function() {
                    console.log("Search completed and lookup updated");
                    var panelSearch = mvc.Components.get('panel_search_id');
                    if (panelSearch) {
                        panelSearch.startSearch();
                        console.log("Panel search restarted");
                    }
                });
            });
            $td.append($btn);
        }
    });

    // Apply the renderer to the specified table
    var tableComponent = mvc.Components.get('sbc_alarm_table');
    if (tableComponent) {
        tableComponent.getVisualization(function(tableView) {
            tableView.table.addCellRenderer(new ActionButtonRenderer());
            tableView.table.render();
        });
    }
});
Hi,

I would like to ask how to change all users' timezone to Pacific time. I did some research and saw people recommend configuring this file: SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf. But from what I know, or at least how my Splunk was set up, it's SaaS: they provided me a link to my domain and I started using it. I'm not quite sure where exactly the mentioned file is.

Thanks,
Brian
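For reference, the on-premises shape of that setting is below; on Splunk Cloud there is no filesystem access, so the equivalent change generally goes through Splunk support or an app uploaded via the UI (an assumption worth confirming with support):

```ini
# SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf
[general_default]
tz = America/Los_Angeles
```

Per-user timezone choices made under Account Settings override this default.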
Hello, I would like to know if there is a way to see/check tsidx file sizes and raw data file sizes. I would like to reduce the tsidx file size to improve search performance. Thanks for your support. Nordine
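A starting point, with the index name as a placeholder: dbinspect reports per-bucket sizes, and comparing rawSize with sizeOnDiskMB gives a rough feel for the index-file overhead (field units differ, so check the dbinspect documentation before drawing conclusions):

```spl
| dbinspect index=<your_index>
| stats sum(rawSize) as raw_size, sum(sizeOnDiskMB) as disk_mb by state
```

On the reduction side, indexes.conf settings such as tsidxWritingLevel can affect tsidx size; review the indexes.conf spec for your version before changing anything.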