All Posts

@Namo  Could you please confirm the upgrade path, specifically from which version to which version Splunk was upgraded? Please note that you must first upgrade to KV Store server version 4.2.x before proceeding with an upgrade to Splunk Enterprise 9.4.x or higher. For detailed instructions on updating to KV Store version 4.2.x (applicable to Splunk Enterprise versions 9.0.x through 9.3.x), refer to the official documentation, "Migrate the KV store storage engine": https://help.splunk.com/en/splunk-enterprise/administer/admin-manual/9.3/administer-the-app-key-value-store/migrate-the-kv-store-storage-engine (also available at https://docs.splunk.com/Documentation/Splunk/9.3.2/Admin/MigrateKVstore). We strongly recommend reviewing this guide to ensure a successful upgrade path and avoid issues like the one you're encountering.
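As a quick sanity check before and after the upgrade, the current KV store server version and storage engine can be read from the CLI (a minimal sketch; the exact output fields can vary by version):

$SPLUNK_HOME/bin/splunk show kvstore-status
# look for the serverVersion and storageEngine values in the output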
Hello,  Can anyone please provide a query which lists all forwarders that have not sent data over the last 30 days?   Thank you
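One possible starting point, offered only as a sketch: group the forwarder connection records in _internal by forwarder hostname and keep the ones not seen in the last 30 days. The 60-day window and 30-day threshold are assumptions to adjust, and forwarders that were silent for the entire window will not appear at all.

index=_internal source=*metrics.log* group=tcpin_connections earliest=-60d
| stats latest(_time) AS last_seen BY hostname
| where last_seen < relative_time(now(), "-30d")
| convert ctime(last_seen)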
I think I used the wrong terms. To clarify: I have a distributed Splunk environment with clustered indexers.
Hi, I am using mcollect to collect data from certain metrics into another metric index. I have created the new metric index on the search head and also on the indexer clusters. The command looks something like this, but whenever I run it, I get the error 'No results to summary index'.  | mpreview index=metrics_old target_per_timeseries=5 filter="metric_name IN ( process.java.gc.collections) env IN (server_name:port)" | mcollect index=metrics_new  Is there something I'm doing wrong when using the mcollect command? Please advise. Thanks in advance.   Regards, Pravin
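One hedged first check, in case the base search itself is returning nothing: run the mpreview part on its own over the same time range and confirm it produces data points before piping anything to mcollect, for example:

| mpreview index=metrics_old target_per_timeseries=5 filter="metric_name IN (process.java.gc.collections)"
| stats count

If the count is zero, the 'No results to summary index' message is expected, and the filter or time range needs adjusting rather than the mcollect call itself.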
Hi @vnetrebko , when you say clustered environment, do you mean an Indexer Cluster or a Search Head Cluster? If it's a Search Head Cluster, lookups are automatically replicated between the peers and you don't need any additional method. If you don't have a Search Head Cluster, maybe this could be the easiest solution. Ciao. Giuseppe
Hello @tej57  It was upgraded from 9.2 to 9.4. The cert is not expired; it is valid for another few days. The server cert is a combined cert.
Hey @vnetrebko, One approach that I can think of for your scenario is to use REST endpoints to fetch the information. You can run a search that does an inputlookup against both lookup tables, then export the search job results and access the data. Also, whenever a search that references a lookup table is run from the search head, the lookup is distributed to the search peers (indexers) as part of the search bundle. All the information for running the search via the REST API and exporting the output is documented here - https://help.splunk.com/en/splunk-enterprise/leverage-rest-apis/rest-api-tutorials/9.4/rest-api-tutorials/creating-searches-using-the-rest-api Thanks, Tejas.   --- If the above solution helps, an upvote is appreciated..!!
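For illustration, a minimal sketch of that export call (the host, credentials, and lookup file name are placeholders, not values from this thread):

curl -k -u <user>:<password> https://<search_head>:8089/services/search/jobs/export \
     --data-urlencode search="| inputlookup my_lookup.csv" \
     -d output_mode=csv

This streams the lookup contents back as CSV in one request, without having to create and then poll a separate search job.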
Hello @Namo, There can be multiple reasons for KV store failure. What version of Splunk did you upgrade from? Also, did you check the expiry of the certificate used by the KV store? Setting enableSplunkdSSL to false will disable secure communication internally throughout the Splunk deployment wherever the management port is used.  Thanks, Tejas. 
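For reference, a quick way to check that expiry from the command line (a sketch assuming the default server.pem path; the actual file is whatever serverCert in server.conf points to):

openssl x509 -enddate -noout -in $SPLUNK_HOME/etc/auth/server.pem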
Hello Team,   We are on Linux and, post upgrade to Splunk 9.4.3, the KV store is failing. I have followed a few recommendations given in the community for this issue, but they are not working. Below is the mongod.log output: SSL peer certificate validation failed: self signed certificate in certificate chain 2025-06-20T08:09:03.925Z I NETWORK [conn638] Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: self signed certificate in certificate chain. Ending connection from 127.0.0.1:54188 (connection id: 638) This error can be bypassed if we add the stanza below in server.conf, though it is only a workaround: enableSplunkdSSL = false Any other input is appreciated.
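For context, that setting lives under the [sslConfig] stanza of server.conf. A minimal sketch of the workaround as described, plus a commented-out line showing one possible longer-term fix (pointing Splunk at a CA bundle that includes the issuing CA of the combined cert, rather than disabling SSL; the path is a placeholder):

[sslConfig]
# workaround only: disables TLS on the management port
enableSplunkdSSL = false
# possible longer-term fix instead of the above:
# sslRootCAPath = /opt/splunk/etc/auth/mycerts/ca-bundle.pem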
@gcusello thanks that is exactly what I needed 
Hello there! I am currently managing a Splunk Enterprise clustered environment, where I have implemented a scheduled search that runs every 5 minutes to maintain and update two CSV lookup files. These lookup files are currently stored in the designated lookups directory on the Search Head. My objective is to develop a custom application using the Splunk Add-on Builder, which will incorporate a Python script that will be executed on the Heavy Forwarder. This script requires access to the updated lookup data to function properly. However, due to the clustered nature of my environment, directly accessing these CSV files from the filesystem through the script is not an option. Ideally, indexers should also have access to the same lookup data as both SH and HF. Are there any potential methods or best practices for establishing a reliable mechanism to push or synchronize these lookup files from the SH to the HF (and Indexers, if possible)? Perhaps there are some recommended approaches or established methodologies to achieve reliable sync of those lookup files in a clustered environment that I haven’t found?
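For what it's worth, one hedged sketch of how the Heavy Forwarder script could read the current lookup contents straight from the Search Head's REST API instead of the filesystem. It assumes the HF can reach the SH on port 8089 and that the requests library is available; the host, credentials, and lookup name below are placeholders:

import requests

SEARCH_HEAD = "https://sh.example.com:8089"   # placeholder
AUTH = ("svc_account", "changeme")            # placeholder credentials
LOOKUP = "my_lookup.csv"                      # placeholder lookup file name

# Stream the lookup rows back as CSV via the export endpoint
resp = requests.post(
    SEARCH_HEAD + "/services/search/jobs/export",
    auth=AUTH,
    data={"search": "| inputlookup " + LOOKUP, "output_mode": "csv"},
    verify=False,  # point this at a CA bundle in a real deployment
)
resp.raise_for_status()
rows = resp.text.splitlines()
print(rows[:5])  # header row plus the first few data rows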
Hi @L_Petch , use this as the filter: index=* OR (index=_internal host IN (spl001,spl002)) Ciao. Giuseppe
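For illustration, a rough sketch of where that filter could sit in authorize.conf for a dedicated role (the role name is made up, and it's worth verifying the exact syntax that srchFilter accepts on your version):

[role_internal_limited]
srchIndexesAllowed = *;_internal
srchFilter = index=* OR (index=_internal (host=spl001 OR host=spl002))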
Hi @gcusello  How does this stop the filter from restricting access to all the other indexes the user has access to? As soon as I add the filter " host::spl001 OR host::spl002 ", for example, it will apply this restriction to all indexes, not just the _internal index I want it to, won't it?
Hi @L_Petch , to create the new role don't use inheritance; instead, clone the role from another one and modify the search filters. Ciao. Giuseppe
Hello,   I need to give certain users access to _internal but only allow them to see certain hosts. I planned to do this by adding a new role, giving it access to the index, and then limiting it to the hosts in the search filter section. This works; however, it applies the same search filter to all the other indexes they have access to, even though the filter isn't inherited and that access is granted by a separate role.   Is there an easier/better way of doing this?
1. @PrewinThomas is on the right track here - your CM/deployer should be used only to deploy the ES app; it shouldn't run it after that. 2. Still, you might soon run into the same problem with searches being skipped/delayed if you leave the default schedules, because the searches will bunch up at certain points in time. As a rule of thumb you should review your scheduled searches (reports, correlation searches, datamodel acceleration) and spread them across the available time slots.
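As a sketch of what spreading schedules can look like (the search names and cron values are invented for illustration; cron_schedule is set in savedsearches.conf or via the UI):

[Correlation Search - Example A]
cron_schedule = 3,18,33,48 * * * *

[Correlation Search - Example B]
cron_schedule = 9,24,39,54 * * * *

Instead of every search firing at the same */15 minutes, each one gets its own minute offsets so they don't all hit the scheduler at once.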
While you probably could upload a file as a part of your app (I'm not sure if it would pass vetting though), Dashboard Studio has no native method of playing audio files, and the last time I checked it still didn't support being extended with custom JS code.
Wait a second. You can't use props/transforms after the events have been indexed. You can only apply them during indexing, after the initial ingestion by an input.
Time to add a new entry on ideas.splunk.com and ask for this feature! Of course, you should first check whether an idea like this already exists. Then share that idea here, so we can vote for it too!
Not quite. 1. Ingesting /var/log/messages indiscriminately will result in a high level of noise. Also, with a typical syslog-enabled system you will have multiple event formats which you won't easily parse. 2. There are many issues with TA_nix, so advising it as the primary source of data is a bit hasty.