All Posts


Hey @vnetrebko, One approach I can think of for your scenario is to use the REST endpoints to fetch the information. You can run a search that does an inputlookup against both lookup tables, then export the search job results and access the data. Also, whenever a search that references a lookup table is run from the search head, the lookup is distributed to the search peers (indexers) as part of the search bundle. Everything needed to run the search via the REST API and export the output is documented here - https://help.splunk.com/en/splunk-enterprise/leverage-rest-apis/rest-api-tutorials/9.4/rest-api-tutorials/creating-searches-using-the-rest-api Thanks, Tejas.   --- If the above solution helps, an upvote is appreciated..!!
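To make that concrete, here is a minimal sketch of the flow in Python, assuming a search head reachable at splunk-sh.example.com:8089 and two lookups named lookup_a.csv and lookup_b.csv (host, credentials, and lookup names are all hypothetical; swap in your own):

# Minimal sketch: fetch lookup contents over the Splunk REST API.
import time
import requests

BASE = "https://splunk-sh.example.com:8089"   # hypothetical search head
AUTH = ("svc_account", "password")            # prefer a token in production

# Create a search job that reads both lookups.
spl = "| inputlookup lookup_a.csv | append [| inputlookup lookup_b.csv]"
resp = requests.post(
    f"{BASE}/services/search/jobs",
    auth=AUTH,
    data={"search": spl, "output_mode": "json"},
    verify=False,  # only if the management port uses a self-signed cert
)
sid = resp.json()["sid"]

# Poll until the job completes.
while True:
    content = requests.get(
        f"{BASE}/services/search/jobs/{sid}",
        auth=AUTH,
        params={"output_mode": "json"},
        verify=False,
    ).json()["entry"][0]["content"]
    if content["isDone"]:
        break
    time.sleep(2)

# Export the results as CSV for the script on the Heavy Forwarder.
results = requests.get(
    f"{BASE}/services/search/jobs/{sid}/results",
    auth=AUTH,
    params={"output_mode": "csv", "count": 0},
    verify=False,
)
print(results.text)

The final export returns CSV that the Heavy Forwarder script can parse directly; output_mode=json works the same way if you prefer structured results.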
Hello @Namo, There can be multiple reasons for a KV store failure. What version of Splunk did you upgrade from? Also, did you check the expiry of the certificate used by the KV store? Setting enableSplunkdSSL to false disables secure communication internally throughout the Splunk deployment wherever the management port is used.  Thanks, Tejas. 
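For the expiry check, a quick sketch in Python, assuming openssl is on the PATH and the KV store uses the default server certificate (adjust the path to match your sslConfig settings):

# Quick expiry check for the KV store certificate.
import subprocess

CERT = "/opt/splunk/etc/auth/server.pem"  # assumed default path; verify in server.conf
out = subprocess.run(
    ["openssl", "x509", "-in", CERT, "-noout", "-enddate", "-subject"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)  # e.g. notAfter=Jun 20 08:09:03 2026 GMT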
Hello Team, We are on Linux, and after upgrading to Splunk 9.4.3 the KV store is failing. I have followed a few recommendations given in the community for this issue, but they are not working. Below is the mongod.log output:

SSL peer certificate validation failed: self signed certificate in certificate chain
2025-06-20T08:09:03.925Z I NETWORK [conn638] Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: self signed certificate in certificate chain. Ending connection from 127.0.0.1:54188 (connection id: 638)

This error can be bypassed by adding the stanza below to server.conf, though it is only a workaround:

enableSplunkdSSL = false

Any other input is appreciated.
@gcusello thanks that is exactly what I needed 
Hello there! I am currently managing a Splunk Enterprise clustered environment, where I have implemented a scheduled search that runs every 5 minutes to maintain and update two CSV lookup files. These lookup files are currently stored in the designated lookups directory on the Search Head.

My objective is to develop a custom application using the Splunk Add-on Builder, which will incorporate a Python script that will be executed on the Heavy Forwarder. This script requires access to the updated lookup data to function properly. However, due to the clustered nature of my environment, directly accessing these CSV files from the filesystem through the script is not an option. Ideally, indexers should also have access to the same lookup data as both the SH and HF.

Are there any potential methods or best practices for establishing a reliable mechanism to push or synchronize these lookup files from the SH to the HF (and indexers, if possible)? Perhaps there are some recommended approaches or established methodologies to achieve reliable sync of those lookup files in a clustered environment that I haven't found?
hi @L_Petch , use as filter:

index=* OR (index=_internal host IN (spl001,spl002))

Ciao. Giuseppe
Hi @gcusello  How does this stop the filter from blocking access to all the other indexes the user has access to? As soon as I add the filter " host::spl001 OR host::spl002 ", for example, it will apply this restriction to all indexes, not just the _internal index I want it to, won't it?
Hi @L_Petch , to create the new role, don't use inheritance; instead, clone the role from another one and modify the search filters. Ciao. Giuseppe
Hello,   I need to give certain users access to _internal but only allow them to see certain hosts. I planned to do this by adding a new role, giving it access to the index, and then limiting them to the hosts in the search filter section. This works; however, it applies the same search filter to all other indexes they have access to, even though the filter isn't inherited and that access is granted by a separate role.   Is there an easier/better way of doing this?
1. @PrewinThomas is on the right track here - your CM/deployer should be used only to deploy the ES app. It shouldn't run it after that.
2. Still, you might soon run into the same problem with searches skipped/delayed if you leave the default schedules and the searches bunch up at certain points in time. As a rule of thumb you should review your scheduled searches (reports, correlation searches, datamodel acceleration) and spread them across available time slots; a sketch of one way to audit them is below.
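As a starting point for that review, here is a minimal sketch that pulls skipped-search counts from the scheduler logs over the REST API (the host and credentials are hypothetical; running the embedded SPL directly in Search works just as well):

# Minimal sketch: audit skipped scheduled searches via the REST export endpoint.
import requests

BASE = "https://splunk-sh.example.com:8089"   # hypothetical search head
AUTH = ("svc_account", "password")

# Count skips per saved search over the last 24 hours.
spl = (
    "search index=_internal sourcetype=scheduler status=skipped earliest=-24h "
    "| stats count BY savedsearch_name | sort - count"
)
resp = requests.post(
    f"{BASE}/services/search/jobs/export",
    auth=AUTH,
    data={"search": spl, "output_mode": "csv"},
    verify=False,  # only if the management port uses a self-signed cert
)
print(resp.text)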
While you probably could upload a file as a part of your app (I'm not sure if it would pass vetting though), Dashboard Studio has no native method of playing audio files and last time I checked it still didn't support extending by means of custom JS code.
Wait a second. You can't apply props/transforms after the events have been indexed. Those transformations only happen at index time, during ingestion, after the input initially picks up the data.
Time to add a new entry on ideas.splunk.com and request this feature! Of course, you should first check whether such an idea already exists. Then post a link to the idea here, so we can vote for it too!
Not quite.
1. Ingesting /var/log/messages indiscriminately will result in a high level of noise. Also, on a typical syslog-enabled system you will have multiple event formats which you won't easily parse.
2. There are many issues with TA_nix, so advising it as the primary source of data is a bit hasty.
@dinesh001kumar  Natively, Splunk Cloud Studio dashboards do not support playing audio alerts. As a workaround you can consider alert actions such as emails, webhooks, or integrations with external systems/scripts to play audio. Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
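Following up on the webhook option above: the sound has to be played by something outside Splunk. A minimal sketch of a receiver, assuming the host running it has a local audio player such as aplay and an alert.wav file (both hypothetical choices):

# Minimal sketch: webhook receiver that plays a sound when a Splunk alert fires.
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Splunk's webhook alert action POSTs a JSON payload.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print("alert received:", payload.get("search_name"))
        # Play the audio cue; swap in any player available on the host.
        subprocess.Popen(["aplay", "alert.wav"])
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Point the Splunk webhook alert action at http://<host>:8000/
    HTTPServer(("", 8000), AlertHandler).serve_forever()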
Hi @bmer , OK, you want only the organizationIds that are common to both searches, which share the same index and user*. In this case you have to use my search with an additional clause:

index="abc" "usr*" ("`DLQuery`DLQuery`POST`" OR ("DLQuery" "DLSqlQueryV2"))
| eval type=if(searchmatch("`DLQuery`DLQuery`POST`"), "v1", "v2")
| stats dc(type) AS type_count count BY organizationId
| where type_count>1

In this way you get only the events that match both searches. Ciao. Giuseppe
@gcusello I want to count the occurrences of events coming into Splunk 1 and Splunk 2 separately. "I suppose that between quotes you have strings to search and then you want to count the occurrences for each organizationId": the first part of your statement is correct, BUT I do not want to aggregate by organizationId; it is common to BOTH Splunks. The "`DLQuery`DLQuery`POST`" string appears in all events related to v1, whereas "DLQuery" "DLSqlQueryV2" covers all events related to v2. So at the end of the day I want to know the day-wise v1 and v2 counts.
Hi @dinesh001kumar , I'm not sure that Splunk Cloud permits uploading an audio file (I have never tried!), but you can ask them. Anyway, if the anomaly you're speaking of is an alert, you could connect a script to the alert that launches an audio player; otherwise I don't see any other choices. Ciao. Giuseppe
I have a live service-monitoring dashboard, created in Splunk Cloud using Dashboard Studio (JSON). Is there any possibility of playing an audio sound if there is an abnormality in any of the services on the Studio dashboard? If it's possible, can anyone help on how to achieve this?
Hi @bmer , just some additional information: what's your purpose: finding the count of occurrences, or listing all the events? And what are the backticks between the quotes? I suppose that between the quotes you have strings to search, and then you want to count the occurrences for each organizationId. In this case, there are many ways to reach your purpose, but the most efficient is stats:

index="abc" "usr*" ("`DLQuery`DLQuery`POST`" OR ("DLQuery" "DLSqlQueryV2"))
| stats count BY organizationId

Ciao. Giuseppe