All Posts


I am trying to fetch metric values for the infrastructure I am monitoring using the REST APIs. So far, all the APIs I have tried only give me metric metadata, not the actual values of the metrics. Can someone help me with the API for retrieving values?
Hi @Namo

Have you got custom SSL in use on your Splunk instance? One thing you could check is running:

$SPLUNK_HOME/bin/splunk cmd btool server list kvstore

Check specifically for sslVerifyServerCert - is this true? If so, try setting it to false and restart Splunk to see if this resolves the issue temporarily. If it does, then at least you can get the service back up and then focus on how you can get the SSL certs into a working state without having to leave sslVerifyServerCert set to false.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
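For reference, a minimal sketch of that temporary override, assuming (per the btool check above) that sslVerifyServerCert is read from the kvstore settings in server.conf on your instance - confirm the stanza in your own btool output before editing, since it can differ between deployments:

# $SPLUNK_HOME/etc/system/local/server.conf
# Temporary workaround only - re-enable certificate verification once the cert chain is fixed
[kvstore]
sslVerifyServerCert = false

Restart Splunk afterwards ($SPLUNK_HOME/bin/splunk restart) for the change to take effect.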
Hi @kiran_panchavat

There is no folder with the following name in the $SPLUNK_HOME/etc/apps folder:

100_<stackname>_splunkcloud
The KV store was already on 4.2 before the upgrade, hence we went ahead with the Splunk upgrade. The README file was reviewed before the upgrade. The error states:

Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: self signed certificate in certificate chain. Ending connection from 127.0.0.1:58948

Does it mean we need to review the certificate steps as well? Currently the server cert is issued by DigiCert, and cacert does not have DigiCert in its list of trusted CAs.
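If it helps, one way to confirm that mismatch locally is with openssl, using placeholder paths that stand in for wherever your server.conf points its server certificate and CA bundle (a sketch, not the exact paths on your system):

# Show who issued the server certificate and when it expires
openssl x509 -in /opt/splunk/etc/auth/mycerts/server.pem -noout -issuer -enddate

# Check whether the CA bundle can validate that certificate; if the DigiCert chain
# is missing from cacert.pem this fails, matching the mongod error above
openssl verify -CAfile /opt/splunk/etc/auth/mycerts/cacert.pem /opt/splunk/etc/auth/mycerts/server.pem

If the server cert is a combined bundle, the intermediate certificates can be supplied separately with -untrusted so only the root CA needs to be present in the CA file.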
Hi @livehybrid,

To answer your questions in order: I can get events, and it works when I run the first line. I am logged in as an admin, and I created the index, so I have permission to read and write to the index. Data is available, but nothing is written to the new index.

info: The limit has been reached for log messages in info.csv. 65149 messages have not been written to info.csv. Refer to search.log for these messages or limits.conf to configure this limit.
warn: No results to summary index.

Once the execution completes, I get the above messages in the job inspector.

Thanks,
Pravin
Hi @sverdhan

I would avoid searching over index=* because it is very resource intensive and may also include hosts which are not forwarders! Instead you can utilise the _metrics index, which is very fast and efficient. You could try something like this:

| mstats latest_time(_value) as latest_time WHERE earliest=-31d latest=now index=_metrics metric_name="spl.mlog.tcpin_connections._tcp_eps" source="/opt/splunk/var/log/splunk/metrics.log" group=tcpin_connections by hostname
| eval notSeenFor30Days=IF(latest_time<now()-(60*60*24*30),"NotSeen","Seen")
| eval lastSeen=tostring(now()-latest_time,"duration")

I would usually recommend having a lookup of "known forwarders" for this task and updating it with when each forwarder was last seen; that way you wouldn't need to look back 30 days each time. A rough sketch of that approach is below.
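The sketch assumes a hypothetical lookup file called known_forwarders.csv, with the metric and field names taken from the search above; schedule it frequently (for example hourly) so the lookup keeps accumulating the latest sighting of every forwarder:

| mstats latest_time(_value) as last_seen WHERE earliest=-24h latest=now index=_metrics metric_name="spl.mlog.tcpin_connections._tcp_eps" group=tcpin_connections by hostname
| inputlookup append=true known_forwarders.csv
| stats max(last_seen) as last_seen by hostname
| outputlookup known_forwarders.csv

On the very first run the lookup will not exist yet, so seed it by running just the mstats and outputlookup lines once. Reporting then becomes a cheap lookup read, e.g. | inputlookup known_forwarders.csv | where last_seen < now()-(60*60*24*30) to list forwarders that have been silent for more than 30 days.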
Hi @sverdhan,

see in the Monitoring Console the "DMC Alert - Missing forwarders" alert:

| inputlookup dmc_forwarder_assets
| search status="missing"
| rename hostname as Instance

otherwise, if you want to know the clients that were connected in the last 30 days but not in the last hour, you could run something like this:

| tstats latest(_time) AS _time count where index=_internal earliest=-30d@d latest=now BY host
| eval period=if(_time>now()-3600,"Last hour","Previous")
| stats dc(period) AS period_count values(period) AS period latest(_time) AS _time BY host
| where period_count=1 AND period="Previous"
| table host _time

Ciao.
Giuseppe
@sverdhan

| metadata type=hosts index=* earliest=-90d@d
| eval age = now() - lastTime
| eval last_seen = strftime(lastTime, "%Y-%m-%d %H:%M:%S")
| where age > 30*24*60*60
| eval age_days = round(age/(24*60*60), 2)
| table host, last_seen, age_days
| rename host as "Forwarder", last_seen as "Last Data Received", age_days as "Days Since Last Data"

Note that the lookback window must extend beyond 30 days; otherwise every host returned has reported within the last 30 days and the age filter can never match.
Hi @_pravin

The error "No results to summary index" suggests that the first part of your query didn't return any events, or didn't return events which could be interpreted as a metric. Can you run the first line on its own to confirm you are getting events returned? Can you also please confirm that you have permission to read/write to metrics_new and that (as the name suggests) it is definitely a metric index?
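Two quick checks along those lines, assuming metrics_new is the target index name from the search in question (the datatype column is available on recent Splunk versions and should read "metric"):

| mcatalog values(metric_name) WHERE index=metrics_new

| rest /services/data/indexes splunk_server=local
| search title=metrics_new
| table title datatype

The mcatalog search lists whatever metric names are already stored in metrics_new (it will be empty until something is successfully written there), while the rest search confirms the index exists on the search head and was created as a metrics index rather than an event index.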
@sverdhan

| tstats latest(_time) as lastTime where index=* by host
| eval age=now()-lastTime
| where age > 2592000
| convert ctime(lastTime)
| rename host as "Forwarder Host", lastTime as "Last Data Received Time", age as "Age (in seconds)"
| sort - "Age (in seconds)"
@Namo

Please can you confirm if you followed the Splunk 9.4 upgrade pre-steps that are documented here? https://docs.splunk.com/Documentation/Splunk/9.4.0/Installation/AboutupgradingREADTHISFIRST

There is a section on upgrading the KV store before running the Splunk 9.4 upgrade.

Reference: https://splunk.my.site.com/customer/s/article/KV-store-status-failed-after-upgrade-to-9-4?
@Namo

Could you please confirm the upgrade path, specifically from which version to which version Splunk was upgraded? Please note that you must first upgrade to KV Store server version 4.2.x before proceeding with an upgrade to Splunk Enterprise 9.4.x or higher.

For detailed instructions on updating to KV Store version 4.2.x (applicable to Splunk Enterprise versions 9.0.x through 9.3.x), refer to the official documentation, "Migrate the KV store storage engine": https://help.splunk.com/en/splunk-enterprise/administer/admin-manual/9.3/administer-the-app-key-value-store/migrate-the-kv-store-storage-engine

We strongly recommend reviewing this guide to ensure a successful upgrade path and avoid issues like the one you're encountering.

https://docs.splunk.com/Documentation/Splunk/9.3.2/Admin/MigrateKVstore
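To confirm where a given instance currently stands before (or after) the upgrade, the CLI can report the KV store server version; a sketch (the exact fields shown vary by version, and on recent releases --verbose also reports the storage engine):

$SPLUNK_HOME/bin/splunk show kvstore-status --verbose

Before moving to 9.4.x you want the reported server version to be 4.2.x, with the wiredTiger storage engine in place, which is what the migration documentation linked above walks through.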
Hello,

Can anyone please provide a query which lists all forwarders that have not sent data over the last 30 days?

Thank you
I think I used the wrong terms. To clarify: I have a distributed Splunk environment with clustered indexers.
Hi,

I am using mcollect to collect data from certain metrics into another metric index. I have created the new metric index on the search head and also on the indexer clusters. The command looks something like this, but whenever I run it, I get the error 'No results to summary index':

| mpreview index=metrics_old target_per_timeseries=5 filter="metric_name IN ( process.java.gc.collections) env IN (server_name:port)"
| mcollect index=metrics_new

Is there something I'm doing wrong when using the mcollect command? Please advise. Thanks in advance.

Regards,
Pravin
Hi @vnetrebko,

when you say clustered environment, do you mean an Indexer cluster or a Search Head Cluster?

If it is a Search Head Cluster, lookups are automatically replicated between the peers and you don't need any additional method. If you don't have a Search Head Cluster, maybe this could be the easiest solution.

Ciao.
Giuseppe
Hello @tej57

It was upgraded from 9.2 to 9.4. The cert is not expired; it is still valid for another few days. The server cert is a combined cert.
Hey @vnetrebko,

One approach I can think of, considering your scenario, is to use REST endpoints to fetch the information. You can run a search that does an inputlookup on both lookup tables, then export the search job results and access the data. Also, whenever a search that references a lookup table is run from the search head, the lookup is distributed to the search peers (indexers) as part of the search bundle.

All the information for running the search via the REST API and exporting the output is documented here - https://help.splunk.com/en/splunk-enterprise/leverage-rest-apis/rest-api-tutorials/9.4/rest-api-tutorials/creating-searches-using-the-rest-api

Thanks,
Tejas.

---
If the above solution helps, an upvote is appreciated..!!
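A minimal sketch of that export call, assuming a lookup named my_lookup.csv and the default management port 8089 (host, credentials, and the lookup name are placeholders):

# Run a one-shot search over REST and stream the results back as JSON
curl -k -u admin:changeme \
  "https://splunk-sh.example.com:8089/services/search/jobs/export" \
  --data-urlencode search="| inputlookup my_lookup.csv" \
  -d output_mode=json

The export endpoint streams results as they become available, so there is no job ID to poll; for larger or asynchronous workflows, the tutorial linked above covers creating a job via /services/search/jobs and fetching its results separately.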
Hello @Namo,

There can be multiple reasons for a KV store failure. What version of Splunk did you upgrade from? Also, did you check the expiry of the certificate used by the KV store? Note that setting enableSplunkdSSL to false will disable secure communication internally throughout the Splunk deployment wherever the management port is used.

Thanks,
Tejas.
Hello Team,

We are on Linux and, post upgrade to Splunk 9.4.3, the KV store is failing. I have followed a few recommendations given in the community for this issue, but they are not working. Below is the mongod.log error:

SSL peer certificate validation failed: self signed certificate in certificate chain
2025-06-20T08:09:03.925Z I NETWORK [conn638] Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: self signed certificate in certificate chain. Ending connection from 127.0.0.1:54188 (connection id: 638)

This error can be bypassed by adding the setting below in server.conf, though it is only a workaround:

enableSplunkdSSL = false

Any other input is appreciated.