All Posts

Let me be completely transparent on this answer: I don't actually know this area. I put your question into Grok and am giving back what it says, so if it is way off, I apologize. It sounds like you are using Splunk Observability Cloud. If you are just trying to pull OS metrics and logs off a system, that is much easier, and I would use the Splunk Linux TA as a guide for the scripts to pull that off a Linux box, and the Windows TA for Windows. But if my gut is right, that is not your problem, and Splunk Observability Cloud is what you are actually looking at. So here is the cut and paste from Grok.

To fetch actual metric values (time-series data) in Splunk Observability Cloud using REST APIs, you can use the /v2/datapoint endpoint, which retrieves data points for specified metrics. Unlike the Metrics Catalog endpoints (e.g., /v2/metric), which return metadata like metric names and dimensions, the /v2/datapoint endpoint provides the numerical values for metrics over a specified time range. Here's how you can approach it:

Endpoint: Use GET /v2/datapoint or POST /v2/datapoint to query metric values. The POST method is useful for complex queries with multiple metrics or filters.

Authentication: Include an access token in the header (X-SF-TOKEN: <YOUR_ORG_TOKEN>). You can find your org token in the Splunk Observability Cloud UI under Settings > Access Tokens.

Query Parameters:
Specify the metric name(s) you want to query (e.g., cpu.utilization).
Use dimensions to filter the data (e.g., host:server1).
Define the time range with startTs and endTs (Unix timestamps in milliseconds) or a relative time range (e.g., -1h for the last hour).
Set the resolution (e.g., 10s for 10-second intervals).

Example request (curl):

curl --request POST \
  --header "Content-Type: application/json" \
  --header "X-SF-TOKEN: <YOUR_ORG_TOKEN>" \
  --data '{
    "metrics": [
      {
        "name": "cpu.utilization",
        "dimensions": {"host": "server1"}
      }
    ],
    "startTs": 1697059200000,
    "endTs": 1697062800000,
    "resolution": "10s"
  }' \
  https://api.<REALM>.signalfx.com/v2/datapoint

Replace <YOUR_ORG_TOKEN> with your access token and <REALM> with your Splunk Observability realm (e.g., us0, found in your profile).

Response: The API returns a JSON object with time-series data points, including timestamps and values for the specified metric(s). For example:

{
  "cpu.utilization": [
    {"timestamp": 1697059200000, "value": 45.2, "dimensions": {"host": "server1"}},
    {"timestamp": 1697059210000, "value": 47.8, "dimensions": {"host": "server1"}}
  ]
}

Tips:
Use the Metric Finder in the Splunk Observability Cloud UI to confirm metric names and dimensions.
If you're using OpenTelemetry, ensure your Collector is configured to send metrics to Splunk Observability Cloud.
For detailed documentation, check the Splunk Observability Cloud developer portal (https://dev.splunk.com/observability/docs/datapoint_endpoint/) and https://help.splunk.com/en/splunk-observability-cloud/manage-data/other-data-ingestion-methods/other-data-ingestion-methods/send-data-using-rest-apis
If you're still getting metadata, verify you're not using the /v2/metric or /v2/metricstore/metrics endpoints, which are for metadata only.
Can you please tell me which API you were using to get these data values?
We recently had the exact same issue in our environment. It was all related to expiring certs. It sounds like you have already looked into this, but I just want to echo that when the cert expires or can't be found, you can get this exact behavior, and it took longer than I'd like to admit for our team to figure out that this was why all of our KV stores were failing.
I am trying to fetch metric values of the infra I am monitoring using REST APIs. So far, all the APIs I have tried are only giving metric metadata, not the actual values of the metrics. Can someone help me with the values API?
Hi @Namo

Have you got custom SSL in use on your Splunk instance? One thing you could check is running:

$SPLUNK_HOME/bin/splunk cmd btool server list kvstore

Check specifically for sslVerifyServerCert - is this true? If so, try setting it to false and restart Splunk to see if this resolves the issue temporarily. If it does, then at least you can get the service back up and focus on getting the SSL certs into a working state without having to leave sslVerifyServerCert set to false.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
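If btool output is ambiguous, the effective setting can also be checked from the search bar. A minimal sketch, assuming the configs/conf-server REST endpoint is readable by your role and that the setting lives in the [kvstore] stanza of server.conf (adjust the stanza name if your environment puts it elsewhere, e.g. [sslConfig]):

| rest splunk_server=local /services/configs/conf-server
| search title="kvstore"
| fields title sslVerifyServerCert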
Hi @kiran_panchavat

There is no folder with the following name in $SPLUNK_HOME/etc/apps:

100_<stackname>_splunkcloud
The KV store was already on 4.2 before the upgrade, hence we went ahead with the Splunk upgrade. The readme file was reviewed before the upgrade. The error states:

Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: self signed certificate in certificate chain. Ending connection from 127.0.0.1:58948

Does it mean we need to review the certificate steps as well? Currently the server cert is issued by DigiCert, and the CA cert does not have DigiCert in its list of trusted certificates.
Hi @livehybrid ,

To answer your questions in order, I can get events, and it works when I run the first line. I am logged in as an admin, and I created the index, so I have permission to read and write to the index. Data is available, but not written to the new index.

info: The limit has been reached for log messages in info.csv. 65149 messages have not been written to info.csv. Refer to search.log for these messages or limits.conf to configure this limit.
warn: No results to summary index.

Once I complete the execution, I get the above messages in the job inspector.

Thanks,
Pravin
Hi @sverdhan

I would avoid looking at things like index=* because this is very resource intensive and may also include hosts which are not forwarders! Instead you can utilise the _metrics index, which is very fast and efficient. You could try something like this:

| mstats latest_time(_value) as latest_time WHERE earliest=-31d latest=now index=_metrics metric_name="spl.mlog.tcpin_connections._tcp_eps" source="/opt/splunk/var/log/splunk/metrics.log" group=tcpin_connections by hostname
| eval notSeenFor30Days=if(latest_time<now()-(60*60*24*30),"NotSeen","Seen")
| eval lastSeen=tostring(now()-latest_time,"duration")

I would usually recommend having a lookup of "known forwarders" for this task and then updating it with when each forwarder was last seen; that way you wouldn't need to look back 30 days each time (see the sketch below).

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
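A minimal sketch of that lookup approach, assuming a lookup file named known_forwarders.csv (the file and field names here are hypothetical). A short scheduled search merges recently seen forwarders into the lookup:

| mstats latest_time(_value) as latest_time WHERE earliest=-4h latest=now index=_metrics metric_name="spl.mlog.tcpin_connections._tcp_eps" group=tcpin_connections by hostname
| inputlookup append=t known_forwarders.csv
| stats max(latest_time) as latest_time by hostname
| outputlookup known_forwarders.csv

The report then only has to read the lookup rather than scan 30 days of metrics:

| inputlookup known_forwarders.csv
| where latest_time < now() - (60*60*24*30)
| eval lastSeen=strftime(latest_time, "%Y-%m-%d %H:%M:%S")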
Hi @sverdhan ,

see in the Monitoring Console the "DMC Alert - Missing forwarders" alert:

| inputlookup dmc_forwarder_assets
| search status="missing"
| rename hostname as Instance

otherwise, if you want to know the clients that were connected in the last 30 days but not in the last hour, you could run something like this (note the final filter keeps hosts whose only activity is outside the last hour):

| tstats latest(_time) AS _time count where index=_internal BY host
| eval period=if(_time>now()-3600,"Last hour","Previous")
| stats dc(period) AS period_count values(period) AS period latest(_time) AS _time BY host
| where period_count=1 AND period="Previous"
| table host _time

Ciao.
Giuseppe
@sverdhan

| metadata type=hosts index=* earliest=-30d@d
| eval age = now() - lastTime
| eval last_seen = strftime(lastTime, "%Y-%m-%d %H:%M:%S")
| where age > 30*24*60*60
| eval age_days = round(age/(24*60*60), 2)
| table host, last_seen, age_days
| rename host as "Forwarder", last_seen as "Last Data Received", age_days as "Days Since Last Data"
Hi @_pravin

The error "No results to summary index" suggests that the first part of your query didn't return any events, or didn't return events which could be interpreted as a metric. Can you run the first line to confirm you are getting events returned? Can you also please confirm that you have permission to read/write to metrics_new and that (as the name suggests) it is definitely a metrics index?

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
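One quick way to sanity-check the target index from the search bar - a minimal sketch, assuming metrics_new is searchable from your search head (mcatalog only returns results for metrics indexes, so no results here after a successful mcollect would point at an index-type or permissions problem):

| mcatalog values(metric_name) AS metric_names WHERE index=metrics_new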
@sverdhan

| tstats latest(_time) as lastTime where index=* by host
| eval age=now()-lastTime
| where age > 2592000
| convert ctime(lastTime)
| rename host as "Forwarder Host", lastTime as "Last Data Received Time", age as "Age (in seconds)"
| sort - "Age (in seconds)"
@Namo

Please can you confirm if you followed the Splunk 9.4 upgrade pre-steps that are documented here?
https://docs.splunk.com/Documentation/Splunk/9.4.0/Installation/AboutupgradingREADTHISFIRST

There is a section on upgrading the KV store before running the Splunk 9.4 upgrade.

Reference: https://splunk.my.site.com/customer/s/article/KV-store-status-failed-after-upgrade-to-9-4
@Namo

Could you please confirm the upgrade path, specifically from which version to which version Splunk was upgraded? Please note that you must first upgrade to KV Store server version 4.2.x before proceeding with an upgrade to Splunk Enterprise 9.4.x or higher.

For detailed instructions on updating to KV Store version 4.2.x (applicable to Splunk Enterprise versions 9.0.x through 9.3.x), refer to "Migrate the KV store storage engine" in the official documentation:
https://help.splunk.com/en/splunk-enterprise/administer/admin-manual/9.3/administer-the-app-key-value-store/migrate-the-kv-store-storage-engine

We strongly recommend reviewing this guide to ensure a successful upgrade path and avoid issues like the one you're encountering.
https://docs.splunk.com/Documentation/Splunk/9.3.2/Admin/MigrateKVstore
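Before planning the path, it can help to confirm the KV store version currently in use. A minimal sketch, assuming the kvstore/status REST endpoint and its current.* fields are available on your release (field names may vary between versions):

| rest splunk_server=local /services/kvstore/status
| fields current.serverVersion current.storageEngine current.status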
Hello,

Can anyone please provide a query which lists all forwarders that have not sent data over the last 30 days?

Thank you
I think I used the wrong terms. To clarify: I have a distributed Splunk environment with clustered indexers.
Hi,

I am using mcollect to collect data from certain metrics into another metrics index. I have created the new metrics index on the search head and also in the indexer cluster. The command looks something like this, but whenever I run it, I get the error 'No results to summary index':

| mpreview index=metrics_old target_per_timeseries=5 filter="metric_name IN (process.java.gc.collections) env IN (server_name:port)"
| mcollect index=metrics_new

Is there something I'm doing wrong when using the mcollect command? Please advise. Thanks in advance.

Regards,
Pravin
Hi @vnetrebko ,

when you say clustered environment, do you mean an Indexer cluster or a Search Head Cluster?

If a Search Head Cluster, lookups are automatically replicated between peers and you don't need any additional method. If you don't have a Search Head Cluster, maybe this could be the easiest solution.

Ciao.
Giuseppe
Hello @tej57

It was upgraded from 9.2 to 9.4. The cert is not expired; it is valid for another few days. The server cert is a combined cert.