Splunk Search

Data Not Stored in Metric Index

olahlala24
Engager

Hey all,

I am new to Splunk Enterprise and I would like to understand more about metrics and the use of metric indexes. So far, I have created my own metric index by going to Settings > Indexing. I have a bunch of Splunk rules I have created, and so far I have used the mcollect command as follows:

host= (ip address) source=(source name) | mcollect index=(my_metric_index)

I am able to get a list of event logs showing on the Splunk dashboard, but I am not sure whether the results shown in Search & Reporting are being stored in my metric index. When I check under the Indexing tab, my metric index still shows "0 MB", indicating no data.

Is there any way someone can help? Is it my index that needs work, or is it my search query?

 

 


livehybrid
SplunkTrust

Hi @olahlala24 

Without seeing the full search, I can't be sure that the search you showed will have given you metrics when you ran mcollect.

Here is a working example which you can tweak:

index="_audit" search_id info total_run_time 
| stats count(search_id) as jobs avg(total_run_time) as latency by user 
| rename jobs as metric_name:jobs latency as metric_name:latency 
| mcollect index=mcollect_test

To view the data in your metric index, you can do something like this:

| mstats avg(_value) WHERE index=my_metric_index by metric_name span=1m

Or use mcatalog (not recommended for anything other than debugging):

| mcatalog values(metric_name) WHERE index=my_metric_index

This will list all the available metrics in a given index.
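
If you also want to see which dimensions made it into the index (for example the user field from the search above), something like this should list them; _dims is the catalog field that holds dimension names, and the index name is just a placeholder for yours:

| mcatalog values(_dims) WHERE index=my_metric_index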

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards

Will


olahlala24
Engager

Thanks! And are search_id and total_run_time variables that were created, or are they based on specific fields in the log events?


livehybrid
SplunkTrust

In the example, those fields are referenced on the first line of the search to ensure that only events containing them are returned. The stats command then counts them and creates new fields, for example "jobs", which contains the count of search_id:

field      value
jobs       50
latency    12.4

After the stats command, these fields are renamed as follows:

field                  value
metric_name:jobs       50
metric_name:latency    12.4

This is because a metric must be a key-value pair, where the name is metric_name:<yourMetricName> and the value is numeric. You can also add dimensions, but let's not worry about that for now!
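
To make that concrete, here is a minimal sketch you could paste into the search bar to write a single test metric; the value 42 and the metric name are just examples, and the index name is a placeholder for your own metric index:

| makeresults 
| eval jobs=42 
| rename jobs as metric_name:jobs 
| mcollect index=my_metric_index

makeresults gives you one result with a _time field, eval creates a numeric field, and the rename turns it into the metric_name:<yourMetricName>=<value> pair that mcollect expects.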

The mcollect command then captures the metric_name:*=<value> fields into your metric index.
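
Once the _audit example above has run, a quick way to confirm the data points actually landed (rather than relying on the size shown on the index listing) is to query them back with mstats, for example the jobs metric split by the user field that was carried through as a dimension; adjust the index and metric names to match yours:

| mstats avg(_value) WHERE index=my_metric_index AND metric_name="jobs" by user span=1h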

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards

Will
