Splunk Enterprise

Summarizing data and storing it in a metrics index (Splunk 7.0.0)

Lucas_K
Motivator

In the metrics getting started documentation ( http://docs.splunk.com/Documentation/Splunk/7.0.0/Metrics/GetStarted ) it says "Summary indexing does not work with metrics."

When I read the rest of the documentation I don't see any specific reason I couldn't craft my own data to fit the metrics format.

If I massage an event into having all the correct fields ( http://docs.splunk.com/Documentation/Splunk/7.0.0/Metrics/GetStarted#Metrics_data_format ) could I save that event to a metrics store?

I am looking to leverage the speed increase of the metrics store with data I already process and save into summary indexes.

My only concern is that the sourcetype would be stash, so I may need custom stash input parsing to make it "fit".
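To illustrate, the reshaping I have in mind is something like this (field names per the 7.0 metrics data format; the index, field, and metric names are just placeholders from my summary searches):

```
index=my_summary sourcetype=stash
| stats avg(response_time) AS _value by _time, host
| eval metric_name="radius.response_time.avg"
| table _time, metric_name, _value, host
```

The open question is whether there is a supported way to write that result set into a metrics index.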

1 Solution

esix_splunk
Splunk Employee

Yes, if you have the correct fields, you can push this data into a metrics index.

An accelerated data model (as @MuS says) is the better option in my view, though. Otherwise you're left writing the props or searches required to make this fit the metrics index format.

Here's a bit more on that approach: http://docs.splunk.com/Documentation/Splunk/7.0.0/Metrics/GetMetricsInOther
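As a rough sketch of what getting CSV-shaped data in might look like (the monitor path and index name here are hypothetical; check the doc above for the exact sourcetype setup in your version):

```
# inputs.conf -- monitor a CSV export of the summarized data
[monitor:///opt/exports/radius_metrics.csv]
sourcetype = metrics_csv
index = my_metrics
```

The CSV itself would need columns matching the metrics data format (timestamp, metric_name, _value, plus any dimensions).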


Lucas_K
Motivator

Thanks.

Looks like the only way is an outputcsv file.

collect adds too many extra fields, and it also stores the file in the spool directory, where it gets picked up by the default stash input stanzas.
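For reference, the export search I'm working with looks roughly like this (index, field, and metric names are specific to our data):

```
index=summary source="radius_acct_summary"
| stats sum(bytes_in) AS _value by _time, username
| eval metric_name="radius.bytes_in"
| fields _time, metric_name, _value, username
| outputcsv radius_metrics.csv
```

The resulting CSV can then be re-ingested via a separate file input rather than the stash spool path.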


MuS
Legend

You could also just create an accelerated data model with your existing events, add all the needed fields, and use tstats to get the data back from the data model. Benefits: no need to munge the data, and it's also lightning fast 😉
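For example, something like this (the data model, object, and field names are placeholders for your own):

```
| tstats avg(Radius.bytes_in) AS avg_bytes_in
    from datamodel=Radius_Accounting
    where nodename=Radius
    by _time span=5m, Radius.username
```

Because tstats runs against the accelerated summaries rather than the raw events, it avoids re-reading the original data entirely.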

cheers, MuS


Lucas_K
Motivator

We tried accelerated data models for this specific use case, but it just killed our indexers (multi-million-user RADIUS accounting stats).

I'll probably revisit it, though, as a couple of big Splunk versions have come out since then, so perhaps performance has improved.

I do agree that accelerated data models make keeping those statistics up to date a much simpler process, though!
