Splunk Enterprise

Summarizing data and storing it in a metrics index (Splunk 7.0.0)

Lucas_K
Motivator

In the metrics getting started documentation ( http://docs.splunk.com/Documentation/Splunk/7.0.0/Metrics/GetStarted ) it says "Summary indexing does not work with metrics."

Reading the rest of the documentation, I don't see any specific reason why I couldn't craft my own data to fit the metrics format.

If I massage an event into having all the correct fields ( http://docs.splunk.com/Documentation/Splunk/7.0.0/Metrics/GetStarted#Metrics_data_format ) could I save that event to a metrics store?

I am looking to leverage the speed increase in the metric store with data I already process and save into summaries.

My only concern is that the sourcetype would be stash, so I might need custom parsing of the stash input to make it "fit".
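
For reference, this is roughly what I had in mind. It's completely untested, and the index and field names are just placeholders from my own environment:

    index=radius sourcetype=radius_accounting
    | bin _time span=5m
    | stats count BY _time, nas_ip
    | eval metric_name="radius.accounting.events", _value=count
    | fields _time, metric_name, _value, nas_ip
    | collect index=my_metrics_index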

1 Solution

esix_splunk
Splunk Employee

If you have the correct fields, then yes, you can push this into a metrics index.

An accelerated data model (as @MuS says) is a better option in my view, though. Otherwise you're left writing the props or searches required to get this to fit into the metrics indexes.

Here's a bit more on that approach: http://docs.splunk.com/Documentation/Splunk/7.0.0/Metrics/GetMetricsInOther
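
For example, if you go the CSV route described on that page, the input side can be as simple as a file monitor using the metrics_csv sourcetype. Treat this as a sketch; the path and index name are placeholders, and you should confirm the exact CSV requirements on that docs page:

    # inputs.conf -- monitor a CSV export and index it as metric data
    # (path and index name are placeholders)
    [monitor:///opt/exports/radius_metrics.csv]
    sourcetype = metrics_csv
    index = my_metrics_index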


Lucas_K
Motivator

Thanks.

Looks like the only way is via an outputcsv file.

collect adds too many extra fields and also stores the file in the spool directory, where it will be picked up by the default stash input stanzas.
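
In case it helps anyone else, this is the shape of search I'm testing. The field and metric names are from my own data, so treat it as a sketch; note that outputcsv writes the file under $SPLUNK_HOME/var/run/splunk/csv/, from where a monitor input can pick it up:

    index=summary source="radius_rollup"
    | eval metric_timestamp=_time, metric_name="radius.sessions.active", _value=active_sessions
    | table metric_timestamp, metric_name, _value, nas_ip
    | outputcsv radius_metrics.csv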

0 Karma

MuS
Legend

You could also just create an accelerated data model from your existing events, add all the needed fields, and use tstats to get the data back out of the data model. Benefits: no need to munge the data, and it's also lightning fast 😉
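
For example, something along these lines; the data model and field names here are made up for illustration:

    | tstats avg(Radius.session_time) AS avg_session_time FROM datamodel=Radius BY _time span=5m, Radius.nas_ip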

cheers, MuS

0 Karma

Lucas_K
Motivator

We tried accelerated data models for this specific use case, but it just killed our indexers (multi-million-user RADIUS accounting stats).

I'll probably revisit it, as a couple of big Splunk versions have come out since then, so perhaps performance has improved.

I do agree that accelerated data models make keeping those statistics up to date a much simpler process, though!

0 Karma