Getting Data In

How do we move towards metrics usage?

ddrillic
Ultra Champion

How do we move towards metrics usage? Will it replace conventional log file ingestion? How does it work with an existing standard implementation?

1 Solution

mattymo
Splunk Employee

Hey @ddrillic !

Metrics are suited to particular types of data, so think of it as logs AND metrics, not logs OR metrics.

https://docs.splunk.com/Documentation/Splunk/7.2.1/Metrics/GetStarted

Just another powerful option in your Splunk toolbox

As a Splunk admin/user, you do not have to replace anything. SPL, tstats, summary indexing, and dashboarding are still awesome.

However, if you find yourself curating data for users with timecharts and dashboards, the combination of the metric data type and the Metrics Workspace is a giant leap forward in the life of Splunk users and admins: https://splunkbase.splunk.com/app/4192/


Features like logs-to-metrics conversion, mcatalog, and the workspace make working with time-series metric data much easier and give the user a great experience from day one.

Once you get into more advanced consumption of the metric data, the "open in search" button gives you back all the power of SPL.
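For example, "open in search" typically drops you into an mstats search that you can extend with regular SPL. A rough sketch (the metric name and index here are made up, using the pre-8.x `_value` syntax that matches the 7.2 docs linked above):

```spl
| mstats avg(_value) AS avg_cpu WHERE metric_name="cpu.percent.user" AND index=host_metrics span=5m BY host
| where avg_cpu > 80
```

From there it's ordinary SPL: filter, join, chart, or alert on the results like any other search.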

Even alerting is just a point and click away. Users don't need to learn any SPL to get value out of metrics.


Prime examples to begin exploring are time-series performance metrics that come out of tools like SNMP pollers and metric agents such as Splunk UF apps, collectd, Telegraf, and Prometheus.

All this data can be Splunked as events and represented in timechart form, but the price of entry is SPL knowledge and dashboarding know-how. So these are the places to start looking at integrations, so you can make this data more accessible and enjoyable for the basic user in a Splunk shop.

That does not mean, however, that you can't get tremendous value out of both the raw event data and a metric representation.

That's where | mcollect gets interesting.

Many times you may want to keep very granular operational event data around, then summarize it into metrics and roll off the granular data.

An example might be firewall session logs, or the output of a scripted input, where the event data is useful for granular troubleshooting and session analysis, but may be equally impactful to others as a timechart of average session metrics: a "check engine light" (metrics) rather than a "diagnostic report" (events). Marry the two together and you graduate from static dashboarding to drilldowns that get to root cause, not just pretty graphs.
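As a sketch, summarizing hypothetical firewall session events into a metrics index with | mcollect might look like this (the index, sourcetype, and field names are invented; mcollect expects a metric_name field and a numeric _value field, with the remaining fields becoming dimensions):

```spl
index=firewall sourcetype=fw:session
| bin _time span=5m
| stats avg(session_duration) AS _value BY _time, src_zone
| eval metric_name="firewall.session_duration.avg"
| mcollect index=firewall_metrics
```

The raw session events stay searchable for troubleshooting, while the rolled-up metric feeds the "check engine light" view.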

So I guess what I would say is: begin to explore the metrics data type when you are working with time-series metrics data, for easy human consumption and self-serve curation in the Metrics Workspace. (It's all just the Splunk indexed-extractions logic you already know!)

Logs will always be there to provide the drilldown and analysis to go with the metrics, or to do advanced calculations and data wizardry that the workspace/metric store doesn't handle, but it's when you bring the two data types together that you start to get tremendous insight.

- MattyMo

View solution in original post


ddrillic
Ultra Champion

Very kind of you, @mmodestino!


gjanders
SplunkTrust

If you're running a new enough version of Splunk you can do logs-to-metrics conversion (I think it was 7.1.x that introduced this, but the documentation can confirm).

You can get metrics in directly as well (not just from statsd/collectd).

I would say it "can" replace log file ingestion in some circumstances, but it depends... good luck!
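For reference, the logs-to-metrics conversion is driven by index-time settings in props.conf and transforms.conf. A minimal sketch (the sourcetype, schema name, and field names here are hypothetical):

```ini
# props.conf -- point the log sourcetype at a metric schema
[my_app_perf_log]
METRIC-SCHEMA-TRANSFORMS = metric-schema:my_app_metrics

# transforms.conf -- name the numeric fields to turn into measures;
# other extracted fields become dimensions on each metric data point
[metric-schema:my_app_metrics]
METRIC-SCHEMA-MEASURES = response_time,bytes_sent
```

The data then lands in a metrics-type index, so check the docs for your exact version before relying on this.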


ddrillic
Ultra Champion

Really interesting - can we ingest any regular log file via the metrics way?


gjanders
SplunkTrust

ddrillic
Ultra Champion

Our sales engineer said:

-- To answer your question directly: no, not every log file can be presented as metrics. I encourage you to read through the documentation on Splunk's website, but metrics must fit a very specific format. Think of a metric as a measurement of something on your system. Good examples would be:
• A count of something
• A measurement of a sensor at this moment
• A percentage of something

It will do you no good if something textual is important to you, like:
• An error message
• A sentence
• Non-numeric information

So usually it's used for performance and statistical measurements (packets, etc.).

Mostly, we still direct people to use our metrics-enabled TAs, like the Linux TA or the Windows TA. Other TAs are slowly adopting some forms of measurements and formatting them as metrics. Building your own sources of metrics is non-trivial, and converting log contents to metrics, where you can, is an expensive process.
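For reference, the "very specific format" mentioned above boils down to a timestamp, a numeric value, a metric name, and optional dimensions. A sketch of a single measurement sent to a metrics index via the HTTP Event Collector, in the one-metric-per-event shape of the 7.x era (the host, index, and dimension values are invented):

```json
{
  "time": 1545070800,
  "host": "fw01.example.com",
  "index": "firewall_metrics",
  "event": "metric",
  "fields": {
    "metric_name": "firewall.sessions.active",
    "_value": 4312,
    "src_zone": "dmz"
  }
}
```

Note that `"event"` is the literal string "metric" and the measurement itself lives in `fields` as `metric_name` plus a numeric `_value`; anything textual only fits as a dimension, which is why free-form log content doesn't translate.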
