Splunk Search

What are best practices for creating summary indexes to populate a dashboard?

mansel_scheffel
Explorer

Hi,

I need to create some summary indexes and am wondering what the best approach would be. So far there are various searches populating a dashboard (mostly counts of hosts, license usage, etc.). I have configured the summary-populating search and the summary search below. How can I make this as efficient as possible using known best practices?

Create Summary:

search index=_internal source=*license_usage.log type=Usage pool=* idx=eop* earliest=-3d@d latest=@d
| timechart span=1d sum(b) as Bytes
| eval Bytes=(Bytes/1024/1024/1024)
| eval time=_time
| fields _time Bytes
| collect index=main sourcetype=summary source=eop_daily_volume_test marker="summary_type=metrics_volume, summary_span=86400, summary_method=timechart, report=\"eop daily volume test\""

Search Summary:

index=main source=eop_daily_volume_test
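
The dashboard panel would then just run something like this against the summary (a sketch only - it assumes the summary search writes one event per day and that Bytes, the field written by the collect search above, is what the panel charts):

index=main source=eop_daily_volume_test | timechart span=1d sum(Bytes)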

We will also need to define a standard for summarization that can be applied to all our summaries in the future.

Thanks!

1 Solution

Runals
Motivator

I'm trying to understand the relationship between your question and the query you listed. Is that an example of the type of query you are talking about creating, or something that is in production now? There are several potential issues with the query itself.

  1. You have earliest and latest in the query covering 3 days - if you run this query more frequently than that, the same events will be summarized more than once, and when you create queries against the aggregated data the numbers will be off.
  2. You continue to use the field name Bytes even though you've converted the values to GB - when you (or someone else) create the queries for your dashboards, you will have to remember that Bytes isn't really bytes.
  3. If you are creating summary index data, I'd configure that in the scheduled search GUI rather than using the collect command (see the savedsearches.conf sketch after this list). The more common use of collect is to move a copy of raw data from one index to another. That said, I suppose it would work in this context.
  4. Minor thing, but you will need a wildcard in front of license_usage in the source field.
  5. Your question mentioned counts by host, but your query is just doing a rollup of all license usage. I'd adjust your query so that you aren't using the Usage data but the RolloverSummary data. Since the rollover data is created shortly after midnight, you will need to adjust the _time field backwards. In the SoS app (maybe the DMC too) Splunk uses a query that moves the time back 12 hours; at the least you will want to account for DST wherever you are. I've created this query that lets you pick however many hours you want to adjust the time by: index=_internal source=*license* "type=RolloverSummary" | eval _time = _time - (3600 * 12) | eval GB = b/1024/1024/1024 | timechart sum(GB)
  6. We have a large environment and the metrics logs generally aren't able to account for all our hosts and whatnot. To create my own summary that figures out the types and counts of events any particular index/host combo sends, I basically use the following: | metasearch | eval host=lower(host) | fields host index sourcetype | rex field=sourcetype "(?<sourcetype>.+)(?:-\d+|-too_small)" | stats count by host index sourcetype. Metasearch is pretty fast; it is likely tstats would be faster (a sketch of that is after this list as well). I have that search run every hour.
  7. If you want a per-host count of license usage, you might as well break that down to the sourcetype level, but know that as the system gets bigger Splunk will start to 'drop' fields for tracking internal data - I think in the order of source, host, index, leaving just sourcetype. At any rate, I essentially use the following: index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" TERM(type=Usage) | stats sum(b) as bytes by idx st | rename idx as index st as sourcetype
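
For item 3, a minimal savedsearches.conf sketch of the scheduled-search route would be something along these lines (treat it as a sketch - the stanza name, cron schedule, and target index are just examples, and it also folds in items 1 and 2 by using a one-day window and a GB field name):

[eop daily volume summary]
enableSched = 1
cron_schedule = 15 0 * * *
dispatch.earliest_time = -1d@d
dispatch.latest_time = @d
action.summary_index = 1
action.summary_index._name = main
action.summary_index.report = eop_daily_volume
search = index=_internal source=*license_usage.log type=Usage pool=* idx=eop* | timechart span=1d sum(b) as b | eval GB = round(b/1024/1024/1024, 2) | fields _time GB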
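
And for item 6, a tstats version would be roughly this (untested on my end - it pulls the counts from the index metadata instead of the raw events, with the same sourcetype cleanup, then re-aggregates after the rex so the renamed sourcetypes roll up together):

| tstats count where index=* by host index sourcetype
| eval host=lower(host)
| rex field=sourcetype "(?<sourcetype>.+)(?:-\d+|-too_small)"
| stats sum(count) as count by host index sourcetype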

I've mentioned 'essentially' a few times. In my environment we have data from individual units going into their own indexes, and we roughly split the indexes into 2 overarching groups. To facilitate the sharing of metadata I have 2 separate queries for items 6 & 7, as they populate separate summary indexes. I've created a CSV that maps my index list to several categories, string names for units, etc. One of those fields is basically a t/f flag for whether the index belongs to group1. The query for item 7 actually looks more like this, where I'm using a subsearch to bring back all of the indexes related to group 1: index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" TERM(type=Usage) [| inputlookup index_list | search group1=t reportable=t | rename index as idx | fields idx] | stats sum(b) as bytes by idx st | rename idx as index st as sourcetype
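
For context, the index_list lookup is just a CSV along these lines - the column names come from the query above, but the rows here are made-up examples:

index,group1,reportable,unit
eop_web,t,t,web team
eop_mail,f,t,mail team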

