Splunk Search

Year-to-date Splunk license bandwidth usage

goat
Explorer

I am currently running a search for license bandwidth:

index=_internal source=*metrics.log group=per_index_thruput series!=_* | eval totalGB = (kb/1024)/1024 | timechart span=1d sum(totalGB)

It works fine; however, if I use year to date as a time constraint, I only receive 30 days' worth of results. Even when I choose "All Time", I still get 30 days' worth. Is there a query I can run to view historical license bandwidth usage?

Thanks,

1 Solution

Lowell
Super Champion

Your _internal index is cleared out based on the index retention policy, which by default is around one month for that index.

Take a look at Set a retirement and archiving policy
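
If you want _internal data to stick around longer, you can bump the retention window in indexes.conf, something like this (just a sketch; the retention value here is an arbitrary example, and data can still be frozen sooner by the index size limits, so check the docs page above before changing anything):

[_internal]
# keep internal events for roughly a year instead of the ~30-day default
# (value is in seconds; size-based limits can still freeze data sooner)
frozenTimePeriodInSecs = 31536000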

Here is one alternate means of getting similar info:

You can get total-license volume info from the "LicenseManager-Audit" component. This doesn't contain the per-index totals, but it will show you your overall usage. Now, these events have probably been purged from your _internal index too. However, since these entries go into their own log file (license_audit.log), which doesn't get rotated very often, you can still get this data back using the splunk file command... the end result is something like:

| file /opt/splunk/var/log/splunk/license_audit.log | search LicenseManager-Audit todaysBytesIndexed | kv | eval totalGB=todaysBytesIndexed/1024/1024/1024 | timechart span=1d sum(totalGB)

Moving forward, I would suggest setting up scheduled saved searches with summary indexing that collect your metrics info on a regular basis, so you can get at this information long term. I have a series of saved searches that I use to do this on my system. The first search runs every 15 minutes and takes a snapshot of the metrics. The second saved search looks at all of the 15-minute snapshots and makes a daily snapshot. (So I can drill down to a 15-minute window of summary info when I want more fine-grained reporting, or use the daily summary info to quickly report over very long time frames.)

Here are some example entries in my savedsearches.conf:

[splunk_metrics_index]
action.summary_index = 1
cron_schedule = 1,16,31,46 * * * *
dispatch.earliest_time = -16m@m
dispatch.latest_time = -1m@m
displayview = flashtimeline
enableSched = 1
is_visible = 0
realtime_schedule = 0
search = index=_internal sourcetype=splunkd source=*metrics* "group=per_index_thruput" NOT series="_thefishbucket" tag::host=splunk | eval events=eps*kb/kbps | replace "default" with "main" in series | stats sum(events) as events, sum(kb) as kb, median(kbps) as kbps, median(eps) as eps by series | eval events=round(events,0) | eval kb=round(kb,1)


[splunk_metrics_daily]
action.summary_index = 1
cron_schedule = 35 5 * * *
description = Use the "splunk_metrics_(host|sourcetype|index)" quarter-hour summary index snapshots to build a daily total for quicker multi-day analysis.  I am using an external cron job with fill_summary_index to ensure all base summary index searches have run by 4AM every day.  (This may not be needed any more since "realtime_schedule=0" was added.)
dispatch.earliest_time = -1d@d
dispatch.latest_time = @d
enableSched = 1
is_visible = 0
realtime_schedule = 0
search = index=summary (source=splunk_metrics_host OR source=splunk_metrics_source* OR source=splunk_metrics_index) | rex field=source "splunk_metrics_(?<metric>\w+)" | stats sum(events) as events, sum(kb) as kb, median(kbps) as kbps, stdev(kbps) as kbps_stdev, median(eps) as eps, stdev(eps) as eps_stdev by metric, series
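
A quick note on the eval events=eps*kb/kbps step in the first search, in case it looks odd: kb/kbps gives the number of seconds the metrics interval covers, so multiplying by eps back-computes an approximate event count:

events = eps * (kb / kbps) = (events/sec) * (KB / (KB/sec)) = (events/sec) * seconds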

Searching against the summary index is much faster than searching against the metrics events directly. Metrics are recorded every 30 seconds, so each 15-minute snapshot is a 30:1 reduction in the total number of events, and the daily snapshot rolls up the 96 fifteen-minute snapshots from each day for a further 96:1 reduction. So in your example, where you are looking for a GB-per-day value, you could use the daily summary index and get roughly a 2880:1 reduction in the total number of events processed.
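
For example, a year-to-date version of your original chart could then run against the daily snapshots instead. A sketch, assuming the default summary-indexing behavior where each summary event's source is set to the name of the saved search that generated it (verify the actual source value in your summary index):

index=summary source=splunk_metrics_daily metric=index series!=_* | eval totalGB=(kb/1024)/1024 | timechart span=1d sum(totalGB)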

(If you find this useful, I can post the other 15-minute summary searches splunk_metrics_host and splunk_metrics_sourcetype.)


goat
Explorer

Thanks for the reply.

I should mention this is a Splunk server running on a Windows 2003 platform. What would be the correct syntax in this case? The path I have is C:\Program Files\Splunk\var\log\splunk. I tried forward slashes, and I tried escaping the backslashes; however, the search doesn't work. Thanks


goat
Explorer

That worked. Thanks for the help!


Lowell
Super Champion

I've updated the search; I was missing a "kv" search command in there. (This wouldn't be needed if we were not using the file search command.) See if this resolves the issue for you. I would also try rebuilding the search one command (separated by "|") at a time; that's generally the best way to track down where a search is breaking.
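
For the Windows path question: the directory name contains a space, so quoting the whole path is probably the safest bet. A sketch (I haven't run the file command on Windows against this exact path, so treat the quoting as an assumption to verify):

| file "C:\Program Files\Splunk\var\log\splunk\license_audit.log" | search LicenseManager-Audit todaysBytesIndexed | kv | eval totalGB=todaysBytesIndexed/1024/1024/1024 | timechart span=1d sum(totalGB)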


