Getting Data In

How to identify the host that is exhausting the indexing quota?

nk-1
Path Finder

Our Splunk v6.1 instance exceeded its daily indexing quota.
I ran this query:

earliest=-2d@d host=* index=* | eval raw_len=len(_raw)/1024/1024 | stats sum(raw_len) as "size/MB" by date_mday, host

which gives me a table of date, event size in MB, and hostname.
Adding the numbers up and comparing across the past couple of days, I can't see how the quota was exceeded.

Am I missing something in my query to identify the host that did the excessive logging?


somesoni2
Revered Legend

You should be querying the license usage log for an accurate comparison of license (daily indexing volume) usage.

Try this from the license server (you can also run it from a search head if you're forwarding the license server's internal logs to your indexers):

index=_internal source=*license_usage.log type=Usage earliest=-2d@d | eval host=if(isnull(h) OR len(h)=0,"SQUASHED",h) | bucket span=1d _time | stats sum(b) as usage by _time host | eval usage_GB=round(usage/1024/1024/1024,2) 
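For a quick sanity check, here's a minimal variant of the same search (same source and the same b field, just without the host split) that sums daily usage across all hosts, which makes it easy to confirm on which day the quota was actually breached:

index=_internal source=*license_usage.log type=Usage earliest=-2d@d | bucket span=1d _time | stats sum(b) as total_usage by _time | eval total_GB=round(total_usage/1024/1024/1024,2)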


nk-1
Path Finder

Thanks for that query, somesoni2!
The host has been identified. It appears the Forwarder on that host had not been sending events for some time, and when the host was rebooted, all the backlogged events were likely sent at once.
How does one flush the Forwarder before a reboot in such situations, to avoid a torrent of events?
These are not critical events to keep.


nk-1
Path Finder

Found this in the docs, e.g.
ignoreOlderThan = 2d

in inputs.conf

(looks like that should prevent older, backlogged events from being picked up and indexed in bulk)
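
For context, a minimal sketch of how that setting would sit inside an inputs.conf monitor stanza (the monitored path is just an illustration, not from this thread):

# hypothetical monitor stanza; the path is an example only
[monitor:///var/log/myapp]
ignoreOlderThan = 2d

As I understand it, ignoreOlderThan is evaluated against the file's modification time, so whole files that haven't been modified within the threshold are skipped rather than filtered event by event.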
