Getting Data In

How to identify the host that is exhausting the indexing quota?

nk-1
Path Finder

Our Splunk v6.1 instance exceeded its daily indexing quota.
I ran this query:

earliest=-2d@d host=* index=* | eval raw_len=len(_raw)/1024/1024 | stats sum(raw_len) as "size/MB" by date_mday, host

which gives me a table of date, event size (in MB), and host.
Adding the numbers up and comparing over the past couple of days, I can't see how the quota was exceeded.

Am I missing something in my query to identify the host that did the excessive logging?

0 Karma
1 Solution

somesoni2
Revered Legend

You should be querying the license usage log for an accurate comparison of license (daily indexing volume) usage.

Try this from the license master (you can also run it from a search head if you're forwarding the license master's internal logs to your indexers):

index=_internal source=*license_usage.log type=Usage earliest=-2d@d | eval host=if(isnull(h) OR len(h)=0,"SQUASHED",h) | bucket span=1d _time | stats sum(b) as usage by _time host | eval usage_GB=round(usage/1024/1024/1024,2) 
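If you just want to rank hosts by total volume over the window, here is a variant of the same search (a sketch built on the same license_usage.log fields as above: h for host, b for bytes indexed):

index=_internal source=*license_usage.log type=Usage earliest=-2d@d | eval host=if(isnull(h) OR len(h)=0,"SQUASHED",h) | stats sum(b) as bytes by host | eval usage_GB=round(bytes/1024/1024/1024,2) | sort - usage_GB | head 10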

nk-1
Path Finder

Thanks for that query, somesoni2!
The host has been identified. It appears the forwarder on that host had not been sending events for some time, and when the host was rebooted, all the backlogged events were likely sent at once.
How does one flush the forwarder before a reboot in such situations, to avoid a torrent of events?
These are not critical events to keep.

0 Karma

nk-1
Path Finder

Found this in the docs, e.g.:

ignoreOlderThan = 2d

in inputs.conf. It looks like that should prevent excessive indexing of older events.
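
For context (a sketch; the monitor path below is hypothetical, not from this thread): ignoreOlderThan is set per monitor stanza and matches on a file's modification time, so it skips entire stale files rather than individual old events inside a file.

[monitor:///var/log/myapp]
# Skip files last modified more than 2 days ago, so a backlogged
# forwarder doesn't replay stale files after a reboot.
ignoreOlderThan = 2d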

0 Karma