I got a "daily indexing quota exceeded" warning in our Splunk v6.1 instance.
I ran this query:
earliest=-2d@d host=* index=* | eval raw_len=len(_raw)/1024/1024 | stats sum(raw_len) as "size/MB" by date_mday, host
which gives me a table of date, event volume in MB, and host.
Adding the numbers up, and comparing over the past couple of days, I can't see how the quota was exceeded.
Am I missing something in my query to identify the host that did the excessive logging?
You should query the license usage log for an accurate comparison of license (daily indexing volume) usage.
Try this from the license master (you can also run it from a search head if you're forwarding the license master's internal logs to your indexers):
index=_internal source=*license_usage.log type=Usage earliest=-2d@d | eval host=if(isnull(h) OR len(h)=0,"SQUASHED",h) | bucket span=1d _time | stats sum(b) as usage by _time, host | eval usage_GB=round(usage/1024/1024/1024,2)
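For context: in license_usage.log, b is the indexed byte count and h the host; h comes through empty when the license master has squashed per-host detail (too many distinct values), which the query above labels SQUASHED. If you also want plain daily totals to compare against your quota, a minimal variant using the daily RolloverSummary events (assuming your license master writes them, as 6.x does) would be:
index=_internal source=*license_usage.log type=RolloverSummary earliest=-7d@d | bucket span=1d _time | stats sum(b) as bytes by _time | eval usage_GB=round(bytes/1024/1024/1024,2)
That should line up with the daily figures on the license master's usage dashboard.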
Thanks for that query, somesoni2!
The host has been identified. It appears the forwarder on that host had not been sending events for some time, and when the host was rebooted, all the backlogged events were likely sent at once.
How does one flush the Forwarder before a reboot in such situations, to avoid a torrent of events?
These are not critical events to keep.
Found this in the docs:
ignoreOlderThan = 2d
in inputs.conf. It makes the monitor input skip files whose modification time is older than the given window, which should prevent a backlog of old events from being indexed after a reboot.
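For completeness, a minimal sketch of how that could look on the forwarder (the monitor path below is a placeholder, not from the original setup):
[monitor:///var/log/myapp]
ignoreOlderThan = 2d
One caveat worth verifying in the inputs.conf spec: ignoreOlderThan compares file modification times against the window, and a file that is outside the window when first seen may be ignored permanently, even if it is updated later.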