Getting Data In

Disable index when it exceeds 2G in one day

seanlon11
Path Finder

We have created a bunch of different indexes for our different systems. At some point, one of these systems will freak out and produce 10-15 GB of data in a matter of minutes. We are still working on fixing the underlying freak-out issue.

Is disabling that index when it hits 2 GB in a day the best way to prevent a license violation?

The only other option I have thought of is to send that forwarder's data to the nullQueue. Is that a better option?

This is desirable for a couple of reasons: 1) it avoids filling up the Splunk indexer's disk, and 2) it avoids license violations.

Thanks, Sean

1 Solution

maverick
Splunk Employee

If the behavior you describe above is indeed rare and you didn't have to worry about filling up the disk, then I would just let it violate the license limit, because you are allowed a few violations within any 30-day rolling period with only a warning message. Even if search is eventually disabled due to multiple violations, indexing never stops, and you can always request a reset license from Splunk Support, which resets the violation count back to zero.

However, if filling up the disk is a concern (and you are not able to allocate or add more disk capacity to handle the overage), then one thing you can try is creating an alert that triggers a script when you start to get close to your licensed limit, say at 1.8 GB or more. That script could copy a nullQueue config onto the Splunk forwarders and then restart them, or it could stop the forwarders altogether until you have resolved the reason for the overage or the "freak out" period is over.
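The nullQueue idea above can be sketched as a small script for the alert to trigger. Everything here is an assumption for illustration: `noisy_sourcetype`, the stanza name `send_to_null`, and the demo directory are placeholders, and parse-time nullQueue routing only takes effect where parsing happens (an indexer or heavy forwarder, not a universal forwarder):

```shell
#!/bin/sh
# Hypothetical alert-action script: drop in a nullQueue routing config
# and restart Splunk. In production SPLUNK_HOME would be the real
# install path (e.g. /opt/splunk); a local demo directory is used here
# so the sketch can run anywhere.
SPLUNK_HOME="${SPLUNK_HOME:-./splunk-demo}"
LOCAL="$SPLUNK_HOME/etc/system/local"
mkdir -p "$LOCAL"

# props.conf: attach the routing transform to the noisy sourcetype
# ("noisy_sourcetype" is a placeholder).
cat > "$LOCAL/props.conf" <<'EOF'
[noisy_sourcetype]
TRANSFORMS-null = send_to_null
EOF

# transforms.conf: match every event and route it to nullQueue,
# i.e. discard it at parse time before it is indexed.
cat > "$LOCAL/transforms.conf" <<'EOF'
[send_to_null]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
EOF

# Restart so the new config takes effect (skipped when no Splunk
# binary is present, as in this demo).
if [ -x "$SPLUNK_HOME/bin/splunk" ]; then
  "$SPLUNK_HOME/bin/splunk" restart
fi
```

In a real deployment the two files would go on the parsing tier, under $SPLUNK_HOME/etc/system/local or an app's local directory.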

BTW, you can search your _internal index for the daily metrics events, like this:


index=_internal metrics kb group="per_index_thruput" series="main" earliest=-1d@d | eval totalGB = (kb / 1024) / 1024 | timechart span=1d sum(totalGB) AS total | where total > 1.8

You can run this search every five minutes or so to trigger your alert once the index exceeds 1.8 GB for the day, and have the alert run your script. Also, if you have more than one index, you can create a separate alert trigger for each one by changing series="index_name_here" in the sample search above.
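For the multi-index case, one variant (a sketch, assuming the same metrics fields; `series` carries the index name in these events) totals per index in a single search instead of one alert per index:

```spl
index=_internal metrics kb group="per_index_thruput" earliest=-1d@d
| eval totalGB = (kb / 1024) / 1024
| stats sum(totalGB) AS total BY series
| where total > 1.8
```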


deyeo
Path Finder

The search string doesn't work. I get this instead:

Specified field(s) missing from results: 'totalGB'


