Security

How to keep our license in range

RobertRi
Communicator

Hi

This is a trivial question, but maybe someone has a really good answer for preventing the uncontrolled growth of license usage. In our case we have some forwarders and index some log files. Now a problem comes up and an admin decides to set a more verbose trace level for the monitored log file. Instead of 100 MB per day this log produces 5 GB per day, and the admin is happy to see the failure, but I'm sweating because the license usage exceeded the maximum.

My first idea is to tell the admins: please stop the Splunk daemon, set your trace levels, and after you have finished troubleshooting, delete the logs and start the daemon again.

But I fear they won't remember my words the next time a problem comes up.

So do you have a more pragmatic way to prevent uncontrolled license growth?

Thanks Rob


Genti
Splunk Employee

To be honest, I do not think that limiting your indexing speed (thruput) is the way to go. Limiting thruput means that events might take a long time to get indexed.

I would rather install the real-time license usage app (4.2) or set up alerts that notify people when the license is almost entirely consumed. Then take it from there.
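As a rough sketch of that alerting idea (the stanza name, threshold, email address, and cap below are illustrative assumptions, not a tested configuration), a scheduled search in savedsearches.conf could sum today's indexed volume from license_usage.log (available in 4.2+) and fire when it crosses, say, 80% of a 1 GB cap:

# savedsearches.conf -- hypothetical sketch; adjust search, cap, and schedule
[License usage over 80 percent]
search = index=_internal source=*license_usage.log type=Usage earliest=@d \
  | stats sum(b) AS bytes \
  | eval gb = bytes / 1024 / 1024 / 1024 \
  | where gb > 0.8
enableSched = 1
cron_schedule = */30 * * * *
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = admin@example.com

Run every 30 minutes, this warns the admins while there is still headroom, instead of throttling the indexer after the fact.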

proctorgeorge
Path Finder

Disclaimer: CAUTION: Do not alter the settings in limits.conf unless you know what you are doing. Improperly configured limits may result in splunkd crashes and/or memory overuse.

I can't say I really know what I am doing, so take this answer with a grain of salt.

A 100% effective, although unconventional, way to ensure that you never go over your indexing limit is to limit how fast the indexer can run.

On the Splunk index server, add a stanza to the limits.conf file:

$SPLUNK_HOME/etc/system/local/limits.conf:

[thruput]
maxKBps = #

To figure out what the # should be, divide the daily license cap in KB (1 GB = 1048576 KB) by 86400 (seconds in a day) to get your maximum rate: about 12 KB/s, so maxKBps = 12. This doesn't sound like much, and for a single second it isn't, but if Splunk runs steadily all day long you'll get close to your limit without going over it.
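Worked through for the 1 GB example above (round the result down so a steady stream stays under the cap):

# $SPLUNK_HOME/etc/system/local/limits.conf
# Daily cap: 1 GB = 1048576 KB
# 1048576 KB / 86400 s ~= 12.1 KB/s -> round down to 12
[thruput]
maxKBps = 12

For a different cap, the same arithmetic applies: cap-in-KB / 86400, rounded down.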

Yes, this is ugly, but it is the only sure-fire way I have found to limit Splunk's insatiable hunger. I hope someone has a better answer, because I would sure like to know it.


proctorgeorge
Path Finder

I only used this solution for a brief period, to make sure that I did not blow through my existing license before getting a new one. I should also note that once the data gets into the indexer, its timestamp is based on when it was logged by the forwarder, or on the timestamp given by the reporting mechanism. Example: if it is currently 5pm and I search from 4-5pm while a piece of data with a timestamp of 4:30pm is still queued for indexing, my search will not return it. Later, after the data has been indexed, the same search will show it.


proctorgeorge
Path Finder

Yes, the server would cache the data and trickle it into the indexer at the fixed rate. This is why it is so ugly: if you have a spike in data input, there is a delay before it shows up in the index, and if the spike is larger than your daily limit, the cache can grow larger and larger. What I am not sure of is whether the thruput is limited at the input or the output of the indexer. If input is limited, the forwarders would hold the cache because they couldn't push to the indexer; if output is limited, the cache would sit on the indexer before it is written to disk.


RobertRi
Communicator

Thank you for your answer.
If I understood it right, with this configuration you can limit the maximum throughput at the indexer, which means that the server caches the data and only a few events at a time go into the indexer. Doesn't this prevent the data from being viewed in near real time?
