How to resolve license issue so daily indexing volume isn't exceeded?

thambisetty
SplunkTrust

Hi Everyone,

I am working in a distributed Splunk environment with 3 indexers, 1 search head, and 1 master node, and I have more than 7 forwarders installed on different servers. My license allows 100 GB of daily indexing volume, but that 100 GB limit is exceeded almost every day, so I have to keep monitoring it constantly. Is there any solution to resolve this issue?

Please help me.

————————————
If this helps, give a like below.

grijhwani
Motivator

Your solution will depend on the purpose of indexing.

If your logging is for purely for the purposes of operations management, and you are indexing all your default log files indiscriminately then there is probably a fair amount of room for configuring out log content that is not providing you any value. I have to say that few enough servers that saying "more than 7" gives us a working scale is quite a small number to be generating over 100GB daily, for most use cases. I would guess there is a lot of leeway for pruning the throughput. One of your options there is to simply blacklist entire logfiles or paths, if the content is providing no value (or insufficient value that you can justify the licence cost). The other is to get a little cleverer with your props.conf and transforms.conf to selectively filter out log transactions of low operational value from input streams based on their content. Of course, what you consider valuable or not is entirely down to the context of your use case, and a decision only you can make.

If you are logging data for the purposes of forensic or fraud investigation, or for legal compliance (as in PCI-DSS or SarbOx compliance), and you have already taken all the steps you can to remove extraneous content, your only viable option is to increase your licence.

There is, of course, a third - but usually unfavourable - option: suggest to your superiors that they throttle their business, thereby generating less throughput. Probably best to avoid that one, though.

Footnote:

One gotcha that sometimes occurs, particularly on Linux or other *nix platforms, is when logs are rotated periodically and the newly rotated log is detected by Splunk as a new file and re-indexed, despite the content already being present. Oversights like that will multiply your throughput by however many periodic rotations of each log are retained. Make sure that if you *are* rotating logs, the retained logs are blacklisted.
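A minimal sketch of such a blacklist, assuming conventional rotation suffixes like .1, .gz, or .bz2 (the path is a placeholder; adjust the regex to your own rotation scheme):

# inputs.conf -- ignore rotated copies so Splunk does not re-index them
[monitor:///var/log]
blacklist = \.(\d+|gz|bz2)$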

jimodonald
Contributor

I can only think of two solutions...

1) Increase your license. That will require justifying the value to your managers.

2) Reduce the amount being indexed. You can start by eliminating any logs you don't use regularly, then filter out unneeded events from the logs you do keep. See the related answer.

http://answers.splunk.com/answers/33004/how-to-filter-events-from-a-file-before-it-gets-to-splunk-in...
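For the monitoring side of the original question, a scheduled search along these lines (a sketch against the standard license_usage.log in the _internal index, run where the license master's logs are searchable) will show how close each day comes to the 100 GB limit:

index=_internal source=*license_usage.log* type=Usage
| timechart span=1d sum(b) AS bytes
| eval GB = round(bytes / 1024 / 1024 / 1024, 2)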
