
How to resolve a license issue so the daily indexing volume isn't exceeded?

thambisetty
SplunkTrust

Hi Everyone,

I am working in a distributed Splunk environment with 3 indexers, 1 search head, and 1 master node, and more than 7 forwarders installed on different servers. I have a 100 GB daily license volume, and my problem is that this limit is exceeded daily. I have to keep monitoring usage so that it doesn't go over. Is there any solution to resolve this issue?

Please help me.


grijhwani
Motivator

Your solution will depend on the purpose of indexing.

If your logging is for purely for the purposes of operations management, and you are indexing all your default log files indiscriminately then there is probably a fair amount of room for configuring out log content that is not providing you any value. I have to say that few enough servers that saying "more than 7" gives us a working scale is quite a small number to be generating over 100GB daily, for most use cases. I would guess there is a lot of leeway for pruning the throughput. One of your options there is to simply blacklist entire logfiles or paths, if the content is providing no value (or insufficient value that you can justify the licence cost). The other is to get a little cleverer with your props.conf and transforms.conf to selectively filter out log transactions of low operational value from input streams based on their content. Of course, what you consider valuable or not is entirely down to the context of your use case, and a decision only you can make.

If you are logging data for the purposes of forensic or fraud investigation, or for legal compliance (as in PCI-DSS or Sarbanes-Oxley compliance), and you have already taken all the steps you can to remove extraneous content, your only viable option is to increase your licence.

There is, of course, a third - but usually unfavourable - option: suggest to your superiors that they throttle their business, thereby generating less throughput. Probably best to avoid that one, though.

Footnote:

One gotcha that sometimes occurs - particularly on Linux or other *nix platforms - is when logs are rotated periodically, and the newly rotated log is detected by Splunk as a new file and re-indexed despite the content already being present. Oversights like that will multiply your throughput by however many periodic rotations of each log are retained. Make sure that if you are rotating logs, the retained copies are blacklisted (see below).
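A minimal sketch of such a blacklist in inputs.conf, assuming a conventional rotation scheme that appends a number or compresses old copies (the path and pattern are illustrative):

# inputs.conf - ignore rotated and compressed copies of monitored logs
[monitor:///var/log]
blacklist = \.(\d+|gz|bz2|old)$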

jimodonald
Contributor

I can only think of two solutions...

1) Increase your license. That will require justifying the value to your managers.

2) Reduce the amount being indexed. You can start by eliminating any logs you don't use regularly, then filter out the events you don't need from the logs you keep. See the related answer, and the usage search below.

http://answers.splunk.com/answers/33004/how-to-filter-events-from-a-file-before-it-gets-to-splunk-in...
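Before cutting anything, it's worth finding out where the volume is actually going. A sketch of a usage search against the license master's internal logs (assuming the default license_usage.log is searchable in the _internal index):

index=_internal source=*license_usage.log type=Usage
| stats sum(b) AS bytes BY idx, st
| eval GB = round(bytes / 1024 / 1024 / 1024, 2)
| sort - GB

Here b, idx, and st are the bytes, index, and sourcetype as recorded by the license manager; the largest rows are the first candidates for blacklisting or filtering.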
