Monitoring Splunk

Approaches to manage logging level of Splunk Universal Forwarder

dstaulcu
Builder

With changes in Splunk pricing coming faster than our ability to increase funding, our team is stuck in a maintenance mode where we cannot onboard a new data source without first freeing up license/storage by tuning existing data sources.

One of the more superfluous consumers of storage is splunkd itself: INFO-level logs stream in from our many universal forwarders on client systems. I would like to change the default logging level for many splunkd components from INFO to WARN or above on those clients. For the time being, I plan to make this change through a script-based input that runs each time Splunk restarts.
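As a sketch of that script-based approach (the component names below are just examples; the authoritative list is in $SPLUNK_HOME/etc/log.cfg, and the CLI may prompt for credentials):

# re-apply the quieter levels after every restart, since
# 'splunk set log-level' only changes the running process
$SPLUNK_HOME/bin/splunk set log-level TailingProcessor -level WARN
$SPLUNK_HOME/bin/splunk set log-level TcpOutputProc -level WARN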

Does anyone have a method to more elegantly manage splunkd logging levels via a Splunk app?

What sort of logging levels have you hushed, if any?

Are there any components you would absolutely keep logging at INFO level? For instance, I certainly want to maintain INFO-level logging for the deployment client and input processor component classes.
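For illustration, here is roughly the persistent override I am picturing in $SPLUNK_HOME/etc/log-local.cfg (which overrides log.cfg and survives upgrades). The category names and the A1 appender name are assumptions on my part; verify them against your own log.cfg:

[splunkd]
# quiet everything by default (A1 is the appender name in the stock log.cfg)
rootCategory=WARN,A1
# but keep the components I care about at INFO (example names; verify in log.cfg)
category.DC:DeploymentClient=INFO
category.TailingProcessor=INFO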


gcusello
SplunkTrust

Hi @dstaulcu,
Splunk's internal logs don't contribute to license consumption, but they do use storage.

You can change Splunk's log levels in the UI at [Settings -- Server settings -- Server logging].
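The same change can also be made through the REST API, using the server/logger endpoint. A sketch with curl (hostname, credentials, and channel name here are only examples):

# set one splunkd log channel to WARN via REST
curl -k -u admin:changeme https://localhost:8089/services/server/logger/TcpOutputProc -d level=WARN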

Anyway, I prefer to leave the log level at the default and instead change the retention or the maximum size of these logs, without filtering anything, because I might need them.
So set the retention of the _internal index to e.g. 15 days and leave the log level at the default.

To change retention, edit $SPLUNK_HOME/etc/system/local/indexes.conf (if it doesn't exist, copy the one from the default folder into local) and modify the parameter frozenTimePeriodInSecs (1296000 seconds = 15 days):

[_internal]
homePath   = $SPLUNK_DB/_internaldb/db
coldPath   = $SPLUNK_DB/_internaldb/colddb
thawedPath = $SPLUNK_DB/_internaldb/thaweddb
tstatsHomePath = volume:_splunk_summaries/_internaldb/datamodel_summary
# maximum size (MB) of a hot bucket before it rolls to warm
maxDataSize = 1000
# maximum timespan of a hot bucket: 432000 s = 5 days
maxHotSpanSecs = 432000
# retention: events older than 1296000 s (15 days) are frozen (deleted by default)
frozenTimePeriodInSecs = 1296000

Ciao.
Giuseppe


dstaulcu
Builder

Thank you. We are already reducing the size of the index by adjusting retention. The problem with that strategy is that there is valuable material in the _internal index, sourced from the Splunk servers, which we do want to retain and review over longer periods of time.
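To pick candidates to quiet down, a search like this (relying on the standard component and log_level field extractions for the splunkd sourcetype) shows which forwarder components produce the most INFO noise:

index=_internal sourcetype=splunkd log_level=INFO
| stats count by host, component
| sort - count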


gcusello
SplunkTrust

Hi @dstaulcu,
in this case, you have to filter events following the instructions at https://docs.splunk.com/Documentation/Splunk/8.0.0/Forwarding/Routeandfilterdatad#Filter_event_data_...
In other words, put the following on your indexers and (if present) heavy forwarders:
in $SPLUNK_HOME/etc/system/local/props.conf

[splunkd]
# apply the setnull transform (below) to events with sourcetype splunkd
TRANSFORMS-removeINFOandWARN = setnull

in $SPLUNK_HOME/etc/system/local/transforms.conf

[setnull]
# match the splunkd.log date/time, timezone offset, then an INFO or WARN level
REGEX = ^\d+-\d+-\d+\s+\d+:\d+:\d+\.\d+\s[^ ]*\s+(INFO|WARN)
# route matching events to the null queue so they are discarded
DEST_KEY = queue
FORMAT = nullQueue
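Note that the filter drops events before indexing, so they never consume storage; the props.conf/transforms.conf change typically requires a restart of splunkd on the indexers to take effect.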

Ciao.
Giuseppe


dstaulcu
Builder

Thank you again. That approach would certainly work, but it shifts cost to regex computation. Ideally we would filter on the client side, to avoid network and computation costs in addition to storage.


gcusello
SplunkTrust

Hi @dstaulcu,
in Splunk, the only filtering possible on a UF is for Windows event logs; everything else is filtered at the indexer (or heavy forwarder) level.
Surely you have computation costs, but they aren't so high!
And you have no storage or license costs.
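For Windows event logs, the forwarder-side filter goes in inputs.conf; for example (the event codes here are only examples):

[WinEventLog://Security]
# drop these event IDs at the forwarder, before they leave the host
blacklist = 4662,5156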

Ciao.
Giuseppe
