Getting Data In

managing log.cfg through deployment server

dstaulcu
Builder

I am trying to minimize the noise level generated by Splunk (across the WAN) to the greatest degree possible.

Reviewing index=_internal source=splunkd, I see that each of my universal forwarders is forwarding lines from splunkd.log. This log file is very noisy, with most components logging INFO-level events by default. I want to raise most of the logging levels to WARN or higher.
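For anyone who wants to size the problem first, a search along these lines (a sketch, not taken from the original post) shows which components are the chattiest on each forwarder; splunkd events in _internal carry the component and log_level fields:

    index=_internal source=*splunkd.log* log_level=INFO
    | stats count by host, component
    | sort - count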

I know this can be done by manually altering logging levels in .\etc\log.cfg. Does anyone have experience managing this configuration as a deployment app? I imagine it would be possible by deploying a script that edits the relevant lines. Is this a bad idea?
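For illustration, the manual edit amounts to flipping category lines like the ones below from INFO to WARN; the component names here are just examples of commonly noisy ones, not a list from the original post. The same overrides can instead live in .\etc\log-local.cfg so they survive upgrades, and splunkd has to be restarted for them to take effect:

    [splunkd]
    category.ExecProcessor=WARN
    category.TailingProcessor=WARN
    category.TcpOutputProc=WARN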

Input appreciated.

1 Solution

dstaulcu
Builder

Here is how I plan to package that solution as an app:

Create a new app with the following files (sketched below):

.\deployment-apps\UF-LogCfgMgr\bin\logcfg.bat

.\deployment-apps\UF-LogCfgMgr\bin\logcfg.vbs

.\deployment-apps\UF-LogCfgMgr\local\inputs.conf
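As a sketch of how the pieces could fit together (the interval and paths are assumptions, not copied from the original app), inputs.conf can define a scripted input that periodically runs the batch wrapper, and logcfg.bat in turn just hands off to the VBScript, for example with cscript //nologo "%~dp0logcfg.vbs":

    # .\deployment-apps\UF-LogCfgMgr\local\inputs.conf (sketch)
    [script://.\bin\logcfg.bat]
    interval = 86400
    disabled = false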

12/28/2013 - Follow-up: the method worked. Added a condition to leave logging levels at INFO for DEV/TEST deployment clients but to use WARN for PROD deployment clients.

1/19/2013 - Updated logcfg.vbs to use the log-local.cfg construct, to further tweak logging levels, and to support x86 on x64. Supports Windows only.
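For readers who want a starting point, here is a minimal sketch of what a script like logcfg.vbs could do: write WARN overrides into log-local.cfg so that Splunk upgrades do not overwrite them. The category names, the reliance on the SPLUNK_HOME environment variable, and the restart note are assumptions for illustration, not the author's actual script (which also handles DEV/TEST vs PROD and x86-on-x64 paths):

    ' logcfg.vbs (sketch): write WARN overrides into log-local.cfg
    Option Explicit
    Dim fso, shell, splunkHome, cfgPath, f

    Set fso = CreateObject("Scripting.FileSystemObject")
    Set shell = CreateObject("WScript.Shell")

    ' Assumes SPLUNK_HOME is set for the account running the forwarder
    splunkHome = shell.ExpandEnvironmentStrings("%SPLUNK_HOME%")
    cfgPath = splunkHome & "\etc\log-local.cfg"

    ' Overwrite (or create) log-local.cfg with the desired overrides
    Set f = fso.CreateTextFile(cfgPath, True)
    f.WriteLine "[splunkd]"
    f.WriteLine "category.ExecProcessor=WARN"
    f.WriteLine "category.TailingProcessor=WARN"
    f.WriteLine "category.TcpOutputProc=WARN"
    f.Close

    ' splunkd must be restarted for the new levels to take effect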

