All Apps and Add-ons

Splunk_TA_windows - some stanzas from inputs.conf do not work?

Path Finder

Hi all,

I have a strange situation where multiple universal forwarders do not forward all configured inputs.

We have nearly 40 Domain Controllers with the same deployment apps and configuration.
Half of them are Server 2008 R2 and the other half Server 2019.

I use Splunk_TA_windows version 7.0.0 on my indexer and as the deployment app for my DC forwarders.

The 2008 R2 DCs have the forwarder version and the Server 2019 DCs have 8.0.3.
The DCs are sending the logs directly to my single indexer.
The following inputs.conf is in the deployment app for those DCs:

[WinEventLog://Application]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
index = wineventlog
renderXml = false

[WinEventLog://Security]
disabled = 0
current_only = 0
blacklist1 = EventCode=%^(4658|5145|4661|4634|4656|4769|5140|4776|4768|4689|4688|4933|4932|4672|4648|4625|4770|4931|4662|4674|521|4673)$%
blacklist2 = EventCode="4771" Security_ID!="contoso\*"

[WinEventLog://System]
disabled = 0
start_from = oldest
current_only = 0

Now I'm facing the problem that on 3 of those DCs all or part of the defined inputs do not work (mostly the Security log does not get collected).
I do not find anything in splunkd.log, neither on the indexer nor on the forwarders of my DCs. Right now it is 1 of the 2008 R2 DCs and 2 of the 2019 DCs that have problems.

I also had the same problem on a fourth 2019 DC, which I worked on to solve. I moved it to a different deployment group and changed the inputs.conf in that group to remove the blacklists under the Security log stanza, but that did not solve my problem. After this I reinstalled the forwarder several times (still with my DEV inputs.conf, which had the blacklists removed), but that did not solve it either.

That was on Friday and I was very mad. Because my forwarder was still sending Application and System logs, I decided to put it back into the default deployment group. I enjoyed my weekend, and when I started this morning, my fourth DC was working like a charm. I have no idea what's going on, and I still have no idea how to solve the problem on the other 3 DCs.
Sometimes a restart helps, sometimes not.

Thanks in advance for your suggestions on how to identify or solve this problem.

BR vess



In cases like this I'd suggest working it backwards:
- create a serverclass with the inputs you want
- remove a host from all other serverclasses
- make sure there is no other inputs.conf on that forwarder
- add the host to the desired serverclass
- watch the results

Also, when changing inputs.conf you have to restart the forwarder. If you didn't have the restart flag enabled on the serverclass, you could have pushed it many times and nothing will happen until a restart.

Lastly, I would check that there aren't any collisions with other inputs.conf files from different/old TAs and/or an inputs.conf in the \etc\system\local directory of the forwarder.
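To spot such collisions, a minimal sketch (not Splunk tooling, just a quick audit script) is to walk the forwarder's etc/ directory and list every inputs.conf that could contribute stanzas:

```python
import os

def find_inputs_confs(etc_dir):
    """Return every inputs.conf under a forwarder's etc/ directory.

    More than one hit (e.g. a leftover copy in system/local next to the
    deployed app) is a candidate for a stanza collision.
    """
    hits = []
    for root, _dirs, files in os.walk(etc_dir):
        for name in files:
            if name.lower() == "inputs.conf":
                hits.append(os.path.join(root, name))
    return sorted(hits)
```

On a Windows forwarder you would point it at something like C:\Program Files\SplunkUniversalForwarder\etc (the path depends on your install). Alternatively, `splunk btool inputs list --debug` prints the merged, effective configuration together with the file each setting came from, which answers the same question without any scripting.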

Path Finder

Thanks for your input so far.

I've had this problem for some time now, mostly with newly added systems.
While troubleshooting I tried many of the points you mentioned.
One of the basic strategies is to log in to the "client" server with the universal forwarder installed, add the config files directly under ..\etc\apps\, restart Splunk, restart the server, both multiple times.

In most cases, after a random time span of up to one day, the universal forwarder gets its act together and starts collecting the logs. My big problem is that the UF could stop collecting one of my logs without my knowing it, which would be bad. For that I've created a report that looks over all hosts and the WinEventLog:Security events; if they drop below a threshold, I get an email alert from Splunk.
This is a workaround for the problem. In the past one of my hosts stopped sending Security logs out of the blue while still sending the other stanzas (logs) from the Splunk_TA_windows inputs.conf.
This is a major flaw and I have no idea how to fix it or monitor it properly.
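A report like the one described above could be backed by a search along these lines (a sketch; the index and source names are assumptions based on the inputs.conf earlier in the thread and may differ with renderXml enabled or newer TA versions):

```
| tstats latest(_time) as lastTime where index=wineventlog source="*WinEventLog:Security" by host
| eval minutesSinceLast = round((now() - lastTime) / 60, 0)
| where minutesSinceLast > 60
```

Any host the search returns has not delivered a Security event for over an hour; scheduled with an email alert action, it flags a silent forwarder without waiting for a count to drop to zero.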

Thanks and best regards


Path Finder

I did the same for the remaining servers: switching between two deployment groups and several restarts of the universal forwarder.
Nothing worked. Then I took a walk through the nearest park to clear my head, and when I looked at the dashboard afterwards, it worked. But why?
Does someone have an idea where to look for such problems? Can I look up which inputs are streamed to the indexer, or look up the input stream arriving at an indexer?

Best regards, vess
