I'm transitioning my hosts from one set of indexers in Seattle to another set in Atlanta, with a heavy forwarder in between. I created a new forwarder app to install on the hosts and a serverclass to assign it. The new app was installed on the host, but the old app wasn't removed. Below is my config:
[global]
restartSplunkd = true
stateOnClient = enabled

[serverClass:UF_config]
filterType = blacklist
blacklist.0 = swfctxfrm05
whitelist.0 = *

[serverClass:UF_config:app:UFconfig]

[serverClass:test_ctx]
filtertype = whitelist
whitelist.0 = swfctxfrm05

[serverClass:test_ctx:app:NewUFConfig]
[serverClass:test_ctx:app:10_inputs_windows_citrix]
[serverClass:test_ctx:app:Splunk_TA_Windows_4.8]
I thought changing filterType from whitelist to blacklist would mean I could add a host in the blacklist section, which is evaluated before the whitelist, and that this would then remove the UFconfig app from swfctxfrm05. swfctxfrm05 received the NewUFConfig app, but the UFconfig app is still there as well. Could someone help with this?
Try it like this: whitelist everything at the server-class level, then use the blacklist in the app stanza.
[serverClass:UF_config]
filterType = whitelist
whitelist.0 = *

[serverClass:UF_config:app:UFconfig]
filterType = blacklist
blacklist.0 = swfctxfrm05
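One more thing worth noting: after editing serverclass.conf by hand, the deployment server has to be told to reload before clients see the change. A sketch, assuming a default install path (the -class argument just limits the reload to the affected server class):

```
# Reload only the edited server class; drop -class to reload everything
$SPLUNK_HOME/bin/splunk reload deploy-server -class UF_config
```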
That's interesting. Once I made that change and reloaded the deployment server, the apps listed are 10_inputs_windows_citrix
Not sure why that last one is listed twice. Anyway, the host still shows as being in the UF_config serverclass. And the data isn't getting to the new indexers. Is there any place I can check to see if the data is making it to the heavy forwarder?
Thanks for the help, btw.
OK, that sounds promising: it now sounds like you have the correct apps on swfctxfrm05 (besides the one listed twice; is it possible that Splunk_TA_Windows_4.8 was manually created on that host?).
First thing I would check is the splunkd.log file on swfctxfrm05 to see if it is making connections to the heavy forwarder.
You should see events like this in splunkd.log on swfctxfrm05, where the IP is the heavy forwarder:
TcpOutputProc - Connected to idx=10.10.10.10:9997
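If you don't have shell access to the box, the same events are searchable from Splunk itself, since splunkd.log is forwarded into the _internal index. A sketch of such a search (the host name is from this thread; component is the standard field extracted for splunkd events):

```
index=_internal host=swfctxfrm05 sourcetype=splunkd component=TcpOutputProc "Connected to"
```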
I don't have local or RDP access to the host in question, but I could check the _internal log via Splunk. After doing so, I see that it's connecting to the heavy forwarder, so apparently I messed up my props.conf stanza for forwarding the traffic to the new indexers.
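For anyone landing here later: selective routing on a heavy forwarder is usually a three-file job, with props.conf attaching a transform, transforms.conf rewriting the TCP routing key, and outputs.conf defining the target group. A minimal sketch; the sourcetype, transform name, group name, and IPs below are all hypothetical placeholders, not from this thread:

```
# props.conf — attach a routing transform to the relevant sourcetype (name is hypothetical)
[WinEventLog]
TRANSFORMS-route_atlanta = route_to_atlanta

# transforms.conf — rewrite the TCP routing key to point at the new group
[route_to_atlanta]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = atlanta_indexers

# outputs.conf — define the target indexer group (IPs are placeholders)
[tcpout:atlanta_indexers]
server = 10.20.30.40:9997,10.20.30.41:9997
```

If any of the three pieces is misnamed (the transform name in FORMAT must match a tcpout group exactly), events silently keep going to the default group, which matches the symptom described above.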
I think this particular problem is solved. Thank you very much for your time and attention 🙂
Also, just want to note that I don't use the Forwarder Management UI for maintaining any of my serverclass.conf settings; I find it easier to work directly in the conf file. While I'm confident in the settings I am giving you, I can't say for sure that the Forwarder Management UI will like them.
You can read more about what I am talking about here under Limitations
And also some good info here