I am migrating my Splunk implementation to a new platform. As part of that process, I will run the two platforms in parallel while my users verify that their apps and data are correct on the new systems before making the cut-over. To keep the data current on both platforms, I copied the indexed data from the old systems to the new ones (with the Indexers off, of course) and sent an update to the (Universal) Forwarders so their outputs.conf file has settings for both sets of Indexers. If I pull up "All Forwarders" over 24 hours in the Deployment Monitor App on each platform, the old one reports X but the new platform reports Y, about 1/3 that number. Is there a way I can confirm this update was received by the Forwarders without logging onto each of them? I have over 100 of these guys to manage.
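For reference, the pushed update defines a second tcpout group alongside the original one. A simplified sketch of what the forwarders' outputs.conf looks like (group names taken from the splunkd.log excerpts further down; the IPs are placeholders and the real file may differ):

[tcpout]
# clone events to both the old and the new indexer groups
defaultGroup = lb,new_ndxr

[tcpout:lb]
server = {IP-O1}:8002,{IP-O2}:8002

[tcpout:new_ndxr]
server = {IP-N1}:8002,{IP-N2}:8002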
When a DS configuration is received by a Splunk server, it logs that info :-). If you are collecting the forwarders' splunkd logs, try this search:
index=_internal component="DeployedApplication" sourcetype="splunkd" (Refreshed OR Installing OR Uninstalling) host=*
From my understanding, this was true for Splunk 4.2. If the search returns no results, try wildcarding the "component" filter.
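For example, something along these lines (the exact component name can vary between versions, so the wildcard is deliberately broad):

index=_internal sourcetype=splunkd component=*Deploy* (Refreshed OR Installing OR Uninstalling) host=*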
OOPS, I may have caused some confusion. I just noticed that my IP Address references in the previous comment are wrong and should reflect {IP-O1} and {IP-O2}, the "old" platform instances of the Indexers. My apologies for allowing this to happen.
A bit later in the log, splunkd reports this:
Connected to idx={IP-N1}:8002
Connected to idx={IP-N2}:8002
Connected to idx={IP-N1}:8002
Connected to idx={IP-N2}:8002
Connected to idx={IP-N1}:8002
Connected to idx={IP-N2}:8002
Note that none of the New Indexers show up.
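To check this for all of the Forwarders at once, rather than opening each splunkd.log, a search along these lines over the forwarders' internal logs should work (the extracted field name is just for illustration and the pattern may need adjusting for your version):

index=_internal sourcetype=splunkd "Connected to idx" | rex "idx=(?<destination>\S+)" | stats count latest(_time) AS last_connected BY host destination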
Here is an excerpt from the splunkd.log that shows what I mean:
Initializing connection for non-ssl forwarding to {IP-O1}:8002
Initializing connection for non-ssl forwarding to {IP-O2}:8002
tcpout group lb using Auto load balanced forwarding
Group lb initialized with maxQueueSize=512000 in bytes.
Initializing connection for non-ssl forwarding to {IP-N1}:8002
Initializing connection for non-ssl forwarding to {IP-N2}:8002
tcpout group new_ndxr using Auto load balanced forwarding
Group new_ndxr initialized with maxQueueSize=512000 in bytes.
All of these messages come from "TcpOutputProc".
In the splunkd.log of the Forwarder, I see where both sets of Indexer IP Address:Port settings are initialized, but later in the log when it reports connecting to the Indexers, only the original settings are being reported.
I have tried restarting a couple of them and the results are the same.
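On the couple that I restarted, one way to double-check what the forwarder actually has in effect (rather than what was deployed) is btool, run locally on the forwarder; it prints the merged outputs.conf along with the file each setting came from:

$SPLUNK_HOME/bin/splunk btool outputs list --debug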
Have you tried restarting one of them? So, if I understand you correctly, the updated outputs.conf is present on the HWFs. What do you mean by "they have initialized the connection"? Which connection: the one to the deployment server or the one to the indexers?
Paolo,
Thank you for this search; it has helped confirm that some of my Forwarders are not communicating with my second set of Indexers. Looking at a sampling of them, I see that they have received the updated outputs.conf and have initialized the connection, but other than that, there are no messages in splunkd.log indicating they are actually using the new Indexers. Any ideas what might cause this?
you can use 'splunk list deploy-clients' on the deployment server to see who checked in and when they last checked in with the DS. I don't think that will tell you who got what app, but at least you can see if the clients have checked in.
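For example, from the deployment server (add -auth <user>:<password> if the CLI session is not already authenticated):

$SPLUNK_HOME/bin/splunk list deploy-clients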
This would 'normally' work, but in my case I have the Forwarders sending to two sets of Indexers and not all of them are showing up on the new Indexers. What complicates this method is that the Forwarders are still connected to the old Deployment Server; I need to look at this from the new environment's perspective. Thanks for the suggestion though, it would otherwise be exactly what is needed.
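One way to get that view from the new environment is to search the new Indexers' own internal logs, since each receiving Indexer records incoming forwarder connections in metrics.log. A rough sketch, run on the new platform (the field names are what I would expect from the tcpin_connections metrics group and may differ slightly by version):

index=_internal source=*metrics.log* group=tcpin_connections | stats count latest(_time) AS last_seen BY hostname sourceIp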