We have hundreds of Windows and Linux forwarders. Many have been cloned from other systems over the years. Recently, we have noticed that some of these hosts do not appear in search results due to invalid entries in server.conf and inputs.conf.
Removing these files and restarting Splunk seems to fix the problem as the files are recreated properly (or not at all).
What are the implications of automatically removing these files across all servers and restarting Splunk on each forwarder to ensure each system is reporting properly?
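For what it's worth, the cleanup step itself is small; a minimal sketch of the per-host script (Linux side) might look like the following. The install path `/opt/splunkforwarder` and the `SPLUNK_HOME` variable are assumptions about your environment, and the restart line is left commented so it can be wired into whatever rollout tooling you use:

```shell
#!/bin/sh
# Sketch of the per-forwarder cleanup discussed above.
# Assumes the Universal Forwarder lives under $SPLUNK_HOME
# (e.g. /opt/splunkforwarder); adjust for your install.

clean_local_conf() {
    splunk_home="$1"
    local_dir="$splunk_home/etc/system/local"
    # Remove only the two files that carry cloned, host-specific values.
    # -f: do not fail if a file is already absent.
    rm -f "$local_dir/server.conf" "$local_dir/inputs.conf"
}

clean_local_conf "${SPLUNK_HOME:-/opt/splunkforwarder}"

# A restart is required afterwards so Splunk regenerates the files:
# "${SPLUNK_HOME:-/opt/splunkforwarder}/bin/splunk" restart
```

Note that it deliberately leaves every other file in `etc/system/local/` alone, so settings such as a hand-edited outputs.conf survive.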
Inputs.conf contains the inputs that specify what you'd like Splunk to do: which files to monitor, which scripts to run, and so on.
Server.conf contains the set of attributes and values you can use to configure server options, and these are often specific to the system.
Deleting them will cause any custom settings that have been put in place, intentionally or otherwise, to be lost.
Files may or may not be recreated on restart, depending on the conf file in question, but either way the custom settings are gone.
By default, the server.conf that gets created in system/local/ contains the GUID of the server (as mentioned by yannK above) as well as the SSL certificate password, in hashed form.
The inputs.conf in the same location is created to set the default hostname for data arising from the box, if no other means of setting the host is supplied. This is the likely culprit for your hosts not showing up properly.
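For reference, a freshly generated pair might look roughly like the sketch below. The GUID, hash, and hostname are illustrative placeholders, not real output from any system:

```ini
# $SPLUNK_HOME/etc/system/local/server.conf  (illustrative values)
[general]
serverName = myhost
guid = 00000000-0000-0000-0000-000000000000

[sslConfig]
sslPassword = $1$placeholder-hash

# $SPLUNK_HOME/etc/system/local/inputs.conf  (illustrative values)
[default]
host = myhost
```

On a cloned box, the `guid` and `host` values above are exactly what come along for the ride from the source image, which is why the hosts collide in search.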
Well, server.conf generally isn't something you'd deploy via a deployment server. Inputs.conf generally would be deployed via an app managed by the DS. But if you deleted server.conf and had server-specific settings in it, they'd get trashed. This is really the only reason to avoid making this type of change.
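To make that layering concrete, a typical forwarder in this setup might look like the sketch below (the app name `my_deployed_inputs` is made up). Settings in etc/system/local take precedence over deployed apps, which is why a stale cloned file there can mask what the DS pushes out:

```
$SPLUNK_HOME/etc/
├── system/local/              # highest precedence; cloned server.conf/inputs.conf live here
│   ├── server.conf
│   └── inputs.conf
└── apps/
    └── my_deployed_inputs/    # hypothetical app pushed by the deployment server
        └── local/
            └── inputs.conf
```

Deleting the system/local copies lets the DS-managed app's settings win again on restart.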
The forwarder configuration is controlled by a deployment server in this case. So the custom settings are located elsewhere, right?
Are there any other reasons NOT to do something like this?