This is one of those questions that could take days to solve with dedicated resources.
First, make sure there are zero ERROR or WARN* messages in the splunkd.log files on the non-working forwarders and on your indexers. Something as inconspicuous as "warning: can't find saved search: blah" can stop Splunk in its tracks.
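A quick way to do that scan is to grep the log for ERROR/WARN lines. The real file lives under $SPLUNK_HOME/var/log/splunk/splunkd.log; the sketch below fakes a tiny sample log so the command can be shown end to end:

```shell
# Sketch: scan splunkd.log for ERROR/WARN lines.
# The sample log content here is made up purely for illustration.
log=$(mktemp)
cat > "$log" <<'EOF'
01-01-2024 00:00:01 INFO  TailReader - started
01-01-2024 00:00:02 WARN  SavedSearchFetcher - cannot find saved search: blah
01-01-2024 00:00:03 ERROR TcpOutputProc - connection refused
EOF
grep -E 'ERROR|WARN' "$log"
rm -f "$log"
```

Against a real install, just point grep at `$SPLUNK_HOME/var/log/splunk/splunkd.log` instead of the temp file.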
You could have networking issues. You could have firewall issues. Telnet, ping, and nslookup are all great tools for troubleshooting both.
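A minimal reachability sketch using bash's /dev/tcp, assuming a hypothetical indexer host; 9997 is only the conventional Splunk receiving port, so use whatever your outputs.conf actually points at:

```shell
# Prints "reachable" if a TCP connection to host:port succeeds.
# indexer.example.com and 9997 are placeholders for illustration.
check_port() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo reachable
  else
    echo unreachable
  fi
}
check_port indexer.example.com 9997
```

If this says unreachable but ping works, suspect a firewall rule on the port rather than basic connectivity.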
./splunk cmd btool outputs list --debug <-- great for outputs.conf issues; run on forwarders (heavy, light, universal)
./splunk cmd btool server list --debug <-- great for server.conf issues; run on indexers
./splunk cmd btool inputs list --debug <-- great for inputs.conf issues; run on all Splunk machines. Indexers use inputs.conf for SSL config; other machines use it to specify data inputs.
You could have a permissions issue. Check the service account the universal forwarder is running as. Does it have read permission on the data you're trying to read? Will Group Policy apply? If so, does it? etc.
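One concrete check (GNU coreutils assumed): print the file's owner and permission bits, then compare against the account the forwarder runs as. The temp file below just stands in for the real monitored path:

```shell
# If the forwarder's service account is neither the owner nor in the
# group, it cannot read a 640-mode file like this one.
f=$(mktemp)               # stand-in for your real monitored log file
chmod 640 "$f"
stat -c '%U:%G %a' "$f"   # GNU stat; macOS equivalent: stat -f '%Su:%Sg %Lp'
rm -f "$f"
```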
Another common mistake is broken stanza names: copy and paste sometimes drops a square bracket or half a stanza name.
If you're a victim of this, you'll usually find all your data in index=_internal, and everything after the broken stanza in inputs.conf will end up with the same sourcetype, index, etc., because those lines merge into the previous stanza.
[batch://path/to/file]
...
...
monitor://path/to/file]
...
...
In the example above, the missing opening bracket on the monitor line means everything after it is swallowed by the batch stanza and ends up in the wrong place.
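A quick way to spot a torn header like that is to grep for lines that end with "]" but don't begin with "[". The sketch below recreates the broken example above in a temp file (the settings in it are hypothetical):

```shell
# Flag any line that ends with "]" but does not begin with "[" --
# the signature of a stanza header that lost its opening bracket.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[batch://path/to/file]
index = main
monitor://path/to/file]
sourcetype = access_combined
EOF
grep -n '^[^[].*]$' "$conf"   # → 3:monitor://path/to/file]
rm -f "$conf"
```

Run the same grep against your real inputs.conf files to find the offending stanza quickly.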
Post the results of the above and we may be able to help you further. Cheers.