SoS unfortunately does not show much else for our universal forwarder. The most I could conclude from SoS was that file descriptor and memory usage were within acceptable limits while CPU usage sat at 100%. It also showed that the TailingProcessor was reporting errors, which was a good starting point.
After using system profiling tools, however, I found that of the 24 splunkd threads, only one was constantly at 100%. Further digging led me to a stanza that was causing errors for the TailingProcessor. It was not the stanza itself but rather the data being monitored: temporary files were being created and then removed in a certain spool folder, and this seems to have been causing problems for the UF. After blacklisting that spool folder, the load dropped to almost zero and has stayed there so far.
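For anyone hitting the same issue: excluding a directory from a monitor input is done with the blacklist setting in inputs.conf on the forwarder. A minimal sketch, with placeholder paths (the actual monitored path and spool folder name will differ in your environment):

```
# inputs.conf on the universal forwarder
# (paths below are hypothetical examples)
[monitor:///var/log/app]
# blacklist is a regex matched against the full file path;
# anything under the spool directory is excluded from tailing.
blacklist = /var/log/app/spool/
```

Restart the forwarder (splunk restart) after editing for the change to take effect.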
When you say "There are about 1600 that are actively monitored", 1600 of what are you referring to?
Here is a link to a previous Answer that hopefully touches on your issue:
If you haven't already, download the SoS (Splunk on Splunk) app. This already-great app just got updated.
So before contacting support, use this app first and see what you can find.
Wish you success!