We set up a universal forwarder on one of our Linux servers, and it started forwarding events to Splunk. Later we realized that there is a mismatch between the Splunk event count and the actual raw log count on that server.
The flow is as follows:
Are you using useACK? It is one of those damned-if-you-do, damned-if-you-don't features. If you use it, make sure you set a large output queue and wait time on the forwarder. Systemic indexing congestion can cause data loss, although Splunk does try to stop listening on the data port when the indexing queues fill. It happens. Normal system failures can also cause loss. Believe it or not, enabling this setting is more likely to cause lost events than not, because it creates so much extra work and slowdown.
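If you do keep acknowledgements on, the relevant settings live in outputs.conf on the forwarder. A minimal sketch; the stanza name, server address, and queue size here are illustrative placeholders, not recommendations:

```ini
[tcpout:primary_indexers]
server = indexer1.example.com:9997
# Request indexer acknowledgement before discarding sent events
useACK = true
# With useACK on, give the forwarder a much larger output queue
# so it can buffer events while waiting for ACKs
maxQueueSize = 512MB
```

Check the restart behavior after changing these: the forwarder needs a restart for outputs.conf changes to take effect.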
Check index retention; some of your events may have already expired.
Also, check lag ( ... | eval lag = _indextime - _time ). I have seen many cases where the UF falls farther and farther behind because there are too many files to sort through.
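A fuller version of that lag check, which also surfaces how far behind each host is; the index name and host value are placeholders for your environment:

```
index=your_index host=your_linux_server
| eval lag = _indextime - _time
| stats avg(lag) AS avg_lag_secs, max(lag) AS max_lag_secs, count BY host
| sort - max_lag_secs
```

A steadily growing max_lag_secs over repeated runs is the signature of a UF that cannot keep up with the number of monitored files.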