I've got a Splunk indexer (call it indexerA) on 6.1.5 which is forwarding logs for specific indexes to another Splunk indexer (call it indexerB) which is on 6.5.2. I ran this search on both using the exact same time period (1 hour from 2:30 to 3:30pm) and got different results:
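For reference, the index-based forwarding on indexerA is set up roughly along these lines; this is a sketch from memory with placeholder stanza and group names (the forwarded indexes include winevent_dc_index and _internal), not the exact config from these boxes:

# outputs.conf on indexerA
[tcpout]
indexAndForward = true

[tcpout:indexerB_group]
server = indexerB:9997

# transforms.conf on indexerA
[route_forwarded_indexes]
SOURCE_KEY = _MetaData:Index
REGEX = (winevent_dc_index|_internal)
DEST_KEY = _TCP_ROUTING
FORMAT = indexerB_group

# props.conf on indexerA
[default]
TRANSFORMS-route = route_forwarded_indexes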
index=_internal source=*metrics.log* group=per_index_thruput series="winevent_dc_index"
| rename series as index
| eval MB=kb/1024
| stats sum(MB) as MB by index
On indexerA the search returned 795.783 MB from 3,881 metrics events.
On indexerB the search returned 1,192.564 MB from 3,996 metrics events.
Next I ran a simple index=winevent_dc_index | stats count on both with the same time frame to see whether the number of indexed log entries matched on the two systems. They did, to within a fraction of a percent (a per-interval cross-check sketch follows the counts):
Events on indexerA winevent_dc_index: 596,399
Events on indexerB winevent_dc_index: 595,399
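To rule out boundary effects at the edges of the hour, the same count can be bucketed and compared interval by interval; if the two systems hold the same events, the columns should line up. A minimal sketch, run over the same 2:30 to 3:30 PM window on both boxes:

index=winevent_dc_index
| timechart span=5m count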
I then drilled down into one of the records on each box and compared the raw source. They were identical, so nothing is being added to the events themselves.
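Since metrics.log only reports what each instance believes it indexed, the raw volume can also be compared directly, bypassing the metrics pipeline entirely. A minimal sketch (the field names are mine; len() and round() are standard SPL eval functions):

index=winevent_dc_index
| eval bytes=len(_raw)
| stats count sum(bytes) as total_bytes
| eval MB=round(total_bytes/1024/1024, 3)

If the MB figures match here while the per_index_thruput totals do not, the discrepancy is in the metrics events themselves, not in the indexed data.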
The problem is probably that indexerB's _internal index contains the internal logs (splunkd.log, metrics.log, ...) of both indexers, since indexerA forwards its own internal logs along with the rest. That means the metrics search on indexerB sums per_index_thruput entries for winevent_dc_index from both hosts, while indexerA only ever sees its own.
Try excluding indexerB's own logs from your search and verify the results.
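To confirm this, split the original search by host to see each indexer's contribution; a sketch, assuming the host field on the metrics events carries the indexer names:

index=_internal source=*metrics.log* group=per_index_thruput series="winevent_dc_index"
| eval MB=kb/1024
| stats sum(MB) as MB by host

If indexerB's total is roughly the sum of the two hosts, adding host!=indexerB to the base search on indexerB should bring its result back in line with indexerA's.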