I've got a Splunk indexer (call it indexerA) on 6.1.5 which is forwarding logs for specific indexes to another Splunk indexer (call it indexerB) which is on 6.5.2. I ran this search on both using the exact same time period (1 hour from 2:30 to 3:30pm) and got different results:
index=_internal source=metrics group=per_index_thruput series="winevent_dc_index"
| rename series as index
| eval MB=kb/1024
| stats sum(MB) as MB by index
On indexerA the search returned 795.783 megabytes and 3,881 (metrics) events
On indexerB the search returned 1,192.564 megabytes and 3,996 (metrics) events
Next I ran a simple
index=winevent_dc_index | stats count
on both systems over the same time frame to see whether the number of indexed log entries matched. They were nearly identical:
Events on indexerA winevent_dc_index: 596,399
Events on indexerB winevent_dc_index: 595,399
I then drilled down on one of the records on one box, then the other, and compared the source. They were identical, so nothing is being added.
Why are these metrics different?
Probably the problem is that on IndexerB there are internal logs from both indexers (splunkd.log, metrics.log, ...) related to indexing into winevent_dc_index, whereas on IndexerA there are no IndexerB logs.
Try excluding IndexerB's logs from your search and verify the results.
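One way to try that exclusion, assuming the metrics events carry the originating machine in the default host field and that IndexerB's host value is literally "indexerB" (both worth verifying first), would be:

index=_internal source=metrics group=per_index_thruput series="winevent_dc_index" host!=indexerB
| rename series as index
| eval MB=kb/1024
| stats sum(MB) as MB by index

If the filtered total on IndexerB drops toward IndexerA's figure, the extra volume came from IndexerB's own metrics entries.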
I don't think that's the case, because this is in the global [tcpout] stanza of the outputs.conf file on the system sending the logs (IndexerA):
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
That should stop the _internal index, and anything else whose name starts with an underscore, from being sent.
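For context, the full stanza would look roughly like this (the group name indexerB_group is illustrative, not taken from the actual config):

[tcpout]
defaultGroup = indexerB_group
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*

Splunk evaluates the forwardedindex filters in numeric order and the last matching rule wins, so the blacklist on _.* overrides the earlier catch-all whitelist for any index beginning with an underscore.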
I've just been looking at the _internal metrics on IndexerB, and nothing I can see tells you what system they originated on. How would you know this is the case, and how would you exclude those events from the results?
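One way to find out would be to break the same metrics search down by the default host field, which Splunk stamps on every event, including _internal ones:

index=_internal source=metrics group=per_index_thruput series="winevent_dc_index"
| stats sum(eval(kb/1024)) as MB count by host

If everything comes back under a single host value (IndexerB's own), then the extra volume is not forwarded internal data and the discrepancy lies elsewhere.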