Splunk actually does a pretty good job of complaining about timestamp problems; it's just that most people never look at the messages.
I also have:
IndexerLevel - Time format has changed multiple log types in one sourcetype
IndexerLevel - Valid Timestamp Invalid Parsed Time
IndexerLevel - Failures To Parse Timestamp Correctly (excluding breaking issues)
IndexerLevel - Future Dated Events that appeared in the last week
IndexerLevel - Too many events with the same timestamp
...among many others that may apply to your environment.
Note that in newer Splunk versions the Data Quality tab of the Monitoring Console surfaces most of the above for you.
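As a rough sketch of what one of those checks might look like (this is not the actual saved search above; the index scope, time window, and group-by fields are illustrative), a future-dated-events search could be something like:

index=* earliest=-7d latest=+10y | where _time > now() | stats count by index sourcetype host

The latest=+10y window is what lets events with future timestamps into the result set; the where clause then keeps only events whose extracted _time is ahead of the wall clock.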
@jkat54's comment is a good one. You might also want to look in index=_internal for log messages like "DateParserVerbose - Accepted time (Fri Aug 25 06:25:15 2017) is suspiciously far away from the previous event's time" and "DateParserVerbose - Failed to parse timestamp". They indicate potential problems with your timestamp extractions.
See http://runals.blogspot.com/2014/04/splunk-timestamps-and-dateparserverbose.html for a great discussion on the topic.
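If you want to pull those messages out yourself, a starting point (assuming the standard component and log_level field extractions on splunkd internal logs) might be:

index=_internal sourcetype=splunkd component=DateParserVerbose | stats count by log_level host

From there you can drill into the raw messages for a given host to see which timestamps Splunk is complaining about.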
How about this:
index=index | eval skew=_indextime-_time | stats max(skew) as max min(skew) as min avg(skew) as avg by sourcetype host
_indextime is when the event was indexed; _time is the timestamp extracted from the event, so the difference between them is the skew. It's a starting point; from there you have to dig into the specific hosts and sourcetypes.
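To turn that into a shorter, actionable list, you could filter for sources whose average skew exceeds some threshold. A sketch (the 300-second cutoff is arbitrary, and index=index is still a placeholder for your own index):

index=index | eval skew=_indextime-_time | stats avg(skew) as avg_skew max(skew) as max_skew by sourcetype host | where abs(avg_skew) > 300 | sort - avg_skew

Using abs() catches both lagging events (positive skew) and future-dated events (negative skew) in one pass.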