I was seeing the same behavior. Neither resource_usage.log nor disk_objects.log was being collected, so many DMC panels were blank (sometimes the DMC can be tricky). It turned out we somehow had two invalid props.conf entries in our _cluster app. To track this down:
1. In splunkd.log, look for the log entry that precedes the TailReader error you are seeing above. In our case we saw:
```
01-20-2017 13:04:38.063 -0500 ERROR IndexedExtractionsConfig - Invalid value='' for parameter='INDEXED_EXTRACTIONS'.
01-20-2017 13:04:38.063 -0500 ERROR TailReader - Ignoring path="C:\Program Files\Splunk\var\log\introspection\resource_usage.log.3" due to: Invalid indexed extractions configuration - see prior error messages
```
Notice that the invalid parameter is `INDEXED_EXTRACTIONS` and that it reports a value of `''`.
2. On one of your peer indexers, run the following command. You may want to redirect the output to a file to make it easier to view.
```
splunk btool props list --debug
```
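For example, to capture the output in a file (the file name here is arbitrary):

```
splunk btool props list --debug > props_debug.txt
```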
3. In the output from above, find the lines where the invalid parameter is set. In your case, as in ours, they should be under the `[splunk_resource_usage]` and `[splunk_disk_objects]` stanzas; see the example below.
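With `--debug`, btool prefixes each line with the file it came from, so the bad settings stand out. Illustrative output (your paths will differ):

```
C:\Program Files\Splunk\etc\slave-apps\_cluster\local\props.conf [splunk_resource_usage]
C:\Program Files\Splunk\etc\slave-apps\_cluster\local\props.conf INDEXED_EXTRACTIONS =
```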
4. Once you find the line, it points you to the config file that has the bad value. In our case this was etc\slave-apps\_cluster\local\props.conf, so we fixed the copy of this file in master-apps on the cluster master and redistributed the cluster bundle. For some reason we had the following 4 lines in this file:
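Based on the IndexedExtractionsConfig error above, the four lines were presumably empty INDEXED_EXTRACTIONS settings under both stanzas, roughly:

```
# reconstructed example; the exact lines in your props.conf may differ
[splunk_resource_usage]
INDEXED_EXTRACTIONS =

[splunk_disk_objects]
INDEXED_EXTRACTIONS =
```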
We removed all four lines.
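If you are unsure how to push the bundle, the standard command on the cluster master is below; add `--answer-yes` to skip the confirmation prompt:

```
splunk apply cluster-bundle
```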
5. The change did not cause the indexers to restart, so I had to manually run a rolling restart with `splunk rolling-restart cluster-peers` on the cluster master. Once the peers restarted, all the panels that rely on these logs started rendering data.
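To verify the fix without waiting on the DMC, you can search the _introspection index directly for the two sourcetypes these logs feed:

```
index=_introspection (sourcetype=splunk_resource_usage OR sourcetype=splunk_disk_objects) | head 10
```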
Hope this helps,