We have indexes with a retention of a year each, and when looking at _audit it's pretty obvious that the queries against these indexes are mostly either daily or weekly. We would like to present to management the "waste" of storage (and maybe make a case for using cheaper storage for older data, i.e. older than 3 months). Is anybody aware of a visualization that does this? We tried to combine the information from _audit with the retention information we got via REST, but we don't have a cohesive view that would be appealing to management. Any ideas?
I'm not fully sure what you want to achieve. Data distribution is pretty easy to obtain, as @gcusello already mentioned - you can make a REST call against an indexer, or you can use the "dbinspect" command.
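For example, a minimal sketch using dbinspect to break bucket count, size on disk, and average bucket age down by bucket state (replace `your_index` with a real index name; the `state`, `endEpoch` and `sizeOnDiskMB` fields come from the standard dbinspect output):

```
| dbinspect index=your_index
| eval bucket_age_days = round((now() - endEpoch) / 86400, 1)
| stats count AS buckets, sum(sizeOnDiskMB) AS total_mb, avg(bucket_age_days) AS avg_age_days by state
```

That gives you the hot/warm/cold split per index, which is the "storage" half of the picture you want to show management.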
But if you mean "index usage" as in "how users are searching the indexes" - well, that's more complicated, and I don't think there is a way to give a 100% accurate answer to that question. While you can get a history of searches, they can contain many "dynamic" elements like eventtypes, subsearches and so on, and the actual low-level search job logs are retained for a relatively short time.
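That said, a rough approximation is possible from _audit, as long as users reference indexes explicitly. A sketch (note the caveats above still apply - the regex only catches literal `index=` terms in the search string, so anything going through macros, eventtypes or default indexes is missed):

```
index=_audit action=search info=completed search=*
| rex field=search max_match=10 "index\s*=\s*\"?(?<searched_index>[\w\-\*]+)"
| mvexpand searched_index
| stats count AS searches, dc(user) AS users by searched_index
| sort - searches
```

Treat the result as a lower bound on usage, not an exact count.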
It's exactly like @PickleRick said. There is no mechanism to look up afterwards which data a search has accessed, even at index level. We asked for that feature a couple of years ago to fulfil e.g. GDPR requirements, but I haven't heard anything about it since.
If you want to get this kind of information, I think you should start ingesting the search.log + info.csv for all your searches and then generate some queries against that data.
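As a sketch of that idea, an inputs.conf monitor stanza could pick those files up from the dispatch directory before the jobs expire (the path pattern is the standard dispatch location, but the index name is just an example, and dispatch artifacts are cleaned up quickly, so test the timing on your environment):

```
[monitor://$SPLUNK_HOME/var/run/splunk/dispatch/*/search.log]
index = search_telemetry
sourcetype = splunk_search_log

[monitor://$SPLUNK_HOME/var/run/splunk/dispatch/*/info.csv]
index = search_telemetry
sourcetype = splunk_search_info
```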
Maybe the easiest way is to create an idea on ideas.splunk.com again, or check whether there is already one for this.
It's a normal practice to use less performant (and less expensive) storage for cold buckets.
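As a minimal indexes.conf sketch, the cold path can simply point at a cheaper volume (the paths and retention value are examples, not recommendations):

```
[your_index]
homePath   = $SPLUNK_DB/your_index/db
coldPath   = /mnt/cheap_storage/your_index/colddb
thawedPath = $SPLUNK_DB/your_index/thaweddb
# keep data for one year before freezing (deleted by default)
frozenTimePeriodInSecs = 31536000
```

Note that *when* buckets roll from warm to cold is controlled separately (e.g. by maxWarmDBCount or homePath.maxDataSizeMB), so hitting an exact 3-month boundary takes some tuning.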
In the Monitoring Console [Indexing > Indexes and Volumes > Indexes and Volumes: Details] you can find the distribution between Hot/Warm and Cold buckets and all the retention information.
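If you'd rather pull the same retention settings into your own dashboard, a sketch using the REST endpoint behind that view (run from the search head; field names follow /services/data/indexes):

```
| rest /services/data/indexes
| eval retention_days = round(frozenTimePeriodInSecs / 86400, 0)
| table title currentDBSizeMB maxTotalDataSizeMB retention_days coldPath
```

Joining this with the _audit-based usage counts per index would give you the single "retention vs. actual usage" table you were after.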