There are plenty of things to check; below are the first steps I would suggest.
Start here; this will tell you how many events are indexed and how many unique hosts are sending data to each indexer:
| tstats count as event_count dc(host) as u_host where index=* by splunk_server
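To drill down, you can also split the same search by host; a host that only ever shows up under one splunk_server is probably not load balancing across your indexers. A rough sketch of the idea:
| tstats count where index=* by host splunk_server | sort host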
If you see very uneven numbers there, start looking at outputs.conf and verify that your hosts have the appropriate outputs.conf configurations.
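For example, a forwarder's outputs.conf set up to auto load balance across two indexers might look roughly like this (the server names here are placeholders, adjust for your environment):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997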
You can also check load over time:
| tstats count as event_count where index=* by splunk_server _time span=1d | timechart span=1d max(event_count) as total_events by splunk_server
Hope it leads you in the right direction.
Thank you @adonio.
When running:
| tstats count as event_count by splunk_server _time span=1d | timechart span=1d max(event_count) as total_events by splunk_server
we see the following results.
How come some of the cells are empty? These indexers were up and running every day...
Forgot to add the where clause to the tstats; see my fix above.
Also, if there are no data/events that day for that index, then it's 0/null.
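If you would rather see those empty cells as zeros, you can tack fillnull onto the end of the search, something like:
| tstats count as event_count where index=* by splunk_server _time span=1d | timechart span=1d max(event_count) as total_events by splunk_server | fillnull value=0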
Really, really interesting @adonio - adding the where clause changed everything.