You may want to interrogate the Splunk indexer's contributions to the _internal index as a timechart by source. The difference in log events over time should correspond to your hourly CPU temper tantrum. Hopefully you can see a periodic difference in the number of events by source, which may help you identify events that only occur in that span.
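For example, a sketch of that search (the splunk_server value is a placeholder; substitute the name of your busy indexer):

    index=_internal splunk_server=idx01
    | timechart span=1m count by source

Run it over a window covering a few of the spikes and look for a source whose event count jumps on the hour.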
Do you have any batch operations indexing data every hour... maybe being directed to only one indexer instead of being load-balanced?
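One way to check (a sketch; the per_index_thruput metrics in _internal record indexing volume per peer):

    index=_internal source=*metrics.log* group=per_index_thruput
    | timechart span=1m sum(kb) by host

If a single host's line spikes on the hour while the others stay flat, that batch feed probably isn't being load-balanced.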
@bmacias84 - yes, it's definitely Splunk; I can see it consuming CPU by watching 'top'.
Are you sure it's a Splunk process? If you are running a *nix server, I would monitor all processes with the Nix_TA, or with the windows_TA on a Windows system. Set the collection interval to 1 min.
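For example, with the *nix TA the ps scripted input can be enabled with a 60-second interval; this is a sketch, with the stanza name taken from the TA's default/inputs.conf, so verify it matches your version:

    # $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/inputs.conf
    [script://./bin/ps.sh]
    interval = 60
    disabled = 0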
S.o.S is usually installed per Splunk instance, so to check the SHs the best thing to do is install S.o.S on them as well.
Doh... thanks @MuS - I enabled it and can at least see that it is searches causing the CPU spike, but I can't drill down to find out which search. The only searches it seems to list are those local to the indexer, not the distributed searches from the search head.
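As a workaround I'm poking at remote_searches.log on the peer, which I believe records the searches dispatched from the search head; a sketch, with host as a placeholder:

    index=_internal source=*remote_searches.log* host=idx01
    | timechart span=1m count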
did you enable the cpu.sh input?
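If not, something like this in the SoS app's local/inputs.conf should turn it on (the exact app directory and script path vary by SoS version, so mirror the stanza from the app's default/inputs.conf):

    [script://$SPLUNK_HOME/etc/apps/sos/bin/cpu.sh]
    disabled = 0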
Not having much luck with SoS; the CPU report is all blank for some reason. Not sure what else it can provide.
If you have hourly report searches, or hourly monitoring of any large files, then you will see the spike. No big deal.
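You can confirm by lining scheduler activity up against the spike; a sketch, using the scheduler sourcetype in _internal:

    index=_internal sourcetype=scheduler
    | timechart span=1m sum(run_time) by savedsearch_name

Any saved search whose run_time stacks up exactly on the hour is the likely culprit.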