If anything, this represents the work at the end of an alert's search, when Splunk decides whether it's time to fire and actually emit the alert actions. It doesn't correlate with the running searches themselves.
The concurrent workload of scheduled searches (both those you might consider alerts and otherwise) should be available in an accessible form within the Splunk Distributed Management Console. For live data it uses the server/status/resource-usage/splunk-processes endpoint as its source (as accessed via |rest), and for historical concurrency information it uses the introspection data.
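For a quick ad-hoc look at live concurrency, something along these lines should work (a sketch only; the exact search_props.* field names exposed by the endpoint are an assumption to verify on your version):

    | rest splunk_server=local /services/server/status/resource-usage/splunk-processes
    | search search_props.type=scheduled
    | stats count AS concurrent_scheduled_searches

The |rest call pulls the per-process resource usage snapshot, the search filters it down to scheduled search processes, and the stats gives you the current count.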
Specifically, it digs into the data in the introspection index, from the resource_usage.log or sourcetype=splunk_resource_usage data for component=PerProcess where data.search_props.type=scheduled.
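As a rough sketch of the historical view (assuming the index is _introspection and that data.search_props.sid is populated for search processes on your version), you could chart concurrency over time like so:

    index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.type=scheduled
    | timechart span=1m dc(data.search_props.sid) AS concurrent_scheduled_searches

Counting distinct search IDs per minute approximates how many scheduled searches were actually running at once in each window.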
Theoretically you could build a picture from scheduler.log, but you'd have to compute overlaps based on the dispatch_time and run_time of each alert, and this is pretty ungainly.
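If you did want to go that route, the concurrency command can do the overlap math for you. This is a sketch under the assumptions that dispatch_time is an epoch timestamp and run_time is a duration in seconds in your scheduler.log events, and that filtering on status=success is appropriate for what you want to count:

    index=_internal sourcetype=scheduler status=success run_time=*
    | concurrency duration=run_time start=dispatch_time
    | timechart span=1m max(concurrency) AS concurrent_scheduled_searches

Even so, the introspection data above is a more direct measurement of what was actually running.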
If you wouldn't mind turning this follow-up into a specific question -- how can we review the concurrent search load of our scheduled searches? -- I think it's a far more common goal, and I don't see a clear question asked along these lines.
Keep in mind, of course, that the apportionment algorithm of the scheduler means the concurrency of scheduled searches might drop in times of high contention with searches launched either by ad-hoc user activity or by dashboard loads. (The Splunk search quota and apportionment machinery essentially considers searches stored in dashboards, or invoked on dashboard load, to be equivalent to user-typed searches.)