You'd have to more or less roll this yourself, e.g., run a search every 5 minutes and look at the last x minutes to see whether the event count has changed drastically.
However, in the Splunk world you often run your indexing system somewhere near capacity, so during a spike it can go over capacity, causing lag in the indexed data, which might make your volume appear to go down.
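A minimal, untested sketch of that kind of scheduled search, comparing each 5-minute bucket of the recent window to the one before it; the sourcetype/host, the count_change field name, and the 10000 threshold are placeholders you'd tune for your data:

sourcetype=foo host=bar earliest=-15m@m latest=@m | timechart span=5m count | delta count as count_change | where abs(count_change) > 10000

Schedule it every 5 minutes and alert whenever it returns any rows.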
Options:
Once 4.1 is released Real Soon Now(TM), you could run a real-time search on the forwarder's internal logs of its aggregate volume. This represents a continuous load, though possibly not that high a one; it would require evaluation.
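As a rough sketch of what that search might look like, run against the forwarder metrics.log data in the _internal index; the group and field names here (per_host_thruput, series, kb) are from memory and may differ by version:

index=_internal source=*metrics.log* group=per_host_thruput | timechart span=1m sum(kb) as indexed_kb by series

You could run that as a real-time search in 4.1, or as a frequent scheduled search over a short window on earlier versions.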
You could use a frequently run search on your actual dataset over recent time, say 15 to 5 minutes ago, and alert if the count rises or drops drastically. For this type of search goal you don't really even need full coverage; a sampling might be sufficient.
e.g., a search like:
sourcetype=foo host=bar | stats count as event_count | where event_count>50000 OR event_count<200
This would emit one result only if the count is outside the thresholds, so you could make your alert condition "number of events greater than 0".