I've been looking for ways to get fast results for inquiries about the number of events for:
And for #2 by sourcetype and for #3 by index.
Here are the ideas I've come up with, and I thought I'd share them, plus give a Splunk Answer that others can add to. If you have something clever in this general area (that's fast) please share it here.
Count of events for an index or across all of them with eventcount:
| eventcount summarize=false index=winevent_index
There is no way to restrict eventcount to a particular sourcetype or source, and the Time Picker has no effect on it: it counts all events in an index for all time.
Here is how to look at all the non-internal indexes:
| eventcount summarize=false index=* report_size=true
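If it helps to see the index sizes in something bigger than bytes, here is a quick sketch (it assumes report_size=true returns the size in a field called size_bytes; check the actual field name in your results):
| eventcount summarize=false index=* report_size=true
| eval size_MB = round(size_bytes / 1024 / 1024, 2)
| sort - count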
Similar search with tstats:
| tstats count where index=* groupby index,_time span=1d
This does respect the Time Picker, so if you do last 7 days you
get a count for each index, for each day.
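If you would rather pin the window down in the search itself instead of relying on the Time Picker, I believe tstats also accepts earliest/latest in the where clause. A sketch of the same search over the last 7 full days:
| tstats count where index=* earliest=-7d@d latest=now groupby index,_time span=1d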
This gives the count of events for one index, with Time Picker set to Week to date:
| tstats count where index=winevent_dc_index groupby index,_time span=1d
index               _time        count
winevent_dc_index   2019-02-03    7765708
winevent_dc_index   2019-02-04    9837331
winevent_dc_index   2019-02-05   10624149
winevent_dc_index   2019-02-06   10198089
winevent_dc_index   2019-02-07    5475228
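If all you want is the total for the week rather than a row per day, you can simply sum the daily rows (or drop the _time split entirely):
| tstats count where index=winevent_dc_index groupby index,_time span=1d
| stats sum(count) as weekly_total by index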
But I hadn't been able to figure this out for a sourcetype-based search
until today. This works great on the main index, which has lots of sourcetypes:
| tstats count where index=main groupby index,sourcetype,_time span=1d
Whereas this search provides the count for a particular sourcetype, by index, by day:
| tstats count where sourcetype=syslog groupby index,sourcetype,_time span=1d
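And if you would rather see that as a chart than a flat table, the prestats=true form of tstats feeds straight into timechart. A sketch, not something I have polished:
| tstats prestats=true count where sourcetype=syslog groupby _time,index span=1d
| timechart span=1d count by index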
I finally decided that I'd like to see Events Per Second (EPS) for all sourcetypes, averaged over a given period. With the Time Picker set to Last 7 days, here is a search that provides the EPS number per sourcetype for all sourcetypes:
| tstats min(_time) as earliest_event max(_time) as mostRecent_event count where sourcetype=* NOT index=os by sourcetype
| eval elapsedTime = mostRecent_event - earliest_event, EPS = tostring(count / elapsedTime, "commas")
| convert ctime(earliest_event), ctime(mostRecent_event)
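One thing to watch with that search: tostring(..., "commas") makes EPS a string, so sorting on it sorts lexically. If you want the sourcetypes ranked by rate, sort on a numeric copy first. A sketch along the same lines:
| tstats min(_time) as earliest_event max(_time) as mostRecent_event count where sourcetype=* NOT index=os by sourcetype
| eval elapsedTime = mostRecent_event - earliest_event, EPS_num = count / elapsedTime
| sort - EPS_num
| eval EPS = tostring(round(EPS_num, 2), "commas")
| convert ctime(earliest_event) ctime(mostRecent_event)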
But for something like syslog (which is so generic), this search is better because I can tell by index and host what the syslogs are:
| tstats min(_time) as earliest_event max(_time) as mostRecent_event count where sourcetype=syslog by index host
| eval elapsedTime = mostRecent_event - earliest_event, EPS = tostring(count / elapsedTime, "commas")
| convert ctime(earliest_event), ctime(mostRecent_event)
You seem to have worked this all the way through. You have not asked a question for us to help you solve. ???
Those last two searches with tstats are my best shot at getting the results I was looking for, and they seem to do the job, as well as being fast. I was hoping that others would offer suggestions I had not thought of, and that I might learn something new, and I also wanted to share what I had figured out.
Another way to get events per second is like this:
| tstats count as event_count where index=yourIndex by sourcetype _time span=1s
| timechart span=1s max(event_count) as events_per_sec by sourcetype
or like this:
| tstats count as event_count where index=main by sourcetype _time span=1d
| eval events_per_sec_day = round(event_count / 86400, 2)
I liked the second one of these two the best.
@wrangler2x very good, converting to an answer
What is the problem you are trying to solve?
How about the | metadata command?
https://docs.splunk.com/Documentation/Splunk/7.2.3/SearchReference/Metadata
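For counts by sourcetype in one index it looks roughly like this (metadata returns firstTime, lastTime, recentTime, and totalCount for each sourcetype):
| metadata type=sourcetypes index=main
| sort - totalCount
| convert ctime(firstTime) ctime(lastTime) ctime(recentTime)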
I'm putting together a list of the size of the log data for each sourcetype (that's already done) and now I'm adding a column to it that will reflect average EPS. So I could take events in 24 hours and divide by 86400 or take it for a week and divide by 604,800, for example.
The metadata command works fine as well, but it seems not to be accurate for anything but All Time because of the way it works with buckets. So if you say Last 7 Days it might return the last 9, for example.
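Given the goal described above (an average-EPS column per sourcetype), here is a minimal sketch with tstats, assuming the Time Picker is set to Last 7 days so the divisor of 604800 seconds matches the window:
| tstats count where index=* by sourcetype
| eval avg_eps = round(count / 604800, 2)
| sort - avg_eps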