I have a search yielding a series of events:
2017-05-15 68.222609
2017-05-16 68.243478
2017-05-17 68.276522
2017-05-18 68.292174
2017-05-19 68.326957
2017-05-20 68.333913
2017-05-21 68.333913
2017-05-22 68.356522
2017-05-23 68.382609
2017-05-24 68.419130
2017-05-25 68.436522
2017-05-26 68.448696
2017-05-27 68.450435
2017-05-28 68.448696
2017-05-29 68.457391
2017-05-30 68.570435
2017-05-31 68.593043
2017-06-01 67.612174
2017-06-02 67.622609
2017-06-03 67.626087
I want to reduce it to just the first and last events:
2017-05-15 68.222609
2017-06-03 67.626087
I can use earliest and latest to reduce everything to a single event, like this:
e_t                  e_v        l_t                  l_v
05/15/2017 00:00:00  68.222609  06/03/2017 00:00:00  67.626087
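A search along these lines would produce that single combined event (the field name value is an assumption, since the original search isn't shown):

```
... | stats earliest(_time) as e_t, earliest(value) as e_v,
            latest(_time)  as l_t, latest(value)  as l_v
```

stats earliest(value) and latest(value) return the value from the chronologically first and last events, so the four columns land in one row.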
and I can probably figure out how to split that back into 2 events.
I can also use transaction and mvindex to get the first and last values out (see "How do I refer to the first, nth or last value of a multivalue field?"), but then I still have the same "how do I turn this into 2 events" problem.
is there a better way than either of the above to do this?
Just add this to it:
... | multireport [| head 1] [| tail 1]
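For example, run against the same splunkd Metrics data used in the answer below (a sketch, not the asker's actual search), this keeps only the first and last events while running the base search once — multireport feeds the full result set into each bracketed subpipeline:

```
index=_internal sourcetype=splunkd log_level="INFO" component="Metrics" average_kbps=*
| multireport
    [| head 1]
    [| tail 1]
| table _time average_kbps
```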
cool; don't need to run preceding searches twice!
...now if Splunk would just document multireport! That command is an interesting solution to a number of problems.
Another option might be to use eventstats to get the total number of events, streamstats to get a running count, and then just keep the first and last
... | eventstats count as total | streamstats count | where count=1 OR count=total
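Spelled out against the same run-anywhere Metrics search used in the answer below, with the helper fields dropped afterwards (count and total are just working names):

```
index=_internal sourcetype=splunkd log_level="INFO" component="Metrics" average_kbps=*
| eventstats count as total
| streamstats count
| where count=1 OR count=total
| fields - count total
| table _time average_kbps
```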
Another really good solution! I was thinking:
| streamstats count as _c1 | sort - _c1 | streamstats count as _c2 | where _c1=1 OR _c2=1 | sort _c1
but I really wasn't aware of eventstats.
Repost this as an answer and I'll throw it some points!
@wegscd, you can do this by using head 1 and tail 1, along with append to stitch the results together, provided the results are time-series data in chronological or reverse-chronological (the default) order.
Since the field names are not mentioned, I am including a run-anywhere search similar to your scenario, based on the splunkd Metrics logs in Splunk's _internal index. It uses _time for the time and average_kbps for the value.
index=_internal sourcetype=splunkd log_level="INFO" component="Metrics" average_kbps=*
| head 1
| table _time average_kbps
| append [search index=_internal sourcetype=splunkd log_level="INFO" component="Metrics" average_kbps=*
| tail 1
| table _time average_kbps]
PS: Each search returns only one event before append correlates them, so this should perform well.
Does this run search index=_internal sourcetype=splunkd log_level="INFO" component="Metrics" average_kbps=* twice?
Yes: first to get the first result, and a second time to get the last result.