Splunk Search

How to find a rolling average for the last 7 days and fill the time gaps where there are no events in the last 7 days

mnj1809
Path Finder

Hello,

I have a requirement to find the rolling average and % variance as described below. If there is no event for a given date then we should have 0 events for that missing date, so that the dates in our report are continuous.

The "7d Rolling average Daily Event Count" column is the average count of events ingested each day for the last 7 days NOT including today (yesterday thru previous 6 days).

"Variance" is the difference in count of events between today's event count and the 7d rolling Avg.  (Today's event count minus the 7d rolling average event count).

"% Variance" is the percentage difference between today's event count and the 7d rolling average (Variance divided by 7d rolling average ).

"average Daily Variance" is the absolute value of the 7d rolling average of the % Variance values, not including today (yesterday thru previous 6 days).

Example:

data source | Last event time | Event Count | 7d rolling average event count | Variance | % Variance | average Daily Variance
test | 9/3/2021 | 2957 | 2060 | 897 | 44% | 24%
test | 9/2/2021 | 1438 | 2064 | -626 | -30% | 24%
test | 9/1/2021 | 2906 | 2055 | 851 | 41% | 23%
test | 8/31/2021 | 2753 | 2036 | 718 | 35% | 22%
test | 8/30/2021 | 2131 | 2036 | 95 | 5% | 22%
test | 8/29/2021 | 2235 | 2010 | 225 | 11% | 23%
test | 8/28/2021 | 3126 | 1961 | 1165 | 59% | 21%
test | 8/27/2021 | 2785 | 1931 | 854 | 44% | 20%
test | 8/26/2021 | 1331 | 1939 | -608 | -31% | 20%
test | 8/25/2021 | 1685 | 1950 | -265 | -14% | 20%
test | 8/24/2021 | 1426 | 1984 | -558 | -28% | 20%
test | 8/23/2021 | 1939 | 1965 | -26 | -1% | 21%
test | 8/22/2021 | 2467 | 1966 | 501 | 25% | 20%
test | 8/21/2021 | 1482 | 2010 | -528 | -26% | 20%
test | 8/20/2021 | 2026 | 2016 | 10 | 0% | 20%
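
As a worked check against the top row: Variance = 2957 − 2060 = 897, and % Variance = 897 / 2060 ≈ 44%. In SPL, given one row per day per data source sorted oldest first, the derived columns could be computed along these lines (a minimal sketch with illustrative field names, not a final query):

| streamstats window=7 current=f avg(count) as rolling_avg by data_source
| eval variance=count-rolling_avg
| eval perc_variance=round(variance/rolling_avg*100)
| streamstats window=7 current=f avg(perc_variance) as avg_daily_variance by data_source
| eval avg_daily_variance=abs(round(avg_daily_variance))

The current=f option is what keeps "today" out of both rolling averages.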

 

Thanks for your help in advance.


bowesmana
SplunkTrust

Without some idea of your data it's not easy to give you a precise answer; however, this simulated query produces some random data over the last month and then creates a table like yours. It is the use of streamstats that calculates the rolling averages for you.

| makeresults
| eval row=mvrange(1,20000)
| mvexpand row
| eval _time=_time-((random() % 30 + 1) * 86400)
| timechart span=1d count
| eval skip_day=relative_time(now(), "-9d@d")
| eval count=if(_time=skip_day, 0, count)
| fields - skip_day
| streamstats window=8 current=f avg(count) as avg7day
| eval variance=round(count-avg7day), varperc=round(variance/avg7day*100)
| streamstats window=8 current=f avg(varperc) as avgvarperc
| eval avgvarperc=round(avgvarperc), avg7day=round(avg7day)
| table _time count avg7day variance varperc avgvarperc
| rename count as "Event Count", avg7day as "7 Day Rolling Average", variance as Variance, varperc as "% Variance", avgvarperc as "Average Daily Variance"

I have introduced a 0 value day so you can see that it has no impact on the final table. I wasn't sure that the average daily variance calculation is correct based on your description, but hopefully this gives you enough to get what you want.
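
A small caveat on the window size: with current=f, streamstats averages the rows before the current one, so window=8 takes the previous 8 days into the average. To match the stated "yesterday through the previous 6 days" (7 values) exactly, the window would be 7, e.g.:

| streamstats window=7 current=f avg(count) as avg7day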

 


mnj1809
Path Finder

Hello,


Thanks for your prompt reply, but the query you shared is not giving the expected result.
I've achieved the 7d rolling average and variance, but now I have to fill the gaps in the "Last Event Time" column, i.e. I want to show missing dates with an event count of 0. Could you please suggest how to fill that gap?

Data Source | Last Event Time | Event Count | 7d rolling average event count | Variance | %Variance
data_source1 | Sep 13,2021 10:32:58 AM EDT | 23 | 20 | 3 | 15
data_source1 | Sep 10,2021 11:30:07 AM EDT | 22 | 17 | 5 | 30
data_source1 | Sep 09,2021 09:51:28 AM EDT | 19 | 14 | 5 | 36
data_source1 | Sep 08,2021 09:56:16 AM EDT | 19 | 11 | 8 | 73
data_source1 | Sep 05,2021 04:32:44 AM EDT | 19 | 9 | 10 | 112
data_source1 | Sep 02,2021 10:03:06 AM EDT | 19 | 6 | 13 | 217
data_source1 | Sep 01,2021 04:32:54 AM EDT | 19 | 3 | 16 | 534
data_source1 | Aug 31,2021 10:13:22 AM EDT | 19 | 0 | 19 | 0

ITWhisperer
SplunkTrust
| makeresults
| eval _raw="Data Source	Last Event Time	Event Count	7d rolling average event count	Variance	Percent_Variance
data_source1	Sep 13,2021 10:32:58 AM EDT	23	20	3	15
data_source1	Sep 10,2021 11:30:07 AM EDT	22	17	5	30
data_source1	Sep 09,2021 09:51:28 AM EDT	19	14	5	36
data_source1	Sep 08,2021 09:56:16 AM EDT	19	11	8	73
data_source1	Sep 05,2021 04:32:44 AM EDT	19	9	10	112
data_source1	Sep 02,2021 10:03:06 AM EDT	19	6	13	217
data_source1	Sep 01,2021 04:32:54 AM EDT	19	3	16	534
data_source1	Aug 31,2021 10:13:22 AM EDT	19	0	19	0"
| multikv forceheader=1
| fields - _* linecount
| table Data_Source Last_Event_Time Event_Count * Percent_Variance



| eval day=strptime(Last_Event_Time,"%b %d,%Y")
| fieldformat day=strftime(day,"%Y/%m/%d")
| makecontinuous day
| fillnull value=0 Event_Count
| filldown Data_Source

(Note: this fills the gaps for a single data source; with several data sources you would pivot with xyseries first, as in the later answers, since makecontinuous cannot be grouped by data source and filldown would mislabel the sources.)

bowesmana
SplunkTrust

Please share your query


mnj1809
Path Finder

Hello,

Thanks for your response. My query is below.

| tstats count as per_day_count latest(_time) as LTime_per_day where (index=xxxx earliest=-14d@d latest=now) groupby source _time
| bin span=1d _time
| lookup data_sources.csv source OUTPUTNEW data_source, Monitor
| search Monitor=Yes
| eval Last_event_time_per_day=strftime(LTime_per_day,"%b %d,%Y %H:%M:%S %p %Z")

| join type=outer data_source
[ tstats count latest(_time) as LTime values(sourcetype) as sourcetype where (index=yyyy* OR (index=zzzz sourcetype=*amp*) earliest=-d@d latest=now) groupby source _time
| bin span=1d _time
| lookup data_sources.csv source OUTPUTNEW data_source Monitor
| eval Last_event_time=strftime(LTime,"%b %d,%Y %H:%M:%S %p %Z")
| eval status= if(count > 0, "Data Received","NO Data Received")
]
| eval Last_event_time=strftime(LTime,"%b %d,%Y %H:%M:%S %p %Z")
| table data_source sourcetype count Last_event_time Last_event_time_per_day status per_day_count _time
| fillnull value="NO Data Received" status
| sort - status
| rename count as "Event Count"
| sort + sourcetype
| fillnull value="-"
| streamstats window=7 current=f list(per_day_count) as count1 by data_source
| stats latest(per_day_count) as today_count, max(Last_event_time_per_day) as max_time_each_day by _time data_source
| sort data_source, -_time

and below is a sample subset returned by the above query:


max_time_each_day | data_source | today_count
Sep 15,2021 07:25:01 AM EDT | ABC | 14503
Sep 14,2021 23:59:51 PM EDT | ABC | 51570
Sep 13,2021 23:59:57 PM EDT | ABC | 56331
Sep 12,2021 23:59:59 PM EDT | ABC | 55717
Sep 11,2021 23:59:51 PM EDT | ABC | 54480
Sep 10,2021 23:59:49 PM EDT | ABC | 65367
Sep 09,2021 23:59:59 PM EDT | ABC | 61999
Sep 08,2021 23:59:57 PM EDT | ABC | 55405
Sep 07,2021 23:59:51 PM EDT | ABC | 62327
Sep 06,2021 23:59:48 PM EDT | ABC | 54137
Sep 05,2021 23:59:56 PM EDT | ABC | 49224
Sep 04,2021 23:59:54 PM EDT | ABC | 47783
Sep 03,2021 23:59:52 PM EDT | ABC | 52699
Sep 02,2021 23:59:53 PM EDT | ABC | 70145
Sep 01,2021 23:59:57 PM EDT | ABC | 79071
Sep 14,2021 10:05:16 AM EDT | XYZ | 21
Sep 13,2021 10:32:58 AM EDT | XYZ | 23
Sep 10,2021 11:30:07 AM EDT | XYZ | 22
Sep 09,2021 09:51:28 AM EDT | XYZ | 19
Sep 08,2021 09:56:16 AM EDT | XYZ | 19
Sep 05,2021 04:32:44 AM EDT | XYZ | 19
Sep 02,2021 10:03:06 AM EDT | XYZ | 19
Sep 01,2021 04:32:54 AM EDT | XYZ | 19
Sep 15,2021 04:32:00 AM EDT | PQR | 229
Sep 14,2021 04:31:59 AM EDT | PQR | 268
Sep 13,2021 04:32:03 AM EDT | PQR | 302
Sep 12,2021 04:31:59 AM EDT | PQR | 302
Sep 11,2021 04:32:15 AM EDT | PQR | 297
Sep 10,2021 04:32:00 AM EDT | PQR | 305
Sep 09,2021 04:32:04 AM EDT | PQR | 267
Sep 08,2021 04:32:02 AM EDT | PQR | 267
Sep 07,2021 04:32:12 AM EDT | PQR | 305
Sep 06,2021 04:32:01 AM EDT | PQR | 305
Sep 05,2021 04:31:53 AM EDT | PQR | 195
Sep 04,2021 04:31:52 AM EDT | PQR | 157
Sep 03,2021 04:32:01 AM EDT | PQR | 267
Sep 02,2021 04:31:59 AM EDT | PQR | 157
Sep 01,2021 04:32:53 AM EDT | PQR | 305
Sep 14,2021 10:05:15 AM EDT | DST | 103
Sep 13,2021 10:33:00 AM EDT | DST | 109
Sep 10,2021 11:30:07 AM EDT | DST | 106
Sep 09,2021 04:31:55 AM EDT | DST | 105
Sep 08,2021 09:51:06 AM EDT | DST | 36
Sep 07,2021 15:44:18 PM EDT | DST | 71
Sep 02,2021 04:31:59 AM EDT | DST | 105
Sep 01,2021 16:44:02 PM EDT | DST | 105

 

My requirement is to fill the time gaps for data sources "XYZ" and "DST", and for those gaps the two data sources should have today_count=0.

Thanks in advance.

Regards,


bowesmana
SplunkTrust

@mnj1809 

Oh dear... I have come up with a horrible piece of SPL to get from your final table to the result you want - but I have to believe that it is possible another way 😁

| makeresults
| eval _raw="max_time_each_day	data_source	today_count
Sep 15,2021 07:25:01 AM EDT	ABC	14503
Sep 14,2021 23:59:51 PM EDT	ABC	51570
Sep 13,2021 23:59:57 PM EDT	ABC	56331
Sep 12,2021 23:59:59 PM EDT	ABC	55717
Sep 11,2021 23:59:51 PM EDT	ABC	54480
Sep 10,2021 23:59:49 PM EDT	ABC	65367
Sep 09,2021 23:59:59 PM EDT	ABC	61999
Sep 08,2021 23:59:57 PM EDT	ABC	55405
Sep 07,2021 23:59:51 PM EDT	ABC	62327
Sep 06,2021 23:59:48 PM EDT	ABC	54137
Sep 05,2021 23:59:56 PM EDT	ABC	49224
Sep 04,2021 23:59:54 PM EDT	ABC	47783
Sep 03,2021 23:59:52 PM EDT	ABC	52699
Sep 02,2021 23:59:53 PM EDT	ABC	70145
Sep 01,2021 23:59:57 PM EDT	ABC	79071
Sep 14,2021 10:05:16 AM EDT	XYZ	21
Sep 13,2021 10:32:58 AM EDT	XYZ	23
Sep 10,2021 11:30:07 AM EDT	XYZ	22
Sep 09,2021 09:51:28 AM EDT	XYZ	19
Sep 08,2021 09:56:16 AM EDT	XYZ	19
Sep 05,2021 04:32:44 AM EDT	XYZ	19
Sep 02,2021 10:03:06 AM EDT	XYZ	19
Sep 01,2021 04:32:54 AM EDT	XYZ	19
Sep 15,2021 04:32:00 AM EDT	PQR	229
Sep 14,2021 04:31:59 AM EDT	PQR	268
Sep 13,2021 04:32:03 AM EDT	PQR	302
Sep 12,2021 04:31:59 AM EDT	PQR	302
Sep 11,2021 04:32:15 AM EDT	PQR	297
Sep 10,2021 04:32:00 AM EDT	PQR	305
Sep 09,2021 04:32:04 AM EDT	PQR	267
Sep 08,2021 04:32:02 AM EDT	PQR	267
Sep 07,2021 04:32:12 AM EDT	PQR	305
Sep 06,2021 04:32:01 AM EDT	PQR	305
Sep 05,2021 04:31:53 AM EDT	PQR	195
Sep 04,2021 04:31:52 AM EDT	PQR	157
Sep 03,2021 04:32:01 AM EDT	PQR	267
Sep 02,2021 04:31:59 AM EDT	PQR	157
Sep 01,2021 04:32:53 AM EDT	PQR	305
Sep 14,2021 10:05:15 AM EDT	DST	103
Sep 13,2021 10:33:00 AM EDT	DST	109
Sep 10,2021 11:30:07 AM EDT	DST	106
Sep 09,2021 04:31:55 AM EDT	DST	105
Sep 08,2021 09:51:06 AM EDT	DST	36
Sep 07,2021 15:44:18 PM EDT	DST	71
Sep 02,2021 04:31:59 AM EDT	DST	105
Sep 01,2021 16:44:02 PM EDT	DST	105"
| multikv forceheader=1
| table max* data* today*
| eval _time=strptime(max_time_each_day, "%b %d,%Y"), first_day=relative_time(now(), "@d")
| eval day=round(first_day-_time)
| eval d=mvrange(86400,(16*86400),86400)
| eventstats values(day) as day by data_source
| eval missing=mvmap(d,if(isnull(mvfind(day,d)),d,null()))
| streamstats c by data_source
| fields - d day
| eval missing=case(c=1 AND mvcount(missing)>0,mvappend(missing,_time),c=1 AND isnull(missing),_time, 1==1, _time)
| mvexpand missing
| eval t2=first_day-(missing)
| eval today_count=if(_time!=missing,0,today_count), max_time_each_day=if(_time!=missing,strftime(t2,"%b %d,%Y %H:%M:%S %p %Z"), max_time_each_day), _time=if(_time!=missing,t2,_time)
| fields - missing t2 first_day
| sort data_source -_time

In essence, what this does is find out which dates are missing by:

  • working out which day this row relates to (day)
  • creating an MV field of the expected day offsets by using mvrange over the 15-day range of expected values (d)
  • getting the first entry for each data source (streamstats c)
  • using mvmap (Splunk 8+) to create another MV field of the missing days (missing)
  • making sure that the missing field also contains info for the CURRENT row
  • expanding the missing field to create the extra rows
  • then setting up the values of the empty entries, ensuring that the original row data is not overwritten (_time!=missing)

It's a bit of a hack, but I'm not sure makecontinuous can be used with the grouping. Another way to create the rows is to use an append or join to create the date-range rows and then aggregate back (a rough sketch follows), but that would mean repeating the search in the subsearch to find out which data sources exist, and it would still probably involve some mvexpand.
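
For illustration, a minimal sketch of that append alternative, picking up right after the | table max* data* today* line above (untested, and hardcoding the data source list instead of discovering it with a subsearch):

| eval _time=strptime(max_time_each_day, "%b %d,%Y"), today_count=tonumber(today_count)
| append
    [| makeresults
    | eval data_source=split("ABC,XYZ,PQR,DST", ",")
    | mvexpand data_source
    | eval day=mvrange(relative_time(now(), "-15d@d"), relative_time(now(), "@d"), 86400)
    | mvexpand day
    | eval _time=day, today_count=0
    | fields _time data_source today_count]
| stats max(today_count) as today_count values(max_time_each_day) as max_time_each_day by data_source _time
| sort data_source -_time

The stats merges each real row with its zero stub: max() keeps the real count where one exists, and values() recovers the original timestamp string (the stubs contribute none).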

@ITWhisperer I am sure you can find a better way to do this...

 


ITWhisperer
SplunkTrust

@bowesmana Thanks for the vote of confidence 😀

@mnj1809 If you are not too concerned about the actual times (which I guess you aren't, since you have bin span=1d), you could try something like this:

| makeresults
| eval _raw="max_time_each_day	data_source	today_count
Sep 15,2021 07:25:01 AM EDT	ABC	14503
Sep 14,2021 23:59:51 PM EDT	ABC	51570
Sep 13,2021 23:59:57 PM EDT	ABC	56331
Sep 12,2021 23:59:59 PM EDT	ABC	55717
Sep 11,2021 23:59:51 PM EDT	ABC	54480
Sep 10,2021 23:59:49 PM EDT	ABC	65367
Sep 09,2021 23:59:59 PM EDT	ABC	61999
Sep 08,2021 23:59:57 PM EDT	ABC	55405
Sep 07,2021 23:59:51 PM EDT	ABC	62327
Sep 06,2021 23:59:48 PM EDT	ABC	54137
Sep 05,2021 23:59:56 PM EDT	ABC	49224
Sep 04,2021 23:59:54 PM EDT	ABC	47783
Sep 03,2021 23:59:52 PM EDT	ABC	52699
Sep 02,2021 23:59:53 PM EDT	ABC	70145
Sep 01,2021 23:59:57 PM EDT	ABC	79071
Sep 14,2021 10:05:16 AM EDT	XYZ	21
Sep 13,2021 10:32:58 AM EDT	XYZ	23
Sep 10,2021 11:30:07 AM EDT	XYZ	22
Sep 09,2021 09:51:28 AM EDT	XYZ	19
Sep 08,2021 09:56:16 AM EDT	XYZ	19
Sep 05,2021 04:32:44 AM EDT	XYZ	19
Sep 02,2021 10:03:06 AM EDT	XYZ	19
Sep 01,2021 04:32:54 AM EDT	XYZ	19
Sep 15,2021 04:32:00 AM EDT	PQR	229
Sep 14,2021 04:31:59 AM EDT	PQR	268
Sep 13,2021 04:32:03 AM EDT	PQR	302
Sep 12,2021 04:31:59 AM EDT	PQR	302
Sep 11,2021 04:32:15 AM EDT	PQR	297
Sep 10,2021 04:32:00 AM EDT	PQR	305
Sep 09,2021 04:32:04 AM EDT	PQR	267
Sep 08,2021 04:32:02 AM EDT	PQR	267
Sep 07,2021 04:32:12 AM EDT	PQR	305
Sep 06,2021 04:32:01 AM EDT	PQR	305
Sep 05,2021 04:31:53 AM EDT	PQR	195
Sep 04,2021 04:31:52 AM EDT	PQR	157
Sep 03,2021 04:32:01 AM EDT	PQR	267
Sep 02,2021 04:31:59 AM EDT	PQR	157
Sep 01,2021 04:32:53 AM EDT	PQR	305
Sep 14,2021 10:05:15 AM EDT	DST	103
Sep 13,2021 10:33:00 AM EDT	DST	109
Sep 10,2021 11:30:07 AM EDT	DST	106
Sep 09,2021 04:31:55 AM EDT	DST	105
Sep 08,2021 09:51:06 AM EDT	DST	36
Sep 07,2021 15:44:18 PM EDT	DST	71
Sep 02,2021 04:31:59 AM EDT	DST	105
Sep 01,2021 16:44:02 PM EDT	DST	105"
| multikv forceheader=1
| table max* data* today*


| eval day=strptime(max_time_each_day,"%b %d,%Y")
| fieldformat day=strftime(day,"%Y/%m/%d")
| xyseries day data_source today_count
| makecontinuous day
| fillnull value=0
| untable day data_source today_count

bowesmana
SplunkTrust

@ITWhisperer 

Knew it!

xyseries/makecontinuous/untable - neat - solves the issue of the split clause. Good technique

 


mnj1809
Path Finder

Thanks for making my day, Champ 🙂 Actually, I am concerned about the actual times, so your previous solution works perfectly. Only one last thing is left: suppose I execute the SPL query today and a data source has not sent any data today; in this case I need event_count=0 for today as well, and as soon as I get data for that data source the event_count should be the actual count.


ITWhisperer
SplunkTrust

Concatenate the count and time, then split them up again after the untable.

| makeresults
| eval _raw="max_time_each_day	data_source	today_count
Sep 15,2021 07:25:01 AM EDT	ABC	14503
Sep 14,2021 23:59:51 PM EDT	ABC	51570
Sep 13,2021 23:59:57 PM EDT	ABC	56331
Sep 12,2021 23:59:59 PM EDT	ABC	55717
Sep 11,2021 23:59:51 PM EDT	ABC	54480
Sep 10,2021 23:59:49 PM EDT	ABC	65367
Sep 09,2021 23:59:59 PM EDT	ABC	61999
Sep 08,2021 23:59:57 PM EDT	ABC	55405
Sep 07,2021 23:59:51 PM EDT	ABC	62327
Sep 06,2021 23:59:48 PM EDT	ABC	54137
Sep 05,2021 23:59:56 PM EDT	ABC	49224
Sep 04,2021 23:59:54 PM EDT	ABC	47783
Sep 03,2021 23:59:52 PM EDT	ABC	52699
Sep 02,2021 23:59:53 PM EDT	ABC	70145
Sep 01,2021 23:59:57 PM EDT	ABC	79071
Sep 14,2021 10:05:16 AM EDT	XYZ	21
Sep 13,2021 10:32:58 AM EDT	XYZ	23
Sep 10,2021 11:30:07 AM EDT	XYZ	22
Sep 09,2021 09:51:28 AM EDT	XYZ	19
Sep 08,2021 09:56:16 AM EDT	XYZ	19
Sep 05,2021 04:32:44 AM EDT	XYZ	19
Sep 02,2021 10:03:06 AM EDT	XYZ	19
Sep 01,2021 04:32:54 AM EDT	XYZ	19
Sep 15,2021 04:32:00 AM EDT	PQR	229
Sep 14,2021 04:31:59 AM EDT	PQR	268
Sep 13,2021 04:32:03 AM EDT	PQR	302
Sep 12,2021 04:31:59 AM EDT	PQR	302
Sep 11,2021 04:32:15 AM EDT	PQR	297
Sep 10,2021 04:32:00 AM EDT	PQR	305
Sep 09,2021 04:32:04 AM EDT	PQR	267
Sep 08,2021 04:32:02 AM EDT	PQR	267
Sep 07,2021 04:32:12 AM EDT	PQR	305
Sep 06,2021 04:32:01 AM EDT	PQR	305
Sep 05,2021 04:31:53 AM EDT	PQR	195
Sep 04,2021 04:31:52 AM EDT	PQR	157
Sep 03,2021 04:32:01 AM EDT	PQR	267
Sep 02,2021 04:31:59 AM EDT	PQR	157
Sep 01,2021 04:32:53 AM EDT	PQR	305
Sep 14,2021 10:05:15 AM EDT	DST	103
Sep 13,2021 10:33:00 AM EDT	DST	109
Sep 10,2021 11:30:07 AM EDT	DST	106
Sep 09,2021 04:31:55 AM EDT	DST	105
Sep 08,2021 09:51:06 AM EDT	DST	36
Sep 07,2021 15:44:18 PM EDT	DST	71
Sep 02,2021 04:31:59 AM EDT	DST	105
Sep 01,2021 16:44:02 PM EDT	DST	105"
| multikv forceheader=1
| table max* data* today*


| eval day=strptime(max_time_each_day,"%b %d,%Y")
| fieldformat day=strftime(day,"%Y/%m/%d")
| eval today_count=today_count."!".max_time_each_day
| xyseries day data_source today_count
| makecontinuous day
| fillnull value=0
| untable day data_source today_count
| eval today_count=split(today_count,"!")
| eval max_time_each_day=mvindex(today_count,1)
| eval today_count=mvindex(today_count,0)
| eval max_time_each_day=if(isnull(max_time_each_day),strftime(day,"%b %d,%Y"),max_time_each_day)
| fields - day
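
One follow-up on the "no data at all today" case: makecontinuous only fills gaps inside the range of day values that already exist, so if no data source has ingested anything yet today, today's date won't appear for any of them. An untested workaround sketch (using a hypothetical dummy source, not part of the original answer) is to append a stub row for today just before the xyseries and discard it after the untable, leaving the split/mvindex steps unchanged:

| append
    [| makeresults
    | eval day=relative_time(now(), "@d"), data_source="__today__", today_count="0"]
| xyseries day data_source today_count
| makecontinuous day
| fillnull value=0
| untable day data_source today_count
| where data_source!="__today__"

Because the filled-in 0 values contain no "!", mvindex(today_count,1) returns null for them and max_time_each_day falls back to the date, exactly as in the original answer.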

mnj1809
Path Finder

@ITWhisperer Thank you so much for your efforts. 🙂
