Hey Guys,
I want to create a table where I can check the total events from two different time ranges. Here is an example.
This search contains 971 events in total:
source="tutorialdata.zip:*" host="ubuntu-js" source="tutorialdata.zip:./mailsv/secure.log" earliest="2/10/2019:00:00:00" latest="2/10/2019:23:59:59"
The second search contains 1604 events in total:
search source="tutorialdata.zip:*" host="ubuntu-js" source="tutorialdata.zip:./mailsv/secure.log" earliest="2/9/2019:00:00:00" latest="2/9/2019:23:59:59"
What I'm trying to do is combine these two searches using a subsearch:
source="tutorialdata.zip:*" host="ubuntu-js" source="tutorialdata.zip:./mailsv/secure.log" earliest="2/10/2019:00:00:00" latest="2/10/2019:23:59:59" [
search source="tutorialdata.zip:*" host="ubuntu-js" source="tutorialdata.zip:./mailsv/secure.log" earliest="2/9/2019:00:00:00" latest="2/9/2019:23:59:59" |
stats count as y |
table y] |
stats count as x |
table x, y
x should be 971 and y should be 1604, but instead x shows the value 0 and y is empty.
I'm still a newbie with Splunk. Can someone explain to me what I could do?
There are multiple ways to do it.
*Best method (IMO)*
A single search fetches the data for both time ranges, and each count is then calculated by comparing _time against the static boundary between the two ranges.
source="tutorialdata.zip:*" host="ubuntu-js" source="tutorialdata.zip:./mailsv/secure.log" earliest="2/9/2019:00:00:00" latest="2/10/2019:23:59:59"
| eval x=if(_time>=strptime("2/10/2019:00:00:00","%m/%d/%Y:%H:%M:%S"), 1, 0)
| eval y=if(_time<strptime("2/10/2019:00:00:00","%m/%d/%Y:%H:%M:%S"), 1, 0)
| stats sum(x) as x sum(y) as y
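For reference, the same split can be written more compactly with `count(eval(...))` inside `stats`, a standard SPL idiom for conditional counting. A sketch using the same boundary between the two days:

```
source="tutorialdata.zip:*" host="ubuntu-js" source="tutorialdata.zip:./mailsv/secure.log" earliest="2/9/2019:00:00:00" latest="2/10/2019:23:59:59"
| stats count(eval(_time>=strptime("2/10/2019:00:00:00","%m/%d/%Y:%H:%M:%S"))) as x
        count(eval(_time<strptime("2/10/2019:00:00:00","%m/%d/%Y:%H:%M:%S"))) as y
```

This avoids the intermediate eval fields but computes the same two counts.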
*Simpler-looking search*
Uses a subsearch, so it has limitations and is not optimal performance-wise, but it is easy to understand.
source="tutorialdata.zip:*" host="ubuntu-js" source="tutorialdata.zip:./mailsv/secure.log" earliest="2/10/2019:00:00:00" latest="2/10/2019:23:59:59" | stats count as x
| appendcols [search source="tutorialdata.zip:*" host="ubuntu-js" source="tutorialdata.zip:./mailsv/secure.log" earliest="2/9/2019:00:00:00" latest="2/9/2019:23:59:59" | stats count as y]
Subsearches can be confusing. To help understand them better, keep these points in mind:
The subsearch executes first.
The results of the subsearch become part of the main query.
That means a query like `index=foo source=bar [ index=foo | stats count | table count ]`
is equivalent to `index=foo source=bar count=3` (assuming the subsearch counted 3 events), which will only work if index `foo` contains a field called `count` and at least one event where count is 3.
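Applying that to the search in the question: the subsearch returns a single result, y=1604, so after substitution the outer search effectively becomes:

```
source="tutorialdata.zip:*" host="ubuntu-js" source="tutorialdata.zip:./mailsv/secure.log" earliest="2/10/2019:00:00:00" latest="2/10/2019:23:59:59" y=1604
```

Since no raw event in secure.log has a field called y, this matches nothing, which is why x comes out as 0 and y is empty in the final table.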
In this case, the better approach is to create a single search over both days and bucket the results.
source="tutorialdata.zip:*" host="ubuntu-js" source="tutorialdata.zip:./mailsv/secure.log" earliest="2/9/2019:00:00:00" latest="2/10/2019:23:59:59"
| bucket span=1d _time
| stats count as y by _time