Hi Legends,
I want to know whether it is possible to create this type of Splunk query.
We want a query that will pull 2 sets of data. For example, if I run a query with the time picker set to the last 4 hours, it will pull the data for the last 4 hours from the current time, i.e. 09/03/2023 11:30 AM to 09/03/2023 03:30 PM; along with this data it should also pull the previous month's data for the same timeframe, i.e. 09/02/2023 11:30 AM to 09/02/2023 03:30 PM.
The purpose of this query is to see the month-on-month growth.
You can use the info times to modify your index search
<your index> [| makeresults
| fields - _time
| addinfo
| eval row=mvrange(0,2)
| mvexpand row
| eval earliest=relative_time(info_min_time, "-".row."mon")
| eval latest=relative_time(info_max_time, "-".row."mon")
| fields earliest latest]
The subsearch is executed first; it retrieves the earliest/latest times from the time picker (info_min_time/info_max_time) and creates two rows with those times, one month apart. Bear in mind that if the current month is longer than the previous month you could end up with two dates in the same month.
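If you want to sanity-check the two windows first, you can run the subsearch body on its own (with the same time picker selection) and format the epoch values; the *_readable field names below are just for display:
| makeresults
| fields - _time
| addinfo
| eval row=mvrange(0,2)
| mvexpand row
| eval earliest=relative_time(info_min_time, "-".row."mon")
| eval latest=relative_time(info_max_time, "-".row."mon")
| eval earliest_readable=strftime(earliest, "%F %T"), latest_readable=strftime(latest, "%F %T")
| table row earliest_readable latest_readable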
@ITWhisperer Thanks for your response. I tried your query and it gives me the earliest and latest timeframes. The ask here is that I need the actual data, i.e. the count of my requests, for these timeframes. For example, the query below gives me the total hits of my API:
index="index_name" AND apiName!="" AND apiName=API_NAME
| search responseCode!=NULL
| stats count as Hits by apiName
| rename apiName as "API Name"
| table "API Name" Hits
Now, if I select 4 hours from the time picker, I want a query which will pull 2 sets of data:
a.) it will pull the data for the last 4 hours from the current time, i.e. 13/03/2023 11:30 AM to 13/03/2023 03:30 PM
b.) it should also pull the previous month's data for the same timeframe, i.e. 13/02/2023 11:30 AM to 13/02/2023 03:30 PM.
index="index_name" AND apiName!="" AND apiName=API_NAME [| makeresults
| fields - _time
| addinfo
| eval row=mvrange(0,2)
| mvexpand row
| eval earliest=relative_time(info_min_time, "-".row."mon")
| eval latest=relative_time(info_max_time, "-".row."mon")
| fields earliest latest]
| bin _time span=1d
| search responseCode!=NULL
| stats count as Hits by _time apiName
| rename apiName as "API Name"
@ITWhisperer Wow!!! This is exactly what I needed.
I need one more piece of help: both days' counts are being stored in the same fields, i.e. Hits, _time and apiName, like below:
_time        apiName    Hits
2023-02-14   apiName    15416
2023-03-14   apiName    16441
How can I segregate these into two separate fields, so that I can calculate the percentage increase in Hits?
I know I am asking a lot, since I am very new to Splunk. Please help me out.
index="index_name" AND apiName!="" AND apiName=API_NAME [| makeresults
| fields - _time
| addinfo
| eval row=mvrange(0,2)
| mvexpand row
| eval earliest=relative_time(info_min_time, "-".row."mon")
| eval latest=relative_time(info_max_time, "-".row."mon")
| fields earliest latest]
| bin _time span=1d
| search responseCode!=NULL
| stats count as Hits by _time apiName
| xyseries apiName _time Hits
| rename apiName as "API Name"
The issue with this is that your column names will be epoch times, i.e. seconds since 1st Jan 1970. To fix this you could do something like this:
| foreach 1*
[| eval newname=strftime(<<FIELD>>,"%F")
| eval {newname}='<<FIELD>>'
| fields - <<FIELD>> newname]
This assumes that epoch times begin with 1, which they will for the next decade or so.
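For example, for a Hits column whose name happens to be the epoch time 1676379600, the foreach body effectively expands to:
| eval newname=strftime(1676379600,"%F")
| eval {newname}='1676379600'
| fields - 1676379600 newname
so newname becomes 2023-02-14, the {newname} syntax creates a field with that date as its name and copies the original column's value into it, and the final fields command removes the epoch-named column and the temporary newname field.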
@ITWhisperer Thanks for your continued support. I tried your query and got the below output:
index="indexName" AND apiName!="" AND apiName=apiName[| makeresults
| fields - _time
| addinfo
| eval row=mvrange(0,2)
| mvexpand row
| eval earliest=relative_time(info_min_time, "-".row."mon")
| eval latest=relative_time(info_max_time, "-".row."mon")
| fields earliest latest]
| bin _time span=1d
| search responseCode!=NULL
| stats count as Hits by _time apiName
| xyseries apiName _time Hits
| rename apiName as "API Name"
| foreach 1*
[| eval newname=strftime(Hits,"%F")
| eval oldname='Hits'
| fields - oldname newname]
API Name   1676379600   1678798800
apiName    23302        22722
How can I set the column names to human-readable values instead of numbers?
Try using the code exactly as I showed with the <<FIELD>> and braces around newname.
<<FIELD>> is substituted by the foreach command for each field name in the list. Putting braces around the fieldname on the left hand side of the eval uses the contents of the field as the new field name.
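If you still want the percentage increase itself, one way to finish it off is to skip the xyseries and let stats pick out the older and newer counts per API. This is just a rough sketch: it assumes exactly two time windows, each falling within a single day, and the "Last Month Hits" / "Current Hits" / "Growth %" field names are only illustrative:
index="index_name" AND apiName!="" AND apiName=API_NAME [| makeresults
| fields - _time
| addinfo
| eval row=mvrange(0,2)
| mvexpand row
| eval earliest=relative_time(info_min_time, "-".row."mon")
| eval latest=relative_time(info_max_time, "-".row."mon")
| fields earliest latest]
| bin _time span=1d
| search responseCode!=NULL
| stats count as Hits by _time apiName
| stats earliest(Hits) as "Last Month Hits" latest(Hits) as "Current Hits" by apiName
| eval "Growth %"=round(('Current Hits'-'Last Month Hits')/'Last Month Hits'*100, 2)
| rename apiName as "API Name"
Here earliest() and latest() are time-based stats functions, so they pick the Hits value from the oldest and newest _time rows for each apiName.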
I'm currently putting together a dashboard on "the good", "the bad" and "the ugly" uses of subsearches. I always tend to view them in the bad-to-ugly category, but this is a good example, and it's a great use case for "macro-isation".
The way to calculate a second time range based on the first time range from the picker is to have a background search that does that calculation and sets a new token that can then be used to search both the current and the previous range.
Here are two examples that show how it's done.
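A minimal sketch of that pattern in Simple XML might look like this; the token names (picked_time, prev_earliest, prev_latest) and the example panel query are only illustrative:
<form>
  <fieldset submitButton="false">
    <input type="time" token="picked_time">
      <label>Time range</label>
      <default>
        <earliest>-4h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <!-- Background search: runs over the picked range and publishes "one month earlier" tokens -->
  <search>
    <query>| makeresults
| addinfo
| eval prev_earliest=relative_time(info_min_time, "-1mon")
| eval prev_latest=relative_time(info_max_time, "-1mon")</query>
    <earliest>$picked_time.earliest$</earliest>
    <latest>$picked_time.latest$</latest>
    <done>
      <set token="prev_earliest">$result.prev_earliest$</set>
      <set token="prev_latest">$result.prev_latest$</set>
    </done>
  </search>
  <row>
    <panel>
      <table>
        <search>
          <query>index="index_name" apiName=API_NAME ((earliest=$picked_time.earliest$ latest=$picked_time.latest$) OR (earliest=$prev_earliest$ latest=$prev_latest$))
| bin _time span=1d
| stats count as Hits by _time apiName</query>
        </search>
      </table>
    </panel>
  </row>
</form>
The background search re-dispatches whenever the time input changes, and the <done> handler copies the computed epoch values into tokens that any panel search can then reference alongside the picked range.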