I want to run a Splunk search for every value in a CSV file, substituting each value into the search as a field. I've imported the file into Splunk as a lookup table, and I can view its fields with an inputlookup query. Now I want to pass each ID into a set of subsearches that fetch the maximum count on a per-hour, per-day, per-week, and per-month basis.
The input file is ids.csv, which has around 800 rows and a single column, like this:
1234,
2345
2346
4567
...
The query I'm using:
| inputlookup ids.csv
| fields ids as id
| [search index="abc" id "search string here" | bin _time span="1hour" | stats count as maxHour by _time | sort - count | head 1]
| appendcols [search index="abc" id "search string here" | bin _time span="1day" | stats count as maxDay by _time | sort - count | head 1]
| appendcols [search index="abc" id "search string here" | bin _time span="1week" | stats count as maxWeek by _time | sort - count | head 1]
| appendcols [search index="abc" id "search string here" | bin _time span="1month" | stats count as maxMonth by _time | sort - count | head 1]
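For context, the shape of per-ID iteration I'm after is roughly the following, using map to run one search per lookup row (this is just a sketch of the idea for the hourly case, not a working query; the field name id and the maxsearches value are my assumptions):

| inputlookup ids.csv
| rename ids as id
| map maxsearches=800 search="search index=abc id=$id$
    | bin _time span=1h
    | stats count by _time
    | stats max(count) as maxHour
    | eval id=$id$"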
This isn't returning the expected results. I'm expecting tabular output where each row has the ID plus the count for each time range, with the ID field passed into each subsearch, roughly:

id      maxHour   maxDay   maxWeek   maxMonth
1234    <count>   <count>  <count>   <count>
2345    <count>   <count>  <count>   <count>

How can I solve this?
Thanks