I've got a lookup table with counts by date. This table is updated each night, and I would like to search by the date fields relative to the current date. Example:
| 5-26-2020 | 5-27-2020 | 5-28-2020 |
|-----------|-----------|-----------|
| 12        | 30        | 15        |
| 10        | 10        | 8         |
| 19        | 12        | 15        |
| inputlookup counts.csv | eval today=strftime(now(),"%m-%d-%Y") | stats sum(**today**)
I'm thinking of something akin to the INDIRECT function in Excel.
There is something akin to indirection within eval, but it doesn't do what you seek. Put {} around a field name to treat that field as a "pointer" to another field. For example,
... | eval foo = "date" | eval {foo} = strftime(_time,"%m-%d-%Y")
is a simplistic example that assigns the human-readable version of _time to a field called "date". Imagine 'foo' being read from a data source and you can see how this becomes useful.
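Going the other direction — reading a field whose name is only known at search time — there is no {} syntax, but the foreach command's <<FIELD>> token can emulate it. A hedged sketch against the question's lookup (assuming the column names match strftime's zero-padded output):

    | inputlookup counts.csv
    | eval today=strftime(now(),"%m-%d-%Y")
    | foreach * [ eval todays_count = if("<<FIELD>>" == today, '<<FIELD>>', todays_count) ]
    | stats sum(todays_count)

Note that %m and %d are zero-padded ("05-26-2020"), so the lookup's column names must use the same padding for the comparison to match.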
I'm not aware of an SPL construct that lets you set the name of a field from an arbitrary string. Consider posting a use case for that ability at https://ideas.splunk.com.
If I understand correctly, you are saying that the field name is the date, in "%m-%d-%Y" order.
As an initial aside: please, please, please use ISO date order; you will save yourself loads of grief. "2020-05-02" is unambiguous. "05-02-2020" could be May 2 or Feb 5.
That being said, you are asking for the sum of all fields where the name of the field is a certain date. You can use the untable command to put the name of each field on a record into one field with an arbitrary name, and the value into a second field with a second arbitrary name. I picked "fieldname" and "fieldvalue" for the example, but the names could have been "fred" and "wilma".
| inputlookup mydata.csv
| stats sum(*) as *
| eval placebo=1
| untable placebo fieldname fieldvalue
| where fieldname = strftime(now(),"%m-%d-%Y")
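With the sample counts from the question, the intermediate result after untable (before the where filter) would look roughly like this:

    placebo  fieldname   fieldvalue
    1        5-26-2020   41
    1        5-27-2020   52
    1        5-28-2020   38

The where clause then keeps only the row whose fieldname equals today's date. The same caveat about padding applies here: strftime produces zero-padded "%m-%d-%Y" values, so the lookup's column names must be in that exact format for the match to succeed.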