Hi,
I have the following query to report on license utilization, and I now want to filter it down to specific slave indexers:
index=_internal source=*license_usage.log type="Usage"
| join type=left i [
| rest count=0 /services/licenser/slaves
| rename label as slave
| rename title as i
| table i slave
]
| eval indexname = if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| eval sourcetypename = st
| bin _time span=1d
| stats sum(b) as b by _time, slave, st
| eval GB=round(b/1024/1024/1024, 3)
| fields _time, slave, st, GB
How would I do that, and where's the best place to put the filter in this query from a performance perspective? For example, I want a specific set of slaves rather than all of them, e.g. slave=myservera OR slave=myserverb.
Since you asked about performance...

You can combine the renames: rename label as slave, title as i

You can combine the evals: eval indexname = if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx), sourcetypename = st

sourcetypename is never used... so remove that? And the byte conversion can be written as eval GB=round(b / pow(1024, 3), 3)
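Putting those suggestions together, the search would look roughly like this (an untested sketch; same logic as the original, just with the combined rename, the unused sourcetypename dropped, and the pow() form of the conversion):

index=_internal source=*license_usage.log type="Usage"
| join type=left i [
| rest count=0 /services/licenser/slaves
| rename label as slave, title as i
| table i slave
]
| eval indexname = if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, slave, st
| eval GB=round(b / pow(1024, 3), 3)
| fields _time, slave, st, GB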
An alternate approach to this is a dashboard where an input field lists the indexers (use the guid, i.e. the title field from the REST call, as the value and the label as the display name). The token would then get passed into the main panel, where you run this search filtered by the selection.
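As a rough sketch of that idea (the token name slave_guid is just an example; the input would use title as its value field and label as its display field), the input could be populated with:

| rest count=0 /services/licenser/slaves
| fields title label

and the panel search would then filter on the selected guid as early as possible, e.g.:

index=_internal source=*license_usage.log type="Usage" i=$slave_guid$
| fields _time, st, b, i
| bin _time span=1d
| stats sum(b) as GB by _time, st, i
| eval GB=round(GB/1024/1024/1024, 3)

For a multiselect input you would set the token's prefix/suffix so it expands to something like (i=guid1 OR i=guid2).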
Try filtering as early as possible, and use fields to drop any fields you don't need.
Also, if you are going to use both stats and join, summarise your data with stats before you join.
Your initial query took 150 seconds to complete in my lab.
The query below took 22 seconds.
index=_internal source=*license_usage.log type="Usage"
| fields _time, st, b, i
| bin _time span=1d
| stats sum(b) as GB by _time, st, i
| eval GB=round(GB/1024/1024/1024, 3)
| rename st as sourcetypename
| join type=left i [
| rest count=0 /services/licenser/slaves
| rename label as slave
| rename title as i
| table i slave
]
Thanks, so I'm trying to combine the two answers here - where would you put the lookup on this one?
I wouldn't even bother creating and maintaining a lookup, to be honest. But that's just my personal preference.
If you want to filter by just a few specific slaves, simply apply the filter (search or where) in the rest call inside the join.
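For example, taking the faster search above (myservera and myserverb are just the placeholders from the question), the filter sits inside the join subsearch:

index=_internal source=*license_usage.log type="Usage"
| fields _time, st, b, i
| bin _time span=1d
| stats sum(b) as GB by _time, st, i
| eval GB=round(GB/1024/1024/1024, 3)
| rename st as sourcetypename
| join type=left i [
| rest count=0 /services/licenser/slaves
| rename label as slave
| rename title as i
| search slave=myservera OR slave=myserverb
| table i slave
]

Note that with type=left the rows from the other indexers are still kept, just without a slave value; if you only want the selected slaves in the output, add | where isnotnull(slave) after the join (or use the default inner join).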
Of course a lookup is faster than running a REST query, but unless the number of slaves is huge (thousands) the difference is so tiny that I don't see much benefit in creating a lookup that you then have to keep up to date.
Hope that makes sense.
If you give me more details on exactly what you are trying to filter on, I might be able to help more.
Thanks. We manage a license file for many different Splunk instances (12 of them), so I like the idea of a lookup. It lets me easily separate the searches and know exactly what is being reported on.
The rule is to insert your filter as early as possible, ideally in the first search.
To search for a set of slaves, the best way is to create a lookup containing your slaves and use it in your search:
index=_internal source=*license_usage.log type="Usage"
| join type=left i [
| rest count=0 /services/licenser/slaves
| rename label as slave
| search [ | inputlookup slave_lookup.csv | table slave ]
| rename title as i
| table i slave
]
| eval indexname = if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| eval sourcetypename = st
| bin _time span=1d
| stats sum(b) as b by _time, slave, st
| eval GB=round(b/1024/1024/1024, 3)
| fields _time, slave, st, GB
You could also populate the slave values in the lookup with a scheduled report that runs every night, or more frequently.
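A sketch of such a scheduled report, assuming the lookup is named slave_lookup.csv as in the search above and the report runs on the license master:

| rest count=0 /services/licenser/slaves
| rename label as slave
| rename title as i
| table i slave
| outputlookup slave_lookup.csv

This writes every current slave into the lookup; if the lookup should only contain the subset you want to report on, add a search slave=... before the outputlookup, or maintain the file by hand.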
Bye.
Giuseppe