I have two searches.
This search gives me the total license volume we have:
index=_internal host=licenseservername source=*license* | head 1 | fields poolsz
This search gives me the total license volume used so far today:
| rest /services/licenser/pools | rename title AS Pool
| search [rest splunk_server=local /services/licenser/groups
| search is_active=1 | eval stack_id=stack_ids | fields stack_id]
| eval licenseUsed=max(used_bytes) | fields licenseUsed
I have tried a variety of ways to combine these two searches so that I can get the percentage of license used so far today, but nothing works.
How can I combine them to get TotalLicense, LicenseUsed, and PercentUsed?
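A minimal sketch of one way to get all three fields in a single search, using only the license_usage.log fields (b and poolsz) instead of the rest endpoint — this assumes a single license pool and that poolsz is reported in bytes, and it takes the same stats-based approach as the answers that follow:

index=_internal source=*license_usage.log type=Usage earliest=@d | stats sum(b) AS LicenseUsed max(poolsz) AS TotalLicense | eval PercentUsed=round(LicenseUsed/TotalLicense*100,2)

With multiple pools you would add `by pool` to the stats clause, as the answers below do.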
index=_internal source=*usage.log type=Usage | stats sum(b) as b max(poolsz) as poolsz by pool | eval percent = b/poolsz * 100
This should give you what you want, though it may take a while to run over a long time range. I use the search below to summarize every hour, and then I can run an eval over as long a time range as the summary index covers. That way I can go back six months and check indexing for reporting purposes; by default Splunk only keeps 2-4 weeks of these logs.
To make summary every hour:
index=_internal source=*usage.log type=Usage | eval category="splunk_metric" | eval subcategory="indexing" | eval src_type="license_usage" | stats sum(b) as b by st h s pool poolsz category subcategory src_type | collect index=summary
To eval the summary:
index=summary category=splunk_metric subcategory=indexing src_type=license_usage | stats sum(b) as b max(poolsz) as poolsz by pool| eval percent = b/poolsz*100
Here is a small modification that charts the last 30 days. It works for a single pool but could be adapted for multiple pools. It is good for seeing how things are changing over time, and it is efficient:
index=_internal source=*license_usage.log type=RollOverSummary | timechart sum(b) as b max(poolsz) AS poolsz span=1d | eval PercentLicense = round((b/poolsz * 100),1) | fields - b poolsz
Here is a small modification that shows how much license each index is using. It is best run over a fixed window such as "yesterday" (or some other 24-hour period):
index=_internal source=*license_usage.log idx=* | stats sum(b) as b max(poolsz) as poolsz by pool, idx | eval percent = round(b/poolsz * 100,1) | eval GB=round(b/1024/1024/1024) | eval "License GB"=round(poolsz/1024/1024/1024) | fields - b poolsz | addcoltotals labelfield=idx label=Total GB percent
Try this:
index=_internal source=*license_usage.log* type=RollOverSummary | stats sum(eval(b/1024/1024/1024)) AS "Volume Used (GB)" sum(eval(b/poolsz*100)) AS "Quota Percentage Used" by pool | addcoltotals labelfield=pool label=Total | sort - "Volume Used (GB)"
Yeah. Very useful for knowing what happened yesterday. I am also keeping a copy of this. Nice.
This works for a daily report of volume used, but not for a report that runs throughout the day.
I came up with this:
earliest=@d index=_internal source=*license_usage.log type=Usage | stats sum(b) as BytesUsed max(poolsz) as PoolSize by pool | eval percent = BytesUsed/PoolSize | eval LicenseUsedInGB=round(BytesUsed/1024/1024/1024,2) | eval PoolSizeInGB=PoolSize/1024/1024/1024 | eval PercentUsed=round(percent*100,3) | fields PoolSizeInGB LicenseUsedInGB PercentUsed
It's very, very close. The b field isn't updated quite as regularly as the rest endpoint, which I think is close to real time; the log only gets written every so often, but the result should be very close.
Ah. I thought the query was off. My bad: my time picker was off. Here is the query for "as of now, today":
earliest=@d index=_internal source=*usage.log type=Usage | stats sum(b) as b max(poolsz) as poolsz by pool | eval percent = b/poolsz * 100