Hello community.
We have a clustered architecture with 5 indexes. We have detected high license consumption and are trying to identify the sources that generate it. I am using the following search to find out which host in our Windows event log index consumes the most license:
index=_internal type="Usage" idx=wineventlog
| eval MB=round(b/1024/1024, 2)
| stats sum(MB) as "Consumo de Licencia (MB)" by h
| rename h as "Host"
| sort -"Consumo de Licencia (MB)"
With this search I can see the hosts and their consumption in megabytes, but some rows have no value in the h field, so I cannot identify those hosts. I need to know which they are, because the sum of those blank-host rows adds up to a high license consumption. What could be the cause of that?
These are the events from the unknown host:
I cannot identify what they are: whether they belong to a specific host, a Splunk component, or something else that is causing this license increase.
Regards
@sgarcia
Blank h (host) values in the license usage logs are due to Splunk's "squashing" process. When there are too many unique (source, host) pairs to track individually, the license manager stops recording the s and h values for that data in order to save memory and storage; the sourcetype (st) and index (idx) values are still reported, so the measured volume stays accurate.
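To confirm that squashing is actually happening, you can look for squash-related warnings in splunkd.log on the indexers. The exact message text varies by version, so treat this as a sketch:
index=_internal sourcetype=splunkd squash*
| stats count by host, component
| sort -count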
If possible, analyze your actual event data. For example:
index=wineventlog
| eval bytes=len(_raw)
| stats sum(bytes) as bytes by host
| eval GB=round(bytes/1024/1024/1024, 2)
| sort -GB
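If you want to see how that raw volume breaks down per day, so you can line it up with the daily license usage figures, a per-day variant of the same search could look like this (the 7-day window is just an example):
index=wineventlog earliest=-7d@d latest=@d
| eval bytes=len(_raw)
| bin _time span=1d
| stats sum(bytes) as bytes by _time, host
| eval GB=round(bytes/1024/1024/1024, 2)
| sort _time, -GB
Note that len(_raw) is only an approximation of the licensed volume, but it is usually close enough to spot the noisy hosts.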
Also check the license usage reports and split them by host for the last 60 days, if available.
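If you only need the daily totals over a longer window, the daily rollover events in license_usage.log can be charted directly. This assumes the type=RolloverSummary events and their b (bytes) field, which is how the daily total per pool is recorded:
index=_internal source=*license_usage.log* type=RolloverSummary earliest=-60d@d
| eval GB=round(b/1024/1024/1024, 2)
| timechart span=1d sum(GB) as "Daily license usage (GB)"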
Or use metrics.log to identify which hosts have high throughput:
index="_internal" source="*metrics.log" group="per_host_thruput"
Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
Hello @sgarcia,
This behavior in the logs is because the squash threshold limit is being hit for license_usage.log, which blanks the h and s fields (the st and idx fields are still reported). An additional way to measure ingestion volume is metrics.log, using the per_host_thruput, per_source_thruput, or per_sourcetype_thruput groups. In those groups, you can look at the series field to see which component has the highest volume.
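Because st and idx are not squashed, you can also narrow things down from license_usage.log itself by splitting on those fields instead of h, reusing the same fields as the original search:
index=_internal source=*license_usage.log* type="Usage"
| eval MB=round(b/1024/1024, 2)
| stats sum(MB) as MB by st, idx
| sort -MB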
Additionally, the squash_threshold can be configured in limits.conf, but it is NOT advisable to update the limits without consulting Splunk Support because it can cause heavy memory issues if increased from the default value.
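For reference only, and not as a recommendation to change it, the setting lives in the [license] stanza of limits.conf on the indexers; the default value is shown below:
# limits.conf on each indexer - reference only; do not raise this
# without guidance from Splunk Support, as it increases memory usage.
[license]
squash_threshold = 2000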
Thanks,
Tejas.