
High license consumption in Splunk

sgarcia
Explorer

Hello community.


We have a clustered architecture with 5 indexes. We have detected high license consumption and are trying to identify the sources that generate it. I am using the following search to find out which host in the wineventlog (Windows) index consumes the most license:

index=_internal type="Usage" idx=wineventlog
| eval MB=round(b/1024/1024, 2)
| stats sum(MB) as "Consumo de Licencia (MB)" by h
| rename h as "Host"
| sort -"Consumo de Licencia (MB)"

With this search I can see the hosts and their consumption in megabytes, but some rows have no value at all in the h field, so I cannot identify those hosts. The sum of all of them adds up to a high license consumption. What could be the cause of that?

 

[screenshots: search results showing license consumption by host, including rows with a blank host value]

 

These are the events from the unknown hosts:

[screenshot: sample events associated with the blank host value]

I cannot identify what they are: a specific host, a Splunk component, or something else that is causing this license increase.

Regards

 


PrewinThomas
Motivator

@sgarcia 
Blank h (host) values in the license usage logs are due to Splunk's "squashing" process. Squashing occurs to optimize log storage and performance when there are too many unique values to track individually.
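To quantify how much of your usage falls under those blank (squashed) values, a sketch along these lines may help; h and b are the standard license_usage.log fields, and the "SQUASHED/UNKNOWN" label is purely illustrative:

index=_internal source=*license_usage.log type="Usage"
| eval host_label=if(isnull(h) OR h="", "SQUASHED/UNKNOWN", h)
| stats sum(b) as bytes by host_label
| eval MB=round(bytes/1024/1024, 2)
| sort -MB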

If possible, analyze your actual event data.
Eg:
index=wineventlog
| eval bytes=len(_raw)
| stats sum(bytes) as bytes by host
| eval GB=round(bytes/1024/1024/1024, 2)
| sort -GB
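Note that len(_raw) is only an approximation of the licensed volume, but it is usually close enough to rank the noisiest hosts.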

Also check the License Usage report (Settings > Licensing > Usage report) and split it by host for the last 60 days, if available.

Or try metrics.log to identify which hosts have high throughput:

index="_internal" source="*metrics.log" group="per_host_thruput"


Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!


tej57
Builder

Hello @sgarcia,

This behavior in the logs is because the squash threshold limit is being hit for license_usage.log on the h, s, and st fields. An additional approach to measure ingestion volume is metrics.log, using per_host_thruput, per_source_thruput, or per_sourcetype_thruput. Within those thruput groups, you can look at the series field to see which component has the highest volume.
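For example, a sketch for per_sourcetype_thruput (the same pattern works for per_host_thruput and per_source_thruput, with series holding the sourcetype, host, or source respectively):

index=_internal source=*metrics.log group=per_sourcetype_thruput
| stats sum(kb) as KB by series
| sort -KB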

Additionally, the squash_threshold can be configured in server.conf on the indexers, but it is NOT advisable to change it without consulting Splunk Support, because increasing it from the default value can cause heavy memory issues.
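For reference, the setting lives under the [license] stanza; the value below is purely illustrative (the default is 2000):

# server.conf on each indexer -- illustrative only; consult Splunk Support first
[license]
squash_threshold = 4000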

Thanks,
Tejas. 
