Blank h and s in license_usage.log?

Jason
Motivator

I'm looking to make some metrics dashboards off of the license_usage log, similar to the way the Deployment Monitor works.

However, looking at index=_internal source=*license_usage.log on the license master, there are a lot of entries with h="" s="" (original host and source fields blank).

What is going on here? (License master = 4.2.4, indexers/search heads = 4.3.1)
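For reference, a search along these lines (a sketch; `type=Usage`, `b`, `h`, `s`, and `st` are the standard license_usage.log fields) surfaces the squashed entries:

```
index=_internal source=*license_usage.log type=Usage
| stats sum(b) AS bytes BY h, s, st
| where h="" OR s=""
```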

1 Solution

Vishal_Patel
Splunk Employee

An indexer sends a periodic breakdown of the data it has indexed, split by source (s), host (h), and sourcetype (st) by default. However, if the number of unique (s, st, h) tuples grows too large (1000 by default), we squash the s and h keys to avoid an explosion in the memory/processing overhead of the table.

NOTE: in 4.3.1 we introduced a tunable setting, squash_threshold, that lets you raise this threshold. Set it in server.conf on the indexers, under the license stanza.
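For example, a server.conf sketch on each indexer (the value 5000 here is just an illustration; pick a limit that fits your environment, since raising it increases memory use on the indexer and license master):

```
# server.conf on each indexer
[license]
# Raise the limit on unique (s, st, h) tuples tracked per report
# before the s and h keys are squashed (default 1000 in 4.3.x).
squash_threshold = 5000
```

A restart of the indexer is typically needed for server.conf changes to take effect.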


Jason
Motivator

That makes sense, thanks. It would be great to have an option to just squash source, leaving host and sourcetype alone... submitting an ER.

theerroco
Engager

Agreed that it would be nice to be able to tune which fields are squashed (s, h, or both) along with the numeric limit. As of 6.3.3, adjusting squash_threshold appears to be the only option.


mattlucas1
Engager

The default is now 2000.
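To check the effective value on a given indexer, btool (which ships with Splunk) can report it; this is a sketch assuming a standard $SPLUNK_HOME layout:

```
$SPLUNK_HOME/bin/splunk btool server list license | grep squash_threshold
```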
