
Low ingestion throughput without blocked queues.

hrawat_splunk
Splunk Employee

Problem: 

Indexing throughput drops linearly when new data sources/forwarders/apps are added.

hrawat_splunk
Splunk Employee

Indexing throughput drops linearly when the number of unique tuples (combinations in the cross product of source, sourcetype, and host) grows large (anything > 10k).
Run the following search to check whether you are seeing channel explosion:
index=_internal source=*metrics.log new_channels | timechart max(new_channels)

Each tuple can generate several pipeline input channels. Channel churn introduces significant pauses in the ingestion pipeline: pipeline processors spend a disproportionate amount of time managing these channels and therefore ingest less data.
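
To gauge how close you are to the ~10k tuple threshold mentioned above, a rough cardinality check like the sketch below can help. It is not from the original post; the index filter and time range are assumptions to adjust for your environment.

| tstats count where index=* by host, source, sourcetype
| stats count AS unique_tuples

Run it over a representative time range (for example, the last 24 hours) so the count reflects the tuples your indexers actually see.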

Solutions
For HEC inputs:
Increase the following two settings on the indexers (or whichever layer is seeing the explosion). Generally these values should be more than 2x max(new_channels).
[input_channels]
max_inactive =
* Internal setting, do not change unless instructed to do so by Splunk Support.

lowater_inactive =
* Internal setting, do not change unless instructed to do so by Splunk Support.
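
As an illustration only: if max(new_channels) peaks around 5,000, the stanza in limits.conf on the indexers might look like the sketch below. The values are hypothetical placeholders sized at more than 2x that peak, not recommendations; these are internal settings, so confirm any change with Splunk Support.

# limits.conf (indexing tier) - values are illustrative only
[input_channels]
max_inactive = 10000
lowater_inactive = 10000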

For S2S (UF/HF) inputs: increase the following on the indexers (or whichever layer is seeing the explosion).
max_inactive =
* Internal setting, do not change unless instructed to do so by Splunk Support.

On the forwarding side (all UFs/HFs), increase autoLBFrequency up to 180 seconds.
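
For reference, autoLBFrequency is set in outputs.conf on the forwarders. A minimal sketch, assuming a target group named primary_indexers (the group name and server list are placeholders, not from the original post):

# outputs.conf on each UF/HF - group name and servers are placeholders
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
autoLBFrequency = 180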

to4kawa
Ultra Champion

On Splunk version 8 or later:

| tstats max(PREFIX("new_channels=")) where index=_internal source=*metrics.log by _time