Custom authentication.conf for Search Head Cluster



Version - Splunk v7.1.0
Component - Search Head Cluster

Background - In our organization, we use our Splunk infrastructure to collect data from various components and servers (over 400 VMs, over 200 indexes, etc.). Access to these indexes is provided via LDAP authentication, where various AD groups are mapped to various Splunk roles. Since we have a large number of AD groups and Splunk roles, we built automated scripts to generate the "authentication.conf" file and place it in etc/system/local.
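For context, the generated file follows the standard authentication.conf layout. A minimal sketch (the strategy name matches the one below, but the role and group names here are illustrative placeholders):

```ini
[authentication]
authType = LDAP
authSettings = Glue_MYLDAP_LDAP

[roleMap_Glue_MYLDAP_LDAP]
# One line per Splunk role; each maps to one or more AD group names,
# separated by semicolons. Group names below are examples only.
admin = AD-Splunk-Admins
user = AD-Splunk-Users;AD-Splunk-Readers
```

In our case the roleMap stanza contains hundreds of such lines, one per AD group mapping.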

Issue 1 - Very slow LDAP authentication from the Search Head Cluster.
The login page takes ~2 minutes to authenticate and redirect to the homepage. On a similar setup (a single standalone search head) with the same LDAP configuration, login redirects in less than 10 seconds.
The only difference I can see between the two authentication.conf files is that on the slow search heads we have a large list of AD group mappings under the [roleMap_Glue_HAP_LDAP] stanza. The other two stanzas, [authentication] and [Glue_MYLDAP_LDAP], are identical.
Question 1 - Is there a recommended maximum number of AD groups that can be mapped without degrading performance?
Question 2 - Is there a way to improve the authentication performance of the Splunk search heads?

Issue 2 - Unable to edit/update the configuration from the UI ["Authentication method" --> "LDAP Settings" --> "Map Groups"].
When we try to edit/update the configuration from the UI, we get the following error:

Encountered the following error while trying to update: Splunkd daemon is not responding: (u"Error connecting to /servicesNS/-/search/admin/LDAP-groups/Glue_MYLDAP_LDAP%2CAB-App-MYSPLK-SPLK-CD-PQR-SVR00-PP-Dev-S-D: ('The read operation timed out',)",)

Things to note -
1) AD groups start appearing in the Splunk UI ["Authentication method" --> "LDAP Settings" --> "Map Groups"] only after a user from that AD group logs in to the search head.
2) We have tried to map some AD groups (for future use) that exist in the LDAP system, but no user from those groups has logged in yet.



I would guess that your configuration is pulling in all AD groups for the organization. What we do to limit that is enforce a naming standard for AD groups that need access to Splunk; in our instance, the AD group name must contain the word "Splunk". Then, in the configuration, we filter for groups containing "Splunk". This significantly decreases the number of AD groups the system has to deal with.
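As a sketch (the base DN below is a placeholder you would replace with your own), this filtering can be done with the groupBaseFilter setting in the LDAP strategy stanza of authentication.conf:

```ini
[Glue_MYLDAP_LDAP]
# Placeholder base DN -- substitute your organization's group OU.
groupBaseDN = OU=Groups,DC=example,DC=com
# Only return AD groups whose common name contains "Splunk",
# so the roleMap and the Map Groups UI deal with far fewer groups.
groupBaseFilter = (cn=*Splunk*)
```

The filter uses standard LDAP search-filter syntax, so it can be as narrow as your naming standard allows.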

By default, the limit is 1000. You can find more details in the following answer -

Regarding the error, I have seen it occur when the system is at capacity. I would check resource utilization on your search head cluster. If you have a high number of scheduled searches running, they may be consuming your CPU; typically, one search uses one CPU core.
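As a starting point (a sketch, not an official diagnostic), a search over the _internal index can show how busy the scheduler is and whether searches are being skipped, which is a common symptom of CPU saturation:

```
index=_internal sourcetype=scheduler
| stats count by status, app, savedsearch_name
| sort - count
```

A large number of "skipped" or "continued" statuses would support the capacity theory and point at the searches worth rescheduling.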
