
Splunk Search head performance issue - Enterprise Security

Hemnaath
Motivator

Hi All, we are currently facing a performance issue when accessing the Splunk search head portal via the web. Ours is a distributed environment: 5 indexer instances, 4 search heads, a dedicated scheduled-search (job) instance, 2 heavy forwarders, and a deployment manager/license master running on the same instance. Most instances run version 6.2.1, except two that are still on 6.0.3.

We use the splunk user to execute all Splunk-related commands.

OS details -
Linux
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
CPU(s): 12
CPU family: 6
Hypervisor vendor: VMware
Virtualization type: full

Transparent Huge Page

cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
always madvise [never]

cat /sys/kernel/mm/redhat_transparent_hugepage/defrag
always madvise [never]
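
Both values above show [never], so THP already looks correct on this host. As a quick cross-check (a hedged sketch, assuming the default /opt/splunk install path), splunkd records the limits it detected at its last startup in splunkd.log, and newer 6.x builds note the THP status there as well:

# Confirm what splunkd itself picked up at startup
# (path assumes a default /opt/splunk install; adjust for your environment)
grep -i "hugepage" /opt/splunk/var/log/splunk/splunkd.log | tail -5
grep -i "ulimit"   /opt/splunk/var/log/splunk/splunkd.log | tail -20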

/etc/security/limits.conf details:
splunk soft nofile 1024000
splunk hard nofile 1024000
splunk soft nproc 180000
splunk hard nproc 180000
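
Since limits.conf is only applied by PAM at login time, it is worth confirming that the running splunkd process actually inherited these values. A minimal check, assuming a single splunkd process on the host:

# Limits as seen by a fresh login shell for the splunk user
su - splunk -c 'ulimit -n; ulimit -u'

# Limits actually in effect for the running splunkd process
cat /proc/$(pgrep -o splunkd)/limits | grep -E 'open files|processes'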

1) How can we improve the performance of the search head? It is too slow when accessing Splunk Web.
2) Which log file will give information about the performance of the search head?
3) How/where can we check which saved search jobs, both real-time and historical, are currently running on this search head?

Kindly guide us on the above questions so we can analyze why the search head performance lags.

Thanks in advance.


hunters_splunk
Splunk Employee

Hi Hemnaath,

Let me try to answer your questions:

1) How can we improve the performance of the search head? It is too slow when accessing Splunk Web.
Please try the following:
* Increase the ulimit settings on your Splunk instances. You might want to increase OS parameters such as core file size, max open files, and max user processes to allow for a large number of buckets/forwarders/users.

ulimit -a                # check the current limits
ulimit -c 1073741824     # core file size: 1 GB, or unlimited
ulimit -n 49152          # open files: 48 x the 1024 default (or 65536)
ulimit -u 12288          # max user processes: 12 x the 1024 default (or 258048)

For more detailed info, please refer to the doc:
docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/ulimitErrors
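
One related gotcha (not covered by the doc link above, so treat this as a hedged suggestion): if splunkd is started at boot by an init script rather than from a login shell, the pam_limits values in /etc/security/limits.conf may never be applied to it. In that case the ulimits can be set explicitly in the init script just before Splunk starts, for example:

# Hypothetical snippet for the Splunk init script
# (values mirror the recommendations above; adjust to your sizing)
ulimit -c 1073741824
ulimit -n 65536
ulimit -u 258048
/opt/splunk/bin/splunk start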

2) Which log file will give information about the performance of the search head?

You can view search performance statistics in the Distributed Monitoring Console (DMC): select Settings from the menu, click Monitoring Console, and then use the Search menu within the DMC.
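
If you prefer querying the raw data instead of the DMC dashboards, the scheduler and metrics logs in _internal carry the same information. A hedged example, using the standard scheduler.log fields; run it from $SPLUNK_HOME/bin on the search head, or paste the quoted search into Splunk Web:

# Slowest scheduled searches over the last 24 hours, by saved search name
./splunk search 'index=_internal sourcetype=scheduler earliest=-24h | stats count avg(run_time) as avg_runtime max(run_time) as max_runtime by savedsearch_name, status | sort - max_runtime'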

3) How/where can we check which saved search jobs, both real-time and historical, are currently running on this search head?

To view all running search jobs, select Activity > Jobs from the menu.
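
The same job list is also exposed over the management port, which can be handy outside the UI. A hedged example; the host, port 8089, and credentials are placeholders for your environment:

# List search jobs known to this search head (count=0 returns all of them)
curl -k -u admin:changeme "https://localhost:8089/services/search/jobs?count=0"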

Hope it helps. Thanks!
Hunter


koshyk
Super Champion

I think your mix of versions is making things complex. Is there any chance you can upgrade the entire Splunk infrastructure to 6.4.x? Bringing everything to a standard configuration will make things much easier.

The performance data will be in the _internal index (metrics). I hope you are forwarding the search heads' internal logs to the indexers?
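
A quick way to confirm that forwarding is in place (a hedged sketch; replace the host value with your actual search head name) is to check from one of the search heads whether its own _internal events are coming back from the indexers:

# If this returns events, the search head's internal logs are reaching the indexers
./splunk search 'index=_internal host=<your_search_head> earliest=-15m | stats count by sourcetype'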
