Splunk Down

ramprakash
Explorer

Hi,

One of my Universal Forwarders was down for a week. When I noticed, I restarted the services, but it is not coming back up. I am getting the error below. Can someone please help?

Splunk> Needle. Haystack. Found.

Checking prerequisites...
WARNING: Data segment size limit (ulimit -d) is set low (134217728 bytes) Splunk may not work.
You may want to run "ulimit -d unlimited" before starting splunk.
WARNING: Resident memory size limit (ulimit -m) is set low (33554432 bytes) Splunk may not work.
You may want to run "ulimit -m unlimited" before starting splunk.
WARNING: File size limit (ulimit -f) is set low (1073741312 bytes) Splunk may not work.
You may want to run "ulimit -f unlimited" before starting splunk.
Checking mgmt port [8089]: open
Assertion failed: _linkp == nullptr, file /home/build/build-src/orangeswirl/src/util/TimeoutHeap.cpp, line 46
Dying on signal #6 (si_code=0), sent by PID 0 (UID 0). Attempting to clean up pidfile
ERROR: pid 8454562 terminated with signal 6
SSL certificate generation failed.


ashutoshab
Communicator

This is a clear sign of ulimit restrictions. Your UF, which is installed on a Unix OS, is being restricted by ulimit. Most Linux distributions have the ulimit feature. ulimit is a mechanism for restricting the various resources a process can consume, for example the number of open file descriptors, the data segment size, the resident memory size, and the maximum file size. The warnings in your startup output show that these limits are set too low for the UF to run.

You can get rid of the error by changing the ulimit settings.

There are a few ways you can check your current ulimit settings.

  1. On the command line, run ulimit -a (a short command-line sketch follows this list).
  2. Restart Splunk Enterprise and look in splunkd.log for events mentioning ulimit, for example with this search:

    index=_internal source=*splunkd.log ulimit

  3. Check the Monitoring Console, which has a health check for ulimits.
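
For a forwarder host specifically, here is a minimal command-line sketch. The user name splunk and the default install path /opt/splunkforwarder are assumptions; adjust them for your environment.

    # Limits for the account that runs the forwarder ("splunk" is an assumption)
    su - splunk -c 'ulimit -a'

    # Limits splunkd reported during its last startup
    # (default UF install path is assumed)
    grep -i ulimit /opt/splunkforwarder/var/log/splunk/splunkd.log | tail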

To set new limits
Depending on your Linux distribution, there are different ways to change the limits.

  1. For earlier versions of Linux that use the init system, edit the /etc/security/limits.conf file.
  2. For later versions of Linux that run systemd, edit either /etc/systemd/system.conf or /etc/systemd/user.conf, or, if the Splunk software has been configured to run as a systemd service, /etc/systemd/system/splunkd.service.

Set limits using /etc/security/limits.conf

  1. Become the root user (or a user with root privileges) and open /etc/security/limits.conf with a text editor.
  2. Add at least the following values, or confirm that they already exist:

    * hard nofile 64000
    * hard nproc 8192
    * hard fsize -1

  3. Save the file and exit the text editor.
  4. Restart the machine to complete the changes, then verify the new limits (see the sketch after these steps).
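
After the reboot, a minimal way to verify the change, assuming the forwarder runs as the user splunk (adjust the user name for your environment):

    # Open files, max processes, and max file size for the splunk user
    # (ulimit -f reports in 1024-byte blocks; "unlimited" is expected here)
    su - splunk -c 'ulimit -n; ulimit -u; ulimit -f'
    # Expected output: 64000, 8192, unlimited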

Set limits using the /etc/systemd configuration files

Editing the /etc/systemd/system.conf file sets system-wide limits, while editing /etc/systemd/user.conf sets limits for services that run under a specific user within systemd.

  1. Become the root user, or an administrative equivalent, with su, then open /etc/systemd/system.conf with a text editor.
  2. Add at least the following values to the file:

    [Manager]
    DefaultLimitFSIZE=-1
    DefaultLimitNOFILE=64000
    DefaultLimitNPROC=8192

  3. Save the file and exit the text editor.

  4. Restart the machine to complete the changes. If the Splunk software runs as its own systemd service instead, see the unit-file sketch below.
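
If Splunk software has been configured to run as its own systemd service (the splunkd.service case mentioned earlier), the limits can also be set on the unit itself rather than system-wide. This is only a sketch; the unit name and path are assumptions and vary between installations, so check how the service was registered on your host.

    # /etc/systemd/system/splunkd.service (unit name and path may differ)
    [Service]
    LimitNOFILE=64000
    LimitNPROC=8192
    LimitFSIZE=infinity

After editing a unit file, run systemctl daemon-reload and restart the service so the new limits take effect.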


ramprakash
Explorer

Thanks so much for the detailed explanation. Let me change the ulimit settings and get back to you.


ashutoshab
Communicator

Please let me know if this worked.

If my answer was helpful, please accept this as a solution.
