Getting Data In

What are the basic troubleshooting steps when a universal forwarder or heavy forwarder is not forwarding data to Splunk?

dkolekar_splunk
Splunk Employee

We often see cases where the Splunk universal forwarder or heavy forwarder fails to forward data to the indexer. In this scenario, what troubleshooting steps should we take to investigate why this is happening?

1 Solution

dkolekar_splunk
Splunk Employee
  • The Splunk forwarder essentially acts as an agent for log collection from remote machines.
  • Its role is to collect logs from remote machines and forward them to the indexer for further processing and storage.
  • Splunk universal forwarders provide reliable, secure data collection from remote sources and forward that data into Splunk Enterprise for indexing and consolidation.
  • Below are a few of the most common checks that will help you identify the problem and resolve it efficiently.

Check if the Splunk process is running on the forwarder

On Windows, check Services; on Linux, use one of the commands below:

ps -ef | grep splunkd
Or
cd $SPLUNK_HOME/bin
./splunk status
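
On Windows, the equivalent check can be run from a command prompt; this is a minimal sketch assuming the default universal forwarder service name, which may differ on your install:

REM default UF service name; verify the actual name in services.msc
sc query SplunkForwarder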

Check if the forwarder's forwarding port is open by using the command below

netstat -an | grep 9997
If the output of the above command is blank, the port is not open and you need to open it.
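
You can also test whether the indexer's receiving port is reachable from the forwarder with a raw TCP connection; <indexer_host> is a placeholder for your indexer's hostname or IP:

telnet <indexer_host> 9997
nc -zv <indexer_host> 9997   # alternative if nc (netcat) is installed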

Check on the indexer that receiving is enabled on port 9997 and that port 9997 is open on the indexer

To check if receiving is configured: on the indexer, go to Settings >> Forwarding and receiving and verify that receiving is enabled on port 9997. If not, enable it.
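
Receiving can also be enabled from the indexer's command line; a sketch assuming a default install and admin credentials:

cd $SPLUNK_HOME/bin
./splunk enable listen 9997 -auth admin:<password>   # <password> is a placeholder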

Check if you are able to ping the indexer from the forwarder host

ping <indexer name>
If you are not able to ping the server, investigate the network issue

Confirm on the indexer whether your file has already been indexed by using the search query below

In the Splunk UI, run the following search: index=_internal "FileInputTracker"

The output of this search is a list of the log files that have been indexed.
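
It is also worth confirming that the forwarder's own internal logs are reaching the indexer at all; if the search below returns events, the forwarding connection itself is working (<forwarder_host> is a placeholder for your forwarder's host name):

index=_internal host=<forwarder_host> source=*splunkd.log*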

Check if the forwarder has completed processing the log file (i.e., the tailing process) by using the URL below

https://<splunk forwarder server name>:8089/services/admin/inputstatus/TailingProcessor:FileStatus
In the tailing processor output, you can check whether the forwarder is having an issue processing the file
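
The same endpoint can be queried from the forwarder's command line; this assumes the default management port 8089 and valid admin credentials:

curl -k -u admin:<password> https://localhost:8089/services/admin/inputstatus/TailingProcessor:FileStatus
cd $SPLUNK_HOME/bin
./splunk list monitor   # lists the files and directories the forwarder is monitoring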

Check the permissions on the log files you are sending to Splunk and verify that the Splunk user has access to them

Check the filesystem for the last modification time and verify that the forwarder is monitoring the file
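
A minimal check covering both points, assuming the forwarder runs as a user named splunk and the /var/log/secure path from the inputs.conf example below:

ls -l /var/log/secure                     # shows owner, permissions, and last modification time
sudo -u splunk head -1 /var/log/secure    # confirms the splunk user can actually read the file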

Verify inputs.conf and outputs.conf for proper configuration

Below are sample configuration files for comparison:

inputs.conf example:

[monitor:///var/log/secure]
disabled = false
sourcetype = linux_secure

[monitor:///var/log/messages]
disabled = false
sourcetype = syslog

outputs.conf example:

[tcpout:imp_A]
server = impAserver01.domain:9997,impAserver02.domain:9997
autoLB = true

[tcpout]
defaultGroup = imp_B

[tcpout:imp_B]
server = impBserver01.domain:9997,impBserver02.domain:9997
autoLB = true
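
After reviewing the files, you can ask Splunk on the forwarder which output settings it actually loaded and whether the configured indexers are currently reachable (the second command prompts for admin credentials):

cd $SPLUNK_HOME/bin
./splunk btool outputs list --debug
./splunk list forward-server   # shows active vs. configured-but-inactive forwards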

Check disk space availability on the indexer

Check splunkd.log on the forwarder at $SPLUNK_HOME/var/log/splunk for any errors. Messages from 'TcpOutputProc' should give you an indication of what is occurring when the forwarder tries to connect to the indexer
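
For example, to pull the most recent TcpOutputProc messages on the forwarder:

grep TcpOutputProc $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -20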

Capture tcpdump data on port 9997 and check for any errors (replace eth0 with your network interface)

tcpdump -i eth0 port 9997

Check the ulimit if you have installed the forwarder on Linux, and set it to unlimited or to the maximum (65535, the Splunk-recommended value)
- ulimit is the default limit Linux places on the number of files a process can open
- To check it: ulimit -n
- To set it: ulimit -n <expected size>
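
Note that ulimit -n only affects the current shell session; to make the limit persistent, you would typically set it in /etc/security/limits.conf (a sketch, assuming the forwarder runs as the splunk user):

# /etc/security/limits.conf entries for the (assumed) splunk user
splunk soft nofile 65535
splunk hard nofile 65535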

Finally, try restarting Splunk on the forwarder:
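
cd $SPLUNK_HOME/bin
./splunk restart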


ddrillic
Ultra Champion

Most useful is the "I can't find my data!" troubleshooting topic.


gcusello
SplunkTrust

Hi dkolekar [Splunk],
to troubleshoot Splunk data ingestion you can refer to
https://docs.splunk.com/Documentation/Splunk/7.2.0/Data/Troubleshoottheinputprocess
http://docs.splunk.com/Documentation/Splunk/7.2.0/Troubleshooting/IntrototroubleshootingSplunk

Anyway, if you're not receiving any logs from the UF, you can perform these tests:

  • check if the ports are open using telnet,
  • check the Splunk logs on the UF ($SPLUNK_HOME/var/log/splunk/splunkd.log),
  • search _internal on Splunk Enterprise (see the example after this list).
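
For the last check, a search like the one below (run on the indexer or search head) shows which forwarders are opening connections to the indexer; this is just one common starting point:

index=_internal source=*metrics.log* group=tcpin_connections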

If instead the problem is that the UF is sending data (there are events in _internal) but your logs are missing, check the log path and file names; they are probably wrong.
Also check the timestamp format: the UF may have failed timestamp parsing, leaving your logs with a wrong timestamp.

In other words, run your checks to be sure that:

  • UF is sending,
  • logs arrive on Indexer,
  • logs have the correct timestamp format.

The above documentation can support you in your debugging.

Bye.
Giuseppe


woodcock
Esteemed Legend

@SloshBurch This could maybe be validated_best_practice.


sloshburch
Splunk Employee

Yup! Actually our friends in Professional Services have a similar article that we've been talking about publishing.

Thanks for drawing my attention to this!
