Getting Data In

Not getting the data into the Splunk indexer

loknath
Loves-to-Learn

I am not able to see the file content in the indexer after restarting the Universal Forwarder. What can be the reason?


kiran_panchavat
Builder

@loknath 

1. Check if the Splunk process is running on the Splunk forwarder

ps -ef | grep splunkd

OR

cd $SPLUNK_HOME/bin
./splunk status

2. Check whether the forwarder's forwarding port is open by using the command below

netstat -an | grep 9997

If the output of the above command is blank, then the port is not open and you need to open it.

3. Check on the indexer whether receiving is enabled on port 9997 and that port 9997 is open on the indexer

Check if receiving is configured: on the indexer, go to Settings >> Forwarding and receiving and check whether receiving is enabled on port 9997. If not, enable it.
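Receiving can also be enabled from the indexer's CLI; this is only a sketch and assumes a default $SPLUNK_HOME and admin credentials:

cd $SPLUNK_HOME/bin
./splunk enable listen 9997

This should write a [splunktcp://9997] stanza into an inputs.conf on the indexer.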

4. Check if you are able to ping the indexer from the forwarder host

ping <indexer name>

If you cannot ping the server, then investigate the network issue.
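If ping works but data still does not arrive, you can also test that TCP port 9997 on the indexer is reachable from the forwarder host (a sketch; <indexer name> is a placeholder):

nc -vz <indexer name> 9997

or, if nc is not installed:

telnet <indexer name> 9997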

5. Confirm on the indexer whether your file has already been indexed by using the search query below

In the Splunk UI, run the following search:

index=_internal "FileInputTracker"

The output of this search lists the log files that have been indexed.
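As an additional check (a sketch; the series value is the absolute path of your monitored file), the forwarder's throughput metrics show whether the file is actually being read:

index=_internal source=*metrics.log group=per_source_thruput series="<absolute path of the file>"

If this returns no events, the forwarder is not reading the file at all.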

6. Check whether the forwarder has completed processing the log file (the tailing processor) by using the URL below

https://<splunk forwarder server name>:8089/services/admin/inputstatus/TailingProcessor:FileStatus

In the tailing processor output you can check whether the forwarder is having an issue processing the file.
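The same endpoint can also be queried from the command line; this is only a sketch and assumes admin credentials and the default management port 8089:

curl -k -u admin:<password> https://<splunk forwarder server name>:8089/services/admin/inputstatus/TailingProcessor:FileStatus

Look for your file's path in the output and check its reported status and read progress.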

7. Check the permissions of the log file which you are sending to Splunk. Verify that the Splunk user has read access to the log file.
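For example, on the forwarder host (a sketch; the file path is a placeholder and this assumes the forwarder runs as the 'splunk' user):

ls -l /var/log/myapp/app.log
sudo -u splunk head -n 1 /var/log/myapp/app.log

If the second command fails with a permission error, the forwarder cannot read the file.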

8. Verify inputs.conf and outputs.conf for proper configuration

inputs.conf

Make sure the following configuration is present, and verify that the Splunk user has read access to the specified path.

[monitor://<absolute path of the file to onboard>] 
index=<index name>
sourcetype=<sourcetype name>

outputs.conf

[tcpout:group1] 
server=x.x.x.x:9997
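For reference, a filled-in example with hypothetical values (the path, index, sourcetype, group name, and IP address below are placeholders, not your actual settings):

inputs.conf:

[monitor:///var/log/myapp/app.log]
index = app_logs
sourcetype = myapp:log
disabled = false

outputs.conf:

[tcpout:primary_indexers]
server = 10.0.0.5:9997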

9. Verify Index Creation

Ensure the index is created on the indexer and matches the index specified in inputs.conf.
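One way to confirm the index exists (a sketch; run it in the search UI on the indexer) is:

| rest /services/data/indexes | table title

Your index name from inputs.conf should appear in the list; if it does not, create it first.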

10. Check splunkd.log on the forwarder at $SPLUNK_HOME/var/log/splunk for any errors. Messages from 'TcpOutputProc' in particular should give you an indication of what is occurring when the forwarder tries to connect to the indexer.
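For example, on the forwarder (a sketch assuming a default install path):

grep -i "TcpOutputProc" $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -20
grep -iE "WARN|ERROR" $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -50

Connection-refused or blocked-connection messages here usually point to a receiving or firewall problem.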

11. Check disk space availability on the indexer
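For example, a quick check on the indexer (indexing pauses when free space falls below the minFreeSpace setting in server.conf, 5000 MB by default; if your index storage is on a separate volume, check that volume too):

df -h $SPLUNK_HOME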

12. Check the ulimit if you have installed the forwarder on Linux, and set it to unlimited or the maximum (65535, recommended by Splunk); see the persistent-setting example after this list.

- ulimit is a limit set by default in Linux on the number of files a process can open
- check the ulimit with: ulimit -n
- set the ulimit with: ulimit -n <expected size>
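A ulimit set in the shell only applies to that session. To make it persistent (a sketch, assuming the forwarder runs as the 'splunk' user), add the following to /etc/security/limits.conf and restart the forwarder:

splunk soft nofile 65535
splunk hard nofile 65535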

I hope this helps. If any reply helps you, please add your upvote/karma points to that reply. Thanks.

isoutamo
SplunkTrust
If this has worked earlier, then it sounds like you have lost your UF's configuration. Is this issue only with one source/input or with all of them, including the internal logs?
If the first, check that your UF has the correct inputs.conf in place.
If the second, check that outputs.conf is in place.

First check those from the server side, then continue on the UF side. Check that the Splunk Forwarder service is running on it. Then look at splunkd.log to see what is happening there. Depending on your environment, those conf files could be on the UF, or there could be a deployment server (DS) managing all UFs.
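A quick way to tell these two cases apart (a sketch; replace the host value with your forwarder's host name) is to search for the UF's internal logs on the indexer:

index=_internal host=<forwarder host name> | stats count by sourcetype

If internal logs are arriving, the outputs/connection side is fine and the problem is most likely in inputs.conf; if nothing arrives, look at outputs.conf and the network path first.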

kiran_panchavat
Builder

@loknath  To ensure proper monitoring, verify that the 'splunk' user has read access to the file you wish to track.

I hope this helps. If any reply helps you, please add your upvote/karma points to that reply. Thanks.

kiran_panchavat
Builder

@loknath  

Verify the following details:

  1. Confirm whether the inputs.conf file is configured to point to the correct monitoring directory.
  2. Ensure that the index has been created on the indexer before sending data from the Universal Forwarder (UF).
  3. Check the connection between the UF and the indexer, and make sure the receiving port is enabled on the indexer (see the command sketch after this list).
  4. Review the internal logs on the Splunk UF to gather insights.
  5. Examine the outputs.conf file for correct configurations.

Please review these details thoroughly.
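For point 3, one way to confirm the UF-to-indexer connection from the forwarder's CLI (a sketch, assuming admin credentials on the UF) is:

cd $SPLUNK_HOME/bin
./splunk list forward-server

The indexer should appear under 'Active forwards'; if it only appears under 'Configured but inactive forwards', the connection is not being made.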

I hope this helps. If any reply helps you, please add your upvote/karma points to that reply. Thanks.