Getting Data In

Data not getting ingested into Splunk for multiple sources

sureshkumaar
Path Finder

I have configured an app and added 7 different source files in a single inputs.conf, all with the same index and sourcetype.

The parent directory is the same for all 7 source files; only the subdirectory names differ.

The log files have a *.log extension, but only one of them is sending events to Splunk.

A sample inputs.conf is provided below.

[monitor:///ABC-DEF50/Platform/*.log]
disabled = false
index = os_linux
sourcetype = nix:messages
crcSalt = <SOURCE>
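
Worth noting: in a monitor stanza, * matches only a single path segment and does not recurse into subdirectories, so the stanza above only picks up .log files directly under Platform. If the seven files sit in changing subdirectories, a sketch of an alternative (reusing the same hypothetical path, index, and sourcetype) is to monitor the parent directory and whitelist on the extension:

[monitor:///ABC-DEF50/Platform]
whitelist = \.log$
disabled = false
index = os_linux
sourcetype = nix:messages
crcSalt = <SOURCE>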

Last week, data for the remaining 6 source files showed up in Splunk only after 2-3 days.

I checked and can see that indexing is being delayed. How do I fix this? Kindly help.


sureshkumaar
Path Finder

The stanzas below collect firewall logs.

The first stanza comes from one deployment server; the last two stanzas come from a second deployment server.

But only the 2nd stanza is working.

[monitor:///SERVER50/firewall/]
whitelist = SERVER50M01ZT.*\.log$
index = nw_fortigate
sourcetype = fortigate_traffic
disabled = false

[monitor:///SERVER51/firewall/]
whitelist = SERVER51M01ZT.*\.log$
disabled = false
index = nw_fortigate
sourcetype = fortigate_traffic

[monitor:///SERVER52/firewall/]
whitelist = SERVER52M01ZT.*\.log$
disabled = false
index = nw_fortigate
sourcetype = fortigate_traffic


isoutamo
SplunkTrust

Hi,

Can you show how those files look in the file system (e.g. find /… -type f)? Of course, mask real IPs, FQDNs, etc. A couple of lines is enough.
You can check whether the splunk user can see and read those files by running ls and cat on them as the splunk user. If it cannot see the files or their contents, use setfacl to grant access to the splunk user only. Never use a chmod that gives access to all users! That is actually a security breach.
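
For example, a minimal sketch of the check and of an ACL granting access to the splunk user only (paths reused from this thread; adjust to your environment):

sudo -u splunk ls -l /SERVER50/firewall/              # can the splunk user see the files?
sudo setfacl -R -m u:splunk:rX /SERVER50/firewall/    # read files, traverse directories
sudo setfacl -m d:u:splunk:rX /SERVER50/firewall/     # default ACL on the directory so new files inherit it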

One thing you could try as the splunk user:

splunk list inputstatus

which shows whether Splunk has read those files and, if so, how much it has already read.
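
If the output is long, one way to narrow it down (a sketch, assuming a default forwarder install under /opt/splunkforwarder) is:

/opt/splunkforwarder/bin/splunk list inputstatus | grep -A 3 firewall

The per-file entries should show how far into each file Splunk has read, which quickly separates "never opened" from "read and waiting for new data".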

r. Ismo


kiran_panchavat
Influencer

@sureshkumaar 

Ensure that the Splunk user (splunk) has the correct read permissions on /SERVER50/firewall/ and /SERVER52/firewall/.

On the syslog forwarder, run:

ls -l /SERVER50/firewall/
ls -l /SERVER52/firewall/

If necessary, update permissions:

sudo chmod -R 755 /SERVER50/firewall/
sudo chmod -R 755 /SERVER52/firewall/
sudo chown -R splunk:splunk /SERVER50/firewall/
sudo chown -R splunk:splunk /SERVER52/firewall/

Check splunkd.log for errors related to file monitoring:

grep -i "monitor" $SPLUNK_HOME/var/log/splunk/splunkd.log
grep -i "SERVER50" $SPLUNK_HOME/var/log/splunk/splunkd.log
grep -i "SERVER52" $SPLUNK_HOME/var/log/splunk/splunkd.log

 

I hope this helps. If any reply helped you, you can add your upvote/karma points to that reply. Thanks.

isoutamo
SplunkTrust
SplunkTrust
I don't propose doing the above commands, as they have several really bad side effects!

kiran_panchavat
Influencer

@sureshkumaar 

Verify whether the logs are actually being received and written by the syslog forwarders at the specified locations (a quick check is sketched below):

/SERVER50/firewall/
/SERVER52/firewall/
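
A couple of quick checks, as a sketch (the tail target is a hypothetical file name; substitute a real one):

ls -ltr /SERVER50/firewall/ | tail -5                  # newest files last; are fresh files appearing?
tail -f /SERVER50/firewall/SERVER50M01ZT-example.log   # hypothetical file name; is new data still being written?

If new files keep appearing and tail shows fresh lines, the syslog side is fine and the problem is on the Splunk monitoring side.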

 

I hope this helps. If any reply helped you, you can add your upvote/karma points to that reply. Thanks.

kiran_panchavat
Influencer

@sureshkumaar 

When the thruput limit is reached, monitoring pauses and events like the following are recorded in splunkd.log:

INFO TailingProcessor - Could not send data to output queue (parsingQueue), retrying...

To check the currently effective limit, run:

/opt/splunkforwarder/bin/splunk btool limits list thruput --debug

To verify how often the forwarder is hitting this limit, check the forwarder's metrics.log. (Look for this on the forwarder because metrics.log is not forwarded by default on universal and light forwarders.)

cd /opt/splunkforwarder/var/log/splunk

grep "name=thruput" metrics.log

Example: instantaneous_kbps and average_kbps always stay under 256 KBps (the default limit):

11-19-2013 07:36:01.398 -0600 INFO Metrics - group=thruput, name=thruput, instantaneous_kbps=251.790673, instantaneous_eps=3.934229, average_kbps=110.691774, total_k_processed=101429722, kb=7808.000000, ev=122

Solution

Create a custom limits.conf with a higher limit, or with no limit. The configuration can live in system/local or in an app that takes precedence over the existing limit.

Example: Configure in a dedicated app, in /opt/splunkforwarder/etc/apps/Gofaster/local/limits.conf

Double the thruput limit, from 256 to 512 KBps:

[thruput]
maxKBps = 512

Or for unlimited thruput:

[thruput]
maxKBps = 0
  • Unlimited speed can cause higher resource usage on the forwarder. Keep a limit if you need to control monitoring and network usage.
  • Restart the forwarder to apply the change.
  • Verify the result of the configuration with btool (see the commands below).
  • Later, verify in metrics.log that the forwarder is not constantly hitting the new limit.
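A minimal verification sequence (a sketch, assuming a default universal forwarder install under /opt/splunkforwarder):

/opt/splunkforwarder/bin/splunk restart
/opt/splunkforwarder/bin/splunk btool limits list thruput --debug
grep "name=thruput" /opt/splunkforwarder/var/log/splunk/metrics.log | tail -5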
I hope this helps. If any reply helped you, you can add your upvote/karma points to that reply. Thanks.