Getting Data In

Why do some logs stop forwarding while others continue?

kurtgad
New Member

In our environment, all of our application logs go to a single log folder, with a subdirectory per application.

For example:

/scratch/content/logs/app1/app1.log
/scratch/content/logs/app2/app2.log
/scratch/content/logs/app3/app3.log

My monitor stanza is very simple...

[monitor:///scratch/content/logs]
index = apps
blacklist = (weblogic)
whitelist = \.(log|txt|out)$

For some reason, every day from 10:00 to 15:00, app2.log stops being indexed. The log itself is still being updated on the server; those updates just aren't indexed during that five-hour window. All the other app logs on the same server continue indexing all day. Has anyone seen this behavior before?
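
One way to narrow this down (just a sketch, assuming the apps index from the stanza above and that event timestamps are parsed correctly) is to chart indexing lag per source:

index=apps source="/scratch/content/logs/*/*.log"
| eval lag_seconds = _indextime - _time
| timechart span=15m max(lag_seconds) by source

If app2.log shows a lag that climbs through the 10:00–15:00 window and then catches up afterwards, the forwarder is reading the file but falling behind, rather than skipping it entirely.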

0 Karma

teunlaan
Contributor

So you have 3 files and only 2 are sent all the time? How quickly are these files growing?

The default in outputs.conf is:
[tcpout]
maxConnectionsPerIndexer = 2

The forwarder will only send 2 files at the same time. As long as the UF hasn't reached the EOF of a file, it won't switch to another file.
Not reaching EOF can be caused by several things:

  • The file is very big (GBs).
  • The network connection is slow or saturated, so the forwarder can't get data out fast enough (see the limits.conf sketch after this list).
  • etc.
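
On the "can't get data out" point: one common cause on a universal forwarder is the default thruput cap in limits.conf (256 KBps out of the box). A minimal sketch of raising it on the forwarder; the 1024 value here is just an example:

# $SPLUNK_HOME/etc/system/local/limits.conf (on the forwarder)
[thruput]
# Universal forwarders ship with maxKBps = 256; 0 means unlimited
maxKBps = 1024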

You could change the setting to:
[tcpout]
maxConnectionsPerIndexer = 4

See if this helps.
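
To verify whether the forwarder is still reading app2.log during the gap, you could also check its own metrics (a sketch, assuming the forwarder's _internal logs reach your indexers, which is the default for a UF; note that only the busiest sources appear in per_source_thruput by default):

index=_internal source=*metrics.log* group=per_source_thruput series="*app2.log"
| timechart span=5m sum(kb) as kb_read

A flat zero between 10:00 and 15:00 while the other sources keep reporting would support the EOF/connection theory.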

0 Karma