Getting Data In

Why do some logs stop forwarding while others continue?

kurtgad
New Member

In our environment, our application logs all go to a single log folder divided by application.

Ex...

/scratch/content/logs/app1/app1.log
/scratch/content/logs/app2/app2.log
/scratch/content/logs/app3/app3.log

My monitor stanza is very simple...

[monitor:///scratch/content/logs]
index = apps
blacklist = (weblogic)
whitelist = \.(log|txt|out)$

For some reason, every day from 1000 to 1500, app2.log stops being indexed. The log itself is still being updated on the server; those updates just aren't indexed during that five-hour window. All the other app logs on the same server continue indexing all day. Has anyone ever seen this behavior before?
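
While the gap is happening I can check what the forwarder thinks of app2.log (this assumes CLI access on the forwarder and a default $SPLUNK_HOME; adjust the paths for your install):

# Show the monitoring status and read position for every monitored file
$SPLUNK_HOME/bin/splunk list inputstatus

# Look for warnings or errors mentioning the file
grep app2.log $SPLUNK_HOME/var/log/splunk/splunkd.log

If app2.log shows a read position far behind its current size during that window, the forwarder is still reading it but falling behind rather than skipping the file.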


teunlaan
Contributor

So you have 3 files and only 2 are sent all the time? How quickly are these files growing?

The default in outputs.conf is:
[tcpout]
maxConnectionsPerIndexer = 2

The forwarder will only send 2 files at the same time. As long as the UF hasn't reached the EOF of a file, it won't switch to another file.
Not reaching EOF can be caused by several things:

  • The file is very big (GBs).
  • The network connection is slow or saturated, so data can't get out.
  • etc.

You could change the setting to:
[tcpout]
maxConnectionsPerIndexer = 4

See if this helps
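
To confirm what the forwarder actually picked up after the change (assuming a default install location; adjust $SPLUNK_HOME as needed):

# Show the effective tcpout settings, including maxConnectionsPerIndexer
$SPLUNK_HOME/bin/splunk btool outputs list tcpout --debug

# Check for blocked output queues in metrics.log - a blocked tcpout queue
# will also keep the tailing processor from reaching EOF on a file
grep "blocked=true" $SPLUNK_HOME/var/log/splunk/metrics.log

If the tcpout queue is blocked during 1000-1500, the problem is throughput to the indexers rather than the monitor stanza itself.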
