Getting Data In

Why does the universal forwarder stop sending data, and only resume after a restart?

glpadilla_sol
Path Finder

Hello community,

I have an issue with one forwarder: it was working and then suddenly stopped sending data to the indexers.

The Splunk service on the UF is running, but no data is sent to the indexers. The internal logs are not sent either.

If I restart the UF, the logs are sent, but almost immediately they stop again.

I have checked the logs but cannot find a logical reason for this to happen.

I have changed the following setting in limits.conf:

[inputproc]
max_fd = 8192

from the default value of 100, and then restarted the Splunk service.

I have checked the ulimits, and the open file descriptor limit is currently 6400.
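
To rule out file-descriptor exhaustion, the following checks (a minimal sketch; paths assume a default /opt/splunkforwarder install running as the splunk user) show the limits actually in effect:

# effective open-file limit for the user running splunkd
su - splunk -c 'ulimit -n'

# the max_fd value splunkd resolves from limits.conf
/opt/splunkforwarder/bin/splunk btool limits list inputproc

# splunkd also logs the ulimits it detected at startup
grep -i ulimit /opt/splunkforwarder/var/log/splunk/splunkd.log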

 

If I stop and start the service and check the logs, I cannot see any clue about what might be happening. These are the log entries that appear before the UF stops sending data:

03-30-2023 05:07:26.260 +0200 INFO TailReader [20263 MainTailingThread] - Setting maxFDs to 8192

03-30-2023 05:07:31.013 +0200 INFO ProxyConfig [20259 TcpOutEloop] - Failed to initialize http_proxy from server.conf for splunkd. Please make sure that the http_proxy property is set as http_proxy=http://host:port in case HTTP proxying needs to be enabled.
03-30-2023 05:07:31.013 +0200 INFO ProxyConfig [20259 TcpOutEloop] - Failed to initialize https_proxy from server.conf for splunkd. Please make sure that the https_proxy property is set as https_proxy=http://host:port in case HTTP proxying needs to be enabled.
03-30-2023 05:07:31.013 +0200 INFO ProxyConfig [20259 TcpOutEloop] - Failed to initialize the proxy_rules setting from server.conf for splunkd. Please provide a valid set of proxy_rules in case HTTP proxying needs to be enabled.
03-30-2023 05:07:31.013 +0200 INFO ProxyConfig [20259 TcpOutEloop] - Failed to initialize the no_proxy setting from server.conf for splunkd. Please provide a valid set of no_proxy rules in case HTTP proxying needs to be enabled.
03-30-2023 05:07:31.674 +0200 WARN TcpOutputProc [20259 TcpOutEloop] - 'sslCertPath' deprecated; use 'clientCert' instead
03-30-2023 05:07:31.675 +0200 WARN TcpOutputProc [20259 TcpOutEloop] - 'sslCertPath' deprecated; use 'clientCert' instead
03-30-2023 05:07:31.675 +0200 WARN TcpOutputProc [20259 TcpOutEloop] - 'sslCertPath' deprecated; use 'clientCert' instead
03-30-2023 05:07:31.675 +0200 WARN TcpOutputProc [20259 TcpOutEloop] - 'sslCertPath' deprecated; use 'clientCert' instead
03-30-2023 05:07:31.685 +0200 WARN TcpOutputProc [20259 TcpOutEloop] - 'sslCertPath' deprecated; use 'clientCert' instead
03-30-2023 05:07:31.685 +0200 WARN TcpOutputProc [20259 TcpOutEloop] - 'sslCertPath' deprecated; use 'clientCert' instead
03-30-2023 05:07:31.685 +0200 WARN TcpOutputProc [20259 TcpOutEloop] - 'sslCertPath' deprecated; use 'clientCert' instead
03-30-2023 05:07:31.685 +0200 WARN TcpOutputProc [20259 TcpOutEloop] - 'sslCertPath' deprecated; use 'clientCert' instead
03-30-2023 05:07:31.695 +0200 INFO AutoLoadBalancedConnectionStrategy [20259 TcpOutEloop] - Will resolve indexer names at 330.000 second interval.
03-30-2023 05:07:36.671 +0200 INFO TailReader [20266 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
03-30-2023 05:07:37.774 +0200 INFO DC:DeploymentClient [20219 PhonehomeThread] - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
03-30-2023 05:07:49.774 +0200 INFO DC:DeploymentClient [20219 PhonehomeThread] - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
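
The err=not_connected messages suggest the UF cannot reach its destinations at all. A quick reachability test (a sketch; indexer_host and 9997 are placeholders for the receiving host and port configured in outputs.conf):

# list the configured output targets
/opt/splunkforwarder/bin/splunk btool outputs list tcpout

# test raw TCP reachability to a receiving indexer
nc -vz indexer_host 9997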

 

After a certain time I see the message "The TCP output processor has paused the data flow. Forwarding to host_dest..."

But I assume this is just a consequence of splunkd not being able to send data.
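
For reference, the queue state can be confirmed in metrics.log, where splunkd periodically logs blocked=true for queues that have filled (path assumes a default /opt/splunkforwarder install):

grep "blocked=true" /opt/splunkforwarder/var/log/splunk/metrics.log | tail -20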

 

Do you have any idea what could be going on?

 

Thanks in advance


gcusello
SplunkTrust

Hi @glpadilla_sol,

some questions to better understand the situation:

  • first of all, do you have this kind of problem only on this UF?
  • if yes, does this UF send a large volume of logs? (see the throughput check after this list)
  • could there be network congestion in the network segment between the UF and the indexers?
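
Regarding the second question, the UF's own metrics.log records its sending rate; something like this (a sketch; path assumes a default /opt/splunkforwarder install) shows the recent average and instantaneous throughput:

grep "group=thruput, name=thruput" /opt/splunkforwarder/var/log/splunk/metrics.log | tail -5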

please try to modify the following parameters, and then restart Splunk on the UF.

In the UF's server.conf:

[queue]
maxSize = 10MB

and in the UF's limits.conf:

[thruput]
maxKBps = 2048

In this way you enlarge the UF's queue and raise its throughput limit.
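
After the restart, you can verify that both settings are in effect with btool (a quick sketch; the path assumes a default /opt/splunkforwarder install):

/opt/splunkforwarder/bin/splunk btool server list queue
/opt/splunkforwarder/bin/splunk btool limits list thruput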

Ciao.

Giuseppe
