Getting Data In

How to send logs from Universal Forwarders to Heavy Forwarders?

nagesh
Loves-to-Learn Everything

I am trying to send data from a client machine with a Universal Forwarder (UF) installed to a Heavy Forwarder (HF) installed on another machine, but I am getting the errors below.

12-06-2023 10:01:22.626 +0100 INFO  ClientSessionsManager [3779231 TcpChannelThread] - Adding client: ip=10.112.73.20 uts=windows-x64 id=86E862DA-2CDC-4B21-9E37-45DFF4C5EFBE name=86E862DA-2CDC-4B21-9E37-45DFF4C5EFBE

12-06-2023 10:01:22.626 +0100 INFO  ClientSessionsManager [3779231 TcpChannelThread] - ip=10.112.73.20 name=86E862DA-2CDC-4B21-9E37-45DFF4C5EFBE New record for sc=100_IngestAction_AutoGenerated app=splunk_ingest_actions: action=Phonehome result=Ok checksum=0

12-06-2023 10:01:24.551 +0100 INFO  AutoLoadBalancedConnectionStrategy [3778953 TcpOutEloop] - Removing quarantine from idx=3.234.1.140:9997 connid=0

12-06-2023 10:01:24.551 +0100 INFO  AutoLoadBalancedConnectionStrategy [3778953 TcpOutEloop] - Removing quarantine from idx=54.85.90.105:9997 connid=0

12-06-2023 10:01:24.784 +0100 ERROR TcpOutputFd [3778953 TcpOutEloop] - Read error. Connection reset by peer

12-06-2023 10:01:25.028 +0100 ERROR TcpOutputFd [3778953 TcpOutEloop] - Read error. Connection reset by peer

12-06-2023 10:01:28.082 +0100 WARN  TcpOutputProc [3779070 indexerPipe_1] - The TCP output processor has paused the data flow. Forwarding to host_dest=inputs10.align.splunkcloud.com inside output group default-autolb-group from host_src=prdpl2splunk02.aligntech.com has been blocked for blocked_seconds=60. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

0 Karma

soniya-01
Loves-to-Learn

Sending logs from Universal Forwarders to Heavy Forwarders is like passing along important messages from one person to another in a relay. Here's a simple way to understand it:

  1. Imagine Passing Notes:

    • Think of Universal Forwarders as individuals who have notes (logs) with important information. Heavy Forwarders are the ones ready to collect and manage these notes.
  2. Universal Forwarders (Note Holders):

    • Universal Forwarders are like people holding notes (logs) and standing in a line. They generate logs from different sources on a computer.
  3. Heavy Forwarders (Note Collectors):

    • Heavy Forwarders are the ones waiting at the end of the line to collect these notes (logs) from the Universal Forwarders.
  4. Setting Up the Relay:

    • You set up a system where each person (Universal Forwarder) in the line passes their note (log) to the next person (Heavy Forwarder) until it reaches the end.
  5. Configuring Universal Forwarders:

    • On each computer with a Universal Forwarder, you configure it to know where the next person (Heavy Forwarder) is in line. This is like telling each note holder where to pass their note.
  6. Logs Move Down the Line:

    • As logs are generated, each Universal Forwarder sends them across the network to the Heavy Forwarder.
  7. Heavy Forwarder Collects and Manages:

    • The Heavy Forwarder collects all the notes (logs) from different Universal Forwarders. It's like the person at the end of the line collecting all the notes to manage and make sense of them.
  8. Centralized Log Management:

    • Now, all the important information is centralized on the Heavy Forwarder, making it easier to analyze and keep track of everything in one place.

In technical terms, configuring Universal Forwarders to send logs to Heavy Forwarders involves setting up these systems to efficiently collect and manage logs from different sources across a network. It's like orchestrating a relay of information to ensure that important data reaches its destination for centralized management and analysis.
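The relay above boils down to two small config files. Here is a minimal sketch (the hostname is a placeholder, not taken from this thread): on each UF, outputs.conf points at the HF, and on the HF, inputs.conf opens the receiving port.

```ini
# On the UF: $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
server = my-heavy-forwarder.example.com:9997

# On the HF: $SPLUNK_HOME/etc/system/local/inputs.conf
[splunktcp://9997]
disabled = 0
```

After editing either file, restart the respective Splunk instance so the change takes effect.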

0 Karma

nagesh
Loves-to-Learn Everything

Yes, I have already created the outputs.conf file and added the required info.

It is placed under the etc/system/local/ folder.

[tcpout]

defaultGroup = default-autolb-group

indexAndForward = 0

negotiateProtocolLevel = 0

sslCommonNameToCheck = *.<<stack>>.splunkcloud.com

sslVerifyServerCert = true

useClientSSLCompression = true

[tcpout-server://inputs1.<<stack>>.splunkcloud.com:9997]

[tcpout-server://inputs2.<<stack>>.splunkcloud.com:9997]

[tcpout-server://inputs14.align.splunkcloud.com:9997]


[tcpout:default-autolb-group]

disabled = false

server = 54.85.90.105:9997, inputs2.<<stack>>.splunkcloud.com:9997, inputs3.<<stack>>.splunkcloud.com:9997, .....
inputs15.<<stack>>.splunkcloud.com:9997

[tcpout-server://inputs15.<<stack>>.splunkcloud.com:9997]

sslCommonNameToCheck = *.<<stack>>.splunkcloud.com

sslVerifyServerCert = false

sslVerifyServerName = false

useClientSSLCompression = true

autoLBFrequency = 120

[tcpout:scs]

disabled=1

server = stack.forwarders.scs.splunk.com:9997

compressed = true
0 Karma

nagesh
Loves-to-Learn Everything

Any suggestions?

0 Karma

gcusello
SplunkTrust

Hi @nagesh ,

it seems that there's a block in connections between the UF and the HF.

At first:

  • did you enable receiving on the HF?
  • did you enable forwarding to the HF on the UF?

Then, check the connection using telnet on the port you're using (default 9997).
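If telnet isn't available on the host, a plain socket check does the same job. A minimal sketch (the commented-out hostname is just an example):

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the HF's receiving port from the UF host.
# print(port_is_open("my-heavy-forwarder.example.com", 9997))
```

Note that this only proves the port is reachable; a TLS or certificate mismatch can still break forwarding after the TCP handshake succeeds.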

If it's all OK, you should have, in your Splunk (not on the HF), the Splunk internal logs from that UF:

index=_internal host=<your_UF_hostname>

Ciao.

Giuseppe

0 Karma

nagesh
Loves-to-Learn Everything

Yes, we have connectivity.

splunk.exe cmd btool outputs list
UF node:
[tcpout-server://UFnode:9997]
[tcpout:default-autolb-group]
server = UFnode:9997
HF:

(screenshot of the HF btool output attached: nagesh_0-1701857886624.png)

I am not getting the logs in Splunk while searching index="_internal" host="".

0 Karma

PickleRick
SplunkTrust

OK.

1. What is your setup? You seem to be trying to send the data to Cloud, right?

2. This is a log from where? UF or HF? Because it's trying to send to the Cloud directly. So if it's the UF's log, your output is not properly configured. If it's the HF's log, then your network port is not open on the firewall.

3. What's the whole point of pushing the data from the UF via an HF? Remember that a UF sends cooked data but an HF sends parsed data, which means roughly 6x the bandwidth (and you don't get to parse the data on the indexers, so some parts of your configuration might not work the way you expect).

0 Karma

nagesh
Loves-to-Learn Everything

Yes, I am trying to send the data to Splunk Cloud.

The log above is from the UF.

[root@HFNode bin]# telnet inputs2.align.<<stack>>.com 9997

Trying 54.159.30.2...

Connected to inputs2.<<stack>>.splunkcloud.com.

Escape character is '^]'.

^C^C^CConnection closed by foreign host.
Connected successfully.

0 Karma

PickleRick
SplunkTrust

OK. So you have your UF pointed at the Cloud inputs, not at your HF. You should set your output to your HF.
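On the UF this can be done either by editing outputs.conf or with the forwarder CLI. A sketch, assuming the HF's hostname is my-heavy-forwarder.example.com (a placeholder):

```shell
# Remove the Cloud inputs from the UF's output list (repeat per server)
$SPLUNK_HOME/bin/splunk remove forward-server inputs1.stack.splunkcloud.com:9997

# Point the UF at the HF instead
$SPLUNK_HOME/bin/splunk add forward-server my-heavy-forwarder.example.com:9997

# Restart so the change takes effect
$SPLUNK_HOME/bin/splunk restart
```

The HF itself then keeps the Cloud credentials app with the outputs pointing at the splunkcloud.com inputs.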

0 Karma

nkamma
Loves-to-Learn Lots

Can you provide any suggestions to resolve this issue?

0 Karma

PickleRick
SplunkTrust

OK. You're posting those config snippets a bit chaotically.

Please do a

splunk btool inputs list splunktcp

and

splunk btool outputs list tcpout

On both of your components.

And while posting the results here please use either code block (the </> sign on top of the editing window here on the Answers forum) or the "preformatted" paragraph style. Makes it way easier to read.
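A note on btool: adding the --debug flag prints which file each effective setting comes from, which helps when etc/system/local and app-level configs overlap. For example:

```shell
# Show effective output settings with file provenance
$SPLUNK_HOME/bin/splunk btool outputs list tcpout --debug

# Show effective receiving settings on the HF
$SPLUNK_HOME/bin/splunk btool inputs list splunktcp --debug
```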

0 Karma

nkamma
Loves-to-Learn Lots
From the UF:
[splunktcp]
_rcvbuf = 1572864
acceptFrom = *
connection_host = ip
evt_dc_name =
evt_dns_name =
evt_resolve_ad_obj = 0
host = prdpl2bcl1101
index = default
logRetireOldS2S = true
logRetireOldS2SMaxCache = 10000
logRetireOldS2SRepeatFrequency = 1d
route = has_key:tautology:parsingQueue;absent_key:tautology:parsingQueue

Splunkcloud inputs machine:
[root@servername bin]# ./splunk btool inputs list splunktcp
[splunktcp]
_rcvbuf = 1572864
acceptFrom = *
connection_host = ip
host = servername.aligntech.com
index = default
logRetireOldS2S = true
logRetireOldS2SMaxCache = 10000
logRetireOldS2SRepeatFrequency = 1d
route = has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:rulesetQueue;absent_key:_linebreaker:parsingQueue
[splunktcp://9997]
_rcvbuf = 1572864
connection_host = ip
host = servername.aligntech.com
index = default
0 Karma

PickleRick
SplunkTrust

OK. So these are your inputs.

And your outputs?

0 Karma

nkamma
Loves-to-Learn Lots
Please see below.
[root@prdpl2splunk02 bin]# ./splunk btool outputs list 
[rfs]
batchSizeThresholdKB = 131072
batchTimeout = 30
compression = zstd
compressionLevel = 3
dropEventsOnUploadError = false
format = json
format.json.index_time_fields = true
format.ndjson.index_time_fields = true
partitionBy = legacy
[syslog]
maxEventSize = 1024
priority = <13>
type = udp
[tcpout]
ackTimeoutOnShutdown = 30
autoLBFrequency = 30
autoLBVolume = 0
blockOnCloning = true
blockWarnThreshold = 100
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-SHA256
compressed = false
connectionTTL = 0
connectionTimeout = 20
defaultGroup = default-autolb-group
disabled = false
dropClonedEventsOnQueueFull = 5
dropEventsOnQueueFull = -1
ecdhCurves = prime256v1, secp384r1, secp521r1
enableOldS2SProtocol = false
forceTimebasedAutoLB = false
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_introspection|_telemetry)
forwardedindex.filter.disable = false
heartbeatFrequency = 30
indexAndForward = 0
maxConnectionsPerIndexer = 2
maxFailuresPerInterval = 2
maxQueueSize = 500KB
negotiateProtocolLevel = 0
readTimeout = 300
secsInFailureInterval = 1
sendCookedData = true
sslCommonNameToCheck = *.align.splunkcloud.com
sslQuietShutdown = false
sslVerifyServerCert = true
sslVersions = tls1.2
tcpSendBufSz = 0
useACK = false
useClientSSLCompression = true
writeTimeout = 300
[tcpout-server://inputs1.stack.splunkcloud.com:9997]

[tcpout-server://inputs15.stack.splunkcloud.com:9997]
autoLBFrequency = 120
sslCommonNameToCheck = *.stack.splunkcloud.com
sslVerifyServerCert = false
sslVerifyServerName = false
useClientSSLCompression = true
[tcpout-server://inputs2.stack.splunkcloud.com:9997]
[tcpout-server://inputs3.stack.splunkcloud.com:9997]
[tcpout-server://inputs4.stack.splunkcloud.com:9997]
[tcpout-server://inputs5.stack.splunkcloud.com:9997]
[tcpout-server://inputs6.stack.splunkcloud.com:9997]
[tcpout-server://inputs7.stack.splunkcloud.com:9997]
[tcpout-server://inputs8.stack.splunkcloud.com:9997]
[tcpout-server://inputs9.stack.splunkcloud.com:9997]
[tcpout:default-autolb-group]
disabled = false
server = 54.85.90.105:9997, inputs2.stack.splunkcloud.com:9997, inputs3.stack.splunkcloud.com:9997,...... inputs15.stack.splunkcloud.com:9997
[tcpout:scs]
compressed = true
disabled = 1
server = stack.forwarders.scs.splunk.com:9997
UF Output:
[rfs]
batchSizeThresholdKB = 131072
batchTimeout = 30
compression = zstd
compressionLevel = 3
dropEventsOnUploadError = false
format = json
format.json.index_time_fields = true
format.ndjson.index_time_fields = true
partitionBy = legacy
[syslog]
maxEventSize = 1024
priority = <13>
type = udp
[tcpout]
ackTimeoutOnShutdown = 30
autoLBFrequency = 30
autoLBVolume = 0
blockOnCloning = true
blockWarnThreshold = 100
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-SHA256
compressed = false
connectionTTL = 0
connectionTimeout = 20
defaultGroup = default-autolb-group
disabled = false
dropClonedEventsOnQueueFull = 5
dropEventsOnQueueFull = -1
ecdhCurves = prime256v1, secp384r1, secp521r1
enableOldS2SProtocol = false
forceTimebasedAutoLB = false
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_introspection|_internal|_telemetry|_configtracker)
forwardedindex.filter.disable = false
heartbeatFrequency = 30
indexAndForward = false
maxConnectionsPerIndexer = 2
maxFailuresPerInterval = 2
maxQueueSize = auto
readTimeout = 300
secsInFailureInterval = 1
sendCookedData = true
sslQuietShutdown = false
sslVersions = tls1.2
tcpSendBufSz = 0
useACK = false
useClientSSLCompression = true
writeTimeout = 300
[tcpout-server://prdpl2splunk02.domainame.com:9997]
[tcpout:default-autolb-group]
server = prdpl2splunk02.domainame.com:9997
0 Karma

PickleRick
SplunkTrust

At first glance it looks relatively OK. You have your inputs matching your outputs.

Check your splunkd.log on the sending UF and the receiving HF. There should be hints as to the reason for lack of connectivity. If nothing else helps - try to tcpdump the traffic and see what's going on there.

EDIT: OK, your initial post says that you get "Connection reset by peer", but it's a bit unclear which side it's from.

0 Karma