Getting Data In

Why am I getting these errors on my Heavy Forwarder? It's not sending anything

adrifesa95
Engager

Hello,

 

I am receiving these errors and my HF is not working properly. I think it is related to SSL interception and the intermediate and root CAs, but I have not been able to pin it down.

  • Root Cause(s):
    • More than 70% of forwarding destinations have failed. Ensure your hosts and ports in outputs.conf are correct. Also ensure that the indexers are all running, and that any SSL certificates being used for forwarding are correct.
  • Last 50 related messages:
    • 03-15-2024 08:14:15.748 -0400 WARN AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Applying quarantine to ip=34.216.133.150 port=9997 connid=0 _numberOfFailures=2
    • 03-15-2024 08:14:15.530 -0400 WARN AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Applying quarantine to ip=35.162.96.25 port=9997 connid=0 _numberOfFailures=2
    • 03-15-2024 08:14:15.296 -0400 WARN AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Applying quarantine to ip=44.231.134.204 port=9997 connid=0 _numberOfFailures=2
    • 03-15-2024 08:14:14.425 -0400 INFO AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Removing quarantine from idx=44.231.134.204:9997 connid=0
    • 03-15-2024 08:14:14.425 -0400 INFO AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Removing quarantine from idx=35.162.96.25:9997 connid=0
    • 03-15-2024 08:14:14.425 -0400 INFO AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Removing quarantine from idx=34.216.133.150:9997 connid=0
    • 03-15-2024 08:12:56.049 -0400 WARN AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Applying quarantine to ip=35.162.96.25 port=9997 connid=0 _numberOfFailures=2

This is my outputs.conf:

[tcpout]
defaultGroup = indexers
[tcpout:indexers]
server = inputs1.tenant.splunkcloud.com:9997, inputs2.tenant.splunkcloud.com:9997, inputs3.tenant.splunkcloud.com:9997, inputs4.tenant.splunkcloud.com:9997, inputs5.tenant.splunkcloud.com:9997, inputs6.tenant.splunkcloud.com:9997, inputs7.tenant.splunkcloud.com:9997, inputs8.tenant.splunkcloud.com:9997, inputs9.tenant.splunkcloud.com:9997, inputs10.tenant.splunkcloud.com:9997, inputs11.tenant.splunkcloud.com:9997, inputs12.tenant.splunkcloud.com:9997, inputs13.tenant.splunkcloud.com:9997, inputs14.tenant.splunkcloud.com:9997, inputs15.tenant.splunkcloud.com:9997
forceTimebasedAutoLB = true
autoLBFrequency = 40

 

0 Karma

adrifesa95
Engager

Hello,

I solved it by reinstalling the universal forwarder credentials package.

 

It is connected now, but I am not receiving data.

Can you help me troubleshoot a Splunk deployment where I am sending syslog events from a firewall to a heavy forwarder, which then has to forward them to Splunk Cloud?

These are the .conf files

inputs.conf
[udp://1514]
sourcetype = pan:firewall
no_appending_timestamp = true
index = mx_paloalto
disabled = 0

[splunktcp://9997]
disabled = 0


outputs.conf
[tcpout]
defaultGroup = splunkcloud_20231028_9aaa4b04216cd9a0a4dc1eb274307fd1
useACK = true

[tcpout:splunkcloud_20231028_9aaa4b04216cd9a0a4dc1eb274307fd1]
server = inputs1.tenant.splunkcloud.com:9997, inputs2.tenant.splunkcloud.com:9997, inputs3.tenant.splunkcloud.com:9997, inputs4.tenant.splunkcloud.com:9997, inputs5.tenant.splunkcloud.com:9997, inputs6.tenant.splunkcloud.com:9997, inputs7.tenant.splunkcloud.com:9997, inputs8.tenant.splunkcloud.com:9997, inputs9.tenant.splunkcloud.com:9997, inputs10.tenant.splunkcloud.com:9997, inputs11.tenant.splunkcloud.com:9997, inputs12.tenant.splunkcloud.com:9997, inputs13.tenant.splunkcloud.com:9997, inputs14.tenant.splunkcloud.com:9997, inputs15.tenant.splunkcloud.com:9997
compressed = false

clientCert = $SPLUNK_HOME/etc/apps/100_tenant_splunkcloud/default/tenant_server.pem

sslCommonNameToCheck = *.tenant.splunkcloud.com
sslVerifyServerCert = true
sslVerifyServerName = true
useClientSSLCompression = true
autoLBFrequency = 120

[tcpout:scs]
disabled=1
server = tenant.forwarders.scs.splunk.com:9997
compressed = true
clientCert = $SPLUNK_HOME/etc/apps/100_tenant_splunkcloud/default/tenant_server.pem
sslAltNameToCheck = *.forwarders.scs.splunk.com
sslVerifyServerCert = true
useClientSSLCompression = false
autoLBFrequency = 120


server.conf
[general]
serverName = hvyfwd
pass4SymmKey = $7$7+sDZpk4U5p8+jEvGlsFjca8/McSNMoOO/O4HIN+nkKs0FoDGr5s6Q==

[sslConfig]
sslPassword = $7$FMfYp/ZEJtp12iajMolR3PORwlFOl4WgEuJSfl2YIjfBn7Dw7t/ILg==

[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
peers = *
quota = MAX
stack_id = download-trial

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
peers = *
quota = MAX
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
peers = *
quota = MAX
stack_id = free

[license]
active_group = Forwarder


and this is the output of the tcpdump:

[root@hvyfwd local]# tcpdump -i any udp port 1514
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked v1), capture size 262144 bytes
11:26:45.136626 IP static-confidential_ip.47441 > hvyfwd.fujitsu-dtcns: UDP, length 652
11:26:45.136752 IP static-confidential_ip.47441 > hvyfwd.fujitsu-dtcns: UDP, length 658
11:26:45.136771 IP static-confidential_ip.35720 > hvyfwd.fujitsu-dtcns: UDP, length 661
11:26:45.136796 IP static-confidential_ip.35720 > hvyfwd.fujitsu-dtcns: UDP, length 752
11:26:45.136861 IP static-confidential_ip.47441 > hvyfwd.fujitsu-dtcns: UDP, length 715
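tcpdump confirms the packets are reaching the host, so the next question is whether splunkd's UDP input is actually consuming them. One place to check is the per-source throughput lines in metrics.log (the path below assumes the default $SPLUNK_HOME/var/log/splunk/metrics.log; the sample line is illustrative, not taken from my system):

```shell
# Does splunkd report throughput for the udp:1514 input? On a live HF:
#   grep 'series="udp:1514"' "$SPLUNK_HOME/var/log/splunk/metrics.log" | tail -5
# A healthy input produces per_source_thruput lines like the illustrative
# sample below; here we just demonstrate the grep against that sample.
sample='03-15-2024 11:27:01.123 -0400 INFO  Metrics - group=per_source_thruput, series="udp:1514", kbps=12.5, eps=40.2, kb=400.1, ev=1286'
printf '%s\n' "$sample" | grep -c 'series="udp:1514"'   # prints 1
```

If no such lines appear on the live system, splunkd is not reading the port at all (binding or permission problem); if they appear with non-zero kb, the data is being consumed and the problem is downstream.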



0 Karma

isoutamo
SplunkTrust
SplunkTrust

Are you getting IHF’s internal logs into SCP? Or any other logs via this IHF?

0 Karma

adrifesa95
Engager

Yes, it is active in the Cloud Monitoring Console and I receive its events in _internal.

0 Karma

isoutamo
SplunkTrust
SplunkTrust

Ok, then you should check on SCP whether those events went to the wrong index or have wrong timestamps. You should also search into the future, e.g. with latest set to now + 1 year or so.
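To sketch what I mean, searches along these lines on SCP should reveal misrouted or mistimed events (the index name is taken from your inputs.conf):

```
index=* sourcetype=pan:firewall earliest=-24h
| stats count by index

index=mx_paloalto earliest=-24h latest=+1y
| eval lag_s = _indextime - _time
| stats min(_time) AS oldest max(_time) AS newest avg(lag_s) AS avg_lag
```

The first shows whether the events landed somewhere other than mx_paloalto; the second shows whether timestamps drifted far from index time.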

0 Karma

adrifesa95
Engager

nothing found

0 Karma

isoutamo
SplunkTrust
SplunkTrust

Maybe you should check this?

[udp://<remote server>:<port>]
* Similar to the [tcp://] stanza, except that this stanza causes the Splunk
  instance to listen on a UDP port.
* Only one stanza per port number is currently supported.
* Configures the instance to listen on a specific port.
* If you specify <remote server>, the specified port only accepts data
  from that host.
* If <remote server> is empty - [udp://<port>] - the port accepts data sent
  from any host.
  * The use of <remote server> is not recommended. Use the 'acceptFrom'
    setting, which supersedes this setting.
* Generates events with source set to udp:portnumber, for example: udp:514
* If you do not specify a sourcetype, generates events with sourcetype set
  to udp:portnumber.

Even though the example shows that the colon is not mandatory when only a port is defined, I would test it as

[udp://:1514]

to make sure that this is not the issue.
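Alternatively, if you do want to restrict which hosts can send, the spec text above recommends acceptFrom rather than the <remote server> form; a sketch (the CIDR is a placeholder for your firewall's network, not something from your config):

```
[udp://1514]
sourcetype = pan:firewall
no_appending_timestamp = true
index = mx_paloalto
disabled = 0
acceptFrom = 10.0.0.0/8
```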

0 Karma

adrifesa95
Engager

I see that the index on the Heavy Forwarder is empty.

0 Karma

PickleRick
SplunkTrust
SplunkTrust

Wait a second. You're looking for events on the HF? It doesn't (at least shouldn't) work that way. A forwarder, as the name says, is a component which forwards data from input(s) to output(s). If properly configured, HF should not index events locally.
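(If you did want the HF to keep a local copy for debugging, that behavior is opt-in; as far as I know it is controlled by indexAndForward in outputs.conf, which defaults to false, and that default is exactly why your local index is empty:)

```
# outputs.conf on the HF - index locally in addition to forwarding
# (debugging aid only; it duplicates storage and license usage)
[tcpout]
defaultGroup = splunkcloud_20231028_9aaa4b04216cd9a0a4dc1eb274307fd1
indexAndForward = true
```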

0 Karma

adrifesa95
Engager

OK, but there is also nothing in SCP.

0 Karma

adrifesa95
Engager

Do I need to configure rsyslog?

0 Karma

adrifesa95
Engager

Any help?

0 Karma

adrifesa95
Engager

Good morning,

 

I suspect the certificates, because SSL inspection is in place, and I also suspect the time, because SCP uses European time and the server uses American time. Is there anything I could check?

 

Thank you

0 Karma

PickleRick
SplunkTrust
SplunkTrust

First and foremost - look into your _internal index for errors. There you should find some indication as to why the connections downstream don't work.
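For example, a search along these lines (AutoLoadBalancedConnectionStrategy appears in your own warnings; TcpOutputProc and SSLCommon are other components commonly involved in forwarding/TLS failures):

```
index=_internal sourcetype=splunkd host=hvyfwd
    (log_level=ERROR OR log_level=WARN)
    (component=TcpOutputProc OR component=AutoLoadBalancedConnectionStrategy OR component=SSLCommon)
```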

But your hunch about TLS inspection may be right. If your SSL visibility solution creates certificates with a CA your HF doesn't know - it will not connect to the receivers in the cloud because the connections are not trusted (the authenticity of the certificate cannot be verified by any known CA certificate).

If this is the case you have two possible solutions.

1) Create an exception in your TLS inspection policy (which makes sense in a typical use case since you typically don't need and don't want to inspect the Splunk traffic - there isn't much to be inspected there)

2) Deploy your organization's RootCA to the HF so that the cert created by your TLS inspection solution is deemed trusted.

I'd probably push for the former solution but YMMV.
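To illustrate the mechanism (this is a local sketch with made-up CA names, not your actual certs): a certificate signed by a CA the client does not know fails verification until that CA is added to the trust store, which is exactly what option 2 does:

```shell
# Sketch of the trust failure: a leaf cert signed by an "inspection" CA
# verifies only once that CA is explicitly trusted. All names are made up.
set -e
tmp=$(mktemp -d); cd "$tmp"

# CA standing in for the TLS-inspection appliance's signing CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
    -days 1 -subj "/CN=Inspection-CA" 2>/dev/null

# Leaf cert impersonating the Splunk Cloud receiver
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
    -subj "/CN=inputs1.tenant.splunkcloud.com" 2>/dev/null
openssl x509 -req -in leaf.csr -CA ca.pem -CAkey ca.key \
    -CAcreateserial -out leaf.pem -days 1 2>/dev/null

# Without the CA in the trust store, the HF's situation is reproduced:
openssl verify leaf.pem 2>/dev/null || echo "not trusted (as expected)"
# With the CA trusted (solution 2), verification succeeds:
openssl verify -CAfile ca.pem leaf.pem    # prints "leaf.pem: OK"
```

On the real HF you can see which CA actually signs the presented certificate with `openssl s_client -connect inputs1.tenant.splunkcloud.com:9997 -showcerts` and look at the issuer; if it is your corporate CA rather than the Splunk Cloud chain, the inspection appliance is intercepting the traffic.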

0 Karma

isoutamo
SplunkTrust
SplunkTrust

It doesn't matter whether the timezones differ. Certificate validity calculations are done in UTC (time_t) anyhow. The key point is that the clocks show the correct time, i.e. no drift bigger than a couple of minutes. Usually if it's more than 5 minutes, it stops working.

Has this worked earlier, or is this a new installation?

0 Karma

isoutamo
SplunkTrust
SplunkTrust

Hi

Has this ever worked?

Maybe the easiest fix would be to download a fresh UF credentials package from your SCP and install it again on your HF.

One reason could be that those certificates have expired. Another could be that your node has the wrong time.
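Checking expiry is quick with openssl (the cert path is the one from your outputs.conf; the demo below generates a throwaway cert so it can run anywhere):

```shell
# On the HF, check the client cert referenced in outputs.conf:
#   openssl x509 -enddate -noout -in "$SPLUNK_HOME/etc/apps/100_tenant_splunkcloud/default/tenant_server.pem"
#   openssl x509 -checkend 0 -noout -in <same path>   # exit 0 = still valid
# Demonstrated here on a throwaway 1-day self-signed cert:
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/k.pem" \
    -out "$tmp/c.pem" -days 1 -subj "/CN=demo" 2>/dev/null
openssl x509 -checkend 0 -noout -in "$tmp/c.pem"   # prints "Certificate will not expire"
```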

r. Ismo

0 Karma