Getting Data In

Internal Log Forwarding

Mirza_Jaffar1
Loves-to-Learn

Configuring Internal Log Forwarding 

1- Environment: 1 search head (SH), 2 indexers (IDX), 2 intermediate forwarders (IF), 4 universal forwarders (UF), and 1 monitoring console (MC).

2- I can see only the indexers' internal logs, even though I have correctly configured and updated the server list under the [tcpout:primary_indexers] stanza in outputs.conf.

3- What could be the issue with this simple setup that keeps me from seeing the internal logs of the SH, IDX, MC, and IF?

Base config outputs.conf

# BASE SETTINGS

[tcpout]
defaultGroup = primary_indexers

# When indexing a large continuous file that grows very large, a universal
# or light forwarder may become "stuck" on one indexer, trying to reach
# EOF before being able to switch to another indexer. The symptoms of this
# are congestion on *one* indexer in the pool while others seem idle, and
# possibly uneven loading of the disk usage for the target index.
# In this instance, forceTimebasedAutoLB can help!
# ** Do not enable if you have events > 64kB **
# Use with caution, can cause broken events
#forceTimebasedAutoLB = true

# Correct an issue with the default outputs.conf for the Universal Forwarder
# or the SplunkLightForwarder app; these don't forward _internal events.
# 3/6/21 only required for versions prior to current supported forwarders.
# Check forwardedindex.2.whitelist in system/default config to verify
#forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)

[tcpout:primary_indexers]
server = server_one:9997, server_two:9997

# If you do not have two (or more) indexers, you must use the single stanza
# configuration, which looks like this:
#[tcpout-server://<ipaddress_or_servername>:<port>]
# <attribute1> = <val1>

 

# If setting compressed=true, this must also be set on the indexer.
# compressed = true

# INDEXER DISCOVERY (ASK THE CLUSTER MANAGER WHERE THE INDEXERS ARE)

# This particular setting identifies the tag to use for talking to the
# specific cluster manager, like the "primary_indexers" group tag here.
# indexerDiscovery = clustered_indexers

# It's OK to have a tcpout group like the one above *with* a server list;
# these will act as a seed until communication with the manager can be
# established, so it's a good idea to have at least a couple of indexers
# listed in the tcpout group above.

# [indexer_discovery:clustered_indexers]
# pass4SymmKey = <MUST_MATCH_MANAGER>
# This must include protocol and port like the example below.
# manager_uri = https://manager.example.com:8089

# SSL SETTINGS

# sslCertPath = $SPLUNK_HOME/etc/auth/server.pem
# sslRootCAPath = $SPLUNK_HOME/etc/auth/ca.pem
# sslPassword = password
# sslVerifyServerCert = true

# COMMON NAME CHECKING - NEED ONE STANZA PER INDEXER
# The same certificate can be used across all of them, but the configuration
# here requires these settings to be per-indexer, so the same block of
# configuration would have to be repeated for each.
# [tcpout-server://10.1.12.112:9997]
# sslCertPath = $SPLUNK_HOME/etc/certs/myServerCertificate.pem
# sslRootCAPath = $SPLUNK_HOME/etc/certs/myCAPublicCertificate.pem
# sslPassword = server_privkey_password
# sslVerifyServerCert = true
# sslCommonNameToCheck = servername
# sslAltNameToCheck = servername

Thanks for your time!


livehybrid
SplunkTrust

Hi @Mirza_Jaffar1 

Let's strip out all those comments; it looks like your applied config is:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = server_one:9997, server_two:9997

In theory this should work, but it relies on a number of assumptions. I don't think the lack of an indexAndForward setting is affecting this, because either way the data should be forwarded, so I won't focus on that.
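As a quick sanity check (a sketch, assuming a *nix install and default paths), you could confirm this really is the effective outputs.conf on each host that is not forwarding, in case an outputs.conf in another app overrides it:

$SPLUNK_HOME/bin/splunk btool outputs list tcpout --debug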

The first thing to check, on one of the hosts that aren't sending their internal logs, is $SPLUNK_HOME/var/log/splunk/splunkd.log for any errors relating to output directly on that server. Try the keyword "tcpoutputfd" - do you see any failures/errors?
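For example (a sketch, assuming a *nix host and the default log location), something like this surfaces the most recent output-related messages:

grep -iE "TcpOutputProc|TcpOutputFd" $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -n 20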

Can you confirm that you can connect to server_one and server_two from your hosts on port 9997? 

nc -vz -w1 server_one 9997

This will prove that the connectivity can be established correctly and that your indexers are listening. Are there any firewalls between your other servers and the indexers?
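On the forwarding hosts themselves, Splunk's own CLI can also show whether the output is actually in use (a sketch; it prompts for local admin credentials):

$SPLUNK_HOME/bin/splunk list forward-server

Destinations that show up as configured but inactive, rather than active, would also point to a connectivity or handshake problem.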

Lastly, what is the inputs.conf configuration on your indexers? Please check with btool - are you using any custom SSL certificates or requiring client certificates?

$SPLUNK_HOME/bin/splunk btool inputs list splunktcp --debug
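On the indexers you can also double-check that something is actually listening on 9997 (assuming Linux indexers; use whichever tool is available):

ss -tlnp | grep 9997
# or, if ss is unavailable:
netstat -an | grep 9997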


PickleRick
SplunkTrust

OK, this is the config. Now check your logs - splunkd.log on both ends of the connection, for each component. Many things could have gone wrong - network traffic being filtered, TLS misconfigured, an overzealous IPS...
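On the receiving side, for instance, grepping the indexer's splunkd.log for input-related components is a reasonable first pass (a sketch, assuming a *nix indexer and default paths):

grep -iE "TcpInputProc|TcpInputConfig" $SPLUNK_HOME/var/log/splunk/splunkd.log | grep -iE "ERROR|WARN"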


scelikok
SplunkTrust

Hi @Mirza_Jaffar1 ,

Maybe you did not enable receiving on the indexers?

inputs.conf
[splunktcp://9997]
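If that stanza is missing, receiving can be enabled either by adding it to inputs.conf or via the CLI (a sketch; replace the credentials with your own):

$SPLUNK_HOME/bin/splunk enable listen 9997 -auth admin:changeme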

 


isoutamo
SplunkTrust

Have you also added this to outputs.conf?

[indexAndForward]
index = false

https://help.splunk.com/en/splunk-enterprise/administer/distributed-search/9.4/deploy-distributed-se...
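For reference, the pattern from that docs page, adapted to the group name in the original post (a sketch, not a drop-in config), looks roughly like this:

[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = server_one:9997, server_two:9997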
