Getting Data In

The indexer is not seeing the forwarding hosts or forwarded data

davidschatz
New Member

Hi,

The indexer (Ubuntu) is not seeing data from the forwarder (also Ubuntu). This is a new install of Splunk Free 6.5.3.

1) The forwarder's outputs.conf has:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 192.168.x.x:8888

[tcpout-server://192.168.x.x:8888]

2) The forwarder can telnet to the indexer:

 sudo telnet 192.168.x.x 8888
 Trying 192.168.x.x...
 Connected to 192.168.x.x.

3) On the forwarder, splunkd.log shows:

04-12-2017 18:30:53.058 -0400 INFO  TcpOutputProc - Connected to idx=192.168.x.x:8888

4) The indexer's inputs.conf has:

[default]
host = ip-192-168-x-x

[splunktcp://8888]
disabled = 0

[tcp://:8888]
connection_host = dns
source = tcp:8888
disabled = 0

5) The indexer's splunkd.log shows:

04-12-2017 14:16:59.138 -0400 WARN  IndexerService - Received event for unconfigured/disabled/deleted index=_internal 
    with source="source::/opt/splunkforwarder/var/log/splunk/conf.log" host="host::ip-192-168-x-x" 
    sourcetype="sourcetype::splunkd_conf".  So far received events from 1 missing index(es).
04-12-2017 14:17:11.201 -0400 WARN  IndexerService - Received event for unconfigured/disabled/deleted 
    index=_introspection with source="source::/opt/splunk/var/log/introspection/disk_objects.log" host="host::ip-192-168-20-11" 
    sourcetype="sourcetype::splunk_disk_objects".  So far received events from 2 missing index(es).
04-12-2017 14:19:33.320 -0400 ERROR TcpInputProc - Message rejected. Received unexpected message of size=1583156490 
    bytes from src=192.168.10.11:48660 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid 
    source sending data to splunktcp port or valid source sending unsupported payload.

6) On the indexer, the search "index=_internal host=*" shows:
No results found

0 Karma
1 Solution

gcusello
SplunkTrust

Hi davidschatz,
it's difficult to understand your problem without seeing the configuration, but I noticed a strange thing:
why did you configure inputs from forwarders (splunktcp) and inputs from the network (tcp) on the same port?
Probably there's a conflict.
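
As a rough sketch of what I mean, the two input types could live on separate ports in inputs.conf on the indexer; 9997 for forwarder traffic and 5514 for raw TCP are only example values, not taken from your post:

[splunktcp://9997]
disabled = 0

[tcp://5514]
connection_host = dns
disabled = 0

With that layout, forwarders point outputs.conf at 9997 and any syslog-style senders at 5514, so the two protocols never share a listener.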

Bye.
Giuseppe

0 Karma

jonmargulies
Path Finder

Hi David,

The logs you posted show that the Splunk indexer doesn't think you have the _internal and _introspection indexes. These two indexes exist on every Splunk server by default, and they are configured in $SPLUNK_HOME/etc/system/default/indexes.conf. So I see a few possibilities:
1) You edited $SPLUNK_HOME/etc/system/default/indexes.conf. If you did that, the best thing you can do is figure out what else you've been changing in $SPLUNK_HOME/etc/system/default, pull out your changes and copy them into notes, and then reinstall Splunk from scratch. It's really important that you leave $SPLUNK_HOME/etc/system/default alone.
2) You created a local indexes.conf, either in $SPLUNK_HOME/etc/system/local or in an app, and in that indexes.conf you expressly disabled _internal and _introspection (a quick way to check this is sketched after this list).
3) Splunk couldn't create the index directories (by default, these are in $SPLUNK_HOME/var/lib/splunk) due to a permissions issue. Maybe you installed Splunk as root but then ran it as a regular user without changing any permissions? If it is a permissions issue, you'd have to fix that. It might be as simple as stopping Splunk, recursively chowning the whole Splunk directory to a regular user, and then setting Splunk to start as that regular user (instructions here: http://docs.splunk.com/Documentation/Splunk/6.5.3/Installation/RunSplunkasadifferentornon-rootuser#U...).
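
A quick, hedged way to check possibilities 2 and 3 from the shell; the paths below assume a default /opt/splunk install and a user named splunk, so adjust them to your layout:

# Show every indexes.conf that defines _internal and which file wins
sudo /opt/splunk/bin/splunk btool indexes list _internal --debug

# If it turns out to be permissions, hand the install to the splunk user
# and make splunkd start as that user from now on
sudo /opt/splunk/bin/splunk stop
sudo chown -R splunk:splunk /opt/splunk
sudo /opt/splunk/bin/splunk enable boot-start -user splunk
sudo -u splunk /opt/splunk/bin/splunk start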

0 Karma

davidschatz
New Member

Hi Giuseppe and Jon,

Guys - thanks for your help and detailed answers. In the end, the problem was user error (mine, of course). The following are basic troubleshooting steps that can be used to identify installation/configuration errors, and hopefully save others the head-banging of a new install at 3 AM.

1) Ensure that the Splunk service is indeed running on the indexer host (no Splunk, no communication):
ps aux | egrep splunk
sudo splunk status

2) Ensure that the port the indexer host is listening on matches the one the forwarders are sending data to (the deaf ear does not listen):
sudo netstat -l

3) Ensure that ALL firewalls between the forwarder and the listening port are open (threading the needle):
telnet 192.168.x.x 9997

4) Ensure that Splunk is running on the forwarder (no quarterback, no pass):
sudo splunk status

5) Ensure that the forwarder is correctly "registered" with the indexer (no voter registration, no vote):
sudo splunk list forward-server

Finally, it's a good check to reboot both the indexer and the forwarder after installation ("registering" the indexer). Restarting the Splunk services on both should be enough, but if sudo splunk enable boot-start was missed, there will be no Splunk service after a reboot.
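
For reference, here is a rough sketch of the CLI commands that wire up both sides; the /opt/splunk and /opt/splunkforwarder paths and port 9997 are just the defaults I used, so adjust as needed:

# On the indexer: open the receiving port and enable start at boot
sudo /opt/splunk/bin/splunk enable listen 9997
sudo /opt/splunk/bin/splunk enable boot-start

# On the forwarder: register the indexer and enable start at boot
sudo /opt/splunkforwarder/bin/splunk add forward-server 192.168.x.x:9997
sudo /opt/splunkforwarder/bin/splunk enable boot-start
sudo /opt/splunkforwarder/bin/splunk list forward-server

splunk list forward-server should show the indexer under "Active forwards:"; if it stays under "Configured but inactive forwards:", the connection is still not being made.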

Again, thanks.

David

0 Karma

gcusello
SplunkTrust

Hi davidschatz,
you already did the first check (telnet), and the first problem (connection) should be OK because the forwarder's logs show "TcpOutputProc - Connected to idx=192.168.x.x:8888".

the second check is a very basic question: is your _internal index enabled?
You can verify it in [Settings -- Indexes]; it's a trivial check, but worth a look!
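
If you prefer a search over clicking through Settings, a REST search run on the indexer should show the index state; the fields below are the standard ones exposed by the data/indexes endpoint:

| rest /services/data/indexes splunk_server=local
| search title=_internal OR title=_introspection
| table title disabled currentDBSizeMB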

the third check is to verify that the forwarder and the indexer have their clocks aligned; do you use an NTP server?
You can also check this point by running a search with the time period "All time":

index=_internal host=forwarder_hostname 
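
A quick way to eyeball the clock alignment from the shell, assuming both machines are recent Ubuntu boxes with systemd (an assumption on my part):

# Run on both the forwarder and the indexer and compare the output
date -u
timedatectl status    # shows whether NTP synchronization is active

If the forwarder's clock is ahead of the indexer's, freshly indexed events can land outside the search window, which is why searching over "All time" is a useful test.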

Also, which user are you using to run Splunk, root or splunk?
This isn't a problem for the splunkd.log entries, but it could be a problem for other logs.
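
To see which user splunkd is actually running as (and who owns the index directories), something like this should do; the /opt/splunk path is only the default:

ps -o user,pid,args -C splunkd                 # the USER column shows root or splunk
stat -c '%U %n' /opt/splunk/var/lib/splunk     # owner of the index directories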

there are some interesting links in the Splunk Docs.

Bye.
Giuseppe

0 Karma

davidschatz
New Member

Hi Giuseppe,

Thanks for your quick answer.

Good call on the port conflict, but the indexer still sees no hosts. I removed the unneeded [tcp://:8888] input from inputs.conf on the indexer per your fix (as seen below), but the search "index=_internal host=*" still produces no results.

Beyond the "telnet 192.168.x.x 8888" check (which connects the forwarder to the indexer successfully), what are the next debugging steps?

David

===========================
[default]
host = ip-192-168-10-12

[splunktcp://8888]
disabled = 0
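
To double-check that this is really what splunkd loaded, these two commands on the indexer should confirm it; they assume a default /opt/splunk install:

sudo /opt/splunk/bin/splunk btool inputs list splunktcp --debug   # shows which file each setting comes from
sudo netstat -plnt | grep 8888                                    # splunkd should own the listening socket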

0 Karma