Before forwarding data I checked to see whether it was indexing properly, and it seemed to have no problems. However, once I turned on forwarding, the data shows up like this in the primary instance of Splunk:
_linebreaker\x00\x00\x00\x00\x6_time\x00\x00\x00\x00\xB1294233707\x00\x00\x00\x00\x6_conf\x00\x00\x00\x00gsource::/var/log/folder/SG1_main__10105132644.log|host::a-a.host.domain.com|bcoat_proxysg|\x00\x00\x00\x00\x10MetaData:Source\x00\x00\x00\x007source::/var/log/bcftpupload/SG1_main__10105132644.log\x00\x00\x00\x00\xEMetaData:Host\x00\x00\x00\x00!host::a-a.host.domain.com\x00\x00\x00\x00\x14MetaData:Sourcetype\x00\x00\x00\x00\x1Asourcetype::bcoat_proxysg\x00\x00\x00\x00\x10_MetaData:Index\x00\x00\x00\x00\x8default\x00\x00\x00\x00\x6_meta\x00\x00\x00\x00\xE0timestartpos::0
timeendpos::14 _subsecond::.171 date_second::47 date_hour::13
date_minute::21 date_year::2011 date_month::january date_mday::5
date_wday::wednesday date_zone::0
punct::.______..._/___://..//.?=&=&=&=_-_/.._/\x00\x00\x00\x00\x6_path\x00\x00\x00\x00//var/log/folder/SG1_main__10105132644.log\x00\x00\x00\x00 disabled\x00\x00\x00\x00\x6false\x00\x00\x00\x00\x8_rcvbuf\x00\x00\x00\x00\x81572864\x00\x00\x00\x00 _charSet\x00\x00\x00\x00\x6UTF-8\x00\x00\x00\x00\x00\x00\x00\x00\x5_raw\x00\x00\x00\x4G\x00\x00\x00\xE\x00\x00\x00\x5_raw\x00\x00\x00\x1
I am trying to forward data from Splunk (forward-only) to Splunk (our primary instance). I have set up a listener on the primary instance in inputs.conf:
[tcp://34002]
connection_host = none
host = bluecoat
sourcetype = bcoat_proxysg
The forwarder monitors the log data like so:
[monitor:///var/log/folder/]
disabled = false
whitelist = SG
sourcetype = bcoat_proxysg
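For completeness, the forwarder's outputs.conf points at the primary instance with something like the following (the group name and indexer host name below are placeholders; only the port matters here):
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = primary-splunk.domain.com:34002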
What is going on here?
The input port currently defined on the indexer:
[tcp://34002]
...should instead be:
[splunktcp://34002]
The indexer is receiving cooked data on a TCP port configured to receive uncooked data.
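In other words, the receiving stanza on the indexer would look roughly like this. With cooked data, the host and sourcetype metadata already arrive from the forwarder, so the overrides from the original [tcp://] stanza aren't needed:
[splunktcp://34002]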
Another reason this could happen: if you set up forwarders on your endpoints, point them at your indexer, and then look at the data and see this, you may have set up a TCP input for port 9997 instead of turning on receiving. No worries; just delete that input and then turn on 'Receiving' on your indexer. Remember, forwarders send data to the indexer (by default on TCP 9997), but you don't define an 'input' for this; rather, you tell your Splunk server to 'listen' for, i.e. receive, the data from the forwarders.
🙂
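If you'd rather skip the GUI, receiving can also be enabled from the command line; this simply creates a [splunktcp://9997] stanza in inputs.conf:
$SPLUNK_HOME/bin/splunk enable listen 9997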
This was my mistake. I'd set it up as a Data Input. Once I configured it to listen to 9997 under Receiving, it all worked perfectly.
Thanks guys, this solved my issue as well for the Splunk App for Windows Infrastructure which I was really stressed about. lol cheers
Great answer. Thanks!!
Interesting thing... this fix corrected the issue for me as well, but I noticed something strange after the fact. When you look at your data inputs in the GUI after you change tcp: to splunktcp:, it appears that you have no TCP data inputs. Also, after making this change and restarting Splunk, I saw a startup warning/informational message about a "possible typo" in inputs.conf.
Everything works now; these are just two weird things I noticed afterward.
From Splunk's point of view, splunktcp isn't a plain TCP input; it is Splunk-to-Splunk communication, so it is not listed under the settings for TCP inputs. You will find it under Forwarding & Receiving in the Splunk GUI instead.
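If you want to confirm the stanza is still being picked up even though it no longer shows under TCP inputs, one option (assuming a standard install) is to list the effective configuration with btool:
$SPLUNK_HOME/bin/splunk btool inputs list splunktcp --debug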
Ran into a similar issue. The problem I had was that my outputs.conf was set to push "compressed" data to a port that wasn't configured for compression on the server. Just remove the 'compressed' line and restart.
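For reference, if you do want to keep compression, the setting has to agree on both ends; roughly like this (the port, group name, and host below are just examples):
On the forwarder, in outputs.conf:
[tcpout:primary_indexers]
server = indexer.example.com:9997
compressed = true

On the receiver, in inputs.conf:
[splunktcp://9997]
compressed = true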