Getting Data In

Can't get data from Universal Forwarder to show up in index and search

Explorer

I have a simple setup: a Universal Forwarder monitoring a couple of files and sending the cooked data over TCP port 8080 to an indexer which is also used for search. The UF is running on Ubuntu; the indexer on Windows Server 2008. Both are version 4.3.1.

I needed to make some changes to the sourcetype definitions and also wanted the events to go into a dedicated index. Since the data is all intact on the UF, I decided to delete all the data on the indexer:

splunk.exe clean eventdata main

And from the Forwarder:

splunk clean all

I then switched off the forwarder and did my new setup. inputs.conf:

[monitor:///home/ubuntu/data_collector/queries1.log]
index=reporting

[monitor:///home/ubuntu/data_collector/queries2.log]
index=reporting

And outputs.conf:

[tcpout]
defaultGroup = 210.XXX.XXX.44_8080

[tcpout:210.XXX.XXX.44_8080]
server = 210.XXX.XXX.44:8080

[tcpout-server://210.XXX.XXX.44:8080]

I've created the reporting app on the indexer.
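In case it's useful, the effective inputs configuration on the UF can be double-checked with btool (run from $SPLUNK_HOME/bin):

splunk btool inputs list --debug

The --debug flag shows which inputs.conf file each setting comes from, so a typo in the index name or stanza path would show up there.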

I even deleted and re-added the forward-server on the UF for good measure.

I turned the UF back on. The data from splunkd.log on the UF is showing up in the indexer, but I don't get any data at all in my reporting index.

I tried it without the index line in inputs.conf - in the hope that I could get the data showing up in main again - but no such luck.

Here's a sample of splunkd.log from the indexer:

04-17-2012 19:43:53.281 +1000 INFO  databasePartitionPolicy - creating new bucket C:\Program Files\Splunk\var\lib\splunk\audit\db\hot_v1_9
04-17-2012 19:43:53.281 +1000 INFO  databasePartitionPolicy - lazy loading database for: C:\Program Files\Splunk\var\lib\splunk\audit\db\hot_v1_9, id=9, ts=1334655831 dirMgr::nextId=9]
04-17-2012 19:43:53.281 +1000 INFO  HotDBManager - index=_audit Creating new hot (id=9, time=1334655831)
04-17-2012 19:43:53.281 +1000 INFO  loader - Server supporting SSL v2/v3
04-17-2012 19:43:53.281 +1000 INFO  loader - Using cipher suite ALL:!aNULL:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM
04-17-2012 19:43:53.531 +1000 INFO  ProcessTracker - (child_0__Fsck)  Fsck - Rebuild --bloom-only bucket C:\Program Files\Splunk\var\lib\splunk\audit\db\db_1334655799_1334649015_8 took 203.1 milliseconds
04-17-2012 19:43:53.687 +1000 INFO  TailingProcessor - TailWatcher initializing...
04-17-2012 19:43:53.687 +1000 INFO  TailingProcessor - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk.
04-17-2012 19:43:53.687 +1000 INFO  TailingProcessor - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk\...stash_new.
04-17-2012 19:43:53.687 +1000 INFO  TailingProcessor - Parsing configuration stanza: monitor://$SPLUNK_HOME\etc\splunk.version.
04-17-2012 19:43:53.687 +1000 INFO  TailingProcessor - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk.
04-17-2012 19:43:53.687 +1000 INFO  BatchReader - State transitioning from 2 to 0 (initOrResume).
04-17-2012 19:43:53.687 +1000 INFO  WatchedFile - Will begin reading at offset=2366820 for file='C:\Program Files\Splunk\var\log\splunk\audit.log'.
04-17-2012 19:43:54.156 +1000 INFO  WatchedFile - Will begin reading at offset=17887152 for file='C:\Program Files\Splunk\var\log\splunk\metrics.log'.
04-17-2012 19:43:54.297 +1000 INFO  HotDBManager - index=_internal No hot found for event ts=1334655831, closest match=null [expanded span=0] hotbucketsize=0 numbucks=1 maxhot=3
04-17-2012 19:43:54.297 +1000 INFO  databasePartitionPolicy - creating new bucket C:\Program Files\Splunk\var\lib\splunk\_internaldb\db\hot_v1_14
04-17-2012 19:43:54.297 +1000 INFO  databasePartitionPolicy - lazy loading database for: C:\Program Files\Splunk\var\lib\splunk\_internaldb\db\hot_v1_14, id=14, ts=1334655831 dirMgr::nextId=14]
04-17-2012 19:43:54.297 +1000 INFO  HotDBManager - index=_internal Creating new hot (id=14, time=1334655831)
04-17-2012 19:43:56.484 +1000 INFO  ExecProcessor - Ran script: "C:\Program Files\Splunk\bin\splunk-admon.exe", took 2296.9 milliseconds to run, 0 bytes read
04-17-2012 19:43:56.640 +1000 INFO  WatchedFile - Will begin reading at offset=665463 for file='C:\Program Files\Splunk\var\log\splunk\web_service.log'.
04-17-2012 19:43:57.593 +1000 INFO  ExecProcessor - Ran script: "C:\Program Files\Splunk\bin\splunk-perfmon.exe", took 890.6 milliseconds to run, 0 bytes read, exited with code -1
04-17-2012 19:43:59.875 +1000 INFO  ExecProcessor - message from ""C:\Program Files\Splunk\bin\splunk-regmon.exe" --driver-path "C:\Program Files\Splunk\bin""  splunk-regmon - SysmonMigrator::read - 'sysmon.conf' was not found, no migration is required.
04-17-2012 19:43:59.875 +1000 INFO  ExecProcessor - message from ""C:\Program Files\Splunk\bin\splunk-regmon.exe" --driver-path "C:\Program Files\Splunk\bin""  splunk-regmon - No enabled entries have been found for regmon or procmon in the conf file.
04-17-2012 19:44:00.140 +1000 INFO  ExecProcessor - Ran script: "C:\Program Files\Splunk\bin\splunk-regmon.exe" --driver-path "C:\Program Files\Splunk\bin", took 921.9 milliseconds to run, 0 bytes read
04-17-2012 19:44:02.625 +1000 INFO  ExecProcessor - Ran script: "C:\Program Files\Splunk\bin\splunk-wmi.exe", took 890.6 milliseconds to run, 0 bytes read
04-17-2012 19:44:10.687 +1000 INFO  ProcessTracker - (child_1__Fsck)  Fsck - Rebuild --bloom-only bucket C:\Program Files\Splunk\var\lib\splunk\_internaldb\db\db_1334655799_1330647338_13 took 1093.8 milliseconds
04-17-2012 19:44:18.359 +1000 INFO  databasePartitionPolicy - rebuildMetadata called: full=true path=C:\Program Files\Splunk\var\lib\splunk\audit\db reason= repaired_buckets
04-17-2012 19:44:19.406 +1000 INFO  databasePartitionPolicy - rebuildMetadata called: full=true path=C:\Program Files\Splunk\var\lib\splunk\_internaldb\db reason= repaired_buckets
04-17-2012 19:48:32.594 +1000 WARN  PipelineInputChannel - channel "source::/home/ubuntu/data_collector/queries1.log|host::ip-10-166-206-183|web|remoteport::44680" ended without a done-key
04-17-2012 19:55:30.063 +1000 WARN  ProcessRunner - Process with pid 2604 did not exit within a given grace period after being signaled to exit. Will have to forcibly terminate.
04-17-2012 20:06:46.858 +1000 WARN  PipelineInputChannel - channel "source::/home/ubuntu/data_collector/queries1.log|host::ip-10-166-206-183|web|remoteport::44996" ended without a done-key
04-17-2012 20:07:38.717 +1000 ERROR AuthenticationManagerSplunk - Login failed. Incorrect login for user: admin
04-17-2012 20:09:41.108 +1000 WARN  PipelineInputChannel - channel "source::/home/ubuntu/data_collector/queries1.log|host::ip-10-166-206-183|web|remoteport::45229" ended without a done-key
04-17-2012 20:31:57.961 +1000 WARN  PipelineInputChannel - channel "source::/home/ubuntu/data_collector/queries1.log|host::ip-10-166-206-183|web|remoteport::45511" ended without a done-key
04-17-2012 21:13:58.781 +1000 WARN  PipelineInputChannel - channel "source::/home/ubuntu/data_collector/queries1.log|host::ip-10-166-206-183|web|remoteport::45809" ended without a done-key

Could those 'ended without a done-key' warnings be the culprit here?

1 Solution

Explorer

I ended up moving all the old data to a new location and adding it with splunk add oneshot in the CLI. Weird: whatever I did - even cleanly reinstalling everything, using new indexes, and so on - it wouldn't index the old data.
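For anyone finding this later, the command was along these lines (the path here is just a placeholder, not my actual one):

splunk add oneshot /path/to/archived/queries1.log -index reporting -sourcetype my_sourcetype

It indexes the file once and doesn't remember it afterwards, so it sidesteps the forwarder's file tracking entirely; you run it once per file.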


Communicator

Sanity check - I know you said you created a reporting app, but did you create a reporting index as well?
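If the reporting index doesn't exist on the indexer, I believe the forwarded events are simply dropped (with an error in the indexer's splunkd.log). It can be created from the CLI on the indexer, for example:

splunk add index reporting

or by adding a [reporting] stanza to indexes.conf and restarting splunkd.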

Also, what do the splunkd logs on the forwarder say about the file monitors? If you can search, run the following; if not, look in the raw splunkd.log file for the two components in the query below.

host=<forwarder host> sourcetype=splunkd (component="TailingProcessor" OR component="WatchedFile")

Explorer

Thanks for the tip. I left it running overnight, and the data that arrived overnight did come through, but none of the archived data. This is pretty weird, since the data is definitely there in the files, and having cleaned all the data from the indexes on both the UF and the indexer, I don't see why it would only pick up the new events. Odd.
