
Index not re-indexing after clean; forwarder splunkd.log looks good, not seeing any action in the indexer splunkd.log

edchow
Explorer

This was working originally; however, I realised I had put in the wrong sourcetype. I changed the inputs.conf files to remove the sourcetype, cleaned the index, and then reloaded the deployment server, forwarders, and indexers to re-index the data with the default sourcetype...

Now the forwarders have grabbed the right inputs.conf and outputs.conf files, and the logs indicate they are connecting nicely to the indexers. Unfortunately, the indexer logs are still empty 😞 I thought it might be an issue with the index, so I tried creating a new one, web1, but it's still not working 😞 and I see absolutely nothing in the indexer logs to indicate the forwarders' traffic is being received. On top of that, there are some strange "failed to open metadata ... will attempt full rebuild" warnings in the indexer logs. Is there anything else I can check?

Please see my config and logs below:

FORWARDER1 INPUTS.CONF

ubuntu@ip-10-171-2-174:/$ sudo cat /opt/splunkforwarder/etc/apps/webservers4/local/inputs.conf
[monitor:///opt/log/www1]
index = web1
host = www1
[monitor:///opt/log/www2]
index = web1
host = www2
[monitor:///opt/log/www3]
index = web1
host = www3
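
For reference, if you later decide to pin the sourcetype explicitly rather than rely on the default, a monitor stanza can carry it directly. This is just an illustrative fragment; the sourcetype value below is a placeholder, not something from the original post:

```ini
[monitor:///opt/log/www1]
index = web1
host = www1
# Example value only; use whatever sourcetype actually matches your web logs.
sourcetype = access_combined
```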

FORWARDER1 OUTPUTS.CONF

ubuntu@ip-10-171-2-174:/$ sudo cat /opt/splunkforwarder/etc/apps/webservers4/local/outputs.conf
[tcpout]
defaultGroup=splunk

[tcpout:splunk]
server=:9997,:9997

FORWARDER1 LOGS:

ubuntu@ip-10-171-2-174:/$ sudo tac /opt/splunkforwarder/var/log/splunk/splunkd.log | more
09-03-2012 11:46:14.503 +0000 INFO TcpOutputProc - Connected to idx=10.171.82.88:9997
09-03-2012 11:46:14.398 +0000 INFO TcpOutputProc - Connected to idx=10.166.203.114:9997
09-03-2012 11:45:14.441 +0000 INFO TcpOutputProc - Connected to idx=10.171.82.88:9997
09-03-2012 11:45:14.395 +0000 INFO TcpOutputProc - Connected to idx=10.166.203.114:9997
09-03-2012 11:44:15.334 +0000 INFO TcpOutputProc - Connected to idx=10.171.82.88:9997
09-03-2012 11:44:14.443 +0000 INFO TcpOutputProc - Connected to idx=10.166.203.114:9997
09-03-2012 11:43:16.924 +0000 INFO TcpOutputProc - Connected to idx=10.171.82.88:9997
09-03-2012 11:42:45.020 +0000 INFO TcpOutputProc - Connected to idx=10.166.203.114:9997
09-03-2012 11:42:44.593 +0000 INFO TcpOutputProc - Connected to idx=10.171.82.88:9997
09-03-2012 11:41:44.958 +0000 INFO TcpOutputProc - Connected to idx=10.166.203.114:9997
09-03-2012 11:41:44.589 +0000 INFO TcpOutputProc - Connected to idx=10.171.82.88:9997
09-03-2012 11:40:44.898 +0000 INFO TcpOutputProc - Connected to idx=10.166.203.

INDEXER1 LOGS:

ubuntu@ip-10-166-203-114:~$ sudo tac /opt/splunk/var/log/splunk/splunkd.log | more
09-03-2012 11:31:35.464 +0000 INFO IndexProcessor - reloading index config: end
09-03-2012 11:31:35.464 +0000 INFO IndexProcessor - request state change RECONFIGURING to RUN
09-03-2012 11:31:35.464 +0000 INFO databasePartitionPolicy - Bucket /opt/splunk/var/lib/splunk/audit/db/db_1341834080_1341832188_0 has events at least 55 days old, past max 30 days, will not repair --bloom-only
09-03-2012 11:31:35.463 +0000 INFO databasePartitionPolicy - currentId for /opt/splunk/var/lib/splunk/proxy/db after openDatabases = 0; 0 buckets being rebuilt
09-03-2012 11:31:35.463 +0000 INFO databasePartitionPolicy - rebuildMetadata called: full=true path=/opt/splunk/var/lib/splunk/proxy/db reason=initopenMetaData failed
09-03-2012 11:31:35.463 +0000 WARN databasePartitionPolicy - failed to open metadata for /opt/splunk/var/lib/splunk/proxy/db, will attempt full rebuild
09-03-2012 11:31:35.463 +0000 INFO databasePartitionPolicy - CREATION TIME for /opt/splunk/var/lib/splunk/proxy/db : 1346671895
09-03-2012 11:31:35.463 +0000 INFO databasePartitionPolicy - index proxy initialized with [300,60,188697600,,/opt/frozen,,2097152000,20,true,20000,5,5,false,3,0,_blocksignature,7776000,1000000,0,3,77760000,2592000,131072,25,0,15,0,0,-1,18446744073709551615ms,2592000,true,false]
09-03-2012 11:31:35.463 +0000 INFO HotDBManager - index=proxy Setting hot mgr params: maxHotSpanSecs=7776000 snapBucketTimespans=false maxHotBuckets=3 maxDataSizeBytes=2097152000 quarantinePastSecs=77760000 quarantineFutureSecs=2592000
09-03-2012 11:31:35.463 +0000 INFO IndexProcessor - indexes.conf - indexThreads param autotuned to 2
09-03-2012 11:31:35.463 +0000 INFO IndexProcessor - initializing with fullInit=1, reloading=1
09-03-2012 11:31:35.463 +0000 INFO IndexProcessor - Reloading index config: shutdown subordinate threads, now restarting
09-03-2012 11:31:35.462 +0000 INFO IndexProcessor - Got a list of 1 added, modified, or removed indexes
09-03-2012 11:31:35.462 +0000 INFO IndexProcessor - setting process pool max groups low priority=1
09-03-2012 11:31:35.462 +0000 INFO IndexProcessor - setting process pool max groups=20
09-03-2012 11:31:35.461 +0000 INFO IndexProcessor - request state change RUN to RECONFIGURING
09-03-2012 11:31:35.461 +0000 INFO IndexProcessor - reloading index config: start
09-03-2012 11:31:35.459 +0000 INFO IndexProcessor - reloading index config: request received
09-03-2012 11:27:18.044 +0000 INFO IndexProcessor - reloading index config: end
09-03-2012 11:27:18.044 +0000 INFO IndexProcessor - request state change RECONFIGURING to RUN
09-03-2012 11:27:18.044 +0000 INFO databasePartitionPolicy - Bucket /opt/splunk/var/lib/splunk/audit/db/db_1341834080_1341832188_0 has events at least 55 days old, past max 30 days, will not repair --bloom-only
09-03-2012 11:27:18.043 +0000 INFO databasePartitionPolicy - currentId for /opt/splunk/var/lib/splunk/web1/db after openDatabases = 0; 0 buckets being rebuilt
09-03-2012 11:27:18.043 +0000 INFO databasePartitionPolicy - rebuildMetadata called: full=true path=/opt/splunk/var/lib/splunk/web1/db reason=initopenMetaData failed
09-03-2012 11:27:18.043 +0000 WARN databasePartitionPolicy - failed to open metadata for /opt/splunk/var/lib/splunk/web1/db, will attempt full rebuild
09-03-2012 11:27:18.043 +0000 INFO databasePartitionPolicy - CREATION TIME for /opt/splunk/var/lib/splunk/web1/db : 1346671638
09-03-2012 11:27:18.042 +0000 INFO databasePartitionPolicy - index web1 initialized with [300,60,188697600,,/opt/frozen,,2097152000,20,true,30000,5,5,false,3,0,_blocksignature,7776000,1000000,0,3,77760000,2592000,131072,25,0,15,0,0,-1,18446744073709551615ms,2592000,true,false]
09-03-2012 11:27:18.042 +0000 INFO HotDBManager - index=web1 Setting hot mgr params: maxHotSpanSecs=7776000 snapBucketTimespans=false maxHotBuckets=3 maxDataSizeBytes=2097152000 quarantinePastSecs=77760000 quarantineFutureSecs=2592000
09-03-2012 11:27:18.042 +0000 INFO IndexProcessor - indexes.conf - indexThreads param autotuned to 2
09-03-2012 11:27:18.042 +0000 INFO IndexProcessor - initializing with fullInit=1, reloading=1
09-03-2012 11:27:18.042 +0000 INFO IndexProcessor - Reloading index config: shutdown subordinate threads, now restarting
09-03-2012 11:27:18.042 +0000 INFO IndexProcessor - Got a list of 1 added, modified, or removed indexes
09-03-2012 11:27:18.042 +0000 INFO IndexProcessor - setting process pool max groups low priority=1
09-03-2012 11:27:18.042 +0000 INFO IndexProcessor - setting process pool max groups=20
09-03-2012 11:27:18.041 +0000 INFO IndexProcessor - request state change RUN to RECONFIGURING
09-03-2012 11:27:18.041 +0000 INFO IndexProcessor - reloading index config: start
09-03-2012 11:27:18.039 +0000 INFO IndexProcessor - reloading index config: request received
09-03-2012 11:02:44.568 +0000 INFO databasePartitionPolicy - Completed regenerating the bucket manifest.


MHibbin
Influencer

Your problem is probably that you have just cleaned the index where the data is stored. That in itself is fine, but Splunk also retains information (in most cases) about the files being monitored, so it knows whether or not a file needs to be re-indexed (most cases don't require one to be). It does CRC checks and such to keep track of this, and that bookkeeping is stored in the internal "fishbucket" index. You should read the following blog about the fishbucket...
http://blogs.splunk.com/2008/08/14/what-is-this-fishbucket-thing/

and... http://wiki.splunk.com/Community:HowSplunkReadsInputFiles

So you can clean this index on the forwarder to make it re-read the file. Be aware, however, that this will cause that forwarder to re-read and re-index ALL of its inputs, so if that's all of your data you should be okay.
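
As a rough sketch of what that looks like on the command line (paths assume a default universal forwarder install, and the file path in the btprobe example is a made-up placeholder; the supported options vary by Splunk version, so check yours):

```shell
# Stop the forwarder first; the fishbucket is only safe to touch while offline.
sudo /opt/splunkforwarder/bin/splunk stop

# Option 1: wipe everything the forwarder knows. Note this resets ALL inputs,
# not just one file.
sudo /opt/splunkforwarder/bin/splunk clean all

# Option 2 (if your version supports it): reset the fishbucket entry for a
# single monitored file with btprobe. The file path here is an example only.
sudo /opt/splunkforwarder/bin/splunk cmd btprobe \
    -d /opt/splunkforwarder/var/lib/splunk/fishbucket/splunk_private_db \
    --file /opt/log/www1/access.log --reset

sudo /opt/splunkforwarder/bin/splunk start
```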

And here's another answer (there are many more)... http://splunk-base.splunk.com/answers/46780/reset-splunkforwarder-to-re-read-file-from-beginning

Hope this helps,

MHibbin


kristian_kolb
Ultra Champion

You'd also have to clean the fishbucket on the forwarder side, since that is where the forwarder keeps track of which files/events it has already processed/forwarded.
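
The forwarder's bookkeeping can be pictured roughly like this. This is an illustrative Python sketch of the idea, not Splunk's actual implementation; all names are made up, and it only models the key point that cleaning the *index* does not touch the forwarder's record of what it has already read:

```python
import zlib

# Splunk checksums roughly the first 256 bytes of a monitored file and keeps,
# per checksum, how far into the file it has already forwarded.
CRC_LEN = 256

class FishBucket:
    """Toy model of the forwarder's 'fishbucket' bookkeeping."""

    def __init__(self):
        self.seen = {}  # crc of file head -> bytes already forwarded

    def offset_for(self, data: bytes) -> int:
        """How many bytes of this file were already forwarded (0 if unseen)."""
        crc = zlib.crc32(data[:CRC_LEN])
        return self.seen.get(crc, 0)

    def record(self, data: bytes, forwarded: int) -> None:
        """Remember how far we got in this file."""
        crc = zlib.crc32(data[:CRC_LEN])
        self.seen[crc] = forwarded

fb = FishBucket()
log = b"192.0.2.1 - GET /index.html 200\n" * 20

# First pass: the file is unknown, so forwarding starts at offset 0.
start = fb.offset_for(log)
fb.record(log, len(log))

# Cleaning the index on the indexer does not touch this table, so a second
# pass resumes at end-of-file and forwards nothing new.
resume = fb.offset_for(log)
```

This is why the forwarder-side fishbucket has to be cleared before the same files will be sent again.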

See the following previous posts;

http://splunk-base.splunk.com/answers/2954/how-can-i-re-index-all-the-data-in-my-environment

http://splunk-base.splunk.com/answers/2834/light-forwarder-syslog-fishbucket-problem

Hope this helps,

Kristian


edchow
Explorer

Thanks guys,

I tried the following on both forwarders and the second indexer (to see if I needed to clear the indexer fishbucket as well). It took a couple of goes clearing "all" on the forwarders (clearing just the fishbucket is not supported, for some silly reason), but it finally worked. Looks like you only need to clear the forwarders.

Thanks again.


MHibbin
Influencer

HAHA... sorry, don't have much of a life. 🙂


kristian_kolb
Ultra Champion

Goddammit MHibbin, you're fast....again! /k
