Monitoring Splunk

Indexes not showing data

Builder

Sorry, this looks like a huge post, but the config files take up a lot of space.

I've read a few questions about this, but so far I haven't found what I'm looking for. I can't pin down exactly when this happened, but I installed the Deployment Monitor app today and realized some of my indexes were disabled. They were:

_audit
_internal
_thefishbucket
_blocksignature

I enabled these indexes, restarted, and they disabled themselves again! I thought maybe a bucket conflict, but across all of these indexes? I did a clean on all 4 of these indexes and restarted. They now stay enabled, but nothing is showing up in them. They have hot buckets and warm buckets now, but so far nothing appears when I search index=_internal in Splunk.
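For reference, the clean I ran was along these lines (a sketch, assuming a default /opt/splunk install; splunk clean refuses to run while splunkd is up, and -f skips the confirmation prompt):

```shell
# Clean must run with Splunk stopped
/opt/splunk/bin/splunk stop

# Wipe event data for each affected internal index (-f = don't prompt)
/opt/splunk/bin/splunk clean eventdata -index _internal -f
/opt/splunk/bin/splunk clean eventdata -index _audit -f
/opt/splunk/bin/splunk clean eventdata -index _thefishbucket -f
/opt/splunk/bin/splunk clean eventdata -index _blocksignature -f

/opt/splunk/bin/splunk start
```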

On my indexer, which is also my search head right now, there is no outputs.conf in /opt/splunk/etc/system/local, and my inputs.conf in /opt/splunk/etc/system/local is pretty much blank (I don't recall it ever being populated). I have this in my indexes.conf:

[main]
maxTotalDataSizeMB = 250000

[_internal]
maxTotalDataSizeMB = 250000
disabled = 0

[_thefishbucket]
disabled = 0

[_audit]
disabled = 0

[_blocksignature]
disabled = 0
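As a quick sanity check that the disabled flags really are what I think they are, a stanza like the above can be parsed with a few lines of Python (a sketch only; the file contents are inlined here for illustration, and you would point read() at the real indexes.conf instead):

```python
import configparser

# Inlined indexes.conf-style content for the example; in practice,
# use parser.read("/opt/splunk/etc/system/local/indexes.conf")
INDEXES_CONF = """
[main]
maxTotalDataSizeMB = 250000

[_internal]
maxTotalDataSizeMB = 250000
disabled = 0

[_audit]
disabled = 1
"""

parser = configparser.ConfigParser()
parser.read_string(INDEXES_CONF)

# Collect every stanza whose disabled flag is explicitly set to 1
disabled = [s for s in parser.sections()
            if parser.get(s, "disabled", fallback="0") == "1"]
print(disabled)
```

Note this only reflects one file; the effective value Splunk uses comes from merging every indexes.conf in precedence order.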

For reference, here is my /opt/splunk/etc/system/default/inputs.conf file:

[default]
index         = default
_rcvbuf        = 1572864
host = $decideOnStartup



[monitor://$SPLUNK_HOME/var/log/splunk]
index = _internal

[monitor://$SPLUNK_HOME/etc/splunk.version]
_TCP_ROUTING = *
index = _internal
sourcetype=splunk_version

[batch://$SPLUNK_HOME/var/spool/splunk]
move_policy = sinkhole
crcSalt = <SOURCE>

[batch://$SPLUNK_HOME/var/spool/splunk/...stash_new]
queue       = stashparsing
sourcetype  = stash_new
move_policy = sinkhole
crcSalt     = <SOURCE>


[fschange:$SPLUNK_HOME/etc]
#poll every 10 minutes
pollPeriod = 600
#generate audit events into the audit index, instead of fschange events
signedaudit=true
recurse=true
followLinks=false
hashMaxSize=-1
fullEvent=false
sendEventMaxSize=-1
filesPerDelay = 10
delayInMills = 100

[udp]
connection_host=ip

[tcp]
acceptFrom=*
connection_host=dns

[splunktcp]
route=has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:indexQueue;absent_key:_linebreaker:parsingQueue
acceptFrom=*
connection_host=ip

[script]
interval = 60.0

[SSL]
# default cipher suites that splunk allows. Change this if you wish to increase the security
# of SSL connections, or to lower it if you having trouble connecting to splunk.
cipherSuite = ALL:!aNULL:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM

And here is /opt/splunk/etc/system/default/outputs.conf:

[tcpout]
maxQueueSize = 500KB
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = _audit
forwardedindex.filter.disable = false
indexAndForward = false
autoLBFrequency = 30
blockOnCloning = true
compressed = false
disabled = false
dropClonedEventsOnQueueFull = 5
dropEventsOnQueueFull = -1
heartbeatFrequency = 30
maxFailuresPerInterval = 2
secsInFailureInterval = 1
maxConnectionsPerIndexer = 2
forceTimebasedAutoLB = false
sendCookedData = true
connectionTimeout = 20
readTimeout = 300
writeTimeout = 300
useACK = false

The only other changes were made last Friday: I installed a new temp license and did a clean on the main index. I also created some additional indexes at the time and modified some forwarders to send data to them. The Splunk License Usage app was working fine, and I was able to search _internal in the past. I'm not quite sure what has happened since last Friday.

Today, I also installed the Splunk Deployment Monitor app and the AppDynamics App for Splunk. I don't know if the change in the internal db came after installing one of these apps. One thing I did for the AppDynamics app is make sure my home dir for Splunk was set to /opt/splunk by doing export SPLUNKHOME=/opt/splunk (note: the standard variable name is SPLUNK_HOME, with an underscore).

I looked at my buckets and didn't see any bucket IDs that appeared to clash. Again, I wiped the _internal index and the others, so I would assume they had a fresh start.
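The clash check above can be scripted. Warm bucket directories are named db_&lt;newestTime&gt;_&lt;oldestTime&gt;_&lt;bucketId&gt;, and a conflict means the same bucket ID appears twice. A minimal sketch (the directory names here are made up for illustration; in practice, feed it os.listdir() on the index's db directory):

```python
import re
from collections import Counter

# Hypothetical listing of an index's db directory; replace with
# os.listdir("/opt/splunk/var/lib/splunk/_internaldb/db")
bucket_dirs = [
    "db_1361222087_1360247972_42",
    "db_1360247971_1359000000_41",
    "db_1358999999_1358000000_41",  # duplicate id 41 -> a conflict
]

BUCKET_RE = re.compile(r"^db_(\d+)_(\d+)_(\d+)$")

def find_conflicting_ids(names):
    """Return bucket IDs that appear more than once."""
    ids = [int(m.group(3)) for n in names if (m := BUCKET_RE.match(n))]
    return sorted(i for i, count in Counter(ids).items() if count > 1)

print(find_conflicting_ids(bucket_dirs))
```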

I looked at the logs and found these errors:

02-18-2013 17:17:13.623 -0500 INFO ProcessTracker - (child3Fsck) Fsck - Rebuild --bloom-only bucket /opt/splunk/var/lib/splunk/_internaldb/db/db_1361222087_1360247972_42 took 1775.7 milliseconds

02-18-2013 17:16:35.503 -0500 INFO BatchReader - Removed from queue file='/opt/splunk/var/log/splunk/audit.log.5'.
02-18-2013 17:16:39.834 -0500 INFO BatchReader - Removed from queue file='/opt/splunk/var/log/splunk/audit.log.4'.
02-18-2013 17:16:41.814 -0500 WARN ProcessTracker - (child1Fsck) Fsck - Failed to repair index=_internal bucket=db_1352777185_1352777185_41
02-18-2013 17:16:43.979 -0500 INFO BatchReader - Removed from queue file='/opt/splunk/var/log/splunk/audit.log.3'.
02-18-2013 17:16:48.892 -0500 INFO BatchReader - Removed from queue file='/opt/splunk/var/log/splunk/audit.log.1'.

I looked, and there is no bucket ending in _41 in the _internaldb directory.

Could one of these apps I installed have borked something? I didn't touch anything index-related, and for indexing to just stop like this is puzzling.

Any ideas here? I've never had this happen before. I see buckets in the db directories, but in Manager > Indexes they all show no events and nothing indexed.

I noticed these two options were set to false. I don't know if I should try setting them to true:

forwardedindex.filter.disable = false
indexAndForward = false


Contributor

What does the output from

splunk cmd btool indexes list --debug

look like? You can run btool against each of your config files to see whether an app is overriding your system/default settings.
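For example (assuming a default /opt/splunk install), you can narrow the output to a single stanza, and with --debug each line is prefixed with the file that contributed the setting:

```shell
# Show the effective _internal stanza and which file set each value
/opt/splunk/bin/splunk cmd btool indexes list _internal --debug

# Or scan all index stanzas for disabled settings introduced by an app
/opt/splunk/bin/splunk cmd btool indexes list --debug | grep -i disabled
```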

Builder

I removed the extended trial license we got from Splunk and went back to the free license, and the problem is solved. It's something to do with the license we got; I'll have to contact them. Thanks, everyone, for your input.


Builder

I didn't find any, but as a precaution I stopped Splunk and made sure all files in /opt/splunk are owned by splunk. I run Splunk as the splunk user, not root. I restarted and nothing changed.


Influencer

find /opt/splunk ! -user splunk

Any rogue files not owned by the splunk user?
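If that find turns anything up, a typical fix (run as root, with Splunk stopped) looks roughly like this:

```shell
# Stop Splunk, hand ownership back to the splunk user, then restart
/opt/splunk/bin/splunk stop
chown -R splunk:splunk /opt/splunk
/opt/splunk/bin/splunk start
```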
