Getting Data In

Splunk Add-on for Java Management Extensions: Why the error "Received event for unconfigured/disabled/deleted index"?

alex3
Path Finder

Hello,

We recently installed the Splunk Add-on for Java Management Extensions. We have it working in our test environment but not in production, where we get this error message:

Received event for unconfigured/disabled/deleted index=jmx with source="source::com.sun.messaging.jms.server:type=Connection,subtype=Config,id=2333287862699082496" host="host::[removed]:0" sourcetype="sourcetype::jmx". So far received events from 1 missing index(es)

We have a distributed deployment. The index has been created on the indexers. When I log in to the indexers and go to Settings > Data > Indexes, I can see the 'jmx' index. However, if I log in to the management node and go to Settings > Distributed Environment > Indexer Clustering, the 'jmx' index isn't listed there.

As far as I can tell, I've configured Test and Prod identically, so I'm not sure what the issue is. Does anyone have ideas about what else I can check?
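
For reference, the state of the configuration bundle push can also be checked from the management node's CLI; a minimal sketch, assuming a default setup, run from $SPLUNK_HOME/bin on the manager:

# Shows the active bundle checksum and whether each peer has applied it
./splunk show cluster-bundle-status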


SinghK
Builder

Ideally, the index should be defined in the app you created for indexes.conf.


gcusello
SplunkTrust

Hi @alex3,

You should check how the jmx index was created on the cluster.

Ciao.

Giuseppe


alex3
Path Finder

Thank you for your suggestion, gcusello!

I created the index by adding it to /opt/apps/splunk/etc/master-apps/_cluster/local/indexes.conf on the management node. Then I went to the web UI of the management node: Settings > Distributed Environment > Indexer Clustering > Edit > Configuration Bundle Actions. I clicked Validate and Check Restart, then Push. Everything appeared to run successfully.
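
For reference, this is roughly what that looks like in config and CLI form. A minimal sketch: the path values match the ones that appear in the btool output further down this thread, and the CLI commands are the equivalent of the Validate/Push buttons in the UI:

# /opt/apps/splunk/etc/master-apps/_cluster/local/indexes.conf (on the management node)
[jmx]
homePath   = /opt/splunk-data/home/jmx/db
coldPath   = /opt/splunk-data/cold/jmx/colddb
thawedPath = /opt/splunk-data/cold/jmx/thaweddb
repFactor  = auto

# From $SPLUNK_HOME/bin on the management node: validate, then push the bundle to the peers
./splunk validate cluster-bundle --check-restart
./splunk apply cluster-bundle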


gcusello
SplunkTrust

Hi @alex3,

Good for you, see you next time!

If this answer solves your need, please accept it for the other people in the Community.

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated 😉


alex3
Path Finder

Sorry Giuseppe, I was unclear. This is the process I used to create the index before I made this post. I thought this should work, but it is not working correctly.


SinghK
Builder

Once you do that, you can check whether everything has replicated to the slaves by going to /opt/etc/apps/slave-apps/<your app> and checking if the indexes.conf there has the same settings as in the master app.


alex3
Path Finder

None of my nodes have an etc/apps/slave-apps directory, including my test environment where it is working.


SinghK
Builder

Apologies, my bad, it's /opt/splunk/etc/slave-apps on the indexers.

You need to find the apps that configure your indexes and check whether they got updated.
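
As a quick sanity check, something like this confirms the stanza is present in both the manager's copy and the pushed copy; the paths are the ones mentioned in this thread and may need adjusting to your install:

# On the management node: the source of truth that gets pushed
grep -A 4 '^\[jmx\]' /opt/apps/splunk/etc/master-apps/_cluster/local/indexes.conf

# On each indexer: the copy delivered by the bundle push
grep -A 4 '^\[jmx\]' /opt/splunk/etc/slave-apps/_cluster/local/indexes.conf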


alex3
Path Finder

I was able to find splunk/etc/slave-apps/_cluster/local/indexes.conf and it does have my index, jmx, on both indexer nodes, so it does appear to have replicated correctly to the indexers.


SinghK
Builder

Then just restart the cluster once, clear the messages, and see if new messages come back in.
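
If it helps, a rolling restart of the peers can be triggered from the management node rather than restarting each indexer by hand; a sketch, assuming a default CLI setup:

# From $SPLUNK_HOME/bin on the management (cluster manager) node
./splunk rolling-restart cluster-peers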


alex3
Path Finder

I restarted the indexer and search head clusters and I am still having the same issue.


SinghK
Builder

Run btool to check indexes.conf on the indexer. Share the results, or check if the index shows up in the list.
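
For example, something along these lines on an indexer; --debug is optional but shows which file each setting comes from:

./splunk btool indexes list jmx --debug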


alex3
Path Finder

I ran the following command on one of my indexer nodes.

./splunk btool indexes list

And I do see my index there:

[jmx]
archiver.enableDataArchive = false
archiver.maxDataArchiveRetentionPeriod = 0
assureUTF8 = false
bucketMerge.maxMergeSizeMB = 1000
bucketMerge.maxMergeTimeSpanSecs = 7776000
bucketMerge.minMergeSizeMB = 750
bucketMerging = false
bucketRebuildMemoryHint = auto
coldPath = /opt/splunk-data/cold/jmx/colddb
coldPath.maxDataSizeMB = 0
coldToFrozenDir =
coldToFrozenScript =
compressRawdata = true
datatype = event
defaultDatabase = main
enableDataIntegrityControl = true
enableOnlineBucketRepair = true
enableRealtimeSearch = true
enableTsidxReduction = false
federated.dataset =
federated.provider =
fileSystemExecutorWorkers = 5
frozenTimePeriodInSecs = 31536000
homePath = /opt/splunk-data/home/jmx/db
homePath.maxDataSizeMB = 650000
hotBucketStreaming.deleteHotsAfterRestart = false
hotBucketStreaming.extraBucketBuildingCmdlineArgs =
hotBucketStreaming.removeRemoteSlicesOnRoll = false
hotBucketStreaming.reportStatus = false
hotBucketStreaming.sendSlices = false
hotBucketTimeRefreshInterval = 10
indexThreads = auto
journalCompression = gzip
maxBloomBackfillBucketAge = 30d
maxBucketSizeCacheEntries = 0
maxConcurrentOptimizes = 6
maxDataSize = 1024
maxGlobalDataSizeMB = 0
maxGlobalRawDataSizeMB = 0
maxHotBuckets = auto
maxHotIdleSecs = 0
maxHotSpanSecs = 7776000
maxMemMB = 5
maxMetaEntries = 1000000
maxRunningProcessGroups = 8
maxRunningProcessGroupsLowPriority = 1
maxTimeUnreplicatedNoAcks = 300
maxTimeUnreplicatedWithAcks = 60
maxTotalDataSizeMB = 5000
maxWarmDBCount = 300
memPoolMB = auto
metric.compressionBlockSize = 1024
metric.enableFloatingPointCompression = true
metric.maxHotBuckets = auto
metric.splitByIndexKeys =
metric.stubOutRawdataJournal = true
metric.timestampResolution = s
metric.tsidxTargetSizeMB = 1500
minHotIdleSecsBeforeForceRoll = auto
minRawFileSyncSecs = disable
minStreamGroupQueueSize = 2000
partialServiceMetaPeriod = 0
processTrackerServiceInterval = 1
quarantineFutureSecs = 2592000
quarantinePastSecs = 77760000
rawChunkSizeBytes = 131072
repFactor = auto
rotatePeriodInSecs = 60
rtRouterQueueSize = 10000
rtRouterThreads = 0
selfStorageThreads = 2
serviceInactiveIndexesPeriod = 60
serviceMetaPeriod = 25
serviceOnlyAsNeeded = true
serviceSubtaskTimingPeriod = 30
splitByIndexKeys =
streamingTargetTsidxSyncPeriodMsec = 5000
suppressBannerList =
suspendHotRollByDeleteQuery = false
sync = 0
syncMeta = true
thawedPath = /opt/splunk-data/cold/jmx/thaweddb
throttleCheckPeriod = 15
timePeriodInSecBeforeTsidxReduction = 604800
tsidxDedupPostingsListMaxTermsLimit = 8388608
tsidxReductionCheckPeriodInSec = 600
tsidxTargetSizeMB = 1500
tsidxWritingLevel = 2
tstatsHomePath = volume:_splunk_summaries/$_index_name/datamodel_summary
waitPeriodInSecsForManifestWrite = 60
warmToColdScript =

SinghK
Builder

And on the indexers, do you see the index as enabled or disabled?

Or do you see any errors in the splunkd logs related to the index?
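
One way to check the logs, assuming the default log location under the indexer's $SPLUNK_HOME (adjust the path to your install):

# On an indexer: look for warnings/errors mentioning the jmx index
grep -i "index=jmx" /opt/splunk/var/log/splunk/splunkd.log | grep -iE "warn|error"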


alex3
Path Finder

It does show the index as enabled on the indexers.

I found these logs in the search head splunkd.log:

05-26-2022 12:54:26.980 +0000 WARN DateParserVerbose [30526 merging] - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Thu May 26 12:54:24 2022). Context: source=/opt/apps/splunk/var/log/splunk/jmx.log|host=[removed]|jmx-too_small|144797
05-20-2022 16:34:20.408 +0000 INFO SpecFiles - Found external scheme definition for stanza="ibm_was_jmx://" from spec file="/opt/apps/splunk/etc/apps/Splunk_TA_jmx/README/inputs.conf.spec" with parameters="config_file, config_file_dir"
05-23-2022 21:52:22.077 +0000 WARN IndexerService [30474 indexerPipe] - Received event for unconfigured/disabled/deleted index=jmx with source="source::com.sun.messaging.jms.server:type=Connection,subtype=Config,id=2333287862699082496" host="host::[removed]:0" sourcetype="sourcetype::jmx". So far received events from 1 missing index(es).
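
For what it's worth, a search over the internal index can show which host(s) are actually generating that warning, which might help narrow down where the jmx events are being parsed; a sketch:

index=_internal sourcetype=splunkd "Received event for unconfigured/disabled/deleted index=jmx" | stats count by host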

SinghK
Builder

By slaves I meant cluster nodes.


alex3
Path Finder

Yes, we have two indexer nodes and four search head nodes, and I restarted all of them, but I still got the same message.
