After upgrading from version 7.0.1 to 8.0.2, the errors below appear.
Splunk is no longer indexing some internal logs, such as license_usage.log, and license consumption has increased considerably, although I believe the increase comes from Splunk's own logs.
BatchReader-0
Root Cause(s):
The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.
Last 50 related messages:
03-05-2020 09:32:47.238 -0300 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
03-05-2020 09:32:45.582 -0300 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
03-05-2020 09:32:37.979 -0300 INFO TailReader - State transitioning from 1 to 0 (initOrResume).
03-05-2020 09:32:37.979 -0300 INFO TailReader - State transitioning from 1 to 0 (initOrResume).
03-05-2020 09:32:37.971 -0300 INFO TailReader - tailreader0 waiting to be un-paused
03-05-2020 09:32:37.971 -0300 INFO TailReader - Starting tailreader0 thread
03-05-2020 09:32:37.968 -0300 INFO TailReader - Registering metrics callback for: tailreader0
03-05-2020 09:32:37.969 -0300 INFO TailReader - batchreader0 waiting to be un-paused
03-05-2020 09:32:37.969 -0300 INFO TailReader - Starting batchreader0 thread
03-05-2020 09:32:37.969 -0300 INFO TailReader - Registering metrics callback for: batchreader0
TailReader-0
Root Cause(s):
The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.
Last 50 related messages:
03-05-2020 09:32:47.238 -0300 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
03-05-2020 09:32:45.582 -0300 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
03-05-2020 09:32:37.979 -0300 INFO TailReader - State transitioning from 1 to 0 (initOrResume).
03-05-2020 09:32:37.979 -0300 INFO TailReader - State transitioning from 1 to 0 (initOrResume).
03-05-2020 09:32:37.971 -0300 INFO TailReader - tailreader0 waiting to be un-paused
03-05-2020 09:32:37.971 -0300 INFO TailReader - Starting tailreader0 thread
03-05-2020 09:32:37.968 -0300 INFO TailReader - Registering metrics callback for: tailreader0
03-05-2020 09:32:37.969 -0300 INFO TailReader - batchreader0 waiting to be un-paused
03-05-2020 09:32:37.969 -0300 INFO TailReader - Starting batchreader0 thread
03-05-2020 09:32:37.969 -0300 INFO TailReader - Registering metrics callback for: batchreader0
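For reference, the queue saturation reported above can be confirmed from metrics.log on the same instance with a search like this (a sketch using the default _internal index; group, name, current_size_kb, and max_size_kb are the standard metrics.log queue fields):

index=_internal source=*metrics.log group=queue name=parsingQueue
| eval pct_full = round(current_size_kb / max_size_kb * 100, 1)
| timechart max(pct_full) AS parsingQueue_pct_full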
The hashing algorithm for pass4SymmKey changed quite a bit between the two versions you mentioned.
Enter a new key/password on your nodes in plain text and cycle Splunk; it should resolve the issue for you.
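If that is the cause, here is a minimal sketch of the change, assuming a standalone instance (clustered nodes may also carry the key under the [clustering] or [shclustering] stanza):

# $SPLUNK_HOME/etc/system/local/server.conf
[general]
pass4SymmKey = <new-key-in-plain-text>

Then restart so splunkd re-hashes the key:

$SPLUNK_HOME/bin/splunk restart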
The indexer displays this message:
INFO LicenseUsage - type = Message - License usage logging not available for slave licensing instances, please see license_usage.log on license master = https://xxx.xxx.xxx.xxx:8089 for usage breakdown
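When the license master is reachable, the usage breakdown that message refers to can be pulled from there with a search like this (a sketch; the index and source are the Splunk defaults, and type=Usage is the standard license_usage.log event type):

index=_internal source=*license_usage.log type=Usage
| stats sum(b) AS bytes BY idx
| eval GB = round(bytes/1024/1024/1024, 2)
| sort - GB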
Splunk is unable to index the internal license_usage.log logs.
Does anyone have any ideas?
Thanks in advance!
James