Splunk Enterprise

Splunkd is not working

Javoraqa
Engager

[bin]$ ./splunk start

Splunk> Like an F-18, bro.

Checking prerequisites...

Checking http port [8000]: open

Checking mgmt port [8089]: open

Checking appserver port [127.0.0.1:8065]: open

Checking kvstore port [8191]: open

Checking configuration... Done.

Checking critical directories... Done

Checking indexes... Validated: _audit _internal _introspection _metrics _metrics_rollup _telemetry _thefishbucket add_on_builder_index analysis_meta aws_db_status captain_america databricks_hec_webhook databricks_sqs_s3 databricks_webhook databricksjobruns databricksjobs dl_cluster em_meta em_metrics history infra_alerts iron_man2 main mysql1 platform_versions rss_feed summary talend_error tmc_info_warn trackme_metrics trackme_summary Done

Checking filesystem compatibility... Done
Checking conf files for problems...

Invalid key in stanza [email] in /home/******/splunk/etc/apps/search/local/alert_actions.conf, line 2: show_password (value: True).
Invalid key in stanza [mariadb] in /home/******/splunk/etc/apps/splunk_app_db_connect/default/db_connection_types.conf, line 240: supportedMajorVersion (value: 3).
Invalid key in stanza [mariadb] in /home/******/splunk/etc/apps/splunk_app_db_connect/default/db_connection_types.conf, line 241: supportedMinorVersion (value: 1).

Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
Done
Checking default conf files for edits...
Validating installed files against hashes from '/home/******/splunk/splunk-8.0.5-a1a6394cc5ae-linux-2.6-x86_64-manifest'
All installed files intact.
Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)... Done [ OK ]

Waiting for web server at http://127.0.0.1:8000 to be available..................................................splunkd 18464 was not running.

Stopping splunk helpers... [ OK ] Done.

Stopped helpers. Removing stale pid file... done.

WARNING: web interface does not seem to be available!

 

I have checked the splunkd.log file but still can't figure out which error is preventing the splunk daemon from starting.

Can someone please help with the above issue?

@richgalloway, help needed.


isoutamo
SplunkTrust

Can you paste at least the ERROR and WARN entries from your splunkd.log (from the latest start, i.e. after the "My GUID is" line)?
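For example, something like this should pull out the most recent ones (a minimal sketch; adjust the path to your $SPLUNK_HOME):

# show the last 50 ERROR/FATAL/WARN lines from splunkd.log
grep -E " (ERROR|FATAL|WARN) " /home/*****/splunk/var/log/splunk/splunkd.log | tail -50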
r. Ismo

Javoraqa
Engager

09-07-2020 04:29:01.635 -0700 WARN outputcsv - sid:scheduler_c3BsdW5rLXN5c3RlbS11c2Vy_c3BsdW5rX2FwcF9pbmZyYXN0cnVjdHVyZQ__RMD596ce4d2fa27924d1_at_1599478140_27360 Found no results to append to collection 'em_entity_cache'.
09-07-2020 04:29:17.497 -0700 WARN LocalAppsAdminHandler - Using deprecated capabilities for write: admin_all_objects or edit_local_apps. See enable_install_apps in limits.conf
09-07-2020 04:29:38.814 -0700 INFO IndexerIf - reloading index config: request received
09-07-2020 04:29:38.956 -0700 INFO DatabaseDirectoryManager - Start-up refreshing bucket manifest index=warn_logs
09-07-2020 04:29:38.957 -0700 INFO DatabaseDirectoryManager - idx=warn_logs Writing a bucket manifest in hotWarmPath='/home/*****/splunk/var/lib/splunk/warn_logs/db', pendingBucketUpdates=0 . Reason='Refreshing manifest at start-up.'
09-07-2020 04:29:38.965 -0700 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/home/*****/splunk/var/lib/splunk/warn_logs/db
09-07-2020 04:29:38.965 -0700 INFO IndexProcessor - reloading index config: start
09-07-2020 04:29:38.965 -0700 INFO IndexProcessor - request state change from=RUN to=RECONFIGURING
09-07-2020 04:29:38.965 -0700 INFO IndexProcessor - Initializing: readonly=false reloading=true
09-07-2020 04:29:38.965 -0700 INFO IndexProcessor - Got a list of count=1 added, modified, or removed indexes
09-07-2020 04:29:38.965 -0700 INFO IndexProcessor - Reloading index config: shutdown subordinate threads, now restarting
09-07-2020 04:29:38.965 -0700 INFO HotDBManager - idx=warn_logs minHotIdleSecsBeforeForceRoll=auto; initializing, current value=600
09-07-2020 04:29:38.965 -0700 INFO HotDBManager - idx=warn_logs Setting hot mgr params: maxHotSpanSecs=7776000 maxHotBuckets=3 minHotIdleSecsBeforeForceRoll=auto maxDataSizeBytes=786432000 quarantinePastSecs=77760000 quarantineFutureSecs=2592000
09-07-2020 04:29:38.965 -0700 INFO HotDBManager - closing hot mgr for idx=warn_logs
09-07-2020 04:29:38.965 -0700 INFO IndexWriter - idx=warn_logs, Initializing,
09-07-2020 04:29:38.966 -0700 INFO IndexWriter - openDatabases complete currentId=-1 idx=warn_logs
09-07-2020 04:29:38.966 -0700 INFO IndexProcessor - Initializing indexes took usec=116 reloading=true indexes_initialized=1
09-07-2020 04:29:38.966 -0700 INFO IndexProcessor - request state change from=RECONFIGURING to=RUN
09-07-2020 04:29:38.966 -0700 INFO IndexProcessor - reloading index config: end
09-07-2020 04:30:01.385 -0700 WARN outputcsv - sid:scheduler_c3BsdW5rLXN5c3RlbS11c2Vy_c3BsdW5rX2FwcF9pbmZyYXN0cnVjdHVyZQ__RMD5087bf7ab8bb80e59_at_1599478200_27362 Found no results to append to collection 'em_entity_cache'.
09-07-2020 04:30:42.249 -0700 INFO IndexProcessor - handleSignal : Disabling streaming searches.
09-07-2020 04:30:42.249 -0700 INFO IndexProcessor - request state change from=RUN to=SHUTDOWN_SIGNALED
09-07-2020 04:30:42.249 -0700 INFO UiHttpListener - Shutting down webui
09-07-2020 04:30:42.263 -0700 INFO UiHttpListener - Shutting down webui completed
09-07-2020 04:30:42.538 -0700 INFO IndexProcessor - ingest_pipe=0: active realtime streams have hit 0 during shutdown
09-07-2020 04:30:47.304 -0700 INFO loader - Shutdown HTTPDispatchThread
09-07-2020 04:30:47.304 -0700 INFO ShutdownHandler - Shutting down splunkd
09-07-2020 04:30:47.304 -0700 INFO ShutdownHandler - shutting down level "ShutdownLevel_Begin"
09-07-2020 04:30:47.326 -0700 INFO ShutdownHandler - shutting down level "ShutdownLevel_FileIntegrityChecker"
09-07-2020 04:30:47.326 -0700 INFO ShutdownHandler - shutting down level "ShutdownLevel_JustBeforeKVStore"
09-07-2020 04:30:47.336 -0700 INFO ShutdownHandler - shutting down level "ShutdownLevel_KVStore"
09-07-2020 04:30:48.326 -0700 INFO ShutdownHandler - shutting down level "ShutdownLevel_DFM"
09-07-2020 04:30:48.326 -0700 INFO ShutdownHandler - shutting down level "ShutdownLevel_Thruput"
09-07-2020 04:30:48.326 -0700 INFO ShutdownHandler - shutting down level "ShutdownLevel_TcpInput1"
09-07-2020 04:30:48.326 -0700 INFO TcpInputProc - Running shutdown level 1. Closing listening ports.
09-07-2020 04:30:48.326 -0700 INFO TcpInputProc - Done setting shutdown in progress signal.
09-07-2020 04:30:48.326 -0700 INFO TcpInputProc - Shutting down listening ports
09-07-2020 04:30:48.326 -0700 INFO TcpInputProc - Stopping IPv4 port 9997
09-07-2020 04:30:48.326 -0700 INFO TcpInputProc - Setting up input quiesce timeout for : 90.000 secs
09-07-2020 04:30:49.006 -0700 INFO TcpInputProc - Waiting for connection from src=172.22.4.45:38090 to close before shutting down TcpInputProcessor.
09-07-2020 04:30:57.370 -0700 ERROR ExecProcessor - message from "/home/*****/splunk/bin/python2.7 /home/*****/splunk/etc/apps/webhooks_input/bin/webhook.py" 172.22.164.90 - - [07/Sep/2020 04:30:57] "GET /en-US/splunkd/__raw/services/messages?output_mode=json&sort_key=timeCreated_epochSecs&sort_dir=desc&count=1000&_=1599478173857 HTTP/1.1" 404 -
09-07-2020 04:30:57.370 -0700 ERROR ExecProcessor - message from "/home/*****/splunk/bin/python2.7 /home/*****/splunk/etc/apps/webhooks_input/bin/webhook.py" 172.22.164.90 - - [07/Sep/2020 04:30:57] "GET /en-US/splunkd/__raw/services/messages?output_mode=json&sort_key=timeCreated_epochSecs&sort_dir=desc&count=1000&_=1599478178555 HTTP/1.1" 404 -
09-07-2020 04:30:57.564 -0700 ERROR ExecProcessor - message from "/home/*****/splunk/bin/python2.7
allow Empty/Default cluster pass4symmkey=true rrt=restart dft=180 abt=600 sbs=1
09-07-2020 04:33:26.391 -0700 INFO ClusteringMgr - clustering disabled
09-07-2020 04:33:26.391 -0700 WARN SHCConfig - Default pass4symkey is being used. Please change to a random one.
09-07-2020 04:33:26.392 -0700 INFO SHClusterMgr - initing shpooling with: ht=60.000 rf=3 ct=60.000 st=60.000 rt=60.000 rct=5.000 rst=5.000 rrt=10.000 rmst=600.000 rmrt=600.000 pe=1 im=0 is=0 mor=5 pb=5 rep_port= pptr=10
09-07-2020 04:33:26.392 -0700 INFO SHClusterMgr - shpooling disabled
09-07-2020 04:33:26.420 -0700 WARN WorkloadConfig - Failed to read workload-pools in the workload_pools.conf file. There are no workload-pool stanzas.
09-07-2020 04:33:26.420 -0700 INFO WorkloadManager - Workload management for splunk node=******.*****.com with guid=59453183-4D77-48F9-BDC4-5D6276DB2690 has been disabled.
09-07-2020 04:33:26.423 -0700 INFO ulimit - Limit: data file size: unlimited
09-07-2020 04:33:26.423 -0700 INFO ulimit - Limit: open files: 4096 files
09-07-2020 04:33:26.423 -0700 INFO ulimit - Limit: user processes: 14993 processes
09-07-2020 04:33:26.423 -0700 INFO ulimit - Limit: cpu time: unlimited
09-07-2020 04:33:26.423 -0700 INFO ulimit - Linux transparent hugepage support, enabled="always" defrag="madvise"
09-07-2020 04:33:26.423 -0700 WARN ulimit - This configuration of transparent hugepages is known to cause serious runtime problems with Splunk. Typical symptoms include generally reduced performance and catastrophic breakdown in system responsiveness under high memory pressure. Please fix by setting the values for transparent huge pages to "madvise" or preferably "never" via sysctl, kernel boot parameters, or other method recommended by your Linux distribution.

@isoutamo 


isoutamo
SplunkTrust

Have you configured systemd-based boot start? And have you recently updated your index definitions?
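If you are unsure about boot-start, something like this should show whether it is configured (a sketch; the unit or script name can vary by install):

# look for a systemd unit or an init script installed for Splunk
systemctl list-unit-files | grep -i splunk
ls /etc/init.d/ | grep -i splunk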


Javoraqa
Engager

Have you configured systemd-based boot start? And have you recently updated your index definitions?

@isoutamo ,

I haven't set up any boot-start configuration.

The last index I created was warn_logs, and I created it from the UI.

I was also trying to send logs from a remote server using a universal forwarder.

I created the configuration in

 /etc/system/local/inputs.conf

to send data from a directory to my receiver.

In the conf file I added the monitor, index, and sourcetype settings.

Then I restarted the UF; everything was fine, and using splunk list monitor I could see that my log files were picked up by the universal forwarder.

After that, I manually created the same index as given in the conf file in the Splunk UI.

Then I restarted Splunk Enterprise, and it gave me the above error.

 


isoutamo
SplunkTrust
There's a good chance there is a mistake or typo in your indexes.conf file. Can you post it here?
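For example, something like this flags bad keys and shows the effective (merged) settings for the new index (a sketch; adjust the path to your $SPLUNK_HOME):

# validate all conf files, then dump the merged stanza for warn_logs
/home/*****/splunk/bin/splunk btool check --debug
/home/*****/splunk/bin/splunk btool indexes list warn_logs --debug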

Javoraqa
Engager

@isoutamo ,

I haven't made any changes in the default directory.

Please find below my indexes.conf file:

 

################################################################################
# "global" params (not specific to individual indexes)
################################################################################
sync = 0
indexThreads = auto
memPoolMB = auto
defaultDatabase = main
enableRealtimeSearch = true
suppressBannerList =
maxRunningProcessGroups = 8
maxRunningProcessGroupsLowPriority = 1
bucketRebuildMemoryHint = auto
serviceOnlyAsNeeded = true
serviceSubtaskTimingPeriod = 30
serviceInactiveIndexesPeriod = 60
maxBucketSizeCacheEntries = 0
processTrackerServiceInterval = 1
hotBucketTimeRefreshInterval = 10
rtRouterThreads = 0
rtRouterQueueSize = 10000
selfStorageThreads = 2
fileSystemExecutorWorkers = 5

################################################################################
# index specific defaults
################################################################################
maxDataSize = auto
maxWarmDBCount = 300
frozenTimePeriodInSecs = 188697600
rotatePeriodInSecs = 60
coldToFrozenScript =
coldToFrozenDir =
compressRawdata = true
maxTotalDataSizeMB = 500000
maxGlobalRawDataSizeMB = 0
maxGlobalDataSizeMB = 0
maxMemMB = 5
maxConcurrentOptimizes = 6
maxHotSpanSecs = 7776000
maxHotIdleSecs = 0
maxHotBuckets = 3
minHotIdleSecsBeforeForceRoll = auto
quarantinePastSecs = 77760000
quarantineFutureSecs = 2592000
rawChunkSizeBytes = 131072
minRawFileSyncSecs = disable
assureUTF8 = false
serviceMetaPeriod = 25
partialServiceMetaPeriod = 0
throttleCheckPeriod = 15
syncMeta = true
maxMetaEntries = 1000000
maxBloomBackfillBucketAge = 30d
enableOnlineBucketRepair = true
enableDataIntegrityControl = false
maxTimeUnreplicatedWithAcks = 60
maxTimeUnreplicatedNoAcks = 300
minStreamGroupQueueSize = 2000
warmToColdScript=
tstatsHomePath = volume:_splunk_summaries/$_index_name/datamodel_summary
homePath.maxDataSizeMB = 0
coldPath.maxDataSizeMB = 0
streamingTargetTsidxSyncPeriodMsec = 5000
journalCompression = gzip
enableTsidxReduction = false
suspendHotRollByDeleteQuery = false
tsidxReductionCheckPeriodInSec = 600
timePeriodInSecBeforeTsidxReduction = 604800
datatype = event
splitByIndexKeys =
tsidxWritingLevel = 1
archiver.enableDataArchive = false
archiver.maxDataArchiveRetentionPeriod = 0
tsidxTargetSizeMB = 1500
metric.tsidxTargetSizeMB = 1500
metric.enableFloatingPointCompression = true
metric.compressionBlockSize = 1024
waitPeriodInSecsForManifestWrite = 60

#
# By default none of the indexes are replicated.
#
repFactor = 0

[volume:_splunk_summaries]
path = $SPLUNK_DB

[provider-family:hadoop]
vix.mode = report
vix.command = $SPLUNK_HOME/bin/jars/sudobash
vix.command.arg.1 = $HADOOP_HOME/bin/hadoop
vix.command.arg.2 = jar
vix.command.arg.3 = $SPLUNK_HOME/bin/jars/SplunkMR-h1.jar
vix.command.arg.4 = com.splunk.mr.SplunkMR
vix.env.MAPREDUCE_USER =
vix.env.HADOOP_HEAPSIZE = 512
vix.env.HADOOP_CLIENT_OPTS = -XX:ParallelGCThreads=4 -XX:+UseParallelGC -XX:+DisplayVMOutputToStderr
vix.env.HUNK_THIRDPARTY_JARS = $SPLUNK_HOME/bin/jars/thirdparty/common/avro-1.7.7.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/avro-mapred-1.7.7.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/commons-compress-1.19.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/commons-io-2.4.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/libfb303-0.9.2.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/parquet-hive-bundle-1.10.1.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/snappy-java-1.1.1.7.jar,$SPLUNK_HOME/bin/jars/thirdparty/hive/hive-exec-0.12.0.jar,$SPLUNK_HOME/bin/jars/thirdparty/hive/hive-metastore-0.12.0.jar,$SPLUNK_HOME/bin/jars/thirdparty/hive/hive-serde-0.12.0.jar
vix.mapred.job.reuse.jvm.num.tasks = 100
vix.mapred.child.java.opts = -server -Xmx512m -XX:ParallelGCThreads=4 -XX:+UseParallelGC -XX:+DisplayVMOutputToStderr
vix.mapred.reduce.tasks = 0
vix.mapred.job.map.memory.mb = 2048
vix.mapred.job.reduce.memory.mb = 512
vix.mapred.job.queue.name = default
vix.mapreduce.job.jvm.numtasks = 100
vix.mapreduce.map.java.opts = -server -Xmx512m -XX:ParallelGCThreads=4 -XX:+UseParallelGC -XX:+DisplayVMOutputToStderr
vix.mapreduce.reduce.java.opts = -server -Xmx512m -XX:ParallelGCThreads=4 -XX:+UseParallelGC -XX:+DisplayVMOutputToStderr
vix.mapreduce.job.reduces = 0
vix.mapreduce.map.memory.mb = 2048
vix.mapreduce.reduce.memory.mb = 512
vix.mapreduce.job.queuename = default
vix.splunk.search.column.filter = 1
vix.splunk.search.mixedmode = 1
vix.splunk.search.debug = 0
vix.splunk.search.mr.maxsplits = 10000
vix.splunk.search.mr.minsplits = 100
vix.splunk.search.mr.splits.multiplier = 10
vix.splunk.search.mr.poll = 2000
vix.splunk.search.recordreader = SplunkJournalRecordReader,ValueAvroRecordReader,SimpleCSVRecordReader,SequenceFileRecordReader
vix.splunk.search.recordreader.avro.regex = \.avro$
vix.splunk.search.recordreader.csv.regex = \.([tc]sv)(?:\.(?:gz|bz2|snappy))?$
vix.splunk.search.recordreader.sequence.regex = \.seq$
vix.splunk.home.datanode = /tmp/splunk/$SPLUNK_SERVER_NAME/
vix.splunk.heartbeat = 1
vix.splunk.heartbeat.threshold = 60
vix.splunk.heartbeat.interval = 1000
vix.splunk.setup.onsearch = 1
vix.splunk.setup.package = current

################################################################################
# index definitions
################################################################################

[main]
homePath = $SPLUNK_DB/defaultdb/db
coldPath = $SPLUNK_DB/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
tstatsHomePath = volume:_splunk_summaries/defaultdb/datamodel_summary
maxMemMB = 20
maxConcurrentOptimizes = 6
maxHotIdleSecs = 86400
maxHotBuckets = 10
maxDataSize = auto_high_volume

[history]
homePath = $SPLUNK_DB/historydb/db
coldPath = $SPLUNK_DB/historydb/colddb
thawedPath = $SPLUNK_DB/historydb/thaweddb
tstatsHomePath = volume:_splunk_summaries/historydb/datamodel_summary
maxDataSize = 10
frozenTimePeriodInSecs = 604800

[summary]
homePath = $SPLUNK_DB/summarydb/db
coldPath = $SPLUNK_DB/summarydb/colddb
thawedPath = $SPLUNK_DB/summarydb/thaweddb
tstatsHomePath = volume:_splunk_summaries/summarydb/datamodel_summary

[_internal]
homePath = $SPLUNK_DB/_internaldb/db
coldPath = $SPLUNK_DB/_internaldb/colddb
thawedPath = $SPLUNK_DB/_internaldb/thaweddb
tstatsHomePath = volume:_splunk_summaries/_internaldb/datamodel_summary
maxDataSize = 1000
maxHotSpanSecs = 432000
frozenTimePeriodInSecs = 2592000

[_audit]
homePath = $SPLUNK_DB/audit/db
coldPath = $SPLUNK_DB/audit/colddb
thawedPath = $SPLUNK_DB/audit/thaweddb
tstatsHomePath = volume:_splunk_summaries/audit/datamodel_summary

[_thefishbucket]
homePath = $SPLUNK_DB/fishbucket/db
coldPath = $SPLUNK_DB/fishbucket/colddb
thawedPath = $SPLUNK_DB/fishbucket/thaweddb
tstatsHomePath = volume:_splunk_summaries/fishbucket/datamodel_summary
maxDataSize = 500
frozenTimePeriodInSecs = 2419200

# this index has been removed in the 4.1 series, but this stanza must be
# preserved to avoid displaying errors for users that have tweaked the index's
# size/etc parameters in local/indexes.conf.
#
[splunklogger]
homePath = $SPLUNK_DB/splunklogger/db
coldPath = $SPLUNK_DB/splunklogger/colddb
thawedPath = $SPLUNK_DB/splunklogger/thaweddb
disabled = true

[_introspection]
homePath = $SPLUNK_DB/_introspection/db
coldPath = $SPLUNK_DB/_introspection/colddb
thawedPath = $SPLUNK_DB/_introspection/thaweddb
maxDataSize = 1024
frozenTimePeriodInSecs = 1209600

[_telemetry]
homePath = $SPLUNK_DB/_telemetry/db
coldPath = $SPLUNK_DB/_telemetry/colddb
thawedPath = $SPLUNK_DB/_telemetry/thaweddb
maxDataSize = 256
frozenTimePeriodInSecs = 63072000

[_metrics]
homePath = $SPLUNK_DB/_metrics/db
coldPath = $SPLUNK_DB/_metrics/colddb
thawedPath = $SPLUNK_DB/_metrics/thaweddb
datatype = metric
#14 day retention
frozenTimePeriodInSecs = 1209600
splitByIndexKeys = metric_name

# Internal Use Only: rollup data from the _metrics index.
[_metrics_rollup]
homePath = $SPLUNK_DB/_metrics_rollup/db
coldPath = $SPLUNK_DB/_metrics_rollup/colddb
thawedPath = $SPLUNK_DB/_metrics_rollup/thaweddb
datatype = metric
# 2 year retention
frozenTimePeriodInSecs = 63072000
splitByIndexKeys = metric_name

# NOTE: When adding a new index, please also add an entry in cfg/bundles/cluster/default/indexes.conf.in
# with repFactor=0, homePath, coldPath, and thawedPath


isoutamo
SplunkTrust

You said:

After that, I manually created the same index as given in the conf file in the Splunk UI.

Then I restarted Splunk Enterprise, and it gave me the above error.

I would like to see those changes.

r. Ismo 


Javoraqa
Engager

@isoutamo 

This is my inputs.conf file on the UF, at etc/system/local/inputs.conf:

[monitor:///data/*****/logs/]
disabled = 0
host = *****.******.com
index=warn_logs
sourcetype = *****_exceptions
whitelist = .+ERROR.+$
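For reference, I checked that the stanza was picked up with something like this, run from the UF's bin directory (the second command is a sketch showing where each setting comes from):

# confirm the monitored path and the effective inputs.conf settings
./splunk list monitor
./splunk btool inputs list monitor --debug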


isoutamo
SplunkTrust
You didn't change that indexes.conf on the indexer side?

Javoraqa
Engager

@isoutamo ,
No changes were made to the indexes.conf file on the indexer side.


Javoraqa
Engager

@niketn @woodcock  @efika 

Can you please help with the above issue?


Javoraqa
Engager

@isoutamo 

I found some error logs in the splunkd.log file. Are these errors preventing splunkd from restarting?

 

09-08-2020 00:52:18.877 -0700 ERROR BucketMover - Failed to create file='/home/*****/splunk/var/lib/splunk/audit/db/db_1599484818_1599484739_40/optimize.result': Permission denied
09-08-2020 00:52:18.879 -0700 ERROR BucketMover - Failed to create file='/home/*****/splunk/var/lib/splunk/_internaldb/db/db_1599484817_1599484682_41/optimize.result': Permission denied
09-08-2020 00:52:18.880 -0700 ERROR BucketMover - Failed to create file='/home/*****/splunk/var/lib/splunk/_introspection/db/db_1599484771_1599484678_40/optimize.result': Permission denied
09-08-2020 00:52:18.881 -0700 ERROR BucketMover - Failed to create file='/home/*****/splunk/var/lib/splunk/_metrics/db/db_1599484797_1599484678_81/optimize.result': Permission denied
09-08-2020 00:52:18.881 -0700 ERROR BucketMover - Failed to create file='/home/*****/splunk/var/lib/splunk/_metrics/db/db_1599484797_1599484678_80/optimize.result': Permission denied
09-08-2020 00:52:18.885 -0700 ERROR BucketMover - Failed to create file='/home/*****/splunk/var/lib/splunk/em_metrics/db/db_1599484799_1599484768_13/optimize.result': Permission denied
09-08-2020 00:52:18.886 -0700 ERROR BucketMover - Failed to create file='/home/*****/splunk/var/lib/splunk/defaultdb/db/db_1599484769_1599484768_28/optimize.result': Permission denied
09-08-2020 00:52:23.703 -0700 ERROR TailingProcessor - Skipping stanza 'batch://$SPLUNK_HOME\var\spool\splunk\...stash_syndication_input' due to error: Failed to regex-split wildcarded path: $SPLUNK_HOME\var\spool\splunk\...stash_syndication_input for stanza batch://$SPLUNK_HOME\var\spool\splunk\...stash_syndication_input..
09-08-2020 00:52:23.703 -0700 ERROR TailingProcessor - Skipping stanza 'batch://$SPLUNK_HOME\var\spool\splunk\...stash_web_input' due to error: Failed to regex-split wildcarded path: $SPLUNK_HOME\var\spool\splunk\...stash_web_input for stanza batch://$SPLUNK_HOME\var\spool\splunk\...stash_web_input..
09-08-2020 00:52:28.920 -0700 ERROR ExecProcessor - message from "/home/*****/splunk/bin/python2.7 /home/*****/splunk/etc/apps/webhooks_input/bin/webhook.py" 127.0.0.1 - - [08/Sep/2020 00:52:28] "HEAD /robots.txt HTTP/1.1" 404 -
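For reference, the ownership of the affected directories can be inspected with something like:

# check who owns the bucket directories that splunkd cannot write to
ls -ld /home/*****/splunk/var/lib/splunk/audit/db
ls -l /home/*****/splunk/var/lib/splunk/audit/db/db_1599484818_1599484739_40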


isoutamo
SplunkTrust
Yep, I think so. Can you change the ownership of those files to the user that runs the Splunk processes?
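Something like this, run as root, is the usual fix (a sketch, assuming splunkd runs as the user 'splunk' and $SPLUNK_HOME is /home/*****/splunk):

# give the splunk user back ownership of all index data
chown -R splunk:splunk /home/*****/splunk/var/lib/splunk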

Javoraqa
Engager

@isoutamo ,
Even after changing the ownership of the files, I'm still facing the same issue.

Below are some updated logs:
09-08-2020 04:42:27.881 -0700 ERROR ExecProcessor - message from "/home/*****/splunk/bin/python2.7 /home/*****/splunk/etc/apps/webhooks_input/bin/webhook.py" 127.0.0.1 - - [08/Sep/2020 04:42:27] "HEAD /robots.txt HTTP/1.1" 404 -
09-08-2020 04:42:27.898 -0700 WARN outputcsv - sid:scheduler_c3BsdW5rLXN5c3RlbS11c2Vy_c3BsdW5rX2FwcF9pbmZyYXN0cnVjdHVyZQ__RMD51a41784832141e6b_at_1599565320_7 Found no results to append to collection 'em_entity_cache'.
09-08-2020 04:42:27.936 -0700 WARN outputcsv - sid:scheduler_c3BsdW5rLXN5c3RlbS11c2Vy_c3BsdW5rX2FwcF9pbmZyYXN0cnVjdHVyZQ__RMD596ce4d2fa27924d1_at_1599565320_8 Found no results to append to collection 'em_entity_cache'.
09-08-2020 04:42:27.937 -0700 FATAL HTTPServer - Could not bind to ip 127.0.0.1 port 8000


isoutamo
SplunkTrust
"Could not bind to ip xx port yy" means that something is already using that IP and port.
Can you check it with netstat -napt to see which process is bound to it?
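For example (a sketch; use ss where netstat is not installed):

# find the process currently bound to port 8000
netstat -napt | grep :8000
ss -tlnp | grep :8000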

Javoraqa
Engager

@isoutamo ,

The issue is resolved; it was the webhook app that was causing it.
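For anyone hitting the same thing: the offending app can be disabled with something like this (a sketch; adjust the path and credentials):

# disable the webhooks_input app so it no longer holds the web port
/home/*****/splunk/bin/splunk disable app webhooks_input -auth admin:<password>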
