High CPU usage and kvstore errors with Splunk Stream

Engager

I'm having issues with Splunk Stream consuming all of my deployment server's CPU. The deployment server sits at 99% CPU usage constantly. I have 165 deployment clients, and 150 of those are Stream forwarders. The phone home interval is set to 600 seconds and the ping interval to 900 seconds. I'm running Splunk Enterprise 7.2.5.1, Stream 7.1.3, and the UFs are on 7.2.5.1.
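For reference, here is how the phone home interval is set on each client. This is a minimal deploymentclient.conf sketch; the target URI below is a placeholder for my actual deployment server, and the Stream ping interval is configured separately in the Stream app's settings:

# deploymentclient.conf on each deployment client
[deployment-client]
# phone home every 600 seconds, as described above
phoneHomeIntervalInSecs = 600

[target-broker:deploymentServer]
# placeholder host; points at the actual deployment server
targetUri = deploy.example.com:8089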

Approximately every two seconds I see these logs in splunk_app_stream.log:

2019-05-15 14:44:41,138 INFO    stream_kvstore_utils:178 - search_head_shc_member:: server_roles [u'license_master', u'cluster_search_head', u'deployment_server', u'search_head', u'search_peer', u'kv_store']
2019-05-15 14:44:41,138 INFO    stream_kvstore_utils:177 - is_kv_store_ready, kv store status :: ready
2019-05-15 14:44:41,138 INFO    stream_kvstore_utils:176 - splunk fatal error: False kv store fatal error: False
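To rule out the KV store itself, its health can also be checked directly from the CLI (path assumes a default /opt/splunk install); the output should agree with the "ready" status in the log lines above:

/opt/splunk/bin/splunk show kvstore-status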

Running ps aux, I see these processes each consuming 5 to 25% of my CPU.

splunk   19594 20.0  0.1 154652 29656 ?        S    17:17   0:00 /opt/splunk/bin/python /opt/splunk/bin/runScript.py rest_validate_streamfwdauth.ValidateStreamfwdAuth
splunk   19613 21.0  0.1 154652 29880 ?        S    17:17   0:00 /opt/splunk/bin/python /opt/splunk/bin/runScript.py rest_validate_streamfwdauth.ValidateStreamfwdAuth
splunk   19615 20.0  0.1 154652 29808 ?        S    17:17   0:00 /opt/splunk/bin/python /opt/splunk/bin/runScript.py rest_validate_streamfwdauth.ValidateStreamfwdAuth
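A rough sketch to count these validator processes and total up their CPU share (the bracketed [V] in the pattern keeps awk's own command line from matching itself in the ps output):

ps aux | awk '/[V]alidateStreamfwdAuth/ {n++; cpu+=$3} END {printf "%d processes, %.1f%% total CPU\n", n, cpu}'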

Anyone else run into the same issues?

1 Solution

Engager

The high CPU usage was caused by a permissions issue on some files in the stream folder. Running "sudo chown -R splunk:splunk /opt/splunk" fixed the issue. The events I showed in my splunk_app_stream.log file are normal.
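If anyone else hits this, a quick sketch to list which files a blanket chown would actually touch before running it (standard find predicates; paths assume a default install):

# list anything under /opt/splunk not owned by the splunk user or group
sudo find /opt/splunk \( ! -user splunk -o ! -group splunk \) -ls
# then fix ownership recursively
sudo chown -R splunk:splunk /opt/splunk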

Explorer

Unfortunately, my /opt/splunk is already entirely owned by splunk. I have about 150 machines checking into the Stream app.


Explorer

I have a bunch of these in the log, scrolling by very fast, and they would seem to contribute to the CPU issues.

2020-01-29 08:07:57,622 INFO    streamfwdauth:23 - cacheDateLastUpdated::0 appsDateLastUpdated::1580304470039
2020-01-29 08:07:57,622 INFO    streamfwdauth:23 - cacheDateLastUpdated::0 appsDateLastUpdated::1580304470039
2020-01-29 08:07:57,649 INFO    streamfwdauth:23 - cacheDateLastUpdated::0 appsDateLastUpdated::1580304470039
2020-01-29 08:07:57,712 INFO    streamfwdauth:23 - cacheDateLastUpdated::0 appsDateLastUpdated::1580304470039
2020-01-29 08:07:57,781 INFO    streamfwdauth:23 - cacheDateLastUpdated::0 appsDateLastUpdated::1580304470039
2020-01-29 08:07:58,082 INFO    streamfwdauth:23 - cacheDateLastUpdated::0 appsDateLastUpdated::1580304470039
2020-01-29 08:07:58,322 INFO    streamfwdauth:23 - cacheDateLastUpdated::0 appsDateLastUpdated::1580304470039
2020-01-29 08:07:58,507 INFO    streamfwdauth:23 - cacheDateLastUpdated::0 appsDateLastUpdated::1580304470039
2020-01-29 08:07:58,525 INFO    streamfwdauth:23 - cacheDateLastUpdated::0 appsDateLastUpdated::1580304470039
2020-01-29 08:07:58,579 INFO    streamfwdauth:23 - cacheDateLastUpdated::0 appsDateLastUpdated::1580304470039
2020-01-29 08:07:58,625 INFO    streamfwdauth:23 - cacheDateLastUpdated::0 appsDateLastUpdated::1580304470039
2020-01-29 08:07:58,694 INFO    streamfwdauth:23 - cacheDateLastUpdated::0 appsDateLastUpdated::1580304470039
2020-01-29 08:07:59,223 INFO    streamfwdauth:23 - cacheDateLastUpdated::0 appsDateLastUpdated::1580304470039
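A quick sketch to gauge how fast these messages are being written, counting occurrences per second (the log path assumes a default /opt/splunk install):

grep 'cacheDateLastUpdated' /opt/splunk/var/log/splunk/splunk_app_stream.log | cut -d, -f1 | sort | uniq -c | tail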

Yes. I don't have an answer yet; I just wanted to let you know that I've had the same issue.
