11-17-2016 04:58 PM
http://docs.splunk.com/Documentation/Splunk/latest/ReleaseNotes/SplunkandTHP
PRODUCT ADVISORY: Pre-6.3 Splunk Enterprise, Splunk Light, and HUNK default root certificates expire on July 21, 2016.
(Updated: May 19, 2016)
SUMMARY
Instances of Splunk Enterprise, Splunk Light and HUNK that are older than 6.3 AND that are using the default certificates will no longer be able to communicate with each other after July 21, 2016 unless the certificates are replaced OR Splunk is upgraded to 6.3 or later.
Please note that for all Splunk Enterprise versions, the default root certificate that ships with Splunk is the same root certificate in every download.
That means anyone who has downloaded Splunk has server certificates signed by the same root certificate, and those certificates would be trusted by your instances. To ensure that no one can easily snoop on your traffic or wrongfully send data to your indexers, we strongly recommend that you replace the defaults with certificates signed by a reputable third-party certificate authority.
IMPACT
Failure to replace the default certificates prior to this date will result in the immediate cessation of network traffic for any connection that uses them.
Expiration of Splunk certificates does not affect:
1) Splunk instances that are in Splunk Cloud
SSL certificates used for Splunk Cloud instances are not the default Splunk certificates
Forwarder-to-Splunk Cloud traffic is not impacted; however, relay forwarders (forwarder to forwarder) can be impacted if you chose to use the default Splunk certificates for this communication
2) Splunk instances that use certificates that are internally generated (self-signed) or obtained from an external Certificate Authority (CA).
3) Splunk instances in your configuration that are upgraded to 6.3 or above and use that version’s root certificates.
4) Splunk instances that do NOT use SSL (this is the default configuration for forwarder-to-indexer communication)
Certificate expiration DOES affect Splunk deployments where:
Any or all Splunk instances in your deployment run a release prior to 6.3 and use Splunk default certificates. This includes:
Search Heads
Indexers
License Masters
Cluster Masters
Deployers
Forwarders
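To check whether a given instance is still running on the default certificates that are about to expire, you can inspect the expiration dates directly with openssl (a quick sketch; the paths assume the stock certificate locations under $SPLUNK_HOME/etc/auth):
# Print the expiry of the default root CA certificate
openssl x509 -enddate -noout -in $SPLUNK_HOME/etc/auth/cacert.pem
# Print the expiry of the default server certificate
openssl x509 -enddate -noout -in $SPLUNK_HOME/etc/auth/server.pem
If either command reports a notAfter date of July 21, 2016 (or earlier), that instance is affected.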
RECOMMENDATIONS
There are several ways to resolve the certificate expiration. You must take action prior to July 21, 2016.
1) Remain at your current Splunk version (pre-6.3) and manually replace the current default root certificates using the provided shell script appropriate for your operating system. Note that the shell script only replaces the current default root certificate with a new (cloned) certificate that has a future expiration date. The script does not replace a Splunk default certificate with your own certificate.
The script is available at:
http://download.splunk.com/products/certificates/renewcerts-2016-05-05.zip
Update: minor script changes to update messages and remove redirect of stderr to /dev/null when checking OpenSSL version
Please be sure to read the README.txt included in the zip file before running the script.
2) Upgrade all Splunk instances in your environment to 6.3 or above and use self-signed or CA-signed certificates. We strongly recommend this as the most secure option. Replace the current default root certificates with your own certificates. Download the following document to learn about hardening your Splunk infrastructure:
Splunk Security: Hardening Standards
3) Remain at your current Splunk version (pre-6.3) and use self-signed or CA-signed certificates. Replace the current default root certificates with your own certificates. Download the following document to learn about hardening your Splunk infrastructure:
Splunk Security: Hardening Standards
4) Upgrade ALL Splunk instances to 6.3 or above and use those default root certificates.
Note: Prior to the upgrade, please remove any copies of the Splunk default certificates ca.pem and cacert.pem that are in use.
Refer to: Upgrading my Splunk Enterprise 6.2.x to 6.3.x did not upgrade the expiration dates on my default SSL certs, why?
See the following link to learn about adding certificates:
Securing Splunk Enterprise
Use the following procedure to configure default certificates:
Configure Splunk forwarding to use the default certificate
For Splunk Enterprise, Splunk Light, and HUNK prior to 6.3, default root certificates will expire on July 21, 2016.
What are the suggested recommendations?
Splunk has submitted Splunk Enterprise 6.4.0 for Common Criteria evaluation.
Until we are Common Criteria certified, we neither recommend nor support configuring Splunk Enterprise in Common Criteria mode.
This can also be referenced in the Splunk documentation: About securing Splunk Enterprise
The Splunk Enterprise 6.4 release shows some .conf.spec files (e.g. server.conf, authentication.conf, ...) that have references to Common Criteria mode for some of the attributes.
Is Splunk 6.4 NIAP certified yet, and is this supported?
12-17-2015 02:40 PM
1 Karma
The 6.3.2 maintenance release is now available for download and includes the corrected Cookie.py, which fixes SPL-107449.
The one-line change in the 6.3.2 Cookie.py is the same as the workaround provided.
11-18-2015 11:30 AM
For completeness, the Cookie.py locations in 6.3 and 6.3.1:
Linux/Solaris/Mac OS: $SPLUNK_HOME/lib/python2.7/Cookie.py
Windows: %SPLUNK_HOME%\Python-2.7\lib\Cookie.py
11-18-2015 09:15 AM
There was no change in the location of Cookie.py in 6.3.1.
It is still found in $SPLUNK_HOME/lib/python2.7
11-13-2015 04:27 PM
11 Karma
As previously posted, this is a known issue, SPL-107449, where the UI is missing app icons and navigation drop-downs.
It has been reported in 6.3 and 6.3.1:
http://docs.splunk.com/Documentation/Splunk/6.3.1/ReleaseNotes/Knownissues
This is related to the upgrade from Python 2.7.8 to Python 2.7.9, where setting a cookie with brackets in the name can cause the issue.
Python reported this bug in 2.7.9 and fixed it in 2.7.10, as referenced in this link:
https://bugs.python.org/issue22931
Below are 2 options that can be used prior to the fix, which is tentatively targeted for the next 6.3.x maintenance release.
Note: When the maintenance release is applied, it will overwrite this modified Cookie.py
Option 1:
1) Make a backup copy of $SPLUNK_HOME/lib/python2.7/Cookie.py
Keep in mind that if you have custom changes in this Cookie.py, they will be overwritten by the replacement file for this fix in an upcoming maintenance release.
2) Modify $SPLUNK_HOME/lib/python2.7/Cookie.py as the user who starts Splunk with the following changes:
From:
_LegalCharsPatt = r"[\w\d!#%&'~_`><@,:/\$\*\+\-\.\^\|\)\(\?\}\{\=]"
To:
_LegalCharsPatt = r"[\w\d!#%&'~_`><@,:/\$\*\+\-\.\^\|\)\(\?\[\]\}\{\=]"
3) Restart your Splunk search head(s)
Option 2:
1) Download the modified Cookie.py from here; it includes the same fix as in Option 1.
If you have issues with the download, please contact Support.
2) Make a backup copy of $SPLUNK_HOME/lib/python2.7/Cookie.py
3) Replace $SPLUNK_HOME/lib/python2.7/Cookie.py with the modified file
4) Restart your Splunk search head(s)
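After applying either option, a quick way to confirm the patch took effect is to exercise the bundled Python's cookie parser directly (a minimal check, relying on the behavior described above where a bracketed cookie name is dropped by the unpatched parser):
$SPLUNK_HOME/bin/splunk cmd python -c "
import Cookie
c = Cookie.SimpleCookie()
c.load('name[0]=value')
# The bracketed name only survives parsing once _LegalCharsPatt allows [ and ]
print('patched' if 'name[0]' in c else 'NOT patched')"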
11-12-2015 04:13 PM
2 Karma
What you are seeing in 6.2 - 6.2.6 is a known issue, SPL-109387, where both the universal forwarder (UF) and light forwarder (LWF) log this benign error every 10 minutes:
ERROR DiskMon - None such on disk: .../splunkforwarder/var/run/splunk/dispatch
This occurs when the app .../splunkforwarder/etc/apps/introspection_generator_addon is enabled and it attempts to retrieve information about disk object partitions.
Below are 3 different workarounds:
Option 1:
Edit ../splunkforwarder/etc/system/local/server.conf and add the following entry:
[introspection:generator:disk_objects__partitions]
disabled = true
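To confirm the stanza is picked up after a restart, btool can show the effective setting (a quick sketch; run it on the forwarder):
../splunkforwarder/bin/splunk btool server list introspection:generator:disk_objects__partitions --debug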
Option 2:
In ../splunkforwarder/etc/log.cfg (requires a restart), raise the logging level from:
category.DiskMon=INFO
to:
category.DiskMon=CRIT
Option 3:
Create an empty directory on that UF/LWF called .../splunkforwarder/var/run/splunk/dispatch
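For example, on a Linux forwarder installed under /opt (an assumption; adjust the path and the ownership to match your environment):
# Create the missing dispatch directory so DiskMon has something to stat
mkdir -p /opt/splunkforwarder/var/run/splunk/dispatch
# Make sure the user Splunk runs as owns it (user and group assumed here)
chown splunk:splunk /opt/splunkforwarder/var/run/splunk/dispatch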
11-11-2015 04:21 PM
Yes, that should be a quick check in your 6.2.6 SHC environment. If shpoolManaged does not exist in the non-scheduler artifact directories, the growing dispatch should be investigated separately from this known issue.
11-05-2015 11:55 AM
13 Karma
This is a known issue, SPL-107610/SPL-108806, where an SHC control file called shpoolManaged prevents the dispatch reaper from performing its usual removal of search artifacts.
This issue does not affect 6.3.
Note: clean-dispatch is also affected by this issue; some artifacts may still remain after it runs.
While a fix is anticipated in the next maintenance release, the following workaround can be implemented on your SHC members to mitigate the issue.
The following script, remove_SHC_control_files.sh, removes the stray shpoolManaged control files from non-scheduler artifact directories (the per-member setup steps follow the script):
#!/bin/bash
# Removes stray shpoolManaged SHC control files from non-scheduler
# dispatch artifacts so the dispatch reaper can resume cleanup.
# PLEASE SET THIS BEFORE RUNNING THE SCRIPT
SPLUNK_HOME=/opt/splunk
VERBOSE=0
FILE_COUNT=0
for i in `find $SPLUNK_HOME/var/run/splunk/dispatch -type f -name 'shpoolManaged' | grep -v 'scheduler_'`
do
    if [ $VERBOSE -eq 1 ]
    then
        echo "`date` - Deleting unneeded control file $i"
    fi
    rm "$i"
    FILE_COUNT=`expr $FILE_COUNT + 1`
done
echo "`date` - $FILE_COUNT control file(s) deleted in non-scheduler artifacts"
Perform the following steps on each SHC member:
1) Edit the script and set the SPLUNK_HOME variable based on where $SPLUNK_HOME is on that member.
2) Copy the script to a location where it can be run by the user Splunk runs as. Recommended: $SPLUNK_HOME/bin/scripts
3) Make sure the script belongs to the user Splunk runs as and is executable.
4) As the user Splunk runs as, use crontab -e to set up a cron job that runs the script every minute. Optionally, redirect the script's output to a log file. For example:
[root@sup-centos3-cu splunk62_SHC3]# crontab -e
* * * * * /opt/splunk/bin/scripts/remove_SHC_control_files.sh >> /opt/splunk/var/log/splunk/SHC_control_file_removal.log
5) Verify that the script is doing its job: no SHC control file named shpoolManaged should be found in any non-scheduler artifact's directory.
find $SPLUNK_HOME/var/run/splunk/dispatch -type f -name 'shpoolManaged' | grep -v 'scheduler_'
6) After a few minutes, the dispatch reaper should start catching up and the artifact count should come back to a reasonable number.
11-05-2015 11:50 AM
5 Karma
Under 6.2.6 in my search head cluster (SHC) environment, I am starting to see the number of files in dispatch grow beyond their TTL, forcing me to constantly monitor disk usage.
The dispatch reaper does not seem to be working.
I have a cron job that runs clean-dispatch to try to stay ahead of this, but is this a known issue?
10-23-2015 11:26 AM
This known issue is being tracked under ADDON-6014, which is currently being investigated.
10-05-2015 02:55 PM
4 Karma
In splunkd.log, to verify whether an SHC instance was functioning as captain or member during a given time window, look for these entries:
Captain:
10-05-2015 00:13:06.524 +0000 INFO SHPRaftConsensus - Now leader for term 16
10-05-2015 00:13:06.524 +0000 INFO SHPoolingMgr - Making node the captain
10-05-2015 00:13:06.526 +0000 INFO SHPoolingMgr - makeOrChangeSlave - master_shp = https://splunk-search-head-cluster-node20:8089
Member:
10-05-2015 00:13:06.600 +0000 INFO SHPRaftConsensus - All hail leader https://splunk-search-head-cluster-node20:8089 for term 16
10-05-2015 00:13:06.708 +0000 INFO SHPoolingMgr - makeOrChangeSlave - master_shp = https://splunk-search-head-cluster-node20:8089
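To pull the election sequence out of a member's logs in one pass, a grep along these lines works (a sketch; adjust SPLUNK_HOME for your installation, and include rolled logs to cover the full timeline):
# Show captain elections and acknowledgements in time order
grep -hE "Now leader for term|All hail leader" $SPLUNK_HOME/var/log/splunk/splunkd.log* | sort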
10-05-2015 02:54 PM
1 Karma
I know I can run the following to get the current SHC captain,
splunk show shcluster-status -auth <username>:<password>
but for debugging, what text can I search for to see the sequence of dynamic captain changes over time in my SHC instances?
03-27-2015 12:01 PM
1 Karma
This references a known issue:
SPL-95121, SPL-93893 - The Splunk 6.2 installer fails if the MSI database on the machine is partially corrupted. The MSI log will contain the message:
GetPreviousSettings: Error: DetermineContextForAllProducts failed witht: 0x65b.
This is expected to be fixed in a release after 6.2.2.
03-25-2015 01:48 PM
2 Karma
From the support case and diag provided, the $SPLUNK_HOME/var/log/splunk/splunkd_stderr.log had the following message:
2015-03-24 12:56:06.797 -0700 splunkd started (build 255606)
Gap in numbered regexes: expected attribute=whitelist.1 not found (context: stanza='serverClass:myapp')
A review of $SPLUNK_HOME/etc/system/local/serverclass.conf showed the incorrect whitelist number sequence:
[serverClass:test]
blacklist.0=1.1.1.4
blacklist.1=1.1.1.3
whitelist.2=1.1.1.1
whitelist.3=1.1.1.18
Once the whitelist sequence was corrected to start at 0, the crashes no longer occurred, e.g.:
[serverClass:test]
blacklist.0=1.1.1.4
blacklist.1=1.1.1.3
whitelist.0=1.1.1.1
whitelist.1=1.1.1.18
A known issue, SPL-98561, has been logged to prevent crashes when the whitelist/blacklist number sequence has a gap.
This is targeted for a future maintenance release beyond 6.2.2.
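Before reloading the deployment server, a quick way to review the effective serverclass configuration and spot numbering gaps like the one above is btool:
$SPLUNK_HOME/bin/splunk btool serverclass list --debug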
03-25-2015 01:45 PM
1 Karma
The deployment server was working fine, but suddenly when I run $SPLUNK_HOME/bin/splunk reload deploy-server -class test it crashes the main splunkd process.
$SPLUNK_HOME/var/log/splunk has a crashxxxx.log for each attempt.
Here is what I see on the command line after a run:
[host1]# /opt/splunk/splunk_621/bin/splunk reload deploy-server -class test
Your session is invalid. Please login.
Splunk username: admin
Password:
Login successful, running command...
An unforeseen error occurred:
Exception: <class 'httplib.BadStatusLine'>, Value: ''
Traceback (most recent call last):
File "/opt/splunk/splunk_621/lib/python2.7/site-packages/splunk/clilib/cli.py", line 1145, in main
parseAndRun(argsList)
File "/opt/splunk/splunk_621/lib/python2.7/site-packages/splunk/clilib/cli.py", line 938, in parseAndRun
retVal = makeRestCall(cmd=command, obj=subCmd, restArgList=objUnicode(argList), sessionKey=authInfo)
File "/opt/splunk/splunk_621/lib/python2.7/site-packages/splunk/rcUtils.py", line 650, in makeRestCall
serverResponse, serverContent = simpleRequest(uri, sessionKey=sessionKey, getargs=getargs, postargs=postargs, method=method)
File "/opt/splunk/splunk_621/lib/python2.7/site-packages/splunk/rest/__init__.py", line 470, in simpleRequest
serverResponse, serverContent = h.request(uri, method, headers=headers, body=payload)
File "/opt/splunk/splunk_621/lib/python2.7/site-packages/httplib2/__init__.py", line 1421, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/opt/splunk/splunk_621/lib/python2.7/site-packages/httplib2/__init__.py", line 1171, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/opt/splunk/splunk_621/lib/python2.7/site-packages/httplib2/__init__.py", line 1147, in _conn_request
response = conn.getresponse()
File "/opt/splunk/splunk_621/lib/python2.7/httplib.py", line 1067, in getresponse
response.begin()
File "/opt/splunk/splunk_621/lib/python2.7/httplib.py", line 409, in begin
version, status, reason = self._read_status()
File "/opt/splunk/splunk_621/lib/python2.7/httplib.py", line 373, in _read_status
raise BadStatusLine(line)
BadStatusLine: ''
Please file a case online at http://www.splunk.com/page/submit_issue
12-29-2014 11:09 AM
SPL-83975 was reported under Windows but has also been seen in some instances on Linux. This bug has been fixed as of 6.1.4, as referenced here.
12-10-2014 05:29 PM
DEPMON-142 has been fixed as of Deployment Monitor 5.0.4, which is currently available for download.
12-10-2014 11:50 AM
8 Karma
If you run $SPLUNK_HOME/bin/splunk cmd splunkd clean-dispatch help, it will provide the usage information.
Sample from Splunk 6.1.4:
$SPLUNK_HOME/bin/splunk cmd splunkd clean-dispatch help
Use this command to move jobs whose last modification time is earlier than the specified time from the dispatch directory to the specified destination directory.
usage: splunkd clean-dispatch '<destination directory where to move jobs>' '<latest job mod time>'
example: splunkd clean-dispatch /tmp/old-dispatch-jobs/ -1month
example: splunkd clean-dispatch /tmp/old-dispatch-jobs/ -10d@d
example: splunkd clean-dispatch /tmp/old-dispatch-jobs/ 2011-06-01T12:34:56.000-07:00
10-28-2014 03:36 PM
6 Karma
This has been reported as a known issue, SPL-92490, and is being investigated:
http://docs.splunk.com/Documentation/Splunk/latest/ReleaseNotes/Knownissues#Upgrade_issues
09-09-2014 11:19 PM
Since 6.0.3, the cluster search command no longer returns cluster_count by default, i.e. showcount = false.
Prior to 6.0.3, the default was showcount = true.
Since displaying the count can have a performance impact, from 6.0.3 onward a user must pass showcount=true to the cluster command to return cluster_count.
e.g. index=_internal | cluster showcount=true | table cluster_count, _raw
SPL-83560 updates the documentation for the cluster command's default showcount option.