Activity Feed
- Got Karma for Re: File will not be read, seekptr checksum did not match for a file in splunk. 10-18-2024 01:31 AM
- Got Karma for Re: Indexer Discovery Error (IndexerDiscoveryHeartbeatThread). 07-11-2024 07:27 AM
- Got Karma for Re: Forget Password Keyfor splunk Indexer cluster. 07-09-2024 11:57 AM
- Got Karma for Re: systemd start restart for splunk not working as expected. 07-09-2024 10:36 AM
- Got Karma for Re: systemd start restart for splunk not working as expected. 07-09-2024 10:36 AM
- Got Karma for Re: systemd start restart for splunk not working as expected. 07-03-2024 09:10 PM
- Got Karma for Re: Where to create an index in a clustered environment?. 05-14-2024 10:03 PM
- Got Karma for Re: ERROR Configuration from app=<appname> does not support reload: server.conf/[clustering]/master_uri. 02-29-2024 11:35 PM
- Got Karma for Re: File will not be read, seekptr checksum did not match for a file in splunk. 02-08-2024 08:02 AM
- Got Karma for Re: Which works best in a SHC? Even or Odd number of search heads to avoid the SHC Service becoming not available?. 01-26-2024 10:13 AM
- Got Karma for Re: Run a Scheduled Report on Demand. 01-05-2024 09:28 AM
- Got Karma for Re: Run a Scheduled Report on Demand. 01-05-2024 09:24 AM
- Got Karma for Re: Run a Scheduled Report on Demand. 01-05-2024 09:24 AM
- Got Karma for Re: Scripting admin credentials in scripted install. 12-13-2023 07:23 AM
- Got Karma for Re: Scripting admin credentials in scripted install. 12-13-2023 07:22 AM
- Got Karma for Re: ERROR DeployedApplication - Failed to install app=/web/splunk/etc/master-apps/s; reason=Application does not exist. 12-12-2023 06:19 AM
- Got Karma for Re: Applying quarantine and removing quarantine. 11-22-2023 05:49 AM
- Got Karma for Re: Forced bundle replication failed. Reverting to old behavior - using most recent bundles on all. 11-20-2023 12:13 PM
- Got Karma for Re: Which works best in a SHC? Even or Odd number of search heads to avoid the SHC Service becoming not available?. 11-14-2023 02:12 AM
- Got Karma for Re: can we get the previous results of scheduled report?. 11-02-2023 05:10 PM
Topics I've Started
No posts to display.
04-17-2019
02:16 PM
Try this:
scp -rp $SPLUNK_HOME/etc/apps/* deployer_name:$SPLUNK_HOME/etc/shcluster/apps/
04-17-2019
12:07 PM
Just recreate the directories:
mkdir -p /opt/splunk/etc/shcluster/apps
mkdir -p /opt/splunk/etc/shcluster/users
Then scp your apps and users from the search head to the deployer:
scp -rp $SPLUNK_HOME/etc/apps/app_name deployer_name:$SPLUNK_HOME/etc/shcluster/apps/
scp -rp $SPLUNK_HOME/etc/users deployer_name:$SPLUNK_HOME/etc/shcluster/users/
04-17-2019
12:00 PM
1 Karma
Assuming you have not pushed out a new version since deleting the directory on the deployer, just copy the contents from the app directory back to the deployer.
From the search head:
scp -rp $SPLUNK_HOME/etc/apps/ deployer_name:$SPLUNK_HOME/etc/shcluster/apps/
After copying, you probably want to check the contents of each app's local directory. It can (and often will) contain local changes made by users on the search heads. If you deploy again with that directory intact, those changes will be merged into the default directory of your app.
04-17-2019
10:22 AM
Based on the output you provided, the files within your .gz do not appear to have a file extension. Splunk therefore interprets those as binary files and will not attempt to ingest them.
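If the files inside the archive really are text, one hedged workaround is to disable the binary check for that sourcetype in props.conf (the app and sourcetype names below are placeholders; renaming the files with a proper extension is the cleaner fix):

```
# $SPLUNK_HOME/etc/apps/<your_app>/local/props.conf (sketch)
[your_sourcetype]
NO_BINARY_CHECK = true
```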
04-17-2019
09:33 AM
I also just noticed that your serviceAccount is labeled as "tenant-pod-nonroot", but your SPLUNK_USER is set to root. Without knowing your environment, that sounds like a conflict as well.
04-17-2019
08:12 AM
You'll need to rebuild the image using the settings I mentioned in the Dockerfile.
The image you are pulling down in your deployment YAML is the one pre-built by Splunk:
image: splunk/universalforwarder:latest
Also, I don't see your mountPath listed; what is it set to?
04-16-2019
09:03 AM
You'll also need to ensure that IP Forwarding is enabled on the OS in order to allow Docker to do what you are attempting.
I don't know Mac OS, but the Linux equivalent is set via sysctl:
net.ipv4.conf.all.forwarding = 1
and/or
net.ipv6.conf.all.forwarding = 1
Without those, Docker will never be able to communicate with the outside world.
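To persist those settings across reboots on Linux, a sketch using a sysctl drop-in (the filename is arbitrary):

```
# /etc/sysctl.d/99-forwarding.conf (example filename)
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 1
```

Load it without rebooting via `sysctl --system`.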
04-16-2019
08:30 AM
In your Docker image /opt is owned by root, so the splunk user cannot write there.
Create /opt/splunk if it does not exist and recursively change ownership:
mkdir -p /opt/splunk
chown -R splunk:splunk /opt/splunk
You can also run your image as root so that it can write to /opt as such:
docker run -d -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_USER=root" image_name:image_version
Or you can rebuild your image so that /opt/splunk is created and owned by the splunk user.
To do so, ensure that your Dockerfile contains the following lines in the RUN command, and rebuild it (the mkdir and chown lines are what you really need to focus on):
RUN mkdir -p ${SPLUNK_HOME} \
&& wget -qO /tmp/${SPLUNK_FILENAME} http://<remote_host>/${SPLUNK_FILENAME} \
&& tar xzf /tmp/${SPLUNK_FILENAME} --strip 1 -C ${SPLUNK_HOME} \
&& rm /tmp/${SPLUNK_FILENAME} \
&& rm /tmp/${SPLUNK_FILENAME}.md5 \
&& mkdir -p /var/opt/splunk \
&& cp -R ${SPLUNK_HOME}/etc ${SPLUNK_BACKUP_DEFAULT_ETC} \
&& rm -fR ${SPLUNK_HOME}/etc \
&& chown -R ${SPLUNK_USER}:${SPLUNK_GROUP} ${SPLUNK_HOME} \
&& chown -R ${SPLUNK_USER}:${SPLUNK_GROUP} ${SPLUNK_BACKUP_DEFAULT_ETC}
I've used both methods with success. Rebuilding with the above lines allowed me to deploy to Kubernetes without issue.
In my deployment yaml I have set the following under spec/containers:
volumeMounts:
- mountPath: /opt/splunk/var/run
This should get you on the right track.
04-16-2019
07:45 AM
From the web UI on any of your search heads, go to Activity > Jobs, then sort by runtime and/or size.
That will help you quickly identify searches that are consuming a lot of resources.
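If you prefer a search to clicking through the UI, a hedged SPL sketch against the _audit index (field names assume default audit events; verify them in your environment):

```
index=_audit action=search info=completed
| stats max(total_run_time) AS runtime_s BY user, search_id
| sort - runtime_s
```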
04-16-2019
07:20 AM
In a clustered search head environment, one of your search heads takes on the additional role of captain. The captain is responsible for keeping the cluster in sync and for scheduling jobs and replication, while also acting as a "normal" search head. The captain will always utilize more resources than the other nodes in the cluster.
You can determine which node is the captain through the web UI or by using the following command from the CLI on any of the SHC members:
splunk show shcluster-status
I suspect the output will correlate with your node that is the most busy.
04-08-2019
01:57 PM
I think you may be confusing the container/image OS with the platform OS.
The platform OS would be the server on which you are running Docker, while the container/image OS is the OS that actually runs inside the container.
Splunk supports running Docker on any (or most) versions of Linux with a kernel version > 4.0, according to the documentation, but the officially published Splunk Docker image is built on either CentOS or Debian.
You can build your own image using another flavor of Linux (e.g. CoreOS), but I do not believe that is officially supported by Splunk.
https://github.com/splunk/docker-splunk/tree/develop/base
04-08-2019
11:22 AM
Yes, Splunk released an officially supported Docker image in v7.2. But that image does not use CoreOS as the underlying operating system.
04-08-2019
07:11 AM
What is the size of your index and datamodel acceleration?
The datamodel acceleration results reside on the indexers, and still have to be pulled down to the search heads for you to view. Depending on the size of your search artifacts, and those of other search activity happening on the SHC, you could still very well hit the limits.
Also, when you accelerate 30 days of data (or any range), that window is rolling: the scheduler runs jobs in the background to keep your acceleration up to date as new data comes in, and those jobs also count against the limits mentioned above.
04-05-2019
03:33 PM
Configure logrotate and/or manually purge Splunk log files.
These are unfortunately located at /opt/splunk/var/log/splunk and /opt/splunk/var/log/introspection, which count against your available space on /opt (usually small on a standard Linux install).
I generally symlink those directories to /var/log/splunk and /var/log/introspection, with /var/log on its own disk, VG, and LV (e.g. /dev/mapper/varlogvg01-varloglv01).
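A minimal logrotate sketch (paths assume a default /opt/splunk install; note that splunkd rotates its own core logs internally, so copytruncate is the safer mode and the counts/sizes here are purely illustrative):

```
# /etc/logrotate.d/splunk (sketch)
/opt/splunk/var/log/splunk/*.log /opt/splunk/var/log/introspection/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate    # avoids signaling or restarting splunkd
}
```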
04-05-2019
03:24 PM
The total size of your datamodel acceleration is presented nicely by the web UI on any search head member in your cluster.
Settings > Data models
Then expand the row for the datamodel you want info about. "Size on Disk" is what I believe you are looking for; it represents the total size across all indexers.
04-05-2019
03:06 PM
To clarify, datamodel acceleration is completed in small portions at a time and runs as scheduled jobs/searches, which count against your concurrency limits. They generally have the lowest priority, though, and will be the first ones skipped when necessary.
Have you checked for any orphaned, local versions of datamodels.conf ?
e.g.
$SPLUNK_HOME/etc/users/user-name-here/app-name-here/local/datamodels.conf
$SPLUNK_HOME/etc/apps/app-name-here/local/datamodels.conf
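A quick shell sketch to sweep both config layers for stray local copies (assumes SPLUNK_HOME is set, defaulting to /opt/splunk):

```shell
# Look for datamodels.conf in any local/ directory under apps or users.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
find "$SPLUNK_HOME/etc/apps" "$SPLUNK_HOME/etc/users" \
    -type f -name datamodels.conf -path '*/local/*' 2>/dev/null
```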
04-05-2019
12:44 PM
To remove the node from the SHC, perform the steps below on that node (while Splunk is running):
Remove the member:
splunk remove shcluster-member
Disable the member:
splunk disable shcluster-config
Clean the KVStore:
splunk clean kvstore --cluster
If you want to re-add this member, I would again verify your DNS entry (check for duplicate records and check /etc/hosts if Linux).
Then follow these steps to add the member back into the cluster:
Execute these commands in sequence on the problem node:
splunk stop
splunk clean all
splunk start
Re-initialize the node:
splunk init shcluster-config -auth &lt;username&gt;:&lt;password&gt; -mgmt_uri &lt;URI&gt;:&lt;management_port&gt; -replication_port &lt;replication_port&gt; -replication_factor &lt;n&gt; -conf_deploy_fetch_url &lt;URL&gt;:&lt;management_port&gt; -secret &lt;security_key&gt; -shcluster_label &lt;label&gt;
splunk restart
Additional documentation can be found here: https://docs.splunk.com/Documentation/Splunk/7.2.5/DistSearch/Addaclustermember#Add_a_member_that_was_previously_removed_from_the_cluster
04-05-2019
12:15 PM
In your search head cluster, the captain also acts as the scheduler. If the number of searches (scheduled or real time) exceeds the concurrency limits, then the scheduler will skip them.
The scheduler will try to re-execute skipped searches if they fall within the configured window/skew; if not, they will not be attempted again until their next scheduled date/time (if scheduled).
The DMC provides useful insight into this behavior, and you can adjust a number of config files to modify this behavior.
However, one of the easiest tests is to increase the User-level and/or Role-level concurrent search jobs limit.
This can be done from the web UI on any of the search head cluster members by going to Settings » Access controls » Roles » admin (e.g.).
Within that section you will find the concurrency settings.
04-05-2019
11:48 AM
You'll need to increase the threshold within limits.conf.
By default, search_process_memory_usage_threshold is set to 4GB (version dependent), but that setting is overridden by search_process_memory_usage_percentage_threshold.
Both require that enable_memory_tracker be set to true; in that case, a search process is killed when it exceeds the default value of 25% set by search_process_memory_usage_percentage_threshold.
Stanza from limits.conf:
search_process_memory_usage_percentage_threshold = &lt;float&gt;
* To use this setting, the "enable_memory_tracker" setting must be set to "true".
* Specifies the percent of the total memory that the search process is entitled to consume.
* Search processes that violate the threshold percentage are terminated.
* If the value is set to zero, then splunk search processes are allowed to grow unbounded in terms of percentage memory usage.
* Any setting larger than 100 or less than 0 is discarded and the default value is used.
* Default: 25%
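If you do raise it, a minimal sketch of a local override (the 40% value is purely illustrative, not a recommendation):

```
# $SPLUNK_HOME/etc/system/local/limits.conf (sketch)
[search]
enable_memory_tracker = true
search_process_memory_usage_percentage_threshold = 40
```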
04-05-2019
11:24 AM
Glad to help! Also worth noting, Red Hat recently acquired CoreOS, so if your goal is to move away from Red Hat (usually because of support pricing), you might want to look at other options. Oracle Enterprise Linux is a good choice, and of course there are many others.
04-05-2019
08:26 AM
1 Karma
This appears related and is apparently fixed in 7.2.4: "SPL-162469, SPL-163577, SPL-162764 After upgrade to 7.2 Splunk is unable to start - KVStoreConfigurationThread crash".
See the "Unsorted Issues" section:
https://docs.splunk.com/Documentation/Splunk/7.2.4/ReleaseNotes/Fixedissues
04-05-2019
08:19 AM
Have you tried installing from the CLI instead?
04-05-2019
08:17 AM
2 Karma
CoreOS is designed specifically for running containers. It could be an option if you want to use the Splunk Docker images (though not officially supported), but not for running Splunk on physical hardware.
04-05-2019
08:14 AM
1 Karma
Have you looked at your settings for vm.overcommit_memory?
This appears to be related: https://docs.splunk.com/Documentation/Splunk/7.2.1/ReleaseNotes/LinuxmemoryovercommittingandSplunkcrashes
And have you disabled Transparent Huge Pages? https://docs.splunk.com/Documentation/Splunk/7.2.1/ReleaseNotes/SplunkandTHP
After your upgrade, did you verify that the OS is still honoring any ulimit settings that you had in place for your previous version of Splunk?
Also, you note that you are running RHEL 7.5; depending on your kernel version, this is where the Spectre/Meltdown mitigations were introduced. I've seen cases where these mitigations introduced a 20-40% performance degradation.
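A few read-only checks worth running after the upgrade (the THP path is typical for RHEL; treat this as a sketch):

```shell
# THP state; Splunk docs recommend disabling THP ([never] selected).
cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null \
    || echo "THP interface not present on this kernel"
# Effective resource limits for the shell that launches splunkd:
ulimit -n    # open file descriptors
```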