Activity Feed
- Got Karma for Re: How to move index from one hard drive to another in Splunk clustered environment?. 09-20-2024 03:45 AM
- Got Karma for Re: How to move index from one hard drive to another in Splunk clustered environment?. 02-18-2024 01:30 AM
- Posted Re: Internal server error 500 on system/licensing- How to resolve? on Dashboards & Visualizations. 10-27-2023 07:57 AM
- Got Karma for Re: Why is there splunk_instrumentation error after upgrade to Splunk Enterprise 9.0.2?. 02-17-2023 09:19 AM
- Posted Re: Why is there splunk_instrumentation error after upgrade to Splunk Enterprise 9.0.2? on Splunk Enterprise. 02-17-2023 08:42 AM
- Got Karma for Re: I can't increase number of open files. 08-24-2021 01:26 PM
- Got Karma for Re: How to move index from one hard drive to another in Splunk clustered environment?. 06-01-2021 12:10 PM
- Got Karma for Re: How to move index from one hard drive to another in Splunk clustered environment?. 01-27-2021 12:22 AM
- Got Karma for Re: How to move index from one hard drive to another in Splunk clustered environment?. 01-26-2021 04:16 AM
- Posted Re: How do I disable Transparent Huge Pages (THP) and confirm that it is disabled? on Monitoring Splunk. 11-18-2020 01:45 PM
- Posted Re: I can't increase number of open files on Deployment Architecture. 08-17-2020 10:26 AM
- Posted Re: Data backup on Deployment Architecture. 08-17-2020 10:11 AM
- Posted Re: Web interface does not seem to be avaible splunk enterprise on Installation. 08-17-2020 09:51 AM
- Karma Re: index field list for FrankVl. 08-17-2020 09:48 AM
- Posted Re: Web interface does not seem to be avaible splunk enterprise on Installation. 08-17-2020 09:44 AM
- Posted Re: How to find if a particular index is being used? on Splunk Search. 08-17-2020 09:42 AM
- Posted Re: No data is getting displayed on dashboard on Getting Data In. 08-17-2020 09:35 AM
- Posted Re: Web interface does not seem to be avaible splunk enterprise on Installation. 08-17-2020 09:32 AM
- Posted Re: syslog on Getting Data In. 08-17-2020 09:28 AM
- Got Karma for Re: How to move index from one hard drive to another in Splunk clustered environment?. 08-03-2020 08:17 AM
Topics I've Started
10-27-2023
07:57 AM
Do you maybe have a lot of license messages? I've seen that trigger this issue.
02-17-2023
08:42 AM
1 Karma
Fixed in 9.0.4
11-18-2020
01:45 PM
Most answers here are from long ago, so I will add this option as well; it works on more modern RHEL-based systems that run tuned version 2.11 and above. The easiest way I found to disable THP is using tuned:
mkdir /etc/tuned/splunk_idx
cat << EOF > /etc/tuned/splunk_idx/tuned.conf
#
# tuned configuration
#
[main]
summary=Optimize for Splunk Indexer
description=Configures THP for better Splunk performance
include=latency-performance
[vm]
transparent_hugepages=never
transparent_hugepage.defrag=never
EOF
tuned-adm profile splunk_idx
I tested this on RHEL/CentOS 7.8+ and it works fine.
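To confirm THP is actually disabled after applying the profile, the kernel exposes the current setting in sysfs; these are the standard paths on RHEL-type kernels:
# both should show [never] selected, e.g. "always madvise [never]"
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag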
08-17-2020
10:26 AM
1 Karma
Services started with the legacy init system do not honor limits.conf, since Splunk comes up right at the beginning of system startup, before the limits and PAM configuration are read. As others mentioned, use systemd to start the service; you can set the limit in the unit file.
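For reference, a minimal sketch of what that unit-file setting might look like as a systemd drop-in; the unit name Splunkd.service and the limit values are examples, adjust for your install:
# /etc/systemd/system/Splunkd.service.d/limits.conf (drop-in override, example path)
[Service]
LimitNOFILE=64000
LimitNPROC=16000
# then: systemctl daemon-reload && systemctl restart Splunkd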
08-17-2020
10:11 AM
You can only back up warm or cold buckets. Here is a quick and dirty way to do it: if you are running Linux, this will copy all non-hot buckets created in the last day to /tmp (replace /tmp with your desired target directory):
warm_buckets=$(find /opt/splunk/var/lib/splunk -mmin -1440 -type d -name "db_*")
for i in $warm_buckets; do mkdir -p /tmp/$i/rawdata; done
for i in $warm_buckets; do rsync -auv "$i"/*/journal.gz /tmp$i; done
Hope this helps.
08-17-2020
09:51 AM
That means Splunk is not running. Start Splunk in debug mode: $SPLUNK_HOME/bin/splunk start --debug
08-17-2020
09:44 AM
run "ps -ef | grep splunk" and check what user Splunk is running.
08-17-2020
09:42 AM
There are several ways to see if there is data in a specific index. You can try "| dbinspect index=<your_index>" or "| tstats count where index=<your_index> by _time span=1d".
08-17-2020
09:35 AM
Sourcetypes do not need to exist on the search head. Does the search return results if you remove everything after the raw search (from the first pipe to the end)?
08-17-2020
09:32 AM
What user are you running Splunk as?
08-17-2020
09:28 AM
Do you mean you tell rsyslog to listen on multiple ports and write data from specific ports to specific files? If it's TCP, you should see a connection in netstat from that host, and you would see what port it's connected to. Or you can use tcpdump to see the traffic. However, netstat and tcpdump will not be very useful if you use a load balancer to send the traffic; in that case, something like ngrep would be more useful.
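To illustrate the netstat/tcpdump approach, a quick sketch; the host address 10.0.0.5 and port 514 below are made-up example values, substitute your syslog source and listener port:
# show established TCP connections from the source host
netstat -tn | grep 10.0.0.5
# capture traffic from that host on the expected port
tcpdump -nn host 10.0.0.5 and port 514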
07-22-2020
06:41 AM
According to Splunk support, the add-on does not currently support UCS Central, only UCS Manager.
07-22-2020
06:18 AM
Here is the answer I got from support: the app is not compatible with UCS Central, only with the older UCS Manager. I posted an ER on Ideas to add support for UCS Central. You can upvote it here: https://ideas.splunk.com/ideas/APPSID-I-139
07-01-2020
04:14 AM
I have the same issue.
01-28-2020
10:06 AM
10GbE should be more than enough for indexing and searches. If it's not enough, you should probably add more indexers instead of more NICs.
The only reason to be concerned about throughput is when you're using SmartStore, as the traffic to S3 will increase depending on your setup and search behavior. But with many indexers each capable of pushing 10Gb/s, you will probably saturate your uplinks to AWS or your local S3 storage solution.
Also, Splunk performs a lot better with more but slimmer servers instead of fewer fat servers. Having 10 servers with 128 cores is not the same as 20 servers with 64 cores; more, smaller servers are better.
01-28-2020
09:25 AM
It will use all the throughput the OS makes available to it, be it 1GbE or 200GbE.
01-28-2020
07:21 AM
1 Karma
Splunk is not aware of the underlying network setup; it will utilize the bandwidth the host makes available to it. In the case of bonding in Linux, Splunk will use the bond0 interface (or whatever name you give it), which is the master of all the interfaces you configured under it. Splunk will not be aware of the slave interfaces.
There are multiple bonding modes in Linux; the mode I use is 802.3ad (mode 4), which provides load balancing and fault tolerance. In my case this is 2x10GbE, which gives me 20GbE. A supported switch is required to use this configuration.
To check which mode your interface is configured in, you can run:
cat /proc/net/bonding/bond0
which should give output similar to this:
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 5c:b9:01:9c:c5:90
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 15
Partner Key: 32990
Partner Mac Address: 00:23:04:ee:be:01
Slave Interface: eno49
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 10
Permanent HW addr: 5c:b9:01:9c:c5:90
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: 5c:b9:01:9c:c5:90
port key: 15
port priority: 255
port number: 1
port state: 63
details partner lacp pdu:
system priority: 32667
system mac address: 00:23:04:ee:be:01
oper key: 32990
port priority: 32768
port number: 31276
port state: 61
Slave Interface: eno50
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 5c:b9:01:9c:c5:91
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: 5c:b9:01:9c:c5:90
port key: 15
port priority: 255
port number: 2
port state: 63
details partner lacp pdu:
system priority: 32667
system mac address: 00:23:04:ee:be:01
oper key: 32990
port priority: 32768
port number: 14892
port state: 61
TL;DR: if configured correctly with a supported switch, yes.
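For completeness, a minimal sketch of how an 802.3ad bond like the one above might be created with NetworkManager on a RHEL-type system; the connection names and the member interfaces eno49/eno50 are taken from the example output and are assumptions for your environment:
# create the bond with LACP (802.3ad), fast LACP rate and MII monitoring
nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=802.3ad,lacp_rate=fast,miimon=100"
# attach the two physical NICs as bond members
nmcli con add type bond-slave con-name bond0-eno49 ifname eno49 master bond0
nmcli con add type bond-slave con-name bond0-eno50 ifname eno50 master bond0
# bring the bond up
nmcli con up bond0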
12-10-2019
09:09 AM
Had the same problem. Please post your reply as an answer, so people know that it was actually answered.
11-16-2018
05:44 AM
2 Karma
Thanks for making me aware of the typo. Fixed.
The reason for doing it one at a time is to minimize downtime and avoid a massive cluster-wide bucket fix-up. Taking your time moving them over ensures that the replication factor remains reasonable and searches can continue, since everyone has a different RF/SF setup.
Enabling and disabling maintenance mode will fix up missing buckets after the data move.
10-30-2018
07:57 AM
11 Karma
I would not suggest making any changes on an indexer locally. Here is my suggestion:
Let's assume that the original index is at /opt/splunk/var/lib/splunk/defaultdb and the new location will be /splunk/defaultdb.
In order to keep the downtime of each indexer to a minimum, we will do it in a few steps. First, while the service is still running, rsync the data from the old location to the new one:
Note: When running the rsync command, use a trailing slash only after the source path, and not after the destination path.
rsync -auv /opt/splunk/var/lib/splunk/defaultdb/ /splunk/defaultdb
This creates an initial copy of the data in the new location. The initial sync may take some time depending on the size of your data, and by the time it finishes there may also be a lot of changes to the data due to buckets rolling from hot/warm/cold.
Now we want to do another rsync to send the recent changes; this will be a lot faster. This time we add the --delete argument, so rolled buckets are also deleted from the new location, like so:
rsync -auvv --delete /opt/splunk/var/lib/splunk/defaultdb/ /splunk/defaultdb
If you want to be able to look at what rsync did, you can send the output to a log file by adding --log-file=/tmp/rsync-$(date +%s).out to the command.
Put the CM in maintenance mode, stop Splunk, and do the final sync.
On the master:
$SPLUNK_HOME/bin/splunk enable maintenance-mode --answer-yes
On the Indexer:
$SPLUNK_HOME/bin/splunk stop
Do the final sync:
rsync -auvv --delete /opt/splunk/var/lib/splunk/defaultdb/ /splunk/defaultdb
Move away the old index data to a backup location:
mv /opt/splunk/var/lib/splunk/defaultdb /opt/splunk/var/lib/splunk/OLD.defaultdb
Lastly, create a symlink from the new location to the old one:
ln -s /splunk/defaultdb /opt/splunk/var/lib/splunk/defaultdb
Now you can start Splunk without having to mess around with indexes.conf.
After you start the indexer, make sure that the buckets are visible by checking the RF/SF on the CM. After that, you can take the cluster out of maintenance mode to fix up the few buckets that were rolled while that indexer was down.
$SPLUNK_HOME/bin/splunk disable maintenance-mode --answer-yes
Repeat the same process, from the initial sync through taking the cluster out of maintenance mode, on each indexer in the cluster, one indexer at a time.
After doing this on all indexers, you can change the path in indexes.conf on the master, and push out the new bundle.
An indexer restart is required when changing the path of an index, so the master will initiate a restart.
After you have done all that and confirmed that all the buckets are visible in Splunk, you can remove the old data and delete the symlink:
rm -rf /opt/splunk/var/lib/splunk/OLD.defaultdb
rm /opt/splunk/var/lib/splunk/defaultdb
Please comment if I missed something.
I hope this helps.
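If it helps, a small sketch of how the RF/SF check before leaving maintenance mode might look from the CLI on the cluster master; this assumes the standard splunk show commands and your environment may differ:
# on the cluster master: confirm maintenance mode state and overall bucket/peer status
$SPLUNK_HOME/bin/splunk show maintenance-mode
$SPLUNK_HOME/bin/splunk show cluster-status
# only disable maintenance mode once the replication and search factors look healthy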
10-08-2018
04:27 PM
Make sure permissions are correct on the destination. Use rsync -auv /old/index/dir/ /new/index/dir to preserve the timestamps, permissions, and ownership; when done, rename the old index directory with the mv command, then ln -s /new/index/dir /old/index/dir.
Make sure Splunk is not running when you do the final copy. You can do an initial rsync while Splunk is up, and when that sync is done, shut down Splunk and do the final rsync with the --delete flag; this will delete the files that have disappeared from the index while you were doing the initial sync.
I go the linking route because I'm clustered and I don't want to mess around with indexes.conf by changing the location of the indexes. But if you are not clustered, you can just rename the old folder, create a new folder with the same name, then mount the new partition in the same place the old one was, like this:
mv /index/folder/location /index/folder/location.OLD
mkdir /index/folder/location
mount /index/folder/location
The last command assumes you have put the new partition in fstab.
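For illustration, a sketch of what such an fstab entry might look like; the device path and filesystem type below are placeholders for your environment:
# /etc/fstab entry for the new index partition (example device and filesystem)
/dev/mapper/vg_splunk-lv_index  /index/folder/location  xfs  defaults  0 0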
After confirming that everything is in working order, you can delete the old data.
10-08-2018
03:58 PM
No, you will have to wait for the data that was already indexed to reach the 14-day retention you set.
10-08-2018
03:41 PM
That's not the most ideal setup, but if you want to split them up, then the rsync route should be the way to do it; otherwise, just increase the current partition.
I highly recommend keeping the indexes on separate physical storage from the OS and /opt.
10-08-2018
03:26 PM
If it's just your Splunk installation, I don't see why you should not just extend the current partition, which can usually be done live with no downtime, depending on your OS and configuration.
If there are indexes in /opt, I would create a new partition for the indexes, rsync them over, and link the new path to the old one. But this has to be done carefully.
10-08-2018
03:07 PM
It looks like the data was ingested on 10/07/2018 06:06:42 with an event timestamp of 07/06/2014 14:56:32. The data is not necessarily old; it was just never onboarded correctly.
Your event does not have a timestamp, so you need to set DATETIME_CONFIG = CURRENT in props.conf for that sourcetype; otherwise this will keep happening.
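A minimal sketch of that props.conf setting; the stanza name your_sourcetype is a placeholder for the actual sourcetype:
# props.conf on the instance that parses this data
[your_sourcetype]
# the events carry no timestamp, so stamp them with the current time at index time
DATETIME_CONFIG = CURRENT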