Thank you for your reply @livehybrid 1) Yes, they are unique 2) Yes, I thought about that, but could find it only on one indexer. I had not touched the indexers for 3-4 days, and today the conflicting bucket appeared on 2 indexers, so I renamed it on both indexers. I'll check again tomorrow to see if it made any difference.
If your two new SHCs have the same content then you can theoretically use one deployer to manage them both, but I think you want to keep the content of those SHCs different? I haven't tried this, but it might work if you can separate those nodes physically at the network level. Do this at your own risk! I expect this is not a supported way to do it! Split the members into two groups and keep the deployer in the bigger group, where the majority of nodes is. This group should automatically recover from the loss of the other members. If not, do the normal steps for removing members and sync the SHC and KV store. For the second group you must replicate the current deployer to it; the docs have instructions on how to replace/recover a deployer. Then you will probably need to trigger a captain election manually to get the second SHC up and running. I'm not sure whether you can point those members to a new deployer name or not; if not, you will probably run into issues later on! I think the better way is to just create an additional SHC and deployer and then migrate the needed apps and users from the old cluster to the new one. This is the official and supported way. In any case, you must take an offline backup of the KV store and the nodes before starting the migration, and you should definitely try it in a test environment first!
@liangliang Migration from a standalone search head to a SHC: here is the document that discusses how to migrate from a standalone search head to a Search Head Cluster: https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/Migratefromstandalonesearchheads
@liangliang You can deploy search head cluster members across multiple physical sites. You can also integrate cluster members into a multisite indexer cluster. However, search head clusters do not have site awareness. https://docs.splunk.com/Documentation/Splunk/9.4.0/DistSearch/DeploymultisiteSHC https://community.splunk.com/t5/Deployment-Architecture/How-multisite-SH-clusters-work/m-p/594465
Now we have a big SHC serving many department users, and for some reason we must split off one department for independent use. We considered creating a new cluster directly, but we have too many things to migrate. We plan to network-isolate part of the existing cluster's nodes, then configure the isolated part as a separate cloned cluster, and finally delete the unnecessary apps on both clusters. Is this feasible?
Hey @Splunkers, Looking for valuable insights for this use case. I want to extract the number at the end of the log (highlighted in bold). Please help. Sample log: 74.133.120.000 - LASTHOP:142.136.168.1 - [19/May/2025:23:30:12 +0000] "GET /content/*/residential.existingCustomerProfileLoader.json HTTP/1.1" 200 143 "/cp/activate-apps?cmp=dotcom_sms_selectapps_111324" "Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Mobile Safari/537.36" 384622
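If the target number is always the last whitespace-separated field of the event (here 384622), an end-anchored regex will pull it out, e.g. in SPL something like | rex field=_raw "\s(?<trailing_num>\d+)$" (the field name trailing_num is just an illustrative choice). Here is a quick Python sketch checking that same pattern against the sample event:

```python
import re

# The sample event from the post, reassembled on one line.
event = ('74.133.120.000 - LASTHOP:142.136.168.1 - [19/May/2025:23:30:12 +0000] '
         '"GET /content/*/residential.existingCustomerProfileLoader.json HTTP/1.1" 200 143 '
         '"/cp/activate-apps?cmp=dotcom_sms_selectapps_111324" '
         '"Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) '
         'Chrome/136.0.0.0 Mobile Safari/537.36" 384622')

# Same idea as the SPL rex: a run of digits at the very end of the raw
# event, preceded by whitespace.
match = re.search(r'\s(?P<trailing_num>\d+)$', event)
print(match.group('trailing_num'))  # -> 384622
```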
If you search for only that first single event in that time range, i.e. index=abc source=xxx.trc GetDbfRecordFromCache and nothing else, then look at the _raw event in the display: are the characters encoded in the data, or are they < and >? If you then open the event with the little arrow and select Show Source, what does the raw event data look like? Is it encoded or not?
@lcguilfoil Your event search does not have a time range associated with it, so it will run an all-time search; when you click the drilldown, the search is still running and will not respond to the drilldown. Add the time range to your event search:

<earliest>$global_time.earliest$</earliest>
<latest>$global_time.latest$</latest>
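In Simple XML those time elements sit inside the event's <search> element, alongside the query, along these lines (the query shown is just a placeholder):

```xml
<search>
  <query>index=_internal | stats count by sourcetype</query>
  <earliest>$global_time.earliest$</earliest>
  <latest>$global_time.latest$</latest>
</search>
```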
9 years later, same problem, you saved me--thanks. /var/log/secure and /var/log/messages both being monitored, both had the same log line at the beginning.
Bringing this back to life (maybe). Splunk UBA comes with an instance of Splunk. We install the UF on all our *nix machines to monitor them (performance and security). This install conflicts with what UBA installs when setting up UBA (port 8089). So how do we overcome this, or how do we use the UBA-installed Splunk instance to connect to the deployment server and have the configuration we push to all the other servers applied to it as well?
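One generic workaround (not UBA-specific advice, so verify it against the UBA docs first) is to move the UF's management port off 8089 so the two instances no longer collide. The management port is set via mgmtHostPort in web.conf, e.g. in $SPLUNK_HOME/etc/system/local/web.conf on the forwarder; 8090 below is just an arbitrary free port:

```
[settings]
# Move splunkd's management port away from the default 8089,
# which the UBA-bundled Splunk instance is already using.
mgmtHostPort = 127.0.0.1:8090
```

If I remember the CLI correctly, splunk set splunkd-port 8090 followed by a restart makes the same change.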
Please install the above app on your SH and then add that script snippet as your dashboard's first line! After that you don't need to guess which value is in which token; it just shows you all defined tokens with their values! Currently I always use it whenever I have tokens beyond the time picker and one or two others. It really helps!